Principles of Random Signal Analysis and Low Noise Design
The Power Spectral Density and its Applications
Roy M. Howard Curtin University of Technology Perth, Australia
A JOHN WILEY & SONS, INC., PUBLICATION
This book is printed on acid-free paper. Copyright 2002 by John Wiley & Sons, Inc., New York. All rights reserved. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ@WILEY.COM. For ordering and customer service, call 1-800-CALL-WILEY. Library of Congress Cataloging-in-Publication Data is available. ISBN: 0-471-22617-3 Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1
Contents

Preface / ix

About the Author / xi

1. Introduction / 1

2. Background: Signal and System Theory / 3
2.1 Introduction / 3
2.2 Background Theory / 3
2.3 Functions, Signals and Systems / 7
2.4 Signal Properties / 12
2.5 Measure and Lebesgue Integration / 23
2.6 Signal Classification / 35
2.7 Convergence / 36
2.8 Fourier Theory / 38
2.9 Random Processes / 44
2.10 Miscellaneous Results / 45
Appendix 1: Proof of Theorem 2.11 / 46
Appendix 2: Proof of Theorem 2.13 / 47
Appendix 3: Proof of Theorem 2.17 / 47
Appendix 4: Proof of Theorem 2.27 / 49
Appendix 5: Proof of Theorem 2.28 / 50
Appendix 6: Proof of Theorem 2.30 / 52
Appendix 7: Proof of Theorem 2.31 / 53
Appendix 8: Proof of Theorem 2.32 / 56

3. The Power Spectral Density / 59
3.1 Introduction / 59
3.2 Definition / 60
3.3 Properties / 65
3.4 Random Processes / 67
3.5 Existence Criteria / 73
3.6 Impulsive Case / 74
3.7 Power Spectral Density via Autocorrelation / 78
Appendix 1: Proof of Theorem 3.4 / 84
Appendix 2: Proof of Theorem 3.5 / 85
Appendix 3: Proof of Theorem 3.8 / 88
Appendix 4: Proof of Theorem 3.10 / 89

4. Power Spectral Density Analysis / 92
4.1 Introduction / 92
4.2 Boundedness of Power Spectral Density / 92
4.3 Power Spectral Density via Signal Decomposition / 95
4.4 Simplifying Evaluation of Power Spectral Density / 98
4.5 The Cross Power Spectral Density / 102
4.6 Power Spectral Density of a Sum of Random Processes / 107
4.7 Power Spectral Density of a Periodic Signal / 112
4.8 Power Spectral Density — Periodic Component Case / 119
4.9 Graphing Impulsive Power Spectral Densities / 122
Appendix 1: Proof of Theorem 4.2 / 123
Appendix 2: Proof of Theorem 4.4 / 126
Appendix 3: Proof of Theorem 4.5 / 128
Appendix 4: Proof of Theorem 4.6 / 128
Appendix 5: Proof of Theorem 4.8 / 130
Appendix 6: Proof of Theorem 4.10 / 132
Appendix 7: Proof of Theorem 4.11 / 134
Appendix 8: Proof of Theorem 4.12 / 136

5. Power Spectral Density of Standard Random Processes — Part 1 / 138
5.1 Introduction / 138
5.2 Signaling Random Processes / 138
5.3 Digital to Analogue Converter Quantization / 152
5.4 Jitter / 155
5.5 Shot Noise / 160
5.6 Generalized Signaling Processes / 166
Appendix 1: Proof of Theorem 5.1 / 168
Appendix 2: Proof of Theorem 5.2 / 171
Appendix 3: Proof of Equation 5.73 / 173
Appendix 4: Proof of Theorem 5.3 / 174
Appendix 5: Proof of Theorem 5.4 / 176
Appendix 6: Proof of Theorem 5.5 / 177

6. Power Spectral Density of Standard Random Processes — Part 2 / 179
6.1 Introduction / 179
6.2 Sampled Signals / 179
6.3 Quadrature Amplitude Modulation / 185
6.4 Random Walks / 192
6.5 1/f Noise / 198
Appendix 1: Proof of Theorem 6.1 / 200
Appendix 2: Proof of Theorem 6.2 / 201
Appendix 3: Proof of Theorem 6.3 / 202
Appendix 4: Proof of Equation 6.39 / 204

7. Memoryless Transformations of Random Processes / 206
7.1 Introduction / 206
7.2 Power Spectral Density after a Memoryless Transformation / 206
7.3 Examples / 211
Appendix 1: Proof of Theorem 7.1 / 223
Appendix 2: Fourier Results for Raised Cosine Frequency Modulation / 224

8. Linear System Theory / 229
8.1 Introduction / 229
8.2 Impulse Response / 230
8.3 Input-Output Relationship / 232
8.4 Fourier and Laplace Transform of Output / 232
8.5 Input-Output Power Spectral Density Relationship / 238
8.6 Multiple Input-Multiple Output Systems / 243
Appendix 1: Proof of Theorem 8.1 / 246
Appendix 2: Proof of Theorem 8.2 / 248
Appendix 3: Proof of Theorem 8.3 / 249
Appendix 4: Proof of Theorem 8.4 / 251
Appendix 5: Proof of Theorem 8.6 / 252
Appendix 6: Proof of Theorem 8.7 / 253
Appendix 7: Proof of Theorem 8.8 / 255

9. Principles of Low Noise Electronic Design / 256
9.1 Introduction / 256
9.2 Gaussian White Noise / 259
9.3 Standard Noise Sources / 264
9.4 Noise Models for Standard Electronic Devices / 266
9.5 Noise Analysis for Linear Time Invariant Systems / 269
9.6 Input Equivalent Current and Voltage Sources / 278
9.7 Transferring Noise Sources / 282
9.8 Results for Low Noise Design / 285
9.9 Noise Equivalent Bandwidth / 285
9.10 Power Spectral Density of a Passive Network / 287
Appendix 1: Proof of Theorem 9.2 / 291
Appendix 2: Proof of Theorem 9.4 / 294
Appendix 3: Proof of Conjecture for Ladder Structure / 296

Notation / 300

References / 302

Index / 307
Preface
This book gives a systematic account of the Power Spectral Density and details the application of this theory to Communications and Electronics. The level of the book is suited to final year Electrical and Electronic Engineering students, post-graduate students and researchers. This book arises from the author’s research experience in low noise amplifier design and analysis of random processes. The basis of the book is the definition of the power spectral density using results directly from Fourier theory rather than the more popular approach of defining the power spectral density in terms of the Fourier transform of the autocorrelation function. The difference between use of the two definitions, which are equivalent with an appropriate definition for the autocorrelation function, is that the former greatly facilitates analysis, that is, the determination of the power spectral density of standard signals, as the book demonstrates. The strength, and uniqueness, of the book is that, based on a thorough account of signal theory, it presents a comprehensive and straightforward account of the power spectral density and its application to the important areas of communications and electronics. The following people have contributed to the book in various ways. First, Prof. J. L. Hullett introduced me to the field of low noise electronic design and has facilitated my career at several important times. Second, Prof. L. Faraone facilitated and supported my research during much of the 1990s. Third, Prof. A. Cantoni, Dr. Y. H. Leung and Prof. K. Fynn supported my research from 1995 to 1997. Fourth, Mr. Nathanael Rensen collaborated on a research project with me over the period 1996 to early 1998. Fifth, Prof. A. Zoubir has provided collegial support and suggested that I contact Dr. P. Meyler from John Wiley & Sons with respect to publication. Sixth, Dr. P. Meyler,
Ms. Melissa Yanuzzi and staff at John Wiley have supported and efficiently managed the publication of the book. Finally, several students — undergraduate and postgraduate — have worked on projects related to material in the book and, accordingly, have contributed to the final outcome. I also wish to thank the staff at Kiribilli Cafe, the Art Gallery Cafe and the King Street Cafe for their indirect support whilst a significant level of editing was in progress. Finally, family and friends will hear a little less about ‘The Book’ in the future. R. M. H. December 2001
About the Author
Roy M. Howard has been awarded a BE and a Ph.D. in Electrical Engineering, as well as a BA (Mathematics and Philosophy), by the University of Western Australia. Currently he is a Senior Lecturer in the School of Electrical and Computer Engineering at Curtin University of Technology, Perth, Australia. His research interests include signal theory, stochastic modeling, 1/f noise, low noise amplifier design, nonlinear systems, and nonlinear electronics.
1
Introduction
2
Background: Signal and System Theory

2.1 INTRODUCTION

2.2 BACKGROUND THEORY

2.2.1 Set Theory
2.2.2 Real and Complex Analysis
2.3 FUNCTIONS, SIGNALS, AND SYSTEMS

Figure 2.1 Mapping involved in a continuous real function.

Figure 2.2 Mapping produced by a system.

2.3.1 Disjoint and Orthogonal Signals
2.3.2 Types of Systems and Operators

2.3.3 Defining Output Signal from a Memoryless System

Figure 2.3 Input and output signal of a memoryless system.

2.3.3.1 Decomposition of Output Using Time Partition
2.4 SIGNAL PROPERTIES

2.4.1 Piecewise Continuity and Continuity

Figure 2.4 Constraints on a function imposed by continuity.

2.4.2 Differentiability and Piecewise Smoothness

Figure 2.5 Constraints on a function consistent with differentiability at a point t₀.

2.4.3 Boundedness, Bounded Variation, and Absolute Continuity

Figure 2.6 Illustration of the signal pathlength of a function between two closely spaced points.

Figure 2.7 Illustration of the requirement of absolute continuity. The case shown is for three disjoint intervals of equal length.

2.4.4 Relationships between Signal Properties

2.4.4.1 Continuity and Infinite Pathlength

2.4.4.2 Continuity and Infinite Number of Discontinuities

Figure 2.8 Function which has an infinite number of discontinuities in all neighborhoods of t = 0 but is continuous at this point.
2.4.4.3 Piecewise Smoothness and Infinite Number of Discontinuities
2.5 MEASURE AND LEBESGUE INTEGRATION

2.5.1 Measure and Measurable Sets

2.5.2 Measurable Functions

2.5.3 Lebesgue Integration

Figure 2.9 Illustration of the partition of the range of f, and the sets partitioning the domain of f, for the case where N = 3.

2.5.4 Lebesgue Integrable Functions

2.5.5 Properties of Lebesgue Integrable Functions

2.5.5.1 Basic Properties

Figure 2.10 Illustration of how rate of increase, or rate of decrease, affects integrability.

2.5.5.2 Schwarz Inequality

2.5.5.3 Approximation by a Simple Function

2.5.5.4 Continuous Approximation to a Lebesgue Integrable Function

Figure 2.11 Approximating a function, which has an infinite number of discontinuities that diverge at the point t = 0, by a continuous function.

Figure 2.12 Approximating a function, which has an infinite number of discontinuities that converge at the point t = 0, by a continuous function.

2.5.5.5 Step Approximation to a Lebesgue Integrable Function

2.5.5.6 Interchanging Integration Order and Summation Order
2.6 SIGNAL CLASSIFICATION

Figure 2.13 Classification of measurable signals.
2.7 CONVERGENCE

2.7.1 Dominated and Monotone Convergence
2.8 FOURIER THEORY

2.8.1 Fourier Series and Fourier Transform

2.8.2 Riemann Lebesgue Theorem

2.8.3 Dirichlet Points

Figure 2.14 Illustration of an “impulsive” function. The area under the graph equals unity.

2.8.3.1 Existence of Dirichlet Points

2.8.3.2 Continuity and Existence of a Dirichlet Point

2.8.4 Inverse Fourier Transform

2.8.5 Dirac Delta Function

2.8.5.1 Integral Properties of Dirac Delta Function

2.8.6 Parseval’s Theorem
2.9 RANDOM PROCESSES
2.10 MISCELLANEOUS RESULTS
APPENDIX 1: PROOF OF THEOREM 2.11

APPENDIX 2: PROOF OF THEOREM 2.13

APPENDIX 3: PROOF OF THEOREM 2.17

APPENDIX 4: PROOF OF THEOREM 2.27

APPENDIX 5: PROOF OF THEOREM 2.28

APPENDIX 6: PROOF OF THEOREM 2.30

APPENDIX 7: PROOF OF THEOREM 2.31

APPENDIX 8: PROOF OF THEOREM 2.32

A.8.1 Proof of First Two Results

A.8.2 Proof of Third Result

A.8.3 Proof of Final Result
3
The Power Spectral Density

3.1 INTRODUCTION

3.1.1 Relative Power Measures

3.2 DEFINITION

3.2.1 Characteristics of a Power Spectral Density

3.2.2 Power Spectral Density of a Single Waveform

Figure 3.1 Display of power in sinusoidal components of a signal.

Figure 3.2 A power spectral density function. The shaded areas equal the power associated with sinusoidal components that have a frequency of 2f₀ Hz.

3.2.3 A Continuous Power Spectral Density Function

Figure 3.3 Continuous power spectral density function, G(T, f) = |X(T, f)|²/T, based on Parseval’s relationship.
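As a numerical illustration (not from the book), the Fourier-based definition G(T, f) = |X(T, f)|²/T can be checked against the autocorrelation route developed in Section 3.7: the squared magnitude of the Fourier transform of a finite-interval waveform equals the Fourier transform of its linear autocorrelation. The sketch below assumes a sampled 50 Hz unit-amplitude sinusoid; the sample rate, interval, and all names are illustrative choices, not the book's.

```python
# Sketch: the two routes to a power spectral density coincide.
# Route 1: G = |X(f)|^2 / T directly from the Fourier transform.
# Route 2: G = Fourier transform of the autocorrelation function.
import numpy as np

fs = 1000.0                      # sample rate in Hz (an assumption)
T = 1.0                          # observation interval in seconds
n = int(fs * T)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t)   # unit-amplitude 50 Hz sinusoid

N = 2 * n - 1                    # FFT length covering all linear-correlation lags
freqs = np.fft.fftfreq(N, d=1 / fs)

# Route 1: squared magnitude of the (zero-padded) Fourier transform.
X = np.fft.fft(x, N) / fs        # approximates X(T, f)
G_direct = np.abs(X) ** 2 / T

# Route 2: Fourier transform of the linear autocorrelation.
r = np.correlate(x, x, mode="full") / fs**2   # lags -(n-1) .. (n-1)
r0 = np.roll(r, -(n - 1))        # put lag 0 first, as the DFT convention expects
G_autoc = np.real(np.fft.fft(r0)) / T

print(np.max(np.abs(G_direct - G_autoc)))     # numerically zero
```

Integrating either density over frequency recovers the signal power (0.5 W for a unit-amplitude sinusoid), consistent with the interpretation of the power spectral density as a density of power with frequency.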
3.2.3.1 Interpretation of Continuous Power Spectral Density Function

Figure 3.4 Step approximation to power spectral density function. The area under the two levels associated with f = −if₀ and f = if₀ equals the power in the sinusoidal components of the signal with a frequency of if₀ Hz.

3.2.3.2 Power as Area Under the Power Spectral Density Graph

3.2.3.3 Example — Power Spectral Density of a Sinusoid

3.3 PROPERTIES

Figure 3.5 Power spectral density of a sinusoid with a frequency of 4 Hz, an amplitude of unity, and evaluated on a 1 sec interval (f₀ = 1).

3.3.1 Symmetry in Power Spectral Density

3.3.2 Resolution in Power Spectral Density

3.3.3 Integrability of Power Spectral Density

3.3.4 Power Spectral Density on Infinite Interval

3.4 RANDOM PROCESSES

3.4.1 Power Spectral Density on Infinite Interval

3.4.2 Example — PSD of Binary Digital Random Process

Figure 3.6 One waveform from a binary digital random process on the interval [0, 8D].
3.4.3 Miscellaneous Issues

3.4.3.1 Nonstationary Random Processes

Figure 3.7 Power spectral density of binary digital random process for the case where D = 1.

3.4.3.2 Single-Sided Power Spectral Density

3.4.3.3 Discrete Approximation for the Uncountable Case

3.5 EXISTENCE CRITERIA

3.5.1 Examples where Conditions are Violated

3.6 IMPULSIVE CASE

3.6.1 Conditions for Bounded Power Spectral Density

3.6.2 Impulsive Power Spectral Density

3.6.3 Integrated Spectrum

Figure 3.8 Illustration of the integrated spectral function for the case where G(T, f) has an “impulsive” component.

3.7 POWER SPECTRAL DENSITY VIA AUTOCORRELATION

3.7.1 Definition of the Autocorrelation Function

3.7.1.1 Definition of Autocorrelation Function — Single Waveform Case

Figure 3.9 Region where autocorrelation function is nonzero.

Figure 3.10 A waveform and its shifted counterpart (real case).

3.7.1.2 Notation

3.7.1.3 Definition of Autocorrelation Function — Random Process Case

3.7.2 Power Spectral Density — Autocorrelation Relationship

3.7.3 Autocorrelation Function on Infinite Interval

3.7.3.1 Impulsive Case — Formal Definition of Wiener Khintchine Relations

3.7.3.2 Example
APPENDIX 1: PROOF OF THEOREM 3.4

APPENDIX 2: PROOF OF THEOREM 3.5

A.2.1 Existence of PSD — Finite Case

A.2.2 Existence of PSD — Infinite Case

APPENDIX 3: PROOF OF THEOREM 3.8

APPENDIX 4: PROOF OF THEOREM 3.10
4
Power Spectral Density Analysis

4.1 INTRODUCTION

4.2 BOUNDEDNESS OF POWER SPECTRAL DENSITY

4.2.1 Alternative Formulation for Boundedness

4.2.1.1 Notes

4.2.2 Definition — Bounded Power Spectral Density

4.3 POWER SPECTRAL DENSITY VIA SIGNAL DECOMPOSITION

4.3.1 Example

Figure 4.1 Graphs for the signal x and the pulse function p.

Figure 4.2 Graphs of true and alternative power spectral density functions.

4.3.2 Explanation

4.4 SIMPLIFYING EVALUATION OF POWER SPECTRAL DENSITY

4.4.1 Example

Figure 4.3 Pulse waveform p and waveforms comprising the signal x.

4.4.2 Approximate Power Spectral Density

Figure 4.4 Definition of the sets F_I, F_O, and F_R.

4.4.3 Specific Cases

4.4.4 Example

4.5 THE CROSS POWER SPECTRAL DENSITY

Figure 4.5 True (thick) and approximate (thin) power spectral densities.
Figure 4.6 The absolute difference between the true and approximate power spectral densities shown in Figure 4.5.

4.5.1 Cross Power Spectral Density — Dependent Case

4.5.2 Cross Power Spectral Density — Independent Case

4.5.3 Conditions for Cross Power Spectral Density to be Zero

4.5.4 Bounds on Cross Power Spectral Density

4.6 POWER SPECTRAL DENSITY OF A SUM OF RANDOM PROCESSES

4.6.1 Power Spectral Density — Sum of Two Random Processes

4.6.2 Power Spectral Density — Infinite Sum

4.6.3 Power Spectral Density — Sum of N Random Processes

4.7 POWER SPECTRAL DENSITY OF A PERIODIC SIGNAL

4.7.1 Power Spectral Density — Arbitrary Interval Case

4.7.2 Notes

4.7.3 Example

Figure 4.7 Periodic pulse train.

Figure 4.8 Power spectral density of pulse train on an interval of [0, 4T_p] for the case where A = 1, T_p = 1, f_p = 1, W = 0.5, and f₀ = 0.25.

Figure 4.9 Power spectral density of pulse train on an interval of [0, 8T_p] for the case where A = 1, T_p = 1, f_p = 1, W = 0.5, and f₀ = 0.125.

Figure 4.10 Power spectral density of a periodic signal evaluated on an infinite interval.

4.7.4 Generalizations

4.7.4.1 Power Spectral Density of an Infinite Sum of Periodic Signals

4.8 POWER SPECTRAL DENSITY — PERIODIC COMPONENT CASE

4.8.1 Power Spectral Density — Nonzero Mean Case

4.8.1.1 Notes

4.9 GRAPHING IMPULSIVE POWER SPECTRAL DENSITIES

4.9.1 Example

Figure 4.11 Illustration of graphing the power spectral density of a random process whose power spectral density contains both a bounded and an impulsive component.
APPENDIX 1: PROOF OF THEOREM 4.2

APPENDIX 2: PROOF OF THEOREM 4.4

APPENDIX 3: PROOF OF THEOREM 4.5

APPENDIX 4: PROOF OF THEOREM 4.6

APPENDIX 5: PROOF OF THEOREM 4.8

APPENDIX 6: PROOF OF THEOREM 4.10

APPENDIX 7: PROOF OF THEOREM 4.11

APPENDIX 8: PROOF OF THEOREM 4.12
5
Power Spectral Density of Standard Random Processes — Part 1

5.1 INTRODUCTION

5.2 SIGNALING RANDOM PROCESSES

5.2.0.1 Example — Standard Communication Signals

Figure 5.1 Baseband signaling waveform, A = 1, D = 1, and f_c = 4.

5.2.0.2 Generality of Information Signal Form

5.2.1 Power Spectral Density of a Signaling Random Process

5.2.1.1 Notes

5.2.1.2 Case 1: Mean of Signaling Waveforms is Zero

5.2.1.3 Case 2: Information Encoded in Pulse Amplitudes

5.2.2 Examples and Spectral Issues for Communication Systems

5.2.2.1 Example: Power Spectral Density of a Return to Zero Signal

Figure 5.2 Pulse waveform.

Figure 5.3 Power spectral density of a RZ signal when D = 1, r = 1, A = 1, and N = 256.

5.2.2.2 Example: Power Spectral Density of a Bipolar Signal

Figure 5.4 Power spectral density of a RZ signal evaluated on the infinite interval when D = 1, r = 1, and A = 1. The dots represent the power in impulsive components. The power in the impulse at 0 Hz is 0.0625.

Figure 5.5 Raised cosine spectrum for the case where r = A = 1.

Figure 5.6 Inverse Fourier transform of raised cosine spectrum for the case where D = A = 1.
Figure 5.7 Bipolar signaling waveform x(1, 0, 1, 1, 1, 1, 0, 0, 1, 0, t), where pulses associated with a raised cosine spectrum have been used. β = 1, D = 1, A = 1, and data is 1, 0, 1, 1, 1, 1, 0, 0, 1, 0.
In these equations, p₋₁ = p₁ = 0.25 and p₀ = 0.5. By considering possible data and the corresponding signaling waveforms in two consecutive signaling intervals, it follows that the probabilities of two consecutive signaling waveforms are as tabulated in Table 5.1. Using the results in this table, the relevant correlation terms evaluate to 0.5|P(β, f)|² and −0.25|P(β, f)|², and hence,

G_X(ND, f) = 0.5 r |P(β, f)|² [1 − 1/N][1 − cos(2πf/r)]   (5.50)

and, in the limit N → ∞,

G_X∞(f) = r sin²(πf/r) |P(β, f)|²   (5.51)

where the relationship sin²(A) = 0.5 − 0.5 cos(2A) has been used. The power

TABLE 5.1 Possible Outcomes for Two Consecutive Signaling Intervals

Data   Signaling Waveforms                        Probabilities
00     0, 0                                       p₀₀ = 0.25
01     0, (−1, t) or 0, (1, t)                    p₀,₋₁ = p₀,₁ = 0.125
10     (−1, t), 0 or (1, t), 0                    p₋₁,₀ = p₁,₀ = 0.125
11     (−1, t), (1, t) or (1, t), (−1, t)         p₋₁,₁ = p₁,₋₁ = 0.125
Figure 5.8 Power spectral density of a signaling random process using pulses associated with a raised cosine spectrum, bipolar coding, and with D = r = A = 1. Curves are shown for β = 0.5 and β = 1.
spectral density is plotted in Figure 5.8 for the case of N → ∞ when β = 0.5 and β = 1.0. In comparison with RZ signaling, the following can be noted. First, the bipolar coding ensures that the signaling random process has a zero mean and, therefore, there are no impulses and no redundant signal components in the power spectral density. Second, the signaling is more spectrally efficient. For example, with β = 1, the spectrum is bandlimited to r Hz, which implies 1 bit/Hz (twice as efficient as RZ signaling). Third, on the infinite interval, with no truncation of the signaling pulses defined in Eq. (5.46), the spectral rolloff is infinite (there is no spectral spread). In practice, the signaling pulses are truncated, and this results in spectral spread, which can be readily determined. Finally, the encoding ensures that the power spectral density is zero at zero frequency, ensuring that a bipolar signal can be passed by a linear system whose transfer function has zero response at dc.
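The claims above can be checked numerically. The sketch below evaluates Eq. (5.51) using the standard raised cosine spectrum as an assumed form for P(β, f) (the function names and the 1/r peak normalization are illustrative, and β > 0 is assumed):

```python
import numpy as np

def raised_cosine_spectrum(f, beta, r=1.0):
    """Standard raised cosine spectrum P(beta, f), peak 1/r, rolloff beta > 0."""
    f = np.abs(np.asarray(f, dtype=float))
    f1 = (1 - beta) * r / 2          # edge of the flat region
    f2 = (1 + beta) * r / 2          # outer band edge
    P = np.zeros_like(f)
    P[f <= f1] = 1 / r
    mid = (f > f1) & (f <= f2)
    P[mid] = (1 / (2 * r)) * (1 + np.cos(np.pi * (f[mid] - f1) / (beta * r)))
    return P

def bipolar_psd(f, beta, r=1.0):
    # Eq. (5.51): G_X(f) = r sin^2(pi f / r) P^2(beta, f)
    return r * np.sin(np.pi * np.asarray(f) / r) ** 2 \
             * raised_cosine_spectrum(f, beta, r) ** 2

f = np.linspace(0, 1.2, 1201)
G = bipolar_psd(f, beta=1.0)
# PSD is zero at dc and, for beta = 1, bandlimited to r Hz
print(G[0], G[f > 1.0].max())
```

With β = 1 the printed values are both zero, illustrating the zero dc response and the r Hz band limit noted above.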
5.3 DIGITAL TO ANALOGUE CONVERTER QUANTIZATION

Increasingly, information signals are generated via a digital processor that generates very accurate sample values, and these are passed to an M-bit digital to analogue converter (DAC) at a constant rate of r = 1/D samples/sec. To ascertain the power spectral density of the generated signal, consider an M-bit DAC with 2^M equally spaced levels between and including −A and A, such
Figure 5.9 Illustration of quantization error with a 2-bit DAC (4 levels).
that in the ith sample interval [(i − 1)D, iD], the constant level y_i = x_i + ε_i is generated. The model is one of an additive error to an ideal signal. In general, the actual levels in a DAC will vary from device to device because of manufacturing tolerances, and will vary with device age, etc. Accordingly, it is appropriate to consider an infinite ensemble of DACs, where each is driven by the same sample values, such that ε_i in the ith sample interval is independent of x_i when considered across the ensemble. From the nature of quantization, it follows that ε_i takes on values with a uniform distribution on the interval [−∆/2, ∆/2). A further assumption is that the DAC resolution and rate of signal change are such that the quantization errors from one sample interval to the next are uncorrelated. With such assumptions, the ensemble of DAC output signals for the interval [0, ND], which define a random process Y, is
E_Y = { y(ε_1, …, ε_N, t) = Σ_{i=1}^{N} (x_i + ε_i) φ(t − (i − 1)D) = x_S(t) + Σ_{i=1}^{N} ε_i φ(t − (i − 1)D),  ε_i ∈ [−∆/2, ∆/2) }

where φ is a pulse function defined according to

φ(t) = 1 for 0 ≤ t < D, and φ(t) = 0 elsewhere      (5.52)

Φ(f) = (1/r) sinc(f/r) e^{−jπf/r}      (5.53)

and x_S(t) is a step approximation to the desired signal x, that is,

x_S(t) = Σ_{i=1}^{N} x_i φ(t − (i − 1)D)

The probabilities associated with the quantization error are such that

P[ε_i ∈ [ε_o, ε_o + dε]] = f_ε(ε_o) dε      (5.54)
with f_ε(ε) = 1/∆ for −∆/2 ≤ ε < ∆/2, and f_ε(ε) = 0 elsewhere. Clearly, the mean of ε_i is zero, and the variance of ε_i is given by σ²_ε = ∆²/12 (Papoulis, 2002 p. 165). The random process Y can be considered to be the summation of a degenerate random process, defined by the signal x_S, and a random process E associated with the quantization error, defined by the ensemble
E_E = { e(ε_1, …, ε_N, t) = Σ_{i=1}^{N} ε_i φ(t − (i − 1)D) }      (5.55)

As E has zero mean, it follows from Theorem 4.6 that the power spectral densities of x_S and E add, that is,

G_Y(T, f) = G_{x_S}(T, f) + G_E(T, f)      (5.56)
From Eq. (5.39) it follows that the power spectral density of E is

G_E(ND, f) = σ²_ε r |Φ(f)|² = (∆² sinc²(f/r))/(12r) = ((2A)² sinc²(f/r))/(12(2^M − 1)² r)      (5.57)
A normalized power spectral density, with normalization with respect to the DAC range of 2A and the output rate r, can be defined as

G_n(ND, f) = (r G_E(ND, rf))/(2A)² = sinc²(f)/(12(2^M − 1)²)      (5.58)
This power spectral density is plotted in Figure 5.10 for DACs with 8, 10, 12, 14, and 16 bits.

5.3.1 Notes

First, for a fixed DAC range, the power spectral density due to quantization is inversely proportional to the square of the number of levels and inversely proportional to the output rate. Second, in the frequency range −r/2 ≤ f ≤ r/2, where the spectrum of a generated signal is located, it is common to approximate the power spectral density by the constant level

G_E(ND, 0) = (2A)²/(12(2^M − 1)² r)      (5.59)

As a measure of the signal to noise ratio performance that is achievable with an M-bit DAC, consider the case where a sinusoid with an amplitude of A volts is being generated, and the DAC output is filtered such that the effective quantization noise power is consistent with the level given by Eq. (5.59) in an
Figure 5.10 Normalized power spectral density of a DAC due to quantization. Upper to lower curves, respectively, are for 8-, 10-, 12-, 14-, and 16-bit DACs.
ideal bandwidth of r/2 Hz. The signal to noise ratio achievable is (Franco, 2002 p. 565)

SNR(M) = (A/√2)² × (12(2^M − 1)² r)/(2(r/2)(2A)²) = 1.5(2^M − 1)² ≈ 1.5(2^M)²      (5.60)

SNR_dB(M) = 10 log[SNR(M)] ≈ 1.76 + 6.02M  dB      (5.61)
For 8-, 10-, 12-, 14-, and 16-bit DACs, the respective achievable signal to noise ratios are 50 dB, 62 dB, 74 dB, 86 dB, and 98 dB. One example of signal generation of a bandpass communications signal, with a digital signal processor and DAC, is discussed in Rensen (1999).
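The numbers quoted above follow directly from Eqs. (5.59)–(5.61). A minimal sketch (the function name and the default range/rate values are illustrative, not from the text):

```python
import math

def dac_quantization(M, A=1.0, r=1.0):
    """Quantization-noise figures for an M-bit DAC with range [-A, A] and
    output rate r, per Eqs. (5.59)-(5.61)."""
    levels = 2 ** M
    delta = 2 * A / (levels - 1)                              # step size
    noise_var = delta ** 2 / 12                               # error variance
    psd_level = (2 * A) ** 2 / (12 * (levels - 1) ** 2 * r)   # Eq. (5.59)
    snr = 1.5 * (levels - 1) ** 2                             # Eq. (5.60)
    snr_db = 10 * math.log10(snr)                             # ~1.76 + 6.02M dB
    return noise_var, psd_level, snr_db

for M in (8, 10, 12, 14, 16):
    print(M, round(dac_quantization(M)[2], 1))
```

Running the loop reproduces, to a fraction of a dB, the 50 dB to 98 dB figures listed above.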
5.4 JITTER

The additive noise on a signal will introduce variations in the time instants at which the signal crosses a set threshold level. Consequently, a signal generated on the basis of the times at which an input signal crosses a set threshold will exhibit variations, denoted jitter, from the ideal zero noise case. Jitter arises in many practical applications, including synchronization of signals, hard limiting of signals, and digital circuitry. The archetypal jitter case is that of a periodic pulse train which is corrupted by noise prior to the input of a comparator, as shown in Figure 5.11. The noise will alter both the start and finish times of a given output pulse shape, as shown in Figure 5.12.
Figure 5.11 Schematic diagram of a periodic signal that is corrupted by noise and input into a comparator.
The signals x and z defined in these figures can be modeled on the interval [0, NB] according to

x(t) = Σ_{i=1}^{N} p(t − (i − 1)B)      (5.62)

z(λ_1, …, λ_N, t) = Σ_{i=1}^{N} γ(λ_i, t − (i − 1)B)      (5.63)

where p is the pulse waveform defining the input pulse train, and z is one waveform from a random process Z, defined by the ensemble E_Z,

E_Z = { z(λ_1, …, λ_N, t) = Σ_{i=1}^{N} γ(λ_i, t − (i − 1)B),  λ_i = (a_i, d_i, w_i) ∈ S_Λ,  γ ∈ E_γ }      (5.64)

Here, S_Λ = S_A × S_D × S_W, where S_A, S_D, and S_W are the sample spaces of random variables A, D, and W, with respective outcomes a, d, and w, and respective probability density functions f_A, f_D, and f_W. The set of signaling waveforms E_γ is defined according to (assuming A_o ≠ 0)

E_γ = { γ(λ, t) = A_o(1 + a) r[(t − μ_D − d)/(μ_W + w)],  λ = (a, d, w), a ∈ S_A, d ∈ S_D, w ∈ S_W }      (5.65)

Here, A_o is the amplitude of the comparator output pulse for the ideal zero noise case, r is a normalized pulse waveform defined in Figure 5.13, a accounts for variations in the comparator output amplitude, d accounts for the advance/
Figure 5.12 Illustration of how noise alters the ith comparator output pulse shape.
delay of the commencement of the output pulse, w accounts for the variation in the width of the output pulse, and μ_D and μ_W, respectively, are the mean delay and mean width of the output pulse. By definition, the random variables A, D, and W have zero mean. To facilitate analysis, the assumption is made that the noise is uncorrelated over a time interval consistent with the duration of the comparator output pulse. The implication of this assumption is that the delay d_i of the ith comparator output pulse is independent of the width, as specified by w_i. Further, the delay and width of the ith comparator output pulse are assumed to be independent of the delay and width of any other output pulse, and the pulse amplitude is assumed to be independent of the pulse delay and width. With such assumptions, it follows that
P[γ(λ, t): a ∈ I_A, d ∈ I_D, w ∈ I_W] = ∫_{I_A} f_A(a) da ∫_{I_D} f_D(d) dd ∫_{I_W} f_W(w) dw      (5.66)

Figure 5.13 Definition of the rectangle function r.
consistent with the probability density function of the random variable Λ, whose outcomes are denoted λ = (a, d, w), being such that

f_Λ(λ) = f_A(a) f_D(d) f_W(w)      (5.67)
Clearly, Z is a signaling random process. As detailed in Appendix 2, previously derived results for the power spectral density of such a random process can be used to derive the power spectral density of Z. The result is given in the following theorem.

Theorem 5.2. Power Spectral Density — Jittered Signal  The power spectral density of the random process Z characterizing jitter, and modeled by the ensemble and associated signaling set as per Eqs. (5.64) and (5.65), is

G_Z(NB, f) = r A_o² (1 + σ_A²) ∫_{−∞}^{∞} (μ_W + w)² |R[(μ_W + w) f]|² f_W(w) dw
  + r A_o² |F_D(f)|² |∫_{−∞}^{∞} (μ_W + w) R[(μ_W + w) f] f_W(w) dw|² [ (1/N)(sin²(πNf/r)/sin²(πf/r)) − 1 ]      (5.68)

G_Z(f) = r A_o² (1 + σ_A²) ∫_{−∞}^{∞} (μ_W + w)² |R[(μ_W + w) f]|² f_W(w) dw
  + r A_o² |F_D(f)|² |∫_{−∞}^{∞} (μ_W + w) R[(μ_W + w) f] f_W(w) dw|² [ r Σ_{i=−∞}^{∞} δ(f − ir) − 1 ]      (5.69)

where r = 1/B, F_D is the Fourier transform of f_D, and R is the Fourier transform of r, that is,

F_D(f) = ∫_{−∞}^{∞} f_D(d) e^{−j2πfd} dd    R(f) = ∫_{−∞}^{∞} r(t) e^{−j2πft} dt = sinc(f) e^{−jπf}      (5.70)

and

σ_A² = ∫_{−∞}^{∞} a² f_A(a) da

Proof. The proof of this theorem is given in Appendix 2.
5.4.1 Case 1 — Constant Amplitude

The common case is where the amplitude is constant, that is, σ_A = 0. For this case, the expressions for the power spectral density given in Eqs. (5.68) and (5.69) readily simplify.

5.4.2 Case 2 — Zero Mean Amplitude

For the special case where the mean of the amplitude is zero, that is, A_o = 0, the signaling set is

E_γ = { γ(λ, t) = a r[(t − μ_D − d)/(μ_W + w)],  λ = (a, d, w), a ∈ S_A, d ∈ S_D, w ∈ S_W }      (5.71)

For this case, the power spectral density takes on the simpler form

G_Z(NB, f) = G_Z(f) = r σ_A² ∫_{−∞}^{∞} (μ_W + w)² |R[(μ_W + w) f]|² f_W(w) dw      (5.72)
5.4.3 Example — Jitter of a Pulse Train with Gaussian Variations

Consider the case where a comparator is driven by a periodic pulse train with period B = 1/r and a pulse width μ_W. Further, assume the pulse train is corrupted by noise such that the delay and width density functions, f_D and f_W, are Gaussian with variances σ_D² and σ_W², that is, f_α(x) = e^{−x²/2σ_α²}/√(2πσ_α²) for α ∈ {D, W}. The comparator output pulses are assumed to be of constant height A_o, and have a mean width μ_W. As shown in Appendix 3, the power spectral density of the comparator output random process is

G_Z(NB, f) = (r A_o²/(2π²f²)) [1 − cos(2πfμ_W) e^{−2π²f²σ_W²}]
  + (r A_o² e^{−4π²σ_D²f²}/(4π²f²)) [1 + e^{−4π²f²σ_W²} − 2 cos(2πfμ_W) e^{−2π²f²σ_W²}]
    × [ (1/N)(sin²(πNf/r)/sin²(πf/r)) − 1 ]      (5.73)

G_Z(f) = (r A_o²/(2π²f²)) [1 − cos(2πfμ_W) e^{−2π²f²σ_W²}]
  + (r A_o² e^{−4π²σ_D²f²}/(4π²f²)) [1 + e^{−4π²f²σ_W²} − 2 cos(2πfμ_W) e^{−2π²f²σ_W²}]
    × [ r Σ_{i=−∞}^{∞} δ(f − ir) − 1 ]      (5.74)
Figure 5.14 Effect of jitter on the power spectral density of a pulse train, evaluated on [0, 8], for the case where B = r = A_o = 1. The thicker line is for the zero jitter case.
The power spectral density, G_Z(8, f), for the case where the mean comparator output pulse width is μ_W = B/2, is plotted in Figure 5.14 for the ideal case, and for the case of σ_W² = σ_D² = (0.05B)². Clearly, the effect of jitter is to lead to spectral spread and to reduce the peak height of the harmonic components. Note that the zero jitter case yields a periodic pulse train, as shown in Figure 4.7, where W = T_p/2 and T_p = 1. Accordingly, the power spectral density shown for the zero jitter case is the same as that shown in Figure 4.9 for a periodic pulse train.
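Equation (5.73) can be evaluated directly at frequencies away from the harmonics (at multiples of r the ratio sin²(πNf/r)/sin²(πf/r) requires its limiting value). A sketch using the parameter values of Figure 5.14; the variable names are illustrative:

```python
import numpy as np

def jitter_psd(f, N=8, r=1.0, Ao=1.0, muW=0.5, sD=0.05, sW=0.05):
    """Evaluate Eq. (5.73) at a frequency f that is not a multiple of r.
    Defaults match Figure 5.14: B = r = Ao = 1, muW = B/2, sD = sW = 0.05B."""
    eW = np.exp(-2 * np.pi**2 * f**2 * sW**2)
    c = np.cos(2 * np.pi * f * muW)
    dirichlet = (np.sin(np.pi * N * f / r) / np.sin(np.pi * f / r)) ** 2 / N - 1
    term1 = r * Ao**2 * (1 - c * eW) / (2 * np.pi**2 * f**2)
    term2 = (r * Ao**2 * np.exp(-4 * np.pi**2 * sD**2 * f**2)
             / (4 * np.pi**2 * f**2)
             * (1 + np.exp(-4 * np.pi**2 * f**2 * sW**2) - 2 * c * eW)
             * dirichlet)
    return term1 + term2

print(jitter_psd(0.5))
```

Between the harmonics the jittered spectrum is small but nonzero; this is the spectral spread visible in Figure 5.14.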
5.5 SHOT NOISE

Shot noise occurs in many physical processes, including current flow in active electronic devices. Accordingly, a derivation of the power spectral density of such a process is important. To this end, consider an interval [0, T] that is quantized into M intervals of duration ∆t sec. Such quantization defines the set of times 0, ∆t, 2∆t, …, (M − 1)∆t. Clearly, M∆t = T. Next, consider the following experiment:

1. A time on the interval [0, T] is chosen at random, with a uniform probability density function, and quantized to yield a number from the set {0, ∆t, 2∆t, …, (M − 1)∆t}. It then follows that P[i∆t] = 1/M.
2. Step 1 is repeated N times.
Figure 5.15 Illustration of a signal from a shot noise process.
A shot noise process is one where a ''pulse'' waveform is associated with each of the N times defined by the above experiment, and an example of a shot noise waveform is shown in Figure 5.15. Consistent with the above description, a shot noise process Z is defined by the ensemble

E_Z = { z(γ_1, …, γ_N, t) = Σ_{i=1}^{N} p(t − γ_i ∆t),  γ_i ∈ {0, 1, …, M − 1},  P[γ_i] = 1/M }      (5.75)

where p ∈ L². The power spectral density of a shot noise process is detailed in the following theorem.

Theorem 5.3. Power Spectral Density of a Shot Noise Process  With the assumption that the interval [0, T] is sufficiently long, such that the effect of including the contribution of pulse waveforms outside of this interval is negligible, and with the limit ∆t → 0, the power spectral density of a shot noise process is

G_Z(T, f) = λ|P(f)|² + λ²|P(f)|² (1 − 1/N) T sinc²(fT)      (5.76)

G_Z(f) = λ|P(f)|² + λ²|P(0)|² δ(f)      (5.77)

where λ = N/T is the average number of waveforms/sec.

Proof. The proof of this theorem is given in Appendix 4.

5.5.1 Shot Noise due to Electrons Crossing a Barrier

Consider a classical description where electrons are moving through an entity due to some mechanism and, at random times, are crossing a boundary x = x_o, as illustrated in Figure 5.16. With a classical description, the electrons behave as particles with finite dimensions, and an electron will take dt sec to cross a boundary. The charge
Figure 5.16 Electron movement.
q_i, passing the boundary due to the ith electron, is as illustrated in Figure 5.17, where q = 1.6 × 10⁻¹⁹ C. Also shown in this figure is the current flow i_i through the boundary due to the ith electron. The relationship i(t) = dq/dt implies that

∫_{−∞}^{∞} i_i(t) dt = −q

It is convenient to define a normalized pulse function h according to

h(t) = −i_i(t + t_i)/q,    H(0) = ∫_{−∞}^{∞} h(t) dt = 1      (5.78)

where H is the Fourier transform of h. Assuming all electrons behave in a similar manner, the current generated by electrons passing the boundary can be written as

i(t) = −q Σ_i h(t − t_i)      (5.79)

If, on average, there are λ electrons/sec passing the boundary, it then follows from Theorem 5.3 that the power spectral density of the random process associated with such a current flow is

G(f) = λq²|H(f)|² + λ²q²|H(0)|² δ(f)      (5.80)
Figure 5.17 Illustration of a charge q crossing a boundary in dt sec and the resulting contribution to current flow through the boundary.
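As a numerical illustration of the level set by Eq. (5.80) when H(f) ≈ 1, the sketch below computes the shot noise of a 1 mA current (the current and bandwidth values are illustrative, not from the text). Note that G(f) here is a double-sided density, so the noise power in a measurement bandwidth B is 2qĪB, consistent with the familiar one-sided form 2qĪ:

```python
q = 1.6e-19        # electronic charge, C
I_bar = 1e-3       # mean current, A (illustrative value)
G = q * I_bar      # double-sided PSD level, A^2/Hz (Schottky's formula)
B = 1e6            # measurement bandwidth, Hz (illustrative value)
i_rms = (2 * G * B) ** 0.5   # rms noise current over [-B, B]
print(G, i_rms)    # 1.6e-22 A^2/Hz, about 18 nA rms
```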
As Ī = λq is the magnitude of the mean current flowing through the boundary, and H(0) = 1, it follows that the power spectral density can be written as

G(f) = q Ī |H(f)|² + Ī² δ(f)      (5.81)

If the transition time for electrons is short relative to the measurement response time, then, for frequencies less than the bandwidth of the measuring system, |H(f)| ≈ H(0) = 1, and the most commonly used result

G(f) ≈ q Ī + Ī² δ(f)      (5.82)

is obtained. Ignoring the impulse at zero frequency, Schottky's formula G(f) = q Ī results (Davenport, 1958 p. 123). Appropriate references for shot noise are Davenport (1958 ch. 7) and Rice (1944).

5.5.2 Shot Noise with Dead Time

Consider a shot noise process where the occurrence of a pulse precludes the occurrence of a second pulse for a time t_Z, as illustrated in Figure 5.18. This exclusion time t_Z represents a ''dead time'' or ''dead zone.'' Underlying such a random process is a point random process where, for the interval [0, T], the outcomes are sets of times t_1, …, t_{N_m}. Such a set of times is generated by the following experiment: A number τ_1 is chosen at random from the sample space S_τ, defined by a random variable τ with a density function f_τ, and is added to the dead time t_Z to create the first time t_1. The density function is such that f_τ(τ) = 0 for τ < 0. The second time, t_2, is given by t_1, plus the dead time t_Z, plus another number τ_2 chosen at random from S_τ. This is repeated, and the ith time is given by

t_i = t_{i−1} + t_Z + τ_i = i t_Z + Σ_{k=1}^{i} τ_k,    τ_k ∈ S_τ      (5.83)

This process is stopped when t_{N_m + 1} > T and t_{N_m} ≤ T.
Figure 5.18 Illustration of a signal from a shot noise process with a dead time tZ .
This experiment generates N_m numbers τ_1, …, τ_{N_m} that have a probability of occurrence consistent with

P[(τ_1, …, τ_{N_m}): τ_i ∈ I_i] = Π_{i=1}^{N_m} ∫_{I_i} f_τ(τ) dτ      (5.84)

Associated in a one-to-one manner with these numbers is a set of times t_1, …, t_{N_m}, as defined by Eq. (5.83). Hence,

P[(t_1, …, t_{N_m}): t_i ∈ I_i] = Π_{i=1}^{N_m} ∫_{I_i} f_τ(τ) dτ,    t_i = i t_Z + Σ_{k=1}^{i} τ_k      (5.85)

When a pulse function is associated with these times, a shot noise process Z with a dead time t_Z is defined. Before formally defining such a random process, it is useful to consider the variation in N_m for a fixed interval [0, T], a fixed dead time t_Z, and with the mean of the random variable τ denoted μ.

5.5.2.1 Variation in Number of Pulses

Consider the random variable τ_N, defined as the sum of N identically distributed and independent random variables τ_1, …, τ_N, plus the time of N dead zones, that is,

τ_N = N t_Z + Σ_{i=1}^{N} τ_i      (5.86)

The mean and variance of τ_N are N(t_Z + μ) and Nσ², respectively, where μ and σ² are the mean and variance of τ_i. It follows from the central limit theorem (Grimmett, 1992 p. 175; Larson, 1986 p. 322), with probability 0.95, that an outcome of τ_N is within 1.96σ√N of the mean N(t_Z + μ) as N becomes increasingly large. Hence, the relative variation in an outcome of τ_N, as given by

1.96σ√N / [N(t_Z + μ)] = 1.96σ / [√N (t_Z + μ)]      (5.87)

clearly approaches zero as N increases. Accordingly, a reasonable approximation is to use a fixed number of outcomes N = T/(t_Z + μ) in the interval [0, T], rather than a variable number N_m.

5.5.2.2 Power Spectral Density

A shot noise process Z, with a dead time t_Z, can be defined for the interval [0, T] by the ensemble
E_Z = { z(τ_1, …, τ_N, t) = Σ_{i=1}^{N} p(t − t_i),  t_i = i t_Z + Σ_{k=1}^{i} τ_k,  τ_k ∈ S_τ }      (5.88)
where p ∈ L², N = T/(t_Z + μ), and P[z(τ_1, …, τ_N, t)] = P[τ_1, …, τ_N]. The power spectral density of this shot noise process is detailed in the following theorem.

Theorem 5.4. Power Spectral Density of a Shot Noise Process with Dead Time  The power spectral density, on the intervals [0, T] and [0, ∞), of a shot noise process with a dead time t_Z is

G_Z(T, f) = λ|P(f)|² [ 1 + 2Re Σ_{k=1}^{N−1} (1 − k/N) e^{j2πfk t_Z} F_τ^{k}(−f) ]      (5.89)

G_Z(f) = λ|P(f)|² [ 1 + 2Re Σ_{k=1}^{∞} e^{j2πfk t_Z} F_τ^{k}(−f) ]      (5.90)

where F_τ is the Fourier transform of the density function f_τ, N = T/(t_Z + μ), μ is the mean of the random variable τ, and λ = N/T = 1/(t_Z + μ) is the average number of pulses/sec.

Proof. The proof of this theorem is given in Appendix 5.

5.5.3 Example

Consider the case where the probability density function f_τ is uniform on the interval [0, τ_m], that is, f_τ(τ) = 1/τ_m for 0 ≤ τ ≤ τ_m and zero elsewhere. Then, from Theorem 2.33,

F_τ(f) = sinc(fτ_m) e^{−jπfτ_m}    μ = τ_m/2      (5.91)

and

G_Z(T, f) = λ|P(f)|² [ 1 + 2 Σ_{k=1}^{N−1} (1 − k/N) cos(2πfk(t_Z + τ_m/2)) sinc^k(fτ_m) ]      (5.92)
where N = T/(t_Z + τ_m/2). This result is plotted in Figure 5.19, along with the power spectral density of a regular shot noise process, for the case where the pulse function is rectangular, that is, p(t) = 1 for 0 ≤ t ≤ t_p and zero elsewhere, whereupon |P(f)|² = t_p² sinc²(t_p f). The values used are t_p = 1, t_Z = 5, τ_m = 10, λ = 0.1, and T = 1000.
Not unexpectedly, the power spectral densities are similar. This is because the mean time between pulses, t_Z + τ_m/2 = 10, is double the dead zone time t_Z = 5, and the dead zone has only a moderate influence. Clearly, the power spectral densities of shot noise processes with and without a dead zone will become increasingly different when the mean time between pulses, for the shot noise process, becomes less than the dead zone time.
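The point process underlying Eq. (5.92) is easy to simulate, which gives an independent check on the pulse rate λ = 1/(t_Z + τ_m/2) used above. A Monte Carlo sketch generating event times per Eq. (5.83), with τ uniform on [0, τ_m]; the function and variable names are illustrative:

```python
import random

def dead_time_events(T, tZ, tau_max, seed=1):
    """Generate event times per Eq. (5.83): t_i = t_{i-1} + tZ + tau_i,
    with tau uniform on [0, tau_max] (the example's f_tau)."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += tZ + rng.uniform(0.0, tau_max)
        if t > T:
            break
        times.append(t)
    return times

# Example values from the text: tZ = 5, tau_max = 10, T = 1000
times = dead_time_events(T=1000.0, tZ=5.0, tau_max=10.0)
rate = len(times) / 1000.0
print(rate)  # close to lambda = 1/(tZ + tau_max/2) = 0.1
```

Every gap between consecutive events is at least t_Z, which is the dead-zone constraint stated above.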
Figure 5.19 Power spectral density of a shot noise process with and without a dead time.
For the case where the dead zone time is much longer than the mean of τ, the random process approaches that of a periodic signal with period close to t_Z, that is, a jittered periodic signal. For example, with T = 8, t_Z = 1, τ_m = 0.01, t_p = 0.5, N = 8, and λ ≈ 1, the power spectral density given by Eq. (5.92) can be shown to approach that of Figure 4.9, which is for a periodic pulse train.

5.6 GENERALIZED SIGNALING PROCESSES

Combining the characteristics of a signaling random process and a shot noise process, the generalized signaling random process W can be defined on [0, T] by the ensemble
E_W = { w(θ_1, …, θ_N, t) = Σ_{i=1}^{N} φ(λ_i, t − q_i),  θ_i = (λ_i, q_i),  φ(λ_i, ·) ∈ E_φ,  λ_i ∈ S_Λ,  q_i ∈ S_Q }      (5.93)

where S_Λ and S_Q are the respective sample spaces for the independent random variables Λ and Q, with outcomes λ and q, and with probability density functions f_Λ and f_Q. For the case where λ_i is independent of λ_j for i ≠ j, and q_i is independent of q_j for i ≠ j, the probabilities associated with outcomes of W are such that

P[w(θ_1, …, θ_N, t): λ_i ∈ I_i, q_i ∈ J_i] = Π_{i=1}^{N} ∫_{I_i} f_Λ(λ) dλ ∫_{J_i} f_Q(q) dq      (5.94)
The signal set E_φ is defined as

E_φ = { φ(λ, t): λ ∈ S_Λ }      (5.95)

where

P[φ(λ, t): λ ∈ [λ_o, λ_o + dλ]] = f_Λ(λ_o) dλ      (5.96)

The following theorem details the power spectral density of this generalized signaling random process.

Theorem 5.5. Power Spectral Density of a Generalized Signaling Process  Assuming the effect of including components of the signaling waveforms outside the interval [0, T] is negligible, and that the ith signaling waveform is independent of the jth waveform when i ≠ j, the power spectral density of the random process W, defined by the ensemble as per Eq. (5.93), is

G_W(T, f) = r Φ̄(f) + r² (1 − 1/(rT)) T |F_Q(f)|² |φ̄(f)|²      (5.97)

where r = N/T is the average waveform rate, F_Q is the Fourier transform of the density function f_Q, and

Φ̄(f) = ∫_{−∞}^{∞} |φ(λ, f)|² f_Λ(λ) dλ    φ̄(f) = ∫_{−∞}^{∞} φ(λ, f) f_Λ(λ) dλ      (5.98)

Here, φ(λ, f) is the Fourier transform of a waveform from the signaling set, evaluated over the interval (−∞, ∞), that is,

φ(λ, f) = ∫_{−∞}^{∞} φ(λ, t) e^{−j2πft} dt      (5.99)
Proof. The proof of this theorem is given in Appendix 6.

5.6.1 Uniform Distribution of Times

For the usual case of a uniform distribution of times on the interval [0, T], consistent with f_Q(q) = 1/T for 0 ≤ q ≤ T and zero elsewhere, it follows that F_Q(f) = sinc(fT) e^{−jπfT}, and hence,
G_W(T, f) = r Φ̄(f) + r² (1 − 1/(rT)) T sinc²(fT) |φ̄(f)|²      (5.100)

G_W(f) = r Φ̄(f) + r² |φ̄(0)|² δ(f)      (5.101)
Not surprisingly, by comparison with the results for the shot noise case, that is, Eqs. (5.76) and (5.77), this result is a straightforward generalization of that case.
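The zero-mean special case of Eq. (5.100), where φ̄(f) = 0 so only the r Φ̄(f) term survives, can be checked by simulation. The sketch below places N = rT rectangular pulses of random sign at uniformly distributed times and averages the periodogram at a single frequency; the pulse width and other values are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 100.0, 1000             # interval length and pulses per trial
r = N / T                      # average waveform rate, 10/sec
f = 0.3                        # analysis frequency (Hz)

w = 0.1                        # rectangular pulse width -> P(f) = w sinc(w f)
Pf = w * np.sinc(w * f)

trials = 400
est = 0.0
for _ in range(trials):
    q = rng.uniform(0, T, N)           # uniform arrival times
    a = rng.choice([-1.0, 1.0], N)     # zero-mean amplitudes -> phi_bar = 0
    # Fourier transform of sum(a_i p(t - q_i)) evaluated at f
    X = np.sum(a * Pf * np.exp(-2j * np.pi * f * q))
    est += abs(X) ** 2 / T
est /= trials
print(est, r * Pf**2)   # estimate vs. Eq. (5.100) with phi_bar = 0
```

With zero-mean amplitudes the impulse term vanishes and the averaged periodogram settles on r |P(f)|², as Eq. (5.100) predicts.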
APPENDIX 1: PROOF OF THEOREM 5.1

Theorem 4.7 states that the power spectral density of the sum of N random processes X_1, …, X_N is given by

G_X(T, f) = Σ_{i=1}^{N} G_i(T, f) + Σ_{i=1}^{N} Σ_{k=1, k≠i}^{N} G_{ik}(T, f)      (5.102)

With T = ND, this result can be used directly, with the ith random process X_i being defined by the ensemble

E_{X_i} = { x_i(λ_i, t) = γ(λ_i, t − (i − 1)D),  λ_i ∈ S_Λ,  γ ∈ E_γ }      (5.103)

where E_γ is defined by Eq. (5.3). Using the definitions for the power spectral density for the countable and uncountable cases, it follows, assuming the contributions of the signaling waveforms outside of the interval [0, T] can be included, that

G_i(T, f) = (1/T) Σ_{λ} p_λ |γ(λ, f)|²    G_i(T, f) = (1/T) ∫_{−∞}^{∞} |γ(λ, f)|² f_Λ(λ) dλ      (5.104)

Hence, using the definition for Φ̄(f), as per Eq. (5.17), it follows that G_i(T, f) = Φ̄(f)/T for both cases. Further, when i ≠ k, the respective results follow for the countable and uncountable cases:

G_{ik}(T, f) = (1/T) Σ_{λ_i} Σ_{λ_k} p_{λ_i λ_k} γ(λ_i, f) γ*(λ_k, f) e^{−j2π(i−1)Df} e^{j2π(k−1)Df}
  = (1/T) e^{−j2π(i−k)Df} R̄_{ik}(f)      (5.105)

G_{ik}(T, f) = (1/T) ∫_{−∞}^{∞} ∫_{−∞}^{∞} γ(λ_i, f) γ*(λ_k, f) f_{Λ_i Λ_k}(λ_i, λ_k) dλ_i dλ_k e^{−j2π(i−1)Df} e^{j2π(k−1)Df}
  = (1/T) e^{−j2π(i−k)Df} R̄_{ik}(f)      (5.106)

Hence, using the definition for R̄_{ik}(f), as per Eq. (5.18), it follows for both the countable and uncountable cases that G_{ik}(T, f) = e^{−j2π(i−k)Df} R̄_{ik}(f)/T.
Thus, with r = N/T = 1/D, it follows that

G_X(ND, f) = r Φ̄(f) + (1/T) Σ_{i=1}^{N} Σ_{k=1, k≠i}^{N} e^{−j2π(i−k)Df} R̄_{ik}(f)      (5.107)

Further simplification relies on simplifying the second term in this equation, denoted T_2. This is done for the countable case — the uncountable case follows in an analogous manner. By definition,

R̄_{ik}(f) = Σ_{λ_i} Σ_{λ_k} p_{λ_i λ_k} γ(λ_i, f) γ*(λ_k, f) = R̄*_{ki}(f)      (5.108)

and

T_2 = (1/T) [ e^{j2πDf} Σ_i R̄_{i,i+1}(f) + e^{−j2πDf} Σ_i R̄_{i+1,i}(f) + e^{j4πDf} Σ_i R̄_{i,i+2}(f) + e^{−j4πDf} Σ_i R̄_{i+2,i}(f) + ⋯ ]      (5.109)

To further simplify this equation, first note that there are N − 1 summations where k = i ± 1, N − 2 summations where k = i ± 2, …, and 1 summation where k = i ± (N − 1). Second, note the assumption p_{λ_i λ_k} = p_{λ_{q+i} λ_{q+k}}, that is, the correlations only depend on the difference between the location of signaling intervals and not on their absolute location. Third, note that p_{λ_i λ_k} = p_{λ_k λ_i}, which follows because p_{λ_i λ_k} is the probability of γ(λ_i, t) in the ith interval and γ(λ_k, t) in the kth interval, while p_{λ_k λ_i} is the probability of γ(λ_k, t) in the kth interval and γ(λ_i, t) in the ith interval. It then follows that

T_2 = 2r (1 − 1/N) Re[ e^{j2πDf} Σ_{λ_1} Σ_{λ_2} p_{λ_1 λ_2} γ(λ_1, f) γ*(λ_2, f) ]
  + 2r (1 − 2/N) Re[ e^{j4πDf} Σ_{λ_1} Σ_{λ_3} p_{λ_1 λ_3} γ(λ_1, f) γ*(λ_3, f) ] + ⋯
  + 2r (1 − (N − 1)/N) Re[ e^{j2π(N−1)Df} Σ_{λ_1} Σ_{λ_N} p_{λ_1 λ_N} γ(λ_1, f) γ*(λ_N, f) ]      (5.110)

which can be rewritten as

T_2 = 2r Σ_{i=1}^{N−1} (1 − i/N) Re[ e^{j2πiDf} Σ_{λ_1} Σ_{λ_{1+i}} p_{λ_1 λ_{1+i}} γ(λ_1, f) γ*(λ_{1+i}, f) ]      (5.111)

Adding and subtracting the term p_{λ_1} p_{λ_{1+i}} γ(λ_1, f) γ*(λ_{1+i}, f) in the inner summation yields

T_2 = r |γ̄(f)|² Σ_{i=1}^{N−1} 2(1 − i/N) cos(2πiDf)
  + 2r Σ_{i=1}^{N−1} (1 − i/N) Re[ e^{j2πiDf} Σ_{λ_1} Σ_{λ_{1+i}} [p_{λ_1 λ_{1+i}} − p_{λ_1} p_{λ_{1+i}}] γ(λ_1, f) γ*(λ_{1+i}, f) ]      (5.112)

where γ̄(f) = Σ_λ p_λ γ(λ, f). Using the definition for R̄_{1,1+i}(f), a result implicit in Theorem 2.32, namely

2 Σ_{i=1}^{N−1} (1 − i/N) cos(2πiDf) = (1/N)(sin²(πNfD)/sin²(πfD)) − 1      (5.113)

and assuming p_{λ_1 λ_{1+i}} = p_{λ_1} p_{λ_{1+i}} for i > m, it follows that

T_2 = r |γ̄(f)|² [ (1/N)(sin²(πNfD)/sin²(πfD)) − 1 ]
  + 2r Σ_{i=1}^{m} (1 − i/N) Re[ e^{j2πiDf} (R̄_{1,1+i}(f) − |γ̄(f)|²) ]      (5.114)

Substituting this result into Eq. (5.107) yields

G_X(ND, f) = r Φ̄(f) − r |γ̄(f)|² + r |γ̄(f)|² (1/N)(sin²(πNfD)/sin²(πfD))
  + 2r Σ_{i=1}^{m} (1 − i/N) Re[ e^{j2πiDf} (R̄_{1,1+i}(f) − |γ̄(f)|²) ]      (5.115)
Again, using a result from Theorem 2.32, the result for the infinite interval follows:

G_X(f) = r Φ̄(f) − r |γ̄(f)|² + r² |γ̄(f)|² Σ_{n=−∞}^{∞} δ(f − nr)
  + 2r Σ_{i=1}^{m} Re[ e^{j2πiDf} (R̄_{1,1+i}(f) − |γ̄(f)|²) ]      (5.116)
APPENDIX 2: PROOF OF THEOREM 5.2

As the noise is assumed to be uncorrelated over a time interval consistent with the comparator output pulse, it follows that R̄_{i,i+k}(f), as defined by Eq. (5.18), equals |γ̄(f)|². Hence, from Theorem 5.1, the power spectral density of Z is given by

G_Z(NB, f) = r Φ̄(f) + r |γ̄(f)|² [ (1/N)(sin²(πNf/r)/sin²(πf/r)) − 1 ]      (5.117)

where r = 1/B, and it remains to determine γ̄(f) and Φ̄(f). As

γ(λ, t) = A_o(1 + a) r[(t − μ_D − d)/(μ_W + w)]

with λ = (a, d, w), and

r(t) ↔ R(f),    r[(t − b)/α] ↔ α R(αf) e^{−j2πfb},  α > 0      (5.118)

it follows that

γ(λ, f) = A_o(1 + a)(μ_W + w) R[(μ_W + w) f] e^{−j2πf(μ_D + d)}      (5.119)

Then, as f_Λ(λ) = f_A(a) f_D(d) f_W(w), and γ̄(f) = ∫ γ(λ, f) f_Λ(λ) dλ, it follows that

γ̄(f) = ∫∫∫ A_o(1 + a)(μ_W + w) R[(μ_W + w) f] e^{−j2πf(μ_D + d)} f_A(a) f_D(d) f_W(w) da dd dw
  = A_o e^{−j2πfμ_D} ∫_{−∞}^{∞} e^{−j2πfd} f_D(d) dd ∫_{−∞}^{∞} (μ_W + w) R[(μ_W + w) f] f_W(w) dw      (5.120)
where the independence and the zero mean property of the random variables have been used. With the Fourier transform definition

F_D(f) = ∫_{−∞}^{∞} e^{−j2πfd} f_D(d) dd      (5.121)

it follows that

γ̄(f) = A_o F_D(f) e^{−j2πfμ_D} ∫_{−∞}^{∞} (μ_W + w) R[(μ_W + w) f] f_W(w) dw      (5.122)

Similarly, from the definition Φ̄(f) = ∫ |γ(λ, f)|² f_Λ(λ) dλ, it follows that

Φ̄(f) = ∫∫∫ A_o²(1 + a)²(μ_W + w)² |R[(μ_W + w) f]|² f_A(a) f_D(d) f_W(w) da dd dw
  = A_o²(1 + σ_A²) ∫_{−∞}^{∞} (μ_W + w)² |R[(μ_W + w) f]|² f_W(w) dw      (5.123)
Hence,

G_Z(NB, f) = r A_o²(1 + σ_A²) ∫_{−∞}^{∞} (μ_W + w)² |R[(μ_W + w) f]|² f_W(w) dw
  + r A_o² |F_D(f)|² |∫_{−∞}^{∞} (μ_W + w) R[(μ_W + w) f] f_W(w) dw|² [ (1/N)(sin²(πNf/r)/sin²(πf/r)) − 1 ]      (5.124)

and

G_Z(f) = r A_o²(1 + σ_A²) ∫_{−∞}^{∞} (μ_W + w)² |R[(μ_W + w) f]|² f_W(w) dw
  + r A_o² |F_D(f)|² |∫_{−∞}^{∞} (μ_W + w) R[(μ_W + w) f] f_W(w) dw|² [ r Σ_{i=−∞}^{∞} δ(f − ir) − 1 ]      (5.125)
APPENDIX 3: PROOF OF EQUATION 5.73

With σ_A² = 0, Gaussian probability density functions for W and D, and R(f) = sinc(f) e^{−jπf}, it follows from Eq. (5.68) that

G_Z(NB, f) = r A_o² ∫_{−∞}^{∞} (μ_W + w)² sinc²[(μ_W + w) f] (e^{−w²/2σ_W²}/√(2πσ_W²)) dw
  + r A_o² e^{−4π²σ_D²f²} [ (1/N)(sin²(πNf/r)/sin²(πf/r)) − 1 ]
    × | ∫_{−∞}^{∞} (μ_W + w) sinc[(μ_W + w) f] e^{−jπ(μ_W + w)f} (e^{−w²/2σ_W²}/√(2πσ_W²)) dw |²      (5.126)

where the Fourier transform result (McGillem, 1991 p. 168)

e^{−bt²} ↔ √(π/b) e^{−π²f²/b},    (e^{−d²/2σ_D²}/√(2πσ_D²)) ↔ e^{−2π²σ_D²f²}      (5.127)

has been used to evaluate F_D(f). Using the definition of the sinc function, it follows that

G_Z(NB, f) = (r A_o²/π²f²) ∫_{−∞}^{∞} sin²[π(μ_W + w) f] (e^{−w²/2σ_W²}/√(2πσ_W²)) dw
  + (r A_o² e^{−4π²σ_D²f²}/π²f²) [ (1/N)(sin²(πNf/r)/sin²(πf/r)) − 1 ]
    × | ∫_{−∞}^{∞} sin[π(μ_W + w) f] e^{−jπ(μ_W + w)f} (e^{−w²/2σ_W²}/√(2πσ_W²)) dw |²      (5.128)
Using the identity 2 sin²(A) = 1 − cos(2A), the standard expansion for cos(A + B), the fact that the integral of a density function is unity, the fact that the integral of the product of an even and an odd function is zero, and the integral result (Spiegel, 1968 p. 98)

∫_0^∞ cos(bx) e^{−a²x²} dx = (√π/2a) e^{−b²/4a²}
it follows that

∫_{−∞}^{∞} cos(2πwf) (e^{−w²/2σ_W²}/√(2πσ_W²)) dw = e^{−2π²σ_W²f²}      (5.129)

and the first term in Eq. (5.128) simplifies to

(r A_o²/2π²f²) [1 − cos(2πfμ_W) e^{−2π²σ_W²f²}]      (5.130)

Consider the second integral in Eq. (5.128) with the sin function written in its equivalent exponential form:

I = (1/2j) ∫_{−∞}^{∞} [e^{jπ(μ_W + w)f} − e^{−jπ(μ_W + w)f}] e^{−jπ(μ_W + w)f} (e^{−w²/2σ_W²}/√(2πσ_W²)) dw
  = (1/2j) ∫_{−∞}^{∞} [1 − e^{−j2πμ_W f} e^{−j2πwf}] (e^{−w²/2σ_W²}/√(2πσ_W²)) dw      (5.131)

Writing the complex exponential term in its equivalent trigonometric form, and noting that one of the resulting integrals is of the product of an even and an odd function, which integrates to zero, it follows from Eq. (5.129) that

I = (1/2j) [1 − e^{−j2πμ_W f} e^{−2π²σ_W²f²}]      (5.132)

Substitution of these integral results yields the required form:

G_Z(NB, f) = (r A_o²/2π²f²) [1 − cos(2πfμ_W) e^{−2π²f²σ_W²}]
  + (r A_o² e^{−4π²σ_D²f²}/4π²f²) [1 + e^{−4π²f²σ_W²} − 2 cos(2πfμ_W) e^{−2π²f²σ_W²}]
    × [ (1/N)(sin²(πNf/r)/sin²(πf/r)) − 1 ]      (5.133)

Using a limit result from Theorem 2.32 for the last term in this equation yields the result for the infinite interval.
APPENDIX 4: PROOF OF THEOREM 5.3 Theorem 4.7 states that the power spectral density of the sum of N random processes X , . . . , X , is given by , , , , G (T, f ) : G (T, f ) ; G (T, f ) (5.134) 8 G GI G G I I$G
175
APPENDIX 4: PROOF OF THEOREM 5.3
This result can be used directly with the ith random process X_i being defined by the ensemble

E_{X_i} = {x_i(γ, t) = p(t − γΔt): γ ∈ {0, ..., M − 1}, P[γ] = 1/M}   (5.135)

whereupon, it follows that

G_{X_i}(T, f) = (1/T)(1/M) Σ_{γ=0}^{M−1} |P(f) e^{−j2πγΔtf}|² = |P(f)|²/T   (5.136)

To establish an expression for the cross power spectral density between X_i and X_k for i ≠ k, note that these random processes are independent and identical. From Eq. (4.52) it then follows that

G_{X_iX_k}(T, f) = X̄_i(T, f) X̄_k*(T, f)/T = (1/T) |(1/M) Σ_{γ=0}^{M−1} P(f) e^{−j2πγΔtf}|² = (|P(f)|²/T) (sin²(πMΔtf)/(M² sin²(πΔtf)))   (5.137)

where the last result is from Theorem 2.32. Using these results, it then follows that the power spectral density is given by

G_X(T, f) = N|P(f)|²/T + ((N² − N)|P(f)|²/T) (sin²(πMΔtf)/(M² sin²(πΔtf)))   (5.138)

For the finite interval [0, T] with N and T fixed, it follows, for any fixed frequency range f ∈ [−f_x, f_x], that there will exist a Δt → 0 and an M → ∞ with ΔtM = T, such that sin(πΔtf) ≈ πΔtf, and hence,

G_X(T, f) ≈ N|P(f)|²/T + (N²|P(f)|²/T)(1 − 1/N) (sin²(πMΔtf)/(πMΔtf)²)
          = λ|P(f)|² + λ²|P(f)|²(1 − 1/N) T sinc²(fT)   (5.139)

where λ = N/T is the average number of waveforms/sec. This is the required result for the finite interval. For the infinite interval, a result from Theorem 2.32 yields the required result, namely,

G_X(f) = lim_{T→∞} G_X(T, f) = λ|P(f)|² + λ²|P(0)|² δ(f)   (5.140)
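The independence step in the proof above can be illustrated with a small Monte Carlo experiment (a sketch under the assumption of independent, uniformly distributed pulse positions; the parameters are illustrative and not from the text): for N pulses at positions t_k uniform on [0, T], E|Σ_k e^{−j2πft_k}|² = N + (N² − N) sinc²(fT), which reduces to N at the frequencies f = m/T, consistent with the λ|P(f)|² term of Eq. (5.140).

```python
import numpy as np

# Monte Carlo check: for N pulses at independent uniform positions t_k
# in [0, T], the expected value of |sum_k exp(-j 2 pi f t_k)|^2 is
# N + (N^2 - N) sinc^2(fT), which equals N at f = m/T, m a nonzero integer.
rng = np.random.default_rng(1)
N, T, trials = 50, 1.0, 4000
f = 3.0 / T                      # a frequency where sinc(fT) = 0

acc = 0.0
for _ in range(trials):
    t = rng.uniform(0.0, T, N)   # random pulse positions
    acc += abs(np.exp(-2j * np.pi * f * t).sum()) ** 2
estimate = acc / trials

print(estimate)                  # close to N = 50
```

The N² sinc²(fT) term, which survives only near f = 0 as T grows, is what becomes the λ²|P(0)|²δ(f) component in the infinite-interval limit.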
APPENDIX 5: PROOF OF THEOREM 5.4

As τ_i is independent of τ_k for i ≠ k, the power spectral density of the random process Z, with an ensemble given by Eq. (5.88), is

G_Z(T, f) = (1/T) ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f_τ(τ_1) ··· f_τ(τ_N) |Σ_{i=1}^{N} P(f) e^{−j2πft_i}|² dτ_1 ··· dτ_N   (5.141)

Substitution of the result t_i = iΔt + Σ_{k=1}^{i} τ_k yields

G_Z(T, f) = (|P(f)|²/T) ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f_τ(τ_1) ··· f_τ(τ_N) |Σ_{i=1}^{N} exp[−j2πf(iΔt + Σ_{k=1}^{i} τ_k)]|² dτ_1 ··· dτ_N   (5.142)

Further simplification relies on the following result:

|Σ_{p=1}^{N} e^{jh(p)}|² = N + Σ_{p=1}^{N} Σ_{q=1, q≠p}^{N} e^{jh(p)} e^{−jh(q)} = N + 2Re{Σ_{p=1}^{N} Σ_{q=p+1}^{N} e^{j[h(p)−h(q)]}}   (5.143)

With

h(p) = −2πfpΔt − 2πf Σ_{k=1}^{p} τ_k   (5.144)

and q > p, it follows that

h(p) − h(q) = 2πf(q − p)Δt + 2πf Σ_{k=p+1}^{q} τ_k   (5.145)

Hence,

G_Z(T, f) = (|P(f)|²/T) [N + 2Re{∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f_τ(τ_1) ··· f_τ(τ_N) Σ_{p=1}^{N} Σ_{q=p+1}^{N} exp[j2πf(q − p)Δt] exp[j2πf Σ_{k=p+1}^{q} τ_k] dτ_1 ··· dτ_N}]   (5.146)

Interchanging the order of summation and integration in the second term in
this equation yields, for the argument of the Re operator,

Σ_{p=1}^{N} Σ_{q=p+1}^{N} e^{j2π(q−p)fΔt} ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f_τ(τ_1) ··· f_τ(τ_N) exp[j2πf Σ_{k=p+1}^{q} τ_k] dτ_1 ··· dτ_N = Σ_{p=1}^{N} Σ_{q=p+1}^{N} e^{j2π(q−p)fΔt} F^{q−p}(−f)   (5.147)

where F is the Fourier transform of the density function f_τ. In the double summation in this equation there are N − 1 terms where q = p + 1; N − 2 terms where q = p + 2, ...; and 1 term where q = p + (N − 1). Thus, this double summation can be written as

Σ_{p=1}^{N} Σ_{q=p+1}^{N} e^{j2π(q−p)fΔt} F^{q−p}(−f) = Σ_{k=1}^{N−1} (N − k) e^{j2πkfΔt} F^{k}(−f)   (5.148)

With this result, it follows that

G_Z(T, f) = (N|P(f)|²/T) [1 + 2Re{Σ_{k=1}^{N−1} (1 − k/N) e^{j2πkfΔt} F^{k}(−f)}]   (5.149)

With λ = N/T, the required results follow, that is,

G_Z(T, f) = λ|P(f)|² [1 + 2Re{Σ_{k=1}^{N−1} (1 − k/N) e^{j2πkfΔt} F^{k}(−f)}]   (5.150)

G_Z(f) = λ|P(f)|² [1 + 2Re{Σ_{k=1}^{∞} e^{j2πkfΔt} F^{k}(−f)}]   (5.151)
APPENDIX 6: PROOF OF THEOREM 5.5

Theorem 4.7 states that the power spectral density of the sum of N random processes W_1, ..., W_N is given by

G_W(T, f) = Σ_{i=1}^{N} G_{W_i}(T, f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} G_{W_iW_j}(T, f)   (5.152)

This result can be used directly with the ith random process W_i being defined by the ensemble

E_{W_i} = {w_i(γ_i, q_i, t) = φ(γ_i, t − q_i): γ_i ∈ S_Γ, q_i ∈ S_Q, φ ∈ E_Φ}   (5.153)
Using the definition for the power spectral density for the uncountable case, it follows, assuming contributions of the signaling waveforms outside of the interval [0, T] can be included, that

G_{W_i}(T, f) = (1/T) ∫_{S_Γ} ∫_{S_Q} |Φ(γ, f) e^{−j2πfq}|² f_Γ(γ) f_Q(q) dq dγ = Ḡ_Φ(f)/T   (5.154)

where, by definition,

Ḡ_Φ(f) = ∫_{S_Γ} |Φ(γ, f)|² f_Γ(γ) dγ   (5.155)

Further, when i ≠ k,

G_{W_iW_k}(T, f) = (1/T) ∫_{S_Γ} ∫_{S_Γ} Φ(γ_i, f) Φ*(γ_k, f) f_Γ(γ_i) f_Γ(γ_k) dγ_i dγ_k
                 × ∫_{S_Q} ∫_{S_Q} e^{−j2πfq_i} e^{j2πfq_k} f_Q(q_i) f_Q(q_k) dq_i dq_k
                 = (|F_Q(f)|²/T) |∫_{S_Γ} Φ(γ, f) f_Γ(γ) dγ|² = |F_Q(f)|² |Φ̄(f)|²/T   (5.156)

where

Φ̄(f) = ∫_{S_Γ} Φ(γ, f) f_Γ(γ) dγ   (5.157)

and F_Q is the Fourier transform of the density function f_Q. With r = N/T, it follows that

G_W(ND, f) = r Ḡ_Φ(f) + r(N − 1) |F_Q(f)|² |Φ̄(f)|²   (5.158)

which is the required result.
6 Power Spectral Density of Standard Random Processes — Part 2

6.1 INTRODUCTION

This chapter continues the discussion of standard random processes commenced in Chapter 5. Specifically, the power spectral densities associated with sampling, quadrature amplitude modulation, and a random walk are discussed. It is shown that a 1/f power spectral density is consistent with a summation of bounded random walks.

6.2 SAMPLED SIGNALS

Sampling of signals is widespread with the increasing trend towards processing signals digitally. One goal is to establish, from samples of a signal, the Fourier transform of the signal. Consider a signal x that is piecewise smooth on [0, ND], as illustrated in Figure 6.1. One approach to establishing the Fourier transform of such a signal is to use a Riemann sum (Spivak, 1994 p. 279) to approximate the integral defining the Fourier transform, that is,

∫_0^{ND} x(t) e^{−j2πft} dt ≈ D [x(0⁺)/2 + Σ_{p=1}^{N−1} x(pD) e^{−j2πpDf} + (x(ND⁻)/2) e^{−j2πNDf}]   (6.1)

If x is piecewise smooth on [0, T], then from Theorem 2.7 it has bounded variation on this interval. It then follows from Theorem 2.19 that this
Figure 6.1 Piecewise smooth function on [0, ND].
approximation can be made arbitrarily accurate by increasing the number of samples taken. The following theorem establishes an exact relationship between this Riemann sum and the Fourier transform of x. This relationship facilitates evaluation of the power spectral density of a sampled signal.

Theorem 6.1 Sampling Result  Consider N + 1 samples, taken at 0, D, ..., ND sec with a sampling frequency f_S = 1/D Hz, of a piecewise smooth signal x (see Figure 6.1). If X is the Fourier transform of x, and

lim_{M→∞} Σ_{k=−M}^{M} X(ND, f − kf_S)

converges for all f ∈ R, then

x(0⁺)/2 + Σ_{p=1}^{N−1} [(x(pD⁻) + x(pD⁺))/2] e^{−j2πpDf} + (x(ND⁻)/2) e^{−j2πNDf} = f_S Σ_{k=−∞}^{∞} X(ND, f − kf_S)   (6.2)

A sufficient condition for Σ_{k=−M}^{M} X(ND, f − kf_S) to converge as M → ∞ is the existence of k_o, α > 0, such that |X(ND, f)| ≤ k_o/|f|^{1+α} for f ∈ R.

Proof. The proof of this result is given in Appendix 1.

6.2.0.1 Example  Consider the function

x(t) = 1 for 0 ≤ t ≤ ND, and x(t) = 0 elsewhere

whose Fourier transform (see Theorem 2.33) is

X(ND, f) = (N/f_S) sinc(Nf/f_S) e^{−jπNf/f_S}

and which does not satisfy the requirement that there exists k_o, α > 0, such that
|X(ND, f)| ≤ k_o/|f|^{1+α}. However, for f = if_S, with i ∈ Z, the summation

lim_{M→∞} Σ_{k=−M}^{M} X(ND, f − kf_S)

converges and is equal to the ith term, N/f_S. Equation (6.2) is then easily proved, as both sides are equal to N. When f ≠ if_S, with i ∈ Z, it follows, after standard manipulation, that

f_S Σ_{k=−M}^{M} X(ND, f − kf_S) = N sinc(Nf/f_S) e^{−jπNf/f_S} [1 + 2(f²/f_S²) Σ_{k=1}^{M} 1/(f²/f_S² − k²)]   (6.3)

which clearly converges as M → ∞, provided f/f_S ∉ Z. For example, if f = f_S/4 and N = 2, it follows from the result (Gradshteyn, 1980 p. 8)

Σ_{k=1}^{∞} 1/[(1 − 4k)(1 + 4k)] = π/8 − 1/2

that

f_S Σ_{k=−∞}^{∞} X(2D, f_S/4 − kf_S) = −j   (6.4)

This result agrees with the Riemann sum for the case where N = 2, as

[x(0⁺)/2 + Σ_{p=1}^{N−1} x(pD) e^{−j2πpDf} + (x(ND⁻)/2) e^{−j2πNDf}]_{f=f_S/4} = 0.5 − j − 0.5 = −j   (6.5)

6.2.1 Power Spectral Density of Sampled Signal

Consider a signal x, as illustrated in Figure 6.1, which is piecewise smooth on [0, ND] and is sampled at a rate f_S = 1/D by a sampling signal S_Δ, defined according to

S_Δ(t) = Σ_{k=−∞}^{∞} δ_Δ(t − kD)   (6.6)
where δ_Δ is defined by the graph of S_Δ shown in Figure 6.2. On the interval [0, ND), the signal y_Δ, arising as a consequence of sampling the signal x, is defined
Figure 6.2 Sampling signal S_Δ: a periodic train of pulses δ_Δ, each of width Δ, height 1/Δ, and unit area.
according to

y_Δ(t) = x(t)S_Δ(t) = x(t)δ_Δ(t) + Σ_{k=1}^{N−1} x(t)δ_Δ(t − kD) + x(t)δ_Δ(t − ND) for t ∈ [0, ND), and y_Δ(t) = 0 for t ∉ [0, ND)   (6.7)
The Fourier transform and power spectral density of y_Δ, as Δ approaches zero, are specified in the following theorem.

Theorem 6.2 Fourier Transform and Power Spectral Density of a Sampled Signal  If x is piecewise smooth on [0, ND], is sampled at a rate f_S = 1/D, and is such that lim_{M→∞} Σ_{k=−M}^{M} X(ND, f − kf_S) converges for all f ∈ R, then with Y_Δ as the Fourier transform of y_Δ, it follows that

Y(ND, f) = lim_{Δ→0} Y_Δ(ND, f) = f_S Σ_{k=−∞}^{∞} X(ND, f − kf_S)   (6.8)

G_Y(ND, f) = lim_{Δ→0} G_{Y_Δ}(ND, f) = f_S² Σ_{k=−∞}^{∞} G_X(ND, f − kf_S) + (f_S²/ND) Σ_{k=−∞}^{∞} Σ_{n=−∞, n≠k}^{∞} X(ND, f − kf_S) X*(ND, f − nf_S)   (6.9)

Proof. The proof of this theorem is given in Appendix 2.

6.2.1.1 Notes  If it is the case that
|X(ND, f)| ≫ |X(ND, f − kf_S)|, k ≠ 0, for f ∈ (−f_S/2, f_S/2)

then

G_Y(ND, f) ≈ f_S² G_X(ND, f),  f ∈ (−f_S/2, f_S/2)   (6.10)
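Theorem 6.1 can be checked numerically for the rectangular-pulse example above (an illustrative sketch; the truncation limit M is an implementation choice, not a value from the text): at f = f_S/4 with N = 2 and D = 1, both the end-corrected Riemann sum on the left of Eq. (6.2) and the truncated alias sum f_S Σ_k X(ND, f − kf_S) should be close to −j.

```python
import numpy as np

# Numerical check of Theorem 6.1 for x(t) = 1 on [0, ND]:
# left side of Eq. (6.2) versus the folded sum of Fourier-transform
# aliases, at f = fS/4 with N = 2, D = 1. Both should be close to -j.
D, N = 1.0, 2
fS = 1.0 / D
f = fS / 4

# Left side: samples of x, with half-weight endpoint samples.
lhs = 0.5 + np.exp(-2j * np.pi * D * f) + 0.5 * np.exp(-2j * np.pi * N * D * f)

# Right side: fS * sum_k X(ND, f - k fS), truncated at |k| <= 20000, with
# X(ND, f) = (N / fS) * sinc(N f / fS) * exp(-j pi N f / fS).
def X(freq):
    return (N / fS) * np.sinc(N * freq / fS) * np.exp(-1j * np.pi * N * freq / fS)

k = np.arange(-20000, 20001)
rhs = fS * X(f - k * fS).sum()

print(lhs, rhs)   # both approximately -1j
```

The symmetric truncation is what makes the conditionally convergent alias sum settle quickly; the residual error decays like 1/M.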
Figure 6.3 Illustration of sampling relationships.
and sampling has produced a scaled version of the true power spectral density in the frequency interval [−f_S/2, f_S/2]. Figure 6.3 illustrates the relationship between the set of signals x_1, ..., x_N, ..., which are identical on arbitrarily small neighborhoods of the points 0⁺, D, ..., ND⁻, and the Fourier transform Y of the sampled signal. Clearly, sampling results in the Fourier transform and the power spectral density being repeated at integer multiples of the sampling frequency. To illustrate this, the power spectral density of a sampled 4 Hz sinusoid A sin(2πf_o t) is shown in Figure 6.4, where the sampling rate is 20 Hz and the
Figure 6.4 Power spectral density of a sampled 4 Hz sinusoid with unity amplitude. The sampling rate is 20 Hz and samples are from a 1 sec interval.
measurement interval is 1 sec. The power spectral density of such a sinusoid has been detailed in Section 3.2.3.3. References for sampling theory include Papoulis (1977 p. 160f), Champeney (1987 p. 162f), and Higgins (1996).

6.2.2 Power Spectral Density of Sampled Random Process

Consider a random process X that is characterized by an ensemble E_X of piecewise smooth signals on [0, ND],

E_X = {x: S_X × R → C}
(6.11)
where S_X ⊆ Z⁺ for the countable case and S_X ⊆ R for the uncountable case. Consider a specific signal x(γ, t) from E_X. Associated with this signal is an infinite set of sampled signals, defined according to

{y_Δ(γ, t) = S_Δ(t) x(γ, t): t ∈ [0, ND], Δ ∈ {Δ_i}}   (6.12)

where {Δ_i} is a sequence that converges to zero. The power spectral density associated with the limit of this sequence is given in Eq. (6.9), that is,

G_Y(γ, ND, f) = lim_{Δ→0} G_{Y_Δ}(γ, ND, f) = f_S² Σ_{k=−∞}^{∞} G_X(γ, ND, f − kf_S) + (f_S²/ND) Σ_{k=−∞}^{∞} Σ_{n=−∞, n≠k}^{∞} X(γ, ND, f − kf_S) X*(γ, ND, f − nf_S)   (6.13)

The power spectral density of the random process formed through sampling each signal in E_X is the weighted summation of the resulting individual power spectral densities, that is, for the countable case,

G_Y(ND, f) = Σ_γ p_γ G_Y(γ, ND, f) = f_S² Σ_γ p_γ Σ_{k=−∞}^{∞} G_X(γ, ND, f − kf_S) + (f_S²/ND) Σ_γ p_γ Σ_{k=−∞}^{∞} Σ_{n=−∞, n≠k}^{∞} X(γ, ND, f − kf_S) X*(γ, ND, f − nf_S)   (6.14)

where P[x(γ, t)] = P[γ] = p_γ. An analogous result holds for the uncountable case.
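The spectral replication described in this section can be illustrated numerically (an illustrative sketch, not from the text): sampling a unit-amplitude 4 Hz sinusoid at f_S = 20 Hz over a 1 sec interval and taking the discrete Fourier transform of the samples shows the spectral line at 4 Hz together with its image at f_S − 4 = 16 Hz, consistent with repetition at multiples of the sampling frequency, as in Figure 6.4.

```python
import numpy as np

# Sample a 4 Hz unit-amplitude sinusoid at fS = 20 Hz for 1 second
# and locate the spectral peaks of the sampled sequence.
fS, f0, T = 20, 4, 1.0
n = np.arange(int(fS * T))          # sample indices 0..19
x = np.sin(2 * np.pi * f0 * n / fS)

spectrum = np.abs(np.fft.fft(x))    # bin k corresponds to k Hz here
peaks = np.flatnonzero(spectrum > 0.9 * spectrum.max())

print(peaks)   # [ 4 16]: the 4 Hz line and its image at fS - 4 Hz
```

Because exactly four cycles fit in the record, there is no leakage and the two lines stand alone; the 16 Hz bin is the replica at f_S − f_o.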
6.3 QUADRATURE AMPLITUDE MODULATION

One of the most popular and important communication modulation formats is quadrature amplitude modulation (QAM). A QAM signal x is defined according to

x(t) = i(t) cos(2πf_c t) − q(t) sin(2πf_c t) = u(t) − v(t)   (6.15)

where i and q, respectively, are denoted the ''inphase'' and ''quadrature'' signals, f_c is the carrier frequency, u(t) = i(t) cos(2πf_c t), and v(t) = q(t) sin(2πf_c t). In the general case, the signals i and q are specific signals from the ensembles of two different random processes I and Q. Consider the case where the random process I is defined by the ensemble E_I, according to

E_I = {i_k: R → C, k ∈ Z⁺, P[i_k] = p_k}   (6.16)

A corresponding random process U is defined by the ensemble E_U:

E_U = {u_k: R → C, u_k(t) = i_k(t) cos(2πf_c t), k ∈ Z⁺, P[u_k] = p_k}   (6.17)

Similarly, the random processes Q and V can be defined by the ensembles E_Q and E_V:

E_Q = {q_l: R → C, l ∈ Z⁺, P[q_l] = p_l}   (6.18)

E_V = {v_l: R → C, v_l(t) = q_l(t) sin(2πf_c t), l ∈ Z⁺, P[v_l] = p_l}   (6.19)

The random process X = U − V can then be defined, in a manner consistent with Eq. (6.15), by the ensemble E_X:

E_X = {x_kl: R → C, x_kl(t) = i_k(t) cos(2πf_c t) − q_l(t) sin(2πf_c t), k, l ∈ Z⁺, P[x_kl] = P[i_k, q_l] = p_kl}   (6.20)

For practical communication systems, the energy associated with all signals is finite. Thus, according to Theorem 3.6, the power spectral densities of the modulating random processes I and Q, denoted G_I and G_Q, are finite for all frequencies when evaluated over the finite interval [0, T]. The assumption of finite energy is implicit in the following theorem and subsequent results.
Theorem 6.3 Power Spectral Density of U, V, and X  The power spectral densities of U, V, and X on the interval [0, T] are

G_U(T, f) = [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + (1/2T) Re{Σ_k p_k I_k(T, f − f_c) I_k*(T, f + f_c)}   (6.21)

G_V(T, f) = [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4 − (1/2T) Re{Σ_l p_l Q_l(T, f − f_c) Q_l*(T, f + f_c)}   (6.22)

G_X(T, f) = [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + (1/2T) Re{Σ_k p_k I_k(T, f − f_c) I_k*(T, f + f_c)}
          + [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4 − (1/2T) Re{Σ_l p_l Q_l(T, f − f_c) Q_l*(T, f + f_c)}
          + Im[G_IQ(T, f − f_c)]/2 − Im[G_IQ(T, f + f_c)]/2
          − (1/2T) Im{Σ_k Σ_l p_kl I_k(T, f − f_c) Q_l*(T, f + f_c)} + (1/2T) Im{Σ_k Σ_l p_kl I_k(T, f + f_c) Q_l*(T, f − f_c)}   (6.23)

where I_k and Q_l are, respectively, the Fourier transforms of i_k and q_l.

Proof. The proof of this theorem is given in Appendix 3.

6.3.1 Case 1: Bandlimited Signals

A common practical case in communication systems is where the power spectral densities of the inphase and quadrature components are only of significant level in the frequency range −W ≤ f ≤ W, where W ≪ f_c, as
Figure 6.5 Forms for G_I(T, f) and G_Q(T, f) consistent with the bandlimited case.
illustrated in Figure 6.5. A general condition for the simplification that follows is for the Fourier transforms of the inphase and quadrature signals to have negligible magnitude for frequencies greater than f_c, or less than −f_c. For the case where I, Q, and the carrier frequency f_c are such that

(2/T) |Re{Σ_k p_k I_k(T, f − f_c) I_k*(T, f + f_c)}| ≪ G_I(T, f − f_c) + G_I(T, f + f_c)   (6.24)

(2/T) |Re{Σ_l p_l Q_l(T, f − f_c) Q_l*(T, f + f_c)}| ≪ G_Q(T, f − f_c) + G_Q(T, f + f_c)   (6.25)

(1/T) |Im{Σ_k Σ_l p_kl I_k(T, f − f_c) Q_l*(T, f + f_c)}| + (1/T) |Im{Σ_k Σ_l p_kl I_k(T, f + f_c) Q_l*(T, f − f_c)}| ≪ G_X(T, f)   (6.26)

then the following approximation is valid:

G_X(T, f) ≈ [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4 + Im[G_IQ(T, f − f_c)]/2 − Im[G_IQ(T, f + f_c)]/2   (6.27)

This approximate expression can be written very simply if the definition of an equivalent low pass process, as discussed next, is used.

Definition: Equivalent Low Pass Random Process  An equivalent low pass signal w, defined according to (Proakis, 1995 p. 155)

w(t) = i(t) + jq(t)   (6.28)
where i and q are real signals, can be associated with a quadrature carrier signal

x(t) = i(t) cos(2πf_c t) − q(t) sin(2πf_c t)   (6.29)

as

x(t) = Re[w(t) e^{j2πf_c t}]   (6.30)

With the quadrature carrier random process X, defined by the ensemble E_X as per Eq. (6.20), the equivalent low pass random process W can be defined by the ensemble E_W, according to

E_W = {w_kl: R → C, w_kl(t) = i_k(t) + jq_l(t), k, l ∈ Z⁺, P[w_kl] = P[i_k, q_l] = p_kl}   (6.31)

The power spectral density of W is specified in the following theorem.

Theorem 6.4 Power Spectral Density of Equivalent Low Pass Random Process  If the power spectral densities of I and Q, denoted G_I and G_Q, can be validly defined, then the power spectral density of W, on the interval [0, T], is

G_W(T, f) = G_I(T, f) + G_Q(T, f) + 2Im[G_IQ(T, f)]
G_W(T, −f) = G_I(T, f) + G_Q(T, f) − 2Im[G_IQ(T, f)]   (6.32)

Proof. The proof of the first result follows directly from Theorem 4.5, and by noting that Re[−jG_IQ(T, f)] = Im[G_IQ(T, f)]. The proof of the second result follows from the first result using the fact that for real signals X(T, −f) = X*(T, f), which implies G_X(T, −f) = G_X(T, f) and G_IQ(T, −f) = G_IQ*(T, f).

6.3.1.1 Notes  With such a definition, it follows for the case of real bandlimited random processes that the power spectral density of the QAM random process, as given in Eq. (6.27), can be written as

G_X(T, f) ≈ [G_W(T, f − f_c) + G_W(T, −f − f_c)]/4   (6.33)

This simple form is one reason for the popularity of equivalent low pass random processes.

6.3.2 Case 2: Independent Inphase and Quadrature Processes

For the case where the random processes I and Q are independent, that is, p_kl = p_k p_l, the result from Section 4.5.2 for independent random processes,
namely, G_IQ(T, f) = Ī(T, f) Q̄*(T, f)/T, where Ī and Q̄ are the respective averaged Fourier transforms of the signals defined by the random processes I and Q, yields

G_X(T, f) = [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + (1/2T) Re{Σ_k p_k I_k(T, f − f_c) I_k*(T, f + f_c)}
          + [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4 − (1/2T) Re{Σ_l p_l Q_l(T, f − f_c) Q_l*(T, f + f_c)}
          + Im[Ī(T, f − f_c) Q̄*(T, f − f_c)]/2T − Im[Ī(T, f − f_c) Q̄*(T, f + f_c)]/2T
          − Im[Ī(T, f + f_c) Q̄*(T, f + f_c)]/2T + Im[Ī(T, f + f_c) Q̄*(T, f − f_c)]/2T   (6.34)

For the independent and bandlimited case, the following approximation is valid:

G_X(T, f) ≈ [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4
          + Im[Ī(T, f − f_c) Q̄*(T, f − f_c)]/2T − Im[Ī(T, f + f_c) Q̄*(T, f + f_c)]/2T   (6.35)

Further, if I and Q are identical random processes, then Q̄(T, f) equals Ī(T, f), and hence, Ī(T, f − f_c) Q̄*(T, f − f_c) = |Ī(T, f − f_c)|², which is real. Thus, for the identical, independent, and bandlimited case, it follows that

G_X(T, f) ≈ [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4
          = [G_I(T, f − f_c) + G_I(T, f + f_c)]/2 = [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/2   (6.36)

For the case where the random processes I and Q have constant means μ_I
and μ_Q on the interval [0, T], it follows from Section 4.5.2 that

G_IQ(T, f) = Ī(T, f) Q̄*(T, f)/T = μ_I μ_Q* T sinc²(fT)   (6.37)

Hence, if the signals are real with a constant mean, then the imaginary part of Ī(T, f) Q̄*(T, f) is zero, and the following result holds when the signals are not necessarily bandlimited:

G_X(T, f) = [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + (1/2T) Re{Σ_k p_k I_k(T, f − f_c) I_k*(T, f + f_c)}
          + [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4 − (1/2T) Re{Σ_l p_l Q_l(T, f − f_c) Q_l*(T, f + f_c)}
          − Im[Ī(T, f − f_c) Q̄*(T, f + f_c)]/2T + Im[Ī(T, f + f_c) Q̄*(T, f − f_c)]/2T   (6.38)

As shown in Appendix 4, for the independent case with real constant means, the last two terms in Eq. (6.38) can be neglected as T → ∞ to yield

G_X(f) = [G_I(f − f_c) + G_I(f + f_c)]/4 + [G_Q(f − f_c) + G_Q(f + f_c)]/4
       + lim_{T→∞} (1/2T) Re{Σ_k p_k I_k(T, f − f_c) I_k*(T, f + f_c)}
       − lim_{T→∞} (1/2T) Re{Σ_l p_l Q_l(T, f − f_c) Q_l*(T, f + f_c)}   (6.39)

With the further assumption of bandlimited signals, it follows that

G_X(T, f) ≈ (1/4)[G_I(T, f − f_c) + G_I(T, f + f_c) + G_Q(T, f − f_c) + G_Q(T, f + f_c)] = (1/4)[G_W(T, f − f_c) + G_W(T, −f − f_c)]   (6.40)

G_X(f) ≈ (1/4)[G_I(f − f_c) + G_I(f + f_c) + G_Q(f − f_c) + G_Q(f + f_c)] = (1/4)[G_W(f − f_c) + G_W(−f − f_c)]   (6.41)
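The equivalent low pass representation underlying these results can be checked directly (an illustrative sketch with arbitrarily chosen test signals, not from the text): for any real i and q, Re[w(t) e^{j2πf_c t}] with w = i + jq reproduces the QAM signal of Eq. (6.15).

```python
import numpy as np

# Check that x(t) = i(t)cos(2 pi fc t) - q(t)sin(2 pi fc t) equals
# Re[w(t) exp(j 2 pi fc t)] with w(t) = i(t) + j q(t).
# The signals i and q below are arbitrary choices for illustration.
fc = 10.0
t = np.linspace(0.0, 1.0, 2001)
i = np.cos(2 * np.pi * 1.5 * t)          # arbitrary inphase signal
q = np.sin(2 * np.pi * 0.7 * t) ** 2     # arbitrary quadrature signal

x_direct = i * np.cos(2 * np.pi * fc * t) - q * np.sin(2 * np.pi * fc * t)
w = i + 1j * q                            # equivalent low pass signal
x_lowpass = np.real(w * np.exp(2j * np.pi * fc * t))

print(np.max(np.abs(x_direct - x_lowpass)))   # essentially zero
```

The identity holds pointwise, which is why spectral statements about X reduce to statements about the low pass process W shifted to ±f_c.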
6.3.3 Case 3: Independent and Zero Mean Case

When the inphase and quadrature random processes are independent, and one or both of them have a zero mean, it follows that

G_X(T, f) = [G_I(T, f − f_c) + G_I(T, f + f_c)]/4 + (1/2T) Re{Σ_k p_k I_k(T, f − f_c) I_k*(T, f + f_c)}
          + [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/4 − (1/2T) Re{Σ_l p_l Q_l(T, f − f_c) Q_l*(T, f + f_c)}   (6.42)

For the case of bandlimited signals,

G_X(T, f) ≈ (1/4)[G_I(T, f − f_c) + G_I(T, f + f_c) + G_Q(T, f − f_c) + G_Q(T, f + f_c)] = (1/4)[G_W(T, f − f_c) + G_W(T, −f − f_c)]   (6.43)

G_X(f) ≈ (1/4)[G_I(f − f_c) + G_I(f + f_c) + G_Q(f − f_c) + G_Q(f + f_c)] = (1/4)[G_W(f − f_c) + G_W(−f − f_c)]   (6.44)
6.3.4 Example

For communication systems, I and Q are usually signaling random processes with power spectral densities given by Theorem 5.1. For example, consider the quadrature amplitude modulation random process X, where I and Q are independent, identical RZ signaling random processes with power spectral densities as per Eqs. (5.42) and (5.43), and as shown in Figure 5.3. With f_c appropriately chosen, the bandlimited approximation, as per Eq. (6.36), is valid, that is,

G_X(T, f) ≈ [G_I(T, f − f_c) + G_I(T, f + f_c)]/2 = [G_Q(T, f − f_c) + G_Q(T, f + f_c)]/2   (6.45)

This power spectral density is shown in Figure 6.6, for T = 256, for the case where f_c = 10.
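A Monte Carlo sketch of this example can be run as follows (all parameters, the pulse shape, and the estimator are illustrative choices, not the exact construction of Eqs. (5.42) and (5.43)): independent ±1-amplitude RZ-type inphase and quadrature processes are modulated onto a 10 Hz carrier, and the averaged periodogram should concentrate around the carrier frequency, as in Figure 6.6.

```python
import numpy as np

# Averaged periodogram of a simulated QAM signal with independent,
# identical RZ-type inphase and quadrature processes: random +/-1
# amplitudes, pulse on the first half of each 1 sec symbol interval.
# Parameters (rates, carrier, trial count) are illustrative.
rng = np.random.default_rng(7)
fc, T, fs_sim, n_sym, trials = 10.0, 32.0, 200, 32, 100
t = np.arange(0, T, 1.0 / fs_sim)

def rz_process():
    amps = rng.choice([-1.0, 1.0], n_sym)
    sig = np.zeros_like(t)
    for m, a in enumerate(amps):
        sig[(t >= m) & (t < m + 0.5)] = a   # RZ: pulse on first half-symbol
    return sig

psd = np.zeros(t.size // 2)
for _ in range(trials):
    i, q = rz_process(), rz_process()
    x = i * np.cos(2 * np.pi * fc * t) - q * np.sin(2 * np.pi * fc * t)
    X = np.fft.rfft(x)[: t.size // 2]
    psd += np.abs(X) ** 2 / (T * fs_sim**2)
psd /= trials

freqs = np.fft.rfftfreq(t.size, 1.0 / fs_sim)[: t.size // 2]
band = (freqs > fc - 5) & (freqs < fc + 5)
print(psd[band].sum() / psd.sum())   # the bulk of the power lies near fc
```

With zero-mean amplitudes there is no discrete carrier line; the continuous sinc²-shaped baseband spectrum simply appears shifted to ±f_c, consistent with Eq. (6.45).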
Figure 6.6 Power spectral density of a QAM signal where both the inphase and quadrature random processes are RZ random processes with power spectral densities as shown in Figure 5.3.
6.4 RANDOM WALKS

The quintessential nonstationary random process is the random walk, and such a random process has been extensively studied (for example, see Feller, 1957 ch. 3). The limit of a random walk, in terms of an increasingly small step size and step interval, yields the Wiener process or Brownian motion (Grimmett, 1992 p. 342; Gillespie, 1996). A random walk is clearly nonstationary; however, this does not present a problem for the power spectral density evaluated on an interval [0, T], because it has its basis in the average power on this interval. The average power, and hence the power spectral density, will change with the interval length, and appropriate care must be taken when interpreting the power spectral density. The model used for a random walk leads to a model for a bounded random walk, which has a signaling random process form. Such a process has constant average power after an initial transient period. Bounded random walks provide a basis for synthesizing a 1/f power spectral density form. A synthesis is given for this form in the next section, and such a synthesis is consistent with a simple model for 1/f noise.

6.4.1 Modeling of a Random Walk

Definition: Standard Discrete Random Walk  A random walk is a signal that exhibits a step jump every D sec, with a step size randomly chosen
with equal probability from a set S_Γ. A random walk random process X can be defined on [0, T] by the ensemble

E_X = {x(γ_1, ..., γ_N, t) = Σ_{i=1}^{N} γ_i u(t − iD), γ_i ∈ S_Γ}   (6.47)

where P[γ ∈ (γ_o, γ_o + dγ)] = f_Γ(γ_o) dγ. For independent step sizes, it is the case that

P[x(γ_1, ..., γ_N, t): γ_i ∈ I_i] = Π_{i=1}^{N} P[γ_i ∈ I_i] = Π_{i=1}^{N} ∫_{I_i} f_Γ(γ) dγ   (6.48)

Two random walks from the ensemble E_X, for the case of equally probable step sizes S_Γ = {±1} and D = 1, are shown in Figure 6.8 for t ∈ [0, 100].

6.4.2 Power Spectral Density of a Random Walk

By defining a random process X_i on the interval [0, T] by the ensemble

E_{X_i} = {x(γ_i, t) = γ_i u(t − iD), γ_i ∈ S_Γ}   (6.49)

it follows that the random process X, for [0, T], is the sum of the N random
Figure 6.7 Possible waveforms associated with the first (left graph) and second (right graph) steps of a random walk.
Figure 6.8 Two random walks on the interval [0, 100], for the case where S_Γ = {±1} and D = 1.
processes X_1, ..., X_N. The power spectral density of X_i is given by

G_{X_i}(T, f) = σ_Γ² |U_i(T, f)|²/T,   U_i(T, f) = (e^{−j2πfiD} − e^{−j2πfT})/(j2πf)   (6.50)

where U_i(T, f) is the Fourier transform of u(t − iD) evaluated over the interval [0, T], σ_Γ² is the variance of γ, and γ has a zero mean. As the random processes X_1, ..., X_N are independent and have zero mean, it follows from Theorem 4.6 that the power spectral density of X is the sum of the individual power spectral densities according to

G_X(T, f) = Σ_{i=1}^{N} G_{X_i}(T, f)   (6.51)

Evaluation of the appropriate Fourier transforms yields

G_X(T, f) = (σ_Γ²/(π²f²T)) Σ_{i=1}^{N} sin²(πifD),   T = (N + 1)D
          = (σ_Γ² r/(2π²f²)) (1 − 1/(2N + 2)) [1 − sin((2N + 1)πfD)/((2N + 1) sin(πfD))]   (6.52)

where r = 1/D, and the last result follows from writing sin² in terms of complex exponentials and using standard results for geometric series (Gradshteyn, 1980 p. 30).
By integrating the power spectral density, it follows that the average power in the random process is

P_av = ∫_{−∞}^{∞} G_X(T, f) df = σ_Γ²N/2 = (σ_Γ²/2)(T/D − 1)   (6.53)

Clearly, for T ≫ D the average power increases linearly with T; that is, the rms value of a random walk increases according to √T. The power spectral density is shown in Figure 6.9 for the case of D = r = 1, P_av = 1, and N = 10, which is consistent with σ_Γ² = 0.2. For the case where f ≪ 1/T and N ≫ 1, the power spectral density approaches the constant value

G_X(T, f) ≈ σ_Γ²N²D/3,   f ≪ 1/T   (6.54)

The term

1 − sin((2N + 1)πfD)/((2N + 1) sin(πfD))

in Eq. (6.52) has the form shown in Figure 6.10, and hence, for f ≫ 1/T and N ≫ 1 the power spectral density can be approximated as follows:

G_X(T, f) ≈ σ_Γ² r/(2π²f²) for f ≫ 1/T, f ≠ kr, k ∈ Z;  G_X(T, f) ≈ 0 for f = kr, k ∈ Z   (6.55)
Figure 6.9 Power spectral density of a 10-step random walk with unity power and D = r = 1.
Figure 6.10 Plot of 1 − sin((2N + 1)πfD)/((2N + 1) sin(πfD)) as a function of frequency f for the case of r = D = 1 and N = 10. The ripple around the level of 1 decreases as N increases.
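The linear growth of average power with interval length, Eq. (6.53), can be checked by simulation (an illustrative sketch, not from the text): for unit-variance steps, the time-averaged power of a walk over [0, T], averaged over many realizations, should be close to σ_Γ²N/2.

```python
import numpy as np

# Monte Carlo check of Eq. (6.53): for a random walk with N unit-variance
# steps taken every D = 1 sec on [0, T], T = (N + 1)D, the ensemble
# average of the time-averaged power is sigma^2 * N / 2.
rng = np.random.default_rng(0)
N, D, trials = 10, 1.0, 20000
T = (N + 1) * D

acc = 0.0
for _ in range(trials):
    steps = rng.choice([-1.0, 1.0], N)      # unit-variance step sizes
    walk = np.cumsum(steps)                 # level after each step
    # x(t) is 0 on [0, D) and equals walk[k-1] on [kD, (k+1)D), k = 1..N
    acc += (walk ** 2).sum() * D / T
estimate = acc / trials

print(estimate)   # close to N / 2 = 5
```

Since E[S_k²] = kσ_Γ² for the level S_k after k steps, summing over the N holding intervals and dividing by T = (N + 1)D recovers σ_Γ²N/2 exactly.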
6.4.3 Bounded Random Walk

One model for a random walk X bounded on the interval [0, T], where T = (N + 1)D, is defined by the ensemble

E_X = {x(γ_1, ..., γ_N, t) = Σ_{i=1}^{N} γ_i φ(t − iD), γ_i ∈ S_Γ}   (6.56)

where φ has the form shown in Figure 6.11. For the case where T_b = bD, b ∈ Z⁺, the random walk is correlated for T_b sec and, if γ_i ∈ {±1}, it is bounded above and below by the levels ±b.
Figure 6.11 Pulse function φ for the bounded random walk: a rectangular pulse of duration T_b = bD, with Fourier transform Φ(f) = T_b sinc(fT_b) e^{−jπfT_b}.
Figure 6.12 A bounded random walk with bounds of ±10 (b = 10), D = 1, and S_Γ = {±1}.
For T ≫ T_b, the power spectral density is given by

G_X(T, f) = (σ_Γ² r/(1 + 1/N)) |Φ(f)|² = (σ_Γ² b T_b/(1 + 1/N)) sinc²(fT_b),   T_b ≪ T   (6.57)

where r = 1/D, Φ is the Fourier transform of φ, and the factor 1 + 1/N arises from the fact that N/T = r/(1 + 1/N) for the case where T = (N + 1)D. The assumption made in this expression is that the energy associated with the windowing effect of the interval [0, T] on pulse functions is negligible. This is the case for T_b ≪ T or, equivalently, b ≪ N.

The average power in such a random process, obtained by integrating the power spectral density, and noting that the integral of sin²(px)/x² over the interval (−∞, ∞) equals πp (Spiegel, 1968 p. 96), is

P_av = ∫_{−∞}^{∞} G_X(T, f) df = σ_Γ² r T_b/(1 + 1/N) = σ_Γ² b/(1 + 1/N)   (6.58)

With this result, the power spectral density can be written as

G_X(T, f) = P_av T_b sinc²(fT_b),   T_b ≪ T   (6.59)

The power spectral density is shown in Figure 6.13 for the case of D = r = 1, P_av = 1, b = 10, and N = 100.
Figure 6.13 Power spectral density of a 100-step bounded random walk with unity power, correlation time of 10 steps, and D = r = 1.
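A quick numerical check (a sketch, not from the text) that Eq. (6.59) is consistent with Eq. (6.58): the spectral shape T_b sinc²(fT_b) has unit area, so G_X(T, f) = P_av T_b sinc²(fT_b) integrates to the average power P_av.

```python
import numpy as np

# Verify numerically that T_b * sinc^2(f * T_b) has unit area, so the
# bounded-random-walk spectrum P_av * T_b * sinc^2(f T_b) integrates to
# the average power P_av. Note np.sinc(x) = sin(pi x)/(pi x).
Tb = 10.0
df = 1e-3
f = np.arange(-50.0, 50.0, df)
area = (Tb * np.sinc(f * Tb) ** 2).sum() * df

print(area)   # close to 1
```

The small shortfall from 1 is the tail of the sinc² function beyond the finite integration range.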
6.5 1/f NOISE

Random processes that exhibit a power spectral density of the form 1/f^α over a finite frequency range, where α is close to unity, are ubiquitous; for example, such a form has been associated with economic data, traffic flow, annual rainfall, and noise in resistors, metals, and semiconductor devices [see, for example, Keshner (1982), Buckingham (1983), and Stephany (1998)]. Such noise, denoted 1/f or flicker noise, has been the subject of thorough investigation and modeling. Research to explain 1/f noise has been along two lines. First, research has sought to ascertain the physical attributes and origin of 1/f noise in a given entity (Buckingham, 1983; Hooge, 1981; Bell, 1980; Stephany, 1998). Second, research has been conducted to propose models, that is, random processes, that exhibit 1/f noise [see, for example, Hooge (1997), Kaulakys (1998), and Howard (2000)]. Many modeling approaches have been used, including the use of random walks [see, for example, Jantsch (1987) and Tunaley (1976)]. In the following section, a specific model for 1/f noise, based on bounded random walks, is demonstrated.

6.5.1 Synthesis of a 1/f Power Spectral Density Using Bounded Random Walks

A 1/f power spectral density form can be synthesized over a finite frequency range from a summation of distinct power spectral densities. For independent random processes with zero means, the goal of synthesis is to find N practical
random processes, with respective power spectral densities G_1, ..., G_N, such that the 1/f form is approximated over a set frequency range [f_L, f_U], that is,

Σ_{i=1}^{N} G_i(T, f) ≈ k/f,   f_L ≤ f ≤ f_U

or

∫_{f_L}^{f_U} |Σ_{i=1}^{N} G_i(T, f) − k/f| df ≪ ∫_{f_L}^{f_U} (k/f) df   (6.60)

To demonstrate such a synthesis, consider the power spectral density of the summation of N + 1 independent bounded random walks,

G(T, f) = Σ_{i=0}^{N} G_i(T, f)   (6.61)

where G_i(T, f) is the power spectral density of a bounded random walk, as given by Eq. (6.59), with T_{b_i} = bD_i and D_i = 2^i D. For equal powers in all random processes, with unity total power, D = 1, and b = 10, the power spectral density of G is shown in Figure 6.14, along with the ideal 1/f form. The step durations in the individual random processes are 1, 2, 4, 8, ..., 1024 sec. The summation of bounded random walks clearly approximates the 1/f form over a restricted frequency range. A smoother approximation to the 1/f form can be achieved through a distribution of step durations or step rates, such that the average power in random processes with step durations between D and 2D is the same
Figure 6.14 Power spectral density from the summation of 11 independent bounded random walks with step duration of 1, 2, 4, 8, . . . , 1024 sec.
as that for random processes with step durations between 2D and 4D, and so on. The lower limit of the 1/f form decreases as the step duration and interval duration increase. This synthesis shows that a first order model consistent with 1/f noise is a sum of equal power bounded random walks, with step durations that form a geometric series with a ratio of 2. Such a description provides a simple model and explanation for 1/f noise.
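The synthesis described above can be sketched numerically (illustrative parameters, not from the text): summing the sinc²-shaped spectra of Eq. (6.59) for eleven equal-power bounded random walks with T_{b,i} = 10·2^i gives a combined spectrum that falls off approximately as 1/f in the mid-band, so doubling the frequency roughly halves the spectral density.

```python
import numpy as np

# Sum of bounded-random-walk spectra P_i * T_bi * sinc^2(f * T_bi) for
# 11 equal-power processes (total power 1) with T_bi = 10 * 2**i.
# In the mid-band the sum approximates a 1/f form.
Tb = 10.0 * 2.0 ** np.arange(11)

def G(f):
    return np.sum((1.0 / 11.0) * Tb * np.sinc(f * Tb) ** 2)

f0 = 0.003                      # a mid-band frequency
ratio = G(f0) / G(2 * f0)       # approximately 2 for a 1/f spectrum

print(ratio)
```

The ratio is close to, but not exactly, 2 because the octave-spaced terms produce ripple about the ideal 1/f line, the ripple diminishing toward the middle of the synthesized band.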
APPENDIX 1: PROOF OF THEOREM 6.1

If x is piecewise smooth on [0, ND], then X(ND, f) is finite for all f. Further, if there exists k_o, α > 0, such that |X(ND, f)| ≤ k_o/|f|^{1+α} for f ∈ R, then, from the integral test (Knopp, 1956 pp. 64-65), it follows that

lim_{M→∞} Σ_{k=−M}^{M} X(ND, f − kf_S)   (6.62)

converges for all f and is bounded above. This condition is a sufficient condition. In general, if this summation converges, and this is assumed below, it is valid to define Y according to

Y(ND, f) = f_S Σ_{k=−∞}^{∞} X(ND, f − kf_S),   f ∈ R, f_S ∈ R⁺   (6.63)

Clearly, Y is periodic with respect to f with period f_S. Hence, on any interval of the form [f_x, f_x + f_S], where f_x ∈ R, Y can be written as an exponential Fourier series, according to

Y(ND, f) = Σ_{n=−∞}^{∞} c_n e^{j2πnDf}   (6.64)

where D = 1/f_S and the nth coefficient c_n is given by

c_n = (1/f_S) ∫_{f_x}^{f_x+f_S} f_S Σ_{k=−∞}^{∞} X(ND, f − kf_S) e^{−j2πnDf} df = ∫_{f_x}^{f_x+f_S} lim_{M→∞} g(M, f) df   (6.65)

Here,

g(M, f) = Σ_{k=−M}^{M} X(ND, f − kf_S) e^{−j2πnDf}   (6.66)
As it has been assumed that lim_{M→∞} g(M, f) is finite for f ∈ [f_x, f_x + f_S], the interchange of limit and integral operations in this equation is valid. Thus,

c_n = Σ_{k=−∞}^{∞} ∫_{f_x}^{f_x+f_S} X(ND, f − kf_S) e^{−j2πnDf} df   (6.67)

A change of variable, λ = f − kf_S, yields

c_n = Σ_{k=−∞}^{∞} e^{−j2πnk} ∫_{f_x−kf_S}^{f_x−(k−1)f_S} X(ND, λ) e^{−j2πnDλ} dλ   (6.68)

Since the exponential term outside of the integral is unity for all values of the index k, and x is piecewise smooth, it follows from Theorem 2.30 that

c_n = ∫_{−∞}^{∞} X(ND, λ) e^{j2π(−nD)λ} dλ = 0 for −nD ∉ [0, ND]; x(0⁺)/2 for n = 0; [x(−nD⁻) + x(−nD⁺)]/2 for −nD ∈ (0, ND); x(ND⁻)/2 for −nD = ND   (6.69)

Thus,

Y(ND, f) = x(0⁺)/2 + Σ_{p=1}^{N−1} [(x(pD⁻) + x(pD⁺))/2] e^{−j2πpDf} + (x(ND⁻)/2) e^{−j2πNDf}   (6.70)

which is the required result.
APPENDIX 2: PROOF OF THEOREM 6.2

The Fourier transform of y_Δ, as defined by Eq. (6.7), is

Y_Δ(ND, f) = (1/Δ) ∫_0^{Δ/2} x(t) e^{−j2πft} dt + (1/Δ) Σ_{k=1}^{N−1} ∫_{kD−Δ/2}^{kD+Δ/2} x(t) e^{−j2πft} dt + (1/Δ) ∫_{ND−Δ/2}^{ND} x(t) e^{−j2πft} dt   (6.71)

For any ε > 0 and for any frequency range [−f_z, f_z], there will exist a Δ > 0, such that over an interval of measure Δ, centered at kD, both x(t) and
202
POWER SPECTRAL DENSITY OF STANDARD RANDOM PROCESSES — PART 2
e\HLDR are such that 9 x(kD\) 9 x(t)
kD 9 /2 t kD
9 x(t) 9 x(kD>)
kD t kD ; /2
9 e\HLDR 9 e\HLDI"
kD 9 /2 t kD ; /2
(6.72)
These bounds are guaranteed by piecewise smoothness. Hence, for a frequency range [9 f , f ], there will exist a , such that Y (ND, f ) can be approximated X X by x(0>) ,\ x(kD\) ; x(kD>) x(ND\)e\HLD," ; e\HLDI" ; 2 2 2 I
(6.73)
with an arbitrarily small error. Using the results from Theorem 6.1, it follows that Y (ND, f ) : lim Y (ND, f ) : f X(ND, f 9 kf ) 1 1 I\
(6.74)
where f : 1/D. Further, since lim Y (ND, f ) is bounded for f + R, it follows 1 that Y (ND, f ) : lim Y (ND, f ) is finite for all frequencies, and hence,
f Y (ND, f ) X(ND, f 9 kf ) : 1 G (ND, f ) : lim 1 7 ND ND I\
(6.75)
Expanding the summation within the modulus sign, yields the required result, namely, G (ND, f ) : f G (ND, f 9 kf ) 1 7 6 1 I\ f ; 1 X(ND, f 9 kf )X*(ND, f 9 nf ) 1 1 ND I\ L\ L$I
(6.76)
APPENDIX 3: PROOF OF THEOREM 6.3

A.3.1 Power Spectral Density of U and V

By definition, the power spectral densities of I and U on the interval [0, T] are, respectively,

G_I(T, f) = \sum_i p_i \frac{|I_i(T, f)|^2}{T},    G_U(T, f) = \sum_i p_i \frac{|U_i(T, f)|^2}{T}    (6.77)

where

I_i(T, f) = \int_0^T i_i(t)\, e^{-j2\pi ft}\, dt,    U_i(T, f) = \int_0^T i_i(t)\cos(2\pi f_c t)\, e^{-j2\pi ft}\, dt = \frac{I_i(T, f - f_c) + I_i(T, f + f_c)}{2}    (6.78)

Here, the relationship \cos(\theta) = 0.5[e^{j\theta} + e^{-j\theta}] has been used to obtain the result for U_i. It then follows that

G_U(T, f) = \sum_i p_i \frac{|I_i(T, f - f_c) + I_i(T, f + f_c)|^2}{4T} = \frac{G_I(T, f - f_c) + G_I(T, f + f_c)}{4} + \frac{1}{2T}\,\mathrm{Re} \sum_i p_i\, I_i(T, f - f_c)\, I_i^*(T, f + f_c)    (6.79)

Similarly, since v_l(t) = q_l(t)\sin(2\pi f_c t), the result \sin(\theta) = 0.5j[-e^{j\theta} + e^{-j\theta}] implies that

V_l(T, f) = \frac{-jQ_l(T, f - f_c) + jQ_l(T, f + f_c)}{2}    (6.80)

where Q_l and V_l are, respectively, the Fourier transforms of q_l and v_l evaluated on the interval [0, T]. It then follows that

G_V(T, f) = \frac{G_Q(T, f - f_c) + G_Q(T, f + f_c)}{4} - \frac{1}{2T}\,\mathrm{Re} \sum_l p_l\, Q_l(T, f - f_c)\, Q_l^*(T, f + f_c)    (6.81)
A.3.2 Power Spectral Density of Quadrature Amplitude Modulation Signal

Since X = U - V, it follows from Theorem 4.5 that

G_X(T, f) = G_U(T, f) + G_V(T, f) - 2\,\mathrm{Re}[G_{UV}(T, f)]    (6.82)
where G_U and G_V are given in Eqs. (6.79) and (6.81) and

G_{UV}(T, f) = \frac{1}{T} \sum_i \sum_l p_{il}\, U_i(T, f)\, V_l^*(T, f) = \frac{1}{4T} \sum_i \sum_l p_{il}\, [I_i(T, f - f_c) + I_i(T, f + f_c)]\,[jQ_l^*(T, f - f_c) - jQ_l^*(T, f + f_c)]    (6.83)

With the result

\frac{1}{T} \sum_i \sum_l p_{il}\, I_i(T, f - f_c)\, jQ_l^*(T, f - f_c) = jG_{IQ}(T, f - f_c)    (6.84)

it then follows that

G_X(T, f) = G_U(T, f) + G_V(T, f) - \frac{1}{2}\,\mathrm{Re}[jG_{IQ}(T, f - f_c) - jG_{IQ}(T, f + f_c)]
 - \frac{1}{2T}\,\mathrm{Re}\left[-j \sum_i \sum_l p_{il}\, I_i(T, f - f_c)\, Q_l^*(T, f + f_c)\right]
 - \frac{1}{2T}\,\mathrm{Re}\left[j \sum_i \sum_l p_{il}\, I_i(T, f + f_c)\, Q_l^*(T, f - f_c)\right]    (6.85)
As \mathrm{Re}[jG_{IQ}] = -\mathrm{Im}[G_{IQ}], the required result follows.

APPENDIX 4: PROOF OF EQUATION 6.39

For the case where the random processes I and Q have constant means on the interval [0, T], that is, \mu_i(T, t) = \mu_i and \mu_q(T, t) = \mu_q, it follows that

\bar{I}(T, f) = \mu_i \int_0^T e^{-j2\pi ft}\, dt =
\begin{cases}
\mu_i [1 - e^{-j2\pi fT}]/(j2\pi f) & f \ne 0 \\
\mu_i T & f = 0
\end{cases}    (6.86)

and similarly for \bar{Q}(T, f). Hence,

\bar{I}(T, f - f_c)\,\bar{Q}^*(T, f + f_c) =
\begin{cases}
\dfrac{\mu_i \mu_q^* [1 - e^{-j2\pi(f-f_c)T}][1 - e^{j2\pi(f+f_c)T}]}{4\pi^2(f^2 - f_c^2)} & f \ne \pm f_c \\[1ex]
\dfrac{\mu_i \mu_q^* [1 - e^{j2\pi(2f_c)T}]\, T}{-j2\pi(2f_c)} & f = \pm f_c
\end{cases}    (6.87)

For f \ne \pm f_c, and for any \varepsilon > 0, there will exist a T, such that

\frac{|\bar{I}(T, f - f_c)\,\bar{Q}^*(T, f + f_c)|}{2T} \le \frac{4|\mu_i \mu_q|}{8\pi^2 |f^2 - f_c^2|\, T} < \varepsilon    (6.88)

For f = \pm f_c, and for the case where \mu_i and \mu_q are both real, it follows that

\frac{\mathrm{Im}[\bar{I}(T, f - f_c)\,\bar{Q}^*(T, f + f_c)]}{2T} = \frac{\mu_i \mu_q [1 - \cos(2\pi(2f_c)T)]}{8\pi f_c}    (6.89)

which is clearly finite for all values of T. However, the integral of this component is zero; that is, it is a component associated with zero energy. Accordingly, this component can be neglected as T \to \infty.
7 Memoryless Transformations of Random Processes

7.1 INTRODUCTION

This chapter uses the fact that a memoryless nonlinearity does not affect the disjointness of a disjoint random process to illustrate a procedure for ascertaining the power spectral density of a signaling random process after a memoryless transformation. Several examples are given, including two illustrating the application of this approach to frequency modulation (FM) spectral analysis. Alternative approaches are given in Davenport (1958 ch. 12) and Thomas (1969 ch. 6).
7.2 POWER SPECTRAL DENSITY AFTER A MEMORYLESS TRANSFORMATION

The approach given in this chapter relies on a disjoint partition of signals on a fixed interval. The following section gives the relevant results.

7.2.1 Decomposition of Output Using Input Time Partition

Consider a signal f which, based on a set of disjoint time intervals I_1, ..., I_N, can be written as a summation of disjoint waveforms according to

f(t) = \sum_{i=1}^{N} f_i(t),    f_i(t) = \begin{cases} f(t) & t \in I_i \\ 0 & \text{elsewhere} \end{cases}    (7.1)
If such a signal is input into a memoryless nonlinearity characterized by an operator G, then the output signal g = G(f) can be written as a summation of disjoint waveforms according to

g(t) = \sum_{i=1}^{N} g_i(t),    g_i(t) = \begin{cases} g(t) & t \in I_i \\ 0 & t \notin I_i \end{cases}    (7.2)

where, as detailed in Section 2.3.3,

g_i(t) = \begin{cases} G(f_i(t)) & t \in I_i \\ 0 & t \notin I_i \end{cases}    (7.3)
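A small numerical sketch of Eqs. (7.1)-(7.3) follows. It is illustrative only and not from the text: the cubic nonlinearity, the interval layout, and the sample content are all assumptions. It confirms that applying a memoryless G to each disjoint component on its own interval and summing reproduces G applied to the composite signal.

```python
# Disjoint decomposition through a memoryless nonlinearity, Eqs. (7.1)-(7.3).

def G(x):
    return x ** 3 - 0.5 * x          # an arbitrary memoryless nonlinearity

N, samples_per_interval = 4, 10
T = N * samples_per_interval         # discrete time grid, intervals I_1..I_N

# f_i: waveform i is nonzero only on its own interval I_i -- Eq. (7.1)
f_components = []
for i in range(N):
    f_i = [0.0] * T
    for k in range(samples_per_interval):
        f_i[i * samples_per_interval + k] = (i + 1) * 0.1 * (k + 1)
    f_components.append(f_i)

f = [sum(f_i[t] for f_i in f_components) for t in range(T)]

# g_i(t) = G(f_i(t)) on I_i, zero elsewhere -- Eq. (7.3)
g_components = []
for i, f_i in enumerate(f_components):
    lo, hi = i * samples_per_interval, (i + 1) * samples_per_interval
    g_components.append([G(f_i[t]) if lo <= t < hi else 0.0 for t in range(T)])

g_from_parts = [sum(g_i[t] for g_i in g_components) for t in range(T)]
g_direct = [G(f[t]) for t in range(T)]
assert all(abs(a - b) < 1e-12 for a, b in zip(g_from_parts, g_direct))
```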
7.2.1.1 Implication If all signals from a signaling random process can be written as a summation of disjoint signals, then this result can be used to define each of the corresponding output signals after a memoryless transformation and, hence, define a signaling random process for the output random process. As the power spectral density of a signaling random process is well defined (see Theorem 5.1), such an approach allows the output power spectral density to be readily evaluated. Clearly, the applicability of this approach depends on the extent to which signals from a signaling random process can be written as a summation of disjoint waveforms, that is, the extent to which a signaling random process can be written as a disjoint signaling random process, which is defined as follows.

Definition: Disjoint Signaling Random Process. A disjoint signaling random process X, with a signaling period D, is a signaling random process where each waveform in the signaling set is zero outside the interval [0, D]. The ensemble E_X characterizing such a random process for the interval [0, ND] is

E_X = \left\{ x(\gamma_1, ..., \gamma_N, t) = \sum_{i=1}^{N} \phi(\gamma_i, t - (i-1)D) : \gamma_i \in S_\Gamma,\ \phi \in E_\phi \right\}    (7.4)

where S_\Gamma is the sample space of the index random variable \Gamma, and is such that S_\Gamma \subseteq Z^+ for the countable case, and S_\Gamma \subseteq R for the uncountable case. The set of signaling waveforms, E_\phi, is defined according to

E_\phi = \{ \phi(\gamma, t) : \gamma \in S_\Gamma,\ \phi(\gamma, t) = 0 \text{ for } t < 0,\ t > D \}    (7.5)
7.2.1.2 Equivalent Disjoint Signaling Random Process Consider a signaling random process X, defined by the ensemble

E_X = \left\{ x(\zeta_1, ..., \zeta_N, t) = \sum_{i=1}^{N} \psi(\zeta_i, t - (i-1)D) : \zeta_i \in S_Z,\ \psi \in E_\psi \right\}    (7.6)
Figure 7.1 Illustration of signaling waveform.
where S_Z is the sample space of the index random variable Z and the set of signaling waveforms, E_\psi, is defined according to

E_\psi = \{ \psi(\zeta, t) : \zeta \in S_Z \}    (7.7)

Further, assume, as illustrated in Figure 7.1, that all signaling waveforms are nonzero only on a finite number of signaling intervals. It then follows that if a waveform in the random process starts with the signals associated with data in [0, D], [D, 2D], ..., then a transient waveform exists in the interval [0, q_U D]. This transient is avoided for t > 0 if signals associated with data in the interval [-q_U D, -(q_U - 1)D] and subsequent intervals are included.

The following theorem states that the random process defined in Eq. (7.6) can be written as a disjoint signaling random process with an appropriate disjoint signaling set. A likely, but not necessary, consequence of this alternative characterization of a random process is correlation between signaling waveforms in adjacent signaling intervals.

Theorem 7.1. Equivalent Disjoint Signaling Random Process. If all signaling waveforms in the signaling set E_\psi, associated with a signaling random process X, are zero outside [-q_L D, (q_U + 1)D], where q_L, q_U \in Z^+ \cup \{0\}, then, for the steady state case, the signaling random process can be written on the interval [0, ND] as a disjoint signaling random process with an ensemble
E_X = \left\{ x(\theta_1, ..., \theta_N, t) = \sum_{i=1}^{N} \phi(\theta_i, t - (i-1)D) : \phi \in E_\phi,\ \theta_i \in S_\Theta \right\}    (7.8)

The associated signaling set E_\phi is defined as

E_\phi = \{ \phi(\theta, t) : \theta \in S_\Theta = S_Z \times \cdots \times S_Z,\ \theta = (\zeta_{-q_U}, ..., \zeta_{q_L}),\ \zeta_{-q_U}, ..., \zeta_{q_L} \in S_Z \}    (7.9)

where

\phi(\theta, t) = \begin{cases} \psi(\zeta_{-q_U}, t - (-q_U)D) + \cdots + \psi(\zeta_{q_L}, t - q_L D) & 0 \le t \le D \\ 0 & \text{elsewhere} \end{cases}    (7.10)

Proof. The proof of this result is given in Appendix 1.
7.2.1.3 Notes All waveforms in E_\phi are zero outside the interval [0, D]. The probability of each waveform, and the correlation between waveforms, can be readily inferred from the original signaling random process. For the finite case where there are M independent signaling waveforms in E_\psi, potentially there are M^{q_L + q_U + 1} waveforms in E_\phi. In most instances the waveforms from different signaling intervals will be correlated.

7.2.2 Power Spectral Density After a Nonlinear Memoryless Transformation

Consider a disjoint signaling random process characterized over the interval [0, ND] by the ensemble E_X and associated signaling set as per Eqs. (7.4) and (7.5). If waveforms from such a random process are passed through a memoryless nonlinearity, characterized by an operator G, then the corresponding output random process Y is characterized by the ensemble E_Y and associated signaling set E_\varphi, where

E_Y = \left\{ y(\gamma_1, ..., \gamma_N, t) = \sum_{i=1}^{N} \varphi(\gamma_i, t - (i-1)D) : \gamma_i \in S_\Gamma,\ \varphi \in E_\varphi \right\}    (7.11)

and

E_\varphi = \{ \varphi : \varphi(\gamma, t) = G[\phi(\gamma, t)],\ \gamma \in S_\Gamma,\ \phi \in E_\phi \}    (7.12)

Here, P[\varphi(\gamma, t)] = P[\phi(\gamma, t)] = P[\gamma]. Clearly, the memoryless nonlinearity does not alter the signaling random process form, and the following result from Theorem 5.1 can be directly used to ascertain the power spectral density of the output random process,

G_Y(ND, f) = r\bar{\alpha}(f) - r|\bar{\mu}(f)|^2 + \frac{r}{N}|\bar{\mu}(f)|^2\, \frac{\sin^2(\pi Nf/r)}{\sin^2(\pi f/r)} + 2r \sum_{i=1}^{m} \left(1 - \frac{i}{N}\right) \mathrm{Re}\!\left[e^{j2\pi iDf}\big(R_{+i}(f) - |\bar{\mu}(f)|^2\big)\right]    (7.13)

G_{Y\infty}(f) = r\bar{\alpha}(f) - r|\bar{\mu}(f)|^2 + r^2|\bar{\mu}(f)|^2 \sum_{n=-\infty}^{\infty} \delta(f - nr) + 2r \sum_{i=1}^{m} \mathrm{Re}\!\left[e^{j2\pi iDf}\big(R_{+i}(f) - |\bar{\mu}(f)|^2\big)\right]    (7.14)

where r = 1/D and \bar{\alpha}, |\bar{\mu}(f)|^2, and R_{+i} are defined consistent with the
definitions given in Theorem 5.1. For example, for the countable case P[\gamma] = p_\gamma and

\bar{\mu}(f) = \sum_\gamma p_\gamma\, \Phi(\gamma, f),    \bar{\alpha}(f) = \sum_\gamma p_\gamma\, |\Phi(\gamma, f)|^2    (7.15)

R_{+i}(f) = \sum_{\gamma_1} \sum_{\gamma_{1+i}} p_{\gamma_1\gamma_{1+i}}\, \Phi(\gamma_1, f)\, \Phi^*(\gamma_{1+i}, f)    (7.16)

where \Phi(\gamma, f) = \int_0^D \varphi(\gamma, t)\, e^{-j2\pi ft}\, dt.

7.2.3 Extension to Nonmemoryless Systems

It is clearly useful if the above approach can be extended to nonmemoryless systems. To facilitate this, it is useful to define a signaling invariant system.

7.2.3.1 Definition - Signaling Invariant System A system is a signaling invariant system if the output random process, in response to an input signaling random process, is also a signaling random process and there is a one-to-one correspondence between waveforms in the signaling sets associated with the input and output random processes; that is, if E_\phi = \{\phi_i\} and E_\varphi = \{\varphi_i\} are, respectively, the input and output signaling sets, then there exists an operator G such that \varphi_i = G[\phi_i].

A simple example of a signaling varying system is one where the output y, in response to an input x, is defined as y(t) = x(t) + x(t/4). For the case where the input is a waveform from a signaling random process, the output is the summation of two signaling waveforms whose signaling intervals have an irrational ratio.

7.2.3.2 Implication If a system is a signaling invariant system and is driven by a signaling random process, then the output is also a signaling random process whose power spectral density can be readily ascertained through use of Eqs. (7.13) and (7.14).

7.2.3.3 Signaling Invariant Systems A simple example of a nonmemoryless, but signaling invariant, system is a system characterized by a delay t_D. In fact, all linear time invariant systems are signaling invariant, as can readily be seen from the principle of superposition. However, the results of Chapter 8 yield a simple method for ascertaining the power spectral density of the output of a linear time invariant system, in terms of the input power spectral density and the "transfer function" of the system.
7.3 EXAMPLES

The following sections give several examples of the above theory related to nonlinear transformations of random processes.

7.3.1 Amplitude Signaling through Memoryless Nonlinearity

Consider the case where the input random process X to a memoryless nonlinearity is a disjoint signaling random process, characterized on the interval [0, ND] by the ensemble E_X:

E_X = \left\{ x(a_1, ..., a_N, t) = \sum_{i=1}^{N} \phi(a_i, t - (i-1)D) : a_i \in S_A,\ \phi \in E_\phi \right\}    (7.17)

where

E_\phi = \{ \phi(a, t) = a\,p(t) : a \in S_A \},    p(t) = \begin{cases} 1 & 0 \le t \le D \\ 0 & \text{elsewhere} \end{cases}    (7.18)

and P[\phi(a, t),\ a \in [a_o, a_o + da]] = f_A(a_o)\, da. Here, f_A is the density function of a random variable A with outcomes a and sample space S_A. Assuming the signaling amplitudes are independent from one signaling interval to the next, it follows that the power spectral density of X is

G_X(ND, f) = r\bar{\alpha}(f) - r|\bar{\mu}(f)|^2 + \frac{r}{N}|\bar{\mu}(f)|^2\, \frac{\sin^2(\pi Nf/r)}{\sin^2(\pi f/r)}    (7.19)

G_{X\infty}(f) = r\bar{\alpha}(f) - r|\bar{\mu}(f)|^2 + r^2|\bar{\mu}(f)|^2 \sum_{n=-\infty}^{\infty} \delta(f - nr)    (7.20)
where r = 1/D, and

\bar{\mu}(f) = P(f) \int_{-\infty}^{\infty} a\, f_A(a)\, da = \mu_A\, P(f),    \bar{\alpha}(f) = |P(f)|^2 \int_{-\infty}^{\infty} a^2 f_A(a)\, da = \overline{A^2}\, |P(f)|^2    (7.21)

If signals from X are passed through a memoryless nonlinearity G, then, because of the disjointness of the input components of the signaling waveform, the ensemble of the output random process Y is

E_Y = \left\{ y(a_1, ..., a_N, t) = \sum_{i=1}^{N} \varphi(a_i, t - (i-1)D) : a_i \in S_A,\ \varphi \in E_\varphi \right\}    (7.22)
where

E_\varphi = \{ \varphi(a, t) = G(a)\,p(t) : a \in S_A \}    (7.23)

and

P[\varphi(a, t),\ a \in [a_o, a_o + da]] = P[\phi(a, t),\ a \in [a_o, a_o + da]] = f_A(a_o)\, da    (7.24)

It then follows that the power spectral density of the output random process is

G_Y(ND, f) = r\bar{\alpha}(f) - r|\bar{\mu}(f)|^2 + \frac{r}{N}|\bar{\mu}(f)|^2\, \frac{\sin^2(\pi Nf/r)}{\sin^2(\pi f/r)}    (7.25)

G_{Y\infty}(f) = r\bar{\alpha}(f) - r|\bar{\mu}(f)|^2 + r^2|\bar{\mu}(f)|^2 \sum_{n=-\infty}^{\infty} \delta(f - nr)    (7.26)

where

\bar{\mu}(f) = P(f) \int_{-\infty}^{\infty} G(a)\, f_A(a)\, da = \mu_G\, P(f),    \bar{\alpha}(f) = |P(f)|^2 \int_{-\infty}^{\infty} G^2(a)\, f_A(a)\, da = \overline{G^2}\, |P(f)|^2    (7.27)

where the following definitions have been used:

\mu_G = \int_{-\infty}^{\infty} G(a)\, f_A(a)\, da,    \overline{G^2} = \int_{-\infty}^{\infty} G^2(a)\, f_A(a)\, da    (7.28)

To illustrate these results, consider a square law device, that is, G(a) = a^2, and a Gaussian distribution of amplitudes according to f_A(a) = e^{-a^2/2}/\sqrt{2\pi}, whereupon it follows that \mu_G = 1 and \overline{G^2} = 3 (Papoulis, 2002 p. 148). Thus, with |P(f)|^2 = \mathrm{sinc}^2(f/r)/r^2, it follows that

G_X(ND, f) = G_{X\infty}(f) = \frac{1}{r}\,\mathrm{sinc}^2(f/r)    (7.29)

G_Y(ND, f) = \frac{2}{r}\,\mathrm{sinc}^2(f/r) + \frac{1}{Nr}\,\mathrm{sinc}^2(f/r)\, \frac{\sin^2(\pi Nf/r)}{\sin^2(\pi f/r)}    (7.30)

G_{Y\infty}(f) = \frac{2}{r}\,\mathrm{sinc}^2(f/r) + \delta(f)    (7.31)
Clearly, for this case, and in general for disjoint signaling waveforms with information encoded in the signaling amplitude as per Eq. (7.18), the nonlinear transformation has scaled, but not changed, the shape of the power spectral density function with frequency, apart from impulsive components. For the case where the mean of the Fourier transform of the output signaling set is altered, compared with the corresponding input mean, potentially there is the introduction or removal of impulsive components in the power spectral density.

7.3.2 Nonlinear Filtering to Reduce Spectral Spread

Many nonlinearities yield spectral spread, that is, a broadening of the power spectral density. However, spectral spread is not inevitable and depends on the nature of the nonlinearity and the nature of the input signal. The following is one example of nonlinear filtering where the power spectral density spread is reduced. Consider the case where the input signaling random process X is characterized on the interval [0, ND] by the ensemble
E_X = \left\{ x(\gamma_1, ..., \gamma_N, t) = \sum_{i=1}^{N} \phi(\gamma_i, t - (i-1)D) : \gamma_i \in S_\Gamma,\ \phi \in E_\phi \right\}    (7.32)

where S_\Gamma = \{-1, 1\}, P[\gamma_i = \pm 1] = 0.5,

E_\phi = \left\{ \phi(\gamma_i, t) = A\gamma_i\, \Lambda\!\left(\frac{t - D/2}{D/2}\right) : \gamma_i \in \{-1, 1\} \right\}    (7.33)

and the waveforms in different signaling intervals are independent. Here, \Lambda is the triangle function defined according to

\Lambda(t) = \begin{cases} 1 + t & -1 \le t \le 0 \\ 1 - t & 0 \le t \le 1 \\ 0 & \text{elsewhere} \end{cases}    (7.34)
Consider a nonlinearity defined according to

G(x) = \begin{cases} -A_o & x \le -A \\ A_o \sin\!\left(\dfrac{\pi x}{2A}\right) & -A \le x \le A \\ A_o & x \ge A \end{cases}    (7.35)

which is shown in Figure 7.2, along with input and output waveforms.
Figure 7.2 Memoryless nonlinearity and input and output waveforms.
It follows that the output signaling random process Y is characterized on the interval [0, ND] by the ensemble

E_Y = \left\{ y(\gamma_1, ..., \gamma_N, t) = \sum_{i=1}^{N} \varphi(\gamma_i, t - (i-1)D) : \gamma_i \in S_\Gamma,\ \varphi \in E_\varphi \right\}    (7.36)

where

E_\varphi = \left\{ \varphi(\gamma_i, t) = A_o \gamma_i \sin\!\left[\frac{\pi}{2}\Lambda\!\left(\frac{t - D/2}{D/2}\right)\right] = A_o \gamma_i \sin\!\left(\frac{\pi t}{D}\right),\ 0 \le t \le D,\ \gamma_i \in S_\Gamma = \{-1, 1\} \right\}    (7.37)

Clearly, P[\varphi(\gamma_i, t)] = P[\gamma_i] = 0.5. It follows that the power spectral densities of the input and output random processes are

G_X(ND, f) = G_{X\infty}(f) = r\bar{\alpha}_X(f)    (7.38)

G_Y(ND, f) = G_{Y\infty}(f) = r\bar{\alpha}_Y(f)    (7.39)

where

\bar{\alpha}_X(f) = \frac{A^2}{4r^2}\,\mathrm{sinc}^4\!\left(\frac{f}{2r}\right),    \bar{\alpha}_Y(f) = \frac{4A_o^2 \cos^2(\pi f/r)}{\pi^2 r^2 (1 - 4f^2/r^2)^2}    (7.40)

There is equal power in the input and output spectral densities when A_o = \sqrt{2}A/\sqrt{3}.
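The closed forms in Eq. (7.40) can be cross-checked by direct numerical integration of the two pulse shapes. The sketch below assumes unit amplitude and rate (r = D = A = 1) and an equal-power A_o; the step count and test frequencies are arbitrary (f = r/2 is avoided because the half-sine expression must be evaluated there as a limit).

```python
# Check |Phi(f)|^2 of the triangular input pulse and half-sine output pulse
# against the closed forms of Eq. (7.40).
import cmath, math

A, D = 1.0, 1.0
r = 1.0 / D
Ao = math.sqrt(2.0) * A / math.sqrt(3.0)     # equal-power condition

def ft_sq(pulse, f, n=4000):
    # midpoint-rule |Fourier transform|^2 over [0, D]
    dt = D / n
    s = sum(pulse((k + 0.5) * dt) * cmath.exp(-2j * math.pi * f * (k + 0.5) * dt)
            for k in range(n)) * dt
    return abs(s) ** 2

tri = lambda t: A * (1 - abs(t - D / 2) / (D / 2))     # triangle on [0, D]
half_sine = lambda t: Ao * math.sin(math.pi * t / D)   # output pulse

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

for f in (0.3, 0.7, 1.9):
    alpha_x = (A**2 / (4 * r**2)) * sinc(f / (2 * r)) ** 4
    alpha_y = 4 * Ao**2 * math.cos(math.pi * f / r) ** 2 / (
        math.pi**2 * r**2 * (1 - 4 * f**2 / r**2) ** 2)
    assert abs(ft_sq(tri, f) - alpha_x) < 1e-6
    assert abs(ft_sq(half_sine, f) - alpha_y) < 1e-6
```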
Figure 7.3 Input and output power spectral densities associated with the memoryless nonlinearity and waveforms shown in Figure 7.2.
These power spectral densities are plotted in Figure 7.3 for the case of r = D = 1, A = 1, and A_o = \sqrt{2}/\sqrt{3}. For this equal input and output power case, there is clear spectral narrowing, consistent with the "smoothing" of the input waveform via the nonlinear transformation.

7.3.3 Power Spectral Density of Binary Frequency Shift Keyed Modulation

As the following two examples show, signaling random process theory can readily be applied to ascertaining the power spectral density of FM random processes. First, consider an FM signal,

y(t) = A\cos[x(t)],    x(t) = 2\pi f_c t + \theta(t),    t \ge 0    (7.41)

where the carrier frequency f_c is an integer multiple of the signaling rate r = 1/D, and the binary digital modulation is such that \theta has the form

\theta(t) = 2\pi f_d \int_0^t \sum_{i=1}^{\infty} \gamma_i\, p(\lambda - (i-1)D)\, d\lambda,    \gamma_i \in \{-1, 1\}    (7.42)
Here, P[\gamma_i = -1] = p_{-1} and P[\gamma_i = 1] = p_1, and the pulse function p is assumed to be such that

p(t) = 0 \text{ for } t < 0,\ t > D,    \int_0^D p(t)\, dt = D    (7.43)

which is consistent with a phase change of \pm 2\pi(f_d/r) during each signaling interval of duration D sec. Clearly, p(t) and p(t - iD) are disjoint for i \ge 1. With the assumptions that both f_d/r and f_c/r are integer ratios, it follows, as far as a cosine function is concerned, that the phase signal x in any interval of the form [(i-1)D, iD], where i \in Z^+, can be written as

x(t) = 2\pi f_c(t - (i-1)D) + 2\pi f_d \gamma_i \int_0^{t-(i-1)D} p(\lambda)\, d\lambda = \phi(\gamma_i, t - (i-1)D),    t \in [(i-1)D, iD],\ \gamma_i \in \{-1, 1\}    (7.44)

where

\phi(\gamma, t) = \begin{cases} 2\pi f_c t + 2\pi f_d \gamma \displaystyle\int_0^t p(\lambda)\, d\lambda & 0 \le t \le D,\ \gamma \in \{\pm 1\} \\ 0 & \text{elsewhere} \end{cases}    (7.45)
It then follows, for the ith signaling interval, that

y(t) = A\cos[x(t)] = A\cos[\phi(\gamma_i, t - (i-1)D)],    (i-1)D \le t \le iD    (7.46)

This formulation can be generalized to the random process case as follows. The random phase process X is defined by the ensemble E_X,

E_X = \left\{ x(\gamma_1, ..., \gamma_N, t) = \sum_{i=1}^{N} \phi(\gamma_i, t - (i-1)D) : \phi \in E_\phi,\ \gamma_i \in \{\pm 1\},\ t \in [0, ND] \right\}    (7.47)

where P[\gamma_i] \in \{p_{-1}, p_1\} and the signaling set E_\phi is defined as

E_\phi = \left\{ \phi(\gamma, t) = \begin{cases} 2\pi f_c t + 2\pi f_d \gamma \displaystyle\int_0^t p(\lambda)\, d\lambda & 0 \le t \le D,\ \gamma \in \{\pm 1\} \\ 0 & \text{elsewhere} \end{cases} \right\}    (7.48)

As any waveform in E_X consists of a summation of disjoint signals, a random
process Y = \cos[X] can be defined with an ensemble E_Y,

E_Y = \left\{ y(\gamma_1, ..., \gamma_N, t) = \sum_{i=1}^{N} \varphi(\gamma_i, t - (i-1)D) : \varphi \in E_\varphi,\ \gamma_i \in \{\pm 1\},\ t \in [0, ND] \right\}    (7.49)

where y(\gamma_1, ..., \gamma_N, t) is a summation of disjoint signals from the signaling set E_\varphi,

E_\varphi = \left\{ \varphi(\gamma, t) = \begin{cases} A\cos[\phi(\gamma, t)] & 0 \le t \le D,\ \gamma \in \{\pm 1\},\ \phi \in E_\phi \\ 0 & \text{elsewhere} \end{cases} \right\}    (7.50)

and P[\varphi(\gamma, t)] = P[\phi(\gamma, t)] \in \{p_{-1}, p_1\}. With independent data, consistent with \gamma_i being independent of \gamma_j for i \ne j, it follows from Eq. (7.14) that the power spectral density of Y is

G_{Y\infty}(f) = r \sum_{\gamma} p_\gamma\, |\Phi(\gamma, f)|^2 - r \left| \sum_{\gamma} p_\gamma\, \Phi(\gamma, f) \right|^2 + r^2 \left| \sum_{\gamma} p_\gamma\, \Phi(\gamma, f) \right|^2 \sum_{k=-\infty}^{\infty} \delta(f - kr),    \gamma \in \{-1, 1\}    (7.51)

where \Phi is the Fourier transform of \varphi. For the case where p(t) = 1 for 0 \le t \le D and zero elsewhere, that is, binary frequency shift keyed (FSK) modulation, it follows that the signaling set is

E_\varphi = \left\{ \begin{aligned} \varphi(1, t) &= A\cos[2\pi(f_c + f_d)t], & 0 \le t \le D \\ \varphi(-1, t) &= A\cos[2\pi(f_c - f_d)t], & 0 \le t \le D \end{aligned} \right\}    (7.52)

and

\Phi(1, f) = \frac{A}{2r}\, e^{-j\pi(f - f_c - f_d)/r}\, \mathrm{sinc}\!\left(\frac{f - f_c - f_d}{r}\right) + \frac{A}{2r}\, e^{-j\pi(f + f_c + f_d)/r}\, \mathrm{sinc}\!\left(\frac{f + f_c + f_d}{r}\right)    (7.53)

\Phi(-1, f) = \frac{A}{2r}\, e^{-j\pi(f - f_c + f_d)/r}\, \mathrm{sinc}\!\left(\frac{f - f_c + f_d}{r}\right) + \frac{A}{2r}\, e^{-j\pi(f + f_c - f_d)/r}\, \mathrm{sinc}\!\left(\frac{f + f_c - f_d}{r}\right)    (7.54)

For this case, and where r = D = f_d = 1, A = \sqrt{2}, f_c = 10, and p_{-1} = p_1 = 0.5,
Figure 7.4 Power spectral density of a binary FSK random process with r = D = f_d = 1, A = \sqrt{2}, and f_c = 10. The dots represent the power in impulses.
the power spectral density, as defined by Eq. (7.51), is shown in Figure 7.4. A check on the power in the impulses can be simply undertaken by writing the FM signal A\cos[2\pi(f_c \pm f_d)t] in the quadrature carrier form,

A\cos(2\pi f_c t)\cos(2\pi f_d t) \mp A\sin(2\pi f_c t)\sin(2\pi f_d t)
(7.55)
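The impulse bookkeeping of Eq. (7.51) can also be confirmed numerically: with the parameters of this example, the impulse weight r^2|\bar{\mu}(f)|^2 at f = f_c + f_d works out to A^2/16 = 0.125. The sketch below is illustrative only; the numerical-integration step count is an arbitrary choice.

```python
# Impulse weight of Eq. (7.51) at f = fc + fd for binary FSK with
# r = D = fd = 1, A = sqrt(2), fc = 10: r^2 |mu(f)|^2 should be A^2/16.
import cmath, math

A, D, fc, fd = math.sqrt(2.0), 1.0, 10.0, 1.0
r = 1.0 / D

def Phi(gamma, f, n=20000):
    # midpoint-rule transform of A cos(2 pi (fc + gamma*fd) t) on [0, D]
    dt = D / n
    return sum(A * math.cos(2 * math.pi * (fc + gamma * fd) * t)
               * cmath.exp(-2j * math.pi * f * t)
               for t in ((k + 0.5) * dt for k in range(n))) * dt

f = fc + fd
mu = 0.5 * Phi(1, f) + 0.5 * Phi(-1, f)
impulse_weight = r**2 * abs(mu) ** 2
assert abs(impulse_weight - A**2 / 16) < 1e-4
```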
The first term is periodic, and independent of the data, and yields impulses at \pm f_c \pm f_d, where the area under each impulse is A^2/16, which equals 0.125 when A = \sqrt{2}.

7.3.4 Frequency Modulation with Raised Cosine Pulse Shaping

Consider an FM signal with continuous phase modulation that is achieved through the use of raised cosine pulse shaping,
x(t) = A\sin\!\left[2\pi f_c t + 2\pi r \int_{-D}^{t} m(\lambda)\, d\lambda\right],    t \ge 0    (7.56)

where r = 1/D and the lower limit of -D in the integral arises from the pulse waveform in the modulating signal m, which is defined according to

m(t) = \sum_{i=1}^{\infty} \gamma_i\, p(t - (i-1)D),    \gamma_i \in \{\pm 0.5\},\ t \ge -D    (7.57)
Here, p is a raised cosine pulse with a duration of three signaling intervals (Proakis, 1995 p. 218), that is,

p(t) = \begin{cases} \dfrac{1}{3}\left[1 - \cos\!\left(\dfrac{2\pi(t + D)}{3D}\right)\right] & -D \le t \le 2D \\ 0 & \text{elsewhere} \end{cases}    (7.58)

and is shown in Figure 7.5. The integral of this raised cosine pulse shape, q, is
q(t) = \int_{-D}^{t} p(\lambda)\, d\lambda = \begin{cases} 0 & t \le -D \\ \dfrac{t + D}{3} - \dfrac{D}{2\pi}\sin\!\left(\dfrac{2\pi(t + D)}{3D}\right) & -D \le t \le 2D \\ D & t \ge 2D \end{cases}    (7.59)

and the area under p is D. The value of \gamma_i \in \{\pm 0.5\} in Eq. (7.57) results in each signaling waveform yielding a phase change of \pm\pi. The normalized integral of p, that is, q(t/D)/D, is shown in Figure 7.5. As

\int_{-D}^{t} \sum_{i=1}^{\infty} \gamma_i\, p(\lambda - (i-1)D)\, d\lambda = \sum_{i=1}^{\infty} \gamma_i \int_{-D}^{t} p(\lambda - (i-1)D)\, d\lambda = \sum_{i=1}^{\infty} \gamma_i\, q(t - (i-1)D),    t \ge -D    (7.60)
Figure 7.5 Raised cosine pulse waveform and normalized integral of such a waveform.
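The closed-form integral q in Eq. (7.59), and the \pm\pi phase change per symbol, can be checked with a short script. This is a sketch only; the test points and unit parameters are arbitrary choices.

```python
# Check Eqs. (7.58)-(7.59): q is the integral of the raised cosine pulse p,
# and 2*pi*r*gamma*q(2D) with gamma = +/-0.5 gives a +/-pi phase change.
import math

D = 1.0
r = 1.0 / D

def p(t):
    if -D <= t <= 2 * D:
        return (1.0 / 3.0) * (1.0 - math.cos(2 * math.pi * (t + D) / (3 * D)))
    return 0.0

def q_closed(t):
    if t <= -D:
        return 0.0
    if t >= 2 * D:
        return D
    return (t + D) / 3.0 - (D / (2 * math.pi)) * math.sin(2 * math.pi * (t + D) / (3 * D))

def q_numeric(t, n=20000):
    # midpoint-rule integral of p from -D to t
    dt = (t + D) / n
    return sum(p(-D + (k + 0.5) * dt) for k in range(n)) * dt

for t in (-0.5, 0.0, 1.0, 2.0):
    assert abs(q_closed(t) - q_numeric(t)) < 1e-6

# final phase contribution of one symbol: 2*pi*r*gamma*q(2D) = +/- pi
assert abs(2 * math.pi * r * 0.5 * q_closed(2 * D) - math.pi) < 1e-12
```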
it follows that the FM signal defined by Eqs. (7.56) and (7.57) can be written as

x(t) = A\sin\!\left[2\pi f_c t + 2\pi r \sum_{i=1}^{\infty} \gamma_i\, q(t - (i-1)D)\right],    \gamma_i \in \{\pm 0.5\},\ t \ge 0    (7.61)

The random process, of which this signal is one outcome, is denoted X and is defined, on the interval [0, ND), by the ensemble E_X:

E_X = \left\{ x(\gamma_0, ..., \gamma_{N+1}, t) = A\sin\!\left[\omega_c t + 2\pi r \sum_{i=0}^{N+1} \gamma_i\, q(t - (i-1)D)\right] : \omega_c = 2\pi f_c,\ \gamma_i \in \{\pm 0.5\},\ t \in [0, ND) \right\}    (7.62)

where the effect of symbols in the intervals [-D, 0] and [ND, (N+1)D] has been included to establish a steady state ensemble for [0, ND]. As the integral of the pulse shape is D = 1/r, and \gamma_i \in \{\pm 0.5\}, it follows that each pulse contributes a final phase shift of \pm\pi radians to the argument of the sine function. Hence, each pair of symbols results in a phase shift from the set \{-2\pi, 0, 2\pi\}. As the sine function is periodic with period 2\pi, it follows that in the ith signaling interval, [(i-1)D, iD], the phase accumulation from the previous ..., i-3, i-2 symbols can be neglected. Thus, it is possible to rewrite the ensemble defining the random process X on the interval [0, 2ND] in signaling random process form, with a signaling rate of r/2, that is,

E_X = \left\{ x(c_1, ..., c_N, t) = \sum_{i=1}^{N} \phi_{c_i}(t - (i-1)2D) : c_i \in \{1, ..., 16\},\ \phi_{c_i} \in E_\phi,\ t \in [0, 2ND] \right\}    (7.63)

where the signaling set E_\phi consists of waveforms that are zero outside the interval [0, 2D], and is defined according to

E_\phi = \left\{ \phi_i(t) = \begin{cases} A\sin[2\pi f_c t + \theta_i(t)] & 0 \le t \le 2D \\ 0 & \text{elsewhere} \end{cases},\ i \in \{1, ..., 16\} \right\}    (7.64)

The waveforms in E_\phi, as well as the component phase waveforms \theta_i, are detailed in Table 7.1. All waveforms in this set have equal probability, and the phase waveforms are plotted in Figure 7.6. The correlation between the signaling waveforms in adjacent signaling intervals of duration 2D is detailed in Table 7.2. The signaling waveforms in signaling intervals separated by at least 2D are independent as far as the sine operator is concerned.
Table 7.1 Signaling Waveforms in Signaling Set

Data   Phase Waveform \theta_i(t) in [0, 2D]                        Signaling Waveform in [0, 2D]
0000   \theta_1(t) = \pi r[-q(t+D) - q(t) - q(t-D) - q(t-2D)]       \phi_1(t) = A\sin(2\pi f_c t + \theta_1(t))
0001   \theta_2(t) = \pi r[-q(t+D) - q(t) - q(t-D) + q(t-2D)]       \phi_2(t) = A\sin(2\pi f_c t + \theta_2(t))
0010   \theta_3(t) = \pi r[-q(t+D) - q(t) + q(t-D) - q(t-2D)]       \phi_3(t) = A\sin(2\pi f_c t + \theta_3(t))
0011   \theta_4(t) = \pi r[-q(t+D) - q(t) + q(t-D) + q(t-2D)]       \phi_4(t) = A\sin(2\pi f_c t + \theta_4(t))
0100   \theta_5(t) = \pi r[-q(t+D) + q(t) - q(t-D) - q(t-2D)]       \phi_5(t) = A\sin(2\pi f_c t + \theta_5(t))
0101   \theta_6(t) = \pi r[-q(t+D) + q(t) - q(t-D) + q(t-2D)]       \phi_6(t) = A\sin(2\pi f_c t + \theta_6(t))
0110   \theta_7(t) = \pi r[-q(t+D) + q(t) + q(t-D) - q(t-2D)]       \phi_7(t) = A\sin(2\pi f_c t + \theta_7(t))
0111   \theta_8(t) = \pi r[-q(t+D) + q(t) + q(t-D) + q(t-2D)]       \phi_8(t) = A\sin(2\pi f_c t + \theta_8(t))
1000   \theta_9(t) = \pi r[q(t+D) - q(t) - q(t-D) - q(t-2D)]        \phi_9(t) = A\sin(2\pi f_c t + \theta_9(t))
1001   \theta_{10}(t) = \pi r[q(t+D) - q(t) - q(t-D) + q(t-2D)]     \phi_{10}(t) = A\sin(2\pi f_c t + \theta_{10}(t))
1010   \theta_{11}(t) = \pi r[q(t+D) - q(t) + q(t-D) - q(t-2D)]     \phi_{11}(t) = A\sin(2\pi f_c t + \theta_{11}(t))
1011   \theta_{12}(t) = \pi r[q(t+D) - q(t) + q(t-D) + q(t-2D)]     \phi_{12}(t) = A\sin(2\pi f_c t + \theta_{12}(t))
1100   \theta_{13}(t) = \pi r[q(t+D) + q(t) - q(t-D) - q(t-2D)]     \phi_{13}(t) = A\sin(2\pi f_c t + \theta_{13}(t))
1101   \theta_{14}(t) = \pi r[q(t+D) + q(t) - q(t-D) + q(t-2D)]     \phi_{14}(t) = A\sin(2\pi f_c t + \theta_{14}(t))
1110   \theta_{15}(t) = \pi r[q(t+D) + q(t) + q(t-D) - q(t-2D)]     \phi_{15}(t) = A\sin(2\pi f_c t + \theta_{15}(t))
1111   \theta_{16}(t) = \pi r[q(t+D) + q(t) + q(t-D) + q(t-2D)]     \phi_{16}(t) = A\sin(2\pi f_c t + \theta_{16}(t))

Data of 0 and 1 correspond, respectively, to \gamma_i = -0.5 and \gamma_i = 0.5. The data in the first column are for the intervals [-D, 0], [0, D], [D, 2D], and [2D, 3D].
7.3.4.1 Determining Power Spectral Density

The power spectral density from Theorem 5.1, for a signaling random process with a rate r_o = 1/D_o, is

G_X(ND_o, f) = r_o\bar{\alpha}(f) - r_o|\bar{\mu}(f)|^2 + \frac{r_o}{N}|\bar{\mu}(f)|^2\, \frac{\sin^2(\pi Nf/r_o)}{\sin^2(\pi f/r_o)} + 2r_o \sum_{i=1}^{m} \left(1 - \frac{i}{N}\right) \mathrm{Re}\!\left[e^{j2\pi iD_o f}\big(R_{+i}(f) - |\bar{\mu}(f)|^2\big)\right]    (7.65)

where, for the case being considered, r_o = r/2 = 1/2D, D_o = 2D, m = 1, and

\bar{\mu}(f) = \sum_i p_i\, \Phi_i(f),    \bar{\alpha}(f) = \sum_i p_i\, |\Phi_i(f)|^2    (7.66)

R_{+1}(f) = \sum_{c_1} \sum_{c_2} p_{c_1 c_2}\, \Phi_{c_1}(f)\, \Phi_{c_2}^*(f)    (7.67)

To evaluate the power spectral density, the Fourier transforms of the individual waveforms in the signaling set, as defined by Eq. (7.64) and Table 7.1, are required. The details are given in Appendix 2. Using the results from this appendix, the power spectral density, as defined by Eqs. (7.65)-(7.67), is shown in Figure 7.7 for the case of f_c = 10, r = D = 1, A = \sqrt{2}, and N \to \infty. For the parameters used, the average power is 1 V^2, assuming a voltage signal. The power in each of the sinusoidal components with frequencies of f_c \pm r/2 is 0.11 V^2, and the remaining power of 0.78 V^2 is in the continuous spectrum.
Figure 7.6 Phase signaling waveforms for [0, 2D].
The power in the impulsive components is consistent with inefficient signaling. These components can be eliminated by reducing the phase variation in each signaling waveform from \pi to \pi/2 radians. This also leads to better spectral efficiency (Proakis, 1995 p. 218). With respect to spectral efficiency, the power spectral density shown in Figure 7.7 should be compared with that

Table 7.2 Correlation between Signals in Signaling Intervals of Duration 2D

Data     Signal in ith Interval    Signal in (i+1)th Interval    Probability
xx0000   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx0001   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx0010   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx0011   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx0100   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx0101   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx0110   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx0111   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1000   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1001   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1010   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1011   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1100   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1101   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1110   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64
xx1111   \phi_{\cdot}(t)           \phi_{\cdot}(t)               1/64

The data in the first column are for the (i-1)th, ith, and (i+1)th signaling intervals of duration 2D. The symbol x implies the data is arbitrary. The signal entries are waveforms from the signaling set of Table 7.1.
Figure 7.7 Power spectral density of a raised cosine pulse shaped FM random process with a carrier frequency of 10 Hz, r = D = 1, and A = \sqrt{2}. The dots represent the power in impulses.
shown in Figure 7.4, where pulse shaping has not been used and the phase change for each signaling waveform is 2\pi radians. Finally, a further comparison of Figures 7.7 and 7.4 reveals that the pulse shaping has led to a very rapid spectral rolloff.
APPENDIX 1: PROOF OF THEOREM 7.1

Consider the steady state case and a single signaling waveform \psi(\zeta_0, t) from the ensemble E_\psi that could be associated with every signaling interval, as shown in Figure 7.8. Clearly, the signal in the interval [0, D] is given by

\psi(\zeta_0, t - (-q_U)D) + \cdots + \psi(\zeta_0, t) + \cdots + \psi(\zeta_0, t - q_L D)    (7.68)
Figure 7.8 Illustration of signaling waveforms that have nonzero contributions in the interval [0, D].
In general, the signal \phi(\theta, t) in the interval [0, D] has the form

\phi(\theta, t) = \begin{cases} \psi(\zeta_{-q_U}, t - (-q_U)D) + \cdots + \psi(\zeta_0, t) + \cdots + \psi(\zeta_{q_L}, t - q_L D) & 0 \le t \le D \\ 0 & \text{elsewhere} \end{cases}    (7.69)

where \theta = (\zeta_{-q_U}, ..., \zeta_{q_L}) and \zeta_{-q_U}, ..., \zeta_{q_L} \in S_Z. By definition, \phi(\theta, t) is zero outside the interval [0, D]. As this interval is representative of any other interval of the form [(i-1)D, iD], it follows that a signal from the random process can be written in the interval [0, ND] as a sum of disjoint signals,

\sum_{i=1}^{N} \phi(\theta_i, t - (i-1)D),    \phi \in E_\phi,\ \theta_i \in S_\Theta    (7.70)

where

E_\phi = \{ \phi(\theta, t) : \theta \in S_\Theta = S_Z \times \cdots \times S_Z,\ \theta = (\zeta_{-q_U}, ..., \zeta_{q_L}),\ \zeta_{-q_U}, ..., \zeta_{q_L} \in S_Z \}    (7.71)

This is the required result.
APPENDIX 2: FOURIER RESULTS FOR RAISED COSINE FREQUENCY MODULATION

To establish the Fourier transform of each signaling waveform, explicit expressions for q(t + D), q(t), q(t - D), and q(t - 2D) are first required. Using the definition of q, as in Eq. (7.59), it follows that

q(t + D) = \frac{t + 2D}{3} + \frac{D}{4\pi}\sin(q_m t) + \frac{\sqrt{3}D}{4\pi}\cos(q_m t),    -2D \le t \le D    (7.72)

q(t) = \frac{t + D}{3} + \frac{D}{4\pi}\sin(q_m t) - \frac{\sqrt{3}D}{4\pi}\cos(q_m t),    -D \le t \le 2D    (7.73)

q(t - D) = \frac{t}{3} - \frac{D}{2\pi}\sin(q_m t),    0 \le t \le 3D    (7.74)

q(t - 2D) = \frac{t - D}{3} + \frac{D}{4\pi}\sin(q_m t) + \frac{\sqrt{3}D}{4\pi}\cos(q_m t),    D \le t \le 4D    (7.75)

where q_m = 2\pi/3D.
A.2.1 Phase Waveforms for [0, D]

For 0 \le t \le D, the phase of the waveforms, as detailed in Table 7.1, can be written as

\theta_i(t) = \pi r[\gamma_{-1}\, q(t + D) + \gamma_1\, q(t) + \gamma_2\, q(t - D)]    (7.76)

where \gamma_{-1}, \gamma_1, \gamma_2 \in \{-1, 1\} depend, respectively, on the data in the intervals [-D, 0], [0, D], and [D, 2D]. From Eqs. (7.72)-(7.75) it follows that

\theta_i(t) = \pi r\left[ (2\gamma_{-1} + \gamma_1)\frac{D}{3} + (\gamma_{-1} + \gamma_1 + \gamma_2)\frac{t}{3} + (\gamma_{-1} + \gamma_1 - 2\gamma_2)\frac{D}{4\pi}\sin(q_m t) + (\gamma_{-1} - \gamma_1)\frac{\sqrt{3}D}{4\pi}\cos(q_m t) \right]    (7.77)

which can be rewritten as

\theta_i(t) = q_o + q_f t + q_a\sin(q_m t + \theta_q)    (7.78)

where

q_o = \frac{\pi}{3}(2\gamma_{-1} + \gamma_1),    q_f = \frac{\pi}{3D}(\gamma_{-1} + \gamma_1 + \gamma_2)    (7.79)

q_a = \frac{1}{4}\sqrt{(\gamma_{-1} + \gamma_1 - 2\gamma_2)^2 + 3(\gamma_{-1} - \gamma_1)^2}    (7.80)

\theta_q = \begin{cases} \tan^{-1}\!\dfrac{\sqrt{3}(\gamma_{-1} - \gamma_1)}{\gamma_{-1} + \gamma_1 - 2\gamma_2} & \gamma_{-1} + \gamma_1 - 2\gamma_2 > 0 \\[1ex] \tan^{-1}\!\dfrac{\sqrt{3}(\gamma_{-1} - \gamma_1)}{\gamma_{-1} + \gamma_1 - 2\gamma_2} + \pi & \gamma_{-1} + \gamma_1 - 2\gamma_2 < 0 \\[1ex] 0 & \gamma_{-1} - \gamma_1 = 0,\ \gamma_{-1} + \gamma_1 - 2\gamma_2 = 0 \end{cases}    (7.81)
(7.82)
226
MEMORYLESS TRANSFORMATIONS OF RANDOM PROCESSES
where
, , , + 91, 1, and thus, \
D t (t) : r (3 ; 9 ) ; ( ; ; ) G \ 3 3 ;
D (3 D ( 9 2 ; ) sin(q t) ; (9 ; ) cos(q t) K K 4 4
(7.83)
This can be rewritten as (t) : q ; q t ; q sin(q t ; ) G M K O
(7.84)
where q : (3 ; 9 ) M 3 \
q : ( ; ; ) 3D
1 q : (( 9 2 ; ) ; 3(9 ; ) 4
(3(9 ; ) 92 ; : (3(9 ; ) O tan\ ; 9 2 ; 0 tan\
(7.85) (7.86)
9 2 ; 0 9 2 ; 0 9 ; : 0, 9 2 ; : 0 (7.87)
A.2.3 Fourier Transform of Signaling Waveforms For the interval [0, 2D], the Fourier transform of the ith waveform in the signaling set is (f): G
"
A sin(2f t ; (t))e\HLDR dt A G A " [eH LDA\DR>PGR 9 e\H LDA>DR>PGR ] dt : 2j
(7.88)
Substituting for (t), and with the definitions u ( f ) : 2( f 9 f ) ; q and G L A u ( f ) : 2( f ; f ) ; q , it follows that ( f ) can be written as N A G
A " [eH OM>SLDR>O OKR>FO 9 e\H OM>SNDR>O OKR>FO dt 2j (7.89) A " [eH OM>SLDR>O OKR>FO 9 e\H OM>SNDR>O OKR>FO dt ; 2j "
where, as is clear from the above derivation of the phase waveforms in [0, D] and [D, 2D], the coefficients q₀, u_n, u_p, q₂, and θ vary from [0, D] to [D, 2D]. With the change of variable λ = t + θ/ω_m, and with the definitions v_n(f) = u_n(f)θ/ω_m, v_p(f) = u_p(f)θ/ω_m, it follows that Φ_i(f) can be written as

Φ_i(f) = −(Aj/2)e^{j[q₀ − v_n(f)]} ∫_{θ/ω_m}^{D+θ/ω_m} e^{j[u_n(f)λ + q₂ sin(ω_m λ)]} dλ
+ (Aj/2)e^{−j[q₀ − v_p(f)]} ∫_{θ/ω_m}^{D+θ/ω_m} e^{−j[u_p(f)λ + q₂ sin(ω_m λ)]} dλ
− (Aj/2)e^{j[q₀ − v_n(f)]} ∫_{D+θ/ω_m}^{2D+θ/ω_m} e^{j[u_n(f)λ + q₂ sin(ω_m λ)]} dλ
+ (Aj/2)e^{−j[q₀ − v_p(f)]} ∫_{D+θ/ω_m}^{2D+θ/ω_m} e^{−j[u_p(f)λ + q₂ sin(ω_m λ)]} dλ   (7.90)
Evaluation of these integrals relies on the result

I(k₀, w_c, w₂, w_m, λ₁, λ₂) = ∫_{λ₁}^{λ₂} e^{jk₀[w_c λ + w₂ sin(w_m λ)]} dλ

= Σ_{i=1}^{∞} (−1)^i J_i(w₂) [sin((w_c − iw_m)λ₂) − sin((w_c − iw_m)λ₁)]/(w_c − iw_m)
+ Σ_{i=0}^{∞} J_i(w₂) [sin((w_c + iw_m)λ₂) − sin((w_c + iw_m)λ₁)]/(w_c + iw_m)
+ (−j)k₀ Σ_{i=1}^{∞} (−1)^i J_i(w₂) [cos((w_c − iw_m)λ₂) − cos((w_c − iw_m)λ₁)]/(w_c − iw_m)
+ (−j)k₀ Σ_{i=0}^{∞} J_i(w₂) [cos((w_c + iw_m)λ₂) − cos((w_c + iw_m)λ₁)]/(w_c + iw_m)   (7.91)

where k₀ ∈ {−1, 1}. This result arises from the standard Bessel function expansions for the terms cos(w₂ sin(w_m λ)) and sin(w₂ sin(w_m λ)) (Spiegel, 1968 p. 145), that is,

cos(w₂ sin(w_m λ)) = J₀(w₂) + 2J₂(w₂) cos(2w_m λ) + 2J₄(w₂) cos(4w_m λ) + ⋯   (7.92)

sin(w₂ sin(w_m λ)) = 2J₁(w₂) sin(w_m λ) + 2J₃(w₂) sin(3w_m λ) + ⋯   (7.93)
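The expansions in Eqs. (7.92) and (7.93) can be verified numerically. The following sketch (an illustration, not from the text; the argument values are arbitrary assumptions) implements J_n by its power series and checks both identities.

```python
import math

def bessel_j(n, x, terms=40):
    # power series: J_n(x) = sum_k (-1)^k (x/2)^(n+2k) / (k! (n+k)!)
    return sum((-1) ** k * (x / 2) ** (n + 2 * k) / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

w2, wm, lam = 1.3, 2.0, 0.7          # arbitrary assumed test values
theta = wm * lam

# partial sums of Eqs. (7.92) and (7.93); the terms fall off rapidly with order
cos_series = bessel_j(0, w2) + sum(2 * bessel_j(2 * i, w2) * math.cos(2 * i * theta)
                                   for i in range(1, 10))
sin_series = sum(2 * bessel_j(2 * i + 1, w2) * math.sin((2 * i + 1) * theta)
                 for i in range(0, 10))

print(cos_series - math.cos(w2 * math.sin(theta)))   # essentially zero
print(sin_series - math.sin(w2 * math.sin(theta)))   # essentially zero
```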
Φ_i(f) can be evaluated using Eq. (7.91), that is,

Φ_i(f) = −(Aj/2)e^{j[q₀ − v_n(f)]} I(1, u_n(f), q₂, ω_m, θ/ω_m, D + θ/ω_m)
+ (Aj/2)e^{−j[q₀ − v_p(f)]} I(−1, u_p(f), q₂, ω_m, θ/ω_m, D + θ/ω_m)
− (Aj/2)e^{j[q₀ − v_n(f)]} I(1, u_n(f), q₂, ω_m, D + θ/ω_m, 2D + θ/ω_m)
+ (Aj/2)e^{−j[q₀ − v_p(f)]} I(−1, u_p(f), q₂, ω_m, D + θ/ω_m, 2D + θ/ω_m)   (7.94)

In the first two component expressions in this equation, q₀, v_n, v_p, q₂, and θ are defined for [0, D], whereas in the last two component expressions, these variables are defined for [D, 2D].
8 Linear System Theory

8.1 INTRODUCTION

In this chapter, the fundamental relationships between the input and output of a linear time invariant system, as illustrated in Figure 8.1, are detailed. Specifically, the relationships between the input and output time signals, Fourier transforms, and power spectral densities are established. Such relationships are fundamental to many aspects of system theory, including the analysis of noise in linear systems and low noise amplifier design. The relationships between the parameters defined in Figure 8.1, and proved in this chapter, are

y(t) = ∫₀ᵗ x(λ)h(t − λ) dλ   (8.1)

Y(T, f) ≈ H(T, f)X(T, f)   (8.2)

where X and Y are the respective Fourier transforms, evaluated on the interval [0, T], of the signals x and y. However, as will be shown in this chapter, the relationship defined in Eq. (8.2) is an approximation. If both x, h ∈ L¹, then the relative error in this approximation can be made arbitrarily small by making T sufficiently large. Stationary random signals, however, are not Lebesgue integrable on the interval (0, ∞) and hence, this convergence is not guaranteed. It is shown, for a broad class of signals and random processes, including periodic signals and stationary random processes, that the corresponding relationship between the input and output power spectral densities, namely,

G_Y(T, f) ≈ |H(T, f)|²G_X(T, f)   (8.3)

becomes exact as T increases without bound. Establishing the relationships, as per Eqs. (8.1)–(8.3), for a linear time invariant system requires the system impulse response to be defined, and this is the subject of the next section.
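The relations in Eqs. (8.1) and (8.2) can be illustrated numerically. The following sketch (an illustration only: the first-order impulse response, time constants, sample spacing, and test frequency are all assumptions, not values from the text) discretizes the convolution integral and compares the windowed Fourier transform of the output with the product of the input and system transforms.

```python
import cmath, math

dt, T = 0.004, 4.0
N = int(T / dt)
tau_h, tau_x = 0.1, 0.5                              # assumed time constants
h = [math.exp(-i * dt / tau_h) for i in range(N)]    # impulse response h(t) = e^{-t/0.1}
x = [math.exp(-i * dt / tau_x) for i in range(N)]    # an L1 input signal

# Eq. (8.1): y(t) = integral_0^t x(lambda) h(t - lambda) d lambda, rectangle rule
y = [dt * sum(x[k] * h[n - k] for k in range(n + 1)) for n in range(N)]

def fourier(sig, f):
    # Fourier transform evaluated on the window [0, T]
    return dt * sum(s * cmath.exp(-2j * math.pi * f * k * dt) for k, s in enumerate(sig))

f = 1.0
Y = fourier(y, f)
approx = fourier(x, f) * fourier(h, f)               # Eq. (8.2)
print(abs(Y - approx) / abs(Y))                      # small relative error
```

Because both signals here are in L¹ and T is much larger than the time constants, the relative error is small, consistent with the convergence result proved later in the chapter.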
Figure 8.1 Schematic diagram of a linear system. EX and EY , respectively, represent the ensemble of input and output signals. H is the Fourier transform of the impulse response function h. GX and GY , respectively, are the power spectral densities of the input and output random processes.
8.2 IMPULSE RESPONSE

Fundamental to defining the impulse response of a time invariant linear system is the function δ_Δ defined by the graph shown in Figure 8.2. The response of a linear time invariant system to the input signal δ_Δ is denoted h_Δ. By definition, the impulse response of a linear system is the output signal, in response to the input signal δ_Δ, as Δ becomes increasingly small, that is,

h(t) = lim_{Δ→0} h_Δ(t)   (8.4)

8.2.1 Restrictions on Impulse Response

General requirements on the impulse response are, first, that it is integrable, that is, h ∈ L¹[0, ∞), and second, that as Δ → 0, the integrated difference between h_Δ and h is negligible, that is, there is convergence in the mean over (0, ∞):

lim_{Δ→0} ∫₀^∞ |h_Δ(t) − h(t)| dt = 0   (8.5)

Figure 8.2 Definition of the function δ_Δ: a pulse of width Δ and height 1/Δ, such that ∫_{−∞}^{∞} δ_Δ(t) dt = 1.
Figure 8.3 Illustration of a function that is Lebesgue integrable on [0, ∞), but is not square Lebesgue integrable on the same interval: a train of pulses, the ith of height i^β and width 1/i^{1+3β/2}.
The following are two examples where, as Δ → 0, the integrated error between h_Δ and h is nonzero. First, the "identity" system where h_Δ(t) = δ_Δ(t) and, second, the system where

h_Δ(t) = Δ   t ∈ [Δ, Δ + 1/Δ]
h_Δ(t) = 0   elsewhere   (8.6)

For both systems, and for t ∈ (0, ∞), it follows that

h(t) = lim_{Δ→0} h_Δ(t) = 0   but   lim_{Δ→0} ∫₀^∞ |h_Δ(t) − h(t)| dt ≠ 0   (8.7)

To ensure h ∈ L¹[0, ∞), and that as Δ → 0 the integrated difference between h_Δ and h is negligible, the following restriction on the set of functions h_Δ, denoted condition 1, is sufficient:

1. There exists a function g ∈ L¹[0, ∞), such that, for all Δ > 0, it is the case that |h_Δ(t)| ≤ g(t) for t ∈ [0, ∞).

The validity of this condition, in terms of guaranteeing that Eq. (8.5) holds, is given by Theorem 2.25. Practical and stable systems are such that h_Δ is bounded and has finite energy for all values of Δ. As per Theorem 2.14, these two criteria are met by condition 1 and the following condition.

2. For Δ > 0, h_Δ is bounded, that is, there exists h_Δ^max < ∞ such that |h_Δ(t)| ≤ h_Δ^max for t ∈ [0, ∞).

This second condition excludes a signal such as 1/√t, which is integrable on intervals of the form [0, t₀], but has infinite energy on all such intervals. It also excludes signals such as the one shown in Figure 8.3, whose integral equals Σᵢ 1/i^{1+β/2}, which from the comparison test (Knopp, 1956 pp. 56f) is finite for β > 0, but whose energy is given by Σᵢ 1/i^{1−β/2} and is infinite when β > 0.
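The distinction between integrability and finite energy drawn above can be seen numerically. In the sketch below (an illustration; the interval endpoints and midpoint discretization are assumptions), the integral of 1/√t over [ε, 1] stays bounded as ε → 0 while its energy grows without bound.

```python
import math

def midpoint_integral(f, a, b, n=100000):
    # midpoint-rule approximation of integral_a^b f(t) dt
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

s = lambda t: 1.0 / math.sqrt(t)
areas, energies = {}, {}
for eps in (1e-2, 1e-4, 1e-6):
    areas[eps] = midpoint_integral(s, eps, 1.0)                       # bounded: -> 2 - 2*sqrt(eps)
    energies[eps] = midpoint_integral(lambda t: s(t) ** 2, eps, 1.0)  # unbounded: -> ln(1/eps)
    print(f"eps={eps:g}  integral={areas[eps]:.4f}  energy={energies[eps]:.2f}")
```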
8.3 INPUT–OUTPUT RELATIONSHIP

Consider the causal linear time invariant system illustrated in Figure 8.1. The well-known relationship between the input and output signals is specified in the following theorem.

Theorem 8.1. Input–Output Relationship for a Linear System If the input signal x to, and the system impulse response h of, a linear time invariant system are both causal, are locally integrable, and have bounded variation on all finite intervals, then the output signal, y, is given by

y(t) = ∫₀ᵗ x(λ)h(t − λ) dλ   t ≥ 0   (8.8)

Proof. The proof of this result is given in Appendix 1.

Note that this result is applicable to unstable systems where h ∉ L¹[0, ∞).
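The size of the truncation error in the transform approximation of Eq. (8.2) can be probed numerically before it is formalized. The sketch below (an illustration; the exponential input, the time constant τ = 0.2, the frequency, and the step size are assumptions) evaluates the error term I(T, f) = ∫₀^T x(λ)e^{−j2πfλ} ∫_{T−λ}^{T} h(p)e^{−j2πfp} dp dλ that is defined in the next section, and shows that it shrinks as T grows when x, h ∈ L¹.

```python
import cmath, math

def error_term(T, f, dt=0.002):
    # I(T,f) = int_0^T x(l) e^{-j2 pi f l} [ int_{T-l}^{T} h(p) e^{-j2 pi f p} dp ] dl
    n = int(T / dt)
    w = 2j * math.pi * f
    x = [math.exp(-k * dt) for k in range(n)]            # assumed L1 input x(t) = e^{-t}
    hp = [math.exp(-k * dt / 0.2) * cmath.exp(-w * k * dt) for k in range(n)]
    # suffix[m] ~ integral from m*dt to T of h(p) e^{-j2 pi f p} dp
    suffix = [0j] * (n + 1)
    for m in range(n - 1, -1, -1):
        suffix[m] = suffix[m + 1] + dt * hp[m]
    return abs(dt * sum(x[k] * cmath.exp(-w * k * dt) * suffix[n - k] for k in range(n)))

errs = [error_term(T, f=0.5) for T in (1.0, 2.0, 4.0)]
print(errs)   # |I(T, f)| decreases as T grows, since x and h are both in L1
```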
8.4 FOURIER AND LAPLACE TRANSFORM OF OUTPUT

The following theorem states the important result of the relationship between the Fourier and Laplace transforms of the input and output of a linear time invariant system.

Theorem 8.2. Transform of Output Signal of a Linear System If both x, h ∈ L¹[0, T], have bounded variation on [0, T], and their respective Fourier transforms are denoted X and H, then the Fourier transform Y of the output signal y, evaluated on [0, T], is given by

Y(T, f) = ∬_{0≤λ,p≤T, p+λ≤T} x(λ)h(p)e^{−j2πf(p+λ)} dλ dp = Ỹ(T, f) − I(T, f) = X(T, f)H(T, f) − I(T, f)   (8.9)

where

Ỹ(T, f) = X(T, f)H(T, f)

I(T, f) = ∬_{0≤λ,p≤T, p+λ>T} x(λ)h(p)e^{−j2πf(p+λ)} dλ dp   (8.10)
Figure 8.4 Illustration of area of integration for Y and I.
and the integration regions for both Y and I are as shown in Figure 8.4. With

X(T, s) = ∫₀^T x(t)e^{−st} dt   (8.11)

and similarly for other Laplace transformed variables, it is the case that

Y(T, s) = ∬_{0≤λ,p≤T, p+λ≤T} x(λ)h(p)e^{−s(p+λ)} dλ dp = X(T, s)H(T, s) − I(T, s)   (8.12)

where

I(T, s) = ∬_{0≤λ,p≤T, p+λ>T} x(λ)h(p)e^{−s(p+λ)} dλ dp   (8.13)
Proof. The proof of this theorem is given in Appendix 2.

For the Fourier transform case, Ỹ(T, f), because of its simplicity, is the approximation that is normally used, and I(T, f) is clearly the error between the approximate and true output Fourier transforms for a given frequency f. The next theorem gives a sufficient condition for the term I to approach zero as the interval under consideration becomes increasingly large.

Theorem 8.3. Convergence of Approximation If both x, h ∈ L¹[0, ∞), and have bounded variation on all closed finite intervals, then

lim_{T→∞} Y(T, f) = lim_{T→∞} X(T, f)H(T, f)   f ∈ R   (8.14)

lim_{T→∞} Y(T, s) = lim_{T→∞} X(T, s)H(T, s)   Re[s] ≥ 0   (8.15)
Further, if h ∈ L¹[0, ∞), x is locally integrable and does not exhibit exponential increase, then Re[s] > 0 is a sufficient condition for

lim_{T→∞} Y(T, s) = lim_{T→∞} X(T, s)H(T, s)   (8.16)
Proof. The proof is given in Appendix 3.

8.4.1 Windowed Input and Nonwindowed Output

For completeness, the response of a linear time invariant system for the case where the input and impulse response are windowed, but the output is not windowed, as illustrated in Figure 8.5, is stated in the following theorem.

Theorem 8.4. Transform of Output Signal: Nonwindowed Case If both x, h ∈ L¹[0, T], and have bounded variation on [0, T], then the Fourier and Laplace transforms Y of the output signal y, which is not windowed, are given by

Y(2T, f) = X(T, f)H(T, f)   (8.17)

Y(2T, s) = X(T, s)H(T, s)   (8.18)

Proof. The proof of this result is given in Appendix 4.

This result has application when the output signal y is to be derived for the interval [0, T]. The procedure is as follows for the Fourier transform case. First, evaluate X(T, f) and H(T, f); second, evaluate Y(2T, f) = X(T, f)H(T, f); and third, evaluate y by taking the inverse Fourier transform of Y(2T, f). The evaluated response is valid for the interval [0, T], but not [T, 2T].

8.4.2 Fourier Transform of Output — Power Input Signals

Theorem 8.3 states that lim_{T→∞} Y(T, f) = lim_{T→∞} Ỹ(T, f), provided x, h ∈ L¹. However, for the common case of signals whose average power evaluated on [0, T] does not significantly vary with T, for example, stationary or periodic
Figure 8.5 Illustration of waveforms in a linear system for the case where the impulse response and the input are windowed but the output is not.
signals, it is the case that x ∉ L¹. For this situation, it can be the case that lim_{T→∞} Y(T, f) ≠ lim_{T→∞} Ỹ(T, f) almost everywhere. The following example illustrates this point.

8.4.2.1 Example Consider a linear system with an impulse response and input signal, respectively, defined according to

h(t) = (h₀/τ)e^{−t/τ}   t ≥ 0      x(t) = √2 sin(2πf_x t)   t ≥ 0   (8.19)

For the case where h₀ = 1, τ = 0.1, T = 1, and f_x = 4, the output signal y is plotted in Figure 8.6. For these parameters, the magnitude of the true, Y, and approximate, Ỹ, Fourier transforms, as well as the magnitude of the error, I, between these transforms, is plotted in Figure 8.7. To establish bounds on the integral I, and hence, on how well Ỹ approximates Y, note that

|I(T, f)| ≤ ∫₀^T |x(λ)| ∫_{T−λ}^T |h(p)| dp dλ = √2(h₀/τ) ∫₀^T ∫_{T−λ}^T e^{−p/τ} dp dλ
= √2 h₀[τ(1 − e^{−T/τ}) − Te^{−T/τ}]   (8.20)

When T is sufficiently large, such that Te^{−T/τ}/τ ≪ 1, it follows that |I(T, f)| ≤ √2 h₀τ, which is independent of the interval length T, and only depends on
Figure 8.6 Output waveform y.
Figure 8.7 Magnitude of the true and approximate Fourier transforms of the output signal, as well as the magnitude of the error between these two transforms, for the case where T = 1.
the input signal amplitude and the system impulse response characteristics h₀ and τ. For the given parameters, the bound for |I(T, f)| is 0.141. From Figure 8.7, it follows that the maximum magnitude of I is 0.05, which is within this bound. Further, the level of the error defined by I does not increase or decrease as the interval length T increases (see Figures 8.8 and 8.9). In Figure 8.8 the

Figure 8.8 Magnitude of the true Fourier transform of the output signal for the cases where T = 2 (lower peak) and T = 4 (higher peak).
Figure 8.9 Magnitude of the approximate Fourier transform Ỹ of the output signal, for the cases where T = 2 (lower peak) and T = 4 (higher peak). The magnitude of the error between the true and approximate Fourier transforms is identical for T = 2 and T = 4, and is the smooth curve.
magnitude of the true Fourier transform Y is plotted for the cases T = 2 and T = 4. In Figure 8.9, the magnitude of the approximate Fourier transform Ỹ, as well as the error I, are graphed for the cases T = 2 and T = 4. As T increases, the lobe at the frequency of the input (4 Hz) increases in height and decreases in width. Away from the lobe, the envelope of the magnitude of both Y and Ỹ remains constant as T increases and, consistent with this, I does not change with T. Clearly, for this example the approximate Fourier transform Ỹ does not converge to the true Fourier transform Y, defined in Eq. (8.9).

8.4.2.2 Explanation An explanation of the nonconvergence of Ỹ(T, f) to Y(T, f) as T → ∞, for signals with constant average power, can be found by noting that I can be approximated by an integral over the region defined in Figure 8.10, where t_h is a time such that ∫_{t_h}^{∞} |h(p)| dp ≪ ∫₀^{t_h} |h(p)| dp. The magnitude of this integral is relatively insensitive to an increase in the value of T. That is, as T increases the error defined by I remains relatively static. For the case where x ∈ L¹, the magnitude of ∫_{T−t_h}^{T} |x(λ)| dλ decreases, in general, as T increases, and the error defined by I converges to zero.

8.4.2.3 Power Spectral Density Clearly, on a finite interval [0, T], it is the case that

G_Y(T, f) = |Y(T, f)|²/T ≠ |X(T, f)H(T, f)|²/T   a.e.   (8.21)
Figure 8.10 Region of integration where there is a significant contribution to the integral I. The time t_h is the time beyond which the impulse response has negligible magnitude, as defined in the text.
as I(T, f) is finite. However, for the infinite interval, it follows, as |I(T, f)| does not increase with T, that

lim_{T→∞} G_I(T, f) = lim_{T→∞} |I(T, f)|²/T = 0   (8.22)

A consequence of this result is that

lim_{T→∞} G_Y(T, f) = lim_{T→∞} |X(T, f)|²|H(T, f)|²/T = lim_{T→∞} |H(T, f)|²G_X(T, f)   (8.23)

In fact, as shown in the next section, this last result holds for a broad class of signals that are not elements of L¹[0, ∞).
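The limiting relationship in Eq. (8.23) can be illustrated by simulation. In the sketch below (an illustration only: the one-pole discrete-time filter, sample spacing, trial count, and random seed are assumptions), approximately white noise is passed through a first-order system and the averaged output and input periodogram estimates are compared with the squared magnitude of the filter's transfer function.

```python
import cmath, math, random

random.seed(1)
dt, N, trials = 0.01, 512, 200
tau = 0.05                                # assumed filter time constant
a = math.exp(-dt / tau)                   # one-pole filter y[n] = a*y[n-1] + (1-a)*x[n]
f = 3.0                                   # assumed analysis frequency (Hz)

def periodogram(sig, freq):
    # G(T, f) = |X(T, f)|^2 / T with a rectangle-rule transform on [0, T]
    X = dt * sum(s * cmath.exp(-2j * math.pi * freq * k * dt) for k, s in enumerate(sig))
    return abs(X) ** 2 / (N * dt)

# squared magnitude of the filter's discrete-time transfer function at f
z = cmath.exp(2j * math.pi * f * dt)
H2 = abs((1 - a) / (1 - a / z)) ** 2

gx = gy = 0.0
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(N)]
    y = [0.0] * N
    for n in range(1, N):
        y[n] = a * y[n - 1] + (1 - a) * x[n]
    gx += periodogram(x, f) / trials
    gy += periodogram(y, f) / trials

print(gy / gx, H2)   # the periodogram ratio approximates |H(f)|^2
```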
8.5 INPUT—OUTPUT POWER SPECTRAL DENSITY RELATIONSHIP

Consider the case where the input random process X to a linear system is defined on the interval [0, T] by the ensemble

E_X = {x: S_γ × [0, T] → R}   S_γ ⊂ Z⁺ countable case, S_γ ⊂ R uncountable case   (8.24)

where P[x(γ, t)] = P[γ] = p_γ for the countable case, and P[x(γ, t), γ ∈ [γ₀, γ₀ + dγ]] = P[γ ∈ [γ₀, γ₀ + dγ]] = f_γ(γ₀) dγ for the uncountable case. Here, f_γ is the probability density function associated with the index random variable γ, whose sample space is S_γ.
The output waveforms define a random process Y with an ensemble

E_Y = {y: S_γ × [0, T] → R, y(γ, t) = ∫₀ᵗ x(γ, λ)h(t − λ) dλ, γ ∈ S_γ}   (8.25)

where P[y(γ, t)] = P[x(γ, t)]. For subsequent analysis, it is convenient to define the random process I, whose ensemble E_I is

E_I = {i: S_γ × [0, T] → C, i(γ, t) = ∫_{−∞}^{∞} I(γ, T, f)e^{j2πft} df, γ ∈ S_γ},   I(γ, T, f) = Ỹ(γ, T, f) − Y(γ, T, f)   (8.26)

where P[i(γ, t)] = P[x(γ, t)]. The power spectral density of the output signal is stated in the following theorem. The subsequent theorem states the convergence of G_Y(T, f) to |H(T, f)|²G_X(T, f) as T → ∞.

Theorem 8.5. Power Spectral Density of Output Random Process If x ∈ E_X and h have bounded variation on all closed finite intervals, and x and h are locally integrable, then

G_Y(T, f) = |H(T, f)|²G_X(T, f) − 2Re[H(T, f)G_XI(T, f)] + G_I(T, f)   (8.27)

where

G_XI(T, f) = (1/T) Σ_γ p_γ X(γ, T, f)I*(γ, T, f)   countable case
G_XI(T, f) = (1/T) ∫_{−∞}^{∞} X(γ, T, f)I*(γ, T, f)f_γ(γ) dγ   uncountable case   (8.28)

Proof. Consider the countable case. By definition,

G_Y(T, f) = Σ_γ p_γ |Y(γ, T, f)|²/T   (8.29)

Local integrability of x ∈ E_X and h guarantees that the Fourier transforms X and H exist. The relationships Y(γ, T, f) = Ỹ(γ, T, f) − I(γ, T, f) and Ỹ(γ, T, f) = X(γ, T, f)H(T, f) then result in the power spectral density of Y as detailed.

Theorem 8.6. Convergence of Output Power Spectral Density Assume for all signals x ∈ E_X that x is locally integrable, x has bounded variation on all closed finite intervals, and the average power of x does not increase with the
interval length. Further, assume that h has bounded variation on all closed finite intervals and h ∈ L¹[0, ∞). It then follows that

lim_{T→∞} G_Y(T, f) = lim_{T→∞} |H(T, f)|²G_X(T, f) = lim_{T→∞} G_Ỹ(T, f)   (8.30)

or, using a more convenient notation,

G_Y(f) = |H(f)|²G_X(f)   (8.31)

Further,

lim_{T→∞} G_I(T, f) = 0   (8.32)

If lim_{T→∞} G_Ỹ(T, f) < ∞ or lim_{T→∞} G_Ỹ(T, f) = 0, then

lim_{T→∞} |H(T, f)||G_XI(T, f)| = 0   (8.33)

If lim_{T→∞} G_Ỹ(T, f) = ∞, then

lim_{T→∞} |H(T, f)||G_XI(T, f)|/G_Ỹ(T, f) = 0   (8.34)

Proof. The proof of this theorem is given in Appendix 5.

8.5.1 Notes

As shown in Appendix 5, there is finite energy associated with I(T, f), and the power associated with I(T, f) decreases to zero as T → ∞. This fact results in the average power in |H(T, f)||G_XI(T, f)| and G_I(T, f) being negligible compared with the average power in G_Ỹ(T, f) as T → ∞. The required result as given by Eq. (8.30) then follows from

G_Y(T, f) − G_Ỹ(T, f) = −2Re[H(T, f)G_XI(T, f)] + G_I(T, f)   (8.35)

To establish the rate of convergence of G_Y(T, f) to G_Ỹ(T, f) when f is fixed, consider the single waveform case and a bound on the relative error between G_Y(T, f) and G_Ỹ(T, f), given by

0 ≤ |G_Y(T, f) − G_Ỹ(T, f)|/G_Ỹ(T, f) ≤ [2|H(T, f)||G_XI(T, f)| + G_I(T, f)]/G_Ỹ(T, f)   G_Ỹ(T, f) ≠ 0   (8.36)

As |G_XI(T, f)| = |X(T, f)||I(T, f)|/T, G_I(T, f) = |I(T, f)|²/T, and |I(T, f)| does
not increase with T, whereas |X(T, f)| generally does, it follows that a reasonable bound on the relative error is

0 ≤ |G_Y(T, f) − G_Ỹ(T, f)|/G_Ỹ(T, f) ≤ 2|H(T, f)||G_XI(T, f)|/G_Ỹ(T, f) = 2|I(T, f)|/[|H(T, f)||X(T, f)|]   G_Ỹ(T, f) ≠ 0   (8.37)

For the case where lim_{T→∞} |X(T, f)|/√T is finite, but nonzero, the relative error is proportional to 1/√T. This case is consistent with a bounded power spectral density on the infinite interval. For the case where lim_{T→∞} |X(T, f)|/T is finite, the relative error is proportional to 1/T. This case is consistent with an unbounded power spectral density on the infinite interval. Such a case occurs for periodic signals at specific frequencies.

The relationship given in Eq. (8.31) underpins a significant level of analysis of noise in linear systems. One application of this result is in characterizing the noise level of an electronic circuit. Such a characterization is fundamental to low noise electronic design and is the subject of Chapter 9. The following subsection gives an important example where the relationship given in Eq. (8.31) cannot be applied.

8.5.2 Example — Oscillator Noise

A quadrature oscillator is an entity that generates signals of the form

x(t) = A cos[2πf_c t + φ(t)]      y(t) = A sin[2πf_c t + φ(t)]   (8.38)

where typically |φ′(t)| ≪ 2πf_c. Such signals arise from the differential equations

x′ = −[2πf_c + φ′]y   x(0) = A cos[φ(0)]   (8.39)
y′ = [2πf_c + φ′]x   y(0) = A sin[φ(0)]   (8.40)

This result can be proved by substitution of x and y into the differential equations. For the case where the modulation φ is zero, a quadrature sinusoidal oscillator results and can be implemented as per the prototypical structure shown in Figure 8.11. In this figure, n₁ and n₂ are independent noise sources to account for the noise in the integrators and following circuitry. With the noise sources n₁ and n₂, the differential equations characterizing the circuit of Figure 8.11 are

x′ = −2πf_c(y + n₁)      y′ = 2πf_c(x + n₂)   (8.41)

Differentiation and substitution yields the following differential equation for x:

x″ + 4π²f_c²x = −2πf_c n₁′ − 4π²f_c²n₂   (8.42)
Figure 8.11 Prototypical quadrature oscillator structure.
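A direct simulation of Eq. (8.41) illustrates the behavior that makes oscillator noise analysis difficult. The following Euler-Maruyama sketch (an illustration; the noise intensity, step size, and seed are assumptions, and the crude integration scheme is a deliberate simplification) shows that the independent noise sources leave the amplitude √(x² + y²) near its initial value, so the disturbance appears mainly as phase noise.

```python
import math, random

random.seed(0)
fc, dt, steps = 1.0, 1e-3, 5000
w = 2 * math.pi * fc
sigma = 0.01                       # assumed noise intensity of n1, n2
x, y = 1.0, 0.0                    # x(0) = A cos[phi(0)], y(0) = A sin[phi(0)], A = 1

for _ in range(steps):
    n1 = sigma * random.gauss(0, 1) / math.sqrt(dt)   # white-noise increments
    n2 = sigma * random.gauss(0, 1) / math.sqrt(dt)
    dx = -w * (y + n1) * dt        # Eq. (8.41), first equation
    dy = w * (x + n2) * dt         # Eq. (8.41), second equation
    x, y = x + dx, y + dy

amp = math.hypot(x, y)
print(amp)                         # stays near 1: the noise mainly diffuses the phase
```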
As this is a linear differential equation, it follows that the quadrature oscillator can be modeled, as far as the output x is concerned, as shown in Figure 8.12. The impulse responses in this figure are the solutions of the differential equations

x″ + 4π²f_c²x = −2πf_c δ′(t)      x″ + 4π²f_c²x = −4π²f_c²δ(t)   (8.43)

which equate to the solutions of

x″ + 4π²f_c²x = 0   t > 0,   x(0) = −2πf_c, x′(0) = 0   (8.44)
x″ + 4π²f_c²x = 0   t > 0,   x(0) = 0, x′(0) = −4π²f_c²   (8.45)

It then follows that the respective impulse responses are

h₁(t) = −2πf_c cos(2πf_c t)      h₂(t) = −2πf_c sin(2πf_c t)   (8.46)
Clearly, h₁, h₂ ∉ L¹[0, ∞), and H₁(T, f) and H₂(T, f) do not converge as T → ∞. Consequently, Theorem 8.6 cannot be used when ascertaining the noise
Figure 8.12 Equivalent model for the output signal x of the quadrature oscillator shown in Figure 8.11.
characteristics of an oscillator. This fact is overlooked in a significant proportion of the literature (Demir, 1998 p. 164), and alternative approaches are required to characterize the noise of an oscillator [see, for example, Demir (1998 ch. 6)].

8.6 MULTIPLE INPUT—MULTIPLE OUTPUT SYSTEMS

Two possible multiple input—multiple output (MIMO) systems are illustrated in Figures 8.13 and 8.14, where the input signals x₁, . . . , x_N, respectively, are from the ensembles E_X1, . . . , E_XN, defining the random processes X₁, . . . , X_N. By definition

E_Xi = {x_i: S_γ × [0, T] → R}   S_γ ⊂ Z⁺ countable case, S_γ ⊂ R uncountable case   (8.47)

For the system shown in Figure 8.13, one signal from the ensemble for the output random process Z_u can be written in the form

z_u(ζ, t) = Σ_{i=1}^{N} w_{ui}y_i(γ_i, t) = Σ_{i=1}^{N} w_{ui} ∫₀ᵗ x_i(γ_i, λ)h_i(t − λ) dλ   γ_i ∈ S_γ   (8.48)

where ζ = (γ₁, . . . , γ_N). On the interval [0, T], it follows from Theorem 8.2, that

Z_u(ζ, T, f) = Σ_{i=1}^{N} w_{ui}Y_i(γ_i, T, f) = Σ_{i=1}^{N} w_{ui}[H_i(T, f)X_i(γ_i, T, f) − I_i(γ_i, T, f)]   (8.49)
Figure 8.13 Schematic diagram of a multiple input—multiple output system.
Figure 8.14 Schematic diagram of an alternative multiple input—multiple output system.
The following theorem states the relationship between the output and input power spectral densities of the system illustrated in Figure 8.13.

Theorem 8.7. Power Spectral Density Relationships for a Multiple Input—Multiple Output System If x_i ∈ E_Xi and h_i have bounded variation on all closed finite intervals, x_i is locally integrable, h_i ∈ L¹[0, ∞) and the input random processes X₁, . . . , X_N are independent with zero means, then, for the infinite interval [0, ∞), it is the case that

G_Zu(f) = Σ_{i=1}^{N} |w_{ui}|²G_Yi(f) = Σ_{i=1}^{N} |w_{ui}|²|H_i(f)|²G_Xi(f)   (8.50)

where, for convenience of notation, the subscript ∞ has been dropped. For the general case, where X₁, . . . , X_N are not necessarily independent with zero means, the following result holds for the infinite interval:

G_Zu(f) = Σ_{i=1}^{N} |w_{ui}|²G_Yi(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_{ui}w*_{uj}G_YiYj(f)
= Σ_{i=1}^{N} |w_{ui}|²|H_i(f)|²G_Xi(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_{ui}w*_{uj}H_i(f)H*_j(f)G_XiXj(f)   (8.51)

On the infinite interval, the cross power spectral density between the random processes X_i or Y_i and Y_j is given by

G_XiYj(f) = H*_j(f)G_XiXj(f)   (8.52)
G_YiYj(f) = H_i(f)H*_j(f)G_XiXj(f)   (8.53)
On the infinite interval, the cross power spectral density between Z_u and Z_v is given by

G_ZuZv(f) = Σ_{i=1}^{N} w_{ui}w*_{vi}G_Yi(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_{ui}w*_{vj}G_YiYj(f)
= Σ_{i=1}^{N} w_{ui}w*_{vi}|H_i(f)|²G_Xi(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_{ui}w*_{vj}H_i(f)H*_j(f)G_XiXj(f)   (8.54)

Proof. The proof is given in Appendix 6.

8.6.1 Alternative Multiple Input—Multiple Output System

Consider one signal from the ensemble for the output random process Z_u, defined by the structure of Figure 8.14,

z_u(ζ, t) = Σ_{i=1}^{N} y_{ui}(γ_i, t) = Σ_{i=1}^{N} ∫₀ᵗ x_i(γ_i, λ)h_{ui}(t − λ) dλ   γ_i ∈ S_γ   (8.55)

where ζ = (γ₁, . . . , γ_N). On an interval [0, T], it follows from Theorem 8.2, that

Z_u(ζ, T, f) = Σ_{i=1}^{N} Y_{ui}(γ_i, T, f) = Σ_{i=1}^{N} [H_{ui}(T, f)X_i(γ_i, T, f) − I_{ui}(γ_i, T, f)]   (8.56)

The following theorem details the appropriate power spectral density relationships for this system.

Theorem 8.8. Power Spectral Density Relationships for the Alternative Multiple Input—Multiple Output System If x_i ∈ E_Xi and h_{ij} have bounded variation on all closed finite intervals, x_i is locally integrable and h_{ij} ∈ L¹[0, ∞), then the following result holds for the infinite interval:

G_Zu(f) = Σ_{i=1}^{N} G_Yui(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} G_YuiYuj(f)
= Σ_{i=1}^{N} |H_{ui}(f)|²G_Xi(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} H_{ui}(f)H*_{uj}(f)G_XiXj(f)   (8.57)

On the infinite interval, the cross power spectral density between Z_u and Z_v is given by

G_ZuZv(f) = Σ_{i=1}^{N} G_YuiYvi(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} G_YuiYvj(f)
= Σ_{i=1}^{N} H_{ui}(f)H*_{vi}(f)G_Xi(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} H_{ui}(f)H*_{vj}(f)G_XiXj(f)   (8.58)

Proof. The proof of this theorem is given in Appendix 7.
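The first result of Theorem 8.7 can be checked by simulation for N = 2. In the sketch below (an illustration only: the one-pole filters, weights, trial count, and seed are assumptions), two independent, zero mean, approximately white inputs are filtered and summed with real weights, and the averaged output spectral estimate is compared with Σᵢ wᵢ²|Hᵢ(f)|²G_Xi(f).

```python
import cmath, math, random

random.seed(2)
dt, N, trials = 0.01, 256, 300
taus, weights = (0.05, 0.2), (1.0, 0.5)   # assumed filter time constants and weights
f = 2.0                                   # assumed analysis frequency (Hz)

def spec_est(sig, freq):
    X = dt * sum(s * cmath.exp(-2j * math.pi * freq * k * dt) for k, s in enumerate(sig))
    return abs(X) ** 2 / (N * dt)

def one_pole(x, tau):
    a = math.exp(-dt / tau)
    y = [0.0] * N
    for n in range(1, N):
        y[n] = a * y[n - 1] + (1 - a) * x[n]
    return y

def H2(tau, freq):
    a, z = math.exp(-dt / tau), cmath.exp(2j * math.pi * freq * dt)
    return abs((1 - a) / (1 - a / z)) ** 2

gz, gx = 0.0, [0.0, 0.0]
for _ in range(trials):
    xs = [[random.gauss(0, 1) for _ in range(N)] for _ in taus]   # independent inputs
    ys = [one_pole(x, tau) for x, tau in zip(xs, taus)]
    z = [weights[0] * ys[0][n] + weights[1] * ys[1][n] for n in range(N)]
    gz += spec_est(z, f) / trials
    for i in (0, 1):
        gx[i] += spec_est(xs[i], f) / trials

predicted = sum(weights[i] ** 2 * H2(taus[i], f) * gx[i] for i in (0, 1))
print(gz, predicted)   # Eq. (8.50): weighted sum of |H_i|^2 G_Xi
```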
APPENDIX 1: PROOF OF THEOREM 8.1

If the input signal x ∈ L¹[0, T] and is of bounded variation then, as per Theorem 2.19, for all ε > 0 there exists a Δ > 0, such that x can be approximated on the interval [0, T] by a step function x_Δ, defined according to

x_Δ(t) = x(Δ⌊t/Δ⌋) = Δ Σᵢ x(iΔ)δ_Δ(t − iΔ)   t ∈ [0, T]   (8.59)

such that ∫₀^T |x(t) − x_Δ(t)| dt ≤ ε. Since the response of the system to the signal δ_Δ is h_Δ, it follows, from the causality, time invariance, and linearity of the system, that the output y_Δ, at a time t ∈ [0, T], in response to the signal x_Δ, is

y_Δ(t) = Δ[x(0)h_Δ(t) + ⋯ + x(Δ⌊t/Δ⌋)h_Δ(t − Δ⌊t/Δ⌋)] = Δ Σᵢ x(iΔ)h_Δ(t − iΔ)   (8.60)

It remains to show that lim_{Δ→0} y_Δ(t) = y(t) and lim_{Δ→0} ∫₀^T |y(t) − y_Δ(t)| dt = 0. To prove these results, it is convenient to define a function z according to

z(t, λ) = x(λ)h(t − λ)   ⇒   y(t) = ∫₀ᵗ z(t, λ) dλ   (8.61)
If both x and h are causal and of bounded variation, then for any fixed value of t and consistent with Figure 8.15, z can be approximated for λ ∈ [0, t] by a step function z_Δ,

z_Δ(t, λ) = Δ Σᵢ x(iΔ)h(t − iΔ)δ_Δ(λ − iΔ)   λ ∈ [0, t]   (8.62)
Figure 8.15 Illustration of the functions comprising z, and step approximations to them.
Theorem 2.19 then implies that for ε > 0 there exists a Δ > 0, such that

∫₀ᵗ |z(t, λ) − z_Δ(t, λ)| dλ < ε   ⇒   lim_{Δ→0} ∫₀ᵗ z_Δ(t, λ) dλ = ∫₀ᵗ z(t, λ) dλ   (8.63)

Further, it follows that

∫₀ᵗ z_Δ(t, λ) dλ = Δ Σᵢ x(iΔ)h(t − iΔ) ∫₀ᵗ δ_Δ(λ − iΔ) dλ = y_Δ(t) + ε_Δ   (8.64)

where y_Δ is as in Eq. (8.60), with h_Δ replaced by h consistent with Eq. (8.5), and ε_Δ is the correction for the final subinterval, whose pulse δ_Δ(λ − Δ⌊t/Δ⌋) extends beyond t:

ε_Δ = −x(Δ⌊t/Δ⌋)h(t − Δ⌊t/Δ⌋) ∫_t^{Δ⌊t/Δ⌋+Δ} Δδ_Δ(λ − Δ⌊t/Δ⌋) dλ   (8.65)

Clearly, ε_Δ → 0 as Δ → 0. Thus, from Eqs. (8.61), (8.63), and (8.64), it follows
that lim_{Δ→0} y_Δ(t) = y(t). Finally, from Eq. (8.63), it follows that

∫₀^T |y(t) − y_Δ(t)| dt = ∫₀^T |∫₀ᵗ z(t, λ) dλ − ∫₀ᵗ z_Δ(t, λ) dλ + ε_Δ| dt
≤ ∫₀^T [∫₀ᵗ |z(t, λ) − z_Δ(t, λ)| dλ + |ε_Δ|] dt ≤ εT + |ε_Δ|T   (8.66)
APPENDIX 2: PROOF OF THEOREM 8.2 With the stated assumptions, it follows that Y (T, f ) : :
2
2
y(t)e\HLDR dt : x()e\HLDH
2
2
R
x()h(t 9 ) d e\HLDR dt (8.67)
h(t 9 )e\HLDR\H dt d
H
The region of integration is illustrated in Figure 8.16. A change of variable p = t − λ in the inner integral of the last equation yields

Y(T, f) = ∫₀^T x(λ)e^{−j2πfλ} ∫₀^{T−λ} h(p)e^{−j2πfp} dp dλ
= ∫₀^T x(λ)e^{−j2πfλ} [∫₀^T h(p)e^{−j2πfp} dp − ∫_{T−λ}^T h(p)e^{−j2πfp} dp] dλ   (8.68)

Figure 8.16 Region of integration for evaluation of Y.
which can be written in the form

Y(T, f) = X(T, f)H(T, f) − I(T, f)   (8.69)

where

I(T, f) = ∫₀^T x(λ)e^{−j2πfλ} ∫_{T−λ}^T h(p)e^{−j2πfp} dp dλ   (8.70)

The region of integration for the integral I is area 2 in Figure 8.16 prior to the change of variable p = t − λ, and the region specified in Figure 8.4 after this change. The results for the Laplace transform case follow in an analogous manner, with use of the substitution s = j2πf.
APPENDIX 3: PROOF OF THEOREM 8.3

If x, h ∈ L¹[0, ∞), then for all ε > 0 there exist numbers λ_ε and p_ε, such that

|∫_{λ_ε}^{∞} x(λ)e^{−j2πfλ} dλ| ≤ ∫_{λ_ε}^{∞} |x(λ)| dλ < ε   (8.71)

|∫_{p_ε}^{∞} h(p)e^{−j2πfp} dp| ≤ ∫_{p_ε}^{∞} |h(p)| dp < ε   (8.72)

It then follows that the region over which there is a significant contribution to the integral I is as illustrated in Figure 8.17. The illustration is for the case where λ_ε + p_ε < T. Clearly, as T increases, the region over which there is a significant contribution to the integral I decreases, and for T > λ_ε + p_ε, it is expected that the value of the integral I can be made arbitrarily small, and this is shown below.
Figure 8.17 Illustration of area of integration to establish a bound on the integral I. The illustration is for the case where λ_ε + p_ε < T.
Figure 8.18 Illustration of area of integration to establish a bound on the integral I. The illustration is for the case where λ_ε + p_ε > T.
The case where T < λ_ε + p_ε is illustrated in Figure 8.18. Using the regions 1, 2, and 3 defined in this figure, the following bound on the error integral I can be established:

|I(T, f)| ≤ |∫₀^{λ_ε} x(λ)e^{−j2πfλ} dλ| |∫_{p_ε}^T h(p)e^{−j2πfp} dp|
+ |∫_{λ_ε}^T x(λ)e^{−j2πfλ} dλ| |∫₀^{p_ε} h(p)e^{−j2πfp} dp|
+ |∫_{λ_ε}^T x(λ)e^{−j2πfλ} dλ| |∫_{p_ε}^T h(p)e^{−j2πfp} dp|   (8.73)

From the definitions for λ_ε and p_ε, it follows that

|I(T, f)| ≤ ε ∫₀^{λ_ε} |x(λ)| dλ + ε ∫₀^{p_ε} |h(p)| dp + ε²   (8.74)

Thus, for any ε > 0, there exists a T, such that the error bound for the absolute difference between Y and Ỹ, as given by |I|, is less than kε for some fixed k, which is independent of T. When x ∈ L¹[0, ∞), the results for the Laplace transform case follow in an analogous manner, with use of the substitution s = j2πf. When x is locally integrable, x does not have exponential increase, and Re[s] > 0, then it is the case that x(t)e^{−Re[s]t} ∈ L¹[0, ∞). Use of the above approach, in an analogous manner, yields the required proof.
APPENDIX 4: PROOF OF THEOREM 8.4

The graphs of x(λ) and h(t − λ) are shown in Figure 8.19 for the case where h and x are windowed, such that they are zero outside the interval [0, T]. It then follows that

y(t) = ∫₀ᵗ x(λ)h(t − λ) dλ   0 ≤ t ≤ T
y(t) = ∫_{t−T}^T x(λ)h(t − λ) dλ   T ≤ t ≤ 2T
y(t) = 0   elsewhere   (8.75)
Hence, assuming both x, h ∈ L¹[0, T], it follows that

Y(2T, f) = ∫₀^{2T} y(t)e^{−j2πft} dt
= ∫₀^T ∫₀ᵗ x(λ)h(t − λ) dλ e^{−j2πft} dt + ∫_T^{2T} ∫_{t−T}^T x(λ)h(t − λ) dλ e^{−j2πft} dt   (8.76)

The region of integration comprises areas 1 and 2 shown in Figure 8.16. Changing the order of integration yields

Y(2T, f) = ∫₀^T [∫_λ^{λ+T} h(t − λ)e^{−j2πf(t−λ)} dt] x(λ)e^{−j2πfλ} dλ   (8.77)
A change of variable p = t − λ in the inner integral yields

Y(2T, f) = ∫₀^T x(λ)e^{−j2πfλ} ∫₀^T h(p)e^{−j2πfp} dp dλ = X(T, f)H(T, f)   (8.78)

which is the required result.
Figure 8.19 Graph of windowed input signal and shifted windowed impulse response function.
The results for the Laplace transform case follow in an analogous manner, with use of the substitution s = j2πf.
APPENDIX 5: PROOF OF THEOREM 8.6

Consider the countable case, where a subscript, rather than an argument, is used, that is, x_i(t) rather than x(γ_i, t). First, for the case where h ∈ L¹[0, ∞), the region of integration for I_i(T, f) can be approximated by the region shown in Figure 8.10, that is,

I_i(T, f) ≈ ∫₀^{t_h} h(p)e^{−j2πfp} ∫_{T−p}^T x_i(λ)e^{−j2πfλ} dλ dp   (8.79)

It then follows that an upper bound on |I_i(T, f)| is given by

|I_i(T, f)| ≤ k_I ∫₀^{t_h} |h(p)| ∫_{T−t_h}^T |x_i(λ)| dλ dp ≤ k_I sup_X ∫₀^{t_h} |h(p)| dp   (8.80)

where k_I is of the order of unity and

sup_X = sup{∫_{T−t_h}^T |x_i(λ)| dλ: i ∈ Z⁺, T > t_h, T ∈ R⁺}   (8.81)
As per Theorem 2.13, it follows, if the average power on [T 9 t , T ] does not F increase with T, that sup is finite and I (T, f ) does not increase with T. It then 6 G follows that I (T, f ) lim G : lim G (T, f ) : 0 ' T 2 2 G
(8.82)
This result then implies the following for f fixed: First, if X_i(T, f) is constant with respect to T, or is such that \lim_{T\to\infty} X_i(T, f)/\sqrt{T} = 0, then

\lim_{T\to\infty} H(T, f)\,G_{X_iI_i}(T, f) = \lim_{T\to\infty} \frac{H(T, f)\,X_i(T, f)\,I_i^*(T, f)}{T} = 0    (8.83)

For this case, both G_{X_i}(T, f) and G_{Y_i'}(T, f) converge to zero as T → ∞. Second, if X_i(T, f) is such that \lim_{T\to\infty} X_i(T, f)/\sqrt{T} is finite and nonzero, then both
G_{X_i}(T, f) and G_{Y_i'}(T, f) are bounded as T → ∞ and

\lim_{T\to\infty} H(T, f)\,G_{X_iI_i}(T, f) = \lim_{T\to\infty} \frac{H(T, f)\,X_i(T, f)\,I_i^*(T, f)}{T} = 0    (8.84)

Third, if X_i(T, f) is such that \lim_{T\to\infty} X_i(T, f)/\sqrt{T} is infinite, then both G_{X_i}(T, f) and G_{Y_i'}(T, f) are unbounded as T → ∞. It is then the case that

\lim_{T\to\infty} \frac{|H(T, f)\,G_{X_iI_i}(T, f)|}{G_{Y_i'}(T, f)} = \lim_{T\to\infty} \frac{|I_i(T, f)|}{|H(T, f)|\,|X_i(T, f)|} = 0    (8.85)

When these results hold for all signals in the ensemble, and this is guaranteed by the assumptions made, the following results hold:

\lim_{T\to\infty} G_I(T, f) = 0    (8.86)

If \lim_{T\to\infty} G_{Y'}(T, f) < \infty, or \lim_{T\to\infty} G_{Y'}(T, f) = 0, then

\lim_{T\to\infty} H(T, f)\,G_{XI}(T, f) = 0    (8.87)

If \lim_{T\to\infty} G_{Y'}(T, f) = \infty, then

\lim_{T\to\infty} \frac{|H(T, f)\,G_{XI}(T, f)|}{G_{Y'}(T, f)} = 0    (8.88)

\lim_{T\to\infty} G_Y(T, f) = \lim_{T\to\infty} G_{Y'}(T, f) = \lim_{T\to\infty} |H(T, f)|^2\,G_X(T, f)    (8.89)
APPENDIX 6: PROOF OF THEOREM 8.7

The proof of the first result follows directly from Theorems 4.6 and 8.6. The first form of the second result follows directly from Theorem 4.6. The second form of the second result follows from the definition of the cross power spectral density and Theorem 8.6. To show this, consider the countable case and the notation P[x_i(γ_i, t), x_j(γ_j, t)] = p_{ij}(γ_i, γ_j). By definition,

G_{Y_iY_j}(f) = \lim_{T\to\infty} \frac{1}{T} \sum_{\gamma_i}\sum_{\gamma_j} p_{ij}(\gamma_i, \gamma_j)\,Y_i(\gamma_i, T, f)\,Y_j^*(\gamma_j, T, f)
= \lim_{T\to\infty} \frac{1}{T} \sum_{\gamma_i}\sum_{\gamma_j} p_{ij}(\gamma_i, \gamma_j)\,[X_i(\gamma_i, T, f)H_i(T, f) - I_i(\gamma_i, T, f)]\,[X_j^*(\gamma_j, T, f)H_j^*(T, f) - I_j^*(\gamma_j, T, f)]    (8.90)
Using definitions for the cross power spectral density, it follows that

G_{Y_iY_j}(f) = \lim_{T\to\infty} \big[ G_{X_iX_j}(T, f)\,H_i(T, f)\,H_j^*(T, f) - G_{X_iI_j}(T, f)\,H_i(T, f) - G_{X_jI_i}^*(T, f)\,H_j^*(T, f) + G_{I_iI_j}(T, f) \big]    (8.91)

It follows, from a similar argument to that used in the proof of Theorem 8.6, that the relative magnitudes of the cross power spectral density terms G_{X_iI_j}H_i, G_{X_jI_i}^*H_j^*, and G_{I_iI_j}, with respect to G_{Y_i'} and G_{Y_j'}, become increasingly small as T → ∞. Hence,

G_{Y_iY_j}(f) = H_i(f)\,H_j^*(f)\,G_{X_iX_j}(f)    (8.92)
which is the required result.

To show the cross power spectral density result, note that for each outcome γ = (γ_1, …, γ_N), the following individual power spectral densities and cross power spectral density can be defined:

G_{Z_U}(\gamma, T, f) = \frac{|Z_U(\gamma, T, f)|^2}{T} = \frac{1}{T} \sum_{i=1}^{N}\sum_{j=1}^{N} w_{Ui}\,w_{Uj}^*\,Y_i(\gamma_i, T, f)\,Y_j^*(\gamma_j, T, f)    (8.93)

G_{Z_V}(\gamma, T, f) = \frac{|Z_V(\gamma, T, f)|^2}{T} = \frac{1}{T} \sum_{i=1}^{N}\sum_{j=1}^{N} w_{Vi}\,w_{Vj}^*\,Y_i(\gamma_i, T, f)\,Y_j^*(\gamma_j, T, f)    (8.94)

G_{Z_UZ_V}(\gamma, T, f) = \frac{Z_U(\gamma, T, f)\,Z_V^*(\gamma, T, f)}{T} = \frac{1}{T} \sum_{i=1}^{N}\sum_{j=1}^{N} w_{Ui}\,w_{Vj}^*\,Y_i(\gamma_i, T, f)\,Y_j^*(\gamma_j, T, f)    (8.95)

As each of these power spectral densities has the same probability and the same form, it follows, by analogy with G_{Z_U}(T, f), that the cross power spectral density between Z_U and Z_V, on the infinite interval, is given by

G_{Z_UZ_V}(f) = \sum_{i=1}^{N} w_{Ui}\,w_{Vi}^*\,G_{Y_i}(f) + \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j\ne i}}^{N} w_{Ui}\,w_{Vj}^*\,G_{Y_iY_j}(f)
= \sum_{i=1}^{N} w_{Ui}\,w_{Vi}^*\,|H_i(f)|^2\,G_{X_i}(f) + \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j\ne i}}^{N} w_{Ui}\,w_{Vj}^*\,H_i(f)\,H_j^*(f)\,G_{X_iX_j}(f)    (8.96)
APPENDIX 7: PROOF OF THEOREM 8.8

The proof of the result for G_{Z_U}(T, f) follows directly from Theorems 4.6 and 8.6, and the cross power spectral density relationships given in Theorem 8.7. To show the cross power spectral density result, note that for each outcome γ = (γ_1, …, γ_N), the following individual power spectral densities and cross power spectral density can be defined:

G_{Z_U}(\gamma, T, f) = \frac{|Z_U(\gamma, T, f)|^2}{T} = \frac{1}{T} \sum_{i=1}^{N}\sum_{j=1}^{N} Y_{Ui}(\gamma_i, T, f)\,Y_{Uj}^*(\gamma_j, T, f)    (8.97)

G_{Z_V}(\gamma, T, f) = \frac{|Z_V(\gamma, T, f)|^2}{T} = \frac{1}{T} \sum_{i=1}^{N}\sum_{j=1}^{N} Y_{Vi}(\gamma_i, T, f)\,Y_{Vj}^*(\gamma_j, T, f)    (8.98)

G_{Z_UZ_V}(\gamma, T, f) = \frac{Z_U(\gamma, T, f)\,Z_V^*(\gamma, T, f)}{T} = \frac{1}{T} \sum_{i=1}^{N}\sum_{j=1}^{N} Y_{Ui}(\gamma_i, T, f)\,Y_{Vj}^*(\gamma_j, T, f)    (8.99)

As each of these power spectral densities has the same probability and the same form, it follows, by analogy with G_{Z_U}(T, f), that the cross power spectral density between Z_U and Z_V, on the infinite interval, is given by

G_{Z_UZ_V}(f) = \sum_{i=1}^{N} G_{Y_{Ui}Y_{Vi}}(f) + \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j\ne i}}^{N} G_{Y_{Ui}Y_{Vj}}(f)
= \sum_{i=1}^{N} H_{Ui}(f)\,H_{Vi}^*(f)\,G_{X_i}(f) + \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j\ne i}}^{N} H_{Ui}(f)\,H_{Vj}^*(f)\,G_{X_iX_j}(f)    (8.100)
9 Principles of Low Noise Electronic Design

9.1 INTRODUCTION

This chapter details noise models and signal theory, such that the effect of noise in linear electronic systems can be ascertained. The results are directly applicable to nonlinear systems that can be approximated around an operating point by an affine function. An introductory section is included at the start of the chapter to provide insight into the nature of Gaussian white noise, the most common form of noise encountered in electronics. This is followed by a description of the standard types of noise encountered in electronics, and noise models for standard electronic components. The central result of the chapter is a systematic explanation of the theory underpinning the standard method of characterizing noise in electronic systems, namely, through an input equivalent noise source or sources. Further, the noise equivalent bandwidth of a system is defined. This method of characterizing a system simplifies noise analysis, especially when a signal-to-noise ratio characterization is required. Finally, the input equivalent noise of a passive network is discussed; this result is a generalization of Nyquist's theorem. General references for noise in electronics include Ambrozy (1982), Buckingham (1983), Engberg (1995), Fish (1993), Leach (1994), Motchenbacher (1993), and van der Ziel (1986).
9.1.1 Notation and Assumptions

When dealing with noise processes in linear time invariant systems, an infinite timescale is often assumed, so power spectral densities, consistent with previous notation, should be written in the form G_∞(f). However, for notational
Figure 9.1 Schematic diagram of signal source and amplifier.
convenience, the subscript is removed and power spectral densities are written as G(f). Further, the systems are assumed to be such that the fundamental results, as given by Theorems 8.1 and 8.6, are valid.

9.1.2 The Effect of Noise

In electronic devices, noise is a consequence of charge movement at an atomic level, which is random in character. This random behaviour leads, at a macro level, to unwanted variations in signals. To illustrate this, consider a signal V_S from a signal source, assumed to be sinusoidal and with a resistance R_S, which is amplified by a low noise amplifier, as illustrated in Figure 9.1. The equivalent noise signal at the amplifier input, for the case of a 1 kΩ source resistance, and where the noise from this resistance dominates other sources of noise, is shown in Figure 9.2. A sample rate of 2.048 kSamples/sec has been used, and 200 samples are displayed. The specific details of the amplifier are described in Howard (1999b). In particular, the amplifier bandwidth is 30 kHz.

Figure 9.2 Time record of equivalent noise at amplifier input.
Figure 9.3 Sinusoid of 100 Hz whose amplitude is consistent with a signal-to-noise ratio of 10.
In Figure 9.3 a 100 Hz sine wave is displayed, whose amplitude is consistent with a signal-to-noise ratio of 10, assuming the noise waveform of Figure 9.2. The addition of this 100 Hz sinusoid and the noise signal of Figure 9.2 is shown in Figure 9.4, to illustrate the effect of noise corrupting the integrity of a signal. For completeness, in Figure 9.5, the power spectral density of the noise referenced to the amplifier input is shown. In this figure, the power spectral density has a 1/f form at low frequencies, and at higher frequencies is constant. For frequencies greater than 10 Hz, the thermal noise from the resistor dominates the overall noise.

Figure 9.4 100 Hz sinusoidal signal plus noise due to the source resistance and amplifier. The signal-to-noise ratio is 10.

Figure 9.5 Power spectral density of amplifier noise referenced to the amplifier input.
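The amplitude quoted for a given signal-to-noise ratio follows from SNR = (A²/2)/σ², where σ is the noise rms value. A small sketch; the 1.5 µV rms figure is a hypothetical value of the order suggested by Figure 9.2, not a value from the text:

```python
import math

def sinusoid_amplitude_for_snr(noise_rms, snr):
    """Amplitude A of a sinusoid whose mean square value, A^2 / 2,
    equals snr times the noise power, noise_rms^2."""
    return math.sqrt(2.0 * snr) * noise_rms

# Hypothetical noise rms of the order suggested by Figure 9.2 (~1.5 uV):
A = sinusoid_amplitude_for_snr(1.5e-6, 10.0)   # ~6.7 uV for SNR = 10
```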
9.2 GAUSSIAN WHITE NOISE

Gaussian white noise, by which is meant noise whose amplitude distribution at a set time has a Gaussian density function and whose power spectral density is flat, that is, white, is the most common type of noise encountered in electronics. The following section gives a description of a model which gives rise to such noise. Since the model is consistent with many physical noise processes, it provides insight into why Gaussian white noise is ubiquitous.

9.2.1 A Model for Gaussian White Noise

In many instances, a measured noise waveform is a consequence of the weighted sum of waveforms from a large number of independent random processes. For example, the observed randomly varying voltage across a resistor is due to the independent random thermal motion of many electrons. In such cases, the observed waveform z can be modelled according to

z(t) = \sum_{i=1}^{M} w_i\,z_i(t), \qquad z_i \in E_i    (9.1)
where w_i is the weighting factor for the ith waveform z_i, which is from the ith ensemble E_i defining the ith random process Z_i. Here, z is one waveform from a random process Z which is defined as the weighted summation of the random processes Z_1, …, Z_M.

Figure 9.6 One waveform from a binary digital random process on the interval [0, 8D].

Consider the case where all the random processes Z_1, …, Z_M are identical, but independent, signalling random processes and are defined, on the interval [0, ND], by the ensemble
E_i = \left\{ z_i(\gamma_1, \ldots, \gamma_N, t) = \sum_{k=1}^{N} \gamma_k\,\phi(t - (k-1)D) :\; \gamma_k \in \{-1, 1\},\; P[\gamma_k = \pm 1] = 0.5 \right\}    (9.2)

where the pulse function φ is defined according to

\phi(t) = \begin{cases} 1 & 0 \le t < D \\ 0 & \text{elsewhere} \end{cases}, \qquad \Phi(f) = D\,\mathrm{sinc}(fD)\,e^{-j\pi fD}    (9.3)
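A simulation of Eqs. (9.1) and (9.2) is straightforward and reproduces the behaviour shown in Figure 9.7. The sketch below, which samples one point per pulse (an implementation choice, not from the text), sums M = 500 equally weighted signalling waveforms and confirms the rms value of roughly √M derived in the next subsection:

```python
import numpy as np

rng = np.random.default_rng(0)

def signalling_waveform(N, rng):
    """One waveform from the ensemble of Eq. (9.2), sampled once per pulse:
    N pulse amplitudes, each +1 or -1 with probability 0.5."""
    return rng.choice([-1.0, 1.0], size=N)

# Sum of M equally weighted, independent waveforms, as per Eq. (9.1):
M, N = 500, 100
z = np.sum([signalling_waveform(N, rng) for _ in range(M)], axis=0)

mean_z = z.mean()                   # near 0
rms_z = np.sqrt(np.mean(z ** 2))    # near sqrt(M), i.e. ~22.4
```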
All waveforms in the ensemble have equal probability, and are binary digital information signals. One waveform from the ensemble is illustrated in Figure 9.6. One outcome of the random process Z, as defined by Eq. (9.1), has the form illustrated in Figure 9.7 for the case of equal weightings, w_i = 1, D = 1, and M = 500. The following subsections show, as the number of waveforms M increases, that the amplitude density function approaches that of a Gaussian function, and that over a restricted frequency range the power spectral density is flat or "white".

9.2.2 Gaussian Amplitude Distribution

The following details the reasons why, as the number of waveforms M comprising the random process increases, the amplitude density function approaches that of a Gaussian function. The waveform defined by the sum of M equally weighted independent binary digital waveforms, as per Eq. (9.1), has the following properties: (1) the amplitudes of the waveform during the intervals [iD, (i + 1)D) and [jD, (j + 1)D) are independent for i ≠ j; (2) the amplitude A in any interval [iD, (i + 1)D) is, for the case where M is even, from the set
Figure 9.7 Sum of 500 equally weighted, independent, binary digital waveforms where D = 1. Linear interpolation has been used between the values of the function at integer values of time.
S = {−M, −M + 2, …, 0, …, M − 2, M}, and M is assumed to be even in subsequent analysis; (3) at a specific time, the amplitude A is a consequence of k ones and m negative ones, where k + m = M. Thus, A ∈ S is such that A = k − m. Given A and M, it then follows that

k = \frac{M + A}{2}, \qquad m = \frac{M - A}{2}    (9.4)
Hence, P[A] equals the probability of k = 0.5(A + M) successes in M outcomes of a Bernoulli trial. For the case where the probability of success is p, and the probability of failure is q, it follows that (Papoulis 2002, p. 53)

P[A] = \frac{M!\,p^k q^{M-k}}{k!\,(M-k)!} = \frac{M!\,p^{0.5(A+M)}\,q^{0.5(M-A)}}{[0.5(M+A)]!\,[0.5(M-A)]!}    (9.5)

To show that P[A] can be approximated by a Gaussian function, consider the DeMoivre–Laplace theorem (Papoulis 2002, p. 105; Feller 1957, p. 168f): Consider M trials of a Bernoulli random process, where the probability of success is p, and the probability of failure is q. With the definitions \sigma = \sqrt{Mpq} and \eta = Mp, and the assumption \sigma \gg 1, the probability of k successes in M trials can be approximated according to

P[k \text{ out of } M \text{ trials}] = \frac{M!}{k!\,(M-k)!}\,p^k q^{M-k} \approx \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(k-\eta)^2/2\sigma^2}    (9.6)

where a bound on the relative error in this approximation is, for k ≠ η,

\left| \frac{(k-\eta)^3}{6\sigma^4} - \frac{(k-\eta)}{2\sigma^2} \right|    (9.7)
For the case being considered, where k = 0.5(A + M) and p = q = 0.5, it follows that \sigma = 0.5\sqrt{M}, \eta = 0.5M, and k - \eta = 0.5A. Thus, for 0.25M \gg 1, the amplitude distribution in any interval [jD, (j + 1)D] can be approximated by the Gaussian form

P(A) = P\!\left[\frac{A+M}{2} \text{ out of } M \text{ trials}\right] \approx \frac{2}{\sqrt{2\pi M}}\,e^{-A^2/2M}    (9.8)

where a bound on the relative error is

\left| \frac{A^3}{12M^2} - \frac{A}{2\sqrt{M}} \right|    (9.9)
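Eqs. (9.5) and (9.8) can be compared directly; for M = 100 the Gaussian form is accurate to well under one percent near the mean, consistent with the smooth curve of Figure 9.8:

```python
import math

def p_amplitude(A, M):
    """Exact P[A] from Eq. (9.5) with p = q = 0.5."""
    k = (M + A) // 2
    return math.comb(M, k) * 0.5 ** M

def p_gaussian(A, M):
    """Gaussian approximation of Eq. (9.8); the factor of 2 reflects that
    A takes on even values only."""
    return 2.0 / math.sqrt(2.0 * math.pi * M) * math.exp(-A ** 2 / (2.0 * M))

exact = p_amplitude(0, 100)
approx = p_gaussian(0, 100)
rel_err = abs(exact - approx) / exact     # well under 1% at the mean
```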
Note, with the assumptions made, the mean of A is zero, and the rms value of A is \sqrt{M}. The factor of 2 in Eq. (9.8) arises from the fact that A only takes on even values. Consistent with this result, many noise sources have a Gaussian amplitude distribution, and the term Gaussian noise is widely used. Confirmation and illustration of this result is shown in Figure 9.8, where the probability of an amplitude obtained from 1000 repetitions of 100 trials of a Bernoulli process (possible outcomes are from the set {−100, −98, …, 0, …, 100}) is shown. The smooth curve is the Gaussian probability density function as per Eq. (9.8) with M = 100.

9.2.3 White Power Spectral Density

The individual random processes comprising Z are zero mean signalling random processes, as defined by the ensemble of Eq. (9.2). It then follows, from Theorem 5.1, that the power spectral density of each of these random processes, on the interval [0, ND], is

G_i(ND, f) = r\,|\Phi(f)|^2 = \frac{1}{r}\,\mathrm{sinc}^2\!\left(\frac{f}{r}\right)    (9.10)

where r = 1/D, and Φ is the Fourier transform of the pulse function φ. As Z is the sum of independent random processes with zero means, it follows, from Theorem 4.6, that the power spectral density of Z is the sum of the weighted
Figure 9.8 Probability of an amplitude from the set {−100, −98, …, 98, 100} arising from 1000 repetitions of 100 trials of a Bernoulli process. The probabilities agree with the Gaussian form, as defined in the text.
individual power spectral densities, that is,

G_Z(ND, f) = \sum_{i=1}^{M} w_i^2\,G_i(ND, f) = r\,|\Phi(f)|^2 \sum_{i=1}^{M} w_i^2 = \frac{1}{r}\,\mathrm{sinc}^2\!\left(\frac{f}{r}\right) \sum_{i=1}^{M} w_i^2    (9.11)
This power spectral density is shown in Figure 9.9 for the normalized case of M = r = 1 and w_i = 1. For frequencies lower than r/4, the power spectral density is approximately constant at a level of M/r, and it is this constant level that is typically observed from noise sources arising from electron movement. This is the case because, first, the dominant source of electron movement is, typically, thermal energy, and electron thermal movement is correlated over an extremely short time interval. Second, a consequence of this very short correlation time is that the rate r, used for modelling purposes, is much higher than the bandwidth of practical electronic devices. Thus, the common case is where the noise power spectral density appears flat for all measurable frequencies, and the phrase "white Gaussian noise" is appropriate, and is commonly used. Note, for processes whose correlation time is very short compared with the response time of the measurement system (for example, rise time), the power spectral density will be constant within the bandwidth of the measurement
Figure 9.9 Normalized power spectral density as defined by the case where r = M = w_i = 1.
system and, consistent with Eq. 9.11, this constancy is independent of the pulse shape.
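The sinc²-shaped spectrum of Eq. (9.11) can be confirmed with a block-averaged periodogram of a simulated signalling waveform. This is a sketch under assumed normalisations (periodogram |Z(T, f)|²/T, M = 1, unit weighting); the sampling rate and block size are implementation choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Signalling waveform with D = 1, sampled at 8 points per pulse so the
# sinc^2 shape of Eq. (9.11) is resolved.
D, samples_per_pulse, N = 1.0, 8, 4096
dt = D / samples_per_pulse
z = np.repeat(rng.choice([-1.0, 1.0], size=N), samples_per_pulse)

# Block-averaged periodogram estimate of G(f) = |Z(T, f)|^2 / T.
block = 64 * samples_per_pulse          # 64 pulses per block
nblocks = len(z) // block
f = np.fft.rfftfreq(block, dt)
G = np.zeros(len(f))
for b in range(nblocks):
    Z = np.fft.rfft(z[b * block:(b + 1) * block]) * dt
    G += np.abs(Z) ** 2 / (block * dt)
G /= nblocks

r = 1.0 / D
G_theory = (1.0 / r) * np.sinc(f / r) ** 2   # Eq. (9.10); np.sinc(x) = sin(pi x)/(pi x)

low = G[1:5].mean()     # estimate near f = 0; theory gives 1/r = 1 there
```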
9.3 STANDARD NOISE SOURCES

The noise sources commonly encountered in electronics are thermal noise, shot noise, and 1/f noise. These are discussed briefly below.

9.3.1 Thermal Noise

Thermal noise is associated with the random movement of electrons, due to the electrons' thermal energy. As a consequence of such electron movement, there is a net movement of charge, during any interval of time, through an elemental section of a resistive material, as illustrated in Figure 9.10. Such a net movement of charge is consistent with a current flow, and as the elemental section has a defined resistance, the current flow generates an elemental voltage dV. The sum of the elemental voltages, each of which has a random magnitude, is a random time varying voltage. Consistent with such a description, equivalent noise models for a resistor are shown in Figure 9.11. In this figure, v and i, respectively, are randomly varying voltage and current sources. These sources are related via Thevenin's and Norton's equivalence statements, namely v(t) = Ri(t) and i(t) = v(t)/R. Statistical arguments (for example, Reif, 1965, pp. 589–594; Bell, 1960, ch. 3) can be used to show that the power spectral density of the random processes,
Figure 9.10 Illustration of electron movement in a resistive material.
which give rise to v and i, respectively, are

G_V(f) = \frac{2hfR}{e^{hf/kT} - 1} \quad \text{V}^2/\text{Hz}    (9.12)

G_I(f) = \frac{2hf}{R\,(e^{hf/kT} - 1)} \quad \text{A}^2/\text{Hz}    (9.13)

where T is the absolute temperature, k is Boltzmann's constant (1.38 × 10⁻²³ J/K), h is Planck's constant (6.62 × 10⁻³⁴ J·sec), and R is the resistance of the material. For frequencies such that f ≪ 0.1kT/h ≈ 10¹² Hz (assuming T = 300 K), a Taylor series expansion for the exponential term in these equations, namely,

e^{hf/kT} \approx 1 + hf/kT    (9.14)

is valid, and the following approximations hold:

G_V(f) \approx 2kTR \quad \text{V}^2/\text{Hz}, \qquad G_I(f) \approx \frac{2kT}{R} \quad \text{A}^2/\text{Hz}    (9.15)
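As a numerical check of Eq. (9.15): with the two-sided PSD G_V(f) ≈ 2kTR, the rms noise voltage in a measurement bandwidth B is √(2 · 2kTR · B) = √(4kTRB). For the 1 kΩ source resistance and 30 kHz bandwidth of Section 9.1.2, this gives roughly 0.7 µV, consistent with the scale of Figure 9.2:

```python
import math

k_B = 1.38e-23   # Boltzmann's constant (J/K)

def thermal_voltage_psd(R, T=300.0):
    """Two-sided thermal noise voltage PSD, G_V(f) ~ 2kTR (Eq. 9.15), V^2/Hz."""
    return 2.0 * k_B * T * R

def thermal_voltage_rms(R, B, T=300.0):
    """rms noise voltage in a bandwidth B (Hz): the two-sided PSD is
    integrated over [-B, B], giving V_rms = sqrt(4kTRB)."""
    return math.sqrt(2.0 * B * thermal_voltage_psd(R, T))

v_rms = thermal_voltage_rms(1e3, 30e3)   # 1 kOhm in 30 kHz: ~0.7 uV
```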
These equations were derived, using the equipartition theorem and statistical arguments, by Nyquist in 1928 (Nyquist 1928; Kittel 1958, p. 141; Reif 1965, p. 589; Freeman 1958, p. 117) and are denoted as Nyquist's theorem. A derivation of these results, based on electron movement, is given in Buckingham (1983, pp. 39–41). Further, these equations are the ones that are nearly always used in analysis. Note that the power spectral density is "white", that is, it has a constant level independent of the frequency. One point to note: in analysis, the Norton, rather than the Thevenin, equivalent noise model for a resistor best facilitates analysis.

Figure 9.11 Equivalent noise models for a resistor.

9.3.2 Shot Noise

As shown in Section 5.5, shot noise is associated with charge carriers crossing a barrier, such as that inherent in a PN junction, at random times, but with a constant average rate. As detailed in Section 5.5.1, the power spectral density, for all but high frequencies, is given by

G(f) \approx qI + I^2\,\delta(f) \quad \text{A}^2/\text{Hz}    (9.16)

where q is the electronic charge (1.6 × 10⁻¹⁹ C), and I is the mean current. Note that, apart from the impulse at DC, the power spectral density is "white". In electronic circuits the mean current is associated with circuit bias. As variations away from the bias state are of interest in analogue electronics, it is usual to approximate the power spectral density in such circuits according to

G(f) \approx qI \quad \text{A}^2/\text{Hz}    (9.17)
9.3.3 1/f Noise

As discussed in Section 6.5, a 1/f random process has a power spectral density given by

G(f) = \frac{k}{f^{\alpha}}    (9.18)

where k is a constant, and α determines the slope. Typically, α is close to unity. At low frequencies, 1/f noise often dominates other noise sources, and this is well illustrated in Figure 9.5.
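The shot and 1/f expressions reduce to one-line calculations. The 1/f coefficient below is a hypothetical value, chosen only to illustrate locating the corner frequency at which the two PSDs are equal:

```python
q = 1.6e-19   # electronic charge (C)

def shot_noise_psd(I):
    """Two-sided shot noise current PSD, G(f) ~ qI (Eq. 9.17), A^2/Hz."""
    return q * I

def one_over_f_psd(f, k, alpha=1.0):
    """1/f noise PSD, G(f) = k / f^alpha (Eq. 9.18)."""
    return k / f ** alpha

G_shot = shot_noise_psd(1e-3)            # 1 mA bias: 1.6e-22 A^2/Hz

# Corner frequency where a hypothetical 1/f source (k = 1e-18, alpha = 1)
# falls to the shot noise level: k / f = qI  =>  f = k / (qI).
k_flicker = 1e-18
f_corner = k_flicker / G_shot            # 6250 Hz
```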
9.4 NOISE MODELS FOR STANDARD ELECTRONIC DEVICES

9.4.1 Passive Components

In an ideal capacitor with an ideal dielectric, all charge is bound, such that interatomic movement of charge is not possible. Accordingly, an ideal capacitor is noiseless. An ideal inductor is made from material with zero resistance, and in such a material the voltage created by the thermal motion of electrons
is zero. Hence, ideal inductors are noiseless. As discussed above, resistors exhibit thermal noise, and have either of the noise models shown in Figure 9.11. Fish (1993, ch. 6) gives a more detailed analysis of noise in passive components.

9.4.2 Active Components

The small signal equivalent noise model for a diode is shown in Figure 9.12 (Fish, 1993, pp. 126–127). In this figure I_D is the mean diode current, and the power spectral density of the small signal equivalent noise source i is given by

G_i(f) = qI_D \quad \text{A}^2/\text{Hz}    (9.19)

Note, the model of Figure 9.12(c) is also applicable to standard nonavalanche photodetectors, when they are operated with reverse bias. The small signal noise equivalent model for a PNP or NPN BJT transistor, operating in the forward active region, is shown in Figure 9.13 (Fish, 1993, p. 128). The sources i_{BB}, i_B, and i_C in this figure, respectively, model the thermal noise in the base due to the base spreading resistance r_b, which is typically in the range of 10–500 Ohms (Gray, 2001, p. 32; Fish, 1993, pp. 128–139), the shot noise of the base current, and the collector current shot noise (see Edwards, 2000). The respective power spectral densities of these noise sources are

G_{BB}(f) = \frac{2kT}{r_b} \quad \text{A}^2/\text{Hz}    (9.20)

G_B(f) = qI_B \quad \text{A}^2/\text{Hz}, \qquad G_C(f) = qI_C \quad \text{A}^2/\text{Hz}    (9.21)
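Typical magnitudes of the three BJT source PSDs of Eqs. (9.20)-(9.21) can be computed directly; the bias values used here are hypothetical illustrations, not values from the text:

```python
k_B, q = 1.38e-23, 1.6e-19   # Boltzmann's constant (J/K), electronic charge (C)

def bjt_noise_psds(IB, IC, rb, T=300.0):
    """PSDs of the three BJT noise sources, Eqs. (9.20)-(9.21), A^2/Hz."""
    G_BB = 2.0 * k_B * T / rb   # base spreading resistance thermal noise
    G_B = q * IB                # base current shot noise
    G_C = q * IC                # collector current shot noise
    return G_BB, G_B, G_C

# Hypothetical bias point: IC = 1 mA, beta = 100 (so IB = 10 uA), rb = 100 Ohm:
G_BB, G_B, G_C = bjt_noise_psds(IB=1e-5, IC=1e-3, rb=100.0)
```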
In analysis, it is usual to neglect r_o as, typically, it is in parallel with a much lower value load resistance. The small signal noise equivalent model for an NMOS or PMOS MOSFET, with the source connected to the substrate, and an N or P channel JFET, when they are operating in the saturation region, is shown in Figure 9.14 (for
Figure 9.12 (a) Diode symbol. (b) Noise equivalent model for a diode under forward bias. (c) Noise equivalent model for a diode under reverse bias. I_D is the DC current flow and C_j is the junction capacitance.
Figure 9.13 Small signal equivalent noise model for a NPN or PNP BJT operating in the forward active region.
example, Fish, 1993, p. 140; Levinzon, 2000; Howard, 1987). In this figure, the noise sources i_G and i_D, respectively, account for the noise at the gate, which is due to the gate leakage current and the induced noise in the gate due to thermal noise in the channel, and the thermal noise in the channel. The respective power spectral densities of these sources are

G_G(f) = qI_G + \frac{2kT\alpha\,(2\pi f C_{gs})^2}{g_m} \quad \text{A}^2/\text{Hz}    (9.22)

G_D(f) = 2kT\,P\,g_m \quad \text{A}^2/\text{Hz}    (9.23)

In these equations, α is a constant with a value of around 0.25 for JFETs, and 0.1 for MOSFETs (Fish, 1993, p. 141). P is a constant with a theoretical value of 0.7, but practical values can be higher (Howard, 1987; Muoi, 1984; Ogawa, 1981). I_G is the gate leakage current which, typically, is in the pA range. As with a BJT, it is usual to neglect r_o in analysis.
Figure 9.14 Small signal equivalent noise model for a PMOS or NMOS MOSFET, or a N or P channel JFET, operating in the saturation region.
9.5 NOISE ANALYSIS FOR LINEAR TIME INVARIANT SYSTEMS

The following discussion relates to analysis of noise in linear time invariant systems; linear electronic systems are an important subset of such systems.

9.5.1 Background and Assumptions

A schematic diagram of a linear system is shown in Figure 9.15. With the assumption that the results of Theorems 8.1 and 8.6 are valid, the relationship between the input and output power spectral densities, on the infinite interval [0, ∞), or a sufficiently long interval relative to the impulse response time of the system, is given by

G_Y(f) = |H(f)|^2\,G_X(f)    (9.24)

In this diagram, the input random process X is defined by the ensemble E_X, and the output random process Y is defined by the ensemble E_Y.

9.5.1.1 Transfer Functions and Notation Analysis of electronic circuits is usually performed through use of Laplace transforms (for example, Chua, 1987, ch. 10). Such analysis yields a relationship, assuming appropriate excitation, between the Laplace transforms of the ith and jth node voltages or currents, of the form V_j(s)/V_i(s) = L_{ij}(s). If the time domain input at the ith node, v_i(t), is an "impulse", then V_i(s) = 1 and, hence, the output signal v_j(t) is the impulse response, whose Laplace transform is given by L_{ij}(s). In the subsequent text, the following notation will be used: L_{ij} is denoted the Laplace transfer function, while H_{ij}, which is the Fourier transform of the impulse response, is simply denoted the transfer function. From the definitions for the Laplace and Fourier transforms, it follows that the relationship between these transfer functions is

H_{ij}(f) = L_{ij}(j2\pi f)    (9.25)
The Fourier transform H_{ij} is guaranteed to exist if the impulse response h_{ij} is such that h_{ij} ∈ L^1[0, ∞). Similarly, the Laplace transform L_{ij} will exist, with a region of convergence including the imaginary axis, when h_{ij} ∈ L^1[0, ∞). Finally, in circuit analysis, it is usual to omit the argument s from Laplace transformed functions. To distinguish between a time function and its associated Laplace transform, capital letters are used for the latter, while lowercase letters are used for the former.

Figure 9.15 Schematic diagram of a linear system.
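The relationship H(f) = L(j2πf) of Eq. (9.25), and the spectral shaping of Eq. (9.24), can be illustrated with a single-pole RC low-pass stage; the component values and input PSD are hypothetical choices, not from the text:

```python
import numpy as np

R, C = 1e3, 1e-9                       # hypothetical values: 1 kOhm, 1 nF
f = np.logspace(3, 8, 500)             # 1 kHz .. 100 MHz

# L(s) = 1 / (1 + s R C);  H(f) = L(j 2 pi f), as in Eq. (9.25).
H = 1.0 / (1.0 + 1j * 2 * np.pi * f * R * C)

# Eq. (9.24): a white input PSD is shaped by |H(f)|^2.
G_X = 8.28e-18                         # V^2/Hz (e.g. 2kTR for 1 kOhm at 300 K)
G_Y = np.abs(H) ** 2 * G_X

f3dB = 1.0 / (2 * np.pi * R * C)       # pole frequency: |H|^2 falls to 1/2
```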
9.5.2 Input Equivalent Noise — Individual Case

The definition of the input equivalent noise of a linear system is fundamental to low noise amplifier design. The following is a brief summary: When all components in a linear circuit have been replaced by their equivalent circuit models, including appropriate models for noise sources, the circuit, as illustrated in Figure 9.16, results. In this figure, w_0 and w_M, respectively, are the input and output signals of the circuit, and w_1, …, w_N are signals from the ensembles defining the N noise sources in the circuit. The Laplace transforms of these signals are, respectively, denoted W_0, W_1, …, W_N, W_M. The transfer function between the source and the output, denoted H_{0M}, is defined according to

H_{0M}(f) = L_{0M}(j2\pi f) = \frac{W_M(j2\pi f)}{W_0(j2\pi f)}, \qquad w_0 = \delta,\; w_1 = \cdots = w_N = 0    (9.26)

where δ denotes the Dirac delta function, and it is assumed that w_M ∈ L^1[0, ∞) when w_0 = δ, such that the results of Theorem 8.3 are valid. Similarly, the transfer functions H_{1M}, …, H_{NM} are defined as the transfer functions that relate the noise sources w_1, …, w_N to the amplifier output, and are defined as

H_{iM}(f) = L_{iM}(j2\pi f) = \frac{W_M(j2\pi f)}{W_i(j2\pi f)}, \qquad w_i = \delta,\; w_0 = \cdots = w_{i-1} = w_{i+1} = \cdots = w_N = 0    (9.27)

It is usual, when quantifying the noise performance of an amplifier, to refer the noise to the amplifier input in order that it is independent of the amplifier gain. To achieve this, it is necessary to define an input equivalent noise source for each of the noise sources in the amplifier. By definition, the input equivalent noise source, denoted w_{ei}, for the ith noise source w_i, is the equivalent noise source at the amplifier input that produces the same level of output noise as w_i. That is, by definition, w_{ei} guarantees the equivalence of the circuits shown in Figures 9.17 and 9.18, as far as the output noise is concerned. Assume, for the circuit shown in Figure 9.17, that either or both of the source w_0 and the ith noise source w_i have zero mean, and that the source is independent of the ith noise source. It then follows, from Theorem 8.7, that the output
Figure 9.16 Schematic diagram of a linear system with N noise sources.
Figure 9.17 Noise model for ith noise source.
power spectral density, G_M, due to w_0 and w_i is

G_M(f) = |H_{0M}(f)|^2\,G_0(f) + |H_{iM}(f)|^2\,G_i(f)    (9.28)

where G_0 and G_i, respectively, are the power spectral densities of w_0 and w_i. For the circuit shown in Figure 9.18, the output power spectral density due to the noise sources w_0 and w_{ei} is

G_M(f) = |H_{0M}(f)|^2\,G_0(f) + |H_{0M}(f)|^2\,G_{ei}(f)    (9.29)

where G_{ei} is the power spectral density of the input equivalent source w_{ei}. A comparison of Eqs. (9.28) and (9.29) shows that these two circuits are equivalent, in terms of the output power spectral density, when

|H_{0M}(f)|^2\,G_{ei}(f) = |H_{iM}(f)|^2\,G_i(f)    (9.30)

Thus, the power spectral density of the input equivalent noise source associated with the ith noise source is

G_{ei}(f) = \frac{|H_{iM}(f)|^2}{|H_{0M}(f)|^2}\,G_i(f) = |H^{eq}_{iM}(f)|^2\,G_i(f)    (9.31)

where H^{eq}_{iM}(f) = H_{iM}(f)/H_{0M}(f) is the transfer function between the ith noise source, w_i, and the associated input equivalent noise source w_{ei}.
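Eq. (9.31) is a one-line computation once the two transfer functions are known. The sketch below uses hypothetical single-pole transfer functions purely for illustration; with a source-to-output gain ten times the noise-to-output gain, the input equivalent PSD is the source PSD scaled by 0.01:

```python
import numpy as np

f = np.logspace(1, 7, 100)

# Hypothetical single-pole transfer functions (illustration only):
H_0M = 100.0 / (1.0 + 1j * f / 1e6)    # source-to-output
H_iM = 10.0 / (1.0 + 1j * f / 1e6)     # ith-noise-source-to-output
G_i = 1.6e-22 * np.ones_like(f)        # PSD of the ith source, A^2/Hz

# Eq. (9.31): input equivalent PSD of the ith source.
G_ei = np.abs(H_iM / H_0M) ** 2 * G_i  # = 0.01 * G_i here
```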
Figure 9.18 Equivalent noise model, as far as the output node is concerned, for the ith noise source.
9.5.3 Input Equivalent Noise — General Case

For the general case of determining the input equivalent noise of all the N noise sources, the approach is to first establish the input equivalent signal and then evaluate its power spectral density. The details are as follows: the N noise signals generate an output signal according to

w_M(t) = \int_0^t w_1(\lambda)\,h_{1M}(t-\lambda)\,d\lambda + \cdots + \int_0^t w_N(\lambda)\,h_{NM}(t-\lambda)\,d\lambda    (9.32)

where h_{1M}, …, h_{NM} are the impulse responses of the systems between w_i and w_M for i ∈ {1, …, N}. From Theorem 8.3 it follows that

W_M(s) = W_1(s)L_{1M}(s) + \cdots + W_N(s)L_{NM}(s), \qquad \mathrm{Re}[s] > 0    (9.33)

where L_{iM} is the Laplace transform of h_{iM}, and the noise signals are assumed not to have exponential increase, which is the usual case. An equivalent input signal, w_{eq}, whose Laplace transform is W_{eq}, will result in an output signal with the same Laplace transform when

W_{eq}(s)L_{0M}(s) = W_1(s)L_{1M}(s) + \cdots + W_N(s)L_{NM}(s), \qquad \mathrm{Re}[s] > 0    (9.34)

Thus, provided L_{0M}(s) ≠ 0, it is the case that

W_{eq}(s) = \sum_{i=1}^{N} W_{ei}(s) = \sum_{i=1}^{N} \frac{W_i(s)\,L_{iM}(s)}{L_{0M}(s)}, \qquad \mathrm{Re}[s] > 0    (9.35)

where W_{ei} is the Laplace transform of the ith input equivalent signal associated with w_i. Consistent with this result, an equivalent model for the input equivalent noise is as shown in Figure 9.19. The ith transfer function in this figure, from Eq. (9.35), is given by

H^{eq}_{iM}(f) = \frac{L_{iM}(j2\pi f)}{L_{0M}(j2\pi f)} = \frac{H_{iM}(f)}{H_{0M}(f)}, \qquad H_{0M}(f) \ne 0    (9.36)

where L_{iM}(s) and L_{0M}(s) are validly defined when Re[s] = 0, as assumed in Eqs. (9.26) and (9.27). The following theorem states the power spectral density of the input equivalent noise random process.

Theorem 9.1 Power Spectral Density of Input Equivalent Noise  For independent noise sources with zero means, the amplifier input equivalent power spectral density, denoted G_{eq}(f), is the sum of the individual input equivalent power spectral densities, that is,

G_{eq}(f) = \sum_{i=1}^{N} G_{ei}(f) = \sum_{i=1}^{N} |H^{eq}_{iM}(f)|^2\,G_i(f) = \sum_{i=1}^{N} \frac{|H_{iM}(f)|^2}{|H_{0M}(f)|^2}\,G_i(f)    (9.37)

where G_i and G_{ei}, respectively, are the power spectral density, and the input equivalent power spectral density, of the ith noise source. For the general case,
Figure 9.19 Equivalent model for input equivalent noise source.
the input equivalent power spectral density is given by
  G_eq(f) = Σ_{i=1}^{N} |H_iM(f)/H_1M(f)|² G_i(f) + Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} [H_iM(f) H_jM*(f) / |H_1M(f)|²] G_ij(f)      (9.38)

where G_ij is the cross power spectral density between the ith and jth noise sources.

Proof. These results follow directly from the model shown in Figure 9.19 and Theorem 8.7.

9.5.4 Notation

When analysing a linear circuit arising from several transistor stages, it is convenient to label the sources according to which node they are between, as is indicated in Figure 9.20. In this figure I_13 is a noise current source between nodes 1 and 3, I_1 is a current source between node 1 and ground, and so on. For the circuit arising from a single transistor stage amplifier, it is more convenient to label the noise sources according to their origin, as is illustrated in the following example.

9.5.5 Example: Input Equivalent Noise of a Common Emitter Amplifier

To illustrate the theory related to input equivalent noise characterization of a circuit, consider the Common Emitter (CE) amplifier shown in Figure 9.21. The small signal equivalent noise model for such a structure is shown in Figure 9.22. The noise current sources i_S, i_B, i_BB, i_C defined in this figure are independent and have zero means. Their respective power spectral densities are:

  G_S(f) = 2kT/R_S      G_B(f) = qI_B      A²/Hz      (9.39)

  G_BB(f) = 2kT/r_b      G_C(f) = qI_C + 2kT/R_C      A²/Hz      (9.40)
PRINCIPLES OF LOW NOISE ELECTRONIC DESIGN
Figure 9.20 Notation for labelling noise sources in a circuit.
The amplifier voltage transfer function, L_M, is

  L_M(s) = V_o(s)/V_S(s) |_{i_S = i_B = i_BB = i_C = 0}
         = −g_m R_C (1 − sC_μ/g_m) / { (1 + R_Sb/r_π) [ 1 + s(r_π // R_Sb)(C_π + C_μ(1 + g_m R_C + R_C/(r_π // R_Sb))) + s²D ] }      (9.41)

where D = (r_π // R_Sb) R_C C_π C_μ, and R_Sb = R_S + r_b. Using the parameter values tabulated in Table 9.1, the normalized magnitude, H_M(f) = |L_M(j2πf)|, of this transfer function is plotted in Figure 9.23. The low frequency gain is 37.5, and the 3 dB bandwidth is 58 MHz.
Figure 9.21 Schematic diagram of a common emitter amplifier.
Figure 9.22 Small signal equivalent noise model for a common emitter amplifier.
The small signal input equivalent noise model for the common emitter amplifier is shown in Figure 9.24, where the power spectral density of the input equivalent noise source v_eq, from Theorem 9.1, is given by

  G_eq(f) = G_S(f)|H_S(f)/H_M(f)|² + G_B(f)|H_B(f)/H_M(f)|² + G_BB(f)|H_BB(f)/H_M(f)|² + G_C(f)|H_C(f)/H_M(f)|²      (9.42)
Figure 9.23 Normalized magnitude of transfer function of common emitter amplifier for parameter values listed in Table 9.1. The low frequency gain is 37.5 and the 3 dB bandwidth is 58 MHz.
TABLE 9.1 Parameters for BJT common emitter amplifier

  Parameter                          Value (Rounded)
  V_T = kT/q                         0.0259 V
  I_C                                1 mA
  β = I_C/I_B                        100
  g_m = I_C/V_T                      0.039 S
  r_π = β/g_m                        2600 Ohms
  r_b                                50 Ohms
  R_S                                50 Ohms
  R_C                                1000 Ohms
  f_T                                1.5 GHz
  C_μ                                0.5 pF
  C_π = g_m/(2πf_T) − C_μ            4 pF
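As a numerical sketch, the transfer function of Eq. (9.41) can be evaluated with the Table 9.1 values to check the quoted low frequency gain (about 37.5) and 3 dB bandwidth (about 58 MHz); the small discrepancy in the gain comes from the rounding of g_m in the table.

```python
import numpy as np

# Evaluate the CE amplifier transfer function, Eq. (9.41), using the
# Table 9.1 parameter values (g_m computed from I_C/V_T before rounding).

kT_q, IC, beta = 0.0259, 1e-3, 100.0
gm = IC / kT_q                          # ~0.0386 S
r_pi, rb, RS, RC = beta / gm, 50.0, 50.0, 1000.0
C_mu, C_pi = 0.5e-12, 4e-12
RSb = RS + rb
rp = r_pi * RSb / (r_pi + RSb)          # r_pi // RSb
D = rp * RC * C_pi * C_mu

def L_M(s):
    num = -gm * RC * (1 - s * C_mu / gm)
    den = (1 + RSb / r_pi) * (
        1 + s * rp * (C_pi + C_mu * (1 + gm * RC + RC / rp)) + s**2 * D)
    return num / den

gain0 = abs(L_M(0))                     # low frequency gain, ~37
f = np.logspace(6, 9, 2000)
mag = np.abs(L_M(1j * 2 * np.pi * f))
f3dB = f[np.argmin(np.abs(mag - gain0 / np.sqrt(2)))]
print(gain0, f3dB)                      # ~37.2 and ~57 MHz
```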
Here, H_S(f) = L_S(j2πf), with L_S(s) = V_o(s)/I_S(s) |_{v_S = i_B = i_BB = i_C = 0}, and similarly for the other transfer functions. Standard circuit analysis yields the following results:

  |H_S(f)|/|H_M(f)| = R_S,   |H_BB(f)|/|H_M(f)| = r_b,   |H_B(f)|/|H_M(f)| = R_Sb = R_S + r_b      (9.43)

  |H_C(f)|/|H_M(f)| = [(1 + R_Sb/r_π)/g_m] · √{ [1 + 4π²f²(r_π // R_Sb)²(C_π + C_μ)²] / [1 + 4π²f²C_μ²/g_m²] }      (9.44)

It then follows that the power spectral density G_eq of the input equivalent voltage noise source v_eq is

  G_eq(f) = 2kT R_S + 2kT r_b + qI_B R_Sb²
          + [2kT/R_C + qI_C] · [(1 + R_Sb/r_π)²/g_m²] · [1 + 4π²f²(r_π // R_Sb)²(C_π + C_μ)²] / [1 + 4π²f²C_μ²/g_m²]      V²/Hz      (9.45)

where the fact that I_C = βI_B = g_m r_π I_B allows the base and collector shot noise contributions to be combined. Clearly, such an analytical expression facilitates low noise design. For example, low noise performance is consistent with a low source resistance, R_S; low base spreading resistance, r_b; and low base current, I_B. However, with
Figure 9.24 Equivalent noise model for CE amplifier as far as the output node is concerned.
respect to the base current, note that as g_m = I_C/V_T, with V_T = kT/q being the thermal voltage, and r_π = V_T/I_B, it follows that r_π/g_m = V_T²/(I_B I_C); hence, an optimum base current exists to minimize the shot noise contribution to G_eq (Hullett, 1977).

Using the parameter values given in Table 9.1, the power spectral density, as defined by Eq. (9.45), is shown in Figure 9.25. Also shown is the output power spectral density given by

  G_M(f) = |H_M(f)|² G_eq(f)
         = [2kT R_Sb + qI_B R_Sb²] · g_m²R_C²[1 + 4π²f²C_μ²/g_m²] / [(1 + R_Sb/r_π)² D(f)]
         + [2kT/R_C + qI_C] · R_C²[1 + 4π²f²(R_Sb // r_π)²(C_π + C_μ)²] / D(f)      V²/Hz      (9.46)
Figure 9.25 Input equivalent (lower trace) and output (upper trace) power spectral density of common emitter amplifier for parameter values listed in Table 9.1.
where

  D(f) = [1 − 4π²f²R_C(R_Sb // r_π)C_π C_μ]² + 4π²f²(R_Sb // r_π)²[C_π + C_μ(1 + g_m R_C + R_C/(R_Sb // r_π))]²      (9.47)

One interesting example of the usefulness of input equivalent noise characterization can be found in Howard (1999a), which details a novel structure for an optoelectronic receiver that, potentially, has half the input equivalent noise level of a standard optoelectronic receiver.
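The internal consistency of the example can be checked numerically: the output PSD of Eqs. (9.46) and (9.47) should equal |L_M(j2πf)|² G_eq(f), with L_M from Eq. (9.41) and G_eq from Eq. (9.45). The sketch below assumes the Table 9.1 values and room temperature (kT = 0.0259 q).

```python
import numpy as np

# Cross-check: G_M(f) from Eqs. (9.46)-(9.47) versus |L_M|^2 * G_eq(f)
# from Eqs. (9.41) and (9.45). Table 9.1 values; kT = 0.0259*q assumed.

q = 1.602e-19
kT = 0.0259 * q
IC, beta = 1e-3, 100.0
IB = IC / beta
gm = IC / 0.0259
r_pi, rb, RS, RC = beta / gm, 50.0, 50.0, 1000.0
C_mu, C_pi = 0.5e-12, 4e-12
RSb = RS + rb
rp = r_pi * RSb / (r_pi + RSb)          # r_pi // RSb

def G_eq(f):                            # Eq. (9.45)
    w2 = 4 * np.pi**2 * f**2
    fac = ((1 + RSb/r_pi)**2 / gm**2
           * (1 + w2 * rp**2 * (C_pi + C_mu)**2)
           / (1 + w2 * C_mu**2 / gm**2))
    return 2*kT*RS + 2*kT*rb + q*IB*RSb**2 + (2*kT/RC + q*IC) * fac

def L_M(s):                             # Eq. (9.41)
    D2 = rp * RC * C_pi * C_mu
    num = -gm * RC * (1 - s * C_mu / gm)
    den = (1 + RSb/r_pi) * (
        1 + s * rp * (C_pi + C_mu * (1 + gm*RC + RC/rp)) + s**2 * D2)
    return num / den

def G_M(f):                             # Eqs. (9.46), (9.47)
    w2 = 4 * np.pi**2 * f**2
    D = ((1 - w2 * RC * rp * C_pi * C_mu)**2
         + w2 * rp**2 * (C_pi + C_mu*(1 + gm*RC + RC/rp))**2)
    t1 = (2*kT*RSb + q*IB*RSb**2) * gm**2 * RC**2 \
         * (1 + w2 * C_mu**2 / gm**2) / ((1 + RSb/r_pi)**2 * D)
    t2 = (2*kT/RC + q*IC) * RC**2 \
         * (1 + w2 * rp**2 * (C_pi + C_mu)**2) / D
    return t1 + t2

f = np.logspace(6, 9, 50)
direct = G_M(f)
via_eq = np.abs(L_M(1j * 2 * np.pi * f))**2 * G_eq(f)
print(np.max(np.abs(direct - via_eq) / direct))   # ~0, i.e. agreement
```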
9.6 INPUT EQUIVALENT CURRENT AND VOLTAGE SOURCES

In many instances it is convenient to be able to characterize the noise of a structure by an equivalent noise source at its input which is insensitive to the way the structure is driven, that is, insensitive to the nature of the source impedance characteristics. Such a characterization is possible through use of an input equivalent current source I_eq and an input equivalent voltage source V_eq (Haus, 1960; Lam, 1992; Netzer, 1981; Gray, 2001 p. 768), as per the model shown in Figure 9.26. In this model Z_11 is the input impedance of the linear circuit and Z_S = 1/Y_S is the source impedance. With the Norton equivalent model for the source, I_S is the source current and I_SS is a current source to account for the noise associated with the source impedance. With the Thevenin equivalent model, V_S is the source voltage (V_S = I_S Z_S) and V_SS is a voltage source to account for the noise associated with the source impedance (V_SS = I_SS Z_S). An alternative model, often used with differential input circuits such as operational amplifier circuits [see, for example, Trofimenkoff (1989)], is shown in Figure 9.27. It is important to note that the two current sources I_eq are 100% correlated.
Figure 9.26 Equivalent input noise model for a linear time invariant circuit where an input equivalent voltage source and an input equivalent current source have been used to characterize the circuit noise.
Figure 9.27 An alternative input equivalent model for differential input circuits including operational amplifier circuits. Norton models have been used for the two sources.
The following theorem states the input equivalent voltage and current defined in this model.

Theorem 9.2.  Input Equivalent Sources: Current Source and Voltage Source  The model shown in Figure 9.26 is valid, and the input source voltage or input source current that replace the noise sources in the circuit, as far as the output node is concerned, are such that

  V_S = V_SS + V_eq + Z_S I_eq      Thevenin
  I_S = I_SS + I_eq + Y_S V_eq      Norton      (9.48)

where the input equivalent current I_eq and input equivalent voltage V_eq are defined according to

  V_eq(s) = V_N^sc(s)/L_N^sc(s),      I_eq(s) = V_N^oc(s)/L_N^ocI(s)      (9.49)

Here V_N^sc and V_N^oc, respectively, are the output noise voltage when the input is short circuited and open circuited, that is,

  V_N^sc(s) = V_N(s) |_{Z_S = V_S = V_SS = 0},      V_N^oc(s) = V_N(s) |_{Y_S = I_S = I_SS = 0}      (9.50)

and the transfer functions L_N^sc and L_N^ocI are defined according to

  L_N^sc(s) = V_N(s)/V_S(s) |_{V_SS = Z_S = 0, I_i = 0},      L_N^ocI(s) = V_N(s)/I_S(s) |_{I_SS = Y_S = 0, I_i = 0}      (9.51)
In these definitions, I_i is the Laplace transform of the current sources connected to the ith node in the linear circuit, to account for the various noise sources connected to that node.

Proof. The proof of this theorem is given in Appendix 1.

9.6.0.1 Notes  Consistent with Eqs. (9.49)-(9.51), V_eq is the input equivalent voltage noise when the input is short circuited, that is, when V_S = V_SS = Z_S = 0, assuming a Thevenin equivalent model for the source. Similarly, I_eq is the input equivalent current noise when the input is open circuited, that is, when I_S = I_SS = Y_S = 0, assuming a Norton equivalent model for the source.

9.6.1 Input Equivalent Noise Power Spectral Density

Theorem 9.3.  Input Equivalent Noise Power Spectral Density  The power spectral density of the voltage V_S replacing the noise sources V_SS, I_eq, and V_eq, assuming V_SS is independent of both I_eq and V_eq, and all noise sources have zero means, is given by

  G_VS(f) = G_VSS(f) + G_Veq(f) + |Z_S|²G_Ieq(f) + 2Re[Z_S G_IeqVeq(f)]      V²/Hz      (9.52)

where G_VSS, G_Veq, and G_Ieq are, respectively, the power spectral density of V_SS, V_eq, and I_eq, and where G_IeqVeq is the cross power spectral density between I_eq and V_eq. According to Eq. (9.25), all the impedances are evaluated with arguments of j2πf. With the assumptions noted above, the power spectral density of the current I_S replacing I_SS, I_eq, and V_eq is given by

  G_IS(f) = G_ISS(f) + G_Ieq(f) + |Y_S|²G_Veq(f) + 2Re[Y_S* G_IeqVeq(f)]      A²/Hz      (9.53)

where G_ISS is the power spectral density of I_SS.

Proof. The proof of these results for G_VS and G_IS follows from Eq. (9.48) and Theorem 8.8. The results specified in this theorem can be simplified, under certain conditions, as outlined in the following two subsections.
9.6.1.1 Case 1: Input Equivalent Voltage Dominates Noise  If G_Veq(f) ≫ |Z_S|²G_Ieq(f), which is consistent with |Re[Z_S G_IeqVeq(f)]| ≪ G_Veq(f), it follows that

  G_VS(f) ≈ G_VSS(f) + G_Veq(f)      (9.54)
that is, the input equivalent noise is determined by the source noise and the input equivalent voltage noise of the circuit.

9.6.1.2 Case 2: Input Equivalent Current Dominates Noise  If G_Ieq(f) ≫ |Y_S|²G_Veq(f), which is consistent with |Re[Y_S* G_IeqVeq(f)]| ≪ G_Ieq(f), it follows that

  G_IS(f) ≈ G_ISS(f) + G_Ieq(f)      (9.55)

that is, the input equivalent noise is determined by the source noise and the input equivalent current noise of the circuit.

9.6.2 Example: Input Equivalent Results for Common Emitter Amplifier

Consider the common emitter amplifier shown in Figure 9.21, and its small signal equivalent model shown in Figure 9.22. Analysis for the input equivalent current and voltage yields,

  V_eq(s) = −I_BB(s)r_b + I_B(s)r_b − I_C(s) · (1 + r_b/r_π)[1 + s(r_b // r_π)(C_π + C_μ)] / [g_m(1 − sC_μ/g_m)]      (9.56)

  I_eq(s) = I_B(s) − I_C(s) · [1 + sr_π(C_π + C_μ)] / [g_m r_π(1 − sC_μ/g_m)]      (9.57)

Hence, with independent and zero mean noise sources, the input equivalent power spectral densities are

  G_Veq(f) = [G_B(f) + G_BB(f)]r_b² + G_C(f) · (1 + r_b/r_π)²[1 + 4π²f²(r_b // r_π)²(C_π + C_μ)²] / [g_m²(1 + 4π²f²C_μ²/g_m²)]      (9.58)

  G_Ieq(f) = G_B(f) + G_C(f) · [1 + 4π²f²r_π²(C_π + C_μ)²] / [g_m²r_π²(1 + 4π²f²C_μ²/g_m²)]      (9.59)

  G_IeqVeq(f) = G_B(f)r_b + G_C(f) · [(1 + r_b/r_π)/(g_m²r_π)] · [1 + j2πf r_π(C_π + C_μ)][1 − j2πf (r_b // r_π)(C_π + C_μ)] / [1 + 4π²f²C_μ²/g_m²]      (9.60)
Figure 9.28 Power spectral density of input equivalent voltage source for a common emitter amplifier as well as the constituent components of this power spectral density.
These results follow from Theorem 8.8. Substitution of Eqs. (9.58)-(9.60) into Eq. (9.52) yields,

  G_VS(f) = 2kT R_S + 2kT r_b + qI_B R_Sb²
          + [2kT/R_C + qI_C] · [(1 + R_Sb/r_π)²/g_m²] · [1 + 4π²f²(r_π // R_Sb)²(C_π + C_μ)²] / [1 + 4π²f²C_μ²/g_m²]      V²/Hz      (9.61)

which can easily be shown to be equivalent to the form given in Eq. (9.45), for the case where the input equivalent noise was directly evaluated. In Figure 9.28, G_VS(f) is plotted along with its constituent components as given by Eq. (9.52). Note, for frequencies within the amplifier bandwidth (58 MHz), the input equivalent power spectral density is dominated by the power spectral density of the input equivalent voltage source and the power spectral density of the source resistance. For frequencies significantly higher than the amplifier bandwidth, the noise due to the cross power spectral density dominates.

9.7 TRANSFERRING NOISE SOURCES

Consider a cascade of N stages, as shown in Figure 9.29, where it is required to replace the kth noise source by an equivalent one at the (k+1)th stage, such that the noise power spectral density at the output node N is unchanged. The following theorem states the required result.
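The claimed equivalence of the two routes (direct evaluation versus assembly from V_eq, I_eq, and their cross PSD) can be verified numerically. The sketch below assembles G_VS(f) from Eqs. (9.58)-(9.60) via Eq. (9.52), with Z_S = R_S and G_VSS = 2kT R_S, and compares it with Eq. (9.61); Table 9.1 values and room temperature are assumed.

```python
import numpy as np

# Assemble G_VS(f) per Eq. (9.52) from Eqs. (9.58)-(9.60) and compare
# with the directly evaluated form, Eq. (9.61). Table 9.1 values.

q = 1.602e-19
kT = 0.0259 * q
IC, beta = 1e-3, 100.0
IB = IC / beta
gm = IC / 0.0259
r_pi, rb, RS, RC = beta / gm, 50.0, 50.0, 1000.0
C_mu, C_pi = 0.5e-12, 4e-12
RSb = RS + rb
Ct = C_pi + C_mu
rbp = rb * r_pi / (rb + r_pi)           # rb // r_pi
rsp = r_pi * RSb / (r_pi + RSb)         # r_pi // RSb
GB, GBB = q * IB, 2 * kT / rb           # Eqs. (9.39), (9.40)
GC = q * IC + 2 * kT / RC

f = np.logspace(6, 9, 50)
jw = 1j * 2 * np.pi * f
den = np.abs(1 - jw * C_mu / gm)**2     # |1 - j w C_mu/g_m|^2

G_Veq = (GB + GBB) * rb**2 + GC * (1 + rb/r_pi)**2 \
        * np.abs(1 + jw * rbp * Ct)**2 / (gm**2 * den)            # (9.58)
G_Ieq = GB + GC * np.abs(1 + jw * r_pi * Ct)**2 \
        / (gm**2 * r_pi**2 * den)                                  # (9.59)
G_IV = GB * rb + GC * (1 + rb/r_pi) / (gm**2 * r_pi) \
       * (1 + jw * r_pi * Ct) * (1 - jw * rbp * Ct) / den          # (9.60)

G_VS = 2*kT*RS + G_Veq + RS**2 * G_Ieq + 2 * RS * np.real(G_IV)    # (9.52)

G_direct = (2*kT*RS + 2*kT*rb + q*IB*RSb**2
            + (2*kT/RC + q*IC) * (1 + RSb/r_pi)**2 / gm**2
            * np.abs(1 + jw * rsp * Ct)**2 / den)                  # (9.61)

print(np.max(np.abs(G_VS - G_direct) / G_direct))  # ~0, i.e. agreement
```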
Figure 9.29 Cascade of N stages.
Theorem 9.4.  Transferring Noise Sources: Short Circuit Current Gain  The two circuits shown in Figure 9.30 are equivalent, as far as the output node voltage is concerned, provided

  I_{k+1}(s) = L_sc(s) I_k(s),      L_sc(s) = I_sc(s)/I_k(s) |_{Z_{i,k+1} = 0}      (9.62)

where L_sc is the short circuit current transfer function (commonly called the short circuit current gain) of the kth stage.

Proof. The proof of this theorem is given in Appendix 2. The theory pertaining to, and application of, this result has been detailed in Moustakas (1981).

9.7.1 Example - Output Noise of Common Emitter Amplifier

Consider the small signal equivalent noise model for the common emitter amplifier shown in Figure 9.22, and the requirement to replace the input noise sources i_S, i_B, and i_BB with an equivalent noise source at the output and in parallel with the noise source i_C. To achieve this, the small signal equivalent
Figure 9.30 Two equivalent circuits as far as the (k+1)th node voltage is concerned. Equivalence is when I_{k+1} = L_sc I_k where L_sc = I_sc/I_k.
Figure 9.31 Small signal equivalent model for common emitter amplifier where i_{k+1} replaces i_k. In this model R_Sb = R_S + r_b.
model shown in Figure 9.22 can be redrawn as shown in Figure 9.31, where the noise current i_k is such that

  i_k(t) = [R_S/(R_S + r_b)] i_S(t) + [r_b/(R_S + r_b)] i_BB(t) + i_B(t)

  G_k(f) = 2kT/(R_S + r_b) + qI_B      (9.63)

Analysis of this model yields the short circuit current transfer function,

  L_sc(s) = I_sc(s)/I_k(s) = −g_m(R_Sb // r_π)(1 − sC_μ/g_m) / [1 + s(R_Sb // r_π)(C_π + C_μ)]      (9.64)

and hence,

  I_{k+1}(s) = −g_m(R_Sb // r_π)(1 − sC_μ/g_m) I_k(s) / [1 + s(R_Sb // r_π)(C_π + C_μ)]      (9.65)

Thus, the noise power spectral densities associated with the current sources i_C and i_{k+1} at the output node are

  G_C(f) + G_{k+1}(f) = qI_C + 2kT/R_C + [qI_B + 2kT/(R_S + r_b)] · g_m²(R_Sb // r_π)²(1 + 4π²f²C_μ²/g_m²) / [1 + 4π²f²(R_Sb // r_π)²(C_π + C_μ)²]      A²/Hz      (9.66)

This result can be used to, first, infer the input equivalent noise power spectral density via the transfer function of Eq. (9.44); the result stated in Eq. (9.45) readily follows. Second, as the output impedance Z_M of the common emitter
stage is

  Z_M(s) = R_C[1 + s(R_Sb // r_π)(C_π + C_μ)] / { 1 + s(R_Sb // r_π)[C_π + C_μ(1 + g_m R_C + R_C/(R_Sb // r_π))] + s²R_C(R_Sb // r_π)C_π C_μ }      (9.67)

it then follows that the output power spectral density is

  G_M(f) = |Z_M(j2πf)|²[G_C(f) + G_{k+1}(f)]
         = [2kT/R_Sb + qI_B] · g_m²R_C²(R_Sb // r_π)²(1 + 4π²f²C_μ²/g_m²) / D(f)
         + [2kT/R_C + qI_C] · R_C²[1 + 4π²f²(R_Sb // r_π)²(C_π + C_μ)²] / D(f)      (9.68)

where D(f) is defined in Eq. (9.47). This equation agrees with the result previously derived and stated in Eq. (9.46). The output power spectral density G_M is plotted in Figure 9.25.

9.8 RESULTS FOR LOW NOISE DESIGN

The principles outlined in preceding sections can be used to show, for example, the following results: (1) at low frequencies and with a low source impedance, a cascade of common emitter/source stages will yield a lower level of noise than a common gate/base or common collector/drain cascade (Moustakas, 1981); (2) at low frequencies and for low source impedances a BJT will, in general, yield lower noise than a JFET, but the reverse is true for high source impedances (Leach, 1994); (3) at low frequencies and for low source impedances, paralleling transistors will reduce the input equivalent noise (Hallgren, 1988); and (4) transformer coupling is effective in reducing the input equivalent noise when the source impedance is low (Lepaisant, 1992).
9.9 NOISE EQUIVALENT BANDWIDTH

If a system has been characterized by a noise equivalent bandwidth, the calculation of the noise power at the output of the system, as required, for example, when the signal-to-noise ratio of a system is being evaluated, is greatly simplified.

Definition: Noise Equivalent Bandwidth  The noise equivalent bandwidth B_N of a real system, with transfer function H(f) and with a gain given by |H(f_o)|, is the bandwidth of an ideal filter with a gain of |H(f_o)| which yields the same level of output power as the system being considered. By definition, and for a low pass transfer function, B_N is defined such that

  ∫₀^∞ G_IN(f)|H(f)|² df = |H(f_o)|² ∫₀^{B_N} G_IN(f) df      (9.69)

where G_IN is the power spectral density of the input noise. The definition is readily generalized for the bandpass case. This definition arises from the evenness of |H(f)| and the power spectral density of real signals, as well as the definition of the output power, namely,

  P = ∫_{−∞}^{∞} G_IN(f)|H(f)|² df = |H(f_o)|² ∫_{−B_N}^{B_N} G_IN(f) df      (9.70)
9.9.1 Examples

For the common case of white noise, where G_IN(f) = G_IN(0), an explicit expression for B_N is readily obtained,

  B_N = [1/|H(f_o)|²] ∫₀^∞ |H(f)|² df      (9.71)

and the output noise power is

  P = 2|H(f_o)|² G_IN(0) B_N      (9.72)

If, for example, H has a single pole form H(f) = H_o/(1 + jf/f_o), then it follows that B_N = πf_o/2 = 1.57f_o.

For the case where G_IN(f) = kf², which occurs, for example, in high speed optoelectronic receiver amplifiers [see, for example, Jain (1985)], it follows that

  B_N³ = [3/|H(f_o)|²] ∫₀^∞ f²|H(f)|² df      (9.73)

and the output noise power is

  P = (2k/3)|H(f_o)|² B_N³      (9.74)
If, for example, H has a Gaussian form with a 3-dB bandwidth of f_o Hz, that is, H(f) = H_o e^{−f² ln 2/(2f_o²)}, then the noise equivalent bandwidth is

  B_N = [3√(π/2)]^{1/3} f_o/√(ln 2) = 1.87f_o      (9.75)
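The single-pole result quoted above can be confirmed by numerical integration of Eq. (9.71); the closed form is B_N = πf_o/2 ≈ 1.5708 f_o (the small shortfall below comes from truncating the integration at a finite upper limit).

```python
import numpy as np

# Numerical check of Eq. (9.71) for the single-pole example:
# H(f) = H_o/(1 + j f/f0), so |H/H_o|^2 = 1/(1 + (f/f0)^2) and
# B_N = integral of |H/H_o|^2 over [0, inf) = pi*f0/2.

f0 = 58e6                               # 3 dB bandwidth (CE example)
f = np.linspace(0.0, 2000 * f0, 400_001)
H2 = 1.0 / (1.0 + (f / f0)**2)          # |H(f)/H_o|^2
B_N = np.trapz(H2, f)
print(B_N / f0)                         # ~pi/2 = 1.5708
```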
9.9.2 Signal-to-Noise Ratio of Common Emitter Amplifier

Consider the Common Emitter amplifier whose power spectral density is shown in Figure 9.25 and whose transfer function is shown in Figure 9.23. This transfer function can be approximated by a single pole form H(f) = H_o/(1 + jf/f_o), where H_o = 37.5 and f_o = 5.8 × 10⁷. As is evident in Figure 9.25, the input power spectral density is approximately flat up until well beyond the amplifier bandwidth and, consistent with Eq. (9.71), the noise equivalent bandwidth is approximately 91 MHz. With an input equivalent power spectral density level close to 10⁻¹⁸ V²/Hz, it follows from Eq. (9.72) that the output rms noise level is 0.506 mV, consistent with an equivalent rms input noise level of 13.5 μV. With a 1 mV rms input signal, the signal-to-noise ratio is 5490, or 37.4 dB.
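The arithmetic of this example can be reproduced directly from Eqs. (9.71) and (9.72) with the stated single-pole approximation:

```python
import math

# Reproduce the Section 9.9.2 figures: H_o = 37.5, f0 = 58 MHz,
# flat input equivalent PSD of ~1e-18 V^2/Hz, 1 mV rms input signal.

H_o, f0, G_eq, v_in = 37.5, 58e6, 1e-18, 1e-3

B_N = math.pi * f0 / 2                  # Eq. (9.71): ~91 MHz
P_out = 2 * H_o**2 * G_eq * B_N         # Eq. (9.72), output noise power
v_out = math.sqrt(P_out)                # ~0.506 mV output rms noise
v_in_noise = v_out / H_o                # ~13.5 uV input-referred rms
snr = (v_in / v_in_noise)**2            # ~5490
print(B_N, v_out, v_in_noise, 10 * math.log10(snr))   # ~37.4 dB
```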
9.10 POWER SPECTRAL DENSITY OF A PASSIVE NETWORK

The well-known result, which is a generalization of Nyquist's theorem, is that the power spectral density of noise measured across the terminals of a passive network is given by

  G_V(f) = 2kT Re[Z_in^F(f)] = 2kT R_in^F(f)      V²/Hz      (9.76)

where the input impedance at the same two terminals is Z_in, Z_in^F(f) = Z_in(j2πf) = R_in(j2πf) + X_in(j2πf), R_in(j2πf) is real while X_in(j2πf) is imaginary, and R_in^F(f) = R_in(j2πf). This result is consistent with the models for a passive network shown in Figure 9.32, where G_V is the power spectral density of the voltage source v, and the power spectral densities of the sources i and i_R
Figure 9.32 Equivalent models for a passive network.
are given by

  G_I(f) = 2kT R_in^F(f)/|Z_in^F(f)|²      G_IR(f) = 2kT/R_in^F(f)      (9.77)

A proof of Eq. (9.76) usually uses an argument based on conservation of energy, as opposed to direct circuit analysis (Williams, 1937; Helmstrom, 1991 pp. 427-429; Papoulis, 2002 p. 452). A partial proof of Eq. (9.76), based on direct circuit analysis, is given in the following subsection. Suitable references for the extension of the generalized Nyquist result, as per Eq. (9.76), to nonlinear circuits are Coram (2000) and Weiss (1998, 2000).

9.10.1 Implications of Nyquist's Theorem

Given that the resistive elements in a passive circuit generate zero mean independent noise waveforms, it follows from Theorem 8.7 that Eq. (9.76) is consistent with the following result:

  G_V(f) = Σ_{i=1}^{N} (2kT/R_i)|H_i(f)|² = 2kT Σ_{i=1}^{N} Re[Y_i^F(f)]|H_i(f)|²      (9.78)

where H_i(f) = L_i(j2πf), with L_i being the Laplace transfer function V/I_i relating the output voltage V to the ith current source I_i, associated with the noise generated by the ith resistance R_i. Further, Y_i^F(f) = Y_i(j2πf) is the admittance of R_i and associated lossless circuitry between the two nodes that R_i is between. Equating Eqs. (9.76) and (9.78) yields,

  Re[Z_in^F(f)] = Σ_{i=1}^{N} Re[Y_i^F(f)]|H_i(f)|²      (9.79)

A conjecture associated with this result is the generalization,

  Z_in(s) = Σ_{i=1}^{N} Y_i(s)L_i(s)L_i(−s)      (9.80)

In fact, the correct generalization is

  Z_in(s) = Σ_{i=1}^{N} Y_i(−s)L_i(s)L_i(−s)      (9.81)
A partial proof of this conjecture is given in Appendix 3. It is based on proving the result for the general ladder structure shown in Figure 9.33, where the current sources account for the noise of the resistive elements in the adjacent admittance.
Figure 9.33 Structure of a passive ladder network.
For an N stage passive ladder network defined by Figure 9.33, the input impedance can be written, consistent with Eq. (9.81), as a sum over the ladder admittances Y_ij,

  Z_in(s) = Σ_{i,j} Y_ij(−s)L_ij(s)L_ij(−s)      (9.82)

where

  L_ij(s) = V_1(s)/I_ij(s) |_{I_pq = 0, (p,q) ≠ (i,j)}      (9.83)
9.10.2 Example

Consider the determination of the noise at the input node of a three stage, doubly terminated lossless ladder (van Valkenburg, 1982 pp. 399f), shown in Figure 9.34. Analysis yields the input impedance,

  Z_in(s) = V_1(s)/I_1(s) |_{I_R1 = I_R4 = 0} = N(s)/D(s)      (9.84)

where

  N(s) = R² + 3RLs + 6R²CLs² + 4RCL²s³ + 5R²C²L²s⁴ + RC²L³s⁵ + R²C³L³s⁶      (9.85)

  D(s) = 2R + 3(L + R²C)s + 9RCLs² + 4CL(L + R²C)s³ + 6RC²L²s⁴ + C²L²(L + R²C)s⁵ + RC³L³s⁶      (9.86)
Figure 9.34 Three stage doubly terminated lossless ladder.
Hence, the input equivalent noise at node 1 has the power spectral density,

  G_IN(f) = 2kT Re[Z_in(j2πf)] = 2kT Re[N(j2πf)D(−j2πf)] / |D(j2πf)|²      V²/Hz      (9.87)

As a check, the transfer functions for the noise current sources I_R1 and I_R4 are,

  V_1(s)/I_R1(s) |_{I_1 = I_R4 = 0} = N(s)/D(s),      V_1(s)/I_R4(s) |_{I_1 = I_R1 = 0} = R²/D(s)      (9.88)

and direct analysis yields,

  G(f) = (2kT/R)[ |V_1(j2πf)/I_R1(j2πf)|² + |V_1(j2πf)/I_R4(j2πf)|² ] = (2kT/R) · [N(j2πf)N(−j2πf) + R⁴] / |D(j2πf)|²      V²/Hz      (9.89)

Comparing Eq. (9.87) with Eq. (9.89) yields the requirement

  [N(j2πf)D(−j2πf) + N(−j2πf)D(j2πf)]/2 = [N(j2πf)N(−j2πf) + R⁴]/R      (9.90)

which can be readily verified. G_IN is plotted in Figure 9.35 for the case where R = 50, C = 10⁻⁹, and L = 10⁻⁶. At low frequencies, the power spectral density is that of R // R; at high frequencies, when the inductor impedance becomes high, the power spectral density is that of the resistor R.
Figure 9.35 Power spectral density at input node of passive network of Figure 9.34 when R = 50, C = 10⁻⁹, and L = 10⁻⁶.
Finally, for the noiseless case where R = ∞, it should be the case that Re[Z_in(j2πf)] = 0. To check this, note that when R = ∞,

  Z_in(s) = V_1(s)/I_1(s) = [1 + 6CLs² + 5C²L²s⁴ + C³L³s⁶] / [sC(3 + 4CLs² + C²L²s⁴)]      (9.91)

Clearly, this transfer function is purely imaginary when s = j2πf, as required.
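The generalized Nyquist result for this ladder can also be confirmed by direct nodal analysis, without the polynomials N(s) and D(s): Re[Z_in] should equal (|V_1/I_R1|² + |V_1/I_R4|²)/R, so that Eq. (9.76) matches Eq. (9.89). The sketch below assumes R = 50 with representative C and L values (the identity holds for any element values).

```python
import numpy as np

# Nodal-analysis check of Re[Z_in] = (|V1/I_R1|^2 + |V1/I_R4|^2)/R for
# the doubly terminated ladder of Figure 9.34. Nodes 1..4: R shunt at
# node 1, series L between adjacent nodes, C shunt at nodes 2-4, and a
# second R shunt at node 4. C and L values here are arbitrary choices.

R, C, L = 50.0, 1e-9, 1e-6

def solve(f):
    s = 1j * 2 * np.pi * f
    yL, yC, g = 1 / (s * L), s * C, 1 / R
    Y = np.array([
        [g + yL,  -yL,            0,             0           ],
        [-yL,     yL + yC + yL,   -yL,           0           ],
        [0,       -yL,            yL + yC + yL,  -yL         ],
        [0,       0,              -yL,           yL + yC + g ]])
    e1 = np.array([1.0, 0.0, 0.0, 0.0])
    e4 = np.array([0.0, 0.0, 0.0, 1.0])
    Zin = np.linalg.solve(Y, e1)[0]     # V1 for 1 A injected at node 1
    T14 = np.linalg.solve(Y, e4)[0]     # V1 for 1 A injected at node 4
    return Zin, T14

for f in [1e5, 1e6, 1e7, 1e8]:
    Zin, T14 = solve(f)
    lhs = Zin.real
    rhs = (abs(Zin)**2 + abs(T14)**2) / R
    print(f, lhs, rhs)                  # the two columns agree
```

The agreement is a power-balance statement: for a 1 A injection, the real power absorbed at the input equals the power dissipated in the two resistors.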
APPENDIX 1: PROOF OF THEOREM 9.2

The linear circuit, as per Figure 9.26, is assumed to have N nodes, where node 1 is the input node and node N is the output node. When the input node is open circuited, the circuit is assumed to be characterized by the N node equations according to

  [ Y_11  ···  Y_1N ] [ V_1 ]   [ I_1 ]
  [  ···        ··· ] [ ··· ] = [ ··· ]      or      YV = I      (9.92)
  [ Y_N1  ···  Y_NN ] [ V_N ]   [ I_N ]

where V_i is the Laplace transform of the ith node voltage, Y_ij is the admittance parameter for the i-jth node pair, and I_i is the Laplace transform of the current sources
connected to the ith node to account for the various noise sources connected to that node. The following inverse matrix is assumed to exist:

  [ Z_11  ···  Z_1N ]   [ Y_11  ···  Y_1N ] ⁻¹
  [  ···        ··· ] = [  ···        ··· ]      (9.93)
  [ Z_N1  ···  Z_NN ]   [ Y_N1  ···  Y_NN ]
The elements of the inverse matrix are defined by Z_ij = Δ_ji/Δ, where Δ is the determinant of Y and Δ_ji is the cofactor of Y_ji (Anton, 1991 pp. 86-94). Further, Z_11 is the input impedance at node 1. For the case where the input node is driven by a Norton source, as per Figure 9.26, the node equations for the overall circuit can be written as

  [ Y_11 + Y_S  Y_12  ···  Y_1N ] [ V_1 ]   [ I_S + I_SS + I_1 ]
  [ Y_21        Y_22  ···  Y_2N ] [ V_2 ]   [ I_2              ]
  [  ···                    ··· ] [ ··· ] = [ ···              ]      (9.94)
  [ Y_N1        Y_N2  ···  Y_NN ] [ V_N ]   [ I_N              ]

and hence, after appropriate manipulation,

  V_k = [Z_k1/(1 + Y_S Z_11)](I_S + I_SS + I_1) + Σ_{j=2}^{N} [ Z_kj − Y_S Z_k1 Z_1j/(1 + Y_S Z_11) ] I_j,      k = 1, ..., N      (9.95)

It then follows that the transfer function between the ith noise source I_i and the output signal V_N is

  L_i(s) = V_N(s)/I_i(s) |_{I_S = I_SS = 0, I_j = 0 (j ≠ i)} = Z_Ni − Y_S Z_1i Z_N1/(1 + Y_S Z_11)      (9.96)
which can be written in the following form:

  L_i(s) = Z_Ni/(1 + Y_S Z_11) + Y_S(Z_11 Z_Ni − Z_1i Z_N1)/(1 + Y_S Z_11)      (9.97)

Thus, the output voltage is

  V_N(s) = Z_N1(I_S + I_SS)/(1 + Y_S Z_11) + Σ_{i=1}^{N} [ Z_Ni/(1 + Y_S Z_11) + Y_S(Z_11 Z_Ni − Z_1i Z_N1)/(1 + Y_S Z_11) ] I_i      (9.98)

To refer this output noise to the input, and hence establish the validity of input equivalent current and voltage noise sources, the transfer function between the source current I_S and the output node voltage V_N is required. From this equation it readily follows that

  L_N^I(s) = V_N(s)/I_S(s) |_{I_SS = 0, I_i = 0} = Z_N1/(1 + Y_S Z_11)      (9.99)

Further, as I_S = Y_S V_S, the transfer function between the source voltage V_S and the output voltage V_N can be defined according to

  L_N(s) = V_N(s)/V_S(s) |_{V_SS = 0, I_i = 0} = Z_N1 Y_S/(1 + Y_S Z_11)      (9.100)

Hence, the input equivalent source current I_S that generates the same output noise as the noise sources I_SS and I_1, ..., I_N is,

  I_S(s) = I_SS + Σ_{i=1}^{N} [ Z_Ni/Z_N1 + Y_S(Z_11 Z_Ni − Z_1i Z_N1)/Z_N1 ] I_i      (9.101)

which can be written as

  I_S = I_SS + I_eq + Y_S V_eq      (9.102)

where

  I_eq = Σ_{i=1}^{N} (Z_Ni/Z_N1) I_i,      V_eq = Σ_{i=1}^{N} [(Z_11 Z_Ni − Z_1i Z_N1)/Z_N1] I_i      (9.103)

As I_eq and V_eq are independent of the source impedance, the model is justified. The results V_S = Z_S I_S and V_SS = Z_S I_SS yield the input source voltage that generates the same output voltage as the noise sources I_SS and I_1, ..., I_N,

  V_S = V_SS + V_eq + Z_S I_eq      (9.104)

Finally, from Eqs. (9.98) and (9.103), I_eq and V_eq can be written as

  V_eq(s) = Z_11 V_N^sc(s)/Z_N1,      I_eq(s) = V_N^oc(s)/Z_N1      (9.105)

where V_N^sc and V_N^oc, respectively, are the output voltage when the input is short circuited and open circuited, that is,

  V_N^sc(s) = V_N(s) |_{Z_S = V_S = V_SS = 0} = Σ_{i=1}^{N} [(Z_11 Z_Ni − Z_1i Z_N1)/Z_11] I_i
  V_N^oc(s) = V_N(s) |_{Y_S = I_S = I_SS = 0} = Σ_{i=1}^{N} Z_Ni I_i      (9.106)

From Eqs. (9.99) and (9.100) it follows that the transfer functions L_N^sc and L_N^ocI can be defined according to

  L_N^sc(s) = V_N(s)/V_S(s) |_{V_SS = 0, Z_S = 0, I_i = 0} = Z_N1/Z_11      (9.107)

  L_N^ocI(s) = V_N(s)/I_S(s) |_{I_SS = 0, Y_S = 0, I_i = 0} = Z_N1      (9.108)

Thus, I_eq and V_eq can be written as

  V_eq(s) = V_N^sc(s)/L_N^sc(s),      I_eq(s) = V_N^oc(s)/L_N^ocI(s)      (9.109)

which is the last required result.
APPENDIX 2: PROOF OF THEOREM 9.4

First, consider the Thevenin equivalent model shown in Figure 9.36, which arises from looking in at the output of the kth stage of the circuit shown in Figure 9.29. In this figure V_k^oc is the open circuit voltage at the output of the kth node when this node is open circuited, that is, when Z_{i,k+1} = ∞. Clearly, from Figure 9.36 it follows that

  V_{k+1} = Z_{i,k+1} V_k^oc / (Z_{o,k} + Z_{i,k+1})      (9.110)
Figure 9.36 Thevenin equivalent circuit at output of kth stage.
Second, consider the case where the open circuit voltage V_k^oc is a consequence solely of the current source I_k shown in Figure 9.29. The transfer function between V_k^oc and I_k is denoted L_k, that is,

  L_k = V_k^oc/I_k |_{I_1 = ··· = I_{k−1} = 0}      (9.111)

It then follows that the voltage V_{k+1}, due to the current source I_k, is

  V_{k+1} = Z_{i,k+1} L_k I_k / (Z_{i,k+1} + Z_{o,k})      (9.112)

Third, a current I_{k+1}, acting alone, generates a voltage V_{k+1}, where

  V_{k+1} = [Z_{o,k} // Z_{i,k+1}] I_{k+1} = Z_{o,k} Z_{i,k+1} I_{k+1} / (Z_{o,k} + Z_{i,k+1})      (9.113)

Thus, the current I_{k+1} generates the same voltage V_{k+1} as the current I_k, when

  Z_{o,k} Z_{i,k+1} I_{k+1} / (Z_{o,k} + Z_{i,k+1}) = Z_{i,k+1} L_k I_k / (Z_{o,k} + Z_{i,k+1})      ⇒      I_{k+1} = L_k I_k / Z_{o,k}      (9.114)

As V_k^oc = L_k I_k, it follows that

  I_{k+1} = V_k^oc / Z_{o,k} = I_sc = L_sc I_k      (9.115)

where I_sc is the short circuit current that flows in the circuit, as illustrated in Figure 9.36, and

  L_sc = I_sc/I_k |_{Z_{i,k+1} = 0}      (9.116)
APPENDIX 3: PROOF OF CONJECTURE FOR LADDER STRUCTURE

Kirchhoff's current law applied to the nodes of the first stage in Figure 9.33 yields the following matrix of equations:

  [ Y_11 + Y_12      −Y_12           0        ] [ V_1 ]   [ I_11 − I_12 ]
  [ −Y_12        Y_12 + Y_21       −Y_21      ] [ V_2 ] = [ I_12 − I_21 ]      (9.117)
  [ 0                −Y_21      Y_21 + Y_2T   ] [ V_3 ]   [ I_21 − I_2T ]

where Y_2T is equal to the admittance Y_22 plus the admittance of the network to the right of Y_22. Solving this equation yields the following transfer functions:

  Z_in = L_11 = V_1/I_11 |_{I_12 = I_21 = I_2T = 0} = (Y_12 Y_21 + Y_12 Y_2T + Y_21 Y_2T)/Δ      (9.118)

  L_12 = V_1/I_12 |_{I_11 = I_21 = I_2T = 0} = −Y_21 Y_2T/Δ      (9.119)

  L_21 = V_1/I_21 |_{I_11 = I_12 = I_2T = 0} = −Y_12 Y_2T/Δ      (9.120)

  L_2T = V_1/I_2T |_{I_11 = I_12 = I_21 = 0} = −Y_12 Y_21/Δ      (9.121)

where Δ is the determinant of the matrix and is given by

  Δ = Y_11(Y_12 Y_21 + Y_12 Y_2T + Y_21 Y_2T) + Y_12 Y_21 Y_2T      (9.122)

It then follows that the input impedance Z_in, defined in Eq. (9.118), can be written according to

  Z_in = (Y_12 Y_21 + Y_12 Y_2T + Y_21 Y_2T)Δ*/(ΔΔ*) = [ Y_11* ĀA* + A(Y_12 Y_21 Y_2T)* ]/(ΔΔ*)      A = Y_12 Y_21 + Y_12 Y_2T + Y_21 Y_2T      (9.123)

where, for convenience, the notation W̄(s) = W(s)W(−s) and W*(s) = W(−s) has been used, and is used in subsequent analysis. Using Eqs. (9.118)-(9.121) in this equation, the input impedance can be written in the form

  Z_in = Y_11* L̄_11 + Y_12* L̄_12 + Y_21* L̄_21 + Y_2T* L̄_2T      (9.124)

This is the required result for a circuit consisting of the four stage 1 admittances Y_11, Y_12, Y_21, and Y_2T. This result needs to be extended to account
Figure 9.37 Definition of Z2R and Y2L for second stage of ladder network.
for the admittances in stage 2. To do this, first, note that \(Y_{2T} = Y_{22} + Y_{2R}\), where \(Y_{22}\) (labelled \(Y_{2L}\) in Figure 9.37) is the shunt admittance of stage 2 and \(Y_{2R}\) is the admittance of the network to the right of \(Y_{22}\), as per Figure 9.37. Hence, the input impedance can be written as

\[
Z_{in} = Y_{11}^{*}\tilde{L}_{11} + Y_{12}^{*}\tilde{L}_{12} + Y_{21}^{*}\tilde{L}_{21} + Y_{22}^{*}\tilde{L}_{2T} + Y_{2R}^{*}\tilde{L}_{2T}
\tag{9.125}
\]

Second, the last term in this equation can be written as

\[
Y_{2R}^{*}\tilde{L}_{2T} = \tilde{L}_{2T}\,\tilde{Y}_{2R}\,Z_{2R}
\tag{9.126}
\]
where \(Z_{2R} = 1/Y_{2R}\). Using the result of Eq. (9.124), it follows that \(Z_{2R}\) can immediately be written as

\[
Z_{2R} = Y_{23}^{*}\tilde{F}_{23} + Y_{32}^{*}\tilde{F}_{32} + Y_{3T}^{*}\tilde{F}_{3T}
\tag{9.127}
\]
where the transfer functions \(F_{23}\), \(F_{32}\), and \(F_{3T}\) are defined according to the following equations, where the voltage and current definitions are as per Figure 9.37:

\[
F_{23} = \frac{V_{2oc}}{I_{23}}\bigg|_{I_{32}=I_{3T}=0}
\tag{9.128}
\]
\[
F_{32} = \frac{V_{2oc}}{I_{32}}\bigg|_{I_{23}=I_{3T}=0}
\tag{9.129}
\]
\[
F_{3T} = \frac{V_{2oc}}{I_{3T}}\bigg|_{I_{23}=I_{32}=0}
\tag{9.130}
\]

In these equations, \(V_{2oc}\) is the open circuit voltage defined by \(V_{2oc} = V_2\big|_{Y_{22}=0}\).
Thus, substitution of Eqs. (9.127) and (9.126) into Eq. (9.125) yields

\[
Z_{in} = Y_{11}^{*}\tilde{L}_{11} + Y_{12}^{*}\tilde{L}_{12} + Y_{21}^{*}\tilde{L}_{21} + Y_{22}^{*}\tilde{L}_{2T} + \tilde{L}_{2T}\tilde{Y}_{2R}\left[\,Y_{23}^{*}\tilde{F}_{23} + Y_{32}^{*}\tilde{F}_{32} + Y_{3T}^{*}\tilde{F}_{3T}\right]
\tag{9.131}
\]
The goal is to rewrite the last three terms in this expression in terms of transfer functions from the current sources to the node 1 voltage \(V_1\). To do this, consider the first of these three terms, \(\tilde{L}_{2T}\tilde{Y}_{2R}Y_{23}^{*}\tilde{F}_{23}\). First, the transfer function \(F_{23}\) is the relationship between the open circuit voltage \(V_{2oc}\) and the current source \(I_{23}\), that is, \(V_{2oc} = F_{23}I_{23}\). By considering the Thevenin equivalent circuit shown in Figure 9.38, it follows that
\[
V_2 = V_{2oc}\,\frac{Y_{2R}}{Y_{2R}+Y_{22}}
\tag{9.132}
\]
and hence, the current \(I_{23}\) will generate a voltage \(V_2\) given by

\[
V_2 = F_{23}\,\frac{Y_{2R}}{Y_{2R}+Y_{22}}\,I_{23}
\tag{9.133}
\]
Now, the voltage \(V_2\) could also have been generated by a current \(I_{2T}\), according to the relationship \(V_2 = I_{2T}/(Y_{22} + Y_{2R})\). Hence, the relationship between the equivalent current \(I_{2T}\) and \(I_{23}\) is

\[
I_{2T} = Y_{2R}F_{23}I_{23}
\tag{9.134}
\]
From the definitions used for stage 1 of the network, a current \(I_{2T}\) will generate a voltage \(V_1\) at the left end of the network, according to \(V_1 = L_{2T}I_{2T}\).

Figure 9.38 Thevenin equivalent model for the circuit to the right of the line \(A\)–\(A\) (the source \(I_{23}\) and the network \(Y_{2R}\) are replaced by the open circuit voltage \(V_{2oc}\) in series with \(Z_{2R}\), loading the shunt element across which \(V_2\) is defined).
Hence, the output voltage generated by \(I_{23}\) is given by

\[
V_1 = L_{2T}Y_{2R}F_{23}I_{23} = L_{23}I_{23}
\tag{9.135}
\]
where, by definition, \(L_{23} = L_{2T}Y_{2R}F_{23}\) is the transfer function relating \(I_{23}\) to \(V_1\). Similar definitions can be made for \(L_{32}\) and \(L_{3T}\), and then, using Eq. (9.135), Eq. (9.131) can be written in the required form:

\[
Z_{in} = Y_{11}^{*}\tilde{L}_{11} + Y_{12}^{*}\tilde{L}_{12} + Y_{21}^{*}\tilde{L}_{21} + Y_{22}^{*}\tilde{L}_{2T} + Y_{23}^{*}\tilde{L}_{23} + Y_{32}^{*}\tilde{L}_{32} + Y_{3T}^{*}\tilde{L}_{3T}
\tag{9.136}
\]
In a similar manner, the admittance \(Y_{3T}\) can be expanded, and the above argument repeated, to obtain a further expansion for the impedance \(Z_{in}\). The general result for an \(N\) stage ladder network is

\[
Z_{in} = Y_{11}^{*}\tilde{L}_{11} + \sum_{i=1}^{N}\left[\,Y_{i,i+1}^{*}\tilde{L}_{i,i+1} + Y_{i+1,i}^{*}\tilde{L}_{i+1,i} + Y_{i+1,i+1}^{*}\tilde{L}_{i+1,i+1}\right]
\tag{9.137}
\]
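The general result can be spot-checked numerically for a two-stage ladder. The sketch below (element values, node numbering, and the admittance-stamping helper are assumptions of this check, not from the text; `numpy` assumed available) assembles the nodal matrix, computes the transfer function from each element's noise current source to the input-node voltage, and confirms that the weighted sum over all elements reproduces the input impedance at \(s = j\omega\):

```python
import numpy as np

# Two-stage ladder: nodes 0..4, ground = None.
# Element list: (admittance, node_a, node_b); arbitrary complex values.
elements = [
    (1.0 + 2.0j, 0, None),   # input shunt (Y11)
    (0.5 - 1.0j, 0, 1),      # series (Y12)
    (2.0 + 0.5j, 1, 2),      # series (Y21)
    (1.5 + 1.0j, 2, None),   # stage-2 shunt (Y22)
    (0.7 - 0.3j, 2, 3),      # series (Y23)
    (1.2 + 0.8j, 3, 4),      # series (Y32)
    (0.9 - 0.6j, 4, None),   # terminating shunt (Y3T)
]

n = 5
A = np.zeros((n, n), dtype=complex)
for Y, a, b in elements:        # standard admittance stamping
    A[a, a] += Y
    if b is not None:
        A[b, b] += Y
        A[a, b] -= Y
        A[b, a] -= Y

Ainv = np.linalg.inv(A)
Z_in = Ainv[0, 0]               # V at node 0 for a unit input current

# Sum Y* |L|^2 over every element, where L is the transfer function from
# the element's noise current source to the node-0 voltage.
total = 0.0
for Y, a, b in elements:
    s = np.zeros(n, dtype=complex)
    s[a] += 1.0
    if b is not None:
        s[b] -= 1.0
    L = (Ainv @ s)[0]
    total += np.conj(Y) * abs(L) ** 2

print(abs(Z_in - total))  # ~0: Z_in equals the sum over all seven elements
```

The agreement is a consequence of Tellegen's theorem: with a unit current at the input, the complex power absorbed by the elements must equal the power delivered by the source, which is exactly the stated expansion.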
Notation

COMMON MATHEMATICAL NOTATION

⌊·⌋  Greatest integer function
*  Conjugation operator
∀  For all
∃  There exists
δ  Dirac Delta function
∅  The empty set
χ  Characteristic function
inf  Infimum
f_X  Probability density function for a random variable X
j  Imaginary unit number: j² = −1
sinc(x)  Sinc function: sin(x)/x
sup  Supremum
x ∈ A  x is an element of A
x, X, X(T, f)  The signal x, its Fourier transform, and the Fourier transform evaluated on the interval [0, T]
u  Unit step function
A ⊂ B  A is a subset of B
A × B  Cartesian product of the sets A and B
C  The set of complex numbers
E_X  Ensemble associated with the random process X
G(T, f), G_∞(f)  Power spectral density evaluated on [0, T] and [0, ∞)
I  The interval I
Im  Imaginary part of
L  Set of Lebesgue integrable functions on (−∞, ∞)
L[α, β]  Set of Lebesgue integrable functions on [α, β]
M  Measure operator
N  The set of positive integers 1, 2, …
P  Probability operator
P(T), P_∞  Average power evaluated on [0, T] and [0, ∞)
P_X  Probability space associated with the random process X
Q  The set of rational numbers
R, R⁺  The set of real numbers; the set of real numbers greater than zero
R(T, τ), R_∞(τ)  Time averaged autocorrelation function evaluated on [0, T] and on [0, ∞)
R(T, t, τ)  Autocorrelation function evaluated on [0, T]
Re  Real part of
S^C  Complement of the set S
S_X  Index set identifying outcome of a random process X
Z, Z⁺  The set of integers, the set of positive integers 1, 2, …
DEFINITION OF ACRONYMS

a.e.  almost everywhere
c.c.  countable case
iff  if and only if
rms  root mean square
s.t.  such that
u.c.  uncountable case
CE  common emitter
DAC  digital to analogue converter
FM  frequency modulation
FSK  frequency shift keyed
MIMO  multiple input–multiple output
NBHD  neighbourhood
PSD  power spectral density
QAM  quadrature amplitude modulation
RP  random process
SNR  signal to noise ratio
PARAMETER VALUES

q  Electronic charge  1.6×10⁻¹⁹ C
T  Temperature  300 K (room temperature)
k  Boltzmann's constant  1.38×10⁻²³ J/K
V_T = kT/q  Thermal voltage  0.0257 V (room temperature)
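The thermal voltage entry follows directly from the other three rows. A minimal arithmetic check, using the table's values (note that kT/q at exactly 300 K gives about 0.0259 V; the quoted 0.0257 V corresponds to T ≈ 298 K):

```python
# Thermal voltage V_T = kT/q from the table's parameter values.
q = 1.6e-19    # electronic charge, C
k = 1.38e-23   # Boltzmann's constant, J/K
T = 300.0      # temperature, K

V_T = k * T / q
print(V_T)  # ≈ 0.0259 V
```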
Index 1/f noise, 198, 266 1/f noise and bounded random walks, 198 Absolute continuity, 18, 22, 23 Absolute continuity and differentiability, 23 Absolute convergence, 34 Almost everywhere, 24 Approximation continuous, 30 step, 32 Autocorrelation definition, 79 existence conditions, 81 infinite interval case, 82 notation, 80 power spectral density relationship, 81 random process case, 80 single waveform case, 79 time averaged, 79 Bounded power spectral density, 75, 92 Bounded random walk, 196 Bounded variation, 18 Boundedness, 17 Boundedness and integrability, 29, 35 Boundedness and local integrability, 35 Brownian motion, 192 Cartesian product, 4 CE amplifier input noise, 273, 281 noise model, 273 output noise, 277, 283 signal to noise ratio, 287 Characteristic function, 4
Communication signals, 140 Complex numbers, 6 Conjugation operator, 8 Continuity absolute, 18 left, 13 piecewise, 13 point, 14 pointwise, 14 right, 13 uniform, 14 Continuity and absolute continuity, 23 Continuity and boundedness, 19 Continuity and Dirichlet point, 42 Continuity and discontinuities, 20 Continuity and maxima-minima, 20 Continuity and path length, 20 Continuous approximation, 30 Convergence, 36 dominated, 36 Fourier series, 39 in mean, 36, 37 monotone, 37 pointwise, 36 uniform, 36 Correlogram, 80 Countable set, 6 Cross power spectral density, 102 bound, 106 definition, 104 DAC, 152 Differentiability, 15 piecewise, 16 Differentiability and Dirichlet point, 41
Dirac Delta function, 43 Dirichlet point, 40 Dirichlet value, 41 Disjoint signaling random process, 207 Disjoint signals, 8 Dominated convergence theorem, 37 Energy, 13 Ensemble, 44 Equivalent disjoint random process, 207 Equivalent low pass signal, 187 Fourier series, 38 Fourier theory, 38 Fourier transform, 39 Frequency modulation binary FSK, 215 raised cosine pulse shaping, 218 Fubini-Tonelli theorem, 34 Function definition, 7 Gaussian white noise, 259 Generalized signaling process, 166 Impulse response, 230 Impulsive power spectral density, 76 graphing, 122 Infimum, 4 Information signal, 138, 141 Input equivalent noise, 270, 278 CE amplifier, 273, 281 input equivalent current, 278 input equivalent voltage, 278 power spectral density, 280 Integers, 5 Integrability, 35 Integrated spectrum, 77 Integration Lebesgue, 25 local, 27, 35 Riemann, 26 Interchanging integration order, 34 Interchanging summation order, 34 Interval, 6 Inverse Fourier transform, 42 Jitter, 155 Lebesgue integration, 25 Linear system impulse response, 230 input-output relationship, 232 power signal case, 234 power spectral density of output, 238
transforms of output signal, 232 windowed case, 234 Linear system theory, 229 Locally integrable, 27 Low noise design, 285 Measurable functions, 25 Measurable set, 24 Measure, 23 Memoryless system, 10, 11 Memoryless transformation of a random process, 206 Monotone convergence theorem, 37 Multiple input-multiple output system, 243 Natural numbers, 5 Neighbourhood, 6 Noise 1/f, 198, 266 CE amplifier, 273, 281, 283 doubly terminated lossless ladder, 289 effect of, 257 oscillator, 241 passive network, 287 shot, 160, 266 thermal, 264 Noise analysis input equivalent noise, 270 linear time invariant system, 269 Noise equivalent bandwidth, 285 Noise model BJT, 267 diode, 267 JFET and MOSFET, 267 resistor, 264 Non-stationary random processes, 71 Nyquist's theorem, 265 generalized, 287 Operator definition, 7 Ordered pair, 4 Orthogonal set, 9 Orthogonality, 8 Oscillator noise, 241 Parseval's theorem, 44 Partition, 4, 6 Passive network, 287 Piecewise continuity, 13 Piecewise smoothness, 16 Piecewise smoothness and absolute continuity, 23 bounded variation, 22 Dirichlet point, 42
discontinuities, 21 piecewise continuity, 22 Pointwise continuity, 14 Pointwise convergence, 36 Power relative, 59 signal, 13 Power and power spectral density, 65 Power spectral density, 60 approximate, 99 autocorrelation relationship, 78, 81 bounded, 75, 92 continuity, 63 continuous form, 62 cross, 102 definition, 63, 67, 69 discrete approximation, 72 existence criteria, 73 graphing, 122 impulsive, 74 infinite interval, 67, 69 input equivalent current, 280 input equivalent noise, 272 input equivalent voltage, 280 integrability, 66 interpretation, 64 multiple input—multiple output system, 244 nonlinear transformation, 209 nonzero mean case, 121 output of linear system, 238 periodic component case, 119 properties, 65 random process, 67 required characteristics, 60 resolution, 66 simplification, 98 single sided, 72 single waveform, 60 symmetry, 66 via signal decomposition, 95 Power spectral density example, 96, 98, 102, 122 amplitude signaling through memoryless nonlinearity, 211 binary FSK, 215 bipolar signaling, 148 CE amplifier input noise, 273, 281 CE amplifier output noise, 277, 283 digital random process, 70, 122 doubly terminated lossless ladder, 289 FM with pulse shaping, 218 jittered pulse train, 159 oscillator noise, 241 periodic pulse train, 115 quadrature amplitude modulation, 191
RZ signaling, 146 shot noise with dead time, 165 sinusoid, 65 spectral narrowing, 213 Power spectral density of bounded random walk, 196 DAC quantization error, 152 electrons crossing a barrier, 161 equivalent low pass signal, 188 generalized signaling process, 167 infinite sum of periodic signals, 118 infinite sum of random processes, 108 input equivalent noise sources, 280 jittered signal, 158 linear system output, 238 multiple input—multiple output system, 244 passive network, 287 periodic random process, 117 periodic signal, 112 quadrature amplitude modulated signal, 186 random walk, 193 sampled random process, 184 sampled signal, 181 shot noise, 161 shot noise with dead time, 164 signaling random processes, 141 sum of N random processes, 112 sum of two random processes, 107 Probability space, 44 Quadrature amplitude modulation, 185 Quantization, 152 Raised cosine pulse, 218 Raised cosine spectrum, 148 Random processes, 44 signaling, 138 Random walk, 192 bounded, 196 Rational numbers, 5 Real numbers, 5 Relative power measures, 59 Riemann integration, 26 Riemann sum, 179 Riemann-Lebesgue theorem, 40 Sample autocorrelation function, 80 Sampling, 179 Schottky’s formula, 163 Schwarz inequality, 29 Set theory, 3 Shot noise, 160, 266 Shot noise with dead time, 163
Signal classification, 35 decomposition, 9 definition, 7 disjointness, 8 energy, 13 orthogonality, 8 power, 13 Signal path length, 17 Signal to noise ratio CE amplifier, 287 DAC, 155 Signaling invariant system, 210 Signaling random process, 138 definition, 139 disjoint, 207 generalized, 166 Simple function, 30 Single-sided power spectral density, 72 Spectral distribution function, 77 Spectral issues, 146 Spectral spread, 213
Square law device, 212 Step approximation, 32 Supremum, 4 System definition, 7 linear, 9 memoryless, 10, 11 Thermal noise, 264 Transferring noise sources, 282 Uncountable set, 6 Uniform and pointwise continuity, 14 Uniform continuity, 14 Uniform convergence, 36 White noise, 259 Wiener Khintchine relations, 83 Wiener process, 192 Zero measure, 24, 27