Synthetic Instruments
Concepts and Applications by C.T. Nadovich
Newnes is an imprint of Elsevier
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK

Copyright © 2005, Elsevier Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting “Customer Support” and then “Obtaining Permissions.”

Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.

Library of Congress Cataloging-in-Publication Data: application submitted.
British Library Cataloguing-in-Publication Data: a catalogue record for this book is available from the British Library.

ISBN: 0-7506-7783-X

For information on all Newnes publications visit our website at www.newnespress.com

Printed in the United States of America.
Contents

Foreword
  Plan of this Book
  Chapter Outline
Preface
Acknowledgments
What's on the CD-ROM?
Chapter 1: What is a Synthetic Instrument?
  History of Automated Measurement
  Genesis
  Modular Instruments
  Synthetic Instruments Defined
  Synthesis and Analysis
  Generic Hardware
  Advantages of Synthetic Instruments
  Eliminating Redundancy
  Measurement Integration
  Measurement Speed
  Longer Service Life
  Synthetic Instrument Misconceptions
  Why not Just Measure Volts with a Voltmeter?
  Virtual Instruments
  Analog Instruments
Chapter 2: Synthetic Measurement System Hardware Architectures
  System Concept—The CCC Architecture
  Signal Flow
  The Synthetic Measurement System
  Chinese Restaurant Menu (CRM) Architecture
  Parameterization of CCC Assets
  Architectural Variations
  Compound Stimulus
  Simultaneous Channels and Multiplexing
  Hardware Requirements Traceability
Chapter 3: Stimulus
  Stimulus Digital Signal Processing
  Waveform Playback
  Direct Digital Synthesis
  Algorithmic Sequencing
  Synthesis Controller Considerations
  Stimulus Triggering
  Stimulus Trigger Interpolation
  The Stimulus D/A
  Interpolation and Digital Up-Converters in the Codec
  Stimulus Conditioning
  Stimulus Conditioner Linearity
  Gain Control
  Adaptive Fidelity Improvement
  Reconstruction Filtering
  Stimulus Cascade—Real-World Example
Chapter 4: Response
  Response Signal Conditioning
  Input Protection
  Response Linearity and Gain Control
  Adaptive Techniques
  The Response Codec
  Fidelity and Measurement Accuracy
  Ideal Quantization
  Codec Headroom
  Headroom Trade-off and System Fidelity
  Response Digital Signal Processing
  Waveform Recorder and DSP
  Matched Filter Demodulator
  Response Trigger Time Interpolator
  Response Cascade—Real-World Example
Chapter 5: Real-World Design: A Synthetic Measurement System
  Universal High-Speed RF Microwave Test System
  Background
  Logistical Goals
  Technical Goals
  RF Capabilities
  System Architecture
  Microwave Synthetic Instrument (TRM1000C)
  Supplemental Resources
  DUT Interface
  Product Test Adapter Solutions
  Calibration
  Primary Calibration
  Operational Calibration
  Software Solutions
  Test Program Set Developer Interface
  TRM1000C Software
  Conclusions
Chapter 6: Measurement Maps
  Measurement Abstraction
  General Measurements
  Abscissas and Ordinates
  The Measurement Function
  Canonical Ordinate Algorithms
  Multidimensional Measurements
  Domains
  Measurement Maps
  Ports and Modes
  DUT Modes as Abscissas
  Ports as Abscissas
  Map Manipulations
  Problems with Hysteresis
  Stimulus and Response
  Inverse Maps
  Accuracy Advantages of Inverse Maps
  Problems with Inverse Maps
  Calibration Strategy and Map Manipulations
  Canonical Maps
  Sufficiency of the Stimulus Response Measurement Map Stance
  Processing a Measurement
  The Basic Algorithm
Chapter 7: Signals
  Kinds of Signals
  Coding, Decoding, and Measuring the Signal Hierarchy
  Decoding Method Abscissas
  Direct Real Analog Baseband Signals
  Digital Coded Baseband
  Analog Coded Baseband
  Bandwidth
  Bandpass Signals
  Bandpass Sampling
  Image Rejection
  Interference and Images
  I/Q Sampling
  Broadband Periodic Signals
Chapter 8: Calibration and Accuracy
  Metrology for Marketers and Managers
  Measurand
  Accuracy and Precision
  Test versus Measurement
  Introduction to Calibration
  Reference Standards
  Uncertainty Analysis
  Stimulus Calibration
  Overall Strategy for Stimulus Calibration
  Using Interpolation to Invert a Map
  Interpolation Example
  Sampling Interval versus Resolution Confusion
  Ordinate Quantization and Precision
  De-Embedding Calibration Objects
  De-Embedding Dimensionality and Interpolation
  Abscissa De-Embedding
Chapter 9: Specifying Synthetic Instruments
  Synthetic Instrument Definition and XML
  Why XML?
  ATML
  Why Not SCPI, ATLAS,…?
  Introduction to XML
  Automatic Descriptions
  Not a Script
  XML Basics
  Synthetic Measurement Systems and XML
  Describing the Measurement with XML
  Defining an Instrument
  Calibration Strategy Example
  Functional Decomposition and Scope
  Measurement Parameters—A Hazard
  Describing the Measurement System with XML
  Describing Measurement Results with XML
  Column and Array Data
  Self-Documenting Features
  Arrays as Elements
  SQL Database Concepts and Data Objects
  HDF
Chapter 10: Synthetic Instrument Markup Language: SIML
  A DTD for Measurement Description
  More SIML Details
  Locked Abscissas
  Banded Abscissas
  Constraints
  Modulation
  Ordinate Modifiers: Averaging and Statistical Manipulations
Chapter 11: Ten Mistakes in Synthetic Measurement System Design
  Fixing Performance or Functionality Shortfalls Exclusively by Adding Hardware
  Fixing Hardware Mistakes with Software
  Adding Modes or Features Dedicated to Specific Measurements
  Designing Synthetic Instruments Procedurally
  Meeting Legacy Instrument Specifications
  Developing Stimulus Separate from Response
  Not Combining Measurements
  Hardware Modularity as a Distraction
  Bad Lab Procedure
  Fear of Change
Acronym Glossary
Basic SIML DTD
Bibliography
  Books
  Periodicals
  Conference Papers
About the Author
Index
List of Figures

Figure 1-1. Manual measurements
Figure 1-2. Digital hardwired logic versus CPU
Figure 1-3. Signal-generator leveling loop
Figure 1-4. Crowded front panel on Tektronix Spectrum analyzer
Figure 2-1. Basic CCC cascade
Figure 2-2. Synthetic architecture cascade—flow alternatives
Figure 2-3. Synthetic measurement system
Figure 2-4. CRM architecture
Figure 2-5. Parameterized architecture
Figure 2-6. Compound stimulus
Figure 2-7. Space division multiplexing
Figure 2-8. Time division multiplexing
Figure 2-9. Time division multiplexing with virtual commutator
Figure 2-10. Frequency division multiplexing
Figure 2-11. Code division multiplexing
Figure 3-1. The stimulus cascade
Figure 3-2. Basic waveform playback controller
Figure 3-3. Direct digital synthesizer
Figure 3-4. Fractional samples
Figure 3-5. Fine trigger delay control
Figure 3-6. Effect of gain control placement on SNR with varying gain
Figure 3-7. Adaptive nulling
Figure 3-8. Aeroflex CS25000
Figure 4-1. The response cascade
Figure 4-2. Low-gain versus high-gain
Figure 4-3. Adaptive nulling to improve response measurement
Figure 4-4. Sources of noise and distortion in synthetic systems
Figure 4-5. VU meter
Figure 4-6. Waveform recording controller
Figure 4-7. Matched filter
Figure 4-8. AP240 reconfigurable PCI signal analyzer platform
Figure 5-1. TRM1000C functional diagram
Figure 5-2. Test adapter interface
Figure 6-1. Joint manifold
Figure 6-2. A measurement map
Figure 6-3. A color image as a measurement map
Figure 6-4. Is the gain switch setting a port or mode?
Figure 6-5. Inverting a map
Figure 6-6. Square rooter
Figure 6-7. Inverse map with multiple branches
Figure 6-8. Calibration strategy trees
Figure 6-9. Basic SRMM measurement algorithm
Figure 7-1. Fuzzy hierarchy of “stances”
Figure 7-2. Analog and digital codings
Figure 7-3. One scan line of NTSC analog video
Figure 7-4. The Sampling theorem
Figure 7-5. Amplitude and frequency modulation
Figure 7-6. Mixing
Figure 7-7. Bandpass sampling example
Figure 7-8. IF at 1/4 the sampling rate
Figure 7-9. Preselector signal conditioner
Figure 7-10. Time equivalent sampling spectra
Figure 8-1. Precision versus accuracy
Figure 8-2. Test versus measurement
Figure 8-3. Clipped cosine
Figure 8-4. Fundamental power transfer
Figure 8-5. Interpolation error (estimate – ideal)
Figure 8-6. “Spiked” FFT
Figure 8-7. Interpolated FFT (same data)
Figure 8-8. De-embedding applied to temperature measurements
Figure 9-1. Bookshelf modular
Figure 9-2. Example of SCPI code
Figure 9-3. Tree structure of XML code example
Figure 9-4. Detailed example tree structure
Figure 9-5. Measurement system, switch matrix, and DUT
Figure 9-6. Self-documenting SRMM object

List of Tables

Table 2-1. CRM architecture
Table 3-1. D/A converter trade-off range
Table 3-2. BSG performance range
Table 3-3. BSG options
Table 5-1. TRM1000C measurement suite
Table 7-1. Sampling techniques

List of Examples

Example 9-1. Simple XML document
Example 9-2. Simple oscilloscope
Example 9-3. Alternative XML structure
Example 9-4. Flatbed scanner
Example 9-5. Network analyzer
Example 9-6. Compound ordinate
Example 9-7. Parameter list
Example 9-8. Defining ports
Example 10-1. Complete XML document
Example 10-2. Simple DTD
Example 10-3. More sophisticated DTD
Example 10-4. Distortion analyzer
Example 10-5. Enhanced distortion analyzer
Example 10-6. Constraints
Example 10-7. Signal encoding
Example 10-8. Averaging
Example B-1. Complete SIML DTD
Foreword

The way electronic measurement instruments are built is making an evolutionary leap to a new method of design called synthetic instruments. This promises to be the most significant advance in electronic test and instrumentation since the introduction of automated test equipment (ATE). The switch to synthetic instruments is beginning now, and it will profoundly affect all test and measurement equipment developed in the future.

Synthetic instruments are like ordinary instruments in that they are specific to a particular measurement or test. They might be a voltmeter that measures voltage, or a spectrum analyzer that measures spectra. The difference is that synthetic instruments are implemented purely in software that runs on general-purpose, nonspecific measurement hardware with a high-speed A/D or D/A converter at its core. In a synthetic instrument, the software is specific; the hardware is generic. Therefore, the personality of a synthetic instrument can be changed in an instant. A voltmeter may be a spectrum analyzer a few seconds later, and then become a power meter, network analyzer, or oscilloscope. Totally different instruments are realized on the same hardware, switching back and forth in the blink of an eye, or even existing simultaneously.

The union of the hardware and software that implement a set of synthetic instruments is called a synthetic measurement system (SMS). This book studies both synthetic instruments and the systems from which they may best be created.

Powerful customer demands in the private and public sectors are driving this change to synthetic instruments. There are many bottom-line advantages in making one generic, economical SMS hardware design do the work of an expensive rack of different, measurement-specific instruments. ATE customers all want to reap the savings this promises. ATE vendors like Teradyne, as well as conventional instrumentation vendors like Agilent and Aeroflex, have announced or currently produce synthetic instruments.
The U.S. Military, one of the largest ATE customers in the world, wants new ATE systems to be implemented with synthetic instruments. Commercial electronics manufacturers such as Lucent, Boeing, and Loral are using synthetic instruments now in their factories.

Despite the fact that this change to synthetic instrumentation is inevitable and widely acknowledged throughout the ATE and T&M industries, there is a paucity of information available on the topic. A good deal of confusion exists about basic concepts, goals, and trade-offs related to synthetic instrumentation. Given that billions of dollars in product sales hang in the balance, it is important that clear, accurate information be readily available.
Plan of this Book

The basic goal of this book is to explain synthetic instrumentation at a high level, focusing on specific details when necessary to illustrate a crucial point. The first half of the book has a generally hardware flavor, and the second half is generally oriented toward theory and software, but unifying themes of synthetic measurement system high-level design are presented throughout.

The foremost unifying concept in the book, tying together hardware, system theory, and software in a tidy package, is the measurement map. This unique and powerful concept serves as a bridge between the synthetic cascade hardware architecture, the abstract idea of a measurement, and the expression of a synthetic instrument as an XML schema for automated software implementation and processing.

The power of the XML schema based on the measurement map is that concise, structured descriptions of measurements can be given directly by the test engineering user (possibly with the help of a GUI tool). These descriptions of a measurement can be automatically processed, optimized, and performed. Most importantly, the test engineer can specify exactly the measurement wanted without writing procedural test scripts or doing any other programming.

Thus, the book is a collage of hardware, software, and system concepts all aimed at the single goal of explaining and extending this new approach to measurement system design.
Chapter Outline

The book begins by answering the question: “What is a synthetic instrument?” First, we review the history of measurement, automated and otherwise. The advantages of the synthetic approach are enumerated and discussed, along with motivation for the fundamental idea of using generic hardware to perform specific measurements. Also presented are the necessary distinctions between synthesis and analysis, and between test and measurement.

Once this basic concept of a synthetic instrument is explained, the book moves on to a detailed discussion of hardware architecture. This discussion begins with the introduction of the control, codec, conditioning (CCC) cascade architecture, as either a stimulus or response asset. Architecture variations are described, including the basic Chinese restaurant menu (CRM) variation, compound stimulus, and multiplexing options.

The hardware discussion then moves sequentially through the stimulus and response system cascades, touching on critical issues and challenges these systems present. Among the issues considered for the stimulus side are direct digital synthesis (DDS) and controller characteristics, triggering and digital up-conversion, linearity, gain control, and adaptive fidelity improvement. In the response cascade, input conditioning, quantization, system fidelity trade-offs, adaptive interference cancellation, and matched filters are discussed. Examples of commercial, state-of-the-art subsystems are presented for stimulus and response.

The book also contains a complete chapter with a detailed description of a real-world, production-oriented synthetic measurement system. This real-world system example is roughly the midpoint of the book. At that point the focus shifts back to the theoretical as the central concept of a measurement map is introduced. The discussion links the measurement map to the test engineer's desired test, as well as to the hardware and calibration. This lays the groundwork for subsequent theory and software architecture discussions.

After the measurement map, there is a detailed discussion of signals and signal processing issues often encountered in synthetic measurement systems.
These include making measurements at different levels of the signal coding hierarchy, encoding and decoding strategies, bandwidth, and sampling strategies. Some practical issues with up- and down-converters are explored.

A chapter on calibration and accuracy attempts to clarify the way to think about measurement topics. There is also a general discussion of reference standards and uncertainty analysis. Along with general calibration topics, there is a major section on stimulus calibration that includes a detailed analysis of certain interpolation issues. The concept of de-embedding is also presented.

The next two chapters are an introduction to the XML method for encapsulating measurement descriptions, and an annotated example of a measurement description expressed in XML.

The book concludes with a listing of The Ten Mistakes in Synthetic Measurement System Design. This chapter draws on many of the concepts introduced throughout the previous material, reaching conclusions that apply to real-world applications.
Preface

My discussions ten years ago with Chris Nadovich about modular, software-based test instruments were born out of the same frustrations that are now driving the synthetic instrument movement: we were involved in integrating standard instruments into special applications for which they were not designed. As we struggled to program around unfortunate “features” and patched together solutions for new, unique capabilities, we wondered if there wasn't a better way. Ten years later, our ideas for this new class of instrumentation are starting to take hold in industry.

When we began, we didn't really know what to call this idea. As we progressed from project to project, synthetic instrument became the widely accepted term for this new type of test equipment.

The synthetic instrument concept is as revolutionary as the forward pass was in American football. Just as the idea of actually letting go of the football and throwing it forward where anyone could catch it was a challenge to football's status quo, synthetic instruments require a paradigm shift from instrument manufacturers. They empower the user and integrator to mold the instrument to their specific needs. Just as the early football establishment puzzled over how to run an offense that let go of the football, today's instrument vendors puzzle over how to deal with the freedom that synthetic instruments bestow upon their customers. As hard as it would be today to imagine football without the forward pass, in the future it will be just as hard to imagine the world of test instrumentation without synthetic instruments.

Widespread adoption of synthetic instrumentation concepts has been hampered by the lack of a common vocabulary. With the revolutionary and somewhat nebulous nature of synthetic instruments, they do not fit comfortably into the language of traditional instrumentation. Since the synthetic instrument can easily be molded through software, using traditional language to describe or specify this new class of instruments tends to mold it into a reflection of the old traditional instruments.
Having a terminology of its own will allow the synthetic instrument to take advantage of the full power and flexibility of the platform. This book helps create and define the lexicon for synthetic instruments.

The lack of measurement science behind synthetic instrument concepts also hinders their acceptance. Because the concepts of synthetic instruments are so new, there is a concern about how well instruments based on them will perform. For synthetic instruments to become fully accepted, the basic measurement science behind them will need to be studied and documented. This activity will take time and commitment on the part of company research organizations and academia. Until these organizations publish the necessary science and metrology to support synthetic instruments, there will continue to be reluctance to adopt them.

Over the past ten years, Chris and I have had success implementing synthetic instruments for a variety of test applications, allowing us to turn the concepts we had into reality. But as in the early stages of most revolutions, there is still considerable work ahead. Until synthetic measurement systems are mass-produced, their cost makes them uncompetitive for all but the most complex and demanding applications. Until their common lexicon is established, defining their requirements is still troublesome. The lack of documented measurement science continues to make it difficult for developers to undertake a synthetic instrument project.

This book, with the work and thought that Chris has put into it, is a major step in overcoming some of the limitations we have encountered, and takes us further toward having synthetic instruments fulfill their revolutionary destiny in the test and measurement industry.

—Jack Berlekamp
Acknowledgments

Although they are clearly the main focus, this book is not just about the synthetic instruments themselves. It also includes an ample helping of what I hope is my wisdom (others may call it my “jaded opinion”) based on my experience regarding how synthetic instruments and synthetic measurement systems should be built and used.

I openly admit that my experience is colored by the particular companies and products that I've worked for over the years. I have fought many a technical battle to get things done in what I perceived was the right way. I've won some, lost some, and in some cases I changed my perception as I was convinced I was wrong. Regardless of my personal opinion relative to the actual practices in real-world systems, it should not be construed that any particular failure or mistake I may analyze is associated with any specific real-world system or product just because of where I may have worked. Without significant exception, all the valuable opinions I express in this book (and whatever pearls of wisdom they may contain) are derived from the successes I have been involved with, one way or the other.

My interest in synthetic instruments would never have begun if it weren't for the influence of Jack Berlekamp, who led me into this topic back in the 1990s. Jack was a prominent champion of the idea, and I watched him struggle again and again to teach others the techniques and associated benefits of synthetic instrumentation. His crusade convinced me of the importance of setting down, in book form, what synthetic instruments were all about.

Bill Birurakis, who, to my knowledge, invented the term synthetic instrument, was another of my teachers. Although I interacted with Bill far less than with Jack, Bill's influence on my vision of synthetic instrumentation was substantial. Bill's crystal clear vision of what synthetic instruments are (and are not) eventually penetrated my dull brain.
I don't always agree with Bill on some of the gory details, or Jack for that matter, but much of my own point of view on the topic is a “synthesis” of these two individuals, with “parameterization” and “canonicalization” of my own, for bad or for good.

Any good writing found in the book is a result of Chris Lett's patient, yet ruthless proofreading suggestions. Chris has proofread every major bit of writing I've ever done. What little writing skill I may seem to have, I owe entirely to his literary influence over the years. Dan Frey and Jeff Bronfeld also contributed valuable suggestions to the book as they suffered through proofreading early drafts.

There are many other people, too numerous to list, with whom I've worked at Aeroflex, Flam & Russell, Checkpoint Systems, and other companies, who had significant influence on my view of automated test. Their impact on this book is considerable.
What’s on the CD-ROM?

Included on the accompanying CD-ROM:
A fully searchable eBook version of the text in Adobe PDF format.
XML source code examples.
Real-world synthetic instrument data sheets, application notes, and white papers from commercial instrument vendors.
CHAPTER 1

What is a Synthetic Instrument?

Engineers often confuse synthetic measurement systems with other sorts of systems. This confusion isn't because synthetic instrumentation is an inherently complex concept, or because it's vaguely defined, but rather because there are lots of companies trying to sell their old nonsynthetic instruments with a synthetic spin. If all you have to sell are pigs, and people want chickens, gluing feathers on your pigs and taking them to market might seem an attractive option. If enough people do this, and feathered pigs, goats, and cows, as well as turkeys and pigeons, are flooding the market, all sold as if they were chickens, real confusion can arise among city folk regarding what a chicken might actually be.

One of the main purposes of this book is to set the record straight. When you are finished reading it, you should be able to tell a synthetic instrument from a traditional instrument. You will then be an educated consumer. If someone offers you a feathered pig in place of a chicken, you will be able to tell that you are being duped.
History of Automated Measurement

Purveyors of synthetic instrumentation often talk disparagingly about traditional instrumentation. But what exactly are they talking about? Often you will hear a system criticized as “traditional rack-em-stack-em.” What does that mean? In order to understand what's being held up for scorn, you need to understand a little about the history of measurement systems.
Genesis

In the beginning, when people wanted to measure things, they grabbed a specific measurement device that was expressly designed for the particular measurement they wanted to make. For example, if they wanted to measure a length, they grabbed a scale, or a tape measure, or a laser range finder and carried it over to whatever they wanted to measure. They used that specific device to make their specific length measurement. Then they walked back and put the device away in its carrying case or other storage, returning it to the shelf where they originally found it (assuming they were tidy).

If you had a set of measurements to make, you needed a set of matching instruments. Occasionally, instruments did double duty (a chronometer with built-in compass), but fundamentally there was a one-to-one correspondence between the instruments and the measurements made.

That sort of arrangement works fine when you have only a few measurements to make, and you aren't in a hurry. Under those circumstances, you don't mind taking the time to learn how to use each sort of specific instrument, and you have ample time to do everything manually: finding, deploying, using, and stowing the instrument.
Figure 1-1. Manual measurements
Things went along like this for many centuries. But then in the 20th century, the pace picked up a lot. The minicomputer was invented, and people started using these inexpensive computers to control measurement devices. Using a computer to make measurements makes them faster, and it allows them to be made by someone who might not know much about operating the instruments. The knowledge for operating the instruments is encapsulated in software that anybody can run.

With computer-controlled measurement devices, you still needed a separate measurement device for each separate measurement. It seemed fortunate that you didn't necessarily need a different computer for each measurement. Common instrument interface buses, like the IEEE-488 bus, allowed multiple devices to be controlled by a single computer. In those days, computers were still expensive, so it helped matters to economize on the number of computers. And, obviously, using a computer to control measurement devices through a common bus requires measurement devices that can be controlled by a computer in this manner. An ordinary schoolchild's ruler cannot be easily controlled by a computer to measure a length. You needed a digitizing caliper or some other sort of length measurement device with a computer interface.

Things went along like this for a few years, but folks quickly got tired of taking all those instruments off the shelf, hooking them up to a computer, running their measurements, and then putting everything away. Sloppy, lazy folks who didn't put their measurement instruments away tripped over the interconnecting wires. Eventually, somebody came up with the idea of putting all these computer-controlled instruments into one big enclosure, making a measurement system that comprised a set of instruments and a controlling computer mounted in a convenient package. Typically, EIA standard 19" racks were used, and the resulting sorts of systems have been described as “rack-em-stack-em” measurement systems. Smaller systems were also developed with instruments plugged into a common frame using a common computer interface bus, but the concept is identical.
At this point, the people who made measurements were quite happy with the situation. They could have a whole slew of measurements made with the touch of a button. The computer would run all the instruments and record the results. There was little to deploy or stow. In fact, since so many instruments were crammed into these rack-em-stack-em measurement systems, some systems got so big that you needed to carry whatever you were measuring to the measurement system, rather than the other way around. But that suited measurement makers just fine.

On the other hand, the people who paid for these measurement systems (seldom the same people who used them) were somewhat upset. They didn't like how much money these systems were costing, how much room they took up, how much power they used, and how much heat they threw off. Racking up every conceivable measurement instrument into a huge, integrated system cost a mint, and it was obvious to everyone that there were a lot of duplicated parts in these big racks of instruments.

Modular Instruments

As I referred to above, there was an alternative kind of measurement system where measurement instruments were put into smaller, plug-in packages that connected to a common bus. This sort of approach is called modular instrumentation. Since this is essentially a packaging concept rather than an architecture paradigm, modular instruments are not necessarily synthetic instrumentation at all. In fact, they usually aren't, but since some of the advantages of modular packaging correspond to advantages of synthetic system design, the two are often confused.

Modular packaging can eliminate redundancy in a way that seems similar to how synthetic instruments eliminate redundancy. Modular instruments are boiled down to their essential measurement-specific components, with nonessential things like front panels, power supplies, and cooling systems shared among several modules.

Modular design saves money in theory. In practice, however, cost savings are often not realized with modular packaging. Anyone attempting to specify a measurement or test system in modular VXI packaging knows that the same instrument in VXI often costs more than an equivalent standalone instrument.
This seems absurd given that the modular version has no power supply, no front panel, and no processor. Why this economic absurdity occurs is more of a marketing question than a design optimization paradox, but the fact remains that modular approaches, although the darling of engineers, don't save as much in the real world as you would expect.

One might be tempted to point at the failure of modular approaches to yield true cost savings and predict the same sort of cost savings failure for synthetic instrumentation. The situation is quite different, however. The modular approach to eliminating redundancy and reducing cost does not go nearly as far as the synthetic instrument approach does. A synthetic instrument design will attempt to eliminate redundancy by providing a common instrument synthesis platform that can synthesize any number of instruments with little or no additional hardware. With a modular design, when you want to add another instrument, you add another measurement-specific hardware module. With a synthetic instrument, ideally you add nothing but software to add another instrument.
Synthetic Instruments Defined

Synergy means behavior of whole systems unpredicted by the behavior of their parts taken separately.
—R. Buckminster Fuller [B4]

Fundamental Definitions

Synthetic Measurement System
A synthetic measurement system (SMS) is a system that uses synthetic instruments implemented on a common, general-purpose physical hardware platform to perform a set of specific measurements.

Synthetic Instrument
A synthetic instrument (SI) is a functional mode or personality component of a synthetic measurement system that performs a specific synthesis or analysis function using specific software running on generic, nonspecific physical hardware.
There are several key words in these definitions that need to be emphasized and further amplified.

Synthesis and Analysis

The word “synthetic” in the phrase synthetic instrument might seem to indicate that synthetic instruments are synthesizers—that they do synthesis. This is a mistake. When I say synthetic instrument, I mean that the instrument is being synthesized. I am not implying anything about what the instrument itself does. A synthetic instrument might indeed be a synthesizer, but it could just as easily be an analyzer, or some hybrid of the two. I've heard people suggest the term “analytic instruments” rather than synthetic instruments in the context of an analysis instrument built with a synthetic architecture, and this isn't really correct either. Remember, you are synthesizing an instrument; the instrument itself may synthesize something, but that's another matter.

Generic Hardware

Synthetic instruments are implemented on generic hardware. This is probably the most salient characteristic of a synthetic instrument. It's also one of the bitterest pills to swallow when adopting an SI approach to measurements.

Generic means that the underlying hardware is not explicitly designed to do the particular measurement. Rather, the underlying hardware is explicitly designed to be general purpose. Measurement specificity is encapsulated in software. An exact analogy to this is the relationship between specific digital circuits and a general-purpose CPU. A specific digital circuit can be designed and hardwired with digital logic parts to perform a specific calculation. Alternatively, a microprocessor (or, better yet, a gate array) could be used to perform the same calculation using appropriate software. One case is specific, the other generic, with the specificity encapsulated in software.

The reason this is such a bitter pill is that it moves many instrument designers out of their hardware comfort zone.
Figure 1-2. Digital hardwired logic versus CPU
proach for instrumentation is to design and optimize the hardware so as to meet all measurement requirements. Specifications for measurement systems reflect this optimized-hardware orientation. Software is relegated to a subordinate role of collecting and managing measurement results, but no fundamental measurement requirements are the responsibility of any software. With a synthetic instrumentation approach, the responsibility for meeting fundamental measurement requirements is distributed between hardware and software. In truth, the measurement requirements are now primarily a system-level requirement, with those high-level requirements driving lower-level requirements. If anything, the result is that more responsibility is given to software to meet detailed measurement requirements. After all, the hardware is generic. As such, although there will be some broad-brush optimization applied to the hardware to make it adequate for the required instrumentation tasks, the ultimate responsibility for implementing detailed instrumentation requirements belongs to software.
Once system planners and designers understand the above point, it gives them a way out of a classic dilemma of test system design. I have seen many first attempts at synthetic instrumentation where this was not understood. In these misguided efforts, the hardware designers continued to bear most or all of the responsibility for meeting system-level measurement performance requirements. Crucial performance aspects that were best or only achievable in system software were assigned to hardware solutions, with hardware engineers struggling against their own system design and the laws of physics to make something of the impossible hand they had been dealt. Software engineers habitually ignore key measurement performance issues under the invalid assumption that “the hardware does the measurement.” They focus instead on well-known TPS issues (configuration management, test executive, database, presentation, user interface (UI), and so forth) that are valid concerns, but which should not be their only concerns.

One of the goals of this book is to raise awareness of this fact among people contemplating the development of synthetic instrumentation: a synthetic instrument is a system-level concept. As such, it needs a balanced system-level development effort to have any chance of being successful. Don't fall into the trap of turning to hardware as the solution for every measurement problem. Instead, synthesize the solution to the measurement problem using software and hardware together.

Organizations that develop synthetic instruments should make sure that the proper emphasis is placed. System-level goals for synthetic instruments are achieved by software. Therefore, the system designer should have a software skill-set and be intimately involved in the software development. When challenges are encountered during design or development, software solutions should be sought vigorously, with hardware solutions strongly discouraged. If every performance specification shortfall is fixed by a hardware change, you know you have things backward.

Dilemma—All Instruments Created Equal

When the Founding Fathers of the United States wrote into the Declaration of Independence the phrase “all men are created equal,” it was clear to everyone then, and still should be clear to everyone now, that this statement is not literally true.
Obviously, there are some tall men and some short men; men differ in all sorts of qualities. Half the citizens to which that phrase refers are actually women. What the Founding Fathers were doing was to establish a government that would treat all of its citizens as if they were equal. They were perfectly aware of the inequalities between people, but they felt that the government should be designed as if citizens were all equivalent and had equivalent rights. The government should be blind to the inherent and inevitable differences between citizens.

Doubtless, the resources of government are always limited. Some citizens who are extremely unequal to others may find that their rights are altered from the norm. For example, an 8-foot tall man might find some difficulty navigating most buildings, but the government would find it difficult to mandate that doorways all be taller than 8 feet. Thus, a consequence of the “created equal” mandate is that the needs of extreme minorities are neglected. This is a dilemma. Either extraordinary amounts of resources are devoted to satisfying these minority needs, which is unfair to the majority, or the needs of the minority are sacrificed to the tyranny of the majority. The endless controversies that result are well known in U.S. history.

You may be wondering where I'm going with this digression on U.S. political thought and why it has any place in a book about synthetic instrumentation. Well, the same sort of political philosophy characterizes the design of synthetic instrumentation and synthetic measurement systems. All instruments are created equal by the fiat of the synthetic instrument design paradigm. That means, from the perspective of the system designer, that the hardware design does not focus on and optimize the specific details of the specific instruments to be implemented. Rather, it considers the big picture and attempts to guarantee that all conceivable instruments can flourish with equal “happiness.”

But we all know that instruments aren't created equal. As with government, there are inevitable trade-offs in trying to provide a level playing field for all possible instruments. Some types of instruments and measurements require far different resources than others. Attempting to provide for these oddball measurements will skew the generic hardware in such a way that it does a bad job supporting more common measurements.
Here’s an example. Suppose there is a need for a general-purpose test and measurement system that would be able to test any of a large number of different items of some general class, and determine if they work. An example of this would be something like a battery tester. You plug your questionable battery into the tester, push a button, and a green light illuminates (or meter deflects to the green zone) if the battery is good, or red if bad. But suppose that it was necessary to test specialized batteries, like car batteries, or high power computer UPS batteries, or tiny hearing aid batteries. Nothing in a typical consumer battery tester does a good job of this. To legitimately test big batteries you would want to have a high-power load, cables thick enough to handle the current, and so on. Small batteries need tiny connectors and sockets that fit their various shapes. Adding the necessary parts to make these tests would drive up the cost, size, and other aspects of the tester. Thus, there seems to be an inherent compromise in the design of a generic test instrument. The dilemma is to accept inflated costs to provide a foundation for rarely needed, oddball tests, or to drop the support for those tests, sacrificing the ability to address all test needs. Fortunately, synthetic instrumentation provides a way to break out of this dilemma to some degree—a far better way than traditional instrumentation provided. In a synthetic instrumentation system, there is always the potential to satisfy a specific, oddball measurement need with software. Although software always has costs (both nonrecurring and recurring), it is most often the case that handling a minority need with software is easier to achieve than it is with hardware. A good, general example of this is how digital signal processing (DSP) can be applied in post processing to synthesize a measurement that would normally be done in real time with hardware. A specific case would be demodulating some form of encoding. Rather than include a hardware demodulator in order to perform some measurement on the encoded data, DSP can be applied to the raw data to demodulate in post-processing. In this way, a minority need is addressed without adding specialized hardware.
Continuing with this example, if it turns out that DSP post-processing does not have sufficient performance to achieve the goal of the measurement, one option is to upgrade the controller portion of the control, codec, conditioning (CCC) instrument. Maybe then the DSP will run adequately. Yes, the hardware is now altered for the benefit of a single test, but not by adding hardware specific to that test. This is one of my central points. As I will discuss in detail later on, I believe it is a mistake to add hardware specific to a particular test.
Advantages of Synthetic Instruments

No one would design synthetic instruments unless there was an advantage: above all, a cost advantage. In fact, there are several advantages that allow synthetic instruments to be more cost-effective than their nonsynthetic competitors.

Eliminating Redundancy

Ordinary rack-em-stack-em instrumentation contains repeated components. Every measurement box contains a slew of parts that also appear in every other measurement box. Typical repeated parts include:
Power Supply
Front Panel Controls
Computer Interfaces
Computer Controllers
Calibration Standards
Mechanical Enclosures
Interfaces
Signal Processing
A fundamental advantage of a synthetic approach to measurement system design is that adding a new measurement does not imply that you need to add another measurement box. If you do add hardware, the hardware comes from a menu of generic modules. Any specificity tends to be restricted to the signal conditioning needed by the sensor or effector being used.
Stimulus Response Closure: The Calibration Problem

Many of the redundancies eliminated by synthetic instrumentation are the same as the redundancies eliminated by modular instrument approaches. However, one significant redundancy that synthetic instruments are uniquely able to eliminate is the duplication of response components that support stimulus, and of stimulus components that support response. I call this efficiency closure. I will show, however, that this sort of redundancy elimination, while facilitated by synthetic approaches, has more to do with using a system-level optimization rather than an instrument-level optimization.

A signal generator (a box that generates an AC sine wave at some frequency and amplitude) is a typical stimulus instrument that you may encounter in a test system. When a signal generator creates the stimulus signal, it must do so at a known, calibrated signal level. Most signal generators achieve this by a process called internal leveling. Internal leveling is implemented by building a little response measurement system inside the signal generator. The level of the generator is then adjusted in a feedback loop so as to set the level to a known, calibrated point.

Figure 1-3. Signal-generator leveling loop
As you can see in Figure 1-3, this stimulus instrument comprises not only stimulus components, but also response measurement components. It may be the case that elsewhere in the overall system, those response components needed internally in the signal generator are duplicated in some other instruments. Those components may even be the primary function of what might be considered a "true" response instrument. If so, the response function in the signal generator is redundant.
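In software terms, the internal leveling just described is a small feedback loop. The sketch below, in Python, assumes two hypothetical callables, set_gain and measure_level, standing in for the generator's level control and its built-in response detector; neither name comes from a real instrument driver:

    def level_output(set_gain, measure_level, target_dbm, tol_db=0.05, max_iter=20):
        """Adjust output gain until the internally measured level matches the
        target within tolerance. set_gain and measure_level are hypothetical
        wrappers around the generator's control and detector paths."""
        gain_db = 0.0
        for _ in range(max_iter):
            set_gain(gain_db)
            error_db = target_dbm - measure_level()
            if abs(error_db) <= tol_db:
                return gain_db          # leveled to a known, calibrated point
            gain_db += error_db         # simple proportional correction
        raise RuntimeError("leveling loop did not converge")

In an integrated system with closure, measure_level could just as well be served by the system's main response asset, which is exactly the redundancy elimination being described.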
Naturally, this sort of redundancy is a true waste only in an integrated measurement system with the freedom to use available functions in whatever manner desired, combining them as needed. A signal generator has to work standalone, and so must carry a redundant response system within itself. Even a synthetic signal generator designed for standalone modular VXI or PXI use must have this response measurement redundancy within.

It would certainly be possible to look at a system comprising a set of nonsynthetic instruments and to optimize away stimulus-response redundancy. That would be possible, but it's difficult to do in practice. The reason it is difficult is that nonsynthetic instruments tend to be specific in their stimulus and response functions. It's hard to match up functions and factor them out. In contrast, when one looks at a system designed with synthetic stimulus and response components, the chance of finding duplicate functions is much higher. If synthetic functions are all designed around the same signal conditioner, converter, DSP subsystem cascade, then a response system provided in a stimulus instrument will have exactly the same architecture as one provided in a response instrument. The duplications factor out directly.

Measurement Integration

One of the most powerful concepts associated with synthetic instrumentation is the concept of measurement integration.

Fundamental Definitions
Measurement Integration: Combining disparate measurements into a single measurement map.

From my discussion of a measurement map in the section titled "Abscissas and Ordinates," you will learn how to describe a measurement in a way that encourages measurement integration. When you specify a list of ordinates and abscissas, and state how the abscissas are sequenced, you have effectively packaged a bunch of measurements into a tidy bundle. This is measurement integration in its purest sense.
Measurement integration is important because it allows you to get the most out of the data you take. The data set is seen as an integrated whole that is analyzed, categorized, and visualized in whatever way makes the most sense for the given test. This is in contrast with the more prevalent way of approaching test, where a separate measurement is done sequentially for each test. There is no intertest communication (beyond basic prerequisites and an occasional parameter). The result of this redundancy is slow testing and ambiguity in the results.

Measurement Speed

Synthetic instruments are unquestionably faster than ordinary instruments. There are many reasons for this, but the principal reason is that a synthetic instrument does a measurement that is exactly tuned to the needs of the test being performed. Nothing more, nothing less. It does exactly the measurement that the test engineer wants. In contrast, ordinary instruments are designed to do a certain kind of measurement, but the way they do it may not be optimized for the task at hand. The test engineer is stuck with what the ordinary instrument has in its bag of tricks.

For example, there is a speed-accuracy trade-off on most measurements. If an instrument doesn't know exactly how much accuracy you need for a given test, it must take whatever time is needed for the maximum accuracy you might ever want. Consequently, you get the slowest measurement. It is true that many conventional instruments with a severe speed-accuracy trade-off have provision to specify a preference (e.g., a frequency counter that allows you to specify the count time and/or digits of precision), but the test engineer is always locked into the menu of compromises that the instrument maker anticipated.

Another big reason why synthetic instrumentation makes faster measurements is that the most efficient measurement techniques and algorithms can be used. Consider, for example, a common swept-filter spectrum analyzer. This is a slow instrument when fine frequency resolution is required simultaneously with a wide span. In contrast, a synthetic spectrum analyzer based on fast Fourier transform (FFT) processing will not suffer a slowdown in this situation.
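As a rough illustration of why the FFT approach scales so well, the fragment below (Python with NumPy) computes a wide-span spectrum from a single captured block; the sample rate, block length, and Hann window are arbitrary choices made for the example:

    import numpy as np

    def fft_spectrum(samples, fs, window=np.hanning):
        """Power spectrum of one captured block. Resolution is fs/len(samples)
        and span is fs/2, with no swept acquisition required."""
        w = window(len(samples))
        spectrum = np.fft.rfft(samples * w)
        freqs = np.fft.rfftfreq(len(samples), d=1 / fs)
        # Scale so a full-scale tone reads near 0 dB despite the window.
        power_db = 20 * np.log10(2 * np.abs(spectrum) / np.sum(w) + 1e-20)
        return freqs, power_db

    # A 1M-point capture at 100 MS/s: roughly 100 Hz resolution across a
    # 50 MHz span from one acquisition plus one FFT.
    fs = 100e6
    t = np.arange(2**20) / fs
    freqs, power_db = fft_spectrum(np.cos(2 * np.pi * 12.5e6 * t), fs)

A swept-filter analyzer would have to dwell at each resolution-bandwidth step to cover the same span, whereas here the resolution is set simply by how many samples are captured.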
Decreased time to switch between measurements is another noteworthy speed advantage of synthetic instrumentation. This ability goes hand-in-hand with measurement integration. When you can combine several different measurements into one, eliminating both the intermeasurement setup times and the redundancies between sets of overlapping measurements, you can see surprising speed increases.

Longer Service Life

Synthetic measurement systems don't become obsolete as quickly as less flexible, dedicated measurement hardware systems. The reason for this is quite evident: synthetic measurement systems can be reprogrammed to perform future measurements not yet imagined, while still performing legacy measurements done with older systems. Synthetic measurement systems give you your cake and allow you to eat it too, at least in terms of nourishing a wide variety of past, present, and future measurements.

In fact, one of the biggest reasons the U.S. military is so interested in synthetic measurement systems is this unique ability to support the old and the new with one, unchanging system. Millions of man-hours are invested in legacy test programs that expect to run on specific hardware. That specific hardware becomes obsolete. Now what do we do? Rather than dumping everything and starting over, a better idea is to develop a synthetic measurement system that can implement synthetic instruments that do the same measurements as hardware instruments from the old system, while at the same time being able to do new measurements. Best of all, the new SMS hardware is generic to the measurements. That means it can go obsolete piecemeal or in great chunks, and the resulting hardware changes (if done right) don't affect the measurements significantly.
Synthetic Instrument Misconceptions

Now that you understand what a synthetic instrument is, let's tackle some of the common misconceptions surrounding this technology.
Why not Just Measure Volts with a Voltmeter?

The main goal of synthetic instrumentation is to achieve instrument integration through the use of multipurpose stimulus/response synthesis/analysis hardware. Although there may be nonsynthetic, commercial off-the-shelf (COTS) solutions to various requirements, we intentionally eschew them in favor of doing everything with a synthetic CCC approach.

It should be obvious that a COTS, measurement-specific instrument that has been in production many years, and has gone through myriad optimizations, reviews, updates, and improvements, will probably do a better job of measuring the thing it was designed to measure than a first-revision synthetic instrument. However, as the synthetic instrument is refined, there comes a day when its performance rivals or even surpasses the performance of the legacy, single-measurement instrument. The reason this is possible is that the synthetic instrument can be continuously improved, with better and better measurement techniques incorporated, even completely new approaches. The traditional instrument can't do that.

Synthetic Musical Instruments

This book is about synthetic measurement instruments, but the concept is not far from that of a synthetic musical instrument. Musical instrument synthesizers generate sound-alike versions of many classic instruments by means of generic synthesis hardware. In fact, the quality of the synthesis in synthetic musical instruments now rivals, and in some cases surpasses, the musical-aesthetic quality of the best classic mechanical musical instruments. Modern musical synthesis systems can also accurately imitate the flaws and imperfections of traditional instruments. This situation is exactly analogous to the eventual goal of synthetic instruments—that they will rival and surpass, as well as imitate, classic dedicated hardware instruments.

Virtual Instruments

In the section titled "History of Automated Measurement," I described automated, rack-em-stack-em systems. People liked these systems, but they were too big and pricey. As a consequence, modular approaches were developed that reduced size and presumably cost by eliminating redundancy in the design. These modular packaging approaches had an undesirable side effect: they made the instrument front panels tiny and crowded. Anybody who used modular plug-in instruments in the 1970s and 1980s knows how crowded some of these modular instrument front panels got.

Figure 1-4. Crowded front panel on a Tektronix spectrum analyzer
It occurred to designers that if the instrument could be fully controlled by computer, there might be no need for a crowded front panel. Instead, a soft front panel could be provided on a nearby PC that would serve as a way for a human to interact with the instrument. Thus, the concept of a virtual instrument appeared. Virtual instruments were actually conventional instruments implemented with a pure, computer-based user interface. Certain software technologies, like National Instruments’ LabVIEW product, facilitated the development of virtual instruments of this sort. The very name “virtual instrument” is deeply entwined with LabVIEW. In a sense, LabVIEW defines what a virtual instrument was, and is.
Synthetic instruments running on generic hardware differ radically from ordinary instrumentation, where the hardware is specific to the measurement. Therefore, synthetic instruments also differ fundamentally from virtual instruments where, again, the hardware is specific to a measurement. In this latter case, however, the difference is more disguised, since a virtual instrument block diagram might look similar to a synthetic instrument block diagram. Some might call this a purely semantic distinction, but in fact the two are quite different. Virtual instruments are a different beast than synthetic instruments because virtual instrument software mirrors and augments the hardware, providing a soft front panel or managing the data flow to and from a conventional instrument in a rack, but does not start by creating or synthesizing something new from generic hardware.

This is the essential point: synthetic instruments are synthesized. The whole is greater than the sum of the parts. To use Buckminster Fuller's word, synthetic instruments are synergistic instruments[B4]. Just as a triangle is more than three lines, synthetic instruments are more than the triangle of hardware (control, codec, conditioning) they are implemented on. Therefore, one way to tell if you have a true synthetic instrument is to examine the hardware design alone and try to figure out what sort of instrument it might be. If all you can determine are basic facts, like the fact that it's a stimulus or response instrument, or the fact that it might do something with signals on optical fiber, but nothing about what it's particularly designed to create or measure (the measurement specificity being all hidden in software), then you likely have a synthetic instrument.

I mentioned National Instruments' LabVIEW product earlier in the context of virtual instruments. The capabilities of LabVIEW are tuned more toward an instrument stance than a measurement stance (at least at the time of this writing), and therefore do not currently lend themselves as effectively to the types of abstractions necessary to make flexible synthetic instrumentation as do other software tools. In addition, LabVIEW's non-object-oriented approach to programming prevents the application of powerful object-oriented (OO) benefits like efficient software reuse. Since OO techniques work well with synthetic instrumentation, LabVIEW's shortcoming in this regard represents a significant limitation.
That said, there’s no reason that LabVIEW can’t be used to as a tool for creating and manipulating synthetic instruments, at some level. Just because LabVIEW is currently tuned to be a non-OO virtual instrument tool, doesn’t mean that it can’t be a SI tool to some extent. Also, it should be noted that the C++-based LabWindows environment doesn’t share as many limitations as the non-OO LabVIEW tools. Analog Instruments One common misconception about synthetic instruments is that they can be only analog measuring instruments. That is to say, they are not appropriate for digital measurements. Because of the digitizer, processing has moved from the digital world to the analog world, and what results is only useful for analog measurements. Nothing could be further from the truth. All good digital hardware engineers know that digital circuitry is no less “analog” than analog circuits. Digital signaling waveforms, ideally thought of as fixed 1 and 0 logic levels are anything but fixed: they vary, they ring, they droop, they are obscured by glitches, spurs, hum, noise, and other garbage. Performing measurements on digital systems is a fundamentally analog endeavor. As such, synthetic instrumentation implemented with a CCC hardware architecture is equally appropriate for digital test and analog test. There is, without doubt, a major difference between the sorts of instruments that are synthesized to address digital versus analog measurement needs. Digital systems often require many more simultaneous channels of stimulus and response measurement than do analog systems. But bandwidths, voltage ranges, and even interfacing requirements are similar enough in enough cases to make the unified synthetic approach useful for testing both kinds of systems with the same hardware asset. Another difference between analog and digital oriented synthetic measurement systems is the signal conditioning used. In situations where only the data is of interest, rather than the voltage waveform itself, the best choice of signal conditioner may be nonlinear. Choose nonlinear digitalstyle line drivers and receivers in the conditioner. Digital drivers will give us better digital waveforms, per dollar, than linear drivers. Similarly, when implementing many channels of response measurement, a digital receiver will be far less expensive than a linear response asset. 19
Chapter 2: Synthetic Measurement System Hardware Architectures

The heart of the hardware system concept for synthetic instrumentation is a cascade of three subsystems: digital control and timing, analog-digital conversion (codec), and analog signal conditioning. The underlying assumption of the synthetic instrument concept is that this cascade is a good choice for the architecture of next-generation instrumentation. In this chapter, I will explore the practical and theoretical implications of this concept. Other architectural options and concepts that relate to the fundamental concept will also be considered.
System Concept—The CCC Architecture

The cascade of three subsystems, control, codec, and conditioning, is shown in Figure 2-1.

Figure 2-1. Basic CCC cascade
I will call this architecture the three C's, or the CCC Architecture: Control, Codec, and Conditioning. In a stimulus asset, the controller generates digital signal data that is converted to analog form by the codec; the conditioner then adjusts the voltage, current, bandwidth, impedance, or coupling of that signal, or performs any of a myriad of other possible interface transformations.
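As a purely illustrative software model, the stimulus-side cascade can be expressed as three composable stages. The class and the ideal codec below are assumptions invented for the sketch (in Python); they do not correspond to any real driver stack:

    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class StimulusCascade:
        controller: Callable[[], Sequence[float]]                  # produces digital samples
        codec: Callable[[Sequence[float]], Sequence[float]]        # D/A conversion model
        conditioner: Callable[[Sequence[float]], Sequence[float]]  # interface to the DUT

        def generate(self):
            # Signal flows controller -> codec -> conditioner, as in Figure 2-1.
            return self.conditioner(self.codec(self.controller()))

    # Example wiring: a controller that plays a stored ramp, an ideal codec,
    # and a conditioner that simply scales to the DUT's voltage range.
    ramp = StimulusCascade(
        controller=lambda: [n / 255 for n in range(256)],
        codec=lambda codes: codes,                  # ideal converter, for illustration
        conditioner=lambda volts: [10.0 * v for v in volts],
    )
    waveform = ramp.generate()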
Signal Flow

Thus, the "generic" hardware used as a platform for synthetic instrumentation comprises a cascade of three functional blocks. This cascade might flow either way, depending on the mode of operation. A sensor might provide a signal input for signal conditioning, analog-to-digital conversion (A/D), and processing; or, alternatively, processing might drive digital-to-analog conversion (D/A), which drives signal conditioning to an effector.

In my discussions of synthetic instrumentation architecture, I will often treat digital-to-analog conversion as an equivalent to analog-to-digital conversion. The sense of the equivalence is that these two operations represent a coding conversion between the analog and digital portions of the system. The only difference is the direction of signal flow through them. This is exactly the same as how I will refer to signal conditioners as a generic class that comprises both stimulus (output) conditioners and response (input) conditioners.

Figure 2-2. Synthetic architecture cascade—flow alternatives
Thus, in this book, when I refer to either of the two sorts of converters as an equivalent element in this sense, I will often call it a codec[1] or converter, rather than be more restrictive and call it an A/D or a D/A. This will allow us to discuss certain concepts that apply to both stimulus and response instruments equally. Similarly, I will refer to signal conditioners and digital processors (controllers) generically, as well as in a specific stimulus or response context.

[1] Although the word "codec" refers to both a coder and a decoder, I will also use it to refer to either one individually or to both collectively.
The Synthetic Measurement System

When you put a stimulus asset and a response asset together, with a device under test (DUT)[2] in the middle, you have a full-blown synthetic measurement system (SMS).

Figure 2-3. Synthetic measurement system
Chinese Restaurant Menu (CRM) Architecture

When one tries to apply the CCC hardware architecture to a wide variety of measurement problems, it often becomes evident that practical limitations arise in the implementation of a particular subsystem with respect to certain measurements. For example, voltage ranges might stress the signal conditioner, or bandwidth might stress the codec, or data rates might stress the controller, and so on. Given this problem, the designer is often inclined to start substituting sections of hardware for different applications. With this approach, the overall system begins to comprise several CCC cascades, with portions connected as needed to generate a particular stimulus. Together they form a sort of "Chinese Restaurant Menu" of possibilities—CRM architecture for short.
[2] In the automated test community, engineers often use the jargon acronym DUT to refer to the device under test. Some engineers prefer unit under test (UUT). Whatever you call it, DUT or UUT, it represents the "thing" you are making measurements about. Most often this is a physical thing, possibly a system. Other times it may be more abstract, a communications channel, for instance. In all cases, it's something separate from the measurement instrument.
Figure 2-4. CRM architecture (Column A: State Machine Controller, DSP Controller, PIO-Based Controller; Column B: 1-bit Codec, 12-bit Codec, 18-bit Codec; Column C: Digital Conditioner, Broadband Conditioner, High Voltage Conditioner)
To create an instrument hardware platform, select one item from column A (a digital controller), one item from column B (a codec asset), and one item from column C (a signal conditioner) to form a single CCC cascade from which to synthesize your instrument.

For example, consider the requirements for a signal generator versus a pulse generator. A pulse generator, unless it needs rise-time control, can get away with a "1-bit" D/A. Even with rise-time control, only a few bits are really needed if a selection of reconstruction filters is available. On the other hand, a high-speed pulse timing controller is needed, possibly with some analog fine-delay control, and the signal conditioning would be best done with a nonlinear pulse buffer with offset capability and specialized filtering for rise/fall-time control. In contrast, the signal generator's fidelity requirements lead us to a finely quantized D/A, with at least 12 bits. A direct digital synthesis (DDS) oriented controller is needed to generate periodic waveforms, and a linear, low-distortion analog buffer amplifier is mandatory for signal conditioning.

Therefore, when faced with requirements for a comprehensive suite of tests, one way to handle the diversity of requirements is to provide multiple choices for each of the "three C's" in the CCC architecture.
Table 2-1 is one possible menu. Column A is the control and timing circuitry, column B is the D/A codec, and column C is the signal conditioning. To construct a particular stimulus, you need to select appropriate functions from columns A, B, and C that all work together for your application.

Table 2-1. CRM architecture

Control & Timing                 | Codec Conversion                        | Signal Conditioning
---------------------------------|-----------------------------------------|---------------------------
DSP or µP                        | 1-bit (on/off) Voltage Source (100 GHz) | Nonlinear Digital Driver
High Speed State Machine         | 18-bit, 100 kHz D/A                     | Wideband Linear Amplifier
Med Speed State Machine with RAM | 12-bit, 100 MHz D/A                     | High Voltage Amplifier
Parallel I/O Board driven by TP  | 8-bit, 2.4 GHz D/A                      | Up-converter
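One informal way to picture the CRM idea in software is as data plus a selection rule. The sketch below (Python) abbreviates the menu entries from Table 2-1 and only enforces the one-item-per-column rule; it is a toy, not a configuration format from any real system:

    CRM_MENU = {
        "control": ["DSP or uP", "High Speed State Machine",
                    "Med Speed State Machine with RAM", "Parallel I/O Board"],
        "codec": ["1-bit Voltage Source", "18-bit, 100 kHz D/A",
                  "12-bit, 100 MHz D/A", "8-bit, 2.4 GHz D/A"],
        "conditioning": ["Nonlinear Digital Driver", "Wideband Linear Amplifier",
                         "High Voltage Amplifier", "Up-converter"],
    }

    def build_cascade(control, codec, conditioning):
        """Validate a CRM selection: exactly one item from each column."""
        for column, choice in (("control", control), ("codec", codec),
                               ("conditioning", conditioning)):
            if choice not in CRM_MENU[column]:
                raise ValueError(f"{choice!r} is not on the {column} menu")
        return {"control": control, "codec": codec, "conditioning": conditioning}

    # A signal-generator-flavored cascade versus a pulse-generator-flavored one.
    siggen = build_cascade("DSP or uP", "12-bit, 100 MHz D/A",
                           "Wideband Linear Amplifier")
    pulsegen = build_cascade("High Speed State Machine", "1-bit Voltage Source",
                             "Nonlinear Digital Driver")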
The items from the CRM architecture can be selected on the spot by means of switches, or, alternatively, stimulus modules can be constructed with hardwired selections from the menu. The choice of module can be specified in the measurement strategy, or it can be computed using some heuristic.

But the CRM design is somewhat of a failure; it compromises the goal of synthetic design. The goal of synthetic instrumentation design is to use a single hardware asset to synthesize any and all instruments. When you are allowed to pick and choose, even from a CRM of CCC assets, you have taken your first step down the road to hell—the road to rack-n-stack modular instrumentation. In the limit, you are back to measurement-specific hardware again, with all the redundancy put back in. Rats! And you were doing so well! Let's take another look at the signal generator and pulse generator from the "pure" synthetic instrumentation perspective. Maybe you can save yourself.

Parameterization of CCC Assets

One way to fight the tendency to design with a CRM architecture is to use asset parameterization. Instead of swapping out a CCC asset completely, design the asset to have multiple modes or personalities so that it can meet multiple requirements without being totally replaced.
This sort of approach is not unlike nonpolymorphic functional factoring in procedural software, where type parameters are given to a function. Rather than making complete new copies of an asset with certain aspects of its behavior altered to fit each different application, a single asset is used, with those changing performance aspects programmable based on its current type. For example, rather than making several different signal conditioners with different bandwidths, make a single signal conditioner that has selectable bandwidth. To the extent that bandwidth can be parameterized in a way that does not require the whole asset to be replaced (the equivalent of the CRM design), the design is now more efficient through its use of parameterization.

Figure 2-5. Parameterized architecture
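In software terms, the difference between the CRM menu and a parameterized asset is the difference between choosing among several fixed objects and reconfiguring a single one. The sketch below (Python) invents a hypothetical conditioner with a few selectable bandwidths purely for illustration:

    class SignalConditioner:
        """One conditioner asset with a programmable personality, rather than a
        menu of separate fixed-bandwidth conditioners (the CRM alternative)."""

        BANDWIDTHS_HZ = (1e5, 1e7, 1e9)   # hypothetical selectable bandwidths

        def __init__(self):
            self.bandwidth_hz = self.BANDWIDTHS_HZ[0]
            self.gain_db = 0.0

        def configure(self, bandwidth_hz, gain_db):
            if bandwidth_hz not in self.BANDWIDTHS_HZ:
                raise ValueError("bandwidth not supported by this asset")
            self.bandwidth_hz = bandwidth_hz
            self.gain_db = gain_db

    # The same physical asset serves a narrowband measurement and a broadband
    # one simply by being reparameterized between tests.
    cond = SignalConditioner()
    cond.configure(bandwidth_hz=1e5, gain_db=20.0)   # narrowband, high gain
    cond.configure(bandwidth_hz=1e9, gain_db=0.0)    # broadband, unity gain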
The choice between multiple modules with different capabilities and a single module with multiple capabilities is a design decision that must be made by the synthetic measurement system developers based on a complete view of the requirements. There is no way to say a priori what the right way is to factor hardware. The decision must be made in the context of the design requirements. However, it is important to be aware of the tendency to modularize and to solve specific measurement issues by racking up more specific hardware, which leads away from the synthetic instrument approach.

Architectural Variations

Although in many circumstances there is good potential for the basic synthetic instrument architecture, one can anticipate that some architectural variations and options need to be considered. Sadly, many of these variations backslide to habitual, sanctioned approaches. There's nothing to be done about this; the realities of commercial development must be acknowledged. The reasons synthetic instrument designers consider deviations from the pure "three C's" tend to be matters of realizability, cost, risk, or marketing. Sometimes the state of the art dictates that a more conservative design approach be taken. And, for whatever reason, customers may simply want a different approach.

In this book, I won't bother to consider architecture variations that are expedient for reasons of risk or marketing, but I will mention a few variations that are wins with respect to cost and realizability. These tend to be SMS architectural enhancements rather than mere expedient hacks to get something built. Enhancements like compound stimulus and multiplexing are two of the most prominent.

Compound Stimulus

One architecture enhancement of CCC that is often used is called compound stimulus, where one stimulus is used in the generation of another stimulus. The situation to which this enhancement lends itself is whenever state-of-the-art D/A technology is inadequate, expensive, or risky to use to generate an encoded stimulus signal directly with a single D/A. The classic example of compound stimulus is the use of an up-converter to generate a modulated bandpass signal waveform. This is accomplished by a combination of subsystems as shown in Figure 2-6.

Figure 2-6. Compound stimulus
Note that the upper "synthesizer" block shows an internal structure that parallels the desired CCC structure of a synthetic instrument. The output of this synthesizer is fed down to the signal conditioning circuit of the lower stimulus system. The lower system generates the modulation that is up-converted to form the compound stimulus.

Whenever a signal encoding operation is performed, there will be an open input for either the encoded signal, or the signal on which it is encoded. This is an opportunity for compound stimulus. Signal encoding is inherently a stimulus compounding operation. In recognition of this situation, CCC assets can be deployed with a switch matrix that allows their outputs to be directed not only at DUT inputs, but alternatively to coding inputs on other CCC stimulus assets. This results in a CCC compound matrix architecture that allows us to deploy all assets for the generation of compound-encoded waveforms.

Simultaneous Channels and Multiplexing

An issue that is often ignored in the development of synthetic measurement systems is the need for multiple, simultaneous stimuli, and multiple, simultaneous measurements. With many tests, a single stimulus is not enough. And although, most times, multiple responses can be measured sequentially, there are some cases where simultaneous measurement of response is paramount to the goal of the test.

Given this need, what do we do? The most obvious solution is to simply build more CCC channels. Duplicate the CCC cascade for stimulus as many times as needed so as to provide the required stimuli. Similarly, duplicate the response cascade to be able to measure as many responses as needed. This obvious solution certainly can be made to work in many applications, but what may not be obvious is that there are other alternatives. There are other ways to make multiple stimuli, and to make multiple response measurements, without using completely duplicated channels. Moreover, it may be the case that duplicated channels cost more and have inferior performance as compared to the alternatives.

What are these cheaper, better alternatives? They all fall into a class of techniques called multiplexing. When you multiplex, you make one channel do the work of several. The way this is accomplished is by taking advantage of orthogonal modes in physical media. Any time uncoupled modes occur, you have an opportunity to multiplex. There are various forms of multiplexing, each based on a set of modes used to divide the channels. The most common forms used in practice are the following:

Space Division Multiplexing
Time Division Multiplexing
Frequency Division Multiplexing
Code Division Multiplexing
These multiplexing techniques can be implemented in hardware, of course, but they can also be synthetically generated. After all, if you have the ability to generate any of a menu of synthetic instruments on generic hardware, why can't you synthesize multiple instruments simultaneously?

Space Division Multiplexing

Space division multiplexing (SDM) is a fancy name for multiple channels; physical separation in space is the orthogonal mode set. SDM has the unique advantage of being obvious and simple. If you want two stimuli, build two stimulus cascade systems. If you want two responses, build two response cascades. A more subtle advantage, but again unique to SDM and very important, is the fact that multiple SDM channels each have exactly the same bandwidth performance as a single channel. This may seem trivial, but it is not strictly true with other techniques. Another advantage of space multiplexing is that it tends to achieve good orthogonality. That is to say, there is little crosstalk between channels. What is generated or measured on one channel does not influence another.

But there are many significant disadvantages to space multiplexing. Foremost among these is the fact that N channels implemented with space multiplexing cost at least N times more than a single channel, sometimes more. Another prominent disadvantage of space multiplexing is a consequence of the good orthogonality: because the channels are completely independent, they can drift independently. Thus, gain drift, offsets, and problems like phase and delay skew, among others, will all be worse in a space-multiplexed system.
Figure 2-7. Space division multiplexing
Time Division Multiplexing

The most common alternative to space multiplexing is time division multiplexing (TDM). The TDM technique is based on the idea of using a multiway switch, often called a commutator, to divide a single channel into multiple channels in a time-shared manner.

Figure 2-8. Time division multiplexing
Time multiplexing schemes are easy to implement, and to the extent that the commutator is inexpensive compared to a channel, a TDM approach can be far less expensive than almost any other multiplexing approach. Another advantage is that, because the same physical channel is used for each measurement or stimulus, there is less concern about interchannel drift or skew.
On the downside, a new concern becomes important: the speed of the commutator in relation to the bandwidth of the signals. Unlike SDM, where it's obvious that the multiple channels each work as well, bandwidth-wise, as an individual channel, with TDM there may be a problem. The available single-channel bandwidth is shared among the multiplexed channels. It can be shown[B1], however, that if the commutator visits each channel at least at the Nyquist rate (twice the bandwidth of the channel), all information from each channel is preserved. This is a point that recurs in other multiplexing techniques. It turns out that for N TDM-multiplexed channels to have the same bandwidth performance as a single channel with bandwidth B, they need to be multiplexed onto a single channel with bandwidth at least N times B. Thus, TDM (like all other multiplexing techniques) represents a space-bandwidth trade-off.

TDM is relatively easy to implement synthetically. Commutation and decommutation are straightforward digital techniques that can be used exclusively in the synthetic realm, or paired with specific commutating hardware. One possible architecture variation that uses a virtual commutator implemented synthetically is shown in Figure 2-9.

Figure 2-9. Time division multiplexing with virtual commutator
The application of TDM shown in Figure 2-9 assumes a multi-input, single-output DUT. Multiplexing is used in a way that allows a single stimulus channel to exercise all the inputs, and the corresponding responses are easily sliced apart in the controller.
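As a minimal sketch of the virtual-commutator idea, the Python fragment below interleaves several logical channels onto one stream and slices them apart again in software. The fixed round-robin ordering and the rates in the example are assumptions for illustration only:

    import numpy as np

    def commutate(channels):
        """Interleave several per-channel streams onto one physical channel."""
        return np.column_stack(channels).reshape(-1)

    def decommutate(interleaved, n_channels):
        """Slice one time-multiplexed stream back into per-channel streams,
        assuming a fixed round-robin visiting order."""
        usable = len(interleaved) - (len(interleaved) % n_channels)
        frames = np.asarray(interleaved[:usable]).reshape(-1, n_channels)
        return [frames[:, k] for k in range(n_channels)]

    # Three 1 kS/s channels share one 3 kS/s physical channel and are
    # recovered again in the controller.
    chans = [np.full(1000, level) for level in (1.0, 2.0, 3.0)]
    restored = decommutate(commutate(chans), n_channels=3)

Note that the composite stream must run N times faster than any one channel, which is the space-bandwidth trade-off described above.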
Frequency Division Multiplexing

Another common alternative to space multiplexing is frequency division multiplexing (FDM). The FDM technique is based on the idea of using different frequencies, or subcarriers, to divide a single channel into multiple channels in a frequency-allocated manner. FDM schemes are somewhat harder to implement than TDM. Some sort of mixer is needed to shift the channels to their respective subcarriers. Later they must be filtered apart and unmixed. Modern DSP simplifies a lot of the issues that would otherwise make this technique more costly.

Synthetic FDM is straightforward to achieve. In Figure 2-10, three frequencies, F1, F2, and F3, are used with mixers to allow a single response channel to make simultaneous measurements of a DUT with three outputs. Within the controller, a filter bank implemented as an FFT separates the individual response signals.

Figure 2-10. Frequency division multiplexing
Unfortunately, unlike TDM, even though the same physical channel is used for each measurement or stimulus, those signals occupy different frequency bands in the single physical channel. There is no guarantee that drift or skew between the multiplexed channels will not occur. On the other hand, the portions of FDM implemented synthetically will not have this problem, or can correct for the problem in the analog portions. Again it is true that for N FDM-multiplexed channels to have the same bandwidth performance as a single channel with bandwidth B, they must be multiplexed onto a single channel with a bandwidth of at least N times B.
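Here is a toy model of that arrangement (Python with NumPy). The subcarrier frequencies, levels, and block length are invented for the example and chosen so the subcarriers land on exact FFT bins; a real system would window, filter, and calibrate:

    import numpy as np

    fs = 1e6                              # composite channel sample rate
    subcarriers = (100e3, 200e3, 300e3)   # F1, F2, F3
    n = 1000
    t = np.arange(n) / fs

    # Pretend these are three DUT output levels to be observed simultaneously.
    dut_levels = (0.25, 0.50, 1.00)

    # Analog side (modeled): each output is mixed onto its own subcarrier and
    # the results are summed onto the single response channel.
    composite = sum(a * np.cos(2 * np.pi * f * t)
                    for a, f in zip(dut_levels, subcarriers))

    # Controller side: an FFT acts as the filter bank that separates them.
    spectrum = np.fft.rfft(composite)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    measured = [2 * np.abs(spectrum[np.argmin(np.abs(freqs - f))]) / n
                for f in subcarriers]
    # measured is approximately [0.25, 0.50, 1.00]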
Code Division Multiplexing

A more esoteric technique for multiplexing (although much better known these days because of the rise of cellular phones that use CDMA) is code division multiplexing (CDM). The CDM technique is based on the idea of using different orthogonal codes, or basis functions, to divide a single channel into multiple channels along orthogonal axes in code space. To first order, CDM schemes are of the same complexity as FDM techniques, so the two methods are often compared in terms of cost and performance. The frequency mixer used in FDM is used the same way in CDM, this time with different codes rather than frequencies. Again, fancy DSP can be used to implement synthetic CDM. Separate code-multiplexed channels can be synthesized and demultiplexed with DSP techniques. Figure 2-11 shows a CDM system that achieves the same multiplexing as the FDM system in Figure 2-10.
Figure 2-11. Code division multiplexing
As with TDM, because the same physical channel is used for each measurement or stimulus, there is less concern about interchannel drift or skew. Unlike FDM, the codes spread each channel across the same frequencies. In fact, CDM has the unique ability to ameliorate some frequency-related distortions that plague all the other techniques, including space multiplexing. Again it is true that for N CDM-multiplexed channels to have the same bandwidth performance as a single channel with bandwidth B, a single channel with bandwidth of at least N times B is needed.
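The essence of CDM can be shown in a few lines of Python, using rows of a Hadamard matrix as the orthogonal codes. The code length and the three example channel values are arbitrary choices for the sketch:

    import numpy as np

    # Orthogonal spreading codes: rows of a 4x4 Hadamard matrix.
    H = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]])

    def cdm_mux(values, codes):
        """Spread each channel value with its own code and sum onto one channel."""
        return sum(v * c for v, c in zip(values, codes))

    def cdm_demux(composite, codes):
        """Correlate against each code to recover the individual channels."""
        return [float(np.dot(composite, c)) / len(c) for c in codes]

    # Three simultaneous channel values share one physical channel.
    composite = cdm_mux([0.25, 0.5, 1.0], H[:3])
    recovered = cdm_demux(composite, H[:3])    # approximately [0.25, 0.5, 1.0]

Because every code is spread across the whole band, each channel sees the same frequency response, which is the property alluded to above.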
Choosing the Right Multiplexing Technique

The right choice of multiplexing technique for a particular SMS depends on the requirements of that SMS application. You should always try to remember that this choice exists. Don't fall into the rut of using one particular multiplexing technique without some consideration of all the others for each application. And don't forget that multiple simultaneous copies of the same instrument can be synthesized.
Hardware Requirements Traceability

In this chapter, I have discussed hardware architectures for synthetic measurement systems. In that discussion, it should have become clear that the way to structure SMS hardware is not guided solely, or even primarily, by measurement considerations. In a synthetic measurement system, the functionality associated with particular measurements is not confined to a specific subsystem.

This is a stumbling point, especially when somebody tries to use the so-called waterfall methodology that is so popular. Doctrine has us begin with system requirements, then divine a subsystem factoring into some set of boxes, then apportion subsystem requirements out into those boxes. But synthetic measurement system hardware does not readily factor in ways that correspond to system-level functional divisions. In fact, the whole point of the method was to avoid designating specific hardware for specific measurements. What you end up with in a well-designed SMS is like a hologram, with each individual physical part of the hologram being influenced by everything in the overall image, and with each individual part of the image stored throughout every part of the hologram.

The holistic nature of the relationship between synthetic measurement system hardware components and the component measurements performed is one of the most significant concepts I am attempting to communicate in this book.
Chapter 3: Stimulus

In some sense, the beginning of a synthetic instrumentation system is the stimulus generation, and the beginning of the stimulus generation is the digital control, or DSP, driving the stimulus side. This is where the stimulus for the DUT comes from. It's the prime mover or first cause in a stimulus-response measurement, and it's the source of calibration for response-only measurements.

The basic CCC architecture for stimulus comprises a three-block cascade: DSP control, followed by the stimulus codec (a D/A in this case), and finally the signal conditioning that interfaces to the DUT.

Figure 3-1. The stimulus cascade
Stimulus Digital Signal Processing

The digital processor section can perform various sorts of functions, ranging from waveform synthesis to pulse generation. Depending on the exact requirements of each of these functions, the hardware implementation of an "optimum" digital processor section can vary in many different, and seemingly incompatible, directions. Ironically, one "general-purpose" digital controller (in the sense of a general-purpose microprocessor) may not be generally useful. Deciding on the synthesis controller capabilities for a CCC synthetic measurement system inevitably becomes a choice from among several distinct controller options.
Although there are numerous alternatives for a stimulus controller, these various possible digital processor assets fall into broad categories. Listed in order of complexity, they are:
Waveform Playback (ARB)
Direct Digital Synthesis (DDS)
Algorithmic Sequencing (CPU)
In the following sections, I will discuss these categories in turn.

Waveform Playback

The first and simplest of these categories, waveform playback, represents the class of controllers one finds in a typical arbitrary waveform generator (ARB). These controllers are also akin to the controller in a common CD player: a "dumb" digital playback device. The basic controller consists of a large block of waveform memory and a simple state machine, perhaps just an address counter, for sequencing through that memory. A counter is a register to which you repeatedly add +1, as shown in Figure 3-2. When the register reaches some predetermined terminal count, it's reset back to its start count.

Figure 3-2. Basic waveform playback controller
The large block of memory contains digitized samples of waveform data. Perhaps the data is one continuous data set, or perhaps it is several independent tracks of data. One interface controls the counter, and another gets waveform data into the RAM. In basic operation, the waveform playback controller sequences through the data points in the tracks and feeds the waveform data to the codec for conversion to an analog voltage that is then conditioned and used to stimulate the DUT. Customarily, the controller has features to play back a selected track repeatedly in a loop, or just once in a single-shot playback. There may also be features to access different tracks and play them back in various sequential orders or randomly. The ability to address and play multiple tracks is handy for synthesizing a communications waveform that has a signaling alphabet.

The waveform playback controller has a fundamental limitation when generating periodic waveforms (like a sine wave): it can only generate waveforms whose period is some integer multiple of the basic clock period. For example, a playback controller that runs at 100 MHz can only generate waveforms with periods that are multiples of 10 ns. That is to say, it can only generate 100 MHz, 50 MHz, 33.333 MHz, and so on, for every integer division of 100 MHz. A related limitation is the inability of a playback controller to shift the phase of the waveform being played. You will see below that direct digital synthesis (DDS) controllers do not have these limitations.

Another fundamental limitation of the waveform playback controller is that it cannot generate a calculated waveform. For example, in order to produce a sine wave, it needs a sine table. It cannot implement even a simple digital oscillator. This is a distinct limitation from the inability to generate waveforms of arbitrary period, but not unrelated. Both have to do with the ability to perform algorithms, albeit with different degrees of generality.

Direct Digital Synthesis

Direct digital synthesis (DDS) is an enhancement of the basic waveform playback architecture that allows the frequency of periodic waveforms to be tuned with arbitrarily fine steps that are not necessarily submultiples of the clock frequency. Moreover, DDS controllers can provide hooks into the waveform generation process that allow direct parameterization of the waveform for the purposes of modulation. With a DDS architecture, it is dramatically easier to amplitude- or phase-modulate a waveform than it is with an ordinary waveform playback system. A block diagram of a DDS controller is shown in Figure 3-3.
Figure 3-3. Direct digital synthesizer
The heart of a DDS controller is the phase accumulator. This is a register recursively looped to itself through an adder. One addend is the contents of the accumulator; the other addend is the phase increment. After each clock, the sum in the phase accumulator is increased by the amount of the phase increment.

How is the phase accumulator different from the address counter in a waveform playback controller? The deciding difference is that a phase accumulator has many more bits than are needed to address waveform memory. For example, waveform memory may have only 4096 samples. A 12-bit address is sufficient to index this table. But the phase accumulator may have 32 bits. These extra bits represent fractional phase. In the case of a 12-bit waveform address and a 32-bit accumulator, a phase increment of 2^20 would index through one sample per clock. Any phase increment less than 2^20 causes indexing at some fraction of a sample per clock.

You may recall that a waveform playback controller could never generate a period that wasn't a multiple of the clock period. In a DDS controller, the addition of fractional phase bits allows the period to vary in infinitesimal fractions of a clock cycle. In fact, a DDS controller can be tuned in uniform frequency steps equal to the clock frequency divided by 2^N, where N is the number of bits in the phase accumulator. With a 32-bit accumulator, for example, and a 1-GHz clock, frequencies can be tuned in roughly 1/4 Hz steps.

Since the phase accumulator represents the phase of the periodic waveform being synthesized, simply loading the phase accumulator with a specified phase number causes the synthesized phase to jump to a new state. This is handy for phase modulation. Similarly, the phase increment can be varied, causing real-time frequency modulation.
Figure 3-4. Fractional samples (the MSBs of the phase accumulator index the waveform table; the LSBs count fractional phase and are not connected to the table address)
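The phase-accumulator mechanism is compact enough to show in a few lines of Python. The accumulator width, table size, and clock rate below match the illustrative values used in the text; the names and structure are otherwise made up for clarity:

    import numpy as np

    ACC_BITS = 32                        # phase accumulator width
    TABLE_BITS = 12                      # 4096-entry waveform table
    FRAC_BITS = ACC_BITS - TABLE_BITS    # 20 fractional-phase bits

    table = np.sin(2 * np.pi * np.arange(2**TABLE_BITS) / 2**TABLE_BITS)

    def dds_samples(f_out, f_clk, n, phase=0):
        """Generate n samples by phase accumulation. Frequency resolution is
        f_clk / 2**ACC_BITS; an increment of 2**FRAC_BITS steps exactly one
        table entry per clock."""
        increment = int(round(f_out / f_clk * 2**ACC_BITS))
        acc = phase & (2**ACC_BITS - 1)
        out = np.empty(n)
        for i in range(n):
            out[i] = table[acc >> FRAC_BITS]   # MSBs index the waveform table
            acc = (acc + increment) & (2**ACC_BITS - 1)
        return out

    # With a 1 GHz clock the tuning step is 1e9 / 2**32, roughly a quarter of
    # a hertz; nothing requires the output period to divide the clock period.
    samples = dds_samples(f_out=123456.7, f_clk=1e9, n=1024)

Setting the increment to exactly 2**FRAC_BITS reduces this to plain waveform playback, which is why there is arguably no fundamental difference between the two controllers.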
It’s remarkable that few ARB controllers include this phase accumulator feature. Given that such a simple extension to the address counter has such a large advantage, it’s puzzling why one seldom sees it. In fact, I would say that there really is no fundamental difference between a DDS waveform controller and a waveform playback controller. They are distinguished entirely by the extra fractional bits in the address counter, and the ability to program a phase increment with fractional bits. Digital Up-Converter A close relative of the DDS controller is the digital up-converter. In a sense, it is a combination of a straight playback controller and a DDS in a compound stimulus arrangement as discussed in the section titled “Compound Stimulus.” Baseband data (possibly I/Q) in blocks or “ tracks” are played back and modulated on a carrier frequency provided by the DDS. Algorithmic Sequencing A basic waveform playback controller has sequencing capabilities when it can play tracks in a programmed order, or in loops with repeat counts. The DDS controller adds the ability to perform fractional increments through the waveform, but otherwise has no additional programmability or algorithmic support.
Both the basic waveform playback controller and the DDS controller have only rudimentary algorithmic features and functions, but it's easy to see how more algorithmic features would be useful. For example, it would be handy to be able to loop through tracks with repeat counts, or to construct subroutines comprising certain collections of track sequences (playlists, in the verbiage of CD players). Not that we want to turn the instrument into an MP3 player, but playlist capability can also assemble message waveforms on the fly based on a signal alphabet. A stimulus carrying modulated digital data provided live, in real time, can be assembled from a "playlist." It would also be useful to be able to parameterize playlists so that their contents could be varied based on certain conditions. This leads to the requirement for conditional branching, either on external trigger or gate conditions, or on internal conditions—conditions based on the data or on the sequence itself.

As more features are added in this direction, a critical threshold is reached. Instruction memory appears, and along with it, a way for data and program to intermix. The watershed is a conditional branch that can choose one of two sequences based on some location in memory, combined with the ability to write memory. At this point, the controller is a true Turing machine—a real computer. It can now make calculations. Those calculations can either be about data (for example: delays, loops, patterns, alphabets) or they can generate the data itself (for example: oscillators, filters, pulses, codes).

There is a vast collection of possibilities here. Moving from simple state machines, adding more algorithmic features, adding an arithmetic logic unit (ALU) along with a state sequencer capable of conditional branches and recursive subroutines, the controller becomes a dual-memory Harvard-architecture DSP-style processor. Or it may move in a slightly different direction, toward the general-purpose single-memory Von Neumann processors. Or, perhaps, the controller might incorporate symmetric multiprocessing or systolic arrays. Or something beyond even that.

Obviously, it's beyond the scope of this book to discuss all the possibilities encompassed in the field of computer architecture. There are many fine books[B7] that give a comprehensive treatment of this large topic. I will, however, make a few comments that are particularly relevant to the synthetic instrumentation application.

Synthesis Controller Considerations

At first glance, a designer thinking about what controller to use for a synthetic instrument application might see the controller architecture choice as a speed-complexity trade-off. On one hand, they can use a complex general-purpose processor with moderate speed; on the other hand, they can use a lean-and-mean state sequencer to get maximum speed. Which to pick?

Fortunately, advances in programmable logic are softening this dilemma. As of this writing, gate arrays can implement signal processing with nearly the general computational horsepower of the best DSP microprocessors, without giving up the task-specific horsepower that can be achieved with a lean-and-mean state sequencer. I would expect this gap to narrow into insignificance over the next few years.

"Microprocessors are in everything from personal computers to washing machines, from digital cameras to toasters. But it is this very ubiquity that has made us forget that microprocessors, no matter how powerful, are inefficient compared with chips designed to do a specific thing." —Tredennick and Shimamoto[P1]

Given this trend, perhaps the true dilemma is not in hardware at all. Rather, it might be a question of software architecture and operating system. Does the designer choose a standard microprocessor (or DSP processor) architecture to reap the benefits of a mainstream operating system like vxWorks, Linux, pSOS, BSD, or Windows? Or does the designer roll their own hardware architecture, specially optimized for the synthetic instrument application, at the cost of also needing to roll their own software architecture, at least to some extent? Again, this gap is narrowing, so the dilemma may not be an issue. Standard processor instruction sets can be implemented in gate arrays, allowing them to run mainstream operating systems, and there is growing support for customized ASIC/PLD-based real-time processing in modern operating systems.
In fact, because of advances in gate array and operating system technology, the world may soon see true, general-purpose digital systems that do not compromise speed for complexity, and do not compromise software support for hardware customization. This trend bodes well for synthetic instrumentation, which shines the brightest when it can be implemented on a single, generalized, CCC cascade. Extensive computational requirements lead to a general-purpose DSP- or microprocessor-based approach. In contrast, complex periodic waveforms may be best controlled with a high-speed state sequencer implementing a DDS phase accumulator indexing a waveform buffer, while fine-resolution delayed-pulse requirements are often best met with hybrid analog/digital pulse generator circuits. These categories intertwine when considering implementation.
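To make the DDS option concrete, here is a minimal sketch in Python of a phase accumulator indexing a waveform buffer, the core loop that a high-speed state sequencer would implement in hardware. The parameter values are illustrative assumptions, not figures from any particular instrument.

    import numpy as np

    # Minimal DDS sketch: a phase accumulator indexes a one-cycle waveform buffer.
    # All parameter values here are assumptions for illustration only.
    ACC_BITS = 32                  # width of the phase accumulator
    TABLE_BITS = 10                # waveform buffer holds 2**10 samples
    f_clk = 100e6                  # accumulator clock rate, Hz (assumed)
    f_out = 1.234567e6             # desired output frequency, Hz (assumed)

    # One cycle of the waveform stored in the buffer (a sine here, but any
    # arbitrary waveform shape would do).
    table = np.sin(2 * np.pi * np.arange(2**TABLE_BITS) / 2**TABLE_BITS)

    # The tuning word sets how far the phase advances on each clock.
    tuning_word = int(round(f_out / f_clk * 2**ACC_BITS))

    acc = 0
    samples = np.empty(1000)
    for n in range(samples.size):
        acc = (acc + tuning_word) % 2**ACC_BITS             # phase wraps modulo 2**32
        samples[n] = table[acc >> (ACC_BITS - TABLE_BITS)]  # top bits index the buffer

    # 'samples' is the stream the controller would hand to the D/A; frequency
    # resolution is f_clk / 2**ACC_BITS, far finer than the table length suggests.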
Stimulus Triggering

I have only scratched the surface of the vast body of issues associated with stimulus DSP. One could write a whole book on nothing but stimulus signal synthesis. But my admittedly abbreviated treatment of the topic would be embarrassingly lacking without at least some comment on triggering, probably the biggest topic not yet discussed. Triggering is required by many kinds of instruments. How do we synchronize the stimulus, and the subsequent measurement, with external events? Triggering ties together stimulus and response, much in the same way as calibration does. A stimulus that is triggered requires a response measurement capability in order to measure the trigger signal input. Therefore, an SMS with complete stimulus-response closure will provide a mechanism for response (or ordinate) conditions to initiate stimulus events. Desirable triggering conditions can be as diverse as ingenuity allows. They don't have to be limited to a signal threshold. Rather, trigger conditions can span the gamut from the rudimentary single-shot and free-run conditions, to complex trigger programs that require several events to transpire in a particular pattern before the ultimate trigger event is initiated.
Stimulus Trigger Interpolation

Generality in triggering requires a programmable state machine controller of some kind. Furthermore, it is often desirable to implement finely quantized (near continuously adjustable) delays after triggering, which seems to lead us to a hybrid of digital and analog delay generation in the controller. While programmable analog delays can be made to work and meet requirements for fine trigger delay control, it's a mistake to jump to this hardware-oriented solution. It's a mistake, in general, to consider only hardware as a solution for requirements in synthetic measurement systems. Introducing analog delays into the stimulus controller for the purpose of allowing finely controlled trigger delay is just one way to meet the requirement. There are other approaches. For example, as shown in Figure 3-5, based on foreknowledge of the reconstruction filtering in the signal conditioner, it is possible to alter the samples being sent to the D/A in such a way that the phase of the synthesized waveform is controlled with fine precision—finer than the sample interval.
Figure 3-5. Fine trigger delay control
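As a rough illustration of the sample-alteration idea in Figure 3-5, the following Python sketch (my own construction, not taken from the text) shifts a band-limited waveform by a tenth of a sample interval purely by recomputing the samples sent to the D/A, using a windowed-sinc interpolation filter. After reconstruction filtering, the 50% crossing of the edge lands approximately 0.1 sample interval later.

    import numpy as np

    def fractional_delay(samples, delay, n_taps=31):
        """Delay a band-limited sample stream by 'delay' sample intervals
        (possibly fractional) with a windowed-sinc interpolation filter."""
        n = np.arange(n_taps) - (n_taps - 1) / 2
        h = np.sinc(n - delay) * np.hamming(n_taps)   # shifted, windowed sinc
        h /= np.sum(h)                                # keep DC gain at unity
        return np.convolve(samples, h, mode="same")

    # Example in the spirit of Figure 3-5: a one-sample-per-second edge at
    # 2.0 s is moved to 2.1 s, ten times finer than the sample spacing.
    t = np.arange(0.0, 6.0, 1.0)
    edge = (t >= 2.0).astype(float)
    altered = fractional_delay(edge, 0.1)   # these are the "altered samples"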
The dual of this stimulus trigger interpolation and re-sampling technique will reappear on the response side in the concept of a trigger time interpolator used to re-sample the response waveform based on the precisely known time of the trigger.
The Stimulus D/A

The D/A conversion section, or stimulus-side codec (or really just "Co"), creates an analog waveform based on the output of the digital control section. As I will discuss below, the place where the D/A begins and the controller ends may be fuzzy.

D/A converters tend to be constrained by a speed versus accuracy trade-off much in the same way that controllers trade speed for complexity. This speed-accuracy trade-off reflects the practical reality that fast D/A systems tend to have worse amplitude accuracy than slow D/A systems. I'm using the word "accuracy" in the qualitative sense of finer amplitude resolution, measured in bits, but it may also be measured in SINAD or IMD performance. Speed can be viewed as sampling rate or, equivalently, time resolution. It's most practical to use low-speed D/A subsystems where amplitude accuracy is the primary requirement, and to use moderate amplitude accuracy systems where speed is paramount. Table 3-1 illustrates typical extremes of this spectrum of trade-offs. It does not reflect an exhaustive survey of available D/A technology, nor could any static table on a printed page keep up with the fast pace of change.

Table 3-1. D/A converter trade-off range
Requirement          ENOB      Speed
Pulse Generation     1-bit     100 GHz
Analog Waveforms     12-bit    100 MHz
AC/DC Reference      18-bit    100 kHz
Note that ENOB refers to the effective number of bits provided by the amplitude accuracy of a D/A in the given category. I have more to say about codec accuracy, ENOB, and other related topics when I discuss the response codec in the section titled “The Response Codec,” as these issues affect both stimulus and response in analogous ways.
Interpolation and Digital Up-Converters in the Codec

One of the signal coding operations an SMS can be asked to perform is modulation on a carrier, resulting in a so-called bandpass signal. On the stimulus side of a synthetic instrument, there are two fundamental ways to generate a bandpass signal:
Up-convert digitally before the D/A
Up-convert with analog circuits after the D/A
It's also possible to do both of these, up-converting to a fixed digital IF before the D/A and then using analog up-conversion after the D/A to finish the job.

The idea of interpolation in the context of digital signal processing is different from what I mean by interpolation in the context of measurement maps. In DSP terms, interpolation is a process of increasing the sampling rate of a digital signal. It is accomplished by means of an interpolating filter that reconstructs the missing samples with predicted data based on the assumption of a limited signal bandwidth. A good reference on the interpolation process is [B11]. Interpolation goes hand-in-hand with up-conversion, since a higher frequency up-converted result needs more samples to represent it without aliasing.

You may be puzzled why up-conversion and interpolation are a topic for the stimulus codec section of this book. Isn't digital up-conversion accomplished by the stimulus controller? Isn't analog up-conversion a signal conditioning function? Yes, I agree that logically the up-conversion function is not part of the codec, but the fact is that many new D/A subsystems are being built with digital up-conversion and interpolation on board.

Actually, it's quite beneficial for interpolation and digital up-conversion to be accomplished within the stimulus codec, because it lowers the data rate required out of the stimulus controller. The stimulus controller can concern itself only with the meaningful portion of the stimulus. The resulting baseband or low-IF signal can then be translated, in a mechanical process, to some high frequency without burdening the controller. In a sense, the codec is merely coding the information-bearing portion of the signal, both as an analog voltage and as a modulation.
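Here is a minimal Python sketch of the two operations just described: interpolate a baseband signal to a higher sample rate, then digitally up-convert it to an IF by multiplying with a carrier. A real stimulus codec does this with dedicated interpolating filters and a numerically controlled oscillator; the rates and frequencies below are illustrative assumptions only.

    import numpy as np

    fs_base = 1e6            # baseband sample rate (assumed)
    L = 8                    # interpolation factor (assumed)
    fs_out = L * fs_base     # sample rate delivered to the D/A
    f_if = 2e6               # digital IF carrier (assumed)

    # A narrowband baseband test signal.
    t = np.arange(4096) / fs_base
    baseband = np.cos(2 * np.pi * 50e3 * t)

    # Interpolation: insert L-1 zeros between samples, then low-pass filter to
    # reconstruct the missing samples (a simple windowed-sinc filter here).
    upsampled = np.zeros(baseband.size * L)
    upsampled[::L] = baseband
    n = np.arange(-64, 65)
    kernel = np.sinc(n / L) * np.hamming(n.size)      # cutoff near fs_base/2
    interpolated = np.convolve(upsampled, kernel, mode="same")

    # Digital up-conversion: translate the interpolated signal to the IF.
    t_out = np.arange(interpolated.size) / fs_out
    if_signal = interpolated * np.cos(2 * np.pi * f_if * t_out)
    # 'if_signal' is what the codec would convert; analog up-conversion
    # after the D/A can then finish the job at RF.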
The idea of accomplishing signal encoding in the D/A subsystem extends beyond just up-conversion and analog coding. Other forms of encoding can be accomplished most efficiently here. These possibilities include: pulse modulation, AM/FM modulation, television signals such as NTSC/PAL, frequency hopping, and direct sequence spreading. In general, never assume that all DSP tasks are performed in the stimulus controller; never assume all analog conditioning tasks are performed in the stimulus conditioner. All the hardware is available to be used for anything.
Stimulus Conditioning

In a stimulus system, after the analog signal is synthesized by the D/A, some amount of signal conditioning may need to be applied. This conditioning can include a wide variety of signal processing and DUT-specific considerations, possibly including one or more of the following:
Amplification: linear, digital, pulse, RF, high voltage or current
Filtering: fixed, tunable, tracking, adaptive
Impedance: matched source, programmable mismatch, constant current/voltage
DUT Interface: probes, connectors, transducers, antennas
DUT interfacing is the normal role for signal conditioning; however, as I just explained with regard to the D/A, there's no reason that signal encoding or modulation tasks can't be performed here as well, especially up-conversion and modulation. An analog RF up-converter is a common signal conditioner component. Signal-conditioning requirements are obviously dependent on the needs of the DUT, but they are also dependent on the performance of the D/A subsystem. If, for example, the D/A is fast enough to generate all frequencies of interest, there is no need for up-conversion; if the D/A can produce the required power, current, or voltage, there is no need for amplification. The need to interact with a diverse selection of DUTs tends to drive the design of stimulus signal conditioning in the direction of either a parameterized asset, as I discussed in the section titled "Parameterization of
CCC Assets," or to multiple assets in the CRM architecture sense. Often parameterization makes the most sense, as it's more efficient to contain the switching between different conditioner circuit options within some overall conditioner subsystem than it is to force the system-level switching to handle this. Only when faced with a unique and narrow range of conditioning needs specific to a certain class of DUTs does it make sense to create a unique conditioner asset to address that requirement. An illustration of this principle would be a variable gain amplifier or selectable filter in the signal conditioner. It would make no sense to build separate conditioners just to change a gain or a filter. Similarly, DC offsets, impedances, and other easily parameterized qualities are best implemented that way. The opposite case would be a situation where the signal conditioner could produce a signal that would be damaging to some class of assets, or where some class of DUTs could do damage to the conditioner, for instance a high-voltage stimulus.

Stimulus Conditioner Linearity

Stimulus conditioner circuitry does not have to be linear in all cases. It depends on the requirements of the test. Some applications, pulsed digital test, for instance, might be best served with digital line drivers as the signal conditioner amplifiers. Such drivers are not linear devices. In other applications, linearity after the D/A is paramount. It's also generally beneficial to minimize the noise and spurious signals added after the D/A. Problems with stimulus conditioner linearity are exacerbated when the stimulus conditioner is an analog up-converter. It is very difficult to preserve wide dynamic range, limit the injection of noise, and prevent spurious products from appearing at the stimulus output. Because linear conditioner design can be challenging and expensive, it's important to keep an open mind about solutions to these difficulties.

Gain Control

Although it is definitely most convenient to adjust the level (amplitude) of the stimulus by simply adjusting the amplitude of the digital signal
entering the D/A, sometimes this is not a good idea. If the signal conditioner has an up-converter, or other gain or spurious-producing stages, the junk injected by this conditioner circuitry remains at a constant level as the signal out of the D/A drops. The D/A itself may also inject some unwanted signals at a fixed level. Consequently, the signal-to-noise ratio (SNR) of the stimulus will fall as the stimulus level is decreased relative to the fixed noise. In fact, the fixed noise level may limit the minimum stimulus signal that is discernible, as eventually noise will swamp the signal. The way around this problem is to adjust signal levels in the stimulus conditioner after most of the junk has been added to the signal. That way, when adjusting the signal level, the SNR stays roughly the same. The signal level can be lowered without fear that it will be swamped by the noise.
Figure 3-6. Effect of gain control placement on SNR with varying gain
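The effect illustrated in Figure 3-6 is easy to verify numerically. The short Python sketch below (with made-up levels) compares the stimulus SNR when a 40 dB level cut is applied before the conditioner noise is added versus after it.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    signal = np.sqrt(2) * np.sin(2 * np.pi * 0.01 * np.arange(n))   # 1 V RMS
    cond_noise = 1e-3 * rng.standard_normal(n)   # fixed conditioner "junk"

    def snr_db(sig_power, noise_power):
        return 10 * np.log10(sig_power / noise_power)

    atten = 10 ** (-40 / 20)   # requested 40 dB level reduction

    # Level adjusted before the conditioner: the signal drops into the fixed noise.
    before = snr_db(np.mean((atten * signal) ** 2), np.mean(cond_noise ** 2))

    # Level adjusted after the conditioner: signal and noise drop together.
    after = snr_db(np.mean((atten * signal) ** 2), np.mean((atten * cond_noise) ** 2))

    print(f"adjust before conditioner: {before:5.1f} dB")   # roughly 20 dB
    print(f"adjust after conditioner:  {after:5.1f} dB")    # roughly 60 dB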
As a result of this idea, stimulus signal conditioners are often designed with variable gain amplifiers or adjustable output attenuation. This allows us to run the D/A at an optimum signal level relative to headroom and quantization noise as discussed in the section titled “Codec Headroom.” Adjusting gain in the signal conditioner is not without its disadvantages. Foremost of these is that the variable gain must be implemented so as to
work and maintain calibration across the range of frequencies and signals that the conditioner might handle. This can be a challenge in broadband designs. Sometimes a compromise approach is used, with only coarse gain steps (perhaps 10 dB) implemented in the conditioner, and fine steps implemented in the codec or DSP controller.

Adaptive Fidelity Improvement

Often, the designers of synthetic measurement systems struggle to achieve impeccable fidelity in the stimulus signal conditioning so as to preserve all the precision in the stimulus they have generated with the finely quantized D/A. Building a signal conditioner "clean" enough to match the D/A is often a daunting challenge. It's particularly difficult to meet fidelity specifications in generic hardware when they derive from the performance of a signal-specific instrument. I have seen stimulus system designers struggle with this again and again. Designing up-converters with high fidelity and low spurs is additionally difficult. Granted, it's harder to make a clean sine wave with a D/A and broadband analog processing than it is with a narrow-band filtered crystal oscillator, but this may be an unnecessary enterprise.

As I will say repeatedly in this book, proper synthetic instrument design focuses on the measurement, not on the specifications of some legacy instrument being replaced. Turn to the measurement to see what fidelity is needed. You may find that much less is needed by the measurement than what a blanket fidelity specification would require.

Once we have focused on the measurement and looked at the fidelity performance of reasonable stimulus signal conditioning, if we see that we still don't meet the requirements for a good measurement, what do we do then? Clearly, the solution to that underspecified problem depends on the details of the situation. In some cases parameterized filtering would help; other times higher power amplification can improve linearity. These are all well-known techniques. But there is one technique I want to mention here because it is often overlooked—a technique that has wide applicability to these situations: adaptive processing. Remember, there is a response system at our disposal, and a fully programmable, DSP-driven stimulus system to boot. This combination lends itself nicely to closed-loop adaptive techniques.
If you can measure and you can control, then you can adapt to achieve a measured goal. Specifically, it is possible to adapt the digital data driving the D/A so as to reduce or eliminate artifacts, spurs, and other fidelity issues introduced by the signal conditioner. A simple example of this technique would be the elimination of a spurious tone in the stimulus output that is harmful to a measurement. Synthesize a second tone of the same frequency as the spur. Figure 3-7 shows a system for adaptively adjusting the amplitude and phase of the second tone to null the spur, eliminating it from the stimulus and thus making the measurement possible.
Figure 3-7. Adaptive nulling
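A crude software model of Figure 3-7 is sketched below in Python (my own construction, under simplified assumptions). A complex LMS loop adjusts the amplitude and phase of a synthesized tone at the spur frequency until the sum cancels the spur added by the non-ideal conditioner.

    import numpy as np

    fs, f_spur, n = 1e6, 123e3, 20_000       # all values assumed for illustration
    t = np.arange(n) / fs
    wanted = np.sin(2 * np.pi * 10e3 * t)                 # desired stimulus
    spur = 0.05 * np.cos(2 * np.pi * f_spur * t + 0.7)    # conditioner spur
    conditioner_out = wanted + spur

    # Quadrature reference at the spur frequency; w holds amplitude and phase.
    ref = np.exp(1j * 2 * np.pi * f_spur * t)
    w = 0.0 + 0.0j
    mu = 1e-3                                             # adaptation step (assumed)
    out = np.empty(n)
    for k in range(n):
        cancel = np.real(w * ref[k])        # synthesized "anti-spur"
        out[k] = conditioner_out[k] - cancel
        w += mu * out[k] * np.conj(ref[k])  # LMS update drives the spur toward zero

    # After convergence, 'out' approximates the spur-free stimulus, and w holds
    # the amplitude and phase that the adaptive adjustment settled on.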
Adaptive nulling, linearization, or calibration is a nontrivial enterprise to be sure, but it has the unique property in this context of being something that can be implemented purely in DSP software. That doesn't mean it's necessarily easier or better than a hardware solution to a fidelity issue. My point, however, is that such techniques should always be considered when the hardware has fidelity issues. Adaptive DSP techniques will often have a significantly lower cost in production than any hardware solution.

Reconstruction Filtering

Depending on the needs of the test, it may be possible to directly use the quantized voltage from the output of the stimulus codec. For example, if the stimulus is a digital logic level, it may be used directly. However,
when synthesizing a smooth analog stimulus waveform, it's often better to use a reconstruction or interpolation filter. This filtering at the output of the D/A reconstructs the analog waveform from the "stair-step" approximation. Spectrally, this filter attenuates high-frequency aliases while it can also correct for the sin(x)/x roll-off effect created by holding the samples through each "tread" in the staircase. If I know the dynamics of the reconstruction filtering when I am calculating the samples I will send into the stimulus codec, it is possible for me to choose these samples to custom-tailor and shape the signal conditioner output waveform based on that knowledge. For example, fine control of rise time and delay is possible using this technique—ten times finer than the sample interval, even. This is a critical fact to keep in mind when designing the codec. It's easy to see a specification that says "rise time programmable in 1 ns steps" and erroneously conclude that the D/A must run at 1 GHz. The characteristics of the reconstruction filtering, and other fixed or parameterized filters in the signal conditioner, can also be used to facilitate adaptive techniques, as described in the section titled "Adaptive Fidelity Improvement."
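For instance, the sin(x)/x roll-off mentioned above can be pre-compensated in the samples themselves rather than entirely in the analog filter. The Python sketch below (an illustration under the usual zero-order-hold assumption, not a prescription) boosts the waveform spectrum by the inverse of the hold response before the data is loaded into playback memory.

    import numpy as np

    def zoh_precompensate(samples):
        """Boost the spectrum by 1/sinc(f/fs) so that the D/A's zero-order
        hold (the source of the "stair-step" droop) is cancelled up to Nyquist."""
        n = samples.size
        f = np.fft.rfftfreq(n)                  # normalized frequency, 0 to 0.5
        inverse_zoh = 1.0 / np.sinc(f)          # np.sinc(x) = sin(pi*x)/(pi*x)
        return np.fft.irfft(np.fft.rfft(samples) * inverse_zoh, n)

    # Example: pre-compensate an arbitrary waveform before loading it into
    # the stimulus controller's playback memory.
    waveform = np.sin(2 * np.pi * 0.23 * np.arange(1024))
    compensated = zoh_precompensate(waveform)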
Stimulus Cascade—Real-World Example

Figure 3-8 shows an example of a real-world stimulus subsystem, the Celerity Series CS25000 Broadband Signal and Environment Generator, from Aeroflex. This is only one of the many different stimulus products made by Aeroflex. I selected this one in particular because it includes both high-fidelity signal conditioning and stimulus processing comprising a waveform playback controller and a general-purpose CPU with a range of options. Thus, the Aeroflex Broadband Signal and Environment Generator platform represents a complete synthetic measurement system stimulus cascade.

The BSG combines a very deep memory, very high-speed arbitrary waveform generator and a broadband RF up-converter with powerful signal generation software. The BSGs have bandwidths of up to 500 MHz, and full-bandwidth signal memory of up to 10 seconds. The bandwidth, memory depth and dynamic range make the BSG a powerful tool for
Figure 3-8. Aeroflex CS25000
broadband satellite communications, frequency-agile radio communications, broadband wireless network communications, and radar test. An open, software-defined instrument architecture allows easy import of user-created waveforms. Vector signal simulator (VSS) software creates signal files for commercial wireless standards as well as generic nPSK, nQAM, nFSK, MSK, CW, tone combs, and notched noise signals. Any of these generic signal types can be gated or bursted in time, as well as hopped in frequency. Real signals, including recorded signals from Aeroflex's broadband signal analyzers or other recorder sources, can be imported and combined with digitally generated signals, and then played back on the BSG. Impairments can be added to the signals, including thermal noise, phase noise, and passband amplitude and phase distortion. VSS provides the unique ability to mix any combination of signals and impairments to generate complex signal environments. Aeroflex's Vector signal player (VSP) software provides simple controls for signal file selection, output frequency control, and output power control. Aeroflex's up-converters use real (non-I/Q) conversion architectures, generating high dynamic range waveforms without the carrier leakage and signal image problems associated with I/Q modulators found in other signal sources.
The high-speed stimulus controller in the BSG is designed with an enhanced version of the waveform playback architecture I discussed in the section titled "Waveform Playback." This controller allows the BSG to play extremely complicated waveform data files through the use of programmed sequencing. This allows for the predetermined, scenario-based playback of different sections in memory. During playback, the instrument can move from one section of memory to another on a clock cycle. The control of this is analogous to a typical MIDI tone generator driven by a sequencer (a high-tech player piano). The BSG waveform address counter can move in programmed fashion to different sections of memory, building a complete stimulus output without having to put the complete wave train into memory.

This sequencing capability is particularly useful when synthesizing digital modulation waveforms. It makes efficient use of memory while allowing many possible waveforms to be generated. A simple example would be a pulse that is only active for a small amount of time. The pulse can be put into a small block of memory, and another small block of memory can hold a piece of interpulse signal (often just zeros). A scenario file programs the system to play the interpulse buffer for a certain number of cycles, then to play the pulse file once, then to start over. There can be multiple pulse profiles and interpulse buffer profiles that can be played to produce extremely complex output stimulus from a small amount of memory.

Table 3-2. BSG performance range
Model Number   Bandwidth   Sample Rates   Sample Size   Dynamic Range   Max Memory
CS25020        75 MHz      200 MS/s       14 bits       70 dB           2048 MS
CS25025        200 MHz     250 MS/s       12 bits       60 dB           2048 MS
CS25040        160 MHz     400 MS/s       8 bits        45 dB           4096 MS
CS25080        280 MHz     700 MS/s       8 bits        45 dB           8192 MS
CS25082        280 MHz     700 MS/s       12 bits       55 dB           8192 MS
CS25130        500 MHz     1300 MS/s      8 bits        45 dB           16384 MS
CS25132        500 MHz     1300 MS/s      12 bits       55 dB           16384 MS
Table 3-3. BSG options
Frequency Down-converter Option: Tunable or fixed up to 40 GHz in bands
Memory Sequencing Option: High speed address sequencing
Output Options: Precision attenuators; high speed attenuators; reconstruction filters
Sample Clock Option: Low phase noise
Disk Storage Options: Fixed and removable drives, 73 GB to 146 GB; CD-RW, CD-ROM, DVD
Multiple Signal Options: IF/RF; digital; baseband
Controller Options: UltraSPARC/Solaris; Pentium/Linux
Remote Control Options: 10/100Base-T Ethernet; GPIB
Peripheral Options: Keyboard and mouse; flat panel and CRT monitors
Playback/Output Options: Wide-band analog; high speed digital (LVDS, DECL, PECL, TTL)
CHAPTER 4
Response

This chapter describes concepts and design issues related to the response side of a synthetic measurement system. The main goal of the response subsystem is to measure some aspect of the DUT in a measurement context. A secondary goal is to measure the output of the stimulus system for calibration purposes. Some of the concepts discussed relating to stimulus also apply to response, and vice versa. Some concepts, however, are unique to response.

The response CCC cascade comprises the interface to the DUT and associated signal conditioning, A/D conversion (response codec), and finally a DSP controller. The ordering of this cascade is the opposite of the stimulus cascade, but the functions are completely analogous.
Figure 4-1. The response cascade
Response Signal Conditioning

The response signal conditioner is the signal processing interface between the DUT and the response codec. It may be as simple as an amplifier with anti-alias filtering, or it may be as complex as a down-converter.

Input Protection

General-purpose test equipment must be able to withstand typical mistakes made by test engineers (yes, test engineers do sometimes make mistakes). Some of these may expose the response system to excess signal
levels. The response signal conditioner should be designed such that reasonable overloads do not damage the subsequent processing.

Response Linearity and Gain Control

As in the stimulus signal conditioner, linearity in the response conditioner is a common requirement that leads to challenging hardware design. Digital-oriented test scenarios may not require full linearity, but most other applications do. Linearity is just as difficult to achieve on the response side as it is on the stimulus side, although on the response side there are some additional options that can ease the problems somewhat.

In a response conditioner, there are two fundamentally different approaches to implementing a linear system. I call these the high-gain and low-gain response processing strategies. Essentially, the difference between these approaches is in where to place the noise floor relative to the A/D quantization noise. Low-gain strategies will place the signal conditioner noise near the quantization noise floor; high-gain strategies will have enough gain to amplify this noise all the way up to the nominal operating point of the A/D.
Figure 4-2. Low-gain versus high-gain
Most measurement systems are low-gain. Communication systems are often high-gain. High-gain systems will always need some form of gain control because the noise is already at the nominal loading point of the A/D. As a signal is introduced and its level increases, a high-gain system
will need to back down its gain immediately to prevent overload. In a low-gain system, variable attenuation is optional and is used only with very large signals that threaten to overload the A/D. Many low-gain systems have no pre-A/D gain control at all.

It's somewhat ironic that measurement systems tend to be low-gain designs with limited gain control. The irony stems from the fact that most response signal conditioners have a signal-level sweet spot that maximizes dynamic range. You normally want to keep the measured signal pinned exactly on this sweet spot in order to achieve the best fidelity and consequently the best measurement accuracy. Unfortunately, other considerations in designs for measurement systems prevent this degree of gain optimization from being achieved. In contrast, communications systems often have automatic gain control (AGC) that keeps the signal level pinned precisely at the optimum level for the detector, even though preservation of dynamic range and accuracy is not the reason this is done.

It's also interesting to note, by way of analogy with stimulus, that stimulus conditioners tend always to be low-gain in the sense that they try to minimize the quantization noise from the D/A reflected at the output while running with high signal at D/A nominal. As such, stimulus conditioning tends to be more constrained, although it does tend to run at its "sweet spot" more consistently. Response conditioners can get away with taking a low-gain or high-gain approach and moving the signal level around, depending on the situation.

Adaptive Techniques

As with the stimulus conditioner, it's possible to implement system-level adaptive techniques to fix linearity, crosstalk, or spurious signal issues that plague the response conditioner. For example, consider the measurement of signal harmonics in the presence of a powerful fundamental. If the fundamental is powerful enough, and the level of the DUT harmonics to be measured is low enough, then the measurement system harmonics (specifically those of the response conditioner) will swamp those of the DUT. The measurement will become impossible. But if adaptive nulling techniques are used to attenuate the fundamental without affecting the harmonics, the measurement again becomes possible. This nulling happens in the response signal conditioner, up front, as
soon as possible, using the undistorted fundamental from the input of the DUT as the nulling reference.
Figure 4-3. Adaptive nulling to improve response measurement
Similar methods can be applied for the measurement of intermodulation and spurs. This is also a good technique for measuring one channel alongside several others in a multiplexed arrangement. Without adaptive nulling, the adjacent channels may make accurate measurement of the desired channel impossible.
The Response Codec

In this section I will discuss some issues with regard to response digitization. Some of the discussion will apply to the stimulus codec as well. Also included in this section is a description of a state-of-the-art commercial digitizer subsystem.

Fidelity and Measurement Accuracy

An issue always on people's minds when they begin to contemplate a synthetic solution to a measurement problem is the number of bits in the A/D or D/A (what I refer to collectively as the codec). When comparing two systems, if one has 12 bits and the other has 14 bits, it's tempting to conclude that the 14-bit system is somehow "better" than the 12-bit system. But the number of bits in the codec is a rather superficial and misleading metric for specifying the fidelity of a synthetic instrument. Even the
supposedly more honest and encompassing effective number of bits, or ENOB, parameter can be completely misleading. Here's why.

There is a plethora of sources of error that plague a typical measurement system. The signal conditioner, like any analog system, can have offset, drift, noise, distortion, or spurious signals that corrupt the desired signal. The codec itself, with one foot firmly in the analog world, can also be plagued by these analog troubles. Less acknowledged, but certainly possible, the measurement on the digital side can be additionally corrupted by noise, distortion, and spurious signals.

That last statement probably needs some justification if you are under the impression that digital processing is ideal and works just like it says in Oppenheim and Schafer[B10]. Unfortunately, reality does not quite live up to this expectation. Digital filters, for example, can oscillate and generate spurious signals through limit cycles and other nonlinear behavior. They can also generate noise through coefficient round-off error and can introduce distortion through various finite word-size effects. This noise and distortion can be significantly larger than one might expect given the number of bits in the signal processing path.
Figure 4-4. Sources of noise and distortion in synthetic systems
Therefore, focusing only on the bits in the codec distracts attention from the performance of the whole system. It is necessary to analyze signal flow, noise, and distortion through the whole system in order to draw any conclusions about accuracy and fidelity.
Ideal Quantization

Assuming the codec is an ideal quantizer (say, an ideal A/D converter) with N bits and a quantization step size of ∆, and, additionally, assuming that the input voltage has stationary and uniformly distributed statistics over the analog range quantized by the N bits, a common textbook exercise shows that the mean square error introduced by the quantization process is ∆²/12 (an RMS error of ∆/√12) relative to the ideal signal. In terms of dB, with the additional assumption of a bipolar signal (where one bit is used up as the sign bit), the RMS quantization noise is 6N + 6 dB below the ideal signal. Giving up another 6 dB for headroom, the digitization process results in RMS noise that is roughly 6 dB below the desired signal level for every bit in the codec. This 6 dB per bit quantization noise is a well-known rule of thumb, but it's important to always remember the assumptions. They are:

1. 6 dB headroom
2. Input signal statistics stationary and uniformly distributed
3. Ideal quantization

Reality may invalidate one or more of these assumptions. Let's discuss them each in turn.

Codec Headroom

The need for headroom in a codec derives from the fact that real signals (measurements) have a peak-to-average ratio that is greater than one. Even theoretically, the common assumption that a measurement is Gaussian distributed about some mean implies a peak-to-average ratio that is infinite! The problem with a high peak/average ratio is that the average will determine the overall performance of the codec in terms of quantization noise, but the codec will catastrophically distort the signal if the signal peaks overload the maximum range of the codec. Therefore, it's best to arrange things so the average is as high as possible with overload never (or rarely) occurring on peaks. Although real signals aren't so benign as to have unity peak/average ratios, they are not so malicious as to have infinite peaks. Practice falls
somewhere in between. It’s usually possible to come up with a good compromise. As an example of how to arrive at a headroom compromise, think about your living room stereo set. If your stereo is of any decent quality, it will have a VU meter that displays the signal level. The VU meter gives an excellent graphical depiction of headroom.
Figure 4-5. VU meter
The region in red above the 0 dB mark is the headroom. Most audio equipment works "best" when the signal peaks seem to bounce up to the 0 dB mark, with rare excursions higher into the red. This is exactly the same sort of consideration that guides the design of any codec system. You set your average below the maximum overload level with the headroom approximately the same as the anticipated peak/average ratio. This is called optimal loading of the codec. A consequence of this practice is that the signal-to-quantization-noise ratio in an optimally loaded codec will decrease dB-for-dB with any increase in peak/average ratio. The two parameters represent a counterbalancing trade-off. The dilemma therefore is a choice between avoiding overload and minimizing quantization noise.

Headroom Trade-off and System Fidelity

The codec itself is always in the context of the signal conditioner and the controller. Headroom that optimizes the codec performance may not make sense for the signal conditioner. As I have discussed, the conditioner will have a "sweet spot" that optimizes performance. In addition, it can't be assumed that an optimal balance between headroom and noise at the codec and conditioner remains optimal throughout digital processing.
Dynamic range and fidelity are affected in digital signal processing just as they are in analog processing. Although modern DSP tools make it virtually trivial to slap together processing steps, as much care needs to be exerted in designing each step of DSP as is put into designing each step of analog processing if you are to achieve optimum performance.
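These trade-offs are easy to check numerically. The Python sketch below (illustrative only) quantizes a sine wave with an ideal N-bit bipolar quantizer, leaves an assumed 6 dB of headroom, and reports the measured signal-to-quantization-noise ratio, which comes out near the roughly 6 dB-per-bit rule of thumb discussed earlier.

    import numpy as np

    def quantization_snr_db(n_bits, backoff_db=6.0):
        """Quantize a sine wave with an ideal bipolar n_bits quantizer, leaving
        'backoff_db' of headroom below full scale, and return the measured
        signal-to-quantization-noise ratio in dB."""
        full_scale = 1.0
        amp = full_scale * 10 ** (-backoff_db / 20)     # operate below full scale
        t = np.arange(200_000)
        x = amp * np.sin(2 * np.pi * 0.01234 * t)
        step = 2 * full_scale / 2**n_bits               # quantization step size
        xq = np.round(x / step) * step                  # ideal quantizer
        err = xq - x
        return 10 * np.log10(np.mean(x**2) / np.mean(err**2))

    for bits in (8, 12, 16):
        print(bits, "bits:", round(quantization_snr_db(bits), 1), "dB")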
Response Digital Signal Processing

Just like the stimulus DSP, the response digital processor section can perform various sorts of functions, ranging from simple measurement and analysis tasks, to full-blown digital demodulation. Therefore, once again, depending on the exact requirements of each of these functions, the hardware implementation of an optimum response digital processor section can vary widely. One "general-purpose" DSP controller may not be what's needed, in general. Following are two broad categories that divide the capabilities of possible response digital processor assets:
Waveform Recorder and DSP
Matched Filter (Demodulator)
In the following sections, I will discuss these categories in turn.

Waveform Recorder and DSP

The first and simplest of these categories, waveform recorder and DSP, is representative of the response controller in most synthetic measurement systems around today. It consists of a block digitizer comprising an A/D and RAM, combined with high-speed DSP immediately after the A/D, and a general-purpose DSP-oriented CPU that works with the RAM-buffered data.
Figure 4-6. Waveform recording controller
The high-speed DSP (HSDSP) is normally used to reduce the quantity of data stored, by one of several techniques. These data-rate reduction schemes might include decimating, digital down-converting, averaging, demodulating, decoding, despreading, or computing statistical summaries. Some A/D parts build in high-speed DSP. You can live without HSDSP, but as data rates climb, this function becomes essential. Quite often it is implemented with a gate array.

The memory controller manages the large block of waveform memory, which contains digitized samples of waveform data. Perhaps the data is in one continuous data set, or several independently acquired tracks or blocks of data. The memory controller is a state machine that allows sequencing through that memory, reading or writing. The better systems allow you to read and write at the same time. Although you certainly can get away without simultaneous read and write, when I buy a digitizer system, I look for this feature first. It greatly enhances the capabilities of the system and is most often worth the money. Dual-port access may be implemented with a FIFO, or "ping-pong" buffers, or it may be a true two-port memory design with separate read and write address decoding logic.

Low-speed DSP (LSDSP) represents a microprocessor dedicated to DSP tasks. This may either be within the response controller subsystem, or it may be implemented as part of the host. The purpose of LSDSP is to further reduce the data rate, possibly by computing final ordinates.

Even with memory controllers that allow continuous acquisition capability, typical waveform recording controllers are block-oriented. What they do is "take a block of data" and analyze it. Even the more advanced units with decimators and down-converters will boil down to this limited functionality. I say "limited," notwithstanding the fact that, given a fast enough CPU and a big enough block, any sort of processing could be implemented this way. The reason I say "limited" is because other approaches can be orders of magnitude more resource efficient for certain essential tasks. Thus, the limitation of the simple block digitize-and-DSP response controller arises from the limits of DSP processing resources. Controllers can do many more things beyond just "taking a block of data" and running some DSP algorithm on it. They certainly must do more to
handle high-speed, real-time interactive testing, particularly with digital modulations from cell phones and military communications equipment.

Matched Filter Demodulator

A matched filter is something I can prove mathematically to be the best way to detect the information modulated on a signal. It represents the gold standard of measurement devices. Matched filtering is described in any good communications theory book[B12], and is an essential vector signal analyzer (VSA) operation. The simple block diagram of a matched filter is shown in Figure 4-7. A template of the expected signal waveform is correlated with the input signal. The correlation is integrated over the signal duration, resulting in a metric that indicates how close the input signal is to the template.
Figure 4-7. Matched filter
The signal template, h(t), in the matched filter is an ideal, undistorted copy of the thing the system is trying to detect and measure. This fact implies that a matched filter actually has the ability to store and generate waveform data, at least for internal use. If this sounds to you suspiciously like the stimulus controller, your suspicion isn’t misplaced. It may be surprising that a response detector contains stimulus generation, yet this is just another example of stimulus-response closure as I discussed in the section titled “Stimulus Response Closure: The Calibration Problem.” The response system cannot be separated from the stimulus system. In this case, an ideal response detector must have exact knowledge of the stimulus it is trying to detect.
The matched filter is worth considering given what it achieves: linear detection of a signal with minimum mean square error. There is no better detector in a least squares sense. Given how great a matched filter is, it makes a lot of sense to have one or more of these in the response system. You definitely want one for each possible letter in the signal alphabet you are trying to detect.

No doubt a matched filter can be implemented in DSP software; thus one might argue that a matched filter can be implemented with the block digitizer and DSP processor. It's certainly cheapest to do it this way, and a software matched filter would certainly be cheaper than dedicated matched filter hardware. Two questions thereby arise: Is there any real need for dedicated matched filtering in the response system hardware? Isn't this exactly the kind of specificity we should avoid in a synthetic measurement system?

First question: Yes. Matched filtering as a dedicated hardware structure is often needed because it can't be realistically implemented in DSP software for many real-world scenarios, particularly real-time scenarios. A true matched filter requires a convolution between the prototype impulse response and the response signal. This is computationally intensive. Even in cases where FFT techniques can be used to speed the processing, matched filters take a while to calculate. Then, multiply the already lengthy time for one matched filter by the number of templates in the alphabet, and you see that the computational burden becomes onerous quickly. Certainly, if you want matched filtering for a nontrivial alphabet, you need to dedicate hardware to the task.

Second question: No. Matched filtering isn't specific. Quite the contrary, matched filtering is the most general form of linear detection around. This is because a matched filter is a general process with all its specificity encapsulated in the signal template it seeks to detect. The template is a parameter; actually, the template is more correctly seen as an abscissa. Any finite signal template can be the basis for a matched filter. The template waveform represents a basis function for the ordinate being measured. The matched filter detection process is an inner product (dot product) operation that determines how much of the signal vector projects onto a particular abscissa represented by the basis. What better way to measure an ordinate!
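As a minimal illustration (mine, not from the text), the correlate-and-integrate operation of Figure 4-7 reduces in sampled form to an inner product between the received block and each template in the alphabet; the largest ordinate identifies the detected letter.

    import numpy as np

    def matched_filter_ordinate(received, template):
        """Inner product of the received block with an ideal template: the
        sampled equivalent of the correlate-and-integrate matched filter."""
        return np.dot(received, template) / np.dot(template, template)

    # A toy two-letter alphabet: two orthogonal pulse shapes (assumed).
    t = np.arange(64)
    alphabet = {
        "A": np.sin(2 * np.pi * 4 * t / 64),
        "B": np.sin(2 * np.pi * 7 * t / 64),
    }

    rng = np.random.default_rng(1)
    received = 0.8 * alphabet["B"] + 0.3 * rng.standard_normal(t.size)

    scores = {name: matched_filter_ordinate(received, h) for name, h in alphabet.items()}
    print(max(scores, key=scores.get))   # picks the letter that best matches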
Response Trigger Time Interpolator

I have already discussed triggering in the context of the stimulus system in the section titled "Stimulus Triggering." Response triggering is somewhat of a different, perhaps simpler, problem. When responding to a signal, triggering from it, you have the option to use post-processing to fix up the acquired data based on the trigger. That's easier to do than with stimulus where, lacking the ability to move back in time and change history, you are stuck with the data previously emitted. Response controllers or digitizers often have what's called a trigger time interpolator that tells us precisely when the trigger happened relative to the digitizer clock. Straightforward digital processing techniques can be used to resample the data, transforming it into data synchronous with the trigger event.
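A toy version of that fix-up is sketched below in Python (my own, using simple linear interpolation; a real system would use a proper band-limited interpolator). Given the fractional offset reported by the trigger time interpolator, the acquired block is re-sampled onto a grid aligned with the trigger.

    import numpy as np

    def align_to_trigger(samples, trigger_offset):
        """Re-sample an acquired block so that sample 0 lands exactly on the
        trigger event. 'trigger_offset' is the trigger time in fractional
        sample intervals after the original sample 0, as reported by a
        trigger time interpolator."""
        n = np.arange(samples.size - 1)
        return np.interp(n + trigger_offset, np.arange(samples.size), samples)

    # Example: the digitizer reports that the trigger occurred 0.37 sample
    # intervals after the clock edge that captured sample 0 (a made-up value).
    acquired = np.sin(2 * np.pi * 0.05 * np.arange(100))
    aligned = align_to_trigger(acquired, 0.37)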
Response Cascade—Real-World Example

Figure 4-8 shows an example of a real-world digitizer subsystem, the Acqiris AP240. This is only one of the many different response digitizers made by Acqiris. I selected this one in particular because it includes both signal conditioning and response processing. Thus, Acqiris' reconfigurable analyzer platform is more than just a digitizer. The full front-end signal conditioning has up to 1-GHz bandwidth, and an onboard FPGA digital processing unit (DPU) allows digitized signals to be processed and analyzed in real time. In fact, a system such as this, along with a host computer for moderate speed processing and control, represents a complete synthetic measurement system response cascade. With SSR firmware options, the DPU can be programmed to perform processing algorithms at the card's maximum sampling rate, easing the requirements on the remainder of the response DSP subsystem.
Onboard reconfigurable data processing unit (DPU) for real-time operations.
Front-panel digital I/O connectors for real-time data processing control (DPU Ctrl).
Synchronous dual-channel mode with independent gain and offset on each channel.
Figure 4-8. AP240 reconfigurable PCI signal analyzer platform
Interleaved single-channel mode on either input, software selectable.
1-GHz analog bandwidth in all FS ranges.
Up to 2 GS/s sampling rate in single-channel mode.
Fully-featured 50 Ω mezzanine front-end design with internal calibration and input protection.
Short (1 Mpoints/ch typical) or optional long processing memory (4 Mpoints/ch typical).
Multipurpose I/O connectors for trigger, clock, reference and status control signals.
Continuous and start/stop external clock modes.
High-speed PCI bus transfers data to host PC at sustained rates up to 100 MB/s.
Device drivers for Windows 95/98/NT4.0/2000/XP, VxWorks and Linux.
Auto-install software with application code examples for C/C++, Visual Basic, National Instruments LabVIEW and LabWindows/ CVI.
The sustained sequential recording (SSR) firmware for the AP240 analyzer platform uses a dual-bank memory system with onboard automatic switching, allowing sustained sequential recording to the host PC in sequence mode at sustained trigger and data rates ten times faster than normal. The SSR firmware also allows 1-GHz bandwidth with synchronous start-on-trigger dual-channel sampling at rates up to 1 GS/s (2 GS/s in single-channel mode). When triggering, there is minimal dead time between successive acquisitions, allowing recording in sequence mode with a sustained trigger rate of up to 100 kHz.
CHAPTER 5
Real-World Design: A Synthetic Measurement System

So far, I've talked rather philosophically about synthetic measurement system design issues. The examples I've given, and the detailed techniques I've discussed, have all been abstract, not referring to any particular measurement system or instrument implementation. This chapter is different. Here, I present a real-world synthetic measurement system that is in operation today (2004).1
Universal High-Speed RF Microwave Test System

The real-world system I will discuss in this chapter was developed by Raytheon. The RF multifunction test system (RFMTS) was developed to meet a wide variety of RF test demands and was targeted at radically reducing test times. It is a versatile test system integrating state-of-the-art capabilities in high-speed RF testing, microwave synthetic instrument measurement techniques, product interfacing, and calibrating.

Background

Trends in military product design have been toward modular, solid-state RF microwave architectures, taking advantage of major improvements in solid-state RF component design. For example, radar architectures are now based on using thousands of solid-state modules packaged in manageable assemblies of up to 30 modules each. This shift in design architecture has precipitated demand for a flexible, high-speed, high-quality RF microwave test system.
1. This chapter is courtesy of Raytheon and Aeroflex Companies and is based on an AutoTestCon paper[C1] that describes a high-speed, high-performance RF test system targeted for a moderate- to high-quantity manufacturing environment.
In 1999, Raytheon embarked on a venture of developing such a system. Besides high throughput and versatility, the goals of the project included logistic goals that would address lowering life cycle costs, technical goals that would achieve high performance, and an architecture that would enable it to maintain technical excellence.

Logistical Goals

The main focus was to develop a system that would permit lowering life cycle costs. The goal translated into a common platform that could be used in a broad spectrum of applications. If achieved, this would enable:
Training for one system versus many for operators, maintainers, and TPS developers.
Reduced calibration equipment and procedures.
System self-test to permit increased availability.
A modular architecture to facilitate maintenance, require fewer spares, and deal with obsolescence.
A spares and maintenance program for one system.
A common resource that could be shared across many programs.
An open architecture (hardware and software) that would promote system longevity and permit upgrading.
Technical Goals

Architecture was a major consideration at the core of the technical goals. The objective was to have a modular system based on industry standards from both a hardware and software viewpoint and to minimize dependence upon proprietary designs.

RF Capabilities

The first criterion was measurement speed. Experience with heretofore "high-speed" test systems showed test times of approximately a half-hour for solid-state assemblies. Classical rack-and-stack RF test systems needed hours to test complex receiver-exciter assemblies. Goals were to reduce these times by factors from 3 to 10.
Nearly as important as speed was RF measurement performance. A fast system with marginal performance would not have a wide range of applications. This new system had to have measurement performance capability similar to that of typical commercial test instrumentation. For it to be able to do the job, the system needed to have an extensive measurement suite. In the RF world, where cable losses and other transmission line issues can be serious problems, an instrument with good measurement capability at its front panel is only half the solution. Being able to easily extend the measurement all the way to the DUT was a prime consideration. Hence, having flexible calibration options was high on the list of priorities.
System Architecture

The blend of a synthetic measurement system (the Aeroflex TRM1000C) and a Raytheon custom-designed 3rd bay (RF switch matrix, DUT interface assembly, and auxiliary COTS equipment) provided a solution that met the desired hardware goals. The software goals were achieved by taking advantage of the TRM1000C's industry standard LabWindows/CVI, VXI plug-and-play type drivers, and LabWindows/CVI-compatible GPIB instrumentation in the Raytheon-designed 3rd bay.

Microwave Synthetic Instrument (TRM1000C)

The Aeroflex TRM1000C is designed to provide reconfigurable, high-speed production test equipment for evaluating a variety of different microwave devices such as amplifiers, transmit and receive (T/R) modules, frequency translation devices, receivers, local oscillators, and phase shifters. It can also perform tests on integrated subassemblies of RF components, as well as on full-up systems filled with any combination of active RF, multiport devices. The basic architecture of the TRM1000C is consistent with the basic architecture described in this book, a CCC cascade, enhanced with compound signal conditioners. The compound conditioners consist of a stimulus up-converter and a response down-converter. Time multiplexing is used to expand the system to multiple inputs and outputs. A calibration and verification system allows for loopback ordinates and application of metrology standards. Figure 5-1 outlines this high-level functionality.
Figure 5-1. TRM1000C functional diagram
The TRM1000C is designed to dramatically improve module test times and reduce measurement errors introduced by the operator, test hardware, and DUT interface. It is ideally suited for production test applications where throughput and flexibility are paramount. Through its synthetic design, the TRM1000C’s nonspecific RF hardware can be software configured to run specific RF and microwave production tests. The TRM1000C hardware architecture is based on advanced synthetic instrument concepts; the TRM1000C does the same measurements as several distinct microwave test instruments including a pulsed power meter, a frequency counter, multiple sources, a spectrum analyzer, a vector network analyzer, a noise figure meter, and a pattern generator. The full measurement suite is as follows in Table 5-1:
Table 5-1. TRM1000C measurement suite

• Power
• Tone Power
• Pulse Power (RMS or Peak)
• Total Power
• Spectral Power Density
• Noise Power
• Pout & Pin at "N" dB Compression
• AM/PM
• Multiport S-Parameters
• 12-Term Error Correction
• Gain
• Input/Output Return Loss
• Isolation
• Conversion Gain
• Group Delay
• Noise Figure
• Phase Noise
• RF Signal Source
• Complex Volts
• Pulse Profile
• Rise time
• Fall time
• Droop
• Envelope Delay
• Frequency
• Spurii
• Harmonic
• Nth Order Intercept
• Modulation Index
• Raw Read
• Complex FFT Data Block
• Digital Data verification
• Analog DMM
• Scope Measurements
A standard TRM1000C includes complex stimulus generation including pulsed modulation, AM, FM, phase modulation (PM), and a fast response measurement channel. Since the system is synthetic in design and thereby easily reconfigurable, it can be reused for different programs and applications, therefore maximizing return on investment (ROI). If you look carefully at the block diagram in Figure 5-1, you can see that the internal design of the TRM1000C follows the standard synthetic measurement system CCC architecture principles for both stimulus and response. A compound up-converter on the stimulus side and a compound down-converter on the response side orient the TRM1000C architecture toward the generation and analysis of bandpass signals, as is appropriate to its RF measurement mission. A calibration matrix interconnects stimulus and response with the DUT, providing stimulus-response closure. This eliminates many redundancies (for example: duplicated channels, stimulus-side detectors, response-side sources) that would otherwise be necessary in order to maintain calibration.
Supplemental Resources

Practical design realities limited the scope of what could be achieved with a purely synthetic measurement system. For testing the more complex RF products (receiver-exciter elements, frequency translation devices, and so forth), additional nonsynthetic resources were required. These instruments do not need to be inside the high-speed loops of the synthetic instrument and therefore do not interfere with measurement speeds. The supplemental test instrument resources include three RF sources, an oscilloscope, a digital multimeter (DMM), and a power meter (for troubleshooting purposes). These instruments, the RF switch matrix, and the DUT interface were incorporated into the Raytheon-designed 3rd bay to complement the TRM1000C.
DUT Interface
For the RFMTS to easily work with a variety of RF products, some way to interface its test resources to these products had to be developed. The interface needed to be rugged, versatile, high-performance, and easy to use. In addition, a high-performance RF switch matrix was included as part of this assembly in order to permit simple, low-cost interface adapters. The chosen solution integrates a high-performance RF switch matrix and a Virginia Panel interface assembly, as shown in Figure 5-2.
Figure 5-2. Test adapter interface
Product Test Adapter Solutions
Some of the products to be tested on the RFMTS were known to be large assemblies. For this reason, time was spent on developing calibration schemes that could be extended out several levels and on providing a rugged, high-performance interface. This approach has permitted using the system on virtually any size RF product. It also accommodates other relatively unique items that support the test, such as pneumatics, hydraulics, liquid cooling, and so on.
Calibration
The RFMTS is designed to collect measurement data for a variety of different DUTs. By the nature of the measurement, raw data collected by the instrument contains characteristics of both the DUT and the test system hardware (instrument, switch interface, DUT adapter, and so forth). To extract only the characteristics of the DUT, the system must compensate for its own contribution to the measurement data. The process for characterizing the system's contribution is generically termed calibration. Calibration is an integral part of performing measurements. Calibration procedures are dependent on the application, measurement, and measurement method; therefore, the flexible TRM1000C calibration design allows for different application needs. The measurements must be NIST traceable; therefore, NIST-traceable transfer standards are needed to calibrate the system. TRM1000C calibration is divided into two categories: primary calibration and operational calibration.
Primary Calibration
The TRM1000C utilizes a modular, line replaceable unit (LRU) methodology as part of the system. These LRUs are calibrated at a standard metrology calibration lab and become an integral part of the system. The majority of the LRUs are commercially available NIST-traceable standards. The production floor can have spares available to remove and replace, minimizing system downtime when performing periodic maintenance. The LRUs also eliminate the need for external, on-site support
equipment; therefore, the user does not need to bring external equipment up to the system. A list of primary calibrated LRUs is as follows:
Power Meter (50-MHz Source)
Power Sensor Calibration Factor
Noise Source Excess Noise Ratio (ENR)
10-MHz Rubidium Standard
3.5mm Calibration Kit (S-Parameters)
Operational Calibration
This is an application-specific procedure that transfers the measurement standards from the system's calibration LRUs to the system itself. This calibration can handle a number of different multiport devices and any number of DUT interfaces. The DUT interfaces can be as simple as NIST-traceable coaxial connectors (3.5 mm), or the interface can be more complex: non-NIST-traceable coaxial connectors (for example, GPPO), standard and nonstandard waveguide, or even direct wafer probes. A multitier calibration technique is used to extend the calibration reference plane out to the DUT (de-embed).
Software Solutions
With a synthetic instrument approach, the scope and potential capabilities of the software are almost boundless. The entire system is LabWindows/CVI-based. A software architecture has been used to address a variety of goals and desired capabilities:
1. Ease of use by TPS developers and test system maintainers.
2. Sufficient depth and flexibility to accommodate dealing with very complex, digitally-controlled RF products.
3. The different levels of software employed by the TRM1000C synthetic instrument.
Test Program Set Developer Interface
The objective here was to provide a simple means of implementing moderately complex tests, without demanding that all test program set (TPS) developers and maintainers be experts in C programming. The solution was to develop C-based test procedures that would be graphical in nature, and then utilize a test executive that would readily permit stringing these test procedures together. This approach requires a few C experts, but permits test engineers to be productive with minimal software-specific training.
The test procedure concept simply translates typical RF measurement scenarios into a graphical user interface screen or "panel" where the required parameters can be entered. The TPS designer enters the associated parameters via the panel for each test the designer develops. Test procedures have been developed for all of the measurement types. A primary objective of the test procedure approach is to maximize reuse and provide cost-effective TPSs in a timely manner. The approach has proved to be very effective. Test engineers are able to concentrate on the technical aspects of the DUT and the test scenarios. The pure software designers are doing what they do best—developing the C-based procedures in support of the test designers. For complex tests, the LabWindows/CVI environment provides a great deal of flexibility for developing custom procedures for unique needs or for further speed enhancement if required.
TRM1000C Software
The TRM1000C currently uses a scripting language called JavaScript (ECMAScript). Scripts are used to define the logic, control, processing, and storage of the measurement and resultant data. Any number of scripts may be loaded at any time and sequenced via a test executive, or incorporated as part of a test procedure and then called by an executive. By commanding the scripts, the TRM1000C may change from one measurement personality (for example, vector network analysis) to another (noise figure). There are two categories of scripts that can be designed: low level and high level. Low-level scripts are interpreted and run one step at a time. A
low-level script is not optimized for speed, but is best for situations where the number of firmware states is unknown. Such a situation may arise when conditional branching is utilized. An example of this is when a measurement result is used to decide whether another measurement needs to be made. High-level scripts are unrolled and loaded into a state table within the processor. The processor can then execute the state table without any interaction with the script. This method is optimized for speed, but requires a known number of states. After the measurement is performed, the requested data is stored in a local data file (slot zero controller). Since the scripting capability allows for complex, multidimensional data sets to be collected and the speed of the system allows for considerable data to be collected quickly, these data files can be somewhat large. To minimize the data file size and to allow for easy access, the TRM1000C currently utilizes an open file format called hierarchical data format (HDF). Measurement results can then be either routed back through the host driver link as part of the script or the HDF file can be retrieved remotely by the host. Any number of different test executives can be used with the TRM1000C. Test executive software running on the host PC performs all data presentation and report generation activities.
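To make the distinction between the two script categories concrete, here is a minimal sketch in Python rather than the TRM1000C's actual ECMAScript environment; the hardware actions and function names are hypothetical illustrations, not the real instrument API.

```python
# Illustrative sketch only (hypothetical names, not TRM1000C code).
# A low-level script is interpreted step by step and may branch on results;
# a high-level script is unrolled in advance into a flat state table.

def set_frequency(log, f_hz):
    log.append(("SET_FREQ", f_hz))          # stand-in for a firmware state

def measure_power(log):
    log.append(("MEASURE",))
    return -25.0                            # placeholder reading, dBm

def run_low_level(log, freqs_hz, retest_below_dbm):
    """Interpreted: the number of states is unknown until run time."""
    for f in freqs_hz:
        set_frequency(log, f)
        if measure_power(log) < retest_below_dbm:
            log.append(("FLAG_RETEST", f))  # conditional branch on a result

def unroll_high_level(freqs_hz):
    """Unrolled: a fixed sweep becomes a state table executed without the script."""
    table = []
    for f in freqs_hz:
        table.append(("SET_FREQ", f))
        table.append(("MEASURE",))
    return table

log = []
run_low_level(log, [1.0e9, 2.0e9], retest_below_dbm=-20.0)
print(len(unroll_high_level([1.0e9, 2.0e9, 3.0e9])), "pre-compiled states")
```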
Conclusions
The system performance is comparable to that of standalone instrumentation and in some cases better. From a practical application viewpoint, the RFMTS has met all DUT test requirements, many of which are very stringent. It is important to note that the test designer has a great deal of leeway in optimizing measurement and speed performance. The test designer has control over the IF bandwidth, DSP block size (number of samples), block count (averages), receiver gain, and so forth, allowing performance to be optimized for a required measurement. In practice, some DUTs tested on the system are optimized for speed and others for measurement performance. In typical applications, the high-volume devices have measurement requirements that permit
optimizing for speed. The DUTs that require the greatest accuracy are much fewer in number on a per-system basis, and therefore, minor sacrifices of speed are easily accepted. As a last note relative to performance, the TRM1000C synthetic design and system architecture readily permit performance improvements via both hardware and software, and in fact these improvements are continually being made.
From both a logistical and technical goal viewpoint, the system has been a major success. Multiple systems are on the floor, and the envisioned goals are now being realized. The open architecture, the speed, and the technical performance have made the RFMTS a standard core test system. The architecture permits continual growth from both hardware and software viewpoints. Hardware growth will focus on continued performance enhancements. With the modular approach, this can be, and is being, done in an incremental fashion. Even speed, one of the system's main virtues, continues to improve as computer speeds and the other building blocks in the system improve. But software will be the most exciting area where continual growth is envisioned. The graphical test procedures will continually be refined. They will be made more user-friendly, increased in depth, and more specific elements will be added. As these grow and develop, the cost and schedule for developing test programs will likewise be reduced. Because of the nature of the system, software can and will enhance every aspect of the system: performance, utilization, calibration, and maintenance.
Chapter 6: Measurement Maps
Large, complex ATE systems are often run by large, complex software systems. The complexity of the software is inevitably a result of the complexity of the system, which is a direct result of the problem solved by the system. Fortunately, test engineers are really smart people. As such, they have no problem dealing with the complexity of the measurement problem. In fact, the test engineer sees more complexity in the measurement problem than does someone unfamiliar with the details. Because they understand the problem better than anyone else, the test engineer is the best person to figure out exactly what test to run, and exactly how to run it.
Unfortunately, the large, complex software systems that run large, complex ATE systems are rarely designed to invite the test engineer to dive in and play. A lot of system-specific software knowledge is required to become productive and avoid breaking things. This specific knowledge has nothing to do with measurements, but rather is related to software architecture and software methodology issues. These issues may be vital from a software perspective, but they really have no direct value to the test engineer. They are pure overhead from the test engineer's perspective.
Don't get me wrong. I'm not saying that the typical software found in ATE systems is badly designed. Instead, I'm saying it may have had other priorities—that it is of the wrong design to empower end-user participation on a daily basis. The all-too-common problem of measurement-irrelevant software complexity renders big ATE systems inaccessible to the test engineers who need to use them to solve their everyday problems.
This problem is not news to ATE software designers. In fact, ATE software technology has taken step after step in the direction of providing accessible programming interfaces to test engineers. These developments have had varying success in establishing a clear division between the machinery of the software innards and the machinery of the measurement. Much progress has been made, but no approach has eliminated the problem completely. The holy grail of ATE software is a system such that the full complexity of measurement can be expressed by a test engineer with no knowledge of the software innards. The test engineer should be able to design a measurement, from scratch, without being forced to worry about software artifacts, like calling conventions, parameter lists, communications interfaces, and, most of all, the programming quirks of the individual instruments.
Much of the arcana that creeps into programming measurement systems is caused by those pesky quirks of specific measurement hardware, and their related configuration issues. Designing a measurement becomes a process of orchestrating a collection of instruments to do what you want them to do. This is an inherently complex task because each instrument has its own set of capabilities, expressed in nonuniform ways. Admittedly, things like SCPI and VXI plug and play go a long way toward unifying the look of a collection of unique instruments, but you still have a collection of unique instruments, albeit "smoothed over."
This is why synthetic instrumentation is such a breakthrough for software accessibility. For the first time, designers can define a measurement application programming interface (API) that is just about measurements and only about measurements. The irrelevant machinery is hidden. The test engineer sees a totally measurement-oriented interface through which they can express whatever it is about the measurement that needs to be expressed. In this book, I introduce the stimulus response measurement map (SRMM) model of measurements and XML-based SRMM measurement definitions as one way to define measurements in a synthetic measurement system that stays focused 100% on the measurements. I wouldn't claim that my approach is the only possible approach, but I submit that it provides a proper foundation for ATE application software that is fully and exclusively based on the measurement, and thereby facilitates the
construction of user interfaces that spare the test engineer from irrelevant considerations outside the measurement.
Measurement Abstraction
It is a sad irony that focusing on the measurement for the purpose of making life simple for the test engineer leads to abstraction, and a new abstraction is never simple for anyone to accept at first. I would go so far as to say that the most formidable human problem facing the introduction of synthetic instrument design concepts in real-world applications is the fact that synthetic instrumentation embodies an abstract model of measurements. When people are accustomed to dealing with concrete things (for example, a specific set of instruments that they use to make a specific measurement), it becomes very difficult for them to let go of this conception of things and try to imagine any other way to accomplish the measurement.
This fact about human nature is why virtual instruments are so popular. With virtual instruments, you can imagine that your favorite old instruments are still being used to make your measurement. You can have comforting little virtual knobs on cute virtual front panels constituting your virtual rack of virtual instruments. This collection of instruments is then programmed in a quite familiar manner that mimics the way a corresponding set of physical instruments would be applied to do the measurement.
But actually, if you want to measure something, Yogi Berra might say that all you really need is a thing that measures what you want to measure. If you want to measure A and B vs. C and D, you need an (A,B) = f(C,D) measuring instrument, an AB vs. CD meter, so to speak. Nothing else is really the "right" thing. Yes, maybe you have always measured A and B separately as functions of C and D and stitched it together, but fundamentally what you really want to do (if you don't mind me putting words in your mouth) is measure the AB vector over the CD manifold.¹ This idea becomes crucial when C and D are not separable.
A function of several variables

f(x0, x1, x2, …)

is separable if it can be expressed as a product of functions of the individual variables:

g0(x0) · g1(x1) · g2(x2) · …

Here's an example: Suppose you have an ultrasonic transducer. You would like to measure its response versus frequency and its response versus signal input level. These two measurements may not be separable. You could vary input level and plot output at some fixed frequency. Then you could vary frequency and plot output at some fixed input level. If the response were separable, you could then multiply those two functions and have the whole thing. But it may be the case that the shape of the frequency response curve changes at different power levels. It may also be the case that the power transfer curve is shaped differently at different frequencies. You really need to measure the response over the joint domain manifold of frequency response and power response to fully characterize the sensor. Figure 6-1 is an example of a manifold f(x,y) that isn't separable.

¹ Fear not. Scary mathematical jargon will be explained.
Figure 6-1. Joint manifold
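To make separability concrete, here is a small numerical sketch of my own (not from the text): a response that factors into a frequency term times a level term passes a simple cross-ratio test, while a jointly varying response such as f(x, y) = sin(xy) does not.

```python
import math

def is_separable(f, tol=1e-9):
    """Check whether sampled data f[i][j] factors as g(x_i) * h(y_j).
    Uses the identity f[i][j] * f[0][0] == f[i][0] * f[0][j], which holds
    for every i, j exactly when the table is separable.
    Assumes the reference sample f[0][0] is nonzero."""
    return all(abs(f[i][j] * f[0][0] - f[i][0] * f[0][j]) <= tol
               for i in range(len(f)) for j in range(len(f[0])))

xs = [1.0, 2.0, 3.0]
ys = [0.5, 1.0, 1.5]
separable = [[math.exp(-x) * (1.0 + y) for y in ys] for x in xs]  # g(x)*h(y)
joint     = [[math.sin(x * y)          for y in ys] for x in xs]  # not separable
print(is_separable(separable))   # True
print(is_separable(joint))       # False
```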
Synthetic instrumentation approaches don't yield their biggest payoff unless you are willing to think about measurements themselves, as pure measurements, especially multidimensional measurements. You need to divorce your thought from the particular instrumentation used to make the measurements and think only about what you want to measure, in and of itself. Failing to do this will inevitably shift the focus from the measurement to the instrument being used and result in the introduction of myriad irrelevant and extraneous considerations that would not otherwise appear.
To help get everyone thinking about measurements abstractly, the following section introduces some vocabulary that does not shackle us to instrument-specific ideas. The vocabulary is based on mathematical concepts that, for the most part, are studied by teenagers in high school.
General Measurements
In a synthetic instrument, measurements are performed by software running on generic hardware. Ideally, this software is completely flexible. Any sort of measurement is possible to define, so long as it falls within the capabilities of the hardware. Unfortunately, if measurements are flexible without limit, one is wandering without guideposts in this total freedom. The system can do anything, so there's no structure to provide a handle on what can be accomplished. For example, there is no finite set of distinct measurement parameters that fully describe the required inputs to all these possible measurements. Moreover, there is no possible universal parameter format, type, or structure that can cover all cases. Similarly, in general there is no standard data type to cover all possible measurement results. Nor is there a finite and standard calibration set that can be applied to relate these arbitrary measurements to physical units.
It should be clear, therefore, that some sort of structure must be imposed on this vast abstract possibility in order for our finite human resources to be applied. One type of structure is the virtual instrument structure. This introduces the accustomed structure of everyday instrumentation in order to make our options reasonably finite. But virtual instruments are not unlike the way stops on a church organ mimic classic musical instruments. The organ can synthesize various classic instruments, but if you wanted something else not in the set of organ stops—something new—you couldn't have it. This isn't because of any inherent limitation in the organ, but rather because of a limitation in the model used to parameterize and limit what the church organ can be asked to synthesize.
What is needed is a structure on possible measurements that is limiting enough to result in a tractable system design, but generic enough to allow the full range of possibilities. Returning to the organ metaphor, suppose we introduce the idea of a Fourier series and allow organ stops to be specified as Fourier coefficients. Now the goal is reached: the Fourier series provides a handy and compact structure without introducing practical limitations. The full freedom of synthesizing any past instrument lives along with the possibility of synthesizing future instruments.
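To make the organ analogy concrete, here is a small sketch of my own (not from the text): a short list of Fourier coefficients parameterizes both familiar "stops" and anything new, which is exactly the kind of compact-but-unlimited structure being described.

```python
import math

def synthesize(coeffs, fundamental_hz, t):
    """Sum of harmonics: coeffs[k] is the amplitude of harmonic k + 1."""
    return sum(a * math.sin(2.0 * math.pi * (k + 1) * fundamental_hz * t)
               for k, a in enumerate(coeffs))

flute_like = [1.0, 0.1, 0.02]            # nearly a pure tone
reed_like  = [1.0, 0.0, 0.5, 0.0, 0.3]   # odd harmonics emphasized
brand_new  = [0.2, 0.9, 0.4, 0.7]        # no classic stop corresponds to this
print(synthesize(brand_new, 440.0, 0.001))
```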
Abscissas and Ordinates
I will now propose a system for describing measurements that is free of specific instrumentation focus, and thereby does not require reference to virtual instruments or any other legacy crutch. It is compactly and usefully structured, but the full freedom of synthesizing any past instrument lives along with the possibility of synthesizing future instruments.
In this system, I consider only the subset of all possible measurements that I call stimulus response measurement map (SRMM) measurements. I will show that with this conception, a uniform format for parameters, data, and calibration is possible. This has far-reaching significance because the broad class of SRMM measurements comprises all the typical measurements made with conventional instrumentation, as well as much of what is possible to do with any instrumentation.
The Measurement Function
SRMM measurements are based on the concept of an abscissa and an ordinate. You may remember these words from high school. If you remember what they mean, you are ahead of the game because I will not alter their meaning in any fundamental way. All I will do is to observe the fact that these concepts represent a measurement in a generalized manner first elucidated by Isaac Newton. Consider the equation:

y = f(x)

The variable x represents the abscissa and y represents the ordinate. The function f relates the two. If you don't like the words abscissa and ordinate, perhaps you might prefer the alternative: independent variable (abscissa) and dependent variable (ordinate). Or maybe you just like x and y. Some purists think that abscissa and ordinate should be reserved strictly for the case of a two-dimensional plot. Fearless of the wrath of the math gods, I will use the abscissa and ordinate terms to refer to independent and dependent variables regardless of the dimensionality of each.
In a sense, the function f represents the measurement process, with the abscissa representing the state of things, or possibly some imposed state (stimulus), and the ordinate representing the measured or observed response. Mathematically, a function is defined by specifying its ordinate for every possible abscissa over some domain. In the case of a discretely sampled domain, a function can be defined by simply enumerating the ordinates in a table. For example:

x    y
1    10.0
2    10.41
3    10.73
4    20.0
defines a function over the domain [1,2,3,4]. This process of defining a function by a table is exactly analogous to the process of measuring the value of an ordinate for a uniformly sampled abscissa domain. It's simply a matter of labeling the abscissa and ordinate. For example, instead of x and y, I could write frequency and power, or time and temperature, as in:

Time (hr)    Temp (°C)
1            10.0
2            10.41
3            10.73
4            20.0
Not all possible tables define valid measurement functions. The definition of a function requires that there be one and only one value of y for any value of x. Therefore, a table that listed x = 2 several times with different values of y would not define a function. Analogously, in the context of a measurement, this requirement translates into demanding that the system produce one and only one measurement for each value of abscissa. Normally, such a requirement isn't a problem. In the case of inverse maps, however, it may become a sticking point.
Canonical Ordinate Algorithms
Much of the technique of measurement in a synthetic instrument is encapsulated in so-called canonical ordinate algorithms. These algorithms generally represent the crux of the measurement issues—the down and dirty business of finally getting a number. Canonical ordinate algorithms, as such, are not concerned with the context of the measurement, abscissa rastering, or data structures. These other issues have been abstracted away and are handled by other algorithms and data structures in the stimulus response measurement map model.
Multidimensional Measurements
Functions can have more than one abscissa. In such a case, the domain is a multidimensional manifold or, more loosely, a surface. If, for example, there are two abscissas, u and v, then the function of two variables

y = f(u,v)

can be defined over a discretely sampled two-dimensional (u,v) domain with a table, thus:
u    v    y
1    4    5
1    5    6
2    4    6
2    5    7
Once again, for this table to define a function, there must be one and only one value of y for each unique (u,v) pair.
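A minimal sketch of this idea in Python (my own illustration, not the book's XML or HDF machinery): a measurement table is a list of abscissa-tuple/ordinate pairs, and the function requirement is simply that no abscissa tuple appears with two different ordinates.

```python
# Illustrative only: the (u, v) -> y table from the text.
rows = [
    ((1, 4), 5),
    ((1, 5), 6),
    ((2, 4), 6),
    ((2, 5), 7),
]

def is_valid_map(rows):
    """True only if each abscissa tuple has one and only one ordinate."""
    seen = {}
    for abscissa, ordinate in rows:
        if abscissa in seen and seen[abscissa] != ordinate:
            return False          # two different y values at one (u, v) point
        seen[abscissa] = ordinate
    return True

print(is_valid_map(rows))                    # True
print(is_valid_map(rows + [((2, 5), 9)]))    # False: (2, 5) measured twice
```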
Domains
Note the interesting pattern that the (u,v) abscissas make. It should be clear that this pattern is formed with an outer, or cartesian, product of two uniformly sampled abscissas, [1,2] and [4,5]. This produces every possible combination of the individual abscissa values. An outer product can also be visualized as a table. Here is the outer product table that produced the above abscissa pairs:

u\v    4        5
1      (1,4)    (1,5)
2      (2,4)    (2,5)
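In code, a separable domain can be regenerated on demand from nothing more than the individual abscissa scales; a brief illustrative sketch:

```python
from itertools import product

u_scale = [1, 2]      # individual abscissa sampling grids
v_scale = [4, 5]

# The outer (Cartesian) product reproduces every (u, v) combination, so only
# the two scales need to be stored, not the abscissa pairs themselves.
domain = list(product(u_scale, v_scale))
print(domain)         # [(1, 4), (1, 5), (2, 4), (2, 5)]
```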
Not all discretely sampled multidimensional domains can be represented as outer products; however, those domains that can be represented with an outer product, I call separable. The advantage of the separable property of a domain is that it can be represented very compactly by the sampling grids of the individual abscissas. The abscissa pairs in the outer product do not need to be stored explicitly after the measurement. Fortunately, separable domains constitute, in large measure, the kinds of measurement domains people like to use. There are, however, some restricted special cases of nonseparable domains that are of interest. For example, consider the measurement table:

u    v    y
1    2    3
2    3    5
3    4    7
4    5    9
I call this kind of domain locked. There is a fixed difference between the u and v values. In many measurements, abscissas are locked in ways analogous to this: a received frequency might be locked to a transmit frequency, or a response port may be relative to a stimulus port. There are many examples of abscissa locking.
The final kind of nonseparable domain that is commonly used for measurements is called banded. In such a domain, one abscissa varies independently, and the other varies in a restricted range around the first. A banded domain can be represented as a diagonal subset of the outer product of a separable domain. For example, the domain table:

u\v    1        2        3        4
1      (1,1)    (1,2)
2      (2,1)    (2,2)    (2,3)
3               (3,2)    (3,3)    (3,4)
4                        (4,3)    (4,4)
might result in a measurement table like this:

u    v    y
1    1    4
1    2    9
2    1    9
2    2    16
2    3    25
3    2    25
3    3    36
3    4    49
4    3    49
4    4    64
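Like separable domains, locked and banded domains have compact descriptions from which the abscissa points can be regenerated. A hedged sketch (illustrative names, not the book's software) that reproduces the two tables above:

```python
def locked_domain(u_scale, offset):
    """Locked: v tracks u with a fixed difference."""
    return [(u, u + offset) for u in u_scale]

def banded_domain(u_scale, v_scale, half_width):
    """Banded: keep only the outer-product pairs within a diagonal band."""
    return [(u, v) for u in u_scale for v in v_scale if abs(v - u) <= half_width]

print(locked_domain([1, 2, 3, 4], offset=1))
# [(1, 2), (2, 3), (3, 4), (4, 5)]
print(banded_domain([1, 2, 3, 4], [1, 2, 3, 4], half_width=1))
# [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3), (3, 2), (3, 3), (3, 4), (4, 3), (4, 4)]
```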
Banded domains are useful for measurements that explore a region in a two-dimensional domain of some ordinate where stimuli or abscissas vary in a coordinated, but not strictly locked, manner.
Measurement Maps
In a measurement process, it is often convenient to make several different ordinate measurements at a given abscissa point. Moreover, the ordinates may range over a multidimensional domain manifold with several independent abscissas. Therefore, to represent measurements, I need to generalize one step further, from a scalar function of several variables to a vector-valued function of several variables, a so-called measurement map.

Figure 6-2. A measurement map (several abscissas in, several ordinates out)
Although the concept of a vector field or multidimensional mapping comes from mathematics and therefore evokes latent math anxiety in people, I'm not really saying anything deeply difficult here. Oddly, if I talk separately about measurement data as multidimensional, or the multidimensional domain manifold over which the data is taken, people seem to understand that just fine. It's when I put them together into a vector-valued function of a domain manifold, in a simple word, a map, that the eyes glaze and the knuckles whiten. But consider, for example, an X-Y positioner moving an image sensor around. A common flatbed scanner like you have on your PC is an example. Clearly the position of the sensor is a two-dimensional thing. And if I talked about an X-Y-Z positioner, possibly with some theta angular rotation of the sensor, the resulting four-dimensional set of variables isn't very hard to take. Any number of independent variables are easy to understand. When the scanner acquires color image data in red, green, and blue color dimensions, it establishes a relationship between the image intensity vector in RGB space, and the X-Y spatial location. This relationship is what I call a measurement map, and the data itself is measurement map data. (See Figure 6-3.) A measurement map is the relationship between a set of independent variables and a set of dependent variables. It's how the elements of one table or spreadsheet relate to another table or spreadsheet. That's it. Nothing more tricky than that.
Figure 6-3. A color image as a measurement map: the map function (R,G,B) = f(x,y) relates pixel color to the X-Y position on the flatbed scanner
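In code, the scanner example is just a vector-valued function sampled over an outer-product spatial domain; a small sketch of my own with a stand-in sensor function:

```python
def scan_pixel(x, y):
    """Stand-in for the sensor reading at position (x, y), illustrative only."""
    return (x % 256, y % 256, (x + y) % 256)     # an (R, G, B) ordinate vector

x_scale = range(4)
y_scale = range(3)
map_data = {(x, y): scan_pixel(x, y) for x in x_scale for y in y_scale}
print(map_data[(2, 1)])   # one (R, G, B) ordinate vector per (x, y) abscissa point
```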
Ports and Modes
I've talked endlessly about measurements. How do these relate to a DUT? What is the link between the two? A bridging concept that I use to relate a DUT to its measurement is the concept of a port. A port is a plane of interaction between the measurement system and the DUT—a precisely specified data interface between the signal conditioner and the DUT. Often it is a physical cable interface to a sensor or a connector on the DUT, but it might be an abstract plane in space. It depends on what you are measuring and how.
Many schemes have been developed for managing ports and linking logical ports to the real physical wiring interfaces, registers, or commands they represent. These schemes tend to be proprietary and associated with particular measurement systems. Others are more generic. I will present a sketch of a minimal scheme in the section titled "Describing the Measurement System with XML" that uses XML syntax.
Ports can have many attributes, but foremost is whether the port is an input to the DUT or an output from the DUT. This will determine whether the measurement system applies a stimulus to the DUT at a given port, or measures a response. In my model of a synthetic measurement system, ports must be associated with physical points or planes, and must be categorized as either stimulus or response. They can't be both, or neither, although it is certainly possible to define two ports, one stimulus and one response, that both connect to the same place, or no place.
Ports are a logical concept, but ultimately they connect to the real world. Stimulus ports are, ultimately, controlled by a register that is written or a command that is sent; response ports are likewise wired to a register that is read or a command that is received.
The port is not the only bridging concept that links from the measurement to the hardware. A mode denotes states of the system itself, independent of its physical interfaces. The distinction between ports and modes is a fuzzy one. Perhaps the signal conditioner has two separate physical interfaces: one through an amplifier and one direct, with a switch internal to the conditioner selecting between them. I might call that switch a mode switch. On the other hand, suppose these two conditioner ports are connected to different DUT ports. Maybe then the switch is really a port switch. But what if the switch matrix outside the conditioner is able to route either of these signal-conditioning interfaces to the same or different interfaces on the DUT? Now it may be unclear if the gain selection switch should be considered a port or a mode.
Figure 6-4. Is the gain switch setting a port or mode? (amplifier, controller, codec, and conditioner blocks)
It might be argued that implementing a mode by means of a physical interface, mixing the two concepts of mode and port, might be considered a hardware design mistake in the same way as a GOTO statement is considered, by some, to be a mistake in software design. This would be true if
there were no other considerations, but the reason hardware is designed a certain way (or software, for that matter) is frequently based on the optimization of certain aspects of performance, along with considerations of safety, cost, reliability, and so forth. There may be good reasons to use separate physical interfaces for different modes in a signal conditioner that override any paradigm purity considerations. A common voltmeter is a good example, where the high-voltage measurement input is often a different interface than the normal, low-voltage input. This is done in order to reduce the chance of damaging the low-voltage circuitry with an accidental high-voltage input, as well as to eliminate the need for an expensive high-voltage switch in the meter.
In general, it is wise to distinguish and separate modes from ports as much as possible. In a well-designed SMS, there will be a site configuration document and an associated software layer that can disentangle many of these overlaps between ports and modes.
DUT Modes as Abscissas
Quite often, the measurement system is called upon to control the DUT. For example, suppose the DUT is a radio receiver. A reasonable measurement of interest might be the sensitivity of that radio receiver. But the sensitivity of a radio depends on its settings: where it is tuned in the band, if it's set to AM or FM, and so on. If the radio can be controlled by a measurement system, we can ask that system to measure the radio's sensitivity as a function of tuning and other mode settings on the radio. When a DUT is controlled by the measurement system during testing, the DUT modes become abscissas. Even though they are not part of the measurement system, per se, they represent an independent variable just as much as any other port, mode, or abscissa within the control of the measurement system.
Ports as Abscissas
Ports are related to the measurement map in an essential way. Each abscissa and ordinate in a stimulus response measurement map must be associated with a port. Also, one or more ports may be defined as port abscissas. It's best to think of ports as a special kind of abscissa (a child class) that behaves for the most part exactly like an abscissa, but has the additional ability to bind other abscissas and ordinates to a physical measurement port. In this way, all the machinery developed to sample abscissa domains becomes available to select the port used for each measurement. This approach makes a lot of sense because abscissas define the independent variables in the measurement; they define what is controlled and specified; they establish the domain over which the ordinate is measured. Similarly, modes are also sensible to make into abscissas, as they, too, represent the independent context of an ordinate. If you make port and mode selection through abscissas, it's clear that these port or mode abscissas may apply either to stimulus, response, or any combination thereof. This point matters most when specifying a calibration strategy that may introduce new calibration abscissas, or otherwise transform the map from what the user specified to what the machine can do.
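One way to picture ports, modes, and port abscissas in software is sketched below. This is my own illustration under assumed names; the book's actual scheme is the XML site-configuration description mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Port:
    name: str
    direction: str      # "stimulus" or "response" -- never both, never neither

@dataclass
class Abscissa:
    name: str
    values: list        # the sampling grid for this independent variable

@dataclass
class PortAbscissa(Abscissa):
    """A child class of abscissa whose values are ports rather than numbers;
    it binds other abscissas and ordinates to a physical measurement port."""

rf_in   = Port("RF_IN", "stimulus")
rf_out  = Port("RF_OUT", "response")
freq    = Abscissa("frequency_Hz", [1.0e9, 2.0e9, 3.0e9])
resp_at = PortAbscissa("response_port", [rf_out])
print(resp_at.values[0].direction)   # "response"
```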
Map Manipulations
If you have followed me so far, you should see that the description of a measurement and the results of measurements are mappings. Using the stimulus response measurement map model of measurements allows us to see exactly what aspects of each particular measurement are unique to that measurement, and what parts are generic aspects. Looking at the map as a whole allows us to see beyond the particular list of abscissas and ordinates associated with a test and focus on the measurement itself. This measurement focus leads inevitably to a compact implementation as a synthetic instrument.
With regard to abscissas, in the common case of a separable domain, the process of applying stimuli to a device can be reduced to defining the individual abscissa scales. With the addition of banded and locked domains, all usual domain cases are covered by a small set with compact descriptions.
There is a distinction between a map description and the map data itself. The map description is present before the measurement is made. The map data is acquired after the measurement is made. Together they represent
a fully documented measurement map. Often I will discuss the two collectively using just the one word map. In cases where the distinction is relevant, I will explicitly specify map description or map data.
Maps are more than just a way to define measurements and record the results of measurements. Maps can change. They can be processed and manipulated both before the measurement and after the measurement in ways that are isometrological—manipulations that do not affect what is ultimately measured. In fact, they must be processed and manipulated isometrologically in order to apply to many common measurements, particularly relative measurements, or measurements that include a calibration ordinate. This isometrologic manipulation process is called canonicalization. It is described in detail in the section titled "Canonical Maps" and it is the paramount benefit of the stimulus response measurement map viewpoint.
Maps can be interpreted (with some restrictions) as multidimensional entities, as manifolds themselves. As such, all the concepts associated with manipulating objects in space work as tools for manipulating maps. For example, you might take a slice through a particular plane. A slicing action represents holding the value of some variable constant, either an abscissa or ordinate. Imagine a measurement of an amplifier gain and power supply drain as a function of input power and input frequency. Maybe you might like the gain versus frequency at constant power supply current. That would be a slicing operation. Slicing operations usually require interpolation because the plane that slices through the data may fall between measured or controlled points.
Processing may also rotate the map. This may be done to reorder the axes, or perhaps to remove dependencies between axes—to orthogonalize axes. An example of orthogonalizing would be a DUT that has two inputs and two outputs. Both inputs affect both outputs, but the inputs affect the outputs in different ways. Imagine, for example, the hot and cold knobs on your sink. They control the temperature and flow rate of the water out of the spigot. Two abscissas, two ordinates. There is an interaction between the two. Turning one knob changes both the flow and the temperature. Maybe you would like to know how to control just the temperature at a constant flow, or the flow at constant temperature. That is the process of orthogonalization applied to measurements. In this case,
it involved finding a rotation transformation to apply that orthogonalizes the abscissas.
When rotating more than just the abscissas, the rotation can be interpreted as an inversion of the map. For example, if you have a map y = f(x) and you find a function g that exchanges y for x, allowing you to get x = g(y), then you have rotated and flipped the x-y plane. That is to say, you have rotated abscissas and ordinates together, as one unit. Inversions, like slicing, require interpolation in order to make the new abscissa fall on nice, uniformly gridded points.

Figure 6-5. Inverting a map (y = f(x) transformed to x = g(y))
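A small sketch of inversion followed by re-gridding, using simple linear interpolation (my own illustration; the map here is monotonic, so its inverse has a single branch):

```python
def invert_and_regrid(xs, ys, new_y_grid):
    """Given samples of y = f(x), build x = g(y) on a chosen y grid."""
    pairs = sorted(zip(ys, xs))       # swap roles: y becomes the abscissa
    out = []
    for y in new_y_grid:
        for (y0, x0), (y1, x1) in zip(pairs, pairs[1:]):
            if y0 <= y <= y1:         # linear interpolation within the segment
                t = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)
                out.append((y, x0 + t * (x1 - x0)))
                break
    return out

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 4.0, 8.0]             # monotonic, hence invertible
print(invert_and_regrid(xs, ys, [1.0, 3.0, 5.0, 7.0]))
# [(1.0, 0.0), (3.0, 1.5), (5.0, 2.25), (7.0, 2.75)]
```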
Sometimes it’s necessary to flatten a map. This process removes an abscissa or ordinate. Calibration is often accompanied by flattening. For example, suppose you are measuring the gain of an amplifier. You may do that by measuring its input power, then its output power, then dividing the two, yielding gain. After the division, if you no longer want input and output power, you can flatten the map data manifold by combining two of its dimensions with a calculation. In that case, I call gain a calculated ordinate as compared to the directly measured ordinates of input and output power. Maybe you want to expand (or maybe better would be thicken) the map data with the gain calculation result, keeping input and output power in place. This is fine too. Expanding is the dual of flattening. It’s common to add new dimensions to map data during post-processing with additional calculated results; it’s also common to add them in preprocessing to include calibration ordinates or abscissas. Sometimes these expansions are paired with their dual, a flattening, on the opposite side of the data acquisition.
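A minimal sketch of the gain example just described (illustrative data and names, not the book's code): flattening combines the two measured power ordinates into a calculated gain ordinate and drops them, while expanding keeps them alongside the new result.

```python
# Map data: frequency abscissa -> directly measured ordinates.
map_data = {
    1.0e9: {"p_in_dBm": -10.0, "p_out_dBm": 5.0},
    2.0e9: {"p_in_dBm": -10.0, "p_out_dBm": 3.5},
}

def flatten_to_gain(data):
    """Collapse p_in/p_out into one calculated ordinate (dB subtraction)."""
    return {f: {"gain_dB": o["p_out_dBm"] - o["p_in_dBm"]} for f, o in data.items()}

def expand_with_gain(data):
    """The dual: keep the measured ordinates and add the calculated one."""
    return {f: {**o, "gain_dB": o["p_out_dBm"] - o["p_in_dBm"]} for f, o in data.items()}

print(flatten_to_gain(map_data)[1.0e9])    # {'gain_dB': 15.0}
print(expand_with_gain(map_data)[2.0e9])   # measured ordinates plus gain_dB
```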
An alternative to expanding and flattening is to make a child map. A child map is generated from one or more parent maps by a calculation. It's a good idea to maintain a link between child and parent so that later you can figure out what the calculated data was based on. This is often done in a nested or hierarchical tree-like manner that can easily be expressed in XML or HDF.
Other dual-pair manipulations that are useful are rastering and raveling. Both of these ideas are implicit in the way domain manifolds are sampled. Normally, where there is more than one abscissa, the abscissas are sampled in raster order. That is to say, one of the axes is sampled so as to vary fastest in an innermost loop with the other axes held constant. Then, once the innermost abscissa is completely sampled across its domain, the next innermost abscissa is incremented to its next domain sample point, and then the innermost repeats its scan. Thus, all the axes are sampled through their ranges in this way. An alternative to rastering is to ravel the abscissa points in some other order. All points are still sampled, but the samples are ordered differently. Normally, the order in which the abscissa set is sampled does not affect the measurement, but when hysteresis is present, or when certain axes are slow and others are fast to measure, the ravel or raster order can be essential to the success of the measurement.
All these manipulation techniques, and more, can be used on measurement maps. They can be used individually, or in combination. What is less obvious, but should be emphasized, is that they can also be used either on map data, or on map descriptions. This may be surprising to you if you had been thinking of all these cutting, flattening, and interpolation operations as after-the-fact post-processing of map data. Operations on map descriptions are performed differently than operations on map data, that is true. But the same set of techniques are generally available. One can expand the map description to include extra abscissas or ordinates before the measurement; similarly, one can interpolate, invert, or otherwise reshape.
Consider the gain measurement example again, beginning with a map description that gives gain as an ordinate. One of the pre-processing steps might be to expand the map into canonical form with an atomic input
power and output power ordinate.² This would be done by canonicalization so that during post-processing the operation could be reversed, yielding gain. In fact, it is a general rule that calibration strategy considerations will often require a transform of the map description prior to the measurement. Any time you make a relative measurement, for example, the relative ordinate you want to measure must be split into multiple measurements, which are divided or subtracted or otherwise combined through some calculation to compute the desired computed ordinate.
Problems with Hysteresis
There is great advantage to thinking about maps as entities that can be manipulated by standard operations, as if the maps were made of modeling clay or tinker toys. In many cases, clever manipulation yields great efficiency in measurement activities, giving us faster, better, and cheaper measurements. Unfortunately, there are metrologic issues regarding accuracy that must be addressed. One of these issues is interpolation, relied on by many map manipulation techniques. Issues with interpolation are discussed elsewhere in this book. In this section, I will discuss another key issue that hurts the ability to manipulate maps. That issue is hysteresis.
It is a fact that many DUTs have memory. That means that the results of one measurement on the DUT depend on what measurements have been made previously. For example, temperature is an issue for many devices, and making measurements can alter the temperature of the device, thereby changing measurement results. Many other examples exist of this phenomenon. As a consequence of hysteresis, the order in which measurements are taken can be crucial. In situations where hysteresis is an issue, the cutting, slicing, and rotating manipulations that are performed on some maps may not be performed arbitrarily without risking significant loss of accuracy.

² In some situations, response port selection may be viewed as an abscissa, with power being the ordinate. Thus, an appropriate map manipulation can flip the port selection abscissa over and give us input and output power ordinates.
There is a distinction between memoryless devices and devices that can exhibit hysteresis. When faced with a DUT that has memory, there is no alternative but to evaluate what measurements are being taken, at what speed, over what duration, and what effect on the DUT will be retained and affect future measurements. Only once these considerations have been evaluated can appropriate constraints be applied on map manipulation so as to avoid any loss of accuracy.
Stimulus and Response
Stimulus and response are interactions with a device under test (DUT). Don't make the mistake of thinking an abscissa is a stimulus, or a response is an ordinate. An abscissa is an independent variable that you set. It establishes a context for a measurement. The ordinate is that measurement. Yes, an abscissa is ordinarily related to setting a DUT stimulus because the stimulus plays a major role in setting context. Similarly, a DUT response is ordinarily related to an ordinate, because the fundamental idea of test and measurement is to measure the DUT response to some stimulus. These relationships are typically true, but not always. In fact, it's quite common that an ordinate is associated with measuring the applied stimulus so as to verify that the correct stimulus was applied during the measurement. Later, in post-processing, the stimulus ordinate may be isometrologically transformed into an abscissa, but it starts life very much as an ordinate. Another example would be in a system that analyzes modulated or coded responses from a DUT. An abscissa in such a system might be receiver frequency or subcarrier number or any of a number of possible response attributes that independently set the context for the dependent ordinate. For this reason, there must be a mechanism to explicitly associate the axes in a map to a stimulus or response.
Inverse Maps
A surprisingly helpful concept is the idea of an inverse map. Consider a two-port device under test, like an amplifier. The measurement system provides an input stimulus, and measures the output response. With this sort of setup, you can measure a map like gain versus input power and frequency.
Now, suppose somebody wanted to know what input power was required to hold the output power of the device constant at some fixed level versus frequency. In essence, they want to measure a stimulus or cause (input power, frequency) that results in a certain response or effect (output power). This sort of case is another example that demonstrates that not all ordinate measurements are of responses. Sometimes we try to find out what stimulus causes a certain specified effect in the response.
Assuming we are in a causal thermodynamic universe where effects follow causes in time, it won't work to choose the effect first, and then see what cause "happens." All you can do is to try some causes and record their effects. Afterward, on paper, you can invert cause and effect to see what causes would be needed given certain desired effects. This reversal of cause and effect is an inverse map. When the test engineer specifies a map that has reversed cause and effect, the calibration strategy must be to invert the map so that it can actually be measured forward in time.
One way to achieve map inversion is to do a measurement to acquire a causal, noninverted or natural map, and then in post-processing invert the map mathematically, resampling and interpolating as needed. But is there any way to reverse cause and effect without inverting a map after the fact? Is there some way to apply an effect, and measure a cause? Surprisingly, in some cases, the answer is yes!
Let's return to the case of the amplifier. I want to know what input power, versus frequency, will keep the output of the amplifier at some constant output power. Suppose I constructed a feedback loop in my measurement algorithm. This loop would implement a goal-seeking algorithm that would adjust the input power so as to keep the output constant. With this loop in place, I could vary the frequency and measure the input cause that produces the constant, specified effect. Consider, as another example, the square root circuit in Figure 6-6 that works using this same principle of inversion performed by a feedback loop.

Figure 6-6. Square rooter (a squaring element inside a feedback loop yields the square root of the input)

"Cheater!" you shout. I didn't reverse the flow of time. Within the feedback loop, minuscule accidental errors in stimulus cause deviations in the output that result in corrections of the stimulus. So the map really still does get inverted, in a sense. Well, maybe you're right. But it is certainly true that feedback loops and adaptive systems seem to invert cause and effect, and they thereby represent a powerful tool for inverting measurement maps coincident with the moment of measurement itself.
Accuracy Advantages of Inverse Maps
As a general rule, a measurement designer should always consider the possibility of inverting cause and effect in her measurement, measuring something backward, and afterward inverting the map. This possibility should be compared with doing the measurement the forward way. In many cases, the inverted method leads to more accuracy, simpler hardware, or both.
A classic example of the advantage of inversion is when calibrating a variable attenuator. A variable attenuator is a device that takes an input signal and reduces its amplitude by some selectable amount. These attenuators are calibrated by setting them to all their states and measuring the reduced output level relative to the input. The forward way to accomplish this measurement would be to stimulate the DUT with a fixed input, vary the setting of the DUT as an abscissa, and acquire the output level as an ordinate. Unfortunately, with attenuators that work over a wide range, this approach may be quite inaccurate and slow, especially when the attenuation setting is high, resulting in a very small response. When he realizes that his response system sensitivity is inadequate to the task, a test designer who hasn't read this book might decide that he needs more sensitive hardware in the response system.
The right thing to do is to consider an inverse map. Fix the response out of the DUT and allow the stimulus to be the ordinate. In doing that, the
response system can work at a level that is comfortable and accurate, and the requirement for wide dynamic range is shifted to the stimulus system, which may already have that capability, since it is far easier to generate a wide range of levels accurately than it is to measure them. Sadly, if there is no software support in the synthetic measurement system for inverse maps, the test designer may see the hardware solution as easier to implement. This illustrates a common mistake in the development of synthetic instrumentation that I discuss elsewhere in the book: exclusively using hardware to fix problems.
Problems with Inverse Maps
Inverse maps are not without drawbacks. Two that I will mention here are the problem of inverse function branches, and the problem of adaptive stability.
After inversion, a map may not still be a map. That is to say, the result may not be a function. There may be more than one value possible for an ordinate at a given abscissa point. The alternative values are called branches of the inverse map.
Figure 6-7. Inverse map with multiple branches
The right branch to pick for the inverse may not be immediately clear. One option is to split the map and carry each branch forward separately in a collection of maps. In other cases, constraints and defaults provide a way to pick the right branch.
Another difficulty sometimes faced by inverse maps is the problem of stability. When map inversion is performed by real-time adaptive algorithms, the feedback loop within that algorithm may oscillate. If the system relies on such an adaptive loop to perform map inversion, instability in that loop would be disastrous to the accuracy of the data. Fortunately, there is a large body of theory regarding the stability of feedback loops, and there is much practical advice about how to go about fixing unstable loops.
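As a hedged sketch of the goal-seeking idea described above, the loop below nudges a simulated amplifier's input until its output hits the target, so the stimulus is effectively measured as an ordinate. The DUT model and loop gain are my own illustrative choices; as just noted, a real adaptive loop needs a stability analysis.

```python
def dut_output_dBm(p_in_dBm):
    """Stand-in DUT model (illustrative only): gain that compresses at high drive."""
    return p_in_dBm + 20.0 - 0.02 * max(0.0, p_in_dBm + 10.0) ** 2

def input_for_constant_output(target_out_dBm, p_in=-30.0, step=0.5, iters=200):
    """Goal-seeking loop: adjust the stimulus until the response meets the target."""
    for _ in range(iters):
        error = target_out_dBm - dut_output_dBm(p_in)
        if abs(error) < 1e-6:
            break
        p_in += step * error      # too large a step and the loop can oscillate
    return p_in

# The "measured" stimulus ordinate: input power that yields a 0 dBm output.
print(round(input_for_constant_output(0.0), 3))   # about -20.0
```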
Calibration Strategy and Map Manipulations
Why did I bother to create the stimulus response measurement map model of measurements? What good is it, really? Maybe it seems like an interesting way to describe measurements, but is there any big payoff? These are good questions. It may not be obvious why any of the math stuff is worth the trouble. Abscissas are just fancy loops. Ordinates are measurement subroutines. Yes, I see how this ties data together with the measurement in a neat package. That's nice. But is there a bigger payoff?
While I have already pointed out the many small payoffs that derive from the use of the SRMM approach in general, and the benefits of XML schema for describing measurements in particular, the jackpot payoff is with the concept of calibration strategy. Without the concept of calibration strategy, and related concepts, like compound and atomic ordinates and abscissas, the formalizing of measurement descriptions under the SRMM stance has no more benefit than other more generic object-oriented approaches—other approaches that, although they have nothing to do with test and measurement, may be more familiar to software engineers.
Calibration strategy is a method for specifying how maps should be isometrologically transformed both before and after physical interactions with the DUT. Calibration strategy rewrites the measurements into a new form, changing them from what the user originally specified for the test into what the synthetic measurement system actually can do. After the raw measurement is made, calibration strategy guides the post-processing manipulations that occur, transforming the map back to what the user wanted in the first place.
I have already discussed map manipulations in some detail in the section titled "Map Manipulations." I gave an example of a map manipulation that would be applied in the case of a gain measurement. The gain ordinate versus some abscissa is rewritten into the power-in and power-out ordinates versus the same abscissa. That map data is acquired. Then the
result is transformed back, calculating the ratio to collapse the power-in and power-out axes, yielding the desired gain map.
Surely, the same thing could be accomplished by simply writing a new ordinate called gain. Such an ordinate would operate the measurement hardware explicitly so as to make power-in and power-out measurements, it would divide the result, and it would return gain. What's wrong with that? Nothing is really wrong, in the sense that this could certainly be made to work. In fact, I've seen many systems where test engineers do exactly this: write a new test script every time they want to measure something new. Fine, but there are a bunch of methodological problems here. Now you have a growing list of idiosyncratic ordinates to maintain. Improvements made to power-in and power-out may or may not be reflected in an improved gain ordinate. Worse yet, it's not exactly clear what depends on what. Maybe the gain ordinate is written first and later on power-in and power-out versions are abstracted. This creates a labyrinth of dependencies. Axes, unit conversions, calibration, calculation, and post-processing are all embedded in "ordinates" which are no longer worthy of the name. Ultimately, a collection of hand-coded, ad hoc measurement scripts accumulates that doesn't form any sort of coherent, reusable system. Don't get me wrong. I'm as much in favor of hand coding, ad hoc, hacked up, extreme programming as the next guy, but I don't want to create a big system entirely this haphazard way unless I am planning to quit just after CDR. The concept of calibration strategy that I have been describing in this book leads to a better place where I can keep (and maybe enjoy) my job.
Canonical Maps Instead of "just coding a new ordinate," there is a fundamentally better way to deal with what I will call a compound ordinate like gain.

Fundamental Definitions

Atomic Ordinate: An ordinate based on a fundamental response measurement made by the system at a single abscissa point.
Atomic Abscissa: An abscissa based on a fundamental stimulus or mode setting of the system, independent of other modes.

Compound Ordinate: An ordinate that is computed from data acquired with one or more measurements made using other compound or atomic ordinates.

Compound Abscissa: An abscissa that implies the domain of one or more other compound or atomic abscissas.

Stimulus Ordinate: An ordinate obtained by map inversion or adaptive processing that determines the value of a stimulus as a dependent variable. Typically this is not atomic unless the hardware has special adaptive properties, or can travel backward in time.

Loopback Ordinate: An ordinate that is a direct measurement of a stimulus. Typically atomic. Do not confuse with a stimulus ordinate. Besides the fact that it doesn't involve the DUT, a loopback ordinate is really no different than any other direct measurement by the system.

Map Canonical Form: A map that has been isometrologically transformed so as to contain nothing but atomic ordinates and abscissas. Maps in canonical form can be directly measured by the system.

The main goal of calibration strategy is to take a map specified by the user and to transform it into canonical form so that it may be measured by the system. The resulting map data is post-processed based on the map manipulations required for the canonicalization, resulting in data that is reported to the user. Thus, the user sees a system that measures what she asked it to measure, although internally the measurements were remapped to what the machine could actually do.
You might note that compounds may be composed of "one or more" atomics. Why would anybody ever want just one atomic in the compound? The answer to that is the case of unit manipulations. Often a user will specify a measurement to be made in a certain unit: feet, volts, dBm, and so forth. The system itself measures in only a limited set of units. A simple map transformation in the calibration strategy can take a map specified in dBm and turn it into one specified in volts. A calibration strategy schema describes, constrains, and guides how maps are manipulated to get from specified measurements, to machine-level measurements, and back to specified results. With relationships and associated transformations encapsulated into the calibration strategy, trusted ordinates and abscissas are relied upon to do the work. Once calibration strategy reaches the canonical map, the system can optimize that map based on user constraints so as to make the fastest, most accurate measurement possible. In using the word "schema," I suggest that calibration strategy can be expressed as an XML schema. Indeed this is the case. The tree-structured nature of XML naturally serves to describe a tree of interrelationships leading from a user-specified map to a decomposition into a canonical map that guides the physical measurements to acquire sets of raw data. The schema then guides the processing, combining the raw data sets through map manipulations that lead back to the user-required data.
Figure 6-8. Calibration strategy trees. (The example tree decomposes a Gain map into Power Out and Power In ordinates, each passing through Block Average and Cal Data nodes.)
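To make the tree in Figure 6-8 concrete, here is a minimal sketch, in Python purely for illustration (the class names and the trivial stand-in measurement functions are my own, not part of any real calibration strategy schema), of a compound gain ordinate decomposed into atomic power ordinates and recombined as a post-processing map manipulation:

    # A minimal sketch of a calibration-strategy tree for a compound "gain"
    # ordinate. All names here are hypothetical illustrations.
    class AtomicOrdinate:
        def __init__(self, name, measure_fn):
            self.name = name
            self.measure = measure_fn          # stands in for a direct hardware measurement

        def acquire(self, abscissa_points):
            return [self.measure(x) for x in abscissa_points]

    class CompoundOrdinate:
        def __init__(self, name, children, combine_fn):
            self.name = name
            self.children = children           # atomic (or compound) ordinates
            self.combine = combine_fn          # post-processing map manipulation

        def acquire(self, abscissa_points):
            # Canonicalization: measure the children, then combine the raw maps.
            raw = [child.acquire(abscissa_points) for child in self.children]
            return [self.combine(*vals) for vals in zip(*raw)]

    # Hypothetical atomic measurements standing in for real hardware calls.
    power_in  = AtomicOrdinate("power_in",  lambda freq: 1.0e-3)   # watts
    power_out = AtomicOrdinate("power_out", lambda freq: 2.0e-3)   # watts

    # The user asks for gain; the strategy rewrites it as power_out / power_in.
    gain = CompoundOrdinate("gain", [power_out, power_in], lambda po, pi: po / pi)
    print(gain.acquire([1e9, 2e9, 3e9]))       # -> [2.0, 2.0, 2.0]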
Sufficiency of the Stimulus Response Measurement Map Stance But is a tree structure expressive enough for calibration strategy? Test engineers are clever and want to be able to combine elementary measurement processes with complete and unrestricted freedom.
Any algebraic expression can be expressed as a tree (for example: adding, subtracting, multiplying, dividing maps, or combinations thereof). So I think I'm OK with any compound ordinate or abscissa that is related to atomics through an algebraic calculation. This covers relative measurements, differentials, and unit conversions. It even covers complex calibrations like S-parameter 12-term corrections that are needed for RF network analyzer synthetic instruments. Inversion would violate the tree structure, but I have made a special case of inversion. With machinery to compute inverse maps, I am free to specify inverse maps with no problem. Inversions for the purpose of measuring stimulus ordinates are the most common reason for nonalgebraic map transformations. What's left? Orthogonalization? I can treat this like inversion. I can ask the system for orthogonal abscissas even though the system must necessarily first measure them as coupled, then transform. Anything else? It is certainly the case that there are other calibration processes that are iterative or fundamentally procedural and require actual Turing-strength code to be written (for example, sorting and searching might be one class of calibration processes that don't fit easily). These may be cumbersome or near impossible to cast as tree structures. Not that it can't be done, but it wouldn't be pretty. On the other hand, if calibration strategy handles algebraics, inversion, orthogonalization, and possibly recursion as standard manipulations, with some other contextual Turing-machine-strength semantics introduced in a limited manner, it should be enough to provide 99.999% of the expressiveness needed to reach any theoretically possible measurement algorithm without excessively gumming up the syntax of the description for everyday, real-world things.
Processing a Measurement How does the SRMM description of a measurement get translated into an actual measurement in a synthetic instrument? This question is a variation on the oft-heard refrain: “But how do you really do a measurement?” What is the measurement algorithm? Given the map model of measurements, all possible measurement algorithms can be described generically by one specific, unchanging, high-
level algorithm. In my effort to object-orient everything, I recast the measurement algorithm, something that might be seen as inherently procedural, as a collection of algorithm components that can be dealt with independently and reused. Reuse is urgent because nothing is more expensive to produce, per pound, than software. Anything ensuring a software job is done just once is immensely valuable. What I am saying, therefore, is that if you want to do a stimulus response measurement map measurement (or you can recast your time-honored measurement in a SRMM framework), I can give you a way to automatically generate software to do that measurement using a generic template. Establishing an algorithm template therefore leads us toward object-oriented (OO) techniques. You no longer have to specify the parts of the custom measurement problem that are the same as a standard measurement; all you need to specify are the differences. In the parlance of OO design, establish a base or parent class that describes what all measurement algorithms have basically in common. From that base class create child classes for specific variations. Thus, the variations are created without risk of losing the tested, reliable functionality of the parent. The Basic Algorithm Acquiring stimulus response measurement map measurements is, fundamentally, a process of rastering through a set of abscissas, controlling hardware, acquiring data, and filling in ordinates, forming a data map. This leads directly to an obvious algorithm to accomplish this task. This basic algorithm template for any particular SRMM measurement is as shown in Figure 6-9. This algorithm can be coded in various ways in a measurement system. Some portions of the algorithm may be executed directly in hardware (for example, a state machine or table driving measurement hardware through the abscissas) and other portions can optionally be performed outside the system (for example, post processing in a remote host computer). The following sections trace through each of the major steps in this generic algorithm.
Figure 6-9. Basic SRMM measurement algorithm. (The flowchart blocks are: Initialization, with Map Validation, Cal Strategy Canonicalization, and Map Optimize; Abscissa Setup, with Hardware Allocation, Sequence Definition, and Table Initialization; Hardware Operation, with Abscissa Sequencing, Ordinate Measurement, and Data Structure Accumulation; Hardware Release; Post Processing, with Axis Flattening, Unit Conversions, Map Inversions, and Map Rotations; plus a Supervisor and an Exception Handler alongside the main flow.)
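As a rough sketch of how this template might be cast in object-oriented code (Python, chosen only for illustration; the method names simply echo the blocks of Figure 6-9, and the bodies are placeholders rather than a real implementation), a parent class can fix the skeleton and let child classes override only the steps that differ:

    # A sketch of the generic SRMM algorithm template as a base class.
    class SRMMMeasurement:
        def run(self, user_map):
            self.validate(user_map)                          # throw if the map is invalid
            canonical, undo = self.canonicalize(user_map)    # calibration strategy
            plan = self.optimize(canonical)                  # raster ordering, constraints
            hw = self.allocate_hardware(plan)
            try:
                raw = self.sequence_and_measure(hw, plan)    # abscissa sequencing + ordinates
            finally:
                self.release_hardware(hw)                    # always return to a safe state
            return self.post_process(raw, undo)              # flatten axes, convert units

        # Child classes override only the steps that differ from the standard.
        def validate(self, m): ...
        def canonicalize(self, m): return m, []
        def optimize(self, m): return m
        def allocate_hardware(self, plan): return None
        def sequence_and_measure(self, hw, plan): return {}
        def release_hardware(self, hw): ...
        def post_process(self, raw, undo): return raw

A child class for a particular measurement would override, say, only canonicalize() and post_process(), inheriting the tested supervisor logic of the parent unchanged.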
Initialization In the initial phase of the measurement algorithm, the measurement system prepares for programming the hardware to do the measurement. These preparatory steps include, primarily, the following:
• Map Validation
• Calibration Strategy
• Canonicalization
• Map Optimization
The process starts with a map provided by the user. First, the system checks to be sure it has been given a valid map. To determine this, the map is validated against the DTD or schema. If the map proves to be valid, the system moves on to calibration strategy; otherwise an exception is thrown and the algorithm exits. Calibration strategy examines the map (now known valid) and figures out how it can be transformed into a canonical map. The way to do this may or may not be unique. It also may be impossible. Thus, various exceptions are possible at this stage, but if the map is canonicalizable, and the system can figure out the best way to accomplish the canonicalization, the next step is to perform those map transformations. Also, the system must remember what transformations were applied, as these must be undone during post-processing. Canonicalization will depend not only on the actual hardware available, but on soft constraints on ports and modes. Port selection involves specifying which DUT interface is active and which stimulus and response ports on that interface are to be used. The port designation is a parameter to the overall test specification provided by the TPS and must be made part of the map to permit complete canonicalization. Constraints are specifications, also provided by the user or controlling software, that set bounds on the states the measurement may explore. This constrains the stimuli applied to the DUT or designates acceptable responses from the DUT. When these bounds are crossed, exceptions are thrown from within the algorithm. Constraint limits may be of soft or hard severity, with the severity attribute possibly changing in different parts of the algorithm. Soft limits will generate exceptions that will be
caught and handled, with the overall algorithm continuing after appropriate action is taken. During canonicalization, a possible response to a soft limit might be to choose a different strategy. Hard limits throw an exception that is caught by the algorithm supervisor, causing the algorithm to terminate entirely. Limits are often soft during strategy, but hard during execution. With the measurement map reduced to canonical form, the algorithm can now optimize it prior to loading it in hardware. Optimization can be based on different definitions of "best." When instructed to seek the best speed, optimization is a process of sequence reordering, placing faster ordinates and abscissas into the innermost rastering loops. Certain abscissas may have large overhead times associated with switching (for example, mechanical 20 ms switches as compared to solid-state 20 ns switches). The abscissas should be reordered to put the slow abscissas on the outermost loops, with the fast abscissas within—other user-specified constraints notwithstanding. When seeking the best accuracy, the system may order measurements so as to minimize errors caused by hysteresis, repeatability, and drift. Certain ordinates may be incompatible in the sense that measuring them both for a certain abscissa point may be less accurate than if they are measured independently over the domain. Thus, in the name of accuracy (but sacrificing speed), the system may actually run through the same abscissa range twice. Abscissa Setup The main functions of abscissa setup are:
• Hardware Allocation
• Sequence Definition
• Table Initialization
So far, the measurement algorithm has been working with abstract measurement maps, but now the rubber meets the road, so to speak. It needs to get the map executed on hardware. The first step for making this happen is to allocate the necessary hardware. Presumably, the map
has only atomic ports, modes, abscissas, and ordinates. That means the algorithm should be able to find hardware that can handle those atomics. In the case of a single-tasking, one-measurement-at-a-time (OMAAT) system, this should never be a problem, assuming the calibration strategy algorithm is correct. However, in a multitasking system, there may be other measurements in the scheduler that have prior dibs on some of the hardware. Thus, the hardware allocation algorithm may be simple in the OMAAT case, but in a multitasking case it may need to arbitrate contention for resources between simultaneous measurements. The list of abscissas, ordinates, ports, and modes required in a given measurement is specified in the map. This list is ordered and optimized to specify the slowest varying to quickest varying as they are sequenced in raster order. Table initialization is the process by which the now canonical abscissa sequence tables are calculated from (start, increment, number) specifications or are loaded from explicit lists. In the case of hardware state sequencers, table initialization also includes loading the hardware table appropriately. Table initialization also involves creating and preparing the empty data structures for storage of the ordinate measurements. Abscissa Sequencing Abscissa sequencing occurs around the process of ordinate measurement. The sequencing occurs in raster order as defined in the optimization that occurs after calibration strategy gives us a full list of atomic abscissas. Appropriate data structure indexes are maintained for the purpose of saving data in the proper spot in an array or table. Abscissa sequencing can be implemented as a raveled list of states in a big state table, or it can be calculated on the fly by an algorithm. This is a basic implementation trade-off. Hybrid approaches can be designed; for example, the state table can have rudimentary branching or conditional execution. Asynchronous exceptions can occur that might throw us out of the sequence. If these are soft exceptions, a mechanism must be provided for interrupting the sequence temporarily and returning to the same spot to continue.
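Here is a minimal sketch of raster-order abscissa sequencing, in Python for illustration only; the abscissa list, the overhead figures (borrowed from the mechanical versus solid-state switching example earlier), and the commented-out hardware calls are hypothetical:

    import itertools

    # Put slow abscissas on the outer loops and fast abscissas innermost,
    # then raster through every combination in that order.
    abscissas = [
        {"name": "path",  "values": ["A", "B"],         "overhead_s": 20e-3},  # mechanical switch
        {"name": "freq",  "values": [1e9, 2e9, 3e9],    "overhead_s": 1e-6},
        {"name": "power", "values": [-10.0, 0.0, 10.0], "overhead_s": 20e-9},  # solid-state switch
    ]
    ordered = sorted(abscissas, key=lambda a: a["overhead_s"], reverse=True)

    results = {}
    for point in itertools.product(*(a["values"] for a in ordered)):
        settings = dict(zip((a["name"] for a in ordered), point))
        # set_hardware(settings); value = measure_ordinate()   # hardware calls omitted
        results[point] = None      # ordinate data would be stored at this raster index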
Ordinate Measurement Ordinate measurement occurs within the process of abscissa sequencing. At this stage, all ordinates are atomic, so the assumption is that the system measures them directly. Measurements are performed for each ordinate specified in the measurement map. Ordering is also as specified in the map, as per the user’s constraints and optimization by the system. Data structures are accumulated with each ordinate measurement. Post Processing The first post processing task is to release allocated hardware. This should return the system to a safe and sane state, removing all stimuli from the DUT and securing the response system. After this, post processing can perform various map transform and axis flattening functions, reversing the map canonicalization transformations (rotations or inversions). In general, any axis added is now flattened. If additional calibration data structure has been provided, post processing will also combine this with measured data. Units are converted to the required final units.
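As a small illustration of these post-processing steps, the sketch below (Python, my own example; it assumes the usual 1 mW reference for dBm) shows a unit conversion and the collapse of the power-out and power-in axes back into the gain the user originally asked for:

    import math

    # Two typical post-processing manipulations: a unit conversion from watts
    # to dBm, and flattening two ordinate axes into one by taking their ratio.
    def watts_to_dbm(p_watts):
        return 10.0 * math.log10(p_watts / 1e-3)

    def flatten_gain(power_out_w, power_in_w):
        return [10.0 * math.log10(po / pi) for po, pi in zip(power_out_w, power_in_w)]

    print(watts_to_dbm(1e-3))                         # 0.0 dBm
    print(flatten_gain([2e-3, 4e-3], [1e-3, 1e-3]))   # [~3.01 dB, ~6.02 dB]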
CHAPTER
7
Signals The design of synthetic instrumentation is a signal processing game. Mostly it is digital signal processing (DSP), but analog signal processing (ASP) is also intimately involved. Therefore, it should be no surprise that at some point I need to talk about signals: signals being synthesized and signals being analyzed by the synthetic instrument. But before I begin to talk specifically about signals, I need to warn you that signals are only one viewpoint or stance that I can use to describe the workings of a synthetic instrument.1 When you create the design for a synthetic instrument, you have the hardware as a generic context and are given a test to accomplish. You then define the map as a fundamental step in doing that test. Yet even more fundamental than the map are the signals produced and measured by the map. This is what I call the signal stance description of a measurement. The map stance sits above the signal stance viewpoint. The hardware stance is below the signals, and a test stance is above the map. I do speak of "above" and "below" in a hierarchy, but the advantage of the word stance as compared to "level" is that it is less evocative of a hierarchy. Stances don't cleanly sort into levels. The signal stance isn't really "lower" than the map stance. It's just different. (See Figure 7-1.) The signal stance viewpoint is better known than the map stance, and some people may even think that signals are the only way to look at things between the test and the hardware. I disagree. I think this area needs to be split into maps and signals.

1 For a more complete discussion of conceptual stances for understanding the world, you might turn to Dennett.[B3]

Figure 7-1. Fuzzy hierarchy of "stances" (from top to bottom: Test, Measurement Map, Signal, Driver, Hardware)

In a sense, signals represent the
machine/assembly code of measurement, and maps represent an applications generator (third or fourth generation or 4GL) language. Signals are one viewpoint that you definitely need to worry about, but quite often, you can profit by thinking with other stances. Continuing the computer language metaphor, some programmers may feel that every program should be written in assembler, but most folks these days know that higher-level languages increase productivity immensely. Conversely, there is a danger in straying too high in the conceptual hierarchy, focusing exclusively on the test, disregarding the details of the measurement. Possibly, this is where the greatest danger lies. Thinking exclusively about the test without breaking it down into measurements leads back to the old way to develop ATE. With that warning in mind, let’s talk about signals in their proper context relative to synthetic instrumentation.
Kinds of Signals Signals are electrical voltages that may either be voltages directly of interest or analog voltages that represent something else. For example, the voltage across a battery is directly of interest as a voltage, whereas, in contrast, the voltage from a thermocouple is an analog of temperature: it's a voltage that represents (is analogous to) something else.
I will restrict the discussion to voltage. Any physical parameter can be used as a signal analog. Thus, it is certainly possible to talk about current or other kinds of units as the fundamental parameter of interest, but in most systems voltage is what people use. I’ll stick with that.
Coding, Decoding, and Measuring the Signal Hierarchy No differently from anything else that mankind manipulates, electrical signals can carry meaning in complex and multihierarchical manners. Very sophisticated advanced mappings, modulations, and layered codings are possible. What starts as a simple analog signal—a voltage on a wire—can be manipulated and structured so as to represent intricate information. A simple example of coding is the digital bit. In some systems, anything higher than 2.5 volts represents the binary digit "1", and anything less than that represents a "0" bit. Thus, a signal may be seen as just a voltage or as a coded bit, but there is no reason to stop there. Bits can be grouped into bytes, words, or other data structures. There really is a hierarchy of possible views of a signal. Moreover, there is more than one hierarchy. For example, analog video is "just" a voltage on a wire, but it also represents layers of meaning: lines, fields, frames, luminance, and chrominance coded in a sophisticated analog format. I can go on and on with examples: cell phone signals, GPS waveforms, JTIDS, 802.11b, Bluetooth. The list is long and diverse. All these diverse hierarchies are rooted in the basic idea of the voltage on a wire. As such, any and all of these possible sophisticated signals are amenable to analysis, synthesis, and thereby measurement by a synthetic instrument using the CCC architecture that generically digitizes that voltage. As a consequence of the typical hierarchical nature of signals, when someone talks about measuring a signal you should consider what relevant aspect or aspects of signal meaning, modulation, or coding fit in the context of the measurement. A simple example of this consideration is a bit error rate (BER) measurement. In a BER test, the stimulus is a voltage on a wire, but that same stimulus may also be viewed as a signal bearing coded digital information. This stimulus is passed through a DUT (possibly a modem, possibly a
communications channel, or possibly a combination of the two). The response is a voltage, but may also be seen as a signal bearing coded digital information. The desired ordinate in a BER test is bit error rate, defined as the number of incorrect information bits in the response divided by the total number of bits. Different abscissas are used, but the most typical is stimulus power expressed as Eb/N0. Clearly a BER measurement depends on the coded meaning of the signal, as the map is expressed in terms of bit error ratios and energy per bit that only make sense to talk about when viewing the signal as something representing specific coded digital information. The distortions and random noise that cause the measurement errors in the coded information may or may not be themselves visible to the measurement system. In any event, characterizing these low-level signal adversities is not the objective of the measurement; the objective is measuring errors in the coded information. Yes, it is true that distortions and random noise will disrupt the voltage, and in turn will disrupt the information bearing capacity of the signal and cause decoding errors. So, in a sense, BER measures something about the voltage on a wire, but, more correctly, at a higher level, it measures something about how accurately information is transmitted through the DUT, independent of the underlying code. In fact, the attributes of the underlying coding are abscissas, and the BER measurement is an ordinate, orthogonal to the abscissas. Therefore, when decoding the signal before measuring some high-level aspect, the measurement is not affected so long as the decoding is ideal (or if not ideal, the actuality of the decoding is parameterized, possibly as one of the abscissas). In fact, one must decode the signal in order to make measurements of the higher-level data since the information sought only exists at the higher level. Similarly, on the stimulus side, levels of meaning are encoded into a sophisticated synthesized hierarchical signal.
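As a toy illustration of the ordinate itself, the following sketch (Python; the random bit-flipping channel is a stand-in for a real DUT, and the flip probability would in practice be swept against the Eb/N0 abscissa) computes a BER by comparing transmitted and decoded bits:

    import random

    # Bit error rate: count of errored bits divided by total bits.
    def ber(tx_bits, rx_bits):
        errors = sum(1 for t, r in zip(tx_bits, rx_bits) if t != r)
        return errors / len(tx_bits)

    random.seed(0)
    tx = [random.randint(0, 1) for _ in range(100_000)]
    flip_probability = 1e-3                       # stand-in for channel impairments
    rx = [b ^ 1 if random.random() < flip_probability else b for b in tx]
    print(ber(tx, rx))                            # roughly 1e-3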
Decoding Method Abscissas When layers are stripped away from a signal in order to measure something about a high level aspect of its coding, you always want to do that ideally. That is to say, each decoding must be perfect in the sense that it doesn’t affect the measurement you want to make at the level you are interested in.
For example, if the DUT is a device that produces a digital data stream, serially coded with RS-232 on a wire, the object of the measurement is to get that data. It doesn’t matter how much noise there is on the RS-232 signal, or what the exact voltage levels are, so long as those disturbances cause no errors in the data. For instance, a digital thermometer measuring temperature may return the temperature coded with RS-232. The ordinate is the temperature data. Any decoding of the signal is merely a means of getting to the ordinate. In contrast, consider a BER test on a communications channel with no demodulator or decoder as part of the DUT. What the measurement system receives as a response is a coded voltage waveform along with junk, such as distortions and random noise that disrupt the voltage. The junk will disrupt the information bearing capacity of the signal to the extent that it would cause decoding errors. The object is to measure those errors quantitatively with a BER ordinate, but the signal isn’t decoded yet. In situations like these, the measurement system needs to decode the response in order to make the measurement that results in an ordinate. The way it chooses to decode the response is itself an abscissa.
Direct Real Analog Baseband Signals As I have explained, ordinates are often concerned with measuring some quantitative aspect of a signal at some level of description. Let’s start with the simplest of these: The sampled value of the signal voltage itself. Say, for example, there is a steady power supply voltage to be measured within the DUT. The signal in this case is a sample of the voltage digitized as a real number with units of volts. Signals that fit well with this sort of direct voltage digitizing scheme are often called baseband signals, which implies meaningful DC content. I don’t think it’s correct to call them “DC” signals since they can, in fact vary, although DC-coupled signals would make sense to someone who knew how to run an oscilloscope. But more significant than the DC content aspect of the baseband signals is the fact that they are a direct, linear analog of a real number that they represent, and as such a synthetic instrument may directly digitize them into a quantized real number that captures their meaning (to some
quantization precision) and represents them with no additional transformations, decoding, or processing.

Figure 7-2. Analog and digital codings. (A Type R thermocouple at 5 µV/°C produces 100 µV as the voltage "analog" coding of 20 °C; a gain-of-1000 amplifier raises this to 100 mV; an 8-bit A/D converter with 256 mV full scale yields the digital code 0x64, which now represents 20 °C.)
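Working the numbers of Figure 7-2 backward, from digital code to temperature, gives a feel for how the layers of analog and digital coding stack up (the 5 µV/°C scale factor is the figure's idealization; real thermocouples are not this linear):

    # Decode the Figure 7-2 chain: digital code -> A/D input -> sensor voltage -> temperature.
    code            = 0x64                  # 100 counts from the 8-bit A/D
    full_scale_v    = 0.256                 # 256 mV full scale
    lsb_v           = full_scale_v / 256    # 1 mV per count
    amplifier_gain  = 1000
    seebeck_v_per_c = 5e-6                  # 5 uV per deg C (idealized Type R scale factor)

    adc_input_v    = code * lsb_v                      # 0.100 V at the A/D
    thermocouple_v = adc_input_v / amplifier_gain      # 100 uV at the sensor
    temperature_c  = thermocouple_v / seebeck_v_per_c  # 20 deg C
    print(temperature_c)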
This may seem an obvious point, but when I talk about other sorts of hierarchical signals, things may not seem so clear. Therefore, I will be super specific and call these signals direct real analog baseband signals. This name gives us the idea that this is a direct, linear analog representation of a real number by an analog voltage with meaningful DC content. These signals are the bottom of the signal coding hierarchy. Despite the unsophisticated direct analog coding, direct real analog baseband signals are routinely used to represent all sorts of things. They may be sensor outputs, like temperature or pressure, or they may be simplified or linear mapped aspects of other sorts of signals, like a voltage that represents power or current. They also might be a coded signal, intentionally treated as simple analog for measurement purposes.
Digital Coded Baseband Digital coded baseband signals are a subset of analog signals that use ranges of voltages to represent discrete numbers. Routinely, a binary digit 1 or 0 is represented by two voltage ranges. In rare cases, more than two ranges are used, resulting in a wider range of integers that can be represented. This is similar to the way that more general analog signals use voltage in a continuous mapping to represent a precise real number. Digital signals, in contrast, only attempt to represent a coarse integer, the coarsest and most common case being just a zero or a one. The contrast affects the way the signals are approached with synthetic instrumentation. With analog signals it's normally important to measure them to the best accuracy possible so as to get the most accurate and complete knowledge of the precise real number they represent. With digital signals, however, only the coarse integer they represent carries meaning. This limits the accuracy needed in the measurement. Necessarily, every digital coded signal is also an analog signal, or may be viewed as an analog signal by someone interested in making measurements. Shifting the point of view back to the larger, parent class of analog signals, a system might precisely measure the voltage of a digital coded signal to an accuracy beyond what it needs to determine the coded integer. The ability to shift the viewpoint back through parent classes of signals represents a general principle of synthetic instrumentation.
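A minimal sketch of this shift in viewpoint, using the 2.5 V threshold mentioned earlier (the sample values are invented for illustration):

    # The same samples seen two ways: as precise voltages (analog stance) and
    # as coarse bits (digital coded stance), using a 2.5 V decision threshold.
    samples_v = [0.12, 3.31, 4.97, 2.51, 0.48, 3.02]

    bits = [1 if v > 2.5 else 0 for v in samples_v]
    print(bits)            # [0, 1, 1, 1, 0, 1]
    # Once the signal is treated as digital coded baseband, the exact voltages
    # (3.31 V versus 4.97 V) carry no extra meaning; only the integer matters.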
Analog Coded Baseband Direct real analog baseband signals, as I have explained, represent a real number ordinate. But there is no limit to the creativity and cleverness of man and therefore, analog baseband signals have been used in myriad ways to represent much more sophisticated information through the use of coding. A good example of a sophisticated analog coding is NTSC analog encoded video. NTSC video can be thought of as baseband analog, but it doesn’t represent a single real number, but rather, it represents a two-dimensional image composed of three-dimensional color space pixels. This representation evolves continuously through time, but represents discrete samples of the image.
Figure 7-3. One scan line of NTSC analog video. (Each 63.5 µs line contains the sync pulse, front porch, color burst on the back porch, and the image scan line, with amplitude running between the black and white levels.)
Signals of this sort present a unique challenge as well as a valuable opportunity to synthetic instruments. The challenge is to use generic hardware (for example, a simple CCC architecture) and appropriate software to capture and characterize a very sophisticated and specific analog coded signal. The opportunity is to demonstrate the cost benefit of the synthetic approach by eliminating the need for a special-purpose instrument to measure the specific coding.
Bandwidth I have discussed several different types of signals, all rooted in analog voltages. Can any signal be analyzed or synthesized with a CCC architecture synthetic instrument? The answer to this question comes down to the idea of bandwidth and information capacity. Nyquist, Shannon, and others have established theories that show how electrical signals have limited information bearing content. As a result of this limit, it is always possible to get as close as you please to extracting every last drop of information from a signal. Digital signal processing[B10] and communications theory texts[B1] go into these ideas in detail so I don't need to belabor them here. The central point is this: these theories state that, given sufficient bandwidth and precision in a codec, it's always possible to synthesize or analyze any given voltage signal, extracting all the information, regardless of how sophisticated the coding may be.
The sampling theorem from communications theory states that it is always possible to completely recreate a continuous analog waveform from exact, discrete-time samples of that waveform, so long as the frequency of those time samples is at least twice as high as the highest frequency in the waveform.
Figure 7-4. The Sampling theorem. (The panels show a signal spectrum band-limited to B, the sampler spectrum with components at multiples of the 2B sampling rate, the spectrum after sampling, an ideal filter passband, and the reconstructed waveform spectrum.)
Thus, if some signal is band-limited and has no energy above some frequency called its bandwidth (represented with a capital B), then time samples of that signal taken at a frequency of twice B fully characterize the signal. Another way to say this is that if the codec in a synthetic instrument can digitize voltages fast enough, it can analyze or synthesize anything. Waveforms with frequency content up to B must be sampled at least at a 2B rate. The 2B sampling rate is often called the Nyquist rate. It should be noted that all practical signals have a finite bandwidth. You can always find some maximum frequency, B, above which the signal has negligible power with respect to any fixed fidelity criterion. Therefore, in a sense, the sampling theorem is a "proof" of the validity of synthetic instrumentation. It states a clear and easy-to-apply criterion for deciding what sort of CCC system you need to completely handle any given real-world voltage signal. This criterion works particularly well with direct baseband and analog coded signals, but it applies to all signals in the hierarchy of possible coding, since all signals are, fundamentally, voltages on a wire with some nominal limit on their bandwidth. But the greatest strength of CCC synthetic instruments that are based on the sampling theorem—their broad applicability to all voltage signals—is at the same time their greatest weakness. Why? Because in treating all signals as voltages to be digitized, a measurement system is forced to process far more information than really is of interest in the signal. For example, with digital coded signals, many sorts of measurements may not be interested in anything more than the low rate stream of 1's and 0's that are represented by the voltage. The output of a modem may be a TTL digital voltage. This signal may have a bandwidth in excess of 100 MHz when viewed as a signal to be Nyquist rate digitized in its fullest representation, but may only produce data in short packets at a low average rate in the kbit/s range. The detailed waveform out of the modem may not be of interest, but the data is. Still, in order to get this data using a Nyquist rate CCC synthetic instrument, the data needs to be distilled from thousands of uninteresting samples of the overall waveform. In general, whenever measurements need to be made of higher, more elaborate levels in the signal coding hierarchy, processing the lower levels
represents overhead. When a synthetic instrument is designed to work at the lowest level, it becomes less efficient the more elaborate the coding is. If the instrument could somehow get at the higher level coding directly, it would be more efficient. An obvious example of improving efficiency by going directly to the higher level of coding is easy to see in digital cases. Omitting or bypassing the codec of a CCC instrument, and routing the digital data through conditioning and then directly into the controller, avoids the need to digitize the detailed waveform. The result is the bits themselves. When the bits are the object of the measurement, this is far more efficient. In summary, Nyquist rate sampling is a powerful idea. It works particularly well with the direct analog and analog coded signals I have discussed. But, as I have just explained, there are cases where it misses the forest for the trees, as it digitizes every "tree" when all you want is the general shape of the forest. Another large class of signals that are not amenable to direct Nyquist digitization is bandpass signals.
Bandpass Signals The idea of a bandpass signal is an intellectual child of the invention of radio. In radio, information initially represented as a direct analog voltage signal with some low bandwidth is translated or modulated onto a carrier wave at some much higher frequency. This higher frequency carries the analog information much in the same way DC carries direct analog information. This simple idea of modulation intertwines with the frequency domain in signal analysis, which is, in-turn, based on the theory of the Fourier series and Fourier transform. Frequency domain analysis is probably one of the most powerful signal processing ideas ever invented. This viewpoint allows us to understand bandpass signals in a rigorous mathematical way. The scope of this book doesn’t allow me to fully describe all the aspects of bandpass signals or frequency domain analysis. The reader may refer to[B1] for a more complete exposition; however, I need to review some basics that are particularly relevant to synthetic instrumentation.
Of specific relevance is the idea of modulation. In the same sort of way that the voltage on a wire can represent or analog some physical parameter, aspects of a carrier wave can be modulated to represent, in a similar analog sense, some physical parameter. There is a notable difference, however. With a carrier wave, it is possible to modulate two separate aspects of the carrier simultaneously and independently: amplitude and phase (or frequency). But other than that difference, the principle is identical.
Figure 7-5. Amplitude and frequency modulation. (The same carrier shown with AM and with FM.)
Modulated signals are often called bandpass signals because they occupy only a narrow band of frequency spectrum and may be passed by a resonant circuit or filter. The carrier, as I have already discussed, is the high frequency wave that is modulated. The modulation on that wave is called the envelope. Modulation is a basic analog signal coding technique. It is the first step up the hierarchical ladder for many sophisticated signals. As such, synthetic instruments are often asked to do measurements of the modulation, and have no particular interest in the carrier wave. When measuring cellular phone signals, radar signals, GPS signals, or most any kind of RF signal, the carrier is not something that generally matters more than as a means for accessing the envelope.
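For concreteness, here is a minimal numpy sketch of amplitude and frequency modulation of a carrier; all of the rates and deviations are arbitrary illustrative choices, not recommendations:

    import numpy as np

    # Amplitude and frequency modulation of a carrier by a 1 kHz message tone.
    fs      = 1_000_000                       # sample rate, Hz
    t       = np.arange(0, 0.01, 1 / fs)      # 10 ms of samples
    fc      = 100_000                         # carrier frequency, Hz
    message = np.sin(2 * np.pi * 1_000 * t)   # modulating signal

    am = (1.0 + 0.5 * message) * np.cos(2 * np.pi * fc * t)       # envelope carries the message
    fm = np.cos(2 * np.pi * fc * t +
                2 * np.pi * 5_000 * np.cumsum(message) / fs)      # phase is the integral of the message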
Therefore, the basic CCC architecture for a synthetic instrument may be seen as inefficient. It is normally the case that the bandwidth of the modulation is less than 10% of the carrier frequency. Quite often, this ratio is even smaller. Modulation bandwidths of 0.1% or smaller (relative to the carrier) aren’t uncommon. If you try to use direct Nyquist digitization to acquire a bandpass signal (or synthesize one), you end up using a sampling rate that is more than twice the carrier frequency, even though the carrier is not typically of interest. This is a waste. It would be better if bandpass signals could be acquired more efficiently. For example, it would be better if they could be digitized at some rate proportional to the modulation bandwidth rather than the carrier frequency. Is such a technique possible? Indeed, it is possible. In fact, there are several techniques for doing exactly this. In a sense, all these techniques amount to the same thing. As response techniques, they strip away the carrier and detect only the modulation; they demodulate the signal. In a more general sense, these techniques represent decoding of an analog coded signal. As with any coded signal, there is the option of stripping away lower levels of coding if the object of the measurement is some aspect of the higher-level representation. Similarly, when applied to stimulus generation, analogous techniques generate the modulation first, and then encode that upon a carrier. Keep in mind that most techniques can be run either way, with exceptions that I will try to note.
Bandpass Sampling Sometimes the sampling theorem is stated in such a way that we are led to think the Nyquist rate is some hard limit, like the speed of light. But this isn’t the case. It’s not that frequencies higher than half the sampling rate aren’t allowed, or aren’t possible. Rather, it’s that frequencies above this point are aliased down to apparently lower frequencies. The way this aliasing happens is quite well understood and predictable. In fact, it is so predictable that it can be used as a method for stripping the carrier from a high frequency bandpass signal, digitizing just the envelope. This method is called bandpass sampling.
Sampling is a form of multiplication of signals, and the reason the bandpass sampling technique works is a result of a mathematical property of multiplying sine wave signals. Any time you multiply or "mix" two sine waves, the result will be a shifting of signal frequencies from the original frequencies to the sum and difference. This mixing process is sometimes called heterodyning.

Figure 7-6. Mixing. (A multiplier, or mixer, forms cos(ω1t) · cos(ω2t) = 1/2[cos((ω1 + ω2)t) + cos((ω1 − ω2)t)].)
When sampling or mixing a signal in practice, the signal is never multiplied by a pure sine wave. But by virtue of the idea of the Fourier series[B5], it is possible to think of sampling as multiplying by the sum of a series of sine waves each at a harmonic of the sampling rate. Figure 7-7 illustrates how this works. Start with a bandpass signal at carrier frequency Fc, with spectrum as shown in (A). The sampling rate is Fs and is illustrated in (B), with harmonics of Fs shown as impulses out at Fs multiples: Fs, 2Fs, 3Fs…. When the bandpass signal is sampled, it is aliased down to an apparent baseband spectrum as illustrated in Figure 7-7. The result is exactly the same bandpass spectrum, and the same envelope, but the carrier frequency is now much, much lower. It’s exactly as if the spectrum has been slid down, or shifted, in frequency. When using the bandpass sampling technique, care must be taken in the choice of sampling rate relative to the carrier. If, for example, you choose a different sampling rate as in (D), the resulting alias at baseband folds over and results in a spectrum (and envelope) that is now irreparably altered as shown in (E).
Figure 7-7. Bandpass sampling example. ((A) A bandpass signal centered on the carrier frequency Fc. (B) Sampling at rate Fs, with harmonics at Fs, 2Fs, 3Fs. (C) The signal aliased down to Fc − 3Fs near baseband with its envelope intact. (D) A poorly chosen sampling rate. (E) The resulting alias folded over and irreparably altered.)
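The arithmetic behind choosing a workable bandpass sampling rate can be sketched in a few lines (Python, my own illustration): the aliased center frequency is the carrier folded into the range 0 to Fs/2, and a candidate rate is acceptable only if the aliased band clears both DC and Fs/2, so that the envelope does not fold onto itself as in panel (E):

    # Where does a carrier Fc land after sampling at Fs, and does the band fold?
    def aliased_center(fc, fs):
        f = fc % fs
        return fs - f if f > fs / 2 else f

    def folds_over(fc, bw, fs):
        f_if = aliased_center(fc, fs)
        return (f_if - bw / 2) < 0 or (f_if + bw / 2) > fs / 2

    fc, bw = 1.0e9, 10e6                 # 1 GHz carrier, 10 MHz wide modulation
    for fs in (40e6, 44e6, 50e6):        # candidate sampling rates
        print(fs, aliased_center(fc, fs),
              "folds" if folds_over(fc, bw, fs) else "ok")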
The folding over has an interesting interpretation from a signal coding viewpoint. You may recall that a bandpass signal encodes two, independent analogs onto the carrier (one as carrier amplitude, one as carrier phase), whereas, a baseband voltage can only encode one analog (the voltage). When the bandpass sampling process shifts the bandpass signal down to baseband, the amplitude information, and phase information get folded together into the voltage waveform. To avoid this folding, you need to keep the signal away from DC, still keeping it on a carrier so that it can still have amplitude and phase as separate things.
When a mixer shifts a signal down to some lower frequency, but still above DC, the new frequency is called an intermediate frequency, or IF. Given the constraint that the bandpass signal cannot “touch” DC for fear of being folded over, or cannot contain content more than half the sampling rate, Figure 7-8 shows graphically that the IF frequency giving the most room is 1/4 the sampling rate.
Figure 7-8. IF at 1/4 the sampling rate (Fs = 4FIF).
Once more, you see the factor of 2 represented by the duo amplitude + phase that are encoded into a bandpass signal. In a sense there are two channels of information multiplexed onto one stream of samples. Furthermore, another (perhaps simpler) way to look at it is that bandpass signals have double-sided bandwidth, so you need twice the sample rate relative to baseband single-sided bandwidth. In practice, there may be difficulty getting a codec to do this bandpass sampling magic unless a very high bandwidth sampler is used. Fortunately, as of the date of this writing, commercial samplers with bandwidths up to 100 GHz are becoming available on the market. These devices permit the single CCC architecture to digitize baseband signals up to half the continuous sampling rate: as of this writing, up to 1-2 GHz for some devices. The same sampler can be used to capture bandpass signals up to the bandwidth of the sampler—100 GHz—using bandpass sampling.
Image Rejection An implicit assumption one makes in the bandpass sampling process applied to measuring a bandpass signal response is that there is only one, narrow band signal in the response being captured. Although this is the case for a large number of real, practical situations, it is not the case universally. For example, if the DUT is a filter and the object is to measure its frequency response, a single sine wave is the stimulus, and the system
measures a single sine wave response. In this case, there is only one, narrow band signal in the response. On the other hand, if the object is to measure the frequency response of an amplifier, although the major component of the response will be a single narrow band signal, there will also be noise and harmonic distortion at other frequencies. In a more extreme case, consider that the DUT is an antenna on an open range and the system is measuring an ordinate that is the response to an incident, open-air stimulus. When measuring the response from an antenna, it will contain the desired response component, but it may contain any number of interfering signals picked up from the aether. If bandpass sampling is used to measure a response that contains more than just one bandpass signal, every signal will fold into the response. The spurious responses that fold on top of the desired responses are called images and they can represent a serious problem in certain measurement scenarios when bandpass sampling is used.
Interference and Images Any radio engineer is familiar with the problem of images and knows the way to solve it is to use something called a preselector filter that rejects any energy at image frequencies, allowing only the desired signal to pass. A more sophisticated, powerful solution also well known by radio engineers is to create a proper superheterodyne down-conversion system in which, through a cascade of preselectors, mixing stages, and IF filters, a desired bandpass signal can be converted to IF without corruption by images and other spurious responses.
Figure 7-9. Preselector signal conditioner. (A preselector filter at FRF feeds a mixer and bandpass filter producing FIF = FLO − FRF; the image response at FRF + 2FIF is rejected by the preselector.)
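A small worked example using the relations shown in Figure 7-9 (the particular frequencies are arbitrary): for a high-side local oscillator, FLO = FRF + FIF, and the image that the preselector must reject sits at FRF + 2FIF:

    # Image frequency for a high-side LO, per the Figure 7-9 relations.
    f_rf = 1.00e9             # desired signal, Hz
    f_if = 70e6               # chosen intermediate frequency, Hz
    f_lo = f_rf + f_if        # high-side local oscillator
    f_image = f_rf + 2 * f_if # also mixes down to f_if unless the preselector rejects it
    print(f_lo, f_image)      # 1.07 GHz LO; image at 1.14 GHz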
In a synthetic instrument, a simple preselector or a full-blown superheterodyne down-converter prior to the codec (or up-converter after the codec in stimulus) can be viewed as just a kind of signal conditioner. Alternatively, it can even be viewed as a form of DUT interface or switch matrix since the preselector selects the response signal of interest from several that are frequency multiplexed at the output of the DUT.
I/Q Sampling An alternative technique to bandpass or IF sampling that can address the factor of two represented by amplitude and phase is the class of so-called I/Q techniques, which use two independent channels running in phase quadrature in order to retain both phase and amplitude information. This approach works even when the sampling mixes the bandpass signal down to baseband. The mathematical underpinning for I/Q techniques is the use of a complex number to represent the envelope of a modulated signal. In polar coordinates, the magnitude of the complex number represents the amplitude of the modulation, and the argument of the complex number represents the phase modulation. In rectangular coordinates, so-called in-phase (I) and quadrature (Q) components express the same information on a different basis. Hence, these techniques are called I/Q detection, I/Q decoding, or I/Q demodulation (or modulation/coding for stimulus). The merit of I/Q techniques relative to IF or bandpass sampling techniques is beyond the scope of this book. However, there is a relevant point with respect to hardware implementations of I/Q detection or modulation in a synthetic measurement system. An I/Q codec at baseband requires two A/D or D/A channels as compared to the single channel found in the CCC synthetic instrument hardware I have described. In that sense, I/Q techniques implemented in hardware would alter the generic architecture I have been discussing. Whether this alteration is merited is another question. Hardware I/Q detection does not make the hardware more specific, but in fact makes it more general, giving it the capability to down-convert modulated signals directly to baseband. In that sense, I/Q detection is in line with the spirit of generic measurement hardware. The downside is that the enhanced
generality is paid for with increased complexity and redundancy. I/Q detection requires a dual-channel codec. The second channel represents a redundancy that is not present in IF techniques. Software I/Q detection is another matter. At some level, the convenience of a complex number representation of a signal is merited for almost any measurement involving bandpass signals. The mathematical processing advantages of this representation are many. For that reason, I would expect DSP in the controller to work with I/Q representations, perhaps exclusively for internal processing.
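A minimal sketch of what software I/Q detection might look like (numpy, purely illustrative; the moving-average low-pass filter is the crudest possible choice, standing in for a proper decimating filter):

    import numpy as np

    # Software I/Q detection: mix with a complex local oscillator at the carrier,
    # low-pass filter, and read amplitude and phase from the complex envelope.
    fs, fc = 1_000_000, 100_000
    t = np.arange(0, 0.01, 1 / fs)
    signal = (1.0 + 0.5 * np.sin(2 * np.pi * 1_000 * t)) * np.cos(2 * np.pi * fc * t)

    lo = np.exp(-2j * np.pi * fc * t)            # complex local oscillator
    mixed = signal * lo                          # envelope near DC plus a 2*fc term
    kernel = np.ones(101) / 101                  # crude moving-average low-pass filter
    envelope = 2 * np.convolve(mixed, kernel, mode="same")

    amplitude = np.abs(envelope)                 # I/Q magnitude: the AM envelope
    phase = np.angle(envelope)                   # I/Q argument: the phase modulation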
Broadband Periodic Signals The bandpass sampling technique described in the last section shows how a bandpass signal at some high carrier frequency can have the carrier stripped off. In this way, a CCC synthetic instrument can measure bandpass signal envelopes without needing to sample at twice the frequency of the carrier. But what about broadband signals, like digital pulses, that are not modulated on a carrier? In modern, high-speed digital circuits, pulses with significant energy at frequencies in excess of 10 GHz are not uncommon. These signals are not bandpass. Their energy is spread across the entire wide bandwidth. Can these broadband signals be digitized without resorting to 20-GHz digitizers? Yes, in certain circumstances, they can. For many years now, oscilloscopes have used time equivalent sampling techniques to capture waveforms with far wider bandwidth than the maximum sampling rate realized in the scope. These techniques require that the signal to be digitized is periodic. That is to say, the signal must repeat with some pulse repetition frequency or PRF. When a signal is periodic, it may be decomposed into a sum of sine waves at discrete frequencies, all multiples of the PRF. This decomposition is the signal's Fourier series. The Fourier series coefficients give the amplitude and phase of each harmonic. As I explained previously, periodic sampling is also related to the Fourier series. Sampling is like multiplying by a sum of harmonics that are multiples of the sampling rate.
Combining these two ideas, by proper choice of the sampling frequency relative to the PRF, a time equivalent sampler system can sample and time expand a broadband periodic signal. The effect is not unlike a vernier scale.
Figure 7-10. Time equivalent sampling spectra. (A pulse of width τ repeating at interval PRI = 1/PRF is sampled at an interval slightly longer than the PRI, so that samples 0 through 11 each land a little further into the pulse, roughly τ/12 per step, and together trace out the waveform.)
Time equivalent sampling preserves the shape of the waveform, but translates the speed to a much lower rate. In that sense, time equivalent sampling is a similar process to bandpass sampling, which preserves the shape of the spectrum, but translates it to a much lower frequency carrier. Another way to look at time equivalent sampling is as a stroboscopic technique, sampling at only one point in a waveform cycle, but slowly moving that sample point in time so as to trace out the whole waveform. When thought of this way, the technique is often called a sample delay walk technique.
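A small sketch of the delay-walk idea (Python, illustrative only; the pulse shape and the choice of walking 1/50 of a cycle per sample are arbitrary):

    import numpy as np

    # Equivalent-time ("delay walk") sampling: sample a repetitive waveform at an
    # interval slightly longer than its repetition period, so each sample lands a
    # little later in the cycle and the record traces out a time-expanded copy.
    prf = 1e6                      # pulse repetition frequency, Hz
    pri = 1 / prf                  # repetition interval, s
    step = pri / 50                # walk 1/50 of a cycle per sample
    sample_times = np.arange(200) * (pri + step)

    def waveform(t):               # any periodic waveform; here a raised-cosine pulse
        phase = (t % pri) / pri
        return np.where(phase < 0.2, 0.5 * (1 - np.cos(2 * np.pi * phase / 0.2)), 0.0)

    reconstructed = waveform(sample_times)   # equivalent-time record of the pulse shape
    equivalent_dt = step                     # each point advances "step" through the cycle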
Table 7-1. Sampling techniques

Signal Type    Method
Baseband       • Direct Digitization
Bandpass       • Bandpass Sampling/Harmonic Down-conversion
               • Superhet Down-conversion
               • I/Q sampling
Periodic       • Time Equivalent Sampling
               • Delay Walk
CHAPTER
8
Calibration and Accuracy Calibration and metrology is an immense topic with numerous aspects. Calibration is particularly vital with respect to synthetic instruments because the SI is the new kid on the block and will be scrutinized carefully by expert and tyro test engineers alike. During this scrutiny, proponents of the new approach want to make sure the evaluations are fair, unbiased, and based on a valid scientific methodology.
Metrology for Marketers and Managers When somebody decides they want to buy a measurement instrument, the only thing they may know for sure right off the bat is that they want to pay as little as possible for it. But if this were the only specification for a measurement instrument, it's doubtful anything useful would be purchased. Sometimes instrument shoppers will specify that the instrument they want to buy should be able to do whatever it does exactly like some old instrument they currently have. But assuredly, if "doing exactly what X does" is the only performance criterion, the system that best performs just like X will be X itself. That's not likely to be what they wanted, or they wouldn't be shopping for a replacement for X. Intelligent people shopping for measurement instruments will think about the measurements the old instruments made and make a list of the measurements they still want. They will present this list of their requirements to various vendors. In this way, they allow suppliers freedom to offer them something other than what they already have. But they don't want to give the vendors too much freedom, so they need to specify the accuracy they need for the measurements. Customarily they get the accu-
racy and measurement capabilities from the specifications of X. If the new system meets these abstract specifications gleaned from the specifications of their old system X, they figure they will get something at least as good as X. Sadly, the concepts of measurement and accuracy are often misunderstood, misstated, obfuscated, lied about, or otherwise disguised to the extent that instruments supposedly meeting them end up useless anyway. For that reason, the most intelligent people will take the time to learn a little about metrology before they go about specifying a measurement system. I feel it’s worthwhile for everyone buying instruments (especially with someone else’s money) to at least know these basics. It’s beyond the scope of this book to discuss metrology and the scientific method in due detail, but I will include some of the basics in the hopes that it will help some readers cross the line into the smartest group. Measurand In the science of metrology, the measurand is the thing you are trying to measure. Some might want to gild the lily in the sense of Plato’s allegory of the cave and talk about “a true value of the measurand” as something separate, ideal, and unattainable relative to the needle deflections we observe with our human senses. However, metrologists think this “true measurand” terminology is redundant at best. I think it should be sufficient to merely talk about the value of the measurand in the context of a particular measurement method. It’s OK to leave out the “true” adjective, ontological subtleties not withstanding. NIST and other metrology authorities agree with this approach. The central point is that a measurand is specified in the context of a measurement method. That’s very important. Saying what you want to measure necessarily involves saying how you are going to measure it. It can be meaningless to talk about some parameter you want to measure without specifying a method.1
1 Not to say that things we have no method for measuring are meaningless, just that it may be meaningless to talk about such things in any quantitative way.
Unfortunately, it’s sometimes difficult to come up with something that people will agree constitutes a valid description of how a measurement is made without including references to measurement specific hardware. Obviously, this will be a problem when specifying synthetic instruments, instruments that are implemented by software using measurement generic, nonmeasurement-specific hardware. We, as synthetic instrument designers, want to define measurand this way: Whatever the map measures, that is the measurand. All the context you should need about how a measurement is made is the precise stimulus response measurement map definition of the map in question. In this book, I give you a way to precisely express that in XML. You don’t need to know anything about the hardware. The answer to the question, “How is this measurand measured?” should be, “It’s measured with this SRMM description specification as expressed in this XML document using the resources of the available hardware.” This answer from a synthetic instrument designer means that in order to decide what it is you are measuring, what you really need to do is exactly specify the maps. You need to decide what the abscissas are, what the ordinates are, how they are sampled, what the calibration strategy is, define any compound ordinates, stipulate the axis ordering, specify what the post processing is, and so on. As you do this, it should become clear that you are really saying “how” the measurement is done, except that the “how” is now broken up and refactored in a way that may seem unfamiliar. It’s object-oriented—not procedural—and that confuses people. Some may object that fundamentally, someplace, you need to say how you are measuring something the old-fashioned procedural way. For example, someplace you will inescapably get down to an ordinate that measures some fundamental quantity, like mass, time, or voltage. This ordinate will be allocated to a specific measurement asset, loaded a certain way, and applied to the DUT in the context of the abscissa. When you zoom in this close, you will be forced out of the abstraction of the measurement map and have to say how you are measuring that fundamental measurand in a procedural way. That may be so, but it doesn’t show that there was anything left out specifying the measurement by defining the measurand as a map at the highest object-oriented level. Some details are implicit rather than ex139
Sometimes it's necessary to zoom in, but you do that only when you really need to.

Accuracy and Precision

Ironically, the word accuracy as used in common technical discussion is often used inaccurately from a metrology perspective. In fact, the misuse is so widespread and pervasive that it has become the norm in informal technical parlance. Since proper usage of "accuracy" is now in the minority, any attempt to set the record straight may be viewed as nit-picking. People will say, "of course you know what I mean," referring to their use of the word "accuracy." But do we know what they mean?

According to CIPM, NIST, and other metrology standard setters, accuracy is the "closeness of the agreement between the result of a measurement and the value of the measurand." This is a qualitative concept. Accuracy has nothing to do with numbers. It's OK to say: "This instrument is very accurate." Or, you can say: "This instrument is more accurate than that instrument." But when someone says: "The accuracy of this instrument is 0.001 units," what they mean is unclear.

The word precision, also a qualitative concept, is defined by ISO 3534-1 as "the closeness of agreement between independent test results obtained under stipulated conditions." This standard sees the concept of precision as encompassing both repeatability and reproducibility (see subsections D.1.1.2 and D.1.1.3) since it defines repeatability as "precision under repeatability conditions," and reproducibility as "precision under reproducibility conditions." Nevertheless, precision is often taken to mean simply repeatability.
Figure 8-1. Precision versus accuracy
Both precision and accuracy are qualitative terms. Therefore, to talk quantitatively about the qualitative term accuracy, or, for that matter, precision, repeatability, reproducibility, variability, and uncertainty, one needs to use statistical theory. There really is no other choice. Only with a rigorous statistical framework supporting you can you make precise quantitative statements like "the standard uncertainty of this measurement is 0.001 units" and have your exact meaning be perfectly clear.

However, if the speaker doesn't know what standard uncertainty is, there may be a fresh communication problem. You can't just plug "standard uncertainty" in where you use "accuracy" and expect everything to be hunky-dory. The two terms are very different. For example, increased standard uncertainty means decreased accuracy, so it should be obvious that the two are not interchangeable. The phrase standard uncertainty has precise quantitative implications. There really is no choice but to learn what the term means if you want to have any chance of using it correctly. NIST has an excellent reference document online (http://physics.nist.gov/ccu/uncertainty) that quite nicely describes the precise, quantitative meaning of standard uncertainty along with related terms. Anyone interested in making quantitative statements about accuracy should study it as soon as possible.
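To give a flavor of what those quantitative statements involve, here is a minimal sketch of a Type A evaluation of standard uncertainty (the experimental standard deviation of the mean of repeated observations), written in Python. The readings and the reporting format are assumed for illustration only; consult the NIST reference for the full treatment.

import statistics

def type_a_standard_uncertainty(readings):
    """Type A evaluation: experimental standard deviation of the mean
    of n independent repeated observations."""
    n = len(readings)
    s = statistics.stdev(readings)   # sample standard deviation
    return s / n ** 0.5              # standard uncertainty of the mean

# Hypothetical repeated voltage readings (volts)
readings = [1.0012, 0.9991, 1.0005, 0.9998, 1.0009, 0.9995]
u = type_a_standard_uncertainty(readings)
mean = statistics.mean(readings)
print(f"result = {mean:.4f} V, standard uncertainty = {u:.4f} V")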
Test versus Measurement

Test and measurement are two words that are often used interchangeably, but they actually are two very different things. Measurement describes the act of acquiring a numeric value that represents quantitatively some physical aspect of a device. Perhaps it's a measurement of length, or temperature, or mass, or voltage output of some device. The defining feature of a measurement is its numeric result with associated physical units.

A test, in contrast, describes the act of making a decision about some physical aspect of a device. A test has a qualitative result. For example, is the device long enough, cool enough, heavy enough, or does it have enough output? The defining feature here is that a test is the act of answering a question, making a decision, or rendering a pass or fail determination.
Figure 8-2. Test versus measurement
The distinction between these two words matters because the methodology for synthetic instrument design that I discuss in this book is primarily related to measurement: how generic hardware can make specific measurements. Outside the measurement, the process is basically the same for synthetic and nonsynthetic systems. It is true, though, that the synthetic instrument, particularly if its structure is strictly object-oriented, will likely relate to, or contribute to, the test in ways that involve the measurement.2 For example, a synthetic instrument may contribute to how measurements are specified within a test, providing units, ranges, and validation. Or it may support how measurement results are stored, analyzed, and presented within a test, offering formats, analysis, and presentation objects. A synthetic instrument could even go so far as to include methods for performing pass/fail determinations. As such, the instrument has now become a tester, with a standalone ability to perform tests based on the measurements it performs.
2. This isn't really a characteristic unique to synthetic instruments. A classical instrument with a proper OO software driver could also contribute to a test in the same way.
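As a rough sketch of the distinction in software terms (an illustration only, not a prescribed design), a measurement object can carry a numeric value with units, while a test object renders a pass/fail decision by comparing a measurement against a limit:

from dataclasses import dataclass

@dataclass
class Measurement:
    """A measurement: a numeric result with associated physical units."""
    value: float
    units: str

@dataclass
class Test:
    """A test: a decision about a measurement, with a qualitative result."""
    limit: float
    units: str

    def run(self, m: Measurement) -> bool:
        assert m.units == self.units, "unit mismatch"
        return m.value <= self.limit   # pass/fail determination

gain_ripple = Measurement(value=0.8, units="dB")
ripple_test = Test(limit=1.0, units="dB")
print("PASS" if ripple_test.run(gain_ripple) else "FAIL")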
Introduction to Calibration

Now that I have explained some of the basic concepts of metrology, I can begin to talk about the different aspects of calibration.

Reference Standards

Since instruments need to produce results with true units, the results need to be acquired by means of a measurement process that is traceable to international metrology standards. Therefore, any synthetic instrument needs standards, possibly several, within its architecture. These standards may be for things like frequency, power, attenuation, voltage, time, delay, temperature, and so on. This list is by no means complete. The best standard to use for a particular measurement depends on how that measurement is done.

One calibration philosophy that has certain advantages when applied to synthetic instruments is to employ modular (plug replaceable) standards that allow all the calibration standards in a "sealed rack" instrument to be quickly removed and replaced. This minimizes downtime as the removed standards can be sent as a unit to a calibration lab while the system remains in service with a previously calibrated standards set. Typical rack-em-stack-em or modular instruments do not use this calibration philosophy. Instead, they calibrate the multiple instruments that comprise the system as individual instruments. The system must be taken out of service to be calibrated. The calibration lab may even be brought to the system in the form of a mobile calibration cart.

Uncertainty Analysis

In order to investigate and predict the uncertainty of a measurement made by a synthetic instrument, the investigator needs to establish the calibration process. Industry practice breaks calibration into three main areas:
Primary (Standards) Calibration
Operational Calibration
Calibration Verification
Primary calibration is the methodology by which international metrology standards are transferred to components in the system. Operational calibration is the methodology of conducting a measurement and applying calibration information to raw data. Calibration verification is a process, separate from self-test or system functional test (SFT), that seeks to determine, within some defined confidence limits, if the system is currently calibrated. Once the overall calibration methodology is established for the instrument, you can then move on to defining other calibration and accuracy related requirements, like measurement uncertainty, bias, calibration and accuracy analysis, drift, and aging.
Stimulus Calibration

Stimulus calibration is the process by which the stimulus to the DUT is applied at the appropriate value for the measurement. For example, a power supply will stimulate the DUT by applying a voltage. It is often crucial that this voltage be exactly correct.

Stimulus calibration is related to inverse maps. The goal of stimulus calibration is to determine how to control the stimulus system so that it generates the required output. In point of fact, the output of the stimulus system (as measured by some response system) is the independent abscissa, and the system needs to determine the dependent control commands (ordinate) that generate that desired output. Since it is reversing cause and effect, this is an inverse map.

There are two common ways to express the intent to set a stimulus to a predetermined value. Either you want the expected value of the stimulus to be at the intended value, with some uncertainty expressed by a known probability density, or, alternatively, you want the stimulus to fall within some intended interval of values, with some confidence probability. An example of the first case would be to state that the supply produces an expected 1 volt, with a Gaussian uncertainty of variance 0.01 volt. An example of the second case would be to specify that the stimulus voltage be 1 volt ± 0.01 with a 90% confidence level.
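Assuming a Gaussian probability density, the two forms of the statement can be converted into one another. The short Python sketch below (an illustration under that assumption; the numeric values are not tied to any particular supply) turns a standard deviation into the half-width of an interval that contains the stimulus with a chosen confidence probability:

from statistics import NormalDist

def confidence_half_width(sigma, confidence=0.90):
    """Half-width of a symmetric interval that a Gaussian-distributed
    stimulus falls inside with the given confidence probability."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    return z * sigma

sigma = 0.01    # assumed standard deviation, volts
hw = confidence_half_width(sigma, 0.90)
print(f"1 V nominal: +/- {hw:.4f} V at 90% confidence")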
Overall Strategy for Stimulus Calibration

The strategy adopted for stimulus calibration depends entirely on the stimuli involved. In general, however, it represents a good example illustrating how to handle inverse maps and the related concept of stimulus ordinates. I recommend seeking a compromise between knowing the stimuli accurately and the more difficult alternative of controlling the stimuli precisely.

Performing stimulus calibration of a system involves establishing a relationship between the state of the stimulus subsystem and the true absolute stimulus output parameter. Often this is done by creating a calibration map with the following algorithm.

1. Adjust some internal parameter in the stimulus system to a set of arbitrary "state" points (often a grid of approximately uniformly spaced values).

2. Using a calibrated response system, measure the stimulus generated for each of these states.

By means of the calibration map so created, the system can accurately know the stimulus value at each of those states. Calibration strategy algorithms can then invert the map to find the states required to generate any desired stimulus value. Normally this inversion is performed by offline interpolation, but it could be done with a real-time feedback-leveling loop.

Acquisition of an accurate calibration map does not guarantee that the stimulus can be controlled with precision. In general, there will be some quantization and repeatability effects that limit the precision of stimulus control to that which the stimulus hardware can achieve, particularly in an open loop configuration.

Using Interpolation to Invert a Map

Interpolation allows the calibration map (or any map, for that matter) to be inverted. Another good reason to use interpolation is to maximize measurement speed by minimizing the number of measurements taken. The question then arises: what is the minimum number of points required
for a given degree of accuracy when interpolation is used? There are many possible forms of interpolation: linear, polynomial, least squares, and many others. Often, it is good to choose an interpolating function that matches the function that underlies the real process being approximated. A large body of literature is devoted to this problem[B2]. A favored form of interpolation for continuous functions with continuous derivatives is cubic spline interpolation. Splines have the advantage of passing through specified points and matching the derivatives at those points smoothly. Experience has shown that cubic spline generally provides optimum accuracy in practice when compared with either lower or higher order methods. In this discussion, therefore, I will focus only on cubic spline interpolation as a widely useful practical technique. The theory of the cubic spline provides the following expression for interpolation error:
f(x) - s(x) \approx \frac{h^4}{24}\,\theta^2 (1 - \theta)^2 f^{(iv)}(x_k) + O(h^5)
in any interior subinterval [x_k, x_{k+1}], where θ = (x − x_k)/h. The expression says that the error is proportional to the fourth power of the sample interval, h. Since it's a fourth power relationship, decreasing the sample interval by a factor of 2 decreases the error by a factor of 16. In practice, this means there is a sharply defined threshold below which it makes no sense to acquire any additional data points. I can't overemphasize the importance of this observation. Anyone designing high-speed measurement systems should be aware of the fourth power convergence of the cubic spline interpolation technique, as well as the convergence properties of other, alternative interpolation methods. Why? Because if you aren't aware of this property, you may design systems that are slower than they need to be, or you may not understand one of the serious issues that affects the speed of your measurements. A fourth power convergence sneaks up on you faster than you might expect.
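The fourth power behavior is easy to check numerically. The sketch below (an illustration using SciPy's cubic spline, not tied to any particular measurement system) interpolates a smooth function from samples at spacing h and again at h/2; the maximum error should fall by roughly a factor of 16:

import numpy as np
from scipy.interpolate import CubicSpline

def max_spline_error(n):
    """Max interpolation error for a cubic spline of sin(x) on [0, 2*pi]
    built from n + 1 equally spaced samples."""
    x = np.linspace(0.0, 2.0 * np.pi, n + 1)
    s = CubicSpline(x, np.sin(x))
    fine = np.linspace(0.0, 2.0 * np.pi, 20001)
    return np.max(np.abs(np.sin(fine) - s(fine)))

e1 = max_spline_error(16)   # sample interval h
e2 = max_spline_error(32)   # sample interval h/2
print(f"error ratio = {e1 / e2:.1f}  (fourth-power convergence predicts about 16)")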
Often it is the case that instrument systems are designed to take measurements as quickly as possible at some specified accuracy. The basic operation is to make a set of measurements in a vector field over a certain abscissa domain. The time it takes to measure the map is, simplistically, the number of points in the domain manifold multiplied by how long each of them, on average, takes to acquire. Excess points slow the measurement. The best thing to do is to take just as many points as you need to meet your accuracy goal. Should the convergence error of an interpolation be much smaller than the specified accuracy of the measured points, then meaningless extra data points are being taken and the measurement is slower than it needs to be.

Interpolation Example

A specific example will make this clearer. Consider an output versus input power measurement. In this sort of measurement, the input power into an amplifier is varied as the level of output power is recorded. Input power is the abscissa, output power is the ordinate. Very simple. In numerical simulation, I modeled a saturating amplifier by a sinusoid of amplitude A clipped at magnitude ±1 as is shown in Figure 8-3.
Figure 8-3. Clipped cosine
I then computed the exact value of output fundamental power from the Fourier series as a function of amplitude. That result is plotted in Figure 8-4. Notice how the curve bends downward as the amplifier saturates.
Figure 8-4. Fundamental power transfer
Finally, I pretended that I had “measured” the output power ordinate only at 1 dB step intervals of increasing input power. With a cubic spline interpolation, I estimated the points in between those measured steps. The error between the actual values and my interpolated values was tiny. If my interpolated graph were plotted on top of the ideal graph shown in Figure 8-4, it would exactly overlay the ideal graph. To better see how little error there is, look at the difference between the two plots, which is seen in Figure 8-5. Even at 1 dB steps, the interpolation error is insignificant, peaking at little more than a mere 0.02 dB.
Figure 8-5. Interpolation error (estimate – ideal)
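For readers who want to try this themselves, here is a rough sketch of such a simulation (for illustration; not the code used to produce the figures). It computes the fundamental output level of the clipped sinusoid by numerical Fourier integration, "measures" the transfer curve at 1 dB input steps, spline-interpolates between those steps, and reports the worst-case interpolation error on a dense grid:

import numpy as np
from scipy.interpolate import CubicSpline

def fundamental_dB(pin_dB):
    """Fundamental output level (dB) of a unit-clipped cosine driven at
    the given input power (dB); amplitude A = 10**(pin/20)."""
    theta = np.linspace(0.0, np.pi, 4001)
    out = []
    for a in 10.0 ** (np.atleast_1d(pin_dB) / 20.0):
        clipped = np.clip(a * np.cos(theta), -1.0, 1.0)
        integrand = clipped * np.cos(theta)
        # trapezoidal rule for the Fourier integral of the fundamental
        integral = (theta[1] - theta[0]) * (integrand.sum()
                                            - 0.5 * (integrand[0] + integrand[-1]))
        a1 = (2.0 / np.pi) * integral
        out.append(20.0 * np.log10(a1))
    return np.array(out)

coarse = np.arange(-10.0, 10.0 + 1e-9, 1.0)   # "measured" 1 dB steps
fine = np.arange(-10.0, 10.0, 0.05)           # dense reference grid
spline = CubicSpline(coarse, fundamental_dB(coarse))
err = spline(fine) - fundamental_dB(fine)
print(f"peak interpolation error ~ {np.max(np.abs(err)):.3f} dB")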
The error convergence properties of the spline method establish a minimum sampling requirement, beyond which the error rapidly drops into insignificance. Unfortunately, this threshold can't be established numerically unless the fourth derivative of the function in question is known. In the case of a real-world measurement, one will never know this derivative analytically. It is possible, however, to make a statistical estimate of this derivative by using so-called predictor-corrector and adaptive sampling methods. With these methods, one can estimate where this convergence occurs and adjust the data taking to maximize speed without sacrificing accuracy.

Sampling Interval versus Resolution Confusion

For some reason, people don't like interpolation. They think it's not legitimate in some way, that it manufactures data, and that a synthetic instrument that leverages interpolation for performance gains is somehow cheating. I think part of the problem one can have with interpolation has to do with a fundamental misunderstanding. There is a lot of confusion between the term "resolution" and the terms "quantization" or "sampling interval," with many people using the word resolution erroneously to describe quantization in various contexts. In this section, I hope to set the record straight. Let me begin with two definitions:

Fundamental Definitions

Abscissa Resolution: The minimum interval in an abscissa between which the measurement system, when applied to a DUT, can produce statistically independent ordinate measurements, given a maximum intermeasurement correlation or crosstalk specification.

Abscissa Quantization Interval: The minimum interval in an abscissa between which the measurement system, when measuring a DUT, can produce unequal ordinate measurements.
The subtle difference between these definitions should be evident. They are similar in some ways. A consequence of this similarity is the confusion between the two that I have often seen. However, despite the similarity in the definitions, the concepts are totally distinct.

Resolution is about independence in ordinate sampling. How close can the system take two samples in abscissa and have the corresponding ordinates be independent? Not just different, but fully independent. Independent, in basic statistical terms, means that the ordinates each can be anything at all within the expected range regardless of what the other was.

Consider a measurement of the temperature versus time of some part in a jet engine. The abscissa is time; the ordinate is temperature. How close together in time can the system make two temperature measurements such that one temperature has no influence on the other? Let's say the range is 0–500°C. The system measures 205.6°C. How long must it wait before taking another measurement such that it can get an ordinate in the 0 to 500 degree range that isn't influenced by the fact that the temperature was just 205.6? Let's say we want to measure the engine temperature running, and then cold. How long do we need to wait for it to cool off, with the influence of the recent 205.6°C reduced below some acceptable threshold? That's a question about resolution of measurements.

Obviously, the answer to this question depends on more than just the measurement system. It depends on the thermodynamic characteristics of the DUT. It also depends on some criterion for what it means to "have no influence." This criterion is the correlation or crosstalk specification I mention in the definition.

The word "resolution" is often used in optics to refer to the resolving power of a telescope or microscope. This form of resolution is measured by specifying how close together, angularly, two image components can fall before they merge into one and become indiscernible, based on criteria given differently by Rayleigh, Dawes, Abbe, and Sparrow. Angle is the abscissa, image intensity is the ordinate, a criterion is specified, and thus the optics definition of resolution correctly corresponds to the definition I have given.
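To put a number on the jet engine question, one workable assumption is that the influence of a previous reading dies away exponentially with some thermal time constant. Under that assumption (made here for illustration; a real DUT needs its own thermal model), the required wait time follows directly from the crosstalk specification:

import math

def wait_time(time_constant_s, crosstalk_limit):
    """Time to wait until the influence of the previous reading, assumed
    to decay as exp(-t/tau), falls below the crosstalk specification."""
    return time_constant_s * math.log(1.0 / crosstalk_limit)

tau = 120.0   # assumed thermal time constant, seconds
print(f"wait {wait_time(tau, 0.01):.0f} s for < 1% residual influence")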
In contrast, the idea of quantization is not the same thing at all. Abscissa quantization, or sampling interval, specifies how close together in abscissa the system can take measurements and possibly have ordinates that are different at all. In the temperature versus time example, how close together in time can you make two potentially different temperature measurements? Perhaps the system has internal processing, or the sensor has constraints, that prevent measurements from being acquired more often than once per second. If you try to get data sooner than this, you must necessarily get the same answer. If you wait longer than this, the measurement of temperature may well be different to some tiny degree.

How is this related to interpolation? Well, if you understand the difference between abscissa quantization and abscissa resolution, you should be able to see that abscissa quantization can be made arbitrarily small through the use of interpolation. That is to say, you can achieve as fine a quantization in abscissa as you want without actually taking any more data or altering the measurement hardware. Abscissa quantization is purely an artifact of processing, and you can choose it to be whatever you want based on the interpolation post-processing you do. You can even change your mind after the measurement and synthesize finer abscissa quantization should you need it.

But you cannot synthesize finer resolution after the measurement. If the ordinate samples are all correlated across the abscissa, you cannot in general make them magically become the same number (or more) of uncorrelated samples. Resolution says something immutable about the measurement process. Quantization does not.3
3. Actually, the statement that you can't improve abscissa resolution in post processing is false. For certain kinds of measurements, super resolution techniques exist for synthesizing finer abscissa resolution beyond what seems possible given the correlation of the measured data. These techniques trade off ordinate precision (SNR and repeatability) for abscissa resolution. Although by no means without drawbacks, super resolution techniques have many real-world practical applications and should not be forgotten when abscissa resolution is at issue.[B8]
The reason this distinction is of great importance for understanding synthetic instruments is that resolution is a specification given on physical hardware measurement systems that says something about the quality of the hardware. If the meaning of resolution intended by the specification is actually quantization, the specification does not necessarily say anything about the hardware in a synthetic instrument, since in many cases the synthetic quantization can be anything you want it to be.

On the other hand, I'm not trying to say quantization is not of any value or importance. Far from it. It is all too often the case that insufficiently fine abscissa quantization is provided despite the fact that it is easy to synthesize more. The reason for this may be that well-meaning, scrupulous designers fear being accused of the immorality of interpolation; namely, they don't want to make true abscissa resolution seem finer than it is by cheating with interpolation post-processing. Yet this is misguided thinking. It results in an error in design that is potentially more serious than what it seeks to avoid.

Consider the classic "spike" effect visible in uninterpolated FFTs. If you perform a moderately windowed FFT that results in an impulse-like spectrum, but you fail to zero-pad or use a chirp-Z interpolation to provide quantization finer than the resolution, you will see, as in Figure 8-6, a deceptive spectral plot that shows a lone spike.
Figure 8-6. "Spiked" FFT
Sometimes the spike sits atop a dome, not unlike an early WWI German spiked war helmet. In all cases, the spike appears suspiciously without sidelobes. If, however, you use some prudent amount of interpolation, the sin(x)/x structure becomes evident, as you see in Figure 8-7. Neither plot has more resolution, but I submit that the first plot is misleading, while the second plot is clear and plain. It may be that interpolation gives no additional information in a statistical sense, but it can vastly improve the readability and usefulness of data for humans.
Figure 8-7. Interpolated FFT (same data)
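The effect is easy to demonstrate numerically. In the sketch below (an illustration; the sample rate, tone frequency, and window are arbitrary choices), zero-padding the same windowed record before the FFT provides finer frequency quantization, so the sin(x)/x sidelobe structure appears, while the underlying resolution, which is set by the record length and window, is unchanged:

import numpy as np

fs = 1000.0                      # sample rate, Hz (assumed)
n = 256                          # record length
t = np.arange(n) / fs
x = np.hanning(n) * np.cos(2.0 * np.pi * 125.0 * t)   # windowed tone

coarse = np.abs(np.fft.rfft(x))          # bin spacing = fs/n
fine = np.abs(np.fft.rfft(x, 8 * n))     # zero-padded: bin spacing = fs/(8n)

print(f"bin spacing without padding: {fs / n:.2f} Hz")
print(f"bin spacing with 8x padding: {fs / (8 * n):.3f} Hz")
# The padded spectrum interpolates between the original bins, revealing
# the window's sidelobes; it does not add the ability to separate two
# tones closer together than the window's resolution bandwidth.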
Ordinate Quantization and Precision

Abscissa quantization is probably the concept most often misnamed as resolution, but ordinate quantization is sometimes called resolution as well. I prefer to try to keep the matters separate by never speaking of "ordinate resolution," but rather, to refer to the precision of an ordinate. Thus, I can also talk about the distinction between ordinate quantization and ordinate precision.

Fundamental Definitions

Ordinate Precision: The minimum statistically significant interval possible between two unequal ordinate measurements by a measurement system when measuring a DUT, given a specified confidence level.

Ordinate Quantization Interval: The minimum interval possible between two unequal ordinate measurements by a measurement system when measuring a DUT.

When speaking just about an ordinate in isolation, resolution can be defined as the statistical blurring caused by measurement uncertainty. In that sense, ordinate resolution is a specification of the precision in the ordinate. It's a parameter that includes repeatability and random noise that prevent us from being able to decide if two different measurement results are different because they are measuring two different true values of the measurand, or because random errors have produced the difference. One can never know for sure if two unequal measurement results are just caused by dumb luck or if they are "really" different, but one can say with some quantified confidence level that the difference wasn't a fluke.
For example, I might be able to measure a distance to within 1% precision given a confidence level of 99%. That is to say, if two measurements are more than 1% apart from each other, then 99% of the time this difference represents truly different distances. The other 1% of the time they are different based on dumb luck. This sort of reasoning is related to hypothesis testing and is part of the realm of the science of statistics. If you want to talk quantitatively about measurements, you need to do your statistics homework.[B9]

Ordinate quantization, on the other hand, is the minimum interval between any two different ordinate measurements. It says nothing about the precision of the measurement. All it says is how many digits are recorded. As any elementary science student knows, adding more digits to your answer does not necessarily make your answer any more precise. Only the significant digits count, and those digits represent the true precision of the ordinate. As with abscissa quantization, ordinates can be quantized more finely after the fact by applying interpolation techniques. Ordinate precision, on the other hand, says something fundamental about the quality of the measurement process and in general cannot be changed merely with postprocessing.4
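A minimal sketch of that reasoning, assuming Gaussian random errors with a known repeatability standard deviation (the numbers are illustrative only): two results are declared "really" different only when their separation exceeds the threshold implied by the chosen confidence level.

from statistics import NormalDist

def significantly_different(x1, x2, sigma, confidence=0.99):
    """True if |x1 - x2| is too large to blame on random error alone,
    assuming each result has independent Gaussian error with std dev sigma."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    threshold = z * (2.0 ** 0.5) * sigma   # std dev of the difference is sqrt(2)*sigma
    return abs(x1 - x2) > threshold

print(significantly_different(10.00, 10.04, sigma=0.01))   # True
print(significantly_different(10.00, 10.02, sigma=0.01))   # False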
De-Embedding Calibration Objects

Systems often are required to provide for the general concept of de-embedding, which is a generalization of any kind of correction factor applied to an ordinate for the purpose of removing the effects of the physical location of sensors in the measurement. De-embedding effectively moves the measurement from the sensor to some place in the DUT.
4. Actually, the statement that you can't improve ordinate precision in post processing is false. For certain kinds of measurements, averaging and decimation techniques exist for synthesizing finer ordinate precision beyond what seems possible given the random component of the measured data. These techniques trade off abscissa resolution for ordinate precision. In a sense they are the opposite of super resolution techniques that allow us to perform trade-offs in the other direction. Although by no means without drawbacks, averaging and decimation techniques have many real-world practical applications and should not be forgotten when ordinate precision is at issue.
In support of de-embedding, a synthetic measurement system will accept de-embedding calibration objects from the user or controlling software.

To understand how the de-embedding calibration object is applied, consider the simple case of a single y ordinate measured at a single x abscissa. This is a single real number in some units. The de-embedding calibration object for this measurement specifies how to transform the ordinate from the measured value to the virtual value estimated to exist at some specified measurement point of interest.

For example, in a measurement of temperature, there may be some known relationship between the temperature at the sensor location and the temperature at some significant location inside the device being measured. In the case of an integrated circuit die, for example, you may be able to measure or control the electrical power (Watts) into the die, and you may also know the thermal resistance (°C/Watt) between the die and a location where you can measure temperature. The thermal resistance along with the power gives you a de-embedding factor from which you can infer the die temperature.
Figure 8-8. De-embedding applied to temperature measurements
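That particular de-embedding calculation is simple enough to sketch directly; the numeric values below are assumptions for illustration, not taken from the figure:

def die_temperature(sensor_temp_C, electrical_power_W, theta_C_per_W):
    """De-embed a die temperature from a sensor reading using the known
    electrical power and the known sensor-to-die thermal resistance."""
    return sensor_temp_C + electrical_power_W * theta_C_per_W

# Assumed values for illustration
print(die_temperature(sensor_temp_C=65.0,
                      electrical_power_W=2.5,
                      theta_C_per_W=12.0))   # -> 95.0 degrees C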
In general, the de-embedding calibration factor for a particular ordinate will be a function of all the abscissa variables and all the ordinates. Moreover, each ordinate can have a different de-embedding calibration factor. Thus the ordinate de-embedding calibration object is a map, and therefore can have the same data structure as the measurement object itself.
De-Embedding Dimensionality and Interpolation

Although the measurement data from ordinates and the ordinate de-embedding calibration data can be represented by an object with the same overall structure, the de-embedding ordinate calibration object does not need to have exactly the same number of dimensions, dimension lengths, or domain separability as the ordinate measurement object it serves to calibrate. In the case where the dimensionality is different, it is assumed that the de-embedding calibration factors are constant along the missing axes. In the case where the dimension lengths are not equal, it may be necessary to interpolate or extrapolate. How the interpolation and extrapolation is best done depends on the ordinate. In general, linear interpolation and least-squares extrapolation are often appropriate.

Abscissa De-Embedding

De-embedding calibration objects can be provided for abscissas as well. These objects must be available to the system before the abscissa sequence tables are built. They allow stimuli levels to be more accurately and conveniently controlled. As with ordinate de-embedding objects, the application, interpolation, and extrapolation methods are abscissa dependent.
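As a sketch of how such a de-embedding object might be interpolated onto a measurement's abscissa before being applied (an illustration only; real systems would pick interpolation and extrapolation methods per ordinate or abscissa, as noted above):

import numpy as np

# De-embedding calibration object: correction (dB) on a coarse frequency grid
cal_freq_GHz = np.array([1.0, 2.0, 3.0, 4.0])
cal_corr_dB = np.array([0.20, 0.35, 0.50, 0.80])

# Measured ordinate on a finer frequency abscissa
meas_freq_GHz = np.linspace(1.0, 4.0, 13)
meas_gain_dB = np.full_like(meas_freq_GHz, 20.0)   # placeholder data

# Interpolate the calibration factors onto the measurement abscissa and apply
corr = np.interp(meas_freq_GHz, cal_freq_GHz, cal_corr_dB)
deembedded_gain_dB = meas_gain_dB + corr
print(deembedded_gain_dB)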
Chapter 9: Specifying Synthetic Instruments

When an ATE designer is first faced with an ATE problem that they want to fit to some synthetic instrument-based solution, the first step is to capture the requirements for the solution. These requirements should specify the synthetic instrumentation from a high level, giving the properties and elements for the synthetic solution.

There are many ways to do this. Engineers have attempted to specify synthetic instruments in all sorts of ways, and I personally have seen this process attacked from many different directions. However, so far as I am aware at the date of this writing, there is no standard way to do this. I do know that there are many bad ways.

One quite common bad approach is the following: measurement system specifications are written in the vaguest of terms and delivered to the designer. Habitually, this consists of a list of stimuli to be provided and responses to be recorded, and maybe some accuracies are specified for the recorded values, often based on marketing brochures from legacy products.

At that point, the first mistake is made. The designer begins to specify hardware modules that accomplish the measurement tasks. Already they have left the path to a synthetic solution. The designer goes down the list of stimuli and picks modules that make each stimulus. Maybe, if they're lucky, they can find a module that does two of the stimuli, but in general the idea is to make a distinct mapping between required stimuli and required stimulus modules. Specifications on accuracy of a stimulus are applied directly onto the associated module.
Following that, a similar misguided process results in a list of required response modules corresponding to the required responses to be recorded. Specifications on accuracy of a response measurement are likewise levied directly onto the associated module.

This results in a modular system design with easy-to-understand requirements traceability. Block diagrams are drawn. Specification compliance matrices are compiled. There's only one small problem. The system isn't synthetic! Instead, specific hardware is assigned to specific measurement tasks. It's our old rack-em-stack-em friend disguised in modular clothing. Sort of a "bookshelf" system, where each book represents a specific measurement.
Figure 9-1. Bookshelf modular
At this point, the designer can try to “back door” a synthetic solution by looking at some of the modules that have been specified. Maybe they can design them with a synthetic approach and in that way call the whole design synthetic. So the designer writes some vague requirements and attempts a synthetic design of the modules themselves. Here the process repeats and the same mistakes are made again.
Synthetic Instrument Definition and XML

A significant observation about the wrong-headed approach described in the previous section is that the system software that makes these modules do the job (a job as yet not clearly defined) is held implicit during the requirements capture and analysis stage.
Later on, once the hardware is defined, ATE programmers may be invited to generate software requirements based on the original measurement specifications using this modular hardware. The software requirements that result will inevitably lead to a procedural test program. Maybe in a few spots some synthetic concepts can briefly appear, but the overall focus is on procedural test executive functionality.

How do we fix this? How do we avoid being led down the garden path to ingrained ATE design methods? The answer is that we need to set off in a consistent and beneficial direction toward synthetic design if we want to end up specifying synthetic instruments. We must take a formally rigorous approach that focuses on the measurement and fits the problem to a concrete synthetic solution. Only then will we find that our system design is fully synthetic. Only then will we end up realizing all the efficiencies and benefits of a synthetic design.

To provide the consistent and beneficial direction toward synthetic design during the capture of requirements, I have developed an XML-based method of specifying synthetic instruments. I submit that it is a right way to specify synthetic instruments. I believe this XML-based method can lead us more consistently toward a true synthetic design. Whether or not this approach is eventually adopted by the community as the right way, for the purposes of this book it will serve to illustrate a "right thinking" method for specifying synthetic instruments that propels the designer closer to the goal.

Why XML?

Any popular new software technology has a gee-whiz factor that draws designers to it like so many moths to a flame. Designers are attracted regardless of whether the new technology is appropriate to the task at hand, often resulting in their demise. So long as a technology is new and interesting, with lots of cryptic acronyms, designers will use it, appropriate or not. XML is certainly popular today and has tons of related acronyms, but at first glance XML's roots as a text document markup language may seem inappropriate for specifying ATE. Why then, exactly, should we use it?

You should use XML because XML has several compelling features that make it ideal to use in the context of synthetic instruments. These compelling features make XML appropriate as a mechanism for describing both synthetic instrument hardware and software and the two working together as a system.
XML has the following distinct advantages in the context of synthetic instrumentation:

Hierarchical

XML provides a convenient way to describe hierarchical things. In fact, an XML document must be strictly tree structured. If you use XML to describe a synthetic instrument, the description will inevitably be tree structured. This is a good thing. Synthetic instruments must be designed in this way. Using a tool like XML with a strict hierarchical structure keeps you on the right road when you design your measurement system.

Extensible

XML is an eXtensible Markup Language. That means it can be extended with a new schema of tags that serve the needs of new contexts. You are free to define a new tag of your own and give it semantic meaning specific to your synthetic instrument application.
Extensibility in XML is a formalized and expected part of the way XML works. In XML, you define a document type description (DTD) or an XML schema that sets down the syntactic rules for your document.

Abstract

An XML description of a measurement is a pure and abstract description that exists outside of any particular hardware context. Often, synthetic instruments developed as replacements for legacy instruments make the mistake of using the legacy instrument specifications as a specification for the new synthetic instrument. This mistake can be largely avoided by abstracting the pure measurement capabilities of the old instrument away from its original hardware context.

Standards-Based

XML is a growing open standard derived from SGML (ISO 8879). Because XML is a standard, there are many open source and commercial tools available for authoring, manipulating, parsing, and rendering XML[B6]. The standardization of XML allows these tools to interoperate well.
Because of its wide applicability, it's easy to find training to get up to speed. The bookstores are filled with introductory XML books, classes are offered in colleges, and even training videos are available. Note, too, that SGML is a popular standard within the U.S. government, and the U.S. government is the largest market today for synthetic instruments.

Programming Neutral

XML is not a programming language. It is a markup language. Its goal is to describe the logical structure of something, not to directly define algorithms in detail. XML does not replace the use of full-featured programming languages in describing detailed procedures or data structures. This is important since no one wants to give up what has already been achieved with traditional programming. Nobody will be re-coding FFTs in XML. Instead, XML will assist in the generation of program code by encapsulating the hierarchical structure of a synthetic measurement system in a way that can guide and control and even automate the programming.

Portable

XML is not tied to any one computing platform, operating system, or commercial vendor. It flourishes across the spectrum of computing environments. Moreover, there are standards such as the document object model (DOM) and simple API for XML (SAX) that give a platform-neutral way for programs to interact with XML. The portability of XML is advantageous. No bias or restriction is placed on the hardware and software options based on the description methodology chosen.

I've decided to define a system that uses XML to describe and specify synthetic instrumentation. XML is somewhat of a blank slate with regards to test and measurement, and therefore it will be mostly paradigm-neutral when applied. I think this is a great advantage. It will help focus the discussion toward what I am trying to explain about defining synthetic instruments and away from widely addressed questions surrounding traditional instruments.
ATML

Automated test markup language (ATML) is a cooperative effort by members of the ATE industry to define a collection of XML schemas that allows ATE and test information to be exchanged in a common format adhering to the XML standard. The work in this book is independent of that effort. Based on the work I have seen on ATML to date, the XML techniques described in this book are more complementary to ATML than they are redundant or conflicting. Because of the measurement focus required by synthetic instrumentation descriptions, I address a narrower scope of issues with XML technology than the ATML group is addressing.

I'll go out on a limb and predict that, like many other ATE-related software tools and techniques, ATML will experience gravitation toward the routine, procedural, instrument-oriented measurement paradigm. To counterbalance this inevitable gravitation, I would encourage everyone involved with ATML to attempt as much as possible to use the blank slate of XML as an opportunity to do something truly new and better, rather than to simply translate the same old methods into a new syntax.

Why Not SCPI, ATLAS,…?

Before I get into the use of XML, I need to address an objection that I know will be raised by some people. Some folks might think that it would be reasonable to use something else for describing synthetic measurements. One possibility I've heard suggested is SCPI. After all, the Standard Commands for Programmable Instrumentation (SCPI) defines a standard set of commands to control programmable test and measurement devices in instrumentation systems. That seems, at first glance, an appropriate standard. Automated instrumentation developers already know SCPI. In fact, the whole purpose of SCPI was to provide a standardized lingua franca for programmable instrumentation, particularly over the IEEE-488 bus, but over other interfaces (for example, VXI) as well.
Figure 9-2. Example of SCPI code
No doubt, SCPI can be used quite readily within a synthetic instrumentation system as a way to talk to the individual physical instruments. SCPI has a tree-oriented structure, not unlike XML. It's also true that SCPI has a related data interchange format (DIF) for recording output data. It certainly can be used to talk to synthetic instrument systems as a whole and to define interfaces to new synthetic instruments.

Unfortunately, SCPI has certain aspects that make it somewhat problematic for use in describing synthetic instruments, both here in this book, and possibly in a wider context. First, SCPI provides a standard for an instrument communication interface, not a controllably extensible method for describing synthetic measurements. As such, SCPI really doesn't exactly fit the bill for the purpose I intend. Second, listing the set of functions an instrument performs based on the commands it responds to does, to some degree, tell us what measurements can be made. I still could use SCPI to describe measurements, or at least SCPI syntax, which is tree oriented, exactly like XML. In this way, I would make SCPI do double duty with its role as an interfacing language. However, I believe this would inevitably lead us to a mixed model, drawing us away from a synthetic approach in the design.

In its effort to allow for all the diverse functionality of the wide range of automated instrumentation, SCPI provides a rich facility that can be used to describe the interface to virtually any instrument, existing or imagined. That's not to say it has no limitations. Any practical system must have limitations. Rather, I'm saying that SCPI provides too much flexibility, and thereby allows the designer the ability to design measurements and talk to instruments with any paradigm, synthetic or not.
Problems with Other Legacy Software Approaches

There really is no point in belaboring the issue by listing all possible legacy software choices available and explaining why they can't be used or aren't appropriate to synthetic instruments. To do so leads us into a religious war. I indulged in one crusade (my prediction regarding ATML and argument against SCPI), and that should be enough. Notwithstanding the advantages already stated that argue for the use of XML, in point of fact, there is no fundamental reason why one can't define and manipulate synthetic instruments and synthetic measurement systems using any system or combination thereof that strikes one's fancy: SCPI, ATLAS, FORTH, BASIC, Java, SQL, FORTRAN, C, or any other reasonable programming tool or environment. They're just not as cool as XML.
Introduction to XML

As I alluded to earlier, one might think of XML only as a markup language for documents, where documents are text-processing things that get displayed on web pages or printed in books. In fact, as I write this book, I'm writing the text with XML markup. XML is in some sense a subset of SGML, the Standard Generalized Markup Language defined by ISO 8879. But a document is really any data containing structured information. Myriad applications are currently being developed that make use of XML documents in contexts that are far removed from text processing. There are already an amazing number of XML Document Type Descriptions. Any kind of structured information is amenable to description by an XML-based format.

Since XML can describe structured information, it can be used to describe synthetic instruments. However, just because something is possible doesn't mean it's necessarily a good idea. Why is the application of XML to the structured description of synthetic instrumentation a good idea?

Automatic Descriptions

XML can be applied practically as a description language in a fully automated context.
That is to say, it would be practical to start with an XML description of a synthetic instrumentation system, and turn it into a real, high-speed instrument using nothing but automated tools.

XML is easy to use with modern compiler tools. Part of this facility stems from the fact that it is possible to express the syntax of XML in Extended Backus-Naur Form (EBNF). If you've never heard of EBNF before, don't be frightened. EBNF is merely a system for describing the valid syntax of a grammar like XML. It's a way to describe what can and can't be said in a purely mechanical way. EBNF comprises a set of rules, called productions. Every production rule describes a specific fragment of syntax. A complete, syntactically valid program or document can be reduced to a single, specific rule, with nothing left over, by repeated application of the production rules. Because the syntax of XML can be expressed in EBNF, it is possible to use modern compiler tools that can take EBNF and turn it into compilable, high-speed computer code. Any modern compiler book[B0] will explain how this works.

Actually, there's really no need to work at the level of EBNF if you don't want to. XML has associated with it a large collection of parsers, formatters, and other tools that allow designers to easily attach semantic functionality to XML documents. In most cases, these XML-specific tools are better at doing this than generic compiler tools. Notwithstanding attempts by some companies to patent-encumber various XML applications, thus far XML remains a relatively free and open technology. XML parsing libraries are freely available across many operating systems, and will be found integrated into many development tools. Similarly, there is a wide collection of tools that can be used to write XML. Constructing a GUI, for example, that generates well-formed XML is a simple task given all the help available.

Note the statement that XML can be turned into compilable computer code, as compared with interpreted code. Either is possible. The difference between the two is this: Should an XML measurement description be compiled, that implies that all the work of parsing the description and recasting it into a form that can be executed at high speed is accomplished once, up front, before the measurement is ever run. Interpreted code, in contrast, is not processed beforehand. It is processed while the measurement is being run.
Because the processing of interpreted code occurs at measurement time, it has the potential of slowing down the measurement. It's my opinion that interpreted scripting should be avoided for this reason. Measurement descriptions must be compiled into high-speed, isometrologically optimized state machine descriptions of a canonical map before they are run. Admittedly, given the ever-increasing speed of computers, this distinction seems less of an issue. Given a fast enough CPU, you can interpret and optimize your measurement every time it runs with no real penalty. Still, I feel that faster CPUs should not be an excuse for slower software or for skipping the optimization step. There will always be situations where the maximum possible speed is needed, and you shouldn't give that capability away for no reason.

Not a Script

The use of XML in automated test applications is nothing new. I have already mentioned the ATML effort. Another application of XML to ATE appears in Johnson and Roselli[C2], where XML is used as a flexible, portable test script language. Although clever and useful in certain circumstances, I believe "scripting" is an inappropriate use of XML in ATE. XML syntax is clunky for detailed procedural programming; a clunky script language is not what you want from XML in the context of synthetic instrumentation. Scripting implies sequential execution of a procedural design. Even if object-oriented (OO) techniques are used, and even if the script is compiled, the result is still not as map-oriented and OO as you want; it works at much too low a level; it can't get optimized effectively. Scripting results in too much freedom, and thereby doesn't constrain the design approach sufficiently to allow for the best performance. There are better things to use for scripting than XML. This point is true notwithstanding some of the advantages to the use of XML for scripting that are pointed out by Johnson and Roselli. In scripting, some hierarchical structure is definitely used (subroutines, loop blocks, if-else) but the basic flow is top-down and event oriented. For the most part, therefore, XML just dirties up the syntax of what could be a clear procedural script if rendered with a clearer procedural syntax, like one finds in C, Perl, Java, or Forth.
XML is better used as a descriptive language. It shines in its ability to mark things up in a hierarchical or tree structured manner. When you mark up something, like a text document, the markup adds attributes to the text content at the lowest level. It also allows higher level logical structure (paragraphs, tables, sections, chapters, and so on) to be built up with hierarchical layers of markup. Some applications of XML have what is called mixed content, with the hierarchical XML markup intertwined with raw text data at every level. In other cases, there is no content other than structural elements that nest downward at lower levels, possibly with raw data at the very bottom of the hierarchy. In this latter situation, tags and attributes are applied to lower level tags and attributes, down and down till you reach the atoms of content that represent fundamental things that do not apply to anything lower.

XML Basics

Let's dive right in and look at an example XML document.

Example 9-1. Simple XML document
<Instrument name="Oscilloscope">
  <Measurement name="Trace">
    <Abscissa name="time"/>
    <Ordinate name="voltage"/>
  </Measurement>
</Instrument>
Study Example 9-1 and see if you can discern the structure of XML syntax. If you are familiar with HTML, this should look very understandable to you. This is a great advantage of XML, by the way. It's human readable and leverages the understanding of HTML already possessed by millions of people.

If you aren't familiar with tagged markup, like HTML, here's a brief tutorial. First of all, you need to know that the angle brackets "<>" are special characters that enclose something called a tag. XML consists of these tags that themselves enclose elements. A start tag begins the enclosed area of text, known as an element, according to the tag name. The element defined by the tag ends with the end tag. An end tag starts with a slash.
One difference between XML and HTML is that in XML, a start tag like <item> must be followed by an end tag </item>. The end tag is not optional. However, in XML something called an empty tag is allowed. These have a slash before the closing angle. Here is an example of a start and end tag enclosing an item:

<item>this is an item</item>
This is an empty tag:

<item/>
Empty tags don't enclose anything, so they have no associated item. But that doesn't mean empty tags are, well, "empty." Empty tags, like items with no embedded tags, represent a kind of "leaf" of an XML syntax tree. As with HTML, XML tags may include a list of attributes consisting of an attribute name and an attribute value separated by an equals sign. An example would be <item bar="asdf">, where "bar" is the attribute name, and "asdf" is the attribute value. For the moment, that's it. (That wasn't hard, now was it?)

As you can see, XML has extremely simple syntax. In a sense, XML is a way of writing fancy parentheses to enclose and nest things in a tree structure with handy places to assign attributes at each branching of the tree. There are more things to talk about with regard to XML. I refer you to the many fine books on the subject that are now available[B6].

Synthetic Measurement Systems and XML

There are many different ways XML fits into the description of a synthetic instrument, but they can basically be divided into these interrelated duties:
Describing the Measurement
Describing the Measurement System
Describing the Measurement Results
The first of these duties is to provide an abstract description of the measurement to be performed. Since synthetic instruments do their work on generic hardware, this first duty is most important. There needs to be a way to describe measurements in a way separate from hardware. XML is exactly that way.
Even though it's most important, capturing the measurements to be performed is still only part of the picture. The available hardware must be defined so that these hardware resources can be allocated to the measurement tasks at hand. The description of the hardware suite is best done relative to some anticipated fixed yet abstract model so as to structure the description in a way that allows it to be most effectively used. XML can provide exactly this framework.

When the measurement is brought together with the hardware description, the synthetic instrument is generated, loaded into the hardware, and run. The result of the run is a set of measurement data. This resulting data needs to be captured, stored, analyzed, visualized, and possibly augmented or reduced by post processing. Again, XML can serve quite nicely as a data language, encapsulating hierarchical data in a way that can be easily manipulated both by a human and by a computer.

Describing the Measurement with XML

Let's begin with a simple example in order to show how XML is applied to the task of describing a measurement. Consider a synthetic oscilloscope instrument that measures voltage as a function of time. Here is a simple XML description of the single measurement done by this instrument.

Example 9-2. Simple oscilloscope
<Instrument name="Oscilloscope">
  <Measurement name="Trace">
    <Abscissa name="time"/>
    <Ordinate name="voltage"/>
  </Measurement>
</Instrument>
Let’s take a different view of Example 9-2. Figure 9-3 is a diagram of its tree structure. At the highest level, I have defined an “Instrument” called an Oscilloscope. Enclosed in that instrument is one “Measurement” which I named a “Trace”. The measurement consists of the “time” abscissa and “voltage” ordinate. In Example 9-2, I chose to use empty tags for abscissa and ordinate, with a simple attribute “name=” to give them an identifying title.
Figure 9-3. Tree structure of XML code example
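Because such a description is ordinary well-formed XML, standard XML tooling can walk the tree directly. The short sketch below is an illustration using Python's xml.etree.ElementTree module, applied to the oscilloscope description of Example 9-2; it is not part of any particular synthetic instrument implementation.

import xml.etree.ElementTree as ET

doc = """
<Instrument name="Oscilloscope">
  <Measurement name="Trace">
    <Abscissa name="time"/>
    <Ordinate name="voltage"/>
  </Measurement>
</Instrument>
"""

instrument = ET.fromstring(doc)
print("Instrument:", instrument.get("name"))
for meas in instrument.findall("Measurement"):
    print(" Measurement:", meas.get("name"))
    for child in meas:
        # child.tag is Abscissa or Ordinate; child.get("name") is its title
        print("  ", child.tag, child.get("name"))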
I think Example 9-2 is simple enough that you probably understand what I have described using XML, and how I went about it. Admittedly, this is a very superficial description. It is also true that I could have used XML in quite different ways to make the same description. I could have structured the description in XML this way instead:

Example 9-3. Alternative XML structure
<Instrument>
   <Name>Oscilloscope</Name>
   <Measurement>
      <Name>Trace</Name>
      <Abscissa>
         <Name>Time</Name>
      </Abscissa>
      <Ordinate>
         <Name>Voltage</Name>
      </Ordinate>
   </Measurement>
</Instrument>
As you can see, what were once attributes have been replaced by a deeper nesting of tags. Is there a reason to prefer one approach over the other? Should a tag attribute be used for "name" rather than a child tag? In many cases, the answer to this XML style question is unclear. It could be done either way without much difference in this case. One approach would be to use attributes for things that are clearly and tightly associated with the specific tag itself, and no other, rather than something, possibly reusable, that the tag describes or otherwise comprises. In Example 9-3, you should see how the <Name> tag is reused at different levels. Any reusable entity that might apply to different things is probably best expressed as a tag rather than an attribute. On the other hand, extremely generic attributes like "name=" are so common that an argument for syntactic simplicity could be made, suggesting that these common things should be attributes, saving us typing at the sacrifice of complicating reuse somewhat.

There is one situation where there is no decision, where you must use deeper tag nesting rather than an attribute. Multiple occurrences of an attribute are not permitted. Specifically, it would not be acceptable to make "Abscissa" an attribute of "Measurement" since a measurement could have multiple abscissas. For example, the map description of an image scanner might look like this:

Example 9-4. Flatbed scanner
<Measurement name="Image">
   <Abscissa name="Horizontal Position"/>
   <Abscissa name="Vertical Position"/>
   <Ordinate name="Red"/>
   <Ordinate name="Green"/>
   <Ordinate name="Blue"/>
</Measurement>
Horizontal and vertical position are the two abscissas, and RGB ordinates describe the color image data. It would not be acceptable to list both abscissas and three ordinates as attributes of the measurement. They must be listed as nested, child tags.

Defining an Instrument

Let's do another example, a little more worked out, to illustrate some further points. This example describes a synthetic instrument that can do two measurements: one called Reflection, the other Transmission. RF engineers will recognize these as the main measurements of a vector network analyzer, like the Agilent 8510.
Example 9-5. Network analyzer
<Instrument name="Network Analyzer">
   <Measurement name="Reflection">
      <Stimulus>
         <Port name="input">
            <Abscissa name="Frequency">
               <Units name="MHz"/>
               <Value>1.000</Value>
               <Value>2.000</Value>
               <Value>3.000</Value>
            </Abscissa>
            <Abscissa name="Power">
               <Units name="dBm"/>
               <Value>0.000</Value>
            </Abscissa>
         </Port>
      </Stimulus>
      <Response>
         <Port name="input">
            <Ordinate name="Return Loss">
               <Units name="dB"/>
            </Ordinate>
         </Port>
      </Response>
   </Measurement>
   <Measurement name="Transmission">
      <Stimulus>
         <Port name="input">
            <Abscissa name="Frequency">
               <Units name="MHz"/>
               <Range start="1.000" stop="3.000" step="1.000"/>
            </Abscissa>
            <Abscissa name="Power">
               <Units name="dBm"/>
               <Value>0.000</Value>
            </Abscissa>
         </Port>
      </Stimulus>
      <Response>
         <PortGroup>
            <Port name="input"/>
            <Port name="output"/>
            <Ordinate name="Gain">
               <Units name="dB"/>
            </Ordinate>
         </PortGroup>
      </Response>
   </Measurement>
</Instrument>
The tree structure here is again evident, and I have introduced some new nesting elements: Stimulus, Response, and Port.

Figure 9-4. Detailed example tree structure: the Instrument (Network Analyzer) contains the Reflection and Transmission measurements, each with a Stimulus branch (Port, Frequency and Power abscissas) and a Response branch (Port, with the Return Loss or Gain ordinate).
The <Stimulus> and <Response> elements tell us whether the enclosed elements are associated with stimulating the DUT, or with measuring some response. Abscissas or ordinates may be defined as canonical only as a stimulus or only as a response. Thus the stimulus and response nesting will decide what axes in the measurement map must be inverted prior to data acquisition.
The <Port> element is a deceptively simple way to say what physical DUT ports are associated with what parameters of the enclosed elements. Abscissa and ordinate port parameters within it are assumed to refer to the listed ports. In Example 9-5, all the abscissas, and the Reflection ordinate, refer to the port named input. That name serves to uniquely identify a particular port. It would be assigned in the measurement system description, or possibly as a measurement parameter.

In contrast, the "Gain" ordinate does not refer to a particular port; it refers to a <PortGroup>, which is a group of ports referred to collectively. In this case, the port group comprises the ports named input and output. Gain is measured once for this port group. A port group is different from a list of ports, where the ordinate would be measured once for each port in the list. Gain requires two ports to be specified in order to make a single measurement. If you wanted several gain measurements, you would need to give a list of PortGroups.

Ports are, deep down, really just another abscissa. <Port> is defined as an element that contains abscissas and ordinates so as to clarify the parameter passing and grouping issues, but otherwise it will act like any other abscissa. You must set their value explicitly. You also must say explicitly what the other abscissa values should be for the measurement. Remember, an abscissa is an independent variable, so it needs to be set independently. A Port or an abscissa can be set to a constant value, or it can vary. The "Frequency" abscissa values are an example of an abscissa that varies. It is given as an enumerated list in the first measurement, and by a range tag in the second. Both result in the same actual abscissa points.

The <Units> element allows us to say what the name of the units is for each abscissa and ordinate, as well as giving them an attribute. This allows for automatic unit conversions and tracking.

The ordinates specified in Example 9-5 would probably both be compound ordinates in a real instrument. This concept is described in the section titled "Canonical Maps." It means that there is a calibration strategy specified that will give a schema to rewrite the map into canonical form, with only atomic ordinates specified. Let's look at what that simple calibration strategy definition might look like for "Gain".
Calibration Strategy Example

Measurement systems can't measure gain directly; it's a relative measurement. Gain can be defined as the ratio of output power to input power of some signal passing through a DUT. The units of gain are customarily expressed in dimensionless decibels (dB), which is 10 log of the power ratio. Unlike gain, power is something that a measurement system can often measure directly—it can be a canonical ordinate. If it can measure power canonically, it can break up the compound ordinate "Gain" into two copies of the canonical ordinate "Power", one measured at the input port and the other measured at the output port of the DUT. Since input and output power are measured at different ports, I will expand the map with a port abscissa, then specify post processing to collapse the map back down along the port axis.

Example 9-6. Compound ordinate
<Ordinate name="Gain">
   <Units name="dB"/>
   <Measurement>
      <Response>
         <Port name="port">
            <Value name="in">input</Value>
            <Value name="out">output</Value>
            <Ordinate name="Power">
               <Units name="mW"/>
            </Ordinate>
         </Port>
      </Response>
      <Collapse axis="port">
         <Units/>
         <Algebraic>Power[out]/Power[in]</Algebraic>
      </Collapse>
   </Measurement>
</Ordinate>
Study Example 9-6 carefully. There are several interesting elements that deserve some comment. First of all, at the highest level in the tree, I have just an ordinate named "Gain" with units specified just as it was in the network analyzer example. Instead of ending there as it did before, now the ordinate also contains a map for a new (unnamed) internal measurement. The new measurement has no stimulus section, only response. The abscissa of this new measurement is actually the <Port> element (remember I said that ports were really abscissas in disguise). This is a canonical abscissa that specifies how to interact with the DUT. Within the port abscissa, you see that I give an enumerated list of response ports where the system will measure the ordinate, "Power". This isn't a port group, it's a list of ports.

The response port named "input" is really a loopback measurement of stimulus power. I assume that the measurement system hardware allows a loopback measurement of the stimulus at the DUT input. If it doesn't, then the response port is not canonical over this part of the abscissa domain, and would need to be broken down further, possibly in terms of a stimulus power abscissa. Alternatively, I could define gain itself directly in terms of a stimulus power abscissa, instead of a loopback response measured at the DUT input. This alternative approach would not work well on systems that did not have loopback capability. In general, it's best to say what you really want at the highest level, and break things down till you have what the system can actually do (or at least what some standard set of interface maps can do). Don't try to make it easy on the system by precanonicalizing based on what hardware you know you have. If you try to do this, you will sacrifice portability.

The <Collapse> element is new. This tells us that the "Port" abscissa is to be collapsed by means of the algebraic equation given. Since there is only one ordinate and it's a scalar, and the abscissa is an enumerated list, I can use a simple, scalar algebraic equation to perform the collapse. The syntax for the algebraic is given as the simplistic "Power[out]/Power[in]", which expresses the calculation of Gain as a ratio of two Powers. The "Power" manifold is herein treated as an array indexed by the previously named values "in" and "out" along the axis (named "port") that I have specified for collapsing.

I call the algebraic syntax simplistic, not because it doesn't work for this case, and many others, but because one may want to use something more complex here in general. For example, MATLAB syntax could be used, or J, or Perl, or C if you must. You could even link to external code here. Calibration strategy axis collapses need to be able to express all sorts of
multidimensional array manipulations, so it would be good to pick something that worked well with that sort of thing.

Here's an example of a more sophisticated axis collapse one may need. I have assumed power measurement was canonical, but what if it isn't? If the hardware does not have some fundamental power measuring system like a wattmeter, power would need to be measured by doing a mean square integral of voltage, current, or some other calculation (for example, FFT) based on a block of data. The canonical ordinate in that case might be an array of voltage samples. The map collapse in that case would be summing the squares of the data array, perhaps with a primitive as an algebraic expression, perhaps with a full-blown script within a <Script> element (a rough sketch of such a script-based collapse appears at the end of this section).

Note how units are specified for the result of the collapse. The units are empty, and therefore dimensionless linear by default, with the conversion to dB left implicit by the fact that the enclosing compound "Gain" ordinate is specified as "dB". Scale and unit conversions should be performed automagically based on what unit is specified for each axis. This is facilitated by using standard string identifiers for unit names and scales, and a separate unit conversion schema that can also be nicely specified in XML.

Another point to notice is the way I introduced identifier references. This is the first time I have used the idea of reference, and it represents a watershed in this XML method of describing measurement maps. I have named an axis and some of its elements to facilitate wiring them to the algebraic function. Identifiers and references lead us to the idea of Measurement Map parameters.
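To make the <Script> idea concrete, here is one rough sketch of what a script-based collapse might look like. The attributes shown, the sample-block axis name, and the 50-ohm reference impedance are all assumptions made purely for illustration; they are not part of any schema defined in this book.

<Collapse axis="sample">
   <Units name="mW"/>
   <Script language="MATLAB">
      % Collapse a block of voltage samples into one power value,
      % assuming a 50 ohm reference impedance (result in milliwatts).
      Power = 1000 * mean(Voltage.^2) / 50;
   </Script>
</Collapse>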
Functional Decomposition and Scope

When I defined a compound ordinate like Gain in terms of atomic ordinates, it was really quite similar to the classic functional decomposition that is used with programming languages like C. You start with a complex function, and then partially factor it with several subroutines. Some of these subfunctions are factored again, and those again, and so on, down till you have functions that don't call any subroutines (assuming, of course, that you do not recurse). Each function in such a decomposition represents a node in a tree, much in the same way as the <Measurement> elements represent nodes in the XML tree I have presented for describing measurement maps.

In a classic functional decomposition, each function can have parameters, and optionally return values. This allows us to pass information downward and upward in the functional tree. In the measurement map and calibration strategy schema I have outlined, information flow is implicit between the atoms and the compounds, relying on the fact that their interfaces fit. While it would be nice to believe that this fitting would happen spontaneously on its own, realistically I need a way to specify an interface. I have used "name=" identifiers to label things within the measurement. Let's add an explicit <ParameterList> element to say what parameters are passed into the measurement from outside. The names associated with these parameters become placeholders for the value passed in. Here's the calibration strategy map for "Gain" restated with an explicit parameter list.

Example 9-7. Parameter list
<Ordinate name="Gain">
   <Units name="dB"/>
   <Measurement>
      <ParameterList>
         <Parameter name="input"/>
         <Parameter name="output"/>
      </ParameterList>
      <Response>
         <Port name="port">
            <Value name="in">input</Value>
            <Value name="out">output</Value>
            <Ordinate name="Power">
               <Units name="mW"/>
            </Ordinate>
         </Port>
      </Response>
      <Collapse axis="port">
         <Units/>
         <Algebraic>Power[out]/Power[in]</Algebraic>
      </Collapse>
   </Measurement>
</Ordinate>
In addition to their use as parameter labels, I have used element "name=" identifiers as local variables within the measurement, allowing us to refer to specific abscissa values from the block. It would be reasonable to expect that the implied scope of a local identifier is delimited at the <Measurement>, just like the scope of functional parameters.

Measurement Parameters—A Hazard

The <ParameterList> element allows measurements to have parameters. The reason I introduced this capability was because compounds have dependencies on abscissas for calibration strategy. I needed a way to connect this together in an unambiguous way. Using named parameters seems unavoidable here. Now that they are introduced, measurement parameters have more possible uses than just the atom-compound interface. You may want a way for the test to interface with the measurement, passing down test parameters into a fixed map with variables, rather than rewriting the map with new constants. You can use the <ParameterList> interface for this if you wish. But do you want to do that?

At this point, I start to wonder: by introducing parameters and variables, aren't I in danger of turning the XML description of measurements into a real programming language? Isn't that a bad thing? Yes, indeed, this is a very dangerous point. Introducing reference in the form of functional parameters and local variables was a watershed for the XML stimulus response measurement map method. It potentially opens Pandora's box, setting free all the demons that plague anything that threatens to become "real" programming. Until now, everything was pretty and perfect in a context-free way, but parameters and variables seem to threaten that austere beauty, introducing ugly semantic context.
Don't get me wrong. I'm as much a fan of gnarly old variable naming and scoping as anyone, but you must remember the goal here is to provide a system that focuses on the measurement with a minimum of computer science arcana. I am dangerously close to introducing a whole bunch of issues that are well understood by programmers, but may alienate nonprogrammers (assuming any are even still reading at this point).

Admittedly I am in danger, but XML itself comes to the rescue. Things are not as bad as they may seem. A Turing-strength programming language has virtually infinite freedom. From this freedom springs most of the problems people have with programming. But XML is different. XML allows freedom, but only in strictly regulated ways permitted by the schema and DTD. The clever folks who invented XML have already seen this hazard and have paved many ways around it. Therefore, the trick to avoiding these dangers, I believe, has two important aspects:

1. Design the XML DTD and schema to enforce strictly unambiguous reference of measurement parameters. For example, if an ordinate requires a PortGroup parameter, make sure it gets one.

2. With the assumption of unambiguity guaranteed by the schema, allow measurement parameters and functional data flow to remain implicit as much as possible; only introduce them when absolutely necessary, or when they would make sense to a test engineer.

This trick is easier than it might seem. After all, look at what I have achieved so far with only very limited use of naming and reference. Furthermore, as I have already noted, test engineers are smart people. They will know, intuitively, that some measurement parameter is missing and be happy to provide it if asked at the right time. Just don't turn them into namespace accountants (i.e., programmers). If you do, they will rebel.
Describing the Measurement System with XML

By now you should have a pretty good idea how to describe a measurement with XML, but what about the measurement system itself? At some point, we need all this XML stuff to interact with real hardware. How does that happen?
A good part of the answer to these questions lies within the map manipulation process I have already described. Within calibration strategy, map canonicalization proceeds until it reaches a map expressed entirely in terms of the set of atomic abscissas and ordinates. That set represents the real hardware, at least from the point of view of the map stance. Therefore, a necessary step in describing the measurement system with XML is to identify the atomic ordinates and abscissas that the system can implement natively. Calibration strategy is then given this list; test engineers can write any measurement map they want; as long as the calibration strategy can find a way to canonicalize their map, real hardware can be told how to measure it.

Another part of the answer is given by defining the ports and modes available from the hardware. As I have described in the section titled "Ports and Modes," ports and modes specify the state of the hardware during measurement. Ports tend to indicate what DUT interfaces the system is stimulating or is measuring from. Modes tend to indicate the internal settings of the measurement system itself. Both ports and modes can be canonical or atomic. Once again, specifying the complete list of atomic ports and modes is necessary to specify the hardware from the point of view of the map stance.

The map then "knows" what the hardware can do. All that remains is telling the hardware to do it. The "telling the hardware" part of an atomic ordinate, abscissa, port, or mode is a hardware-implementation-specific thing. It may be as simple as reading or writing a register, or as complex as you please. There are many established approaches to this problem. There are "plug and play" driver interface standards, and there are proprietary "site file" formats for describing the details needed for hardware interaction. Any abstract system for describing hardware interactions (setting atomic ports, modes, abscissas, or reading atomic ordinates) that I present here in this book risks being irrelevant. Hardware vendors tend to like to keep ownership of the set of hardware driver standards they support, picking ones to support that they think will sell the most of their product. They jealously guard the low-level details of interfacing with their products, preventing any other drivers from being developed, preventing any sales to people using other standards, and thus "proving" they chose the right standard to support in the first place. Therefore, I won't bother to introduce yet another standard to be shunned. I will, however, risk giving a very simple example of how an interface description could be accomplished in XML, with no intention to propose it as a generalization.

All that said, I fearlessly consider how an atomic port might be specified with enough detail to effect hardware interaction. Starting with the trivial case, if the measurement system has just one stimulus port and one response port, and this connects to a DUT with just one gozinta and one gozouta (a.k.a. input and output), there really isn't any work to do. I can always assume that the port in the Stimulus element of the measurement map is the one stimulus port, and the port in the Response element is the one response port. Done.

Suppose now that I have a DUT with multiple inputs and multiple outputs, but I still have the single-stimulus, single-response SMS. Commonly the way people solve this is to use a switch matrix between the DUT and the measurement system. This allows an instrument with a small number of interfaces (in the one-to-many case, only one stimulus and one response interface) to be able to make measurements at numerous DUT ports. In the one-to-many case, the matrix is really just a TDM commutator with a different name. (Multiplexing options were discussed in the section titled "Simultaneous Channels and Multiplexing.")
Figure 9-5. Measurement system, switch matrix, and DUT: the stimulus- and response-side controller/codec/conditioner chains connect to the DUT through commutators.
When a commutator style switch matrix is used for DUT interfacing, all I need to do to specify a port in the measurement map is to somehow set the position of the commutator switch. The proper incantation I must perform to set this switch position depends on the details of the hardware interfacing, but more often than not this involves little more than writing something to a register someplace, or calling an "official" driver function, which secretly writes something to a register someplace. Under the above set of assumptions about the hardware model, a suitable XML schema to capture the relevant details might be something as trivially simple as this:

Example 9-8. Defining ports
<Port name="output">
   <Write address="0x1234" value="0x5678"/>
</Port>
Something as simple as that, either maintained separately or placed within the scope of the Instrument element in the XML measurement map schema I have thus far defined, could bind the logical port named "output" to an explicit action for interacting with hardware. To extend this schema to the purpose of setting modes and abscissas would require more structure and would get us involved in the concept of parameter reference that I discussed in the section titled "Functional Decomposition and Scope." Still, despite the additional semantic structure, it's likely that I need do little more, hardware-wise, than map referenced parameters to specific values the system writes to certain addresses. For ordinates, I will need to specify that the system should read values from certain addresses, but the concept is otherwise the same.

Of course, the above discussion is rather simplistic. Some modes and abscissas require a complex algorithm to set. Consider the case of a frequency converter in a signal conditioner. There might be several tunable frequencies that need setting, amplifier and filter band-switches that need controlling, and so on, in order to get the conditioner tuned to the desired frequency abscissa. Similarly, with the response system, reading the digitizer may require a complex algorithm, timing considerations, and other details. Clearly, a lot more could be accomplished here. However, for reasons stated above, I'll leave the rest of the XML schema for hardware interfacing as an exercise for the interested reader, or hardware vendor.
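As a starting point for that exercise, here is one hedged guess at how a mode and an atomic ordinate might be bound to register-level actions. The element names, the nested <Param> reference, and the addresses are invented for this illustration only; they do not correspond to any vendor standard or to the schema developed earlier in this chapter.

<Mode name="IF Bandwidth">
   <!-- the value written comes from a referenced measurement parameter -->
   <Write address="0x2000">
      <Param name="bandwidth code"/>
   </Write>
</Mode>
<Ordinate name="Detector Voltage">
   <!-- an atomic ordinate: its value is read from a hardware register -->
   <Read address="0x2004"/>
</Ordinate>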
Describing Measurement Results with XML

As discussed in the section titled "Abscissas and Ordinates," measurements are mappings (i.e., vector-valued functions) over separable and nonseparable domain grids. The typical data types for abscissa and ordinate elements are manifolds over the set of integers, real numbers, complex numbers, or even numeric arrays. Therefore, the basic requirement for a measurement results data structure is the ability to efficiently accumulate collections of tables of numeric data of the above types.

Obviously, if you define the measurement in XML, and you define the measurement system in XML, it might seem natural to record the data in an XML format. This is certainly reasonable. It's even a good idea. Listing the actual data values measured for an ordinate right in the XML map description is a great way to create a self-documenting data structure that can be manipulated with the same set of software tools you used for manipulating the measurement prior to the acquisition of data.

The problem with jumping to the obvious conclusion that one should use XML as a data recording format is the existence of a plethora of possibilities for data structures and data file formats (for example: Microsoft Excel, MATLAB, DIF, SQL, flat TDF or CSV, and so on) that can accumulate measurement data. Interoperating with these to various degrees, there is a second plethora of report generation and data visualization tools. People like these tools and understand the formats they rely on. Thus, it's not possible to ignore this dual legacy of prior art and go with XML for data storage, regardless of the advantages XML might have.2

Because the options are so diverse, I won't bother surveying the topic beyond some observations regarding the basic data structure requirements of a synthetic instrument. To this end, I will explore only two different basic data structures for the storage of measurement data.
2. On the other hand, as of the date of this writing, numerous vendors of data acquisition, storage, analysis, and visualization tools have either begun implementing support for open XML data formats in their legacy tools, or announced plans to transition their proprietary data files to XML format. Most prominent of these is Microsoft Corporation.
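To illustrate the idea mentioned above of listing measured ordinate values right in the map description, here is a minimal sketch based on the earlier oscilloscope example. The <Data> element and the sample values are assumptions invented for this illustration only.

<Measurement name="Trace">
   <Abscissa name="Time">
      <Units name="ms"/>
      <Data>0.0 1.0 2.0 3.0</Data>
   </Abscissa>
   <Ordinate name="Voltage">
      <Units name="V"/>
      <Data>0.00 0.31 0.59 0.81</Data>
   </Ordinate>
</Measurement>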
Column and Array Data

Column data is the most general data format. It is equivalent to a typical PC spreadsheet data structure. Consider the following example, where the first four columns are abscissas and the last two are ordinates:

Stim Freq  Stim Power  Supply Voltage  Atm Pressure  Resp Power  DUT Temp
(Hz)       (W)         (V)             (mBar)        (mW)        (°C)
1000       1.0         5.00            985           0.92        22.0
2000       1.0         5.00            985           0.95        22.6
1000       1.1         5.00            985           0.97        22.0
Column data tables can represent any number of scalar ordinates and abscissas, including data taken over abscissas that are not separable. If efficiency were not an issue, a column data structure could serve for all measurement map data. That means you can use Excel, or any other spreadsheet format, to store map data.

Column data is inefficient in the case of separable abscissa domains. If the abscissa is separable, it is far more efficient, space-wise, to store only the factored individual scales for the abscissas rather than the outer product. The ordinate data is stored in ordinary array format. Spreadsheets can store arrays too, but tools that specialize in arrays (J, MATLAB, Mathematica) show their power here. Here's an example for two abscissas and one ordinate:

Gain (dB) vs. Frequency, Power

                        Frequency (Hz)
                  200     300     400     500
Power (mW)   1    10.1    10.2    10.3    10.4
             2    11.5    11.7    11.9    12.0
             3    12.6    12.7    12.9    13.2
             4    12.9    13.3    13.5    13.7
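If this factored, separable storage were expressed in the XML style used earlier, it might be sketched roughly as follows. The <Scale> and <Array> elements are assumptions made purely for illustration; they are not part of the schema developed in this book.

<Ordinate name="Gain">
   <Units name="dB"/>
   <Scale axis="Frequency" units="Hz">200 300 400 500</Scale>
   <Scale axis="Power" units="mW">1 2 3 4</Scale>
   <Array>
      10.1 10.2 10.3 10.4
      11.5 11.7 11.9 12.0
      12.6 12.7 12.9 13.2
      12.9 13.3 13.5 13.7
   </Array>
</Ordinate>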
The array data format can easily be converted to the column data format, although the reverse is not true, in general.

Self-Documenting Features

Data should always be self-documenting. A flat file of numbers has no meaning once it is separated from the context in which it was acquired.
The requirement to self-document derives from good lab practice in general. All abscissas and ordinates must be traceable to type, title, and units. Calibration offsets and other meta-data may also be of importance, so provision should be made for attaching general attribute data to each abscissa or ordinate axis. This should include the capability for "attribute=value" style attributes or the equivalent.

One of the great advantages of object orientation is that it facilitates self-documentation. If the measurement object comprises both the stimulus response measurement map description and the SRMM data, together, and all subsequent report generation and visualization draws from this unified object, you will accumulate a completely self-descriptive entity that paints a complete picture of your measurement. The SRMM object is to the measurement what the TPS is to the test.
Figure 9-6. Self-documenting SRMM object: a single Stimulus Response Measurement Map object bundles the measurement data together with its units, scales, calibration, instrument state, DUT ID, history, strategy, post processing, transforms, algorithms, and graphs.
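As one purely illustrative sketch of "attribute=value" style meta-data attached to an axis (the <Attribute> element and the particular attribute names and values are assumptions, not part of the schema developed earlier):

<Ordinate name="Resp Power">
   <Units name="mW"/>
   <Attribute name="calibration offset" value="0.25"/>
   <Attribute name="acquired" value="2004-06-15T14:02:00"/>
</Ordinate>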
Arrays as Elements

Some ordinates are not scalars. For example, the spectra ordinate is a set of spectral power measurements over some domain of frequencies. Therefore, column data and outer product array data structures must include the capability of handling arrays as elements. In fact, the most general case allows a data element to be a column data or outer product array. With such a hierarchical format, all possible measurement data can be efficiently and naturally represented. One way to avoid allowing arrays as elements, but still to permit the same hierarchical freedom, is to allow relations to be defined between data sets.
For example, instead of storing a list or array at one location in a table, it could be stored in a totally different table, with the two tables linked by a common code number or key. Those of you familiar with relational databases will know that most fields with array contents (other than strings) are stored in separate tables and linked by relations.

SQL Database Concepts and Data Objects

I have been talking about stimulus response measurement maps using mathematical terminology from calculus and linear algebra (arrays, vectors, manifolds, functions, mappings). There is, however, an alternative way to conceive SRMM measurements. You can think in database terms (tables, records, fields). The collection of ordinates for a given set of abscissas can be considered a record, with the individual abscissas and ordinates representing fields.

Not only can you think about data structures in database terms, you could actually use a SQL database to store measurement maps. This would allow us to conveniently sort and select portions of a larger dataset using standard SQL commands, and it would give us the ability to relate other kinds of contextual data to the measurement: date and time, configuration state of the equipment, identifying information from the DUT, and so forth. The idea of using a real database to store measurements has many positive aspects.

The database viewpoint is particularly relevant when arrays are elements. Although linked structures can be built many ways in order to accommodate this kind of element, the methodology of relational database design can help us immensely here. On the other hand, the creation of a hashed database can be somewhat wasteful for SRMM measurement data because random access to individual records is not a typical requirement. More often, the data is rotated, subdivided, or abstracted in a sequential process for the purpose of visualizing the data with graphs, charts, or plots. Moreover, the raw data itself tends to be very large, making storage efficiency the paramount requirement over efficiency of random access queries and over sorting and searching. Therefore, although the structure of the data will readily translate into typical database structures, a simpler, directly indexed data format for acquisition storage can often be a better choice.
HDF

I will sing the praises of XML a lot, but one alternative data format that meets most of the requirements for recording map data is the hierarchical data format (HDF). HDF is a multiobject file format standardized and maintained by the National Center for Supercomputing Applications (NCSA) that facilitates the transfer of various types of scientific data between machines and operating systems. Machines currently supported include HP, Sun, IBM, Macintosh, and ordinary PC computers running most any operating system, even Microsoft Windows.

HDF allows self-documenting of data content and easy extensibility for future enhancements or compatibility with other standard formats. HDF includes Java and C calling interfaces, and utilities to prepare raw image or data files for use with other software. The HDF library contains interfaces for storing and retrieving compressed or uncompressed 8-bit and 24-bit raster images with palettes, n-dimensional scientific data sets, and binary tables. An interface is also included that allows arbitrary grouping of other HDF objects.

Any object in an HDF file can have annotations associated with it. There are a number of types of annotations: labels are assumed to be short strings giving the name of a data object; descriptions are longer text segments that are useful for giving more in-depth information about a data object; file annotations are assumed to apply to all of the objects in a single file.

The scientific data set (SDS) is the HDF concept for storing n-dimensional gridded data. The actual data in the dataset can be any of the standard number types: 8-, 16- and 32-bit signed and unsigned integers and 32- and 64-bit floating-point values. In addition, a certain amount of meta-data can be stored with an SDS, including:
The coordinate system to use when interpreting or displaying the data
Scales to be used for each dimension
Labels for each dimension and the dataset as a whole
Units for each dimension of the data
The valid maximum and minimum values for the data
Calibration information for the data
Fill or missing value information
A more general framework for meta-data within the SDS data model (allowing 'name = value' style meta-data) is also possible. There is also allowance for an unlimited dimension in the SDS data model, making it possible to append planes to an array along one dimension.

HDF is an open, public domain standard. It is mature and well established. It represents a good alternative to XML or SQL databases for the storage and manipulation of synthetic instrument data. The HDF web page is located at http://hdf.ncsa.uiuc.edu/.
Chapter 10: Synthetic Instrument Markup Language: SIML

The intent so far has been to convey the basic concept of using eXtensible Markup Language (XML) as a language for describing measurements in a hardware-independent way. In a spirit of exploration, thus far I have proceeded intuitively and used XML freely with no regard for any completeness. Nor have I yet become ossified onto a specific standardized approach, or yet worried about validation. For example, I argued between the alternatives of having "name" as an attribute or as a tag element. Clearly there would be both more detail in a real-world implementation, and rules of consistency must be established.

Before XML could be applied effectively in the complex context of a real system, with the XML source document guiding the automated implementation of measurements, I need to impose some standardization so that people don't go writing whatever tags they want, spelling them and nesting them willy-nilly, however their mood strikes them. If allowed, the end result of that anarchy would be a document that was worthless as a source for machine processing. To avoid this failure, I need to create a well-defined synthetic instrument markup language, or SIML.

The "X" in XML means eXtensible, and so within the XML realm there is an established way for me to create my own SIML for use in defining synthetic instruments. I do this by establishing the desired document structure and describing it precisely with an appropriate document type definition (DTD) or schema. This creates the SIML schema for the synthetic instrumentation application of XML.
A DTD spells out what tags and attributes are legal for use in a particular XML application, and how those tags interrelate in the tree structure. The DTD allows a particular XML document to be validated in terms of correctness. With a DTD, you must use elements in strict accordance with the DTD if you want your document to parse as valid. A Schema is an extension of this idea; it's based on one of the schema description languages and gives a stronger way to specify the document structure. A valid, well-formed document can then be the input to an automated process that turns the abstract measurement into a real measurement implemented by a synthetic measurement system.

On the other hand, the description of real-world synthetic measurement systems is a dynamic endeavor. Requirements change. I wouldn't want to paint myself into a corner with a rigid descriptive standard that stifled innovation. Fortunately, the DTD and schema descriptions afford XML a good balance of stability and flexibility that allows construction of a stable yet flexible syntactic toolkit for describing measurements consistently in all their complexity. That's why I picked XML in the first place!
A DTD for Measurement Description

To begin the development of a DTD that gives a basic framework for verifying measurement descriptions, let's return to my simple oscilloscope example. Here is the XML description of the oscilloscope with a document type declaration added at the top. This represents a complete, valid XML document.

Example 10-1. Complete XML document
<?xml version="1.0" standalone="no"?>
<!DOCTYPE Instrument SYSTEM "Instrument.dtd">
<Instrument name="Oscilloscope">
   <Measurement name="Trace">
      <Abscissa name="Time"/>
      <Ordinate name="Voltage"/>
   </Measurement>
</Instrument>
The first line at the top of the document indicates that this document is XML format and not standalone. That means that there exists a DTD that it may be validated against.
The second line names this kind of document as "Instrument" and tells us where the DTD can be found. In this case the DTD is a local file, but it could just as easily be a URL, or even given in-line within the XML document itself. I alluded to the structured freedom DTDs afford, and this is one example. The DTD provides structure, but you are free to use whatever DTD you want. Should you want to extend the range of allowable elements in an "Instrument" document, you don't need to appeal to any standards committee; you just change the DTD. What does this DTD look like? It's a file named Instrument.dtd with the following lines in it:

Example 10-2. Simple DTD
<!ELEMENT Instrument (Measurement*)>
<!ELEMENT Measurement (Abscissa*, Ordinate*)>
<!ELEMENT Abscissa EMPTY>
<!ELEMENT Ordinate EMPTY>

<!ATTLIST Instrument name CDATA #REQUIRED>
<!ATTLIST Measurement name CDATA #REQUIRED>
<!ATTLIST Abscissa name CDATA #REQUIRED>
<!ATTLIST Ordinate name CDATA #REQUIRED>
Each line starting with <!ELEMENT declares an element and what it may contain. The first line,

<!ELEMENT Instrument (Measurement*)>

says that the "Instrument" element contains zero or more Measurement elements (the star after the "Measurement" label indicates the 'zero or more' quantifier). The other lines make similar statements about what each element contains. The final lines say that Abscissa and Ordinate elements are EMPTY. That won't be true for very long, but it is true for the simple oscilloscope in Example 10-1. Also illustrated in this example is how attributes are defined with a declaration like this:

<!ATTLIST Instrument name CDATA #REQUIRED>
This declaration says the "Instrument" element has one required attribute named "name" that will contain Character Data (CDATA). Now that you've seen the basics, let's try a more complex example. Suppose you wanted to validate the network analyzer example. You can use the following DTD:

Example 10-3. More sophisticated DTD

<!ELEMENT Instrument (Measurement*)>
<!ELEMENT Measurement (Stimulus?, Response?)>
<!ELEMENT Stimulus (Port | PortGroup)+>
<!ELEMENT Response (Port | PortGroup)+>
<!ELEMENT Port (Abscissa | Ordinate)*>
<!ELEMENT PortGroup (Port+, (Abscissa | Ordinate)*)>
<!ELEMENT Abscissa (Units?, (Value+ | Range)?)>
<!ELEMENT Ordinate (Units?)>
<!ELEMENT Units EMPTY>
<!ELEMENT Value (#PCDATA)>
<!ELEMENT Range EMPTY>

<!ATTLIST Instrument name CDATA #REQUIRED>
<!ATTLIST Measurement name CDATA #REQUIRED>
<!ATTLIST Port name CDATA #REQUIRED>
<!ATTLIST Abscissa name CDATA #REQUIRED>
<!ATTLIST Ordinate name CDATA #REQUIRED>
<!ATTLIST Units name CDATA #IMPLIED>
<!ATTLIST Range start CDATA #REQUIRED stop CDATA #REQUIRED step CDATA #REQUIRED>
This DTD has been expanded to handle the additional elements. There are also some new structural nuances. Studying Example 10-3, a reader with a sharp eye will see how to express alternation with the | symbol, that is, how an element can contain either one or another of some selection of elements. There are also examples of elements specified as one-or-more (+) or as zero-or-one (?) element wildcards.

Continuing on from here, explaining the detailed syntax of a DTD or XML Schema simultaneously with its application to synthetic instruments is outside the scope of this book. At this point, the reader who wants to take this technology further is well advised to learn more about the details of XML through any of the fine XML books[B6] on the shelf of your favorite coffee-bar-bookstore. Specific resources for XML applied to synthetic instruments are available at the www.synthetic-instruments.com web site. In the following sections I will outline some more of the XML detail needed in a typical synthetic measurement system description.
More SIML Details

Our measurement descriptions can be divided into the following major categories (a skeleton SIML document organized along these lines is sketched after the list):
Global Measurement Elements prepare the system for a measurement and affect the way measurements are performed in a general sense. They allow the client to control the context and properties of measurement execution. They can be used to place limits on a measurement, adjust global properties of the system, or select stimulus and response interfaces to the DUT. Measurement system ports and modes fall into this category as well as being abscissa elements.
Calibration Processing Strategy Elements specify how maps are canonicalized and data is to be calibrated. They affect what data is taken and how data is processed after the measurement. Calibration strategy can alter or completely redefine the actual measurements to be performed by transforming the map.
Abscissa Elements describe the measurement axes, determining the domain of stimuli applied to the DUT. You have already seen abscissa elements in my simple descriptions. The abscissa elements define both the measurement axes and their ordering. Abscissa elements are often stimuli, but may be modes or ports or driven responses based on map inversion. In choosing the ordering of the abscissas, you decide which independent variable is varied most rapidly.
Ordinate elements describe the measurements to be performed and data to be recorded over the domain. The members of the defined list of ordinates represent the actual measurements performed on the DUT. Typically these are responses, but sometimes may be stimuli after map inversion.
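The skeleton below suggests how these four categories of elements might sit together in a single SIML document. It is only an orientation sketch under my own assumptions; in particular, the <Global> and <CalibrationStrategy> element names are invented here for illustration and are not defined elsewhere in this book.

<Instrument name="Example">
   <Measurement name="Example Measurement">
      <!-- global measurement elements: context, limits, ports, and modes -->
      <Global> ... </Global>
      <!-- calibration processing strategy: how the map is canonicalized -->
      <CalibrationStrategy> ... </CalibrationStrategy>
      <!-- abscissa elements: the measurement axes and their ordering -->
      <Stimulus>
         <Port name="input">
            <Abscissa name="Frequency"> ... </Abscissa>
         </Port>
      </Stimulus>
      <!-- ordinate elements: the data recorded over the domain -->
      <Response>
         <Port name="output">
            <Ordinate name="Power"> ... </Ordinate>
         </Port>
      </Response>
   </Measurement>
</Instrument>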
Locked Abscissas

Thus far, I have described abscissas as independent variables. That description pretty clearly implies that abscissas are to be set independently. Indeed, it is true that by default, abscissa domains are rastered through their fully independent outer product. There are cases, however, when it's a good idea to restrict this independence. As I discussed in the section titled "Domains," abscissa domains are sometimes restricted into narrower regions by being locked or banded together. If an axis is locked, it is no longer varied independently, but rather is varied in tandem with another selected independent axis, possibly with some fixed offset. Finally, if it is banded to another axis, it is varied over a restricted neighborhood near the other axis. Abscissas can only be locked or banded to an independent axis with the same units.

I will express locking and banding in SIML through some new elements and by another venture into the dangerous realm of names and reference. Typical uses of locking and banding include locking some response parameter to a stimulus parameter. The locking thereby would overlap stimulus and response elements. In general, since any two abscissas can be locked or banded, overlapping parent elements like stimulus and response, the locking associations may not follow the tree structure thus far defined for SIML. Given how locking and banding associations may transcend strictly tree-structured associations, SIML must say explicitly what abscissa is the master of any other abscissa to which it is locked or banded. The only practical way to do this is with an identifier.

The syntax that accomplishes banding and locking can be seen in the SIML definition of a distortion analyzer given in Example 10-4. This instrument makes a measurement of two-tone intermodulation. The stimulus comprises two separate tones of slightly different frequency, but of the same power. A response measurement is made of the power in a distortion product at a known frequency. One stimulus tone is at frequency f1 and the other is at f2, with their frequency spacing f∆ = f2 – f1. The response to measure is the third-order intermodulation product located at 2f2 – f1. In Example 10-4, I assume that f∆ = 0.1 MHz, putting 2f2 – f1 at 0.2 MHz above f1.

Example 10-4. Distortion analyzer
<Measurement name="Third Order Intermodulation">
   <Stimulus name="Tone1">
      <Abscissa name="Frequency">
         <Units name="MHz"/>
         <Value>1.000</Value>
         <Value>2.000</Value>
         <Value>3.000</Value>
      </Abscissa>
      <Abscissa name="Power">
         <Units name="dBm"/>
         <Value>-10.0</Value>
      </Abscissa>
   </Stimulus>
   <Stimulus name="Tone2">
      <Abscissa name="Frequency">
         <Units name="MHz"/>
         <Locked stimulus="Tone1" abscissa="Frequency" offset="0.1"/>
      </Abscissa>
      <Abscissa name="Power">
         <Units name="dBm"/>
         <Locked stimulus="Tone1" abscissa="Power" offset="0.0"/>
      </Abscissa>
   </Stimulus>
   <Response>
      <Abscissa name="Frequency">
         <Units name="MHz"/>
         <Locked stimulus="Tone1" abscissa="Frequency" offset="0.2"/>
      </Abscissa>
      <Ordinate name="Power">
         <Units name="dBm"/>
      </Ordinate>
   </Response>
</Measurement>
Note the new "Locked" element tag I have introduced:

<Locked stimulus="Tone1" abscissa="Frequency" offset="0.1"/>
This empty element serves to specify the value of the abscissa. The element attributes unambiguously specify the referenced master abscissa. Not only do they say what abscissa type (Frequency) is master, they also say which stimulus (Tone1) contains that master abscissa. The combined attributes are unambiguous. The SIML schema would be in error if there were any ambiguity in the locking specification. Abscissas can only be locked to other abscissas with compatible units. Trying to lock abscissas with incompatible units (for example, a frequency abscissa to a power abscissa) would be another schema error. In this case, we are locking frequency in MHz to frequency in MHz, so there is no problem with unit conversion or with math. See how I have given a frequency offset for the locked abscissas using the "offset" attribute? When (Tone1, Frequency) is set at 1.0 MHz, (Tone2, Frequency) will be at 1.1 MHz, and the response (unnamed) will be measured at a frequency of 1.2 MHz.

Banded Abscissas

Banded abscissas work just like locked abscissas, but with an added twist. In addition to an offset, banded abscissas can have an increment and count. The banded abscissa can then step, independently, beginning at the specified offset from the master. For example, in our distortion analyzer it might be a good idea to dither the power of one of the stimulus tones slightly so as to be sure to find the spot where the two response tone powers are exactly equal. It also makes sense to measure the fundamentals as a response rather than relying on stimulus calibration. Here's an enhanced distortion analyzer that does this using abscissa banding.

Example 10-5. Enhanced distortion analyzer
<Measurement name="Third Order Intermodulation">
   <Stimulus name="Tone1">
      <Abscissa name="Frequency">
         <Units name="MHz"/>
         <Value>1.000</Value>
         <Value>2.000</Value>
         <Value>3.000</Value>
      </Abscissa>
      <Abscissa name="Power">
         <Units name="dBm"/>
         <Value>-10.0</Value>
      </Abscissa>
   </Stimulus>
   <Stimulus name="Tone2">
      <Abscissa name="Frequency">
         <Units name="MHz"/>
         <Locked stimulus="Tone1" abscissa="Frequency" offset="0.1"/>
      </Abscissa>
      <Abscissa name="Power">
         <Units name="dBm"/>
         <Banded stimulus="Tone1" abscissa="Power" offset="-0.05" increment="0.01" count="10"/>
      </Abscissa>
   </Stimulus>
   <Response>
      <Abscissa name="Frequency">
         <Units name="MHz"/>
         <Banded stimulus="Tone1" abscissa="Frequency" offset="-0.1" increment="0.1" count="4"/>
      </Abscissa>
      <Ordinate name="Power">
         <Units name="dBm"/>
      </Ordinate>
   </Response>
</Measurement>
In Example 10-5, I used banding for two purposes. The first purpose is to dither the stimulus power over ten steps spaced by 0.01 dB. The second purpose is to measure the power of 4 response frequencies: the two intermod products along with the two fundamentals. There is one final interesting comment to make about the distortion analyzer map in Example 10-5. I used banded abscissas to dither the stimulus power of one of the tones slightly so as to be sure to find the spot where the two response tones are equal. I could have, instead, specified the response powers as locked abscissas. That would have implied an inverse map for most measurement systems. The measurement system would have to find the necessary stimulus power that made the response powers equal. One way it could find that stimulus setting would be for it to invert the map and dither the stimulus, as I have shown in Example 10-5. The dithered axis would then be collapsed by interpolating the data to find the point where the response powers were equal and returning data for just that interpolated point.
Constraints

Constraints on manifolds are needed for many reasons. Maybe the foremost among these is the need to protect hardware from damage. There are cases where either the DUT or the measurement system is vulnerable to damage should a stimulus or response venture into some forbidden region. The simplest example of this is power supply voltage. Most systems can't withstand a supply voltage much higher than they are designed to operate from. There are other cases where incorrect frequencies, improper control sequencing, or excess signal power levels can lead to catastrophe.

A less dire reason for constraints, but nonetheless an important one, is to guide the process of calibration strategy. I have noted in the section called "Inverse Maps" that calibration strategy that relies on map inversion can, in some cases, lead to multiple solutions or branches. Proper application of constraints can eliminate ambiguous branches and simplify the problem of canonicalizing and optimizing maps.

The schema presented in Example 10-6 shows how to apply numeric constraints on an ordinate. The <Constraint> element is a sibling to <Units> and naturally is measured with the same units as the element being constrained.

Example 10-6. Constraints

<Ordinate name="Voltage">
   <Units name="V"/>
   <Constraint role="hard" max="7.0"/>
   <Constraint role="soft" max="5.5"/>
</Ordinate>
Constraints are specified as "hard" or "soft" thresholds. If a "hard" limit is exceeded, the measurement immediately aborts. If a soft limit is exceeded, the measurement continues after some programmed corrective action is taken by the system. Obviously, other "role" attributes could be defined, as could other kinds of thresholds. Reference identifiers could be used to specify relative constraints (for example, Voltage1 always greater than Voltage2).
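A hedged guess at what such a relative constraint might look like (the greaterThan attribute is purely illustrative and is not defined anywhere in this book):

<Ordinate name="Voltage1">
   <Units name="V"/>
   <Constraint role="hard" greaterThan="Voltage2"/>
</Ordinate>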
Modulation

In Chapter 9, I described the signal coding hierarchy and how measurement systems can be asked to measure coded aspects of signals. As a hierarchy, signal coding can be expressed in a tree structure, and therefore XML can be used as a way to specify signal coding. I've already shown a little of this with the distortion analyzer example. The response tuned frequency is really a mode abscissa for removing the carrier component from a bandpass signal, leaving just the modulation. Other response mode abscissas can be defined in SIML, including matched filtering. On the stimulus side, we can put together encoded waveforms that are implemented using compound stimulus capabilities the hardware possesses, or, after stimulus map canonicalization, are synthesized with DSP algorithms in the stimulus controller. Consider this example where I define an FM modulated stimulus.

Example 10-7. Signal encoding
<Stimulus name="Tone1">
   <Abscissa name="Frequency">
      <Units name="MHz"/>
      <Value>1.000</Value>
   </Abscissa>
   <Abscissa name="Power">
      <Units name="dBm"/>
      <Value>-10.0</Value>
   </Abscissa>
   <Envelope type="FM">
      <Abscissa name="Modulation Frequency">
         <Units name="kHz"/>
         <Value>1.0</Value>
      </Abscissa>
      <Abscissa name="Deviation">
         <Units name="kHz"/>
         <Value>5.0</Value>
      </Abscissa>
   </Envelope>
</Stimulus>
Ordinate Modifiers: Averaging and Statistical Manipulations

Averaging is an option on many conventional instruments, especially those capable of digital storage. This option is applied to an ordinate. The ordinate y(x) measurement is repeated N times and its value averaged with the usual formula:

   ȳ(x) = (1/N) ∑ y_i(x), with the sum taken over i = 1 to N
The average is used as the resulting ordinate. Unless averaging is an atomic feature of the hardware, I would expect the usual way to implement averaging would be to canonicalize it as an axis of repeated ordinate measurements and then collapse that axis with the above averaging formula. Here's how averaging can be specified in SIML.

Example 10-8. Averaging

<Ordinate name="Power">
   <Units name="mW"/>
   <Average count="10"/>
</Ordinate>
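One hedged guess at the canonicalized form that calibration strategy might produce for such an averaged ordinate, using the repeated-axis-plus-collapse idea described above (the "Repeat" abscissa and the mean() primitive are assumptions for illustration only):

<Ordinate name="Power">
   <Units name="mW"/>
   <Measurement>
      <Response>
         <Abscissa name="Repeat">
            <Range start="1" stop="10" step="1"/>
         </Abscissa>
         <Ordinate name="Power">
            <Units name="mW"/>
         </Ordinate>
      </Response>
      <Collapse axis="Repeat">
         <Units name="mW"/>
         <Algebraic>mean(Power)</Algebraic>
      </Collapse>
   </Measurement>
</Ordinate>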
Chapter 11: Ten Mistakes in Synthetic Measurement System Design

Throughout this book, I have attempted to give you a clear and consistent plan for the design of synthetic instrumentation. After reading my plan, and after considering your own individual application requirements, you should be able to design and develop your own SI that reaps the promised benefits of this design approach.

Unfortunately, it has been my experience that the best laid plans to develop synthetic instrumentation go oft astray when battered by the erratic yet often powerful forces that beset system development in real-world industry. Designers don't really want to stray from the plan, but in the fast pace of development, it's easy to get hit by a latecomer requirement or an unexpected design shortfall, and get accidentally derailed as a consequence. By the time you realize what has happened, you have left the road leading to Synthetic Instrument City, and are sidetracked, on your way to Modular-ville or Rack-em-Stackopolis with no way to turn back.

Forewarned is forearmed, and in that spirit I will list ten common detours that designers can encounter during the development of synthetic instrumentation. My hope is that, knowing in advance about these common diversions, you will be able to navigate through the rough spots and ultimately stay on track.
Fixing Performance or Functionality Shortfalls Exclusively by Adding Hardware

Synthetic instrumentation systems are a hybrid of software and hardware developed to meet some measurement need as expressed by a set of specifications. As the system is developed, performance estimates are made, and at some point the predicted performance is held up next to the required performance, and compared. The system designers would have to be living a pretty lucky and enchanted life if all the required performance is met. Normally there is a shortfall somewhere.

Faced with an explicit shortfall in the performance of a hybrid of software and hardware, probably the most common reaction is to modify the hardware in some way so as to address it, rather than modifying software, or the system design as a whole, or the requirement itself. Instead, just hardware is changed; frequently something is added or complexity is increased. This is a mistake.

The reason this is a mistake is because any system-level requirement shortfall is a system-level design shortfall. As such, it should be addressed at the system level in the best manner possible. In some cases this may involve changing hardware, but an unbiased evaluation must be made. Unfortunately, software engineers are often not even invited to consider solutions to things that are "obviously" hardware oriented at the system level. I believe this is wrong and represents a primary source of failure and schedule/budget overrun in synthetic measurement system development.

To avoid this mistake, designers should be aware of the solve-with-hardware knee-jerk bias and compensate somehow. For example, software engineers should go to hardware meetings, learn the issues, and solve them as best they can. There should be a sincere effort to develop a software solution to all requirement shortfalls even if hardware seems the thing that must change. Sometimes you might be surprised.

"At first I thought we needed an amplifier, but integration solved our sensitivity problem."

"I was sure we needed another filter, but we coded an adaptive nulling algorithm in the DSP to remove the interference."

"That spec was impossible to meet with the hardware we were planning, so we convinced the customer that their measurement could be made just as accurately with the system as is."

"By multiplexing we were able to avoid the need for another channel."
Those are just a few examples of what system designers might come up with after they pushed through their initial instinct to add hardware to solve system-level problems.
Fixing Hardware Mistakes with Software

It may seem that this is the opposite of the first mistake. And it is. "Fixing it with hardware" and "fixing it with software" are the Scylla and Charybdis of synthetic measurement system design. It's difficult to steer a wise and safe course that does not wreck on either peril. Now I am concerned about an over-dependence on software to provide solutions for correctable mistakes made in hardware. Hardware-oriented system engineers are prone to say "we'll just fix it in software" when a shortfall appears that "obviously" can be fixed with an appropriate algorithm. The stepping motor that spins backward, the connector pins that are scrambled, the register that can't be read after being written, the sensor that requires compensation for numerous unrelated parameters—the list is endless—are all examples of things that really should be fixed by the guy that made the mistake.

Again, the solution to this side of the dilemma is to keep a systems view of all things. Software team members need to be present when these decisions are made, and need to be familiar with the hardware issues and politically empowered in order to be able to say no to "just fixing it with software." Perhaps more beneficially, the hardware team members, who are so often oblivious to the innards of the system software, need to get up to speed on these details. A suggestion to "just fix it with software" is not acceptable. Instead, the suggestion should be to "fix it precisely so, at this exact point in the software system design."
Adding Modes or Features Dedicated to Specific Measurements

"It's so much easier just to add a voltmeter to the system than to add all that signal conditioner stuff and DSP we need to measure boring old DC voltage with our fancy 96 GHz digitizer. After all, this fancy digitizer fundamentally makes a rotten voltmeter. A slower but more precise A/D is more appropriate. So let's just add the voltmeter module, shall we?" —Anonymous
OK. That may be an extreme example, but you should get the point. The tendency is always to add a new, conventional-style instrument to an SMS rather than to do the extra work needed to make the synthetic design handle the full generality of measurements that need to be made.

This is a mistake in SMS design, although I'll be the first to admit that other factors may play a role here. Time and money may be saved in the short run by abandoning the purist SMS design philosophy. I will not argue that point. But in the long run, the mistake compounds and threatens everything. In the long run you find traditional instruments doing all the measurements, and only a ghost of a synthetic design lurking in the system. All the redundancy and instrument specificity you tried to avoid is now back.
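For contrast, here is a minimal sketch of the purist alternative to bolting on a voltmeter module: synthesize the DC voltage measurement from the wideband digitizer path itself by integrating (averaging) a long record, which raises the effective resolution well beyond that of a single conversion. The record length and noise level are invented, and the sketch assumes uncorrelated noise.

```python
import numpy as np

def synthesize_dc_voltmeter(samples):
    """Estimate a DC level and its standard uncertainty from one
    digitizer block, instead of adding a dedicated voltmeter module."""
    mean = samples.mean()
    # Standard uncertainty of the mean, valid for uncorrelated noise.
    u = samples.std(ddof=1) / np.sqrt(len(samples))
    return mean, u

# Hypothetical block: 1 V DC buried in 20 mV rms of wideband noise.
rng = np.random.default_rng(0)
block = 1.0 + 0.020 * rng.standard_normal(1_000_000)
volts, uncert = synthesize_dc_voltmeter(block)
print(f"{volts:.5f} V with standard uncertainty {uncert:.5f} V")
```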
Designing Synthetic Instruments Procedurally

Object-oriented software design techniques have been widely known for over a decade, but there is still a pervasive bias toward procedural software design. This bias is very strong in the ATE community. After all, if you talk about "test procedures" and "test sequences," everyone knows what you mean. But if you use the word "object," the eyes around you glaze over and people start to drift away to refresh their coffee.

Using procedural methodologies to design synthetic instruments is a big mistake. The reason it is such a big mistake is that synthetic instruments are built on a hierarchy that naturally fits the OO concept of inheritance. An RMS distortion meter is a special kind of RMS voltmeter, which is a special kind of voltmeter, which is a special kind of meter, which is a special kind of instrument. Similarly, maps, abscissas, ordinates, signals, and block acquisitions form another family tree. Not taking advantage of the natural structure of this hierarchy results in software redundancy, which leads to a maintenance nightmare. If someone improves the voltmeter, the improvements are not necessarily reflected in the RMS voltmeter or the RMS distortion meter unless somebody redundantly improves them all in the same way.

Failing to orient the design around objects results in related things being redundantly scattered all throughout the system. Units are a classic example of this. Some ordinates are best expressed in terms of a certain unit: volts, amps, or dBm, for example. It seems to me that the best place to say what units a given ordinate has is in the ordinate itself. But without object orientation, information about the units a given ordinate is expressed in is hidden in all sorts of places: reports, test parameters, pass/fail criteria, database field names, graphs, and so on. Ironically, sometimes you look at the code that computes the ordinate itself and, unless you are lucky enough to find a comment, the units being used are anyone's guess!

It is my recommendation that all synthetic instrument designers learn OOD principles. They should resist the temptation to just put one foot in front of the other, as they have done in the past, and consider where things really should go in order to eliminate redundancy. The limitations imposed by the realities of non-OO tools and legacy applications are fading. Newer tools and applications are more amenable to system-level OOD. But no matter how good the tools, SMS design will never turn object-oriented unless the designers force themselves to think that way.
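A minimal sketch of the hierarchy just described, with invented class names: each meter inherits from a more general meter, and the ordinate carries its own units, so an improvement to the base voltmeter, or to the way units are handled, shows up everywhere automatically.

```python
from dataclasses import dataclass
import math


@dataclass
class Ordinate:
    """A measured value that carries its own units."""
    value: float
    units: str


class Instrument:
    def measure(self, samples) -> Ordinate:
        raise NotImplementedError


class Voltmeter(Instrument):
    units = "V"

    def measure(self, samples) -> Ordinate:
        # Mean (DC) voltage of the acquired block.
        return Ordinate(sum(samples) / len(samples), self.units)


class RMSVoltmeter(Voltmeter):
    def measure(self, samples) -> Ordinate:
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return Ordinate(rms, self.units)


class RMSDistortionMeter(RMSVoltmeter):
    def measure_distortion(self, residual, fundamental) -> Ordinate:
        # Distortion as the ratio of RMS residual to RMS fundamental, in dB.
        r = super().measure(residual).value / super().measure(fundamental).value
        return Ordinate(20 * math.log10(r), "dB")


print(RMSVoltmeter().measure([1.0, -1.0, 1.0, -1.0]))  # Ordinate(value=1.0, units='V')
```

Nothing about this sketch is specific to any particular tool; the point is only that the inheritance structure and the units each live in exactly one place.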
Meeting Legacy Instrument Specifications

Although one of the greatest advantages of synthetic measurement systems is their ability to perform legacy measurements in lieu of some obsolete instrument, there is no more certain way to disembowel a synthetic measurement system design effort than to specify that the synthetic instrument needs to replace a legacy instrument, or to meet the same specifications as some legacy instrument. This approach invariably leads the SMS in directions that are counterproductive. The result is a replacement for the legacy system that is not in any true sense a synthetic instrument.

The reason this approach goes wrong is that the legacy instrument specifications were chosen in the context of a specific implementation of that old instrument. The legacy specifications reflect that implementation, often more than they reflect the underlying measurement. Therefore, if you try to use the legacy specifications as any sort of guidance for a synthetic implementation, you are likely to be led astray. You end up addressing issues that are irrelevant to the goal of making the underlying measurement.

Consider, for example, a measurement of relative humidity. Maybe you want to replace wet/dry bulb thermometers with a new digital humidity gage based on hygroscopic polymer sensors. Would you use all the specs on the wet/dry bulb system as a specification for the digital system? The size of the liquid reservoir? The minimum air flow for evaporation? Of course not. What you would do is abstract the measurement performed by the wet/dry bulb system and require the digital system to do the same measurement.

To take another specific example, consider an RF spectrum analyzer. This instrument is a form of radio receiver that sweeps across some wide frequency band and plots the ordinate of power versus frequency. In a legacy spectrum analyzer data sheet, you will find all sorts of specifications about the "sweep speed" and the "video averaging," but in a modern synthetic spectrum analyzer there may be a CCC system that digitizes and computes. There is no sweep; there is no video. To understand them at all, the meaning of those old specifications needs to be recast in terms that are relevant to a DSP implementation. And after you have done the work to figure out what "sweep speed" actually means when nothing is sweeping, what have you gained? Yes, there is some correspondence between a wagon wheel, a car tire, and an oar, but does that correspondence tell us anything of value when it comes to designing these devices? I doubt it.

One might argue that some legacy instrument specifications are relevant to the measurement performed. Obviously, the size, weight, dimensions, interfaces, and power requirements of a legacy instrument are irrelevant, but the accuracy specifications aren't. It may seem reasonable to believe that it's possible to select those legacy specifications that have relevance to the underlying measurement, and to use just those as specifications for the synthetic instrument. Unfortunately, there is no such free lunch. Legacy instrument specifications (particularly those that appear on manufacturers' data sheets) are always tuned to the strengths and weaknesses of a particular hardware implementation and are always colored by marketing considerations. There is an implicit desire to put one's best foot forward. The specifications that make it to data sheets are chosen to look good to customers and sell the product, not to serve as a specification for manufacturing the product. Data sheets are poor sources of quantitative information regarding accuracy. Such quantitative information is seldom stated with accuracy estimates in terms of standard uncertainty; rather, it is given as a number with no error quantification, or possibly with absolute "accuracy" numbers. Consequently, any quantitative meaning is absent.
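Returning to the spectrum analyzer example, here is a small worked sketch, with invented numbers, of what recasting a legacy "sweep" specification in DSP terms looks like: an FFT-based analyzer covers the whole span in one acquisition whose duration is set by the desired resolution bandwidth, so the legacy notion of sweep time largely dissolves. The window factor and the legacy sweep-time rule of thumb are rough approximations, not specifications.

```python
# Hypothetical recasting of a legacy swept-analyzer spec in DSP terms.
# For a windowed FFT, resolution bandwidth is roughly k / T_acq, where
# T_acq is the acquisition time and k is a window-dependent constant.
span_hz = 10e6          # frequency span to cover (invented)
rbw_hz = 1e3            # desired resolution bandwidth (invented)
window_factor = 1.5     # roughly the ENBW of a Hann window, assumed

t_acq = window_factor / rbw_hz      # one acquisition covers the full span
points = int(span_hz * t_acq)       # approximate number of bins across the span

# A swept-tuned analyzer is often limited to roughly k * span / rbw**2 seconds.
legacy_k = 2.5                      # typical proportionality constant, assumed
t_sweep_legacy = legacy_k * span_hz / rbw_hz**2

print(f"FFT acquisition: {t_acq * 1e3:.1f} ms for about {points} bins")
print(f"Comparable legacy sweep: {t_sweep_legacy * 1e3:.0f} ms")
```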
You should realize, therefore, that legacy instrument specifications are a qualitative historical record of deprecated technology, not something carved on stone tablets to be preserved in our culture for thousands of years. They represent a distorted snapshot of what was possible in the past with a particular instrument. From them, you should attempt to glean a rough qualitative idea of what the legacy instrument was capable of, measurement-wise and accuracy-wise, keeping in mind that past and present marketing bias colors everything, and quantitative statements of accuracy are a contradiction in terms.

From the qualitative understanding gained by studying legacy specifications, in combination with the present-day test and measurement goals for the new synthetic instrument, you can develop specific measurement requirements that need to be addressed by a new design. You can then produce a design that seems to fit the need. This design can be analyzed, simulated, and prototyped to determine its quantitative performance in the required test scenario. The loop then closes as better requirements are written and the design is revised. The end result is a new instrument that addresses today's need.
Developing Stimulus Separate from Response

As I discussed earlier, the one unique redundancy that synthetic instrumentation can eliminate is the set of response components dedicated to calibrating the stimulus, and the stimulus components dedicated to calibrating the response. By using a system-level optimization, you can readily factor out these sorts of redundancies. If stimulus and response subsystems are developed in isolation, it becomes impossible to design with the assumption of closure. Redundant response functions must then be added to the stimulus, and vice versa. Cost and complexity go up.

In general, this is bad. However, it must be said that there are worse crimes on this list. Sometimes circumstances dictate that a stimulus system must be used in isolation, or at least that there is a firm requirement for it to have the capability of independent operation. In these situations, adding redundant components to close leveling loops and provide calibration signals is simply the cost of meeting the requirement.
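A minimal, hypothetical sketch of designing with closure: rather than adding a dedicated detector to the stimulus subsystem, the system routes the stimulus through a loopback path into its own response digitizer and derives a level correction from that measurement. The function names and numbers are stand-ins, not a real driver API.

```python
def calibrate_stimulus_level(set_level, measure_dbm, connect_loopback,
                             requested_dbm=-10.0):
    """Derive a stimulus level correction using the response side (closure),
    instead of adding a dedicated detector to the stimulus subsystem.

    All three callables are hypothetical stand-ins for real SMS drivers."""
    connect_loopback()                 # route stimulus output to response input
    set_level(requested_dbm)           # program the nominal level
    measured_dbm = measure_dbm()       # response side measures what came out
    correction_db = requested_dbm - measured_dbm
    return correction_db               # apply to future level settings

# Example with trivial stand-ins: the "hardware" happens to run 1.7 dB low.
corr = calibrate_stimulus_level(
    set_level=lambda dbm: None,
    measure_dbm=lambda: -11.7,
    connect_loopback=lambda: None)
print(f"level correction: {corr:+.1f} dB")   # +1.7 dB
```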
Not Combining Measurements

The power of the stimulus response measurement map view of measurements is that it allows multiple measurements to be combined into a single, fast acquisition of a single map. When the system is allowed to combine measurements, acquiring data takes far less time than when the same measurements are made separately; in some cases, orders of magnitude less time.

Unfortunately, there seems to be a reluctance to combine measurements. This reluctance is a result of the way TPSs are constructed: each measurement stands alone in a firmly procedural sequence. In the normal worldview, the idea that measurements might be nonprocedural entities borders on insanity. That one could trust an instrument to combine measurements into a high-speed, optimized map is absurd. Inevitably one must specify each switch to throw and each knob to twist for each measurement, mustn't one? Anything else is madness, isn't it?

No, it isn't madness. It is a fact borne out in practice that a combined map performs the same set of integrated measurements faster and more efficiently than the same measurements can be done as separate tests. The true madness is to ignore the gain in performance this represents and continue to do measurements separately.
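A minimal sketch, with invented names and numbers, of what combining measurements means in practice: one pass over a shared abscissa acquires a single map, and several ordinates (gain and second-harmonic level here) are extracted from it afterward, instead of two separate procedural tests that each re-sweep the hardware.

```python
def acquire_map(freqs_hz, acquire_spectrum):
    """One acquisition per abscissa point; `acquire_spectrum` is a
    hypothetical stand-in for the real CCC acquisition routine."""
    return {f: acquire_spectrum(f) for f in freqs_hz}

def extract_ordinates(srm_map):
    """Several measurements extracted afterward from the same map."""
    gain = {f: s["fund_dbm"] - s["stim_dbm"] for f, s in srm_map.items()}
    harm2 = {f: s["h2_dbm"] - s["fund_dbm"] for f, s in srm_map.items()}
    return gain, harm2

# Invented acquisition results for three frequencies.
fake = {1e6: dict(stim_dbm=-10, fund_dbm=10.1, h2_dbm=-45),
        2e6: dict(stim_dbm=-10, fund_dbm=9.8,  h2_dbm=-47),
        3e6: dict(stim_dbm=-10, fund_dbm=9.5,  h2_dbm=-50)}
srm = acquire_map(fake.keys(), fake.get)
gain_db, h2_dbc = extract_ordinates(srm)
print(f"{gain_db[2e6]:.1f} dB gain, {h2_dbc[2e6]:.1f} dBc second harmonic")
```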
Hardware Modularity as a Distraction

As I have explained, synthetic instruments are not necessarily modular. In fact, the whole idea of running specific measurements on general-purpose hardware tends to discourage modular approaches. After all, the point of modularity is to be able to conveniently plug in the specific hardware you need. If you can do all your measurements with the same CCC cascade, no modules need to be swapped. The hardware can just sit there, happily, doing all sorts of different measurements. All the modularity has been swept into software.

Thus, efforts toward encouraging modularity in synthetic instrument designs can be something of a false god. These modularization efforts can drive the design away from a pure synthetic approach. The easier it is to put in different hardware modules, the less incentive there is to make one particular cascade of hardware do all the measurement tasks.
On the other hand, the practical reality of realizable hardware may dictate that a single CCC cascade cannot do everything you need. Consequently, you will need to switch in a new conditioner, or codec, or controller. You might as well modularize the portion being replaced, so long as you do so in a manner that doesn't undermine the foundation of your synthetic instrument system.
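One way to modularize the replaced portion without undermining the synthetic foundation is to hide the module behind the role it plays in the CCC cascade rather than behind an instrument-shaped interface. A minimal sketch, with invented class names:

```python
from abc import ABC, abstractmethod


class Conditioner(ABC):
    """The role a conditioner plays in the cascade, not a particular box."""

    @abstractmethod
    def condition(self, signal):
        ...


class BasebandConditioner(Conditioner):
    def condition(self, signal):
        return signal                      # pass-through for the example


class MicrowaveDownconverter(Conditioner):
    def __init__(self, lo_hz):
        self.lo_hz = lo_hz

    def condition(self, signal):
        # Real mixing and filtering would happen here; omitted in the sketch.
        return signal


def run_measurement(conditioner: Conditioner, codec, samples):
    """The measurement only knows about the role, so swapping hardware
    modules does not ripple through the measurement software."""
    return codec(conditioner.condition(samples))


print(run_measurement(MicrowaveDownconverter(lo_hz=20e9), sum, [0.1, 0.2, 0.3]))
```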
Bad Lab Procedure

This mistake has been alluded to several times and is a contributor to other mistakes in this list. Even so, it deserves to be listed by itself, as it is such a grievous and pernicious error.

Anyone who has taken an introductory college-level lab course in any hard science, be it physics or chemistry or biology, has been drilled in proper measurement procedure. You learned how to observe and take data from your observations, again and again. You learned how to use control experiments and other techniques to avoid data tainted by observer bias and wishful thinking. You learned how to tabulate and statistically analyze data, how to calculate sample mean and variance, perform confidence testing, draw X/Y plots with proper divisions, markings, and labeling, and so forth. These are basic metrologic and scientific skills that anyone taking these courses either learned (at least a little) or failed the course.

Why, then, is evidence of a proper grasp of scientific and metrologic technique so seldom seen in the operation of modern automated measurement systems? This problem is not specific to synthetic instruments, but as a relatively immature technology, synthetic instruments show flaws that older technologies have had more time to correct. Thus, it has been my experience that first-generation SI efforts are often blemished with basic lab procedure errors that would have earned someone a "D" in physics lab 101. These are not subtle errors, mind you, but simple things, like not putting units on measurements, not making properly labeled plots, or not doing any rudimentary statistical analysis of results to justify the conclusions being drawn. All these mistakes fall under the category of bad lab procedure.

Although modern measurement equipment can allow even the unskilled to make measurements, making good measurements still requires skill. Even so, anybody can make these mistakes, whether they are skilled or not, and whether they use a synthetic instrument or not. However, because synthetic instruments are more automated and more user friendly, they facilitate shortcuts and the blunders that follow. Therefore, it behooves the designers and operators of synthetic instruments not to forget the basic lab procedure that undoubtedly earned them all an "A" back in college.
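Much of that "physics lab 101" material amounts to only a few lines of code. Here is a minimal sketch, with invented readings, of the rudimentary treatment every automated result deserves: units, sample mean, sample standard deviation, and a confidence interval on the mean.

```python
import statistics

readings_v = [1.012, 1.009, 1.015, 1.011, 1.008, 1.013]   # repeated readings, volts

n = len(readings_v)
mean_v = statistics.mean(readings_v)
s_v = statistics.stdev(readings_v)     # sample standard deviation
u_v = s_v / n ** 0.5                   # standard uncertainty of the mean
t95 = 2.571                            # Student t, 95 %, n - 1 = 5 degrees of freedom

print(f"mean = {mean_v:.4f} V")
print(f"s = {s_v:.4f} V, u = {u_v:.4f} V (n = {n})")
print(f"95 % confidence interval: {mean_v:.4f} +/- {t95 * u_v:.4f} V")
```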
Fear of Change

Synthetic instruments represent a new way to design instruments. They are different from what came before. They are not a concrete, hardware thing, but rather a software abstraction. The combination of innovation and abstraction loses a lot of people right from the start. They don't get it. They keep looking for the rack-em-stack-em, or the modules, or even the virtual instruments. They need something familiar to latch on to that isn't there.

There is a legend about an early inventor of the quartz watch. Supposedly, he took his invention to various mechanical watchmaking companies, trying to sell it to them. They looked at his masterpiece and could not see a wristwatch. Where is the spring? Where is the escapement? How does this thing tell time? They didn't like it. It wasn't a real wristwatch. Clearly, he eventually made his point, and today quartz watches dominate. Mechanical watches represent only a small fraction of total watch sales.

I can tell numerous anecdotes that are basically identical to this story, but with synthetic instruments representing the quartz watch. When you show people a synthetic measurement system, particularly if the measurement software is designed along OO principles, they look at it and can't see the instruments. They ask, "How does this thing do a test?"

This is one reason I believe LabVIEW and virtual instruments have been successful: not because the approach is superior, but because you can see the instruments. The virtual instruments have graphical front panels that evoke the feel of a legacy instrument. These are wired together from the "back panel" with the interconnections and their procedural interactions clearly in evidence, at least in the smaller systems that are deceptively used to sell the approach.
Synthetic instrumentation systems aren't as concrete as virtual instruments, and as such they can be a harder sell to certain people despite their numerous advantages. Therefore, there is often a tendency to make the mistake of concretizing synthetic instruments to placate those who want to see familiar, concrete hardware patterns. Frivolous hardware modularization is a symptom of this disease, as is legacy instrument virtualization on the software side.

The way to avoid this mistake is to focus on the measurements. Express them as maps without any legacy instrumentation context. Think like scientists, metrologists, and statisticians.
APPENDIX A
Acronym Glossary

4GL: Fourth Generation Language
AC: Alternating Current
A/D: Analog-to-Digital Converter
ALU: Arithmetic Logic Unit
AM: Amplitude Modulation
API: Application Program Interface
ARB: Arbitrary Waveform Generator
ASIC: Application-Specific Integrated Circuit
ASP: Analog Signal Processing
ATE: Automated Test Equipment
ATLAS: Abbreviated Test Language for All Systems
ATML: Automated Test Markup Language
BASIC: Beginner's All-Purpose Symbolic Instruction Code
BER: Bit-Error Rate
BSD: Berkeley Software Distribution
BSG: Aeroflex Broadband Signal Generator
CCC: Conditioner, Codec, Controller
CD: Compact Disc
CDATA: Character Data
CDM: Code Division Multiplexing
CDMA: Code Division Multiple Access
CDR: Critical Design Review
CIPM: Comité International des Poids et Mesures
COTS: Commercial Off-the-Shelf
CPU: Central Processing Unit
CRM: Chinese Restaurant Menu
CRT: Cathode Ray Tube
CSV: Comma Separated Values
CW: Continuous Wave
D/A: Digital-to-Analog Converter
DC: Direct Current
DDS: Direct Digital Synthesis
DECL: Differential Emitter Coupled Logic
DIF: Data Interchange Format
DMM: Digital Multimeter
DOM: Document Object Model
DPU: Digital Processing Unit
DSP: Digital Signal Processing
DTD: Document Type Definition
DUT: Device Under Test
EBNF: Extended Backus-Naur Form
EIA: Electronic Industries Alliance
ENOB: Effective Number Of Bits
ENR: Excess Noise Ratio
FDM: Frequency Division Multiplexing
FFT: Fast Fourier Transform
FIFO: First In, First Out
FM: Frequency Modulation
FPGA: Field Programmable Gate Array
FS: Full Scale
GPIB: General Purpose Interface Bus
GPS: Global Positioning System
GUI: Graphical User Interface
HDF: Hierarchical Data Format
HP: Hewlett-Packard
HSDSP: High-Speed Digital Signal Processing
HTML: HyperText Markup Language
IBM: International Business Machines
IEEE: Institute of Electrical and Electronics Engineers
IF: Intermediate Frequency
IMD: Intermodulation Distortion
ISO: International Organization for Standardization
JTIDS: Joint Tactical Information Distribution System
LCU: Local Calibration Unit
LRU: Line Replaceable Unit
LSDSP: Low-Speed Digital Signal Processing
LVDS: Low Voltage Differential Signaling
MIDI: Musical Instrument Digital Interface
MS: Microsoft Corporation
MSK: Minimum Shift Keying
NCSA: National Center for Supercomputing Applications
NIST: National Institute of Standards and Technology
NTSC: National Television System Committee
OMAAT: One Measurement At A Time
OO: Object-Oriented
OOD: Object-Oriented Design
PAL: Programmable Array Logic
PC: Personal Computer
PCI: Peripheral Component Interconnect
PECL: Positive Emitter-Coupled Logic
PLD: Programmable Logic Device
PM: Phase Modulation
PRF: Pulse Repetition Frequency
RAM: Random Access Memory
RF: Radio Frequency
RFMTS: RF Multifunction Test System
RGB: Red, Green, Blue
RMS: Root Mean Square
ROI: Return On Investment
SAX: Simple API for XML
SCPI: Standard Commands for Programmable Instruments
SDM: Space Division Multiplexing
SDS: Scientific Data Set
SFT: System Functional Test
SGML: Standard Generalized Markup Language
SI: Synthetic Instrument
SIML: Synthetic Instrument Markup Language
SMS: Synthetic Measurement System
SNR: Signal to Noise Ratio
SQL: Structured Query Language
SRMM: Stimulus Response Measurement Map
SSR: Sustained Sequential Recording
TDF: Tab Delimited Format
TDM: Time Division Multiplexing
T&M: Test and Measurement
TP: Test Program
TPS: Test Program Set
TTL: Transistor-Transistor Logic
UI: User Interface
UPS: Uninterruptible Power Supply
URL: Uniform Resource Locator
US: United States
USA: United States of America
UUT: Unit Under Test
VI: Virtual Instrument
VSA: Vector Signal Analyzer
VSP: Vector Signal Player
VSS: Vector Signal Simulator
VXI: VME bus eXtensions for Instrumentation
WWI: World War One
XML: eXtensible Markup Language
APPENDIX B
Basic SIML DTD
Example B-1. Complete SIML DTD
About the Author

C.T. Nadovich is a working engineer with over 20 years of experience in the design and development of advanced instrumentation for RF and microwave test. He owns a private consulting company, Julia Thomas Associates, which is involved in many electronic automated test related design and development efforts at the forefront of the Synthetic Instrumentation revolution. In addition to his hardware engineering work, Nadovich is an accomplished software engineer. He owns and manages an Internet provider company, JTAN.COM, and is involved with numerous software projects involving network programming.

Nadovich received BSEE and MEEE degrees from Rensselaer Polytechnic Institute in 1981, with a specialty in network theory and numerical analysis. Since graduation, he has worked in industry for over 20 years, guiding ground-up development of a number of sophisticated signal processing systems, including systems for analog, digital, microwave, and RF automated measurement. This work has given him extensive experience in both electronic automated test hardware and software, including test and measurement from DC to 94 GHz, and real-time DSP software using high-performance digital systems and embedded computers.

While working in industry as an engineer, he was also a competitive bicycle racer. In 1994, Nadovich united his skills as an engineer with his love for bicycle racing when he designed the 250-meter velodrome used for the 1996 Olympics in Atlanta.

C.T. Nadovich currently resides in Sellersville, PA, along with his wife, Joanne, and their two children.
Index

A
B
Abbe, Ernst, 150 abscissa, 86, 89, 94, 113, 149 port, 94 quantization, 149, 154 resolution, 149 abscissa de-embedding, 156 accumulator, 38 accuracy, 14, 44, 57, 102, 112, 121, 140, 146, 149, 210 Acqiris, 66 adaptive, 49, 50, 57, 101, 102, 103, 104, 149 Aeroflex, 51, 52, 69, 71 alias, 127, 128 alphabet, 65 amplifier, 24, 46, 55, 148 analog, 116, 124 analog baseband, 120 analog signal, 121 analysis, 6 analyzer, 14, 64, 66, 171, 175, 194, 196, 199, 202, 210 AP240, 66 distortion, 196, 199, 202 network, 171, 175, 194 spectrum, 210 vector signal, 64 antenna, 131 architecture, 26, 27 array array, 186 ATLAS, 162 ATML, 162 atom, 179 atomic, 105, 107, 182 atomic abscissa, 106 attenuator, 48, 102
banded, 89, 90, 196, 198, 200 bandpass, 45, 126, 127, 129 bandpass sampling, 134 bandwidth, 26, 122, 124, 125, 130 double-sided, 130 single-sided, 130 baseband, 119, 124, 128 battery tester, 10 BER, 117 Berlekamp, Jack, xv Birurakis, Bill, xv bit error rate, 117, 118 bookshelf system, 158 Bronfeld, Jeff, xvi BSD, 41
C C++, 19 calibration, 75, 76, 104, 105, 107, 111, 137, 143, 144, 145, 174, 175, 181, 195 object, 155 operational, 76, 143 primary, 75 procedures, 75 standards, 75 stimulus, 144, 145, 156 strategy, 104, 105, 107, 111, 174, 175, 181, 195 verification, 143 canonicalization, 96, 111 canonical maps, 96 cascade, 21, 29, 35, 55 CCC, 21 Celerity, 51 chickens, 1 child, 109, 170, 171
Chinese restaurant menu, 23 chronometer, 2 CIPM, 140 closure, 12, 42, 64, 73, 211 codec, 22, 36 collapse, 176, 177 commutator, 30, 182, 183 compass, 2 compiler, 165 complexity, 81 compound, 105, 107, 174, 179 compound abscissa, 106 compound ordinate, 106 computer, 3 conditional branch, 40 constraints, 111, 201 conversion, 45 up, 45 converter, 27, 52, 55, 63, 71, 73, 132 down, 55, 63, 71, 73, 132 up, 27, 52, 71, 73 cost, 27 crosstalk, 29, 57 cubic spline, 146
DTD, 191 DUT, 23 dynamic range, 57, 103
E ENOB, 44, 59 equality, 9 exceptions, 113 extended Backus-Naur form, 165
F factor, 34 feedback, 101, 103, 104 fidelity, 49, 56, 57, 58, 59, 62 filter, 43, 46, 50, 51, 55, 59, 64, 123, 126, 130, 131 digital, 59 matched, 64 preselector, 131 filtered, 32 flattened, 114 football, xiii Fourier series, 86, 125, 133 FPGA, 66 Frey, Dan, xvi Fuller, Buckminster, 18 functional decomposition, 178
D damage, 201 data, 185, 186 array, 185 column, 185, 186 database, 187 Dawes, William Rutten, 150 de-embedding, 154, 156 decimate, 63 delay, 43, 51 demodulator, 10, 63, 64 detector, 64 digital coded baseband, 121 digitizer, 62, 66, 183 direct real analog baseband signals, 119 distortion, 59, 118, 131 domain, 87, 89, 90, 91 drift, 29 driver, 19, 78
G gain, 48, 56, 57, 93, 98, 105, 174, 175, 177 gate array, 41 generator, 24, 36, 42, 51 arbitrary waveform, 36 CS25000, 51 pulse, 24, 42, 53 signal, 24 generic, 6, 22, 49, 52, 65, 85, 115 GPIB, 3, 71
H harmonic, 133 harmonics, 57, 131
HDF, 78, 188 headroom, 60, 61 history, 1, 9 hologram, 34 HTML, 167 hypothesis testing, 154 hysteresis, 99
legacy, 15, 16, 209, 210 length, 2 Lett, Chris, xvi leveling, 12 linearity, 56, 57 Linux, 41 locked, 89, 90, 196, 198, 200 loopback ordinate, 106
I
M
I/Q detection, 132 IEEE-488, 3 image, 131 image rejection, 130 inheritance, 208 instrument, 1, 3, 4, 5, 6, 16, 17, 18, 19, 83, 85 analog, 19 analytic, 6 classic, 16, 85 modular, 4 musical, 16, 85 rack-em-stack-em, 1, 3 synthetic, 5 traditional, 1 virtual, 16, 17, 18, 83, 85 integration, 13 intermodulation, 58, 197 interpolated, 148 interpolating, 200 interpolation, 45, 51, 66, 99, 101, 145, 146, 147, 148, 151, 152, 156 interpreted, 165 inversions, 114 inverted, 145 isometrological, 96 item, 168
manifold, 83, 84, 88, 90, 91 manifolds, 184 map, 96, 97, 98, 100, 102, 103, 104, 107, 108, 111, 115, 145, 174, 177, 181, 191, 200 calibration, 145 canonical, 107, 174, 181 canonicalization, 111 child, 98 collapse, 177 expansion, 97 flatten, 97 inverse, 100, 102, 103 inversion, 97, 108, 145, 200 manipulation, 104 optimization, 111 orthogonalization, 108 parent, 98 ravel, 98 stance, 115, 181 validation, 111, 192 map canonicalization, 114, 202 map canonical form, 106 map data, 95 map description, 95 map inversion, 195 map manipulation, 181 map optimization, 111 map transformation, 111 map validation, 111 marketing, 27 matched filter, 65 measurand, 138, 139, 153 measurement, 141
K key, 187
L LabVIEW, 17, 18, 19 LabWindows, 19 LabWindows/CVI, 71, 77
measurement, x, xi, 2, 3, 99, 108, 212 algorithm, 108 automated, 3 device, 2 integration, 212 manual, 2 map, x, xi relative, 99 measurement integration, 13 measurement map, 86, 88, 91 inverse, 88 measurement system, 5 synthetic, 5 menu, 23 microprocessor, 6, 41 mistakes, 55 mixer, 128 mode, 93, 94, 95, 113, 181 modular, 4, 17, 25, 143, 157, 158, 205, 212 modulation, 45, 73, 117, 125, 126, 127, 202 phase, 73 modulator, 38, 52 multiplexing, 28, 29, 30, 31, 32, 33, 34, 182
quantization, 153, 154 orthogonalize, 96 oscilloscope, 74, 169, 192, 193 outer product, 89, 186 overload, 61
P parameter, 25, 65, 93 parameters, 178, 183 parent, 109 peak, 60 periodic, 37, 133 phase, 38, 52, 126 distortion, 52 increment, 38 modulation, 38 phase increment, 38 phase modulation, 73 pigs, 1 playback, 36, 38, 53 playlist, 40 port, 92, 93, 94, 95, 111, 113, 174, 175, 181, 182, 183 group, 174 positioner, 91 precision, 140 procedural, x, 26, 139, 208, 214 pSOS, 41 PXI, 13
N Newton, Isaac, 86 NIST, 75, 76, 138, 140 noise, 48, 56, 57, 59, 60, 61, 118 null, 57 Nyquist, 124, 125, 127 Nyquist, Harry, 122
Q quadrature, 132 quantization, 24, 48, 50, 58, 60, 61, 119, 151, 152
O
R
object-oriented, 18, 139, 166, 208, 209 object orientation, 186 OO, 18 ordinate, 86, 88, 105, 113, 153, 154, 174 atomic, 105 canonical, 88 compound, 105, 174 precision, 153
rack-em-stack-em, 25, 70, 143, 158, 214 rack-em-stackopolis, 205 rack-mount, 3 ravel, 113 Rayleigh, John William Strutt Lord, 150 Raytheon, 69, 71 recorder, 62
Index
redundancy, 11, 17, 25, 208, 209 reference, 179 requirements, 7, 34 resolution, 44, 150, 151, 152 super, 151, 154 reuse, 109 rise time, 51 risk, 27 rotate, 96 rotations, 114
SQL, 187 stability, 103, 104 standard, 143 state machine, 36, 41, 78, 166 state table, 78 stimulus, 27 compound, 27 stimulus ordinate, 106 structure, 184, 186 data, 184, 186 superheterodyne, 131 sweet spot, 57, 61 switch matrix, 28, 74, 182 synchronize, 42 synergy, 18 synthesis, 6, 35, 37 controller, 35 direct digital, 37 systolic, 40
S safety, 55, 201 sampling, 44, 151 interval, 151 sampling theorem, 123 schema, 191 scope, 177, 180 SCPI, 162, 163, 164 script, 77, 78, 166 separable, 83, 84, 89, 90, 184 sequencing, 39, 53, 68 Shannon, Claude, 122 signal, 12, 19, 27, 46, 115, 116, 117, 119, 120, 121, 125 analog, 19, 116, 119 bandpass, 27, 125 digital, 19, 121 encoding, 46 generator, 12 hierarchy, 117, 120 stance, 115 signal coding, 202 signal generator, 24 signal processing, 10 digital, 10 simultaneous, 28, 34, 63 site configuration, 94 slicing, 96 slowdown, 14 soft front panel, 17 Sparrow, ?, 150 speed, 41, 44, 70, 78, 79, 112, 147, 149 spline, 146, 148 spurious, 47, 49, 58, 59
T tag, 167, 170 test, x, 14, 55, 70, 77, 78, 81, 141 engineer, x, 14, 55 engineers, 81 executive, 78 parameter, 77 program set, 77 self, 70 speed, 14 test engineer, 180, 181 test engineering user, x test procedures, 208 test program, 159 test programs, 79 time, 147 track, 40 trigger, 40, 42, 66, 68 TRM1000C, 71, 77 Turing, 180 Turing, Alan, 40, 108 Turing machine, 40
U uncertainty, 141, 143, 144, 210 units, 177, 209 up-converter, 39
V video, 117, 121 virtual, 85 virtual instrument, 83 voltmeter, 94 Von Neumann, John, 40 VU meter, 61 VXI, 4, 13 vxWorks, 41
W waveform, 36 waveform playback, 36 Windows, 41