Industrial Control Technology: A Handbook for Engineers and Researchers
Peng Zhang Beijing Normal University, People’s Republic of China
Norwich, NY, USA
Copyright © 2008 by William Andrew Inc. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the Publisher.

ISBN: 978-0-8155-1571-5

Library of Congress Cataloging-in-Publication Data
Zhang, Peng.
Industrial control technology : a handbook for engineers and researchers / Peng Zhang.
p. cm. Includes bibliographical references and index.
1. Process control--Handbooks, manuals, etc. 2. Automatic control--Handbooks, manuals, etc. I. Title.
TS156.8.Z43 2008
670.42--dc22
2008002701

Printed in the United States of America. This book is printed on acid-free paper.
10 9 8 7 6 5 4 3 2 1

Published by: William Andrew Inc., 13 Eaton Avenue, Norwich, NY 13815, 1-800-932-7045, www.williamandrew.com
Environmentally Friendly
This book has been printed digitally because this process does not use any plates, ink, chemicals, or press solutions that are harmful to the environment. The paper used in this book has a 30% recycled content.
NOTICE
To the best of our knowledge the information in this publication is accurate; however, the Publisher does not assume any responsibility or liability for the accuracy or completeness of, or consequences arising from, such information. This book is intended for informational purposes only. Mention of trade names or commercial products does not constitute endorsement or recommendation for their use by the Publisher. Final determination of the suitability of any information or product for any use, and the manner of that use, is the sole responsibility of the user. Anyone intending to rely upon any recommendation of materials or procedures mentioned in this publication should be independently satisfied as to such suitability, and must meet all applicable safety and health standards.
Contents

Preface

1 Sensors and Actuators for Industrial Control
  1.1 Sensors
    1.1.1 Bimetallic Switch
      1.1.1.1 Operating Principle
      1.1.1.2 Basic Types
      1.1.1.3 Application Guide
      1.1.1.4 Calibration
    1.1.2 Color Sensors
      1.1.2.1 Operating Principle
      1.1.2.2 Basic Types
      1.1.2.3 Application Guide
      1.1.2.4 Calibration
    1.1.3 Ultrasonic Distance Sensors
      1.1.3.1 Operating Principle
      1.1.3.2 Basic Types
      1.1.3.3 Application Guide
      1.1.3.4 Calibration
    1.1.4 Light Section Sensors
      1.1.4.1 Operating Principle
      1.1.4.2 Application Guide
      1.1.4.3 Specifications
      1.1.4.4 Calibration
    1.1.5 Linear and Rotary Variable Differential Transformers
      1.1.5.1 Operating Principle
      1.1.5.2 Application Guide
      1.1.5.3 Calibration
    1.1.6 Magnetic Control Systems
      1.1.6.1 Operating Principle
      1.1.6.2 Basic Types and Application Guide
    1.1.7 Limit Switches
      1.1.7.1 Operating Principle
      1.1.7.2 Basic Types and Application Guide
      1.1.7.3 Calibration
    1.1.8 Photoelectric Devices
      1.1.8.1 Operating Principle
      1.1.8.2 Application Guide
      1.1.8.3 Basic Types
    1.1.9 Proximity Devices
      1.1.9.1 Operating Principle
      1.1.9.2 Application Guide
      1.1.9.3 Basic Types and Specifications
    1.1.10 Scan Sensors
      1.1.10.1 Operating Principle
      1.1.10.2 Basic Types
      1.1.10.3 Technical Specifications
    1.1.11 Force and Load Sensors
      1.1.11.1 Operating Principle
      1.1.11.2 Basic Types
      1.1.11.3 Technical Specifications
      1.1.11.4 Calibration
  1.2 Actuators
    1.2.1 Electric Actuators
      1.2.1.1 Operating Principle
      1.2.1.2 Basic Types
      1.2.1.3 Technical Specification
      1.2.1.4 Application Guides
      1.2.1.5 Calibrations
    1.2.2 Pneumatic Actuators
      1.2.2.1 Operating Principle
      1.2.2.2 Basic Types and Specifications
      1.2.2.3 Application Guide and Assembly on Valve
    1.2.3 Hydraulic Actuators
      1.2.3.1 Operating Principle
      1.2.3.2 Basic Types and Specifications
      1.2.3.3 Application Guide
      1.2.3.4 Calibration
    1.2.4 Piezoelectric Actuators
      1.2.4.1 Operating Principle
      1.2.4.2 Basic Types
      1.2.4.3 Technical Specifications
      1.2.4.4 Calibration
    1.2.5 Manual Actuators
  1.3 Valves
    1.3.1 Control Valves
      1.3.1.1 Basic Types
      1.3.1.2 Technical Specifications
      1.3.1.3 Application Guide
    1.3.2 Self-Actuated Valves
      1.3.2.1 Check Valves
      1.3.2.2 Relief Valves
    1.3.3 Solenoid Valves
      1.3.3.1 Operating Principles
      1.3.3.2 Basic Types
      1.3.3.3 Technical Specifications
    1.3.4 Float Valves
      1.3.4.1 Operating Principle
      1.3.4.2 Specifications and Application Guide
      1.3.4.3 Calibration
    1.3.5 Flow Valves
      1.3.5.1 Operating Principle
      1.3.5.2 Specifications and Application Guide
      1.3.5.3 Calibration

2 Computer Hardware for Industrial Control
  2.1 Microprocessor Unit Chipset
    2.1.1 Microprocessor Unit Organization
      2.1.1.1 Function Block Diagram of a Microprocessor Unit
      2.1.1.2 Microprocessor
      2.1.1.3 Internal Bus System
      2.1.1.4 Memories
      2.1.1.5 Input/Output Pins
      2.1.1.6 Interrupt System
    2.1.2 Microprocessor Unit Interrupt Operations
      2.1.2.1 Interrupt Process
      2.1.2.2 Interrupt Vectors
      2.1.2.3 Interrupt Service Routine (ISR)
    2.1.3 Microprocessor Unit Input/Output Rationale
      2.1.3.1 Basic Input/Output Techniques
      2.1.3.2 Basic Input/Output Interfaces
    2.1.4 Microprocessor Unit Bus System Operations
      2.1.4.1 Bus Operations
      2.1.4.2 Bus System Arbitration
      2.1.4.3 Interrupt Routing
      2.1.4.4 Configuration Registers
  2.2 Programmable Peripheral Devices
    2.2.1 Programmable Peripheral I/O Ports
    2.2.2 Programmable Interrupt Controller Chipset
    2.2.3 Programmable Timer Controller Chipset
    2.2.4 CMOS Chipset
    2.2.5 Direct Memory Access Controller Chipset
      2.2.5.1 Idle Cycle
      2.2.5.2 Active Cycle
  2.3 Application-Specific Integrated Circuit (ASIC)
    2.3.1 ASIC Designs
      2.3.1.1 ASIC Specification
      2.3.1.2 ASIC Functional Simulation
      2.3.1.3 ASIC Synthesis
      2.3.1.4 ASIC Design Verification
      2.3.1.5 ASIC Integrity Analyses
    2.3.2 Programmable Logic Devices (PLD)
    2.3.3 Field-Programmable Gate Array (FPGA)
      2.3.3.1 FPGA Types and Important Data
      2.3.3.2 FPGA Architecture
      2.3.3.3 FPGA Programming

3 System Interfaces for Industrial Control
  3.1 Actuator–Sensor (AS) Interface
    3.1.1 Overview
    3.1.2 Architectures and Components
      3.1.2.1 AS Interface Architecture: Type 1
      3.1.2.2 AS Interface Architecture: Type 2
    3.1.3 Working Principle and Mechanism
      3.1.3.1 Master–Slave Principle
      3.1.3.2 Data Transfer
    3.1.4 System Characteristics and Important Data
      3.1.4.1 How the AS Interface Functions
      3.1.4.2 Physical Characteristics
      3.1.4.3 System Limits
      3.1.4.4 Range of Functions of the Master Modules
      3.1.4.5 AS Interface in a Real-Time Environment
  3.2 Industrial Control System Interface Devices
    3.2.1 Fieldbus System
      3.2.1.1 Foundation Fieldbus
      3.2.1.2 PROFIBUS
      3.2.1.3 Controller Area Network (CAN bus)
      3.2.1.4 Interbus
      3.2.1.5 Ethernets/Hubs
    3.2.2 Interfaces
      3.2.2.1 PCI, ISA, and PCMCIA
      3.2.2.2 IDE
      3.2.2.3 SCSI
      3.2.2.4 USB and Firewire
      3.2.2.5 AGP and Parallel Ports
      3.2.2.6 RS-232, RS-422, RS-485, and RS-530
      3.2.2.7 IEEE-488
  3.3 Human–Machine Interface in Industrial Control
    3.3.1 Overview
    3.3.2 Human–Machine Interactions
      3.3.2.1 The Models for Human–Machine Interactions
      3.3.2.2 Systems of Human–Machine Interactions
      3.3.2.3 Designs of Human–Machine Interactions
    3.3.3 Interfaces
      3.3.3.1 Devices
      3.3.3.2 Tools
      3.3.3.3 Software
  3.4 Highway Addressable Remote Transducer (HART) Field Communications
    3.4.1 HART Communication
      3.4.1.1 HART Networks
      3.4.1.2 HART Mechanism
    3.4.2 HART System
      3.4.2.1 HART System Devices
      3.4.2.2 HART System Installation
      3.4.2.3 HART System Configuration
      3.4.2.4 HART System Calibration
    3.4.3 HART Protocol
      3.4.3.1 HART Protocol Model
      3.4.3.2 HART Protocol Commands
      3.4.3.3 HART Protocol Data
    3.4.4 HART Integration
      3.4.4.1 Basic Industrial Field Networks
      3.4.4.2 Choosing the Right Field Networks
      3.4.4.3 Integrating the HART with Other Field Networks

4 Digital Controllers for Industrial Control
  4.1 Industrial Intelligent Controllers
    4.1.1 Programmable Logic Control (PLC) Controllers
      4.1.1.1 Components and Architectures
      4.1.1.2 Control Mechanism
      4.1.1.3 PLC Programming
      4.1.1.4 Basic Types and Important Data
      4.1.1.5 Installation and Maintenance
    4.1.2 Computer Numerical Control (CNC) Controllers
      4.1.2.1 Components and Architectures
      4.1.2.2 Control Mechanism
      4.1.2.3 CNC Part Programming
      4.1.2.4 CNC Controller Specifications
    4.1.3 Supervisory Control and Data Acquisition (SCADA) Controllers
      4.1.3.1 Components and Architectures
      4.1.3.2 SCADA Protocols
      4.1.3.3 Functions and Administrations
    4.1.4 Proportional-Integration-Derivative (PID) Controllers
      4.1.4.1 PID Control Mechanism
      4.1.4.2 PID Controller Implementation
      4.1.4.3 PID Controller Tuning Rules
      4.1.4.4 PID Control Technical Specifications
  4.2 Industrial Process Controllers
    4.2.1 Batch Controllers
      4.2.1.1 Batch Control Standards
      4.2.1.2 Control Mechanism
    4.2.2 Servo Controllers
      4.2.2.1 Components and Architectures
      4.2.2.2 Control Mechanism
      4.2.2.3 Distributed Servo Control
      4.2.2.4 Important Servo Control Devices
    4.2.3 Fuzzy Logic Controllers
      4.2.3.1 Fuzzy Control Principle
      4.2.3.2 Fuzzy Logic Process Controllers

5 Application Software for Industrial Control
  5.1 Boot Code for Microprocessor Unit Chipset
    5.1.1 Introduction
    5.1.2 Code Structures
      5.1.2.1 BIOS and Kernel
      5.1.2.2 Master Boot Record (MBR)
      5.1.2.3 Boot Program
    5.1.3 Boot Sequence
      5.1.3.1 Power On
      5.1.3.2 Load BIOS, MBR and Boot Program
      5.1.3.3 Initiate Hardware Components
      5.1.3.4 Initiate Interrupt Vectors
      5.1.3.5 Transfer to Operating System
  5.2 Real-Time Operating System
    5.2.1 Introduction
    5.2.2 Task Controls
      5.2.2.1 Multitasking Concepts
      5.2.2.2 Task Types
      5.2.2.3 Task Stack and Heap
      5.2.2.4 Task States
      5.2.2.5 Task Body
      5.2.2.6 Task Creation and Termination
      5.2.2.7 Task Queue
      5.2.2.8 Task Context Switch and Task Scheduler
      5.2.2.9 Task Threads
    5.2.3 Input/Output Device Drivers
      5.2.3.1 I/O Device Types
      5.2.3.2 Driver Content
      5.2.3.3 Driver Status
      5.2.3.4 Request Contention
      5.2.3.5 I/O Operations
    5.2.4 Interrupts
      5.2.4.1 Interrupt Handling
      5.2.4.2 Enable and Disable Interrupts
      5.2.4.3 Interrupt Vector
      5.2.4.4 Interrupt Service Routines
    5.2.5 Memory Management
      5.2.5.1 Virtual Memory
      5.2.5.2 Dynamic Memory Pool
      5.2.5.3 Memory Allocation and Deallocation
      5.2.5.4 Memory Requests Management
    5.2.6 Event Brokers
      5.2.6.1 Event Notification Service
      5.2.6.2 Event Trigger
      5.2.6.3 Event Broadcasts
      5.2.6.4 Event Handling Routine
    5.2.7 Message Queue
      5.2.7.1 Message Passing
      5.2.7.2 Message Queue Types
      5.2.7.3 Pipes
    5.2.8 Semaphores
      5.2.8.1 Semaphore Depth and Priority
      5.2.8.2 Semaphore Acquire, Release and Shutdown
      5.2.8.3 Condition and Locker
    5.2.9 Timers
      5.2.9.1 Kernel Timers
      5.2.9.2 Watchdog Timers
      5.2.9.3 Task Timers
      5.2.9.4 Timer Creation and Expiration
  5.3 Real-Time Application System
    5.3.1 Architecture
    5.3.2 Input/Output Protocol Controllers
      5.3.2.1 Server or Manager
      5.3.2.2 I/O Device Module
    5.3.3 Process
      5.3.3.1 Process Types
      5.3.3.2 Process Attributes
      5.3.3.3 Process Status
      5.3.3.4 Process and Task
      5.3.3.5 Process Creation, Evolution, and Termination
      5.3.3.6 Synchronization
      5.3.3.7 Mutual Exclusive
    5.3.4 Finite State Automata
      5.3.4.1 Models
      5.3.4.2 Designs
      5.3.4.3 Implementation and Programming

6 Data Communications in Distributed Control System
  6.1 Distributed Industrial Control System
    6.1.1 Introduction
      6.1.1.1 Opened Architectures for Distributed Control
      6.1.1.2 Closed Architectures for Distributed Control
      6.1.1.3 Similarity to Computer Network
    6.1.2 Data Communication Model for Distributed Control System
      6.1.2.1 Data Communication Models for Open Control Systems
      6.1.2.2 Data Communication Models for Closed-Control Systems
  6.2 Data Communication Basics
    6.2.1 Introduction
      6.2.1.1 Data Transfers within an IC Chipset
      6.2.1.2 Data Transfers over Medium Distances
      6.2.1.3 Data Transfer over Long Distances
    6.2.2 Data Formats
      6.2.2.1 Bit
      6.2.2.2 Byte
      6.2.2.3 Character
      6.2.2.4 Word
      6.2.2.5 Basic Codeword Standards
    6.2.3 Electrical Signal Transmission Modes
      6.2.3.1 Bit-Serial and Bit-Parallel Modes
      6.2.3.2 Word-Parallel Mode
      6.2.3.3 Simplex Mode
      6.2.3.4 Half-Duplex Mode
      6.2.3.5 Full-Duplex Mode
      6.2.3.6 Multiplexing Mode
  6.3 Data Transmission Control Circuits and Devices
    6.3.1 Introduction
    6.3.2 Universal Asynchronous Receiver Transmitter (UART)
      6.3.2.1 Applications and Types
      6.3.2.2 Mechanism and Components
    6.3.3 Universal Synchronous Receiver Transmitter (USRT)
    6.3.4 Universal Synchronous/Asynchronous Receiver Transmitter (USART)
      6.3.4.1 Architecture and Components
      6.3.4.2 Mechanism and Modes
    6.3.5 Bit-Oriented Protocol Circuits
      6.3.5.1 SDLC Controller
      6.3.5.2 HDLC Controller
    6.3.6 Multiplexers
      6.3.6.1 Digital Multiplexer
      6.3.6.2 Time Division Multiplexer (TDM)
  6.4 Data Transmission Protocols
    6.4.1 Introduction
    6.4.2 Asynchronous Transmission
      6.4.2.1 Bit Synchronization
      6.4.2.2 Character Synchronization
      6.4.2.3 Frame Synchronization
    6.4.3 Synchronous Transmission
      6.4.3.1 Bit Synchronization
      6.4.3.2 Character-Oriented Synchronous Transmission
      6.4.3.3 Bit-Oriented Synchronous Transmission
    6.4.4 Data Compression and Decompression
      6.4.4.1 Loss and Lossless Compression and Decompression
      6.4.4.2 Data Encoding and Decoding
      6.4.4.3 Basic Data Compression Algorithms
  6.5 Data-Link Protocols
    6.5.1 Framing Controls
      6.5.1.1 High-Level Data Link Control (HDLC)
      6.5.1.2 Synchronous Data Link Control (SDLC)
    6.5.2 Error Controls
      6.5.2.1 Error Detection
      6.5.2.2 Error Correction
    6.5.3 Flow Controls
      6.5.3.1 Stop-and-Wait
      6.5.3.2 Sliding Window
      6.5.3.3 Bus Arbitration
    6.5.4 Sublayers
      6.5.4.1 Logic Link Control (LLC)
      6.5.4.2 Media Access Control (MAC)
  6.6 Data Communication Protocols
    6.6.1 Client–Server Model
      6.6.1.1 Two and Three-Tier Client–Server
      6.6.1.2 Message Server
      6.6.1.3 Application Server
    6.6.2 Master–Slave Model
      6.6.2.1 Master
      6.6.2.2 Slave
    6.6.3 Producer–Consumer Model
      6.6.3.1 Designs
      6.6.3.2 Implementations
    6.6.4 Remote Procedure Call (RPC)

7 System Routines in Industrial Control
  7.1 Overview
  7.2 Power-On and Power-Down Routines
    7.2.1 System Hardware Requirements
      7.2.1.1 Low Voltage Power Supply Circuit (LVPSC)
      7.2.1.2 Basic Input and Output System (BIOS)
    7.2.2 System Power-On Process
    7.2.3 System Power-On Self Tests
      7.2.3.1 When Does the POST Apply?
      7.2.3.2 What Does the POST Do?
      7.2.3.3 Who Does the POST?
    7.2.4 System Power-Down Process
  7.3 Install and Configure Routines
    7.3.1 System Hardware Requirements
      7.3.1.1 PCI Address Spaces
      7.3.1.2 PCI Configuration Headers
      7.3.1.3 PCI I/O and PCI Memory Addresses
      7.3.1.4 PCI-ISA Bridges
      7.3.1.5 PCI-PCI Bridges
      7.3.1.6 PCI Initialization
      7.3.1.7 The PCI Device Driver
      7.3.1.8 PCI BIOS Functions
      7.3.1.9 PCI Firmware
    7.3.2 System Devices Install and Configure Routine
    7.3.3 System Configure Routine
  7.4 Diagnostic Routines
    7.4.1 System Hardware Requirements
    7.4.2 Device Component Test Routines
    7.4.3 System NVM Read and Write Routines
    7.4.4 Faults/Errors Log Routines
    7.4.5 Change System Mode Routines
      7.4.5.1 System Modes List
      7.4.5.2 System Modes Transition
    7.4.6 Calibration Routines
      7.4.6.1 Calibration Fundamentals
      7.4.6.2 Calibration Principles
      7.4.6.3 Calibration Methodologies
  7.5 Simulation Routines
    7.5.1 Modeling and Simulation
      7.5.1.1 Process Models
      7.5.1.2 Process Modeling
      7.5.1.3 Control Simulation
    7.5.2 Methodologies and Technologies
      7.5.2.1 Manufacturing Process Modeling and Simulation
      7.5.2.2 Computer Control System Modeling and Simulation
    7.5.3 Simulation Program Organization
      7.5.3.1 Simulation Routines for Single Microprocessor Control Systems
      7.5.3.2 Simulation Routines for Distributed Control Systems
      7.5.3.3 Simulation Routine Coding Principles
    7.5.4 Simulators, Toolkits, and Toolboxes
      7.5.4.1 MATLAB
      7.5.4.2 SIMULINK
      7.5.4.3 SIMULINK Real-Time Workshop
      7.5.4.4 ModelSim
      7.5.4.5 Link for ModelSim

Index
Preface

Objectives

Industrial control consists of industrial process control and industrial production automation. This book applies to both, and it covers three branches of the subject: theory, design, and technology. In recent years there has been a technical revolution in the semiconductor and electronics industries, which has significantly advanced the existing technologies in industrial control. These recent developments are mainly represented by seven aspects: (1) Microprocessor chipsets have become very capable in interrupt handling, data passing, and interface communication. (2) The operating speeds of both microprocessors and programmable integrated circuits have become much faster. (3) Enhancements in the register arrays and instruction sets of microprocessor units have made multitasking and multithreading possible. (4) The scale of various semiconductor chips keeps increasing while their production costs keep falling. (5) Controllers with intelligent functionality are increasingly designed to perform various control strategies and protocols; for example, Programmable Logic Control (PLC) controllers implement Ladder Logic, fuzzy logic controllers operate in terms of fuzzy control theory, and the Controller Area Network (CAN) is a powerful automation network used even in aerospace. These industrial intelligent controllers are being used more and more widely, so building industrial control systems is becoming ever more practical. (6) Development tools for both hardware and software are becoming more capable and powerful, which greatly shortens the time needed to develop software and hardware and significantly enhances their quality. (7) Programmable application-specific integrated circuits (ASIC) have now approached an intelligence similar to that of microprocessors, so they perform an increasingly important functional role in various control systems.
These technical developments in the semiconductor and electronics industries have advanced industrial control into both real-time control and distributed control. Real-time control requires controllers to capture all the significant target activities and to deliver their responses as swiftly as possible so that system performance is never degraded. Distributed control means that control is performed by a number of microprocessor controllers and executed in a group of independent agents or units that are physically and electronically connected and communicate with each other. This trend toward real-time and distributed control will continue. Consequently, industrial control has gradually been extended from device and machine control to the plant, enterprise, and industry levels. To show how these technical developments satisfy the new industrial control requirements, this book provides comprehensive technical details, including the necessary rationales, methodologies, types, parameters, and specifications, for the devices of industrial control. As a technical handbook for engineers, a technical reference, and an academic textbook for students, this book particularly emphasizes the following seven areas: (1) the sensors, actuators, and valves found in all kinds of industrial control systems; (2) the electronic hardware resident on the microprocessor chipset system; (3) the system interfaces, including the devices, Fieldbuses, and techniques used for all kinds of industrial control; (4) the digital controllers executing the written programs and the given protocols; (5) the embedded software on a microprocessor chipset for real-time control applications; (6) the data-transmission hardware and protocols between independent agents or units, each with its own microprocessor; (7) the routines, comprising special hardware and software, that are useful to any kind of industrial control system. All seven areas are crucial for accomplishing both real-time control and distributed control in industry. This book therefore presents the key technologies applied to modern industrial control for the engineers, researchers, and students who work in industrial control and its related disciplines.
Readership

This book has been written primarily as an engineering handbook for engineers working in the research and development of all kinds of control systems. Faculty and postgraduates in universities or colleges will also find it a useful technical reference for their projects related to control and computer engineering. For university students, this book can serve as a textbook in classes such as automation, control, computer networks, and other related technical subjects. As an engineering handbook, this book will help professionals design, deploy, and build both manufacturing control equipment and production process control systems. Modern industrial control technologies involve three essential phases: machinery, hardware, and software. No matter which phase a control engineer is working with, he or she will find this book helpful. As a reference, this book will aid faculty and postgraduates in universities and colleges in understanding the technical details involved in their research projects on control. The wide coverage of this book allows it to bridge the gap between theory and technique in control. In addition, it is suitable for practicing postgraduates who wish or need to gain an engineering knowledge of control topics. This book is also intended to be a course textbook for students studying automatic control, computer hardware and electronics, computer networks, and data communication. Typically, these students will be in electronic engineering, computer control, control systems, or industrial automation courses.
Synopsis

This book is organized into chapters, sections, and titled subsections. The first of its seven chapters, "Sensors and Actuators for Industrial Control," covers the typical sensors, meters, actuators, and valves that are the crucial devices at the front and rear of industrial control systems. This chapter provides the mechanism concepts, working principles, device types, technical data, and application guides that enable engineers to design and develop industrial control systems. The second chapter, "Computer Hardware for Industrial Control," provides a detailed list of the types of electronic devices resident on the
system given by a microprocessor chipset. These are the microprocessor, the programmable peripheral devices, and the ASIC. The architecture of the electronic components on a computer motherboard is also diagrammed so that engineers can see how the microprocessor chipset is populated. This chapter explains how microprocessors operate and gives all the technical data necessary for microprocessors to perform. The third chapter, "System Interfaces for Industrial Control," discusses four types of interfaces: actuator–sensor interfaces, control system interfaces, human–machine (or human–controller) interfaces, and highway addressable remote transducer (HART) field interfaces. These four interfaces cover essentially all the interface devices and technologies existing in industrial control systems. The actuator–sensor interface is located at the front or rear of the actuator–sensor level to bridge the gap between this level and the controllers. The control system interfaces include the Fieldbus and microprocessor chipset interfaces that are used for connecting and communicating with controllers. The human–machine interfaces comprise both the tools and the technologies that provide humans with easy and comfortable ways of handling the devices. The HART field communications include the HART protocol and the HART interface devices used for field communications in industrial process control. The fourth chapter is entitled "Digital Controllers for Industrial Control." A controller, similar to a computer, is a system with its own hardware and software capable of performing independent control. This chapter covers the controllers necessary for both industrial production control and industrial process control: PLC controllers, CNC controllers, SCADA systems, PID controllers, batch controllers, servo controllers, and fuzzy logic controllers. The title of the fifth chapter is "Application Software for Industrial Control." Real-time control works with the microprocessor chipset installed on a motherboard or a daughter board. Any microprocessor chipset, apart from the microcode and BIOS inherent to the CPU, must have a software package consisting of three program systems: boot code, the operating system, and the application system. This chapter provides engineers with the basic rationale, semantics, principles, work sequence, and program structures for each of these three systems. With reference to this chapter, an experienced software engineer should be capable of designing and programming the whole software package of a microprocessor controller board. The sixth chapter is "Data Communications in Distributed Control System." Several independent units, each of which typically has its own microprocessor to monitor a number of mechanical systems, are
physically and electronically connected together. These units communicate with each other electronically in an interactive manner so as to form a distributed control system. To set up this type of industrial control system, engineers must understand the connection methodologies and communication rationales between the independent units. This chapter contains the technical information, data, methodologies, and theory necessary for the implementation of a distributed control system. The seventh chapter covers more advanced topics that go beyond the basics treated in the first six chapters. Chapter 7 explains the system routines that make control systems more user friendly and safer to operate. With the power-on and power-down routines, the system can start up and shut down safely when power is switched ON and OFF, respectively. The installation and configuration routines permit the system devices to communicate with each other through both software and hardware. The diagnostic routines then allow engineers to determine the root causes when a system suffers a malfunction.
Bibliography

Writing this book has involved reference to a large number of sources, including academic books, journal articles, and in particular industry technical manuals and company introductory or demonstration materials displayed on web sites of various dates and locations. The number and scale of the sources are such that it would be practically impossible to acknowledge each source individually in the body of the book. The sources are therefore ordered alphabetically and placed at the end of each chapter. This method has two benefits: it enables the author to acknowledge the contribution of the individuals and institutions whose scholarship or products are referred to in this book, and it provides the reader with a convenient way of tracing further relevant sources.
Acknowledgments

My most sincere thanks go to my family: to my wife Minghua and my son Huza for their unwavering understanding and support. I would also like to thank my younger brother Zhang Wei for the ideas he contributed to this book.
I am extremely grateful to Professor D. T. Pham, director of the Manufacturing Engineering Center, Cardiff University, United Kingdom, who offered some very valuable suggestions in his review of portions of this manuscript, and to Dr. Chunquian Ji at Cardiff University and Dr. Wenbo Mao at the Hewlett Packard Research Laboratory, United Kingdom, for their valuable comments and suggestions on the manuscript. Without the support of these people, this book would not exist.

Peng Zhang
Beijing
May 2008
1 Sensors and Actuators for Industrial Control
1.1 Sensors

1.1.1 Bimetallic Switch
Bimetallic switches are electromechanical thermal sensors or limiters that are used for automatic temperature monitoring in industrial control. They limit the temperature of machines or devices by opening up the power load or electric circuit in the case of overheating or by shutting off a ventilator or activating an alarm in the case of overcooling. Bimetal switches can also serve as time-delay devices. The usual technique is to pass current through a heater coil that eventually (10 s or so) warms the bimetal elements enough to actuate. This is the method employed on some controllers such as cold-start fuel valves found on automobile engines.
1.1.1.1 Operating Principle
A bimetallic switch essentially consists of two metal strips fixed together. If the two metals have different expansion rates, then as the temperature of the switch changes one strip expands more than the other, causing the device to bend out of the plane. This mechanical bending can then be used to actuate an electromechanical switch, or the bimetallic element can form part of an electrical circuit itself, so that contact of the bimetallic device with an electrode closes a circuit. Figure 1.1 is a diagrammatic representation of the typical operation of temperature switches. Two directional processes are shown in this diagram, which cause the contacts to change from open to closed and from closed to open, respectively: (1) The event starts at time zero with the switch at temperature T1 and the contacts open. As the environment temperature increases, the switch heats up and at some moment reaches temperature T2, causing the contacts to close. (2) With the switch at temperature T2 and the contacts closed, if the environment temperature keeps decreasing, the switch cools and at some instant returns to temperature T1, causing the contacts to open again. A short sketch of this open/close hysteresis logic is given after Fig. 1.1.
Figure 1.1 Typical operation of temperature switches.
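The temperature differential between T1 and T2 shown in Fig. 1.1 gives the switch a simple hysteresis: the contacts close only once the temperature rises through T2 and open only once it falls back through T1, keeping their previous state in between. The short C sketch below illustrates that contact logic; the threshold values and the temperature profile are arbitrary illustrative numbers, not data taken from the text.

```c
/* Minimal sketch of the open/close hysteresis shown in Fig. 1.1.
 * T_CLOSE (T2) and T_OPEN (T1) are arbitrary illustrative values. */
#include <stdio.h>
#include <stdbool.h>

#define T_OPEN  60.0   /* contacts open again below this temperature (T1) */
#define T_CLOSE 80.0   /* contacts close above this temperature (T2)      */

static bool contacts_closed = false;

static void update_switch(double temp)
{
    if (!contacts_closed && temp >= T_CLOSE)
        contacts_closed = true;            /* heated through T2: close   */
    else if (contacts_closed && temp <= T_OPEN)
        contacts_closed = false;           /* cooled through T1: open    */
    /* between T1 and T2 the previous state is kept (the differential)   */
}

int main(void)
{
    /* a made-up temperature ramp: heat up, then cool down */
    double profile[] = { 50, 65, 75, 85, 90, 82, 70, 65, 58, 50 };
    for (int i = 0; i < 10; i++) {
        update_switch(profile[i]);
        printf("T = %5.1f  contacts %s\n", profile[i],
               contacts_closed ? "CLOSED" : "open");
    }
    return 0;
}
```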
1.1.1.2 Basic Types
Bimetallic switches and thermal controls basically fall into two broad categories: (1) creep action devices, with slow-make and slow-break switching action, and (2) snap action devices, with quick-make and quick-break switching action. Creep action devices are excellent either in a temperature-control application or as a high-limit control. They have a narrow temperature differential between opening and closing, and generally have more rapid cycling characteristics than snap action devices. Snap action devices are most often used for temperature-limiting applications, as their fairly wide differential between opening and closing temperatures gives slower cycling characteristics.

Although in its simplest form a bimetallic switch can be constructed from two flat pieces of metal, in practice a whole range of shapes is used to provide maximum actuation or maximum force during thermal cycling. As shown in Fig. 1.2, the bimetallic element can take three configurations in a bimetallic switch: (1) In Fig. 1.2(a), two metals make up the bimetallic strip (hence the name). In this diagram, the black metal would be chosen to expand faster than the white metal if the device were being used in an oven, so that as the temperature rises the black metal expands faster than the white metal. This causes the strip to bend downward, separating from the contact so that current is cut off. In a
Figure 1.2 The operating principle for bimetallic switches: (a) basic bimetallic switch, (b) adjustable set-point switch, and (c) bimetallic disc switch.
refrigerator you would use the opposite setup, so that as the temperature rises the white metal expands faster than the black metal. This causes the strip to bend upward making contact so that current can flow. By adjusting the size of the gap between the strip and the contact, you can control the temperature. (2) Another configuration uses a bimetallic element as a plunger or pushrod to force contacts open or closed. Here the bimetal does not twist or deflect, but instead is designed to lengthen or travel as a means of actuation as illustrated by Fig. 1.2(b). Bimetallic switches can be designed to switch at a wide range of temperatures. The simplest devices have a single set-point temperature determined by the geometry of the bimetal and switch packaging. Examples include switches found in consumer products.
More sophisticated devices for industrial use may incorporate calibration mechanisms for adjusting temperature sensitivity or switch response times. These mechanisms typically set the separation between the contacts as a means of changing the operating parameters. (3) Bimetal elements can also be disc shaped, as in Fig. 1.2(c). These types often incorporate a dimple as a means of producing a snap action (not shown in the figure). Disc configurations tend to handle shock and vibration better than cantilevered bimetallic switches.
1.1.1.3 Application Guide

Bimetallic devices are generally specified for temperatures from –65°F to several hundred degrees Fahrenheit. Specialized devices can handle upward of 2000°F. Set-point tolerance and repeatability are generally on the order of ±5°F, and set-point drift is usually negligible.

(1) Choosing the right thermal control. The rate of temperature rise, the location of the thermal control, the electrical load, and the mass of the application can each greatly affect the cycling (operational) characteristics of a thermal control. Because of these variables, it is strongly recommended that you test the switches in your specific application. Certain aspects should be taken into consideration when applying both creep and snap action devices. Careful attention must be paid to input voltage, load currents, and the characteristics of the load. Final design criteria should be based upon the results of testing the devices in your application, at your facility.

(2) Choosing the right bimetallic switches. Each application for thermal controls is unique in one form or another, so there is no standard product. A wide range of options is offered, including the calibration temperature range and tolerances. The length of the lead wires and the type of insulation material also require deliberate consideration. You should request samples for your application testing before deciding to use bimetallic switches.

(3) Snap action configurations. Snap action bimetal elements are used in applications where an action is required at a threshold temperature. As such, they are not temperature-measuring devices but rather temperature-activated devices. The typical temperature change needed to activate a snap action device is several degrees and is determined by the geometry of the device. When
the element activates, a connection is generally made or broken, and a gap between the two contacts exists for a period of time. For a mechanical system this is no problem; for an electrical system, however, the gap can result in a spark that leads to premature aging and corrosion of the device. Having the switch activate quickly (hence the use of snap action devices) reduces the amount and duration of the spark. Snap action elements also incorporate a certain amount of hysteresis into the system, which is useful in applications that would otherwise oscillate about the set point. It should be noted, however, that special designs of creep action bimetals can also provide different ON/OFF points, such as the reverse lap-welded bimetal.

(4) Sensitivity and accuracy. Modern techniques are more useful where sensitivity and accuracy of temperature measurement are concerned; however, bimetals find application in industrial temperature control where an action is required without external connections. Geometry is clearly important for bimetal systems, as the sensitivity is determined by the design, and a mechanical advantage can be used to yield a large movement per degree of temperature change.
1.1.1.4 Calibration
Temperature range calibration can be carried out with either of the following two methods: (1) The ice method. Immerse the temperature probe at least 2 in. into a glass of finely crushed ice. Add cold tap water to remove air pockets. Wait at least 1 min. The gauge should read 32°F. If it does not, turn the adjustment nut on the back of the reading dial with a pair of pliers until the dial reads 32°F. Wait at least 1 min to verify correct adjustment. (2) The boiling method. Submerge the probe in boiling water. Wait until the needle stops moving, then adjust the calibration nut until the dial reads 212°F. Since the boiling point of water decreases as altitude increases, this method may not be as accurate as the ice method at altitudes above sea level unless the exact boiling point temperature is known (a rough altitude correction is sketched at the end of this subsection). Calibration is a broad topic and includes the ultimate reference sources, such as the national metrology laboratories, which are the custodians of the International Temperature Scale, and the services that are directly traceable to the national standards. For example, this is the scale that
the national laboratories, or those affiliated to them, refer to in the calibration certificates of reference devices that may be used in corporate, university, or other measurement laboratories providing a more local service, such as to working instruments in a process plant or an experimental apparatus.
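Because the boiling method depends on the local boiling point, a rough altitude correction is often applied before adjusting the dial. A commonly quoted rule of thumb is that the boiling point of water drops by roughly 1°F for every 500 ft of elevation above sea level; the sketch below uses that approximation (the rule of thumb and the sample altitudes are assumptions for illustration, not figures from the text).

```c
/* Rough boiling-point reference for the "boiling method" at altitude.
 * Uses the common ~1 degF per 500 ft rule of thumb (approximation only). */
#include <stdio.h>

static double boiling_point_f(double altitude_ft)
{
    return 212.0 - altitude_ft / 500.0;
}

int main(void)
{
    double altitudes[] = { 0.0, 2500.0, 5280.0 };   /* example altitudes */
    for (int i = 0; i < 3; i++)
        printf("%6.0f ft -> calibrate boiling reading to about %.1f F\n",
               altitudes[i], boiling_point_f(altitudes[i]));
    return 0;
}
```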
1.1.2 Color Sensors
Color sensors that can operate in real time under various environmental conditions benefit many applications, including quality control, chemical sensing, food production, medical diagnostics, energy conservation, and the monitoring of hazardous waste. Analogous applications can also be found in other sectors of the economy, for example in the electrical industry for the recognition and assignment of colored cords, in the electronics industry for the automatic testing of mounted LED arrays or matrices, in the textile industry for checking coloring processes, or in the building materials industry for controlling compounding processes. Color sensors are generally advisable wherever color structures, color processes, color nuances, or colored body rims must be recognized in homogeneous continuous processes over a long period and influence the process control or quality assurance as measured or controlled variables.
1.1.2.1 Operating Principle
The color detection occurs at the color sensor according to the three-field procedure. Color sensors cast light on the objects to be tested, calculate the chromaticity coordinates from the reflected or transmitted radiation, and compare them with previously stored reference tristimulus (red, green, and blue) values. If the tristimulus values are within the set tolerance range, a switching output is activated. Color sensors can detect both the color of opaque objects through their reflections (incident light) and the color of transparent materials in transmitted light, in which case a reflector is mounted opposite the sensor. In Fig. 1.3, the color sensor can sense eight colors: red, green, and blue (primary colors); magenta, yellow, and cyan (secondary colors); and black and white. The ASIC chipset of the color sensor is based on the fundamentals of optics and digital electronics. The object whose color is to be detected is placed in front of the system. The light rays reflected from the object fall on the three convex lenses that are fixed in front of
Figure 1.3 Operating principle of an assumed color sensor.
the three LDRs. The convex lenses cause the incident light rays to converge. Red, green, and blue glass filters are fixed in front of LDR1, LDR2, and LDR3, respectively. When reflected light rays from the object fall on the device, the filter glass plates determine which of the three LDRs are triggered. When light of a primary color falls on the system, only the glass plate corresponding to that primary color allows the light to pass through; the other two glass plates block it. Thus only one LDR is triggered, and the gate output corresponding to that LDR indicates which color it is. Similarly, when light of a secondary color falls on the system, the two primary glass plates corresponding to the mixed color allow the light to pass while the remaining one blocks it. As a result, two of the three LDRs are triggered and the corresponding gate outputs indicate which color it is. When all three LDRs are triggered, or none of them is, white or black is indicated, respectively.
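The gate logic described above amounts to a small truth table: each filtered LDR contributes one bit, and the three bits together select one of the eight colors. The following C sketch shows one way such a decoding step could look; the bit ordering and function names are illustrative choices, not taken from any particular sensor.

```c
/* Decode the three filtered LDR outputs (R, G, B triggered or not)
 * into one of the eight colors named in the text. The bit order is an
 * illustrative choice, not taken from the sensor hardware. */
#include <stdio.h>
#include <stdbool.h>

static const char *classify(bool r, bool g, bool b)
{
    switch ((r << 2) | (g << 1) | b) {
    case 0x7: return "white";    /* all three LDRs triggered */
    case 0x0: return "black";    /* none triggered           */
    case 0x4: return "red";
    case 0x2: return "green";
    case 0x1: return "blue";
    case 0x6: return "yellow";   /* red + green              */
    case 0x5: return "magenta";  /* red + blue               */
    default:  return "cyan";     /* green + blue (0x3)       */
    }
}

int main(void)
{
    printf("%s\n", classify(true,  true,  false));  /* yellow */
    printf("%s\n", classify(false, false, false));  /* black  */
    return 0;
}
```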
1.1.2.2 Basic Types
In accordance with the working processes and application features, color sensors can be categorized into three-field color sensors and structured color sensors. (1) Three-field color sensors. The sensor works based on the tristimulus (standard spectral) value function and identifies colors with absolutely unerring precision and 10,000 times faster than the human eye could. It provides a compact and dynamic technical
solution for general color detection and color measurement jobs. It is capable of detecting, analyzing, and measuring minute differences in color, for example as part of LED testing, calibration of monitors, or where mobile equipment is employed for color measurement. These color sensors offer the following advantages: (a) dielectric color filters adaptable to individual customers; (b) MHz signal frequencies for time-critical measurements; (c) economical and fast signal processing; (d) compact design without optical beam guidance; (e) high aging resistance; (f) temperature stability and environmental resistance. Based on the techniques used, there are two kinds of three-field color sensors: (i) Three-element color sensor. This sensor includes special filters so that its output currents are proportional to the standard XYZ tristimulus value functions. The resulting absolute XYZ standard spectral values can thus be converted into any selectable color space. This allows a sufficiently broad range of accuracies in color detection, from "eye accurate" to "true color," that is, standard-compliant colorimetry, to match the various application environments. (ii) Integral color sensor. This kind of sensor accommodates integrative features including (1) detection of color changes; (2) recognition of color labels; (3) sorting of colored objects; (4) checking of color-sensitive production processes; and (5) control of the product appearance. (2) Structured color sensors. Structured color sensors are used for the simultaneous recording of color and geometric information, including (1) determination of color edges or structures and (2) checking of industrial mixing and separation processes. These color sensors offer the following advantages: (a) high selectivity in applications within fast continuous manufacturing processes; (b) high signal rates and parallel data transfer; (c) implementation of integral colorimetry; (d) applications with high measuring frequencies; (e) adapted receiver geometries; (f) specific color adaptation. Based on the techniques used, there are two kinds of structured color sensors: (i) Row color sensor. The row color sensor has been developed for detecting and controlling color codes and color sequences in the continuous measurement of
moving objects. These color sensors are designed as PIN photodiode arrays. The photodiodes are arranged in the form of honeycombs, with three rhombi per element. The diode lines consist of two honeycomb rows offset from each other by half a line, so a high resolution can be implemented within a very small area. The detail to be monitored is determined by the choice of suitable focusing optics. (ii) Hexagonal color sensor. This kind of sensor supplies the subsequent electronics with information about the three-field chrominance signal (the intensity of the three receiving segments covered with the spectral filters) as well as about the structure and the position.
1.1.2.3 Application Guide

In industrial control, color sensors are selected on an application-oriented basis. Although many factors must be considered, the following technical items are the primary reference: (1) operating voltage, (2) rated voltage, (3) output voltage, (4) maximum residual ripple, (5) no-load current, (6) spectral sensitivity, (7) minimum size of the measuring dot, (8) limiting frequency, (9) color temperature, (10) minimum light spot, (11) permitted ambient temperature, (12) enclosure rating, (13) control interface type, etc.
1.1.2.4 Calibration
Calibration is used to establish the link between the output signals of the color sensor (voltages, digital numbers) and the absolute physical values at the sensor input; together these describe the overall transfer function of the color sensor. Calibration is a part of color sensor characterization. Different applications require calibrating different parameters. For example, calibrating an ocean color sensor for remote sensing is particularly concerned with spectral and radiometric parameters if geometric calibrations are not considered. Many companies and organizations provide color sensor calibration services along with their (1) software, (2) platforms, and (3) evaluation boards. For example, SONY Corporation provides the ARTISAN™ COLOR REFERENCE SYSTEM, and DMN Digital Inc. provides the Pantone ColorPlus Color Calibrator.
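One simple way to establish such a link is a per-channel linear fit (offset and gain) against two known references, for example a dark and a white target. The sketch below illustrates only that two-point idea under those assumptions; real calibration services use far more elaborate models and reference data.

```c
/* Two-point (dark/white) linear calibration sketch for one color channel:
 * maps a raw sensor reading onto a reference scale. The reference values
 * and raw readings are invented for illustration. */
#include <stdio.h>

typedef struct {
    double gain;
    double offset;
} channel_cal;

static channel_cal calibrate(double raw_dark, double raw_white,
                             double ref_dark, double ref_white)
{
    channel_cal c;
    c.gain   = (ref_white - ref_dark) / (raw_white - raw_dark);
    c.offset = ref_dark - c.gain * raw_dark;
    return c;
}

static double apply(channel_cal c, double raw)
{
    return c.gain * raw + c.offset;
}

int main(void)
{
    /* raw ADC counts measured on dark and white references (made up) */
    channel_cal red = calibrate(12.0, 230.0, 0.0, 100.0);
    printf("raw 120 -> %.1f on the reference scale\n", apply(red, 120.0));
    return 0;
}
```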
1.1.3 Ultrasonic Distance Sensors
There are numerous applications for ultrasonic distance sensors in industrial control. Ultrasonic distance sensors are used in all industries for measuring the distance to or size of material objects. That covers almost any size and type of object that can be measured in most industrial sectors. (1) Machine builders. Whether retrofitting existing machines or building new ones, ultrasonic distance sensors are used for motion control, or level control, or dimensioning, or proximity sensing. These are common applications in the converting, pulp and paper, printing, rubber, metals, textile, and other manufacturing industries. (2) Automation. Ultrasonic distance sensors reduce automation costs by providing a simple and effective means of monitoring the size or position of objects in production processes. Sensor information is used to accept or reject objects based on size, position, or fill level; make decisions on the routing of packages based on size or position; control the flow of liquid, solid, or granular materials; indicate when an object is nearby or in position, in/out of tolerance, or provide alarms when objects are in/out of position, near full/empty, or indicate process completion. (3) Process controls. Common applications include measuring the level of bulk materials in a tank or bin (inventory) or controlling the amount of material dispersed from a vessel (batching). Tank levels can be locally displayed or reported to a remote computer by a data network. Alarms can warn of low level, order levels, high level, or other conditions.
1.1.3.1 Operating Principle
Ultrasonic distance sensors measure the distance or presence of target objects by sending a pulsed ultrasound wave at the object and then measuring the time for the sound echo to return. Knowing the speed of sound, the sensor can determine the distance of the object. As displayed in Fig. 1.4, the ultrasonic distance sensor regularly emits a barely audible click. It does this by briefly supplying a high voltage either to a piezoelectric crystal or to magnetic fields of ferromagnetic materials. In the first case, the crystal bends and sends out a sound wave. A timer within the sensor keeps track of exactly how long it takes the sound wave to bounce off something and return. This delay is then converted into a voltage corresponding to the distance of the sensed object. In the second case, the physical response of a ferromagnetic material in a magnetic field
Figure 1.4 Operating principle of ultrasonic distance sensor.
is due to the presence of magnetic moments. Interaction of an external magnetic field with the domains causes a magnetostrictive effect. This effect can be optimized by controlling the ordering of the domains through alloy selection, thermal annealing, cold working, and magnetic field strength. Magnetostrictive bars exploiting this effect are used to control high-frequency oscillators and to produce ultrasonic waves in gases, liquids, and solids. Applying converters based on the reversible piezoelectric effect makes one-head systems possible, in which the converter serves both as transmitter and as receiver. The transceivers work by transmitting a short ultrasonic burst. An internal clock starts simultaneously, measuring the propagation time, and stops when the object reflects the sound packet back to the sensor. The time elapsed between transmitting the packet and receiving the echo is the basis for calculating the distance. Complete control of the process is realized by an integrated microcontroller, which allows for excellent output linearity.
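The distance computation itself is straightforward: the echo travels to the target and back, so the one-way distance is half the round-trip time multiplied by the speed of sound. A minimal C sketch follows; the temperature-dependent speed-of-sound expression is the standard dry-air approximation, and the sample echo time is an invented value.

```c
/* Echo time-of-flight to distance, as described for Fig. 1.4.
 * speed_of_sound() uses the standard dry-air approximation; the sample
 * echo time below is an invented value for illustration. */
#include <stdio.h>

static double speed_of_sound(double temp_c)
{
    return 331.3 + 0.606 * temp_c;        /* m/s, approximate */
}

static double echo_to_distance(double echo_time_s, double temp_c)
{
    /* sound travels out and back, so halve the round trip */
    return speed_of_sound(temp_c) * echo_time_s / 2.0;
}

int main(void)
{
    double t = 0.0058;   /* 5.8 ms round trip (example) */
    printf("distance = %.3f m at 20 C\n", echo_to_distance(t, 20.0));
    return 0;
}
```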
1.1.3.2 Basic Types
The ultrasonic distance sensor can be operated in two different modes. The first mode, referred to as continuous (or analog) mode, involves the sensor continuously sending out sound waves at a rate determined by the manufacturer. The second mode, called clock (or digital) mode, involves the sensor sending out signals at a rate determined by the user. This rate
can be several signals per second with the use of a timing device, or it can be triggered intermittently by an event such as the press of a button.
1.1.3.3 Application Guide

The major benefit of ultrasonic distance sensors is their ability to measure difficult targets: solids, liquids, granulates, powders, and even transparent and highly reflective materials that cause problems for optical sensors. In addition, analog output ultrasonic sensors offer comparatively long ranges, in many cases >3 m. They can be made very small too; some tubular models are only 12 mm in diameter, and 15 mm × 20 mm × 49 mm square-bodied versions are available for limited-space applications. Ultrasonic devices do have some limitations. Foam and other attenuating surfaces may absorb most of the sound, significantly decreasing the measuring range. Extremely rough surfaces may diffuse the sound excessively, decreasing range and resolution. However, an optimal resolution is usually guaranteed up to a surface roughness of 0.2 mm. Ultrasonic sensors emit a wide sonic cone, limiting their usefulness for small-target measurement and increasing the chance of receiving feedback from interfering objects. Some ultrasonic devices offer a sonic cone angle as narrow as 6°, permitting detection of much smaller objects and sensing of targets through narrow spaces such as bottlenecks, pipes, and ampoules. For the various distance-measuring sensors, four technical definitions are emphasized in their applications: (1) resolution, (2) linearity, (3) repeat accuracy, and (4) reaction time. Figure 1.5 provides graphic illustrations of these four technical definitions for distance-measuring sensors.
1.1.3.4 Calibration
Ultrasonic distance sensors in either continuous or clock mode can be calibrated by a straightforward method. Once calibrated, switching between modes will not affect the calibration. To calibrate an ultrasonic distance sensor, a voltmeter is essential. First, fasten the sensor to a surface such that there is nothing but the target in front of it, while leaving plenty of room behind it to adjust the small screws on the potentiometers by hand. Apply power to the sensor to begin warming it up, and allow several minutes for warm-up before starting the calibration process. The device should start clicking once power is applied. Figure 1.6 shows the location of the pots on an ultrasonic sensor.
Figure 1.5 Technical definitions of distance-measuring sensors: (a) Resolution corresponds to the smallest possible change in distance that causes a detectable change in the output signal. (b) Linearity is the deviation from a proportional linear function or a straight line, given as a percentage of the upper limit of the measuring range (full scale). (c) Repeat accuracy is the difference between measured values in successive measurements within a period of 8 h at an ambient temperature of 23 ± 5°C. (d) Reaction time is the time required by the sensor’s signal output to rise from 10% to 90% of the maximum signal level. For sensors with digital signal processing, it is the time required for calculation of a stable measured value.
Figure 1.6 Location of the potentiometers on an ultrasonic sensor: zero pot, scale adjust pot, full-scale pot, and gain, together with the connector pins (no connection, analog output, clock out, trig-enable, ext-trigger, common, and 8–16 V DC supply).
The gain control must now be set to 50%. It is the potentiometer with the screw head near the bottom of the device. It rotates between what would correspond to 8 and 4 on the face of a clock, but no further; set it to the “12” position. Place the positive lead of a voltmeter on pin 6 and the negative lead on pin 2, and try to find a way to fasten these leads in place or have someone hold them there. A screwdriver with a very small flat head is required for the potentiometer screws. Rotate the zero and full-scale potentiometer screws fully counterclockwise, about 12 turns; the screws will not stop turning after 12 turns, so keep count. Now place the target at the maximum distance of interest from the sensor. The literature claims a maximum distance of 10 ft, which we found to be quite accurate. The sensor will work for objects at least 13 ft away, but such objects must be very sound reflective and/or large (such as a refrigerator door) to obtain a usable reading.
Once the target is in place, adjust the scale adjust potentiometer, which compensates for the varying voltages that may be supplied to the sensor. Rotate its screw until the voltmeter reads +5 V. Rotate the full-scale potentiometer clockwise until its voltage ceases to change, and then slowly rotate it counterclockwise until the +5 V reading is obtained again. Place the target at the minimum distance from the sensor, no closer than 6 in. Slowly rotate the zero adjust potentiometer until a reading of 0 V is attained. We could not get a reading below 0.034 V, but anything from –0.5 to +0.5 V is acceptable. At this point it is useful to move the target back and forth between the minimum and maximum distances while watching the voltmeter: it should read +5 V when the target is at the maximum distance and 0 V at the minimum. Keep the target within the sensor’s line of sight, which can be thought of as an imaginary cone emanating from the sensor and reaching a width of 2.6 ft at a distance of 10 ft; the farther the target strays outside this cone, the less likely it is to give a good reading.
To adjust the gain, put the target at the maximum distance. Rotate the gain screw fully counterclockwise, and then slowly rotate it clockwise until detection occurs. Rotate it an additional 1/16th turn after this. It is always best to keep the gain setting as low as possible, since higher gain settings increase the likelihood of false target detection. Once all the above steps have been completed, the sensor should be calibrated and ready for detection.
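After calibration, the analog output is nominally linear between 0 V at the minimum distance and +5 V at the maximum distance, so a controller can convert a measured voltage back into a distance with a simple linear mapping. The Python sketch below assumes the 6 in. minimum and 10 ft maximum used in the procedure above; these values, and the function itself, are illustrative rather than part of any manufacturer’s interface.

def voltage_to_distance(v_out, v_min=0.0, v_max=5.0, d_min_ft=0.5, d_max_ft=10.0):
    """Map the calibrated analog output voltage to target distance in feet.

    Assumes the sensor was calibrated so that v_min corresponds to the minimum
    distance (6 in. = 0.5 ft here) and v_max to the maximum distance (10 ft here),
    with a linear response in between.
    """
    if not (v_min <= v_out <= v_max):
        raise ValueError("output voltage is outside the calibrated span")
    fraction = (v_out - v_min) / (v_max - v_min)
    return d_min_ft + fraction * (d_max_ft - d_min_ft)

if __name__ == "__main__":
    for v in (0.0, 2.5, 5.0):
        print(f"{v:.1f} V -> {voltage_to_distance(v):.2f} ft")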
1.1.4 Light Section Sensors
Light section methods have been utilized as a three-dimensional measurement technique for many years for noncontact geometry and contour control. The light section sensor is primarily used for automating industrial production processes and testing procedures in which system-relevant positioning parameters are generated from profile information. Two-dimensional height information can be collected by means of laser light, the light section method, and a high-resolution receiver array. Height profiles can be monitored, filling levels detected, magazines counted, and the presence of objects checked.
1.1.4.1 Operating Principle
The light section can be simply achieved in many cases with the laser light section method, a three-dimensional procedure to measure object profiles in one sectional plane. The principle of laser triangulation (see Fig. 1.7) requires a camera positioned orthogonal to the object’s surface to measure the lateral displacement or the deformation of a laser line projected at an angle θ onto the object’s surface (see Fig. 1.8). The elevation profile of interest is calculated from the deviation of the laser line from the zero position. A light section sensor consists of one camera and a laser projector, also called a laser line generator. The measurement principle of a light section sensor is based on active triangulation (Fig. 1.7).
Figure 1.7 Laser triangulation (optical scheme), where h is the elevation measurement range and θ is the angle between the plane of the laser line and the axis of the camera. A high resolution can be obtained by increasing h and decreasing θ, and vice versa.
Figure 1.8 Laser light sectioning is the two-dimensional extension of the laser triangulation. By projecting the expanded laser line an elevation profile of the object under test is obtained. Inset A: Image recorded by the area camera. The displacement of the laser line indicates the object elevation at the point of incidence.
Its simplest realization is scanning a scene with a laser beam and detecting the location of the reflected beam. A laser beam can be spread by passing it through a cylindrical lens, which results in a plane of light. Its profile can be measured in the camera image, thus realizing triangulation along one profile. In order to generate dense range images, one has to project not just one but many light planes (Fig. 1.8). This can be achieved either by moving the projecting device or by projecting many stripes at once. In the latter case the stripes have to be encoded somehow; this is referred to as the coded-light approach. The simplest encoding is achieved by assigning a different brightness to every projection direction, for example, by projecting a linear intensity ramp. Measuring range and resolution are determined by the triangulation angle θ between the plane of the laser line and the optical axis of the camera (see Fig. 1.7). The more grazing this angle, the larger is the observed lateral displacement of the line; the measured resolution is increased, but the measured elevation range is reduced. Criteria related to the object’s surface characteristics, camera aperture or depth of focus, and width of the laser line might reduce the achievable resolution.
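As a numerical illustration of this triangulation relation, the elevation at a point can be recovered from the lateral shift of the laser line observed in the camera image. The sketch below assumes the simple geometry described above (camera viewing the surface orthogonally, laser plane inclined at angle θ to the camera axis, so that the object-space shift equals h·tan θ); the pixel scale factor and all numbers are invented for illustration.

import math

def elevation_from_shift(pixel_shift, mm_per_pixel, theta_deg):
    """Elevation h from the lateral shift of the laser line.

    Assumes the camera views the surface orthogonally and the laser plane is
    inclined at theta_deg to the camera axis, so delta_x = h * tan(theta).
    """
    delta_x_mm = pixel_shift * mm_per_pixel        # shift converted to object space
    return delta_x_mm / math.tan(math.radians(theta_deg))

# Invented numbers: a 12-pixel shift, 0.05 mm per pixel, 30 deg triangulation angle.
print(elevation_from_shift(12, 0.05, 30.0))        # about 1.04 mm of elevation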
1.1.4.2 Application Guide In applications, the laser light section method has some technical details requiring particular attention.
(1) Object surface characteristics. One requirement for the use of the laser light section method is an at least partially diffuse reflecting surface. An ideal mirror would not reflect any laser radiation into the camera lens, so the camera could not view the actual position of the laser light on the object surface. With a completely diffuse reflecting surface, the angular distribution of the reflected radiation is independent of the angle of incidence of the incoming radiation as it hits the object under test (Fig. 1.8). Real technical surfaces usually show a mixture of diffuse and specular behavior. The diffuse reflected radiation is not distributed isotropically, which means that the more grazing the incoming light, the less radiation is reflected in a direction orthogonal to the object’s surface. In the laser light section method, the reflection characteristics of the object’s surface (depending on the emitted laser power and the sensitivity of the camera) limit the achievable triangulation angle θ (Fig. 1.7).
(2) Depth of focus of the camera and lens. To ensure a widely constant signal amplitude on the sensor, the depth of focus of the camera lens as well as the depth of focus of the laser line generator have to cover the complete elevation measurement range. When imaging the object under test onto the camera sensor, the depth of focus of the imaging lens increases proportionally to the aperture number k and the pixel distance y, and quadratically with the imaging factor β (= field of view/sensor size). The depth of focus 2z is calculated as 2z = 2ykβ(1 + β). In the range ±z around the optimum object distance, no reduction in sharpness of the image is evident (a short numeric sketch of these depth-of-focus relations is given after this list). Example: pixel distance y = 0.010 mm, aperture number k = 8, imaging factor β = 3, giving 2z = 2 × 0.010 × 8 × 3 × (1 + 3) = 1.92 mm. With fixed imaging geometry, stopping down the lens aperture increases its depth of focus; however, a larger aperture number k cuts the signal amplitude by a factor of 2 with each aperture step, decreases the optical resolution of the lens, and increases the negative influence of the speckle effect.
(3) Depth of focus of a laser line. The laser line is focused to a fixed working distance. When the actual working distance diverges from this setting, the laser line widens and the power density of the radiation decreases. The region around the nominal working distance within which the line width does not increase by more than a given factor is, by convention, characterized as the depth of focus of the laser line. There are two types of laser line generators: laser micro line generators and laser macro line generators. Laser micro line generators create thin laser lines with a Gaussian intensity profile orthogonal to the laser line. The depth of focus of a laser line of width B at wavelength λ is given by the so-called Rayleigh range 2ZR = πB²/(2λ). Laser macro line generators create laser lines with increased depth of focus; at the same working distance, macro laser lines are wider than micro laser lines. Within the two design types, the respective line width is proportional to the working distance. Because of this theoretical connection between line width and depth of focus, the minimum usable line width is limited by the depth of focus the application requires.
(4) Basic setback: laser speckle. Laser speckling is an interference phenomenon originating from the coherence of the laser radiation, for example, laser radiation reflected by a rough-textured surface. Laser speckle disturbs the edge sharpness and the homogeneity of the laser lines. Orthogonal to the laser line, the center of intensity is displaced stochastically. The granularity of the speckle depends on the setting of the aperture of the lens viewing the object: with a small aperture number the arising speckles have a high spatial frequency, while with a large k number the speckles are rather coarse and particularly disturbing. Because a diffusely reflective, and thus optically rough-textured, surface is essential for the laser light section method, laser speckling cannot be avoided in principle. The disturbing effect can be reduced by (1) utilizing laser beam sources with decreased coherence length, (2) a relative movement between object and sensor, possibly using a necessary or existing movement of the sensor or the object (e.g., the profile measurement of railroad tracks while the train is running), and (3) diminishing the speckle pattern by choosing large lens apertures (small aperture numbers), as long as the depth-of-focus requirements tolerate this.
(5) Dome illuminator for diffuse illumination. The application introduced here requires, simultaneously with the three-dimensional profile measurement, control of the object outline and surface. For this purpose, the object under test is illuminated homogeneously and diffusely by a dome illuminator.
An LED ring lamp generates the illumination, which propagates, scattered by a diffusely reflecting cupola, to the object of interest. In the center of the dome an opening for the camera is located; no radiation falls onto the object from this direction. Shadows and glints are largely avoided. Because the circumstances correspond approximately to the illumination on a cloudy day, this kind of illumination is also called “cloudy day illumination.”
(6) Optical engineering. In a laser light section application with high requirements, the design of the system configuration is of great importance. This “optical engineering” implies the choice and the constructive design of the utilized components, such as the camera, lens, and laser line generator, from the optical point of view. By considering the optical laws and their interactions, an optimum image recording within the given physical boundary conditions is accomplished, and elaborate image preprocessing algorithms are avoided. As a first step toward measuring objects with largely diffuse reflecting surfaces, or with reduced requirements on resolution, cameras and laser line generators from an electronics mail-order catalog may be used for initial system testing (e.g., in a school practicum). These simple laser line generators mostly use a glass rod lens to produce a Gaussian intensity profile along the laser line (as mentioned in the Operating Principle section). With increased requirements, laser lines with largely constant intensity distribution and line width have to be utilized.
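The two depth-of-focus relations quoted in items (2) and (3), 2z = 2ykβ(1 + β) for the imaging lens and the Rayleigh range 2ZR = πB²/(2λ) for the laser line, can be evaluated directly. The short sketch below reproduces the worked example from item (2) and adds an invented laser-line case (a 0.1 mm wide line at 658 nm, a typical red laser diode wavelength); both sets of numbers are illustrative only.

import math

def lens_depth_of_focus(pixel_distance_mm, aperture_number, imaging_factor):
    """Depth of focus of the imaging lens: 2z = 2 * y * k * beta * (1 + beta)."""
    y, k, beta = pixel_distance_mm, aperture_number, imaging_factor
    return 2.0 * y * k * beta * (1.0 + beta)

def rayleigh_range(line_width_mm, wavelength_mm):
    """Depth of focus of a laser line: 2Z_R = pi * B**2 / (2 * lambda)."""
    return math.pi * line_width_mm ** 2 / (2.0 * wavelength_mm)

print(lens_depth_of_focus(0.010, 8, 3))   # 1.92 mm, matching the worked example
print(rayleigh_range(0.1, 658e-6))        # about 23.9 mm for the assumed laser line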
1.1.4.3 Specifications
The specifications of light section sensors are routinely documented with these technical data:
(1) supply voltage, which is the voltage of the direct current power supplied to the sensor;
(2) voltage ripple, which defines the maximum tolerance for the ratio of the maximum voltage bias to the supply voltage;
(3) reverse polarity protected, which indicates whether or not the sensor is protected against reverse polarity;
(4) short circuit protected, which indicates whether or not the sensor is protected from damage if a short circuit occurs;
(5) power consumption, which represents the power consumption of the sensor;
(6) maximum output load, which is the output power value of the sensor;
(7) maximum operation frequency, which is the permitted working frequency of the sensor;
(8) response time tON/tOFF, where tON is the time interval from when the sensor is turned on to when it is ready to be loaded for tasks, and tOFF is the time interval from when the sensor is turned off to when it completely stops;
(9) hysteresis, which is the sensing delay of the sensor;
(10) length of light line, which is the maximum working length of the laser line, etc.
In addition to these technical data, the specifications of light section sensors normally also include some environmental data, as below:
(1) vibration, which is the allowed environmental vibration;
(2) shock, which is the allowable energy of exterior air shocks;
(3) operation temperature, which is the environmental temperature range the sensor tolerates, etc.
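When these data sheet fields are handled in software, for example in a sensor-selection or commissioning tool, it can be convenient to collect them in a single record. The field names and example values below are illustrative assumptions and are not taken from any particular manufacturer’s specification.

from dataclasses import dataclass

@dataclass
class LightSectionSensorSpec:
    supply_voltage_v: float            # DC supply voltage
    voltage_ripple_pct: float          # maximum ripple as % of supply voltage
    reverse_polarity_protected: bool
    short_circuit_protected: bool
    power_consumption_w: float
    max_output_load_w: float
    max_operation_frequency_hz: float
    response_time_on_ms: float         # tON
    response_time_off_ms: float        # tOFF
    hysteresis_mm: float
    light_line_length_mm: float
    vibration_g: float                 # environmental data
    shock_g: float
    operation_temperature_c: tuple     # (min, max)

example = LightSectionSensorSpec(24.0, 10.0, True, True, 4.8, 2.4, 200.0,
                                 5.0, 5.0, 0.5, 600.0, 10.0, 30.0, (-10.0, 50.0))
print(example.supply_voltage_v, example.operation_temperature_c)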
1.1.4.4 Calibration
In the process of calibration, camera and projector parameters are estimated simultaneously using a nonlinear least squares estimation model. The three-dimensional coordinates of checkpoints are introduced as additional unknowns, and thus they are estimated simultaneously with the model parameters. This estimation model was chosen because its output contains not only the estimated parameters but also their accuracy; obtaining the checkpoint accuracy is the major topic of this work. The estimation of camera and projector parameters requires a three-dimensional calibration standard with well-defined target points. These control points are characterized by their three-dimensional coordinates, the two-dimensional coordinates of their images, and their one-dimensional projector coordinates. Given n control points as input, the maximum likelihood estimate of the unknown parameters can be derived by general nonlinear least squares estimation. Here are some important requirements for the calibration standards popularly used in industrial control.
(1) The control points have to be regularly distributed in the three-dimensional workspace.
(2) The images of the control points have to have good contrast and an extent of at least 10 pixels. In this case they can be measured very accurately using least squares template matching.
(3) Highly accurate three-dimensional coordinates of the control points, as well as their errors, have to be available.
For example, assume an aluminum plate with white squares printed on a black background is used as a calibration standard. To achieve a regular distribution of control points in the work space, the calibration plate can be precisely displaced along the third axis. Figure 1.9 shows the section sensor setup as it was used in this work. Both camera and projector are inclined to facilitate the measurement not only of horizontal, but also of vertical, and even of overhanging surfaces for our application of grasping three-dimensional objects. The position of the RSP projected onto the XY plane of the world coordinate system is also given in Fig. 1.9(b). From Fig. 1.9(a) it should be clear that the angle θ between the optical axes of the camera and the projector was chosen to be relatively small; it is approximately 15°. This choice has the advantage of making it possible to measure surfaces over a larger range of orientations, although the achievable accuracy is not as high as it would theoretically be with θ = 90°. Notice that the optical axis of the section sensor, ar, is defined as the symmetry axis of ac and ap in Fig. 1.9(a). The work space in this example (200 × 200 × 100 mm³) is mainly constrained by the depth of focus of the projector and the camera, which is about 200 mm at a distance of 1300 mm. Weights of observations depend on the measurement process. For light section sensor calibration, three types of observations are used: (1) a priori knowledge about the three-dimensional coordinates of the white squares on the calibration plate, (2) image measurements of the square centers, and (3) measurements of the projector coordinates of the square centers.
Figure 1.9 Experimental range sensor setup: (a) relative position of camera, projector, and work space in the range sensor plane (RSP); (b) position of the RSP and the work space projected onto the XY–Z plane of the world coordinate system.
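At its core, the calibration described above is a weighted nonlinear least-squares problem: the unknown parameters are adjusted until the weighted residuals over all observation types are minimized, and the parameter accuracies follow from the resulting covariance. The sketch below demonstrates only that estimation machinery on a deliberately simplified one-dimensional camera model with synthetic data; the model, weights, and numbers are assumptions and do not reproduce the full camera/projector model of the text.

import numpy as np
from scipy.optimize import least_squares

# Simplified model: observed image coordinate u = f * X / Z + c.
# Unknowns: scale f and offset c; the control point coordinates (X, Z)
# are treated as known here to keep the example short.
X = np.array([10.0, 20.0, 30.0, 40.0])
Z = np.array([100.0, 110.0, 120.0, 130.0])
u_obs = np.array([51.1, 91.8, 126.1, 154.7])   # synthetic, slightly noisy data

weight = 1.0 / 0.2                              # assumed 0.2-pixel measurement sigma

def residuals(params):
    f, c = params
    return weight * (f * X / Z + c - u_obs)

fit = least_squares(residuals, x0=[400.0, 0.0])
f_hat, c_hat = fit.x
cov = np.linalg.inv(fit.jac.T @ fit.jac)        # approximate parameter covariance
print(f_hat, c_hat, np.sqrt(np.diag(cov)))      # estimates and their accuracies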
1.1.5 Linear and Rotary Variable Differential Transformers
The linear variable differential transformer (LVDT) is a well-established transducer design which has been used for many decades for the accurate measurement of displacement and, within closed loops, for the control of positioning. The LVDT design lends itself to easy modification to fulfill a whole range of different applications in both research and industry:
(1) pressurized versions for hydraulic cylinder applications;
(2) materials suitable for sea water and marine services;
(3) dimensions to suit specific application requirements;
(4) multichannel, rack amplifier–based systems;
(5) automotive suspension systems.
The rotational variable differential transformer (RVDT) is also a well-established transducer design, used to measure rotational angles, and operates under the same principles as the LVDT sensor. Whereas the LVDT uses a cylindrical iron core, the RVDT uses a rotary ferromagnetic core. Some of the typical applications of the RVDT are:
(1) flight control/navigation;
(2) flap actuators;
(3) fuel control;
(4) cockpit control;
(5) automation assembly equipment.
1.1.5.1 Operating Principle
(1) An LVDT is much like any other transformer in that it consists of a primary coil, secondary coils, and a magnetic core, as illustrated in Fig. 1.10(a). The primary coil (the upper coil in Fig. 1.10(a)) is energized with a constant-amplitude alternating current. This produces an alternating magnetic field in the center of the transducer which induces a signal into the secondary coils (the two lower coils in Fig. 1.10(a)) depending on the position of the core. Movement of the core within this area causes the secondary signal to change (Fig. 1.10(b)). As the two secondary coils are positioned and connected in a set arrangement (push–pull mode), a zero signal is derived when the core is positioned at the center.
Figure 1.10 The working principles of an LVDT: (a) the three coils and the movable core and (b) the displacement system of a sensor.
Movement of the core from this point in either direction causes the signal to increase. As the coils are wound in a particular precise manner, the signal output has a linear relationship with the actual mechanical movement of the core. The secondary output signal is then processed by a phase-sensitive demodulator which is switched at the same frequency as the primary energizing supply. This results in a final output which, after rectification and filtering, gives a direct current output proportional to the core movement and also indicates its direction, positive or negative, from the central zero point (Fig. 1.10(b)). As with any transformer, the voltage of the induced signal in the secondary coil is linearly related to the number of windings. The basic transformer relation is Vout/Vin = Nout/Nin, where Vout is the voltage at the output, Vin is the voltage at the input, Nout is the number of windings of the output coil, and Nin is the number of windings of the input coil. The distinct advantage of using an LVDT displacement transducer is that the moving core does not make contact with other electrical components of the assembly, as with resistive types, and so offers high reliability and long life. Further, the core can be so aligned that an air gap exists around it, which is ideal for applications where minimum mechanical friction is required.
(2) An RVDT is an electromechanical transducer that provides a variable alternating current output voltage linearly proportional to the angular displacement of its input shaft. When energized with a fixed alternating current source, the output signal is linear within a specified range of angular displacement. The RVDT utilizes brushless, noncontacting technology to ensure long life and reliable, repeatable position sensing with infinite resolution; such reliable and repeatable performance ensures accurate position sensing under the most extreme operating conditions. As diagrammed in Fig. 1.11, the basic RVDT construction and operation is provided by a rotating ferromagnetic core, supported by bearings within a housed stator assembly. The housing is passivated stainless steel. The stator consists of a primary excitation coil and a pair of secondary output coils. A fixed alternating current excitation is applied to the primary stator coil, which is electromagnetically coupled to the secondary coils; this coupling is proportional to the angle of the input shaft. The output pair is structured so that one coil is in phase with the excitation coil and the second is 180° out of phase with it. When the rotor is in a position that directs the available flux equally into both the in-phase and out-of-phase coils, the output voltages cancel and result in a zero-value signal.
Figure 1.11 The working principles of a typical RVDT.
This is referred to as the electrical zero position. When the rotor shaft is displaced from the electrical zero position, the resulting output signal has a magnitude and phase relationship corresponding to the amount and direction of rotation. Because the performance of an RVDT is essentially similar to that of a transformer, excitation voltage changes will cause directly proportional changes to the output (transformation ratio); however, the ratio of output voltage to excitation voltage will remain constant. Since most RVDT signal conditioning systems measure the signal as a function of the transformation ratio, excitation voltage drift beyond 7.5% typically has no effect on sensor accuracy, and strict voltage regulation is not typically necessary. Excitation frequency should be controlled within ±1% to maintain accuracy.
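After synchronous demodulation, one common way to recover core position in software from the push–pull secondary pair is the ratiometric combination (Va − Vb)/(Va + Vb), which is zero at the null position and largely insensitive to excitation-amplitude drift, in the same spirit as the RVDT transformation ratio discussed above. The function and scale factor below are a generic illustration under that assumption, not the signal-conditioning scheme of any particular device.

def lvdt_position(v_a, v_b, full_scale_mm, ratio_at_full_scale=1.0):
    """Core displacement from the demodulated secondary amplitudes (push-pull pair).

    A positive result means displacement toward secondary A, negative toward B.
    ratio_at_full_scale is the (Va - Vb)/(Va + Vb) value at full-scale displacement
    and would normally be determined during calibration.
    """
    ratio = (v_a - v_b) / (v_a + v_b)
    return full_scale_mm * ratio / ratio_at_full_scale

# Invented example: equal amplitudes -> core at null; unbalanced -> off-center.
print(lvdt_position(2.5, 2.5, full_scale_mm=10.0))   # 0.0 mm
print(lvdt_position(3.0, 2.0, full_scale_mm=10.0))   # 2.0 mm toward secondary A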
1.1.5.2 Application Guide
The following factors should be considered when selecting an LVDT (AC, alternating current; DC, direct current):
(1) measurement range;
(2) armature type;
(3) AC–AC vs DC–DC operation;
(4) environment.
LVDTs are available with ranges from ±0.01" to ±18.5". An LVDT with a ±18.5" range can be used in one direction to measure up to 37". If accuracy is important, the range selected should not be any larger than necessary. Three armature types are available: free unguided armatures, captive guided spring return armatures, and captive guided armatures. Free unguided armatures are recommended for applications in which the target being measured moves parallel to the transducer body as well as those which require frequent or continuous measurements. This armature type is well suited for dynamic applications. When using a free unguided armature, the armature and the LVDT body must be mounted so that their correct relative positions are maintained. This type of LVDT features an armature/ threaded push rod assembly which is completely separable from the LVDT body. Since the free unguided armature involves no mechanical coupling between the armature and the LVDT body, there are no springs or bearings to fatigue. This unit has a virtually unlimited fatigue life. Captive guided spring return armatures are well suited for those applications requiring the measurement of multiple targets or applications in which the target moves
transverse to the armature and changes in the structure’s surface are to be measured. In this type of LVDT, the armature moves over bearings in the LVDT body. The armature is biased by an internal spring so that the ball-ended probe bears against the surface of the target whose displacement is being measured. The LVDT is held in position by clamping the body alone; the armature is not attached to the target being measured. Captive guided armatures are designed for applications requiring a longer working range. The armature moves freely over machined bearings but cannot be removed from the body. The LVDT body has a threaded mounting hole, and the armature is attached to the structure being measured. The armature end is threaded so that special adapters such as spherical bearings or rollers can be attached. The major advantages of DC–DC LVDTs are ease of installation, the ability to operate from dry cell batteries in remote locations, and lower system cost, while the advantages of AC–AC LVDTs include greater accuracy and a smaller body size. An AC–AC LVDT can be equipped with more sophisticated electronics such as SENSOTEC SC instrumentation; the SC instrument provides an AC power supply, a phase-sensitive demodulator, a scaling amplifier, and a DC output. The AC–AC LVDT system has less residual noise at minimum readings than DC–DC units, which utilize internal electronics. For applications involving very high humidity or requiring submersion of the LVDT, a submersible LVDT is required. Submersible units are available for either AC–AC or DC–DC operation and with free unguided or captive spring return armatures. The unit selected should also operate and survive at the temperatures dictated by the application. Note that AC–AC units will operate at higher temperatures (up to 257°F) than DC–DC units (up to 158°F), which have internal electronics. Side loads must be kept to a minimum since they will cause rubbing between the armature and the LVDT body. This friction will cause excessive wear of the bearings, and in extreme cases the armature may bend. At a minimum, side loads will reduce the unit’s life and accuracy.
1.1.5.3 Calibration
The manual calibration procedure for both LVDT and RVDT sensors is to check and adjust the zero and gain settings of the signal conditioner, which includes these steps: (1) First, the output of the signal conditioner at zero displacement should be set to zero. (2) Second, the micrometer should be traversed to the maximum displacement that can be anticipated in the calibration experiment. (3) The gain setting should be adjusted to attain a transducer “high” output value.
(4) Finally, the “reference values” of the zero, the maximum displacement, and the corresponding transducer output must be recorded. In further experiments, it may be necessary to set the zero and gain of the signal conditioner to these “reference values” and use the calibration results or curves obtained from this laboratory for displacement measurements. The laboratory assignments are executed with the following procedure: (1) Take at least five different settings to cover the displacement range of 0–1 in. For each displacement setting, repeat five times for increasing displacement (low to high) and five times for decreasing displacement (high to low) to check for hysteresis. (2) Establish a spreadsheet and calculate the average voltage for each displacement. (3) Plot the averaged voltage output of the signal conditioner vs displacement and obtain a linear correlation between the averaged voltage output and displacement using a least squares technique (e.g., in the form y = a + bx ± c).
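Step (3) of this procedure, obtaining the linear correlation by least squares, can be carried out directly with a first-degree polynomial fit. The readings below are invented; the ±c scatter term is taken here as the maximum residual of the fit, which is one reasonable convention but should be replaced by whatever the laboratory specifies.

import numpy as np

# Invented averaged readings: displacement (in.) vs averaged output voltage (V).
displacement = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
voltage = np.array([0.02, 1.26, 2.49, 3.76, 5.01])

b, a = np.polyfit(displacement, voltage, 1)        # fit y = a + b*x
residuals = voltage - (a + b * displacement)
c = np.max(np.abs(residuals))                       # scatter band for y = a + b*x +/- c

print(f"y = {a:.3f} + {b:.3f} x  (+/- {c:.3f} V)")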
1.1.6 Magnetic Control Systems
Magnetic control systems are dominated by magnetic field sensors, magnetic switches, and instruments that measure magnetic fields and/or magnetic flux by evaluating a potential, current, or resistance change due to the field strength and direction. They are used to study the magnetic field or flux around the Earth, permanent magnets, coils, and electrical devices, in applications such as displacement measurement, rail inspection systems, and linear potentiometer replacement. Magnetic field sensors and switches can measure and control these properties without physical contact and have become the eyes of many industrial and navigation control systems. Magnetic field sensors and switches are typically applied in industrial control for proximity detection, displacement sensing, rotational reference detection, current sensing, and vehicle detection. Magnetic field sensors indirectly measure properties such as direction, position, rotation, angle, and current by detecting the magnetic field and its changes. The first application of a permanent magnet was a third-century BC Chinese compass, which is a direction sensor. Compared to direct methods such as optical or mechanical sensors, most magnetic field sensors require some signal processing to obtain the property of interest. However, they provide reliable data without physical contact even in adverse conditions such as dirt, vibration, moisture, and hazardous gas and oil. At present,
there are two kinds of magnetic switches: magnetic reed switches and magnetic level switches. Magnetic switches are suitable for applications requiring a switched output for proximity, linear limit detection, logging or counting, or actuation purposes. Fluxgate and coil instruments perform a continuous measurement of the differences in the magnetic field at the ends of a vertical rod and plot these differences on a grid of the area. A Hall effect device converts the energy stored in a magnetic field into an electrical signal by means of the voltage developed between the two edges of a current-carrying conductor whose faces are perpendicular to a magnetic field.
1.1.6.1 Operating Principle
A unique aspect of using magnetic sensors and switches is that measuring the magnetic field is usually not the primary intent. Another parameter is usually desired, such as wheel speed, the presence of magnetic ink, vehicle detection, or heading determination. These parameters cannot be measured directly but can be extracted from changes or disturbances in magnetic fields. The most widely used magnetic sensors and switches are Hall effect sensors and switches, magnetoresistive (MR) sensors and switches, magnetic reed switches, magnetic level switches, etc.
(1) Hall effect sensors and switches. The Hall effect is a conduction phenomenon which is different for different charge carriers. In most common electrical applications, the conventional current is used partly because it makes no difference whether positive or negative charge is considered to be moving. But the Hall voltage has a different polarity for positive and negative charge carriers, and it has been used to study the details of conduction in semiconductors and other materials which show a combination of negative and positive charge carriers. The Hall effect can be used to measure the average drift velocity of the charge carriers by mechanically moving the Hall probe at different speeds until the Hall voltage disappears, showing that the charge carriers are then not moving with respect to the magnetic field. Other types of carrier behavior are studied in the quantum Hall effect. An alternative application of the Hall effect is that it can be used to measure magnetic fields with a Hall probe.
Figure 1.12 The Hall effect: the magnetic force Fm on the moving charge carriers is balanced by the electric force Fe arising from the charge buildup across the conductor, producing the Hall voltage VH.
As shown in Fig. 1.12, if an electric current flows through a conductor in a magnetic field, the magnetic field exerts a transverse force on the moving charge carriers, which tends to push them to one side of the conductor. This is most evident in a thin, flat conductor as illustrated. A buildup of charge at the sides of the conductor balances this magnetic influence, producing a measurable voltage between the two sides of the conductor. The presence of this measurable transverse voltage is called the Hall effect, after E. H. Hall who discovered it in 1879. Note that the direction of the current I in the diagram is that of conventional current, so that the motion of electrons is in the opposite direction; this further complicates the “right-hand rule” manipulations needed to get the direction of the forces. As displayed in Fig. 1.12, the Hall voltage VH is given by VH = IB/(ned), where I is the electric current, B is the strength of the magnetic field, n is the density of mobile charges, e is the electron charge, and d is the thickness of the film.
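The relation VH = IB/(ned) is easy to evaluate numerically. The figures below (a thin copper strip carrying 1 A in a 1 T field) are a textbook-style illustration chosen only to show how small the Hall voltage is in a good conductor, which is why practical Hall sensors use semiconductor films with far lower carrier densities; none of the numbers come from the handbook.

def hall_voltage(current_a, field_t, carrier_density_per_m3, thickness_m,
                 electron_charge=1.602e-19):
    """Hall voltage V_H = I * B / (n * e * d)."""
    return current_a * field_t / (carrier_density_per_m3 * electron_charge * thickness_m)

# Illustrative numbers: a 0.1 mm thick copper strip, 1 A drive current, 1 T field.
v_h = hall_voltage(1.0, 1.0, 8.5e28, 1.0e-4)
print(f"{v_h * 1e6:.2f} microvolts")     # a fraction of a microvolt in a metal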
Figure 1.13 The functional principle of a Hall sensor.
In the Hall sensor, the Hall element with its entire evaluation circuitry is integrated on a single silicon chip. The Hall plate, with the current terminals and the taps for the Hall voltage, is arranged on the surface of the crystal. This sensor element detects the component of the magnetic flux perpendicular to the surface of the chip and emits a proportional electrical signal which is processed in the evaluation circuits integrated on the sensor chip. The functional principle of a Hall sensor is, as displayed in Fig. 1.13, that the output voltage of the sensor and the switching state, respectively, depend on the magnetic flux density through the Hall plate.
(2) Magnetoresistive sensors and switches. Magnetoresistance is the property of some conductive materials to gain or lose some of their electrical resistance when placed inside a magnetic field. The resistivity of some materials is greatly affected when the material is subjected to a magnetic field. The magnitude of this effect is known as the magnetoresistance and can be expressed by the equation MR = (ρ(H) − ρ(0))/ρ(0), where MR is the magnetoresistance, ρ(0) is the resistivity at zero magnetic field, and ρ(H) is the resistivity in an applied magnetic field. The magnetoresistance of conventional materials is quite small, but materials with large magnetoresistance have now been synthesized; depending on the magnitude, the effect is called either giant magnetoresistance (GMR) or colossal magnetoresistance (CMR). Magnetoresistive sensor or switch elements are magnetically controllable resistors. These elements exploit the effect whereby the electric resistance of a thin, anisotropic ferromagnetic layer changes under a magnetic field. The determining factor for the specific resistance is the angle formed by the internal direction of magnetization (M) and the direction of the current flow (I). Resistance is largest if the current flow (I) and the direction of magnetization (M) run parallel, and smallest in the base material at an angle of 90° between the current flow (I) and the direction of magnetization (M). In addition, highly conductive material is applied at an angle of 45°. The current passing through the sensor element takes the shortest distance between these conductive regions, which means that it flows at a preferred direction of 45° to the longitudinal axis of the sensor element. Without an external field, the resistance of the element is then in the medium range. An external magnetic field with field strength (H) influences the internal direction of magnetization, which causes the resistance to change as a function of this influence.
Figure 1.14 gives an example using Permalloy (NiFe) film to illustrate that the resistance of a material depends upon the angle between the internal direction of magnetization M and the direction of the current flow I. The actual sensor element is often designed with four magnetic-field-sensitive resistors interconnected to form a measuring bridge (Fig. 1.15). The measuring bridge is energized and supplies a bridge voltage.
Figure 1.14 As an example, the Permalloy (NiFe) film has a magnetization vector, M, that is influenced by the applied magnetic field being measured. The resistance of the film changes as a function of the angle between the vector M and the current flow, I, flowing through it. This change in resistance is known as the magnetoresistive effect.
Figure 1.15 As an example, the magnetoresistive bridge is made up of four Permalloy parallel strips. A crosshatch pattern of metal is overlaid onto the strips to form shorting bars. The current then flows through the Permalloy, taking the shortest path, at a 45° angle from shorting bar to shorting bar. This establishes the bias angle between the magnetization vector M of the film and the current I flowing through it.
A magnetic field which influences the bridge branches to different degrees leads to a voltage difference between the bridge branches, which is then amplified and evaluated (a short numeric sketch of this bridge evaluation is given at the end of this subsection, after Fig. 1.16). The sensor detects the movement of ferromagnetic structures (e.g., gearwheels) through the changes they cause in the magnetic flux. The sensor element is biased with a permanent magnet. A tooth or a gap moving past the sensor influences the magnetic field to different degrees, which changes the field-dependent resistance values in the magnetoresistive sensor. The changes in the magnetic field can therefore be converted into an electric variable and conditioned accordingly. The output signal from the sensor is a square wave voltage which reflects the changes in the magnetic field. Changes in the magnetic field cause the bridge voltage to be deflected; this voltage is amplified and supplied to a Schmitt trigger after conditioning. If the effective signal reaches an adequate level, the output stage is set accordingly. The sensor is used for noncontact rotational speed detection on ferromagnetic objects such as gearwheels. The distance between the sensed object and the active surface of the sensor is described as the air gap; the maximum air gap depends on the geometry of the object. The measurement principle dictates a direction-dependent installation. The magnetoresistive sensor is sensitive to changes in the external magnetic field, so the sensed objects should not have different degrees of magnetization.
(3) Magnetic switches. Most magnetic switches work with one of two mechanisms: the magnetic reed switch or the magnetic level switch. Magnetic reed switches normally consist of two overlapping flat contacts which are sealed into a glass tube filled with inert gas. When approached by a permanent magnet, the contact ends attract each other and make contact; when the magnet is removed, the contacts separate immediately (Fig. 1.16). For magnetic level switches, the operation is achieved using the time-proven principle of repelling magnetic forces. One permanent magnet forms part of a float assembly which rises and falls with changing liquid level. A second permanent magnet is positioned within the switch head so that the adjacent poles of the two magnets repel each other through a nonmagnetic diaphragm. A change in liquid level moves the float through its permissible range of travel, causing the float magnet to pivot and repel the switch magnet. The resulting snap action of the repelling magnets actuates the switch.
Figure 1.16 Operating principle of a simple magnetic reed switch.
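The bridge evaluation referred to above amounts to taking the difference of the two divider voltages and scaling it by the bias voltage. The sketch below is a minimal, generic illustration of that arithmetic for four bridge resistors; the resistance values and the 0.5% imbalance are invented.

def bridge_output(v_bias, r1, r2, r3, r4):
    """Differential output of a four-resistor measuring bridge.

    r1/r2 form the divider feeding one output node, r3/r4 the other; the output
    is the difference of the two divider voltages.
    """
    return v_bias * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Invented values: a 5 V bias and a 0.5% field-induced imbalance between branches.
r = 1000.0
delta = 0.005 * r
print(bridge_output(5.0, r - delta, r + delta, r + delta, r - delta))   # 0.025 V, i.e. about 25 mV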
1.1.6.2 Basic Types and Application Guide
One way to classify the various magnetic sensors and switches is by the magnetic field sensing range. These sensors and switches can be arbitrarily divided into three categories: (1) low field, (2) medium field, and (3) high field sensing. Sensors that detect magnetic fields of less than 1 µG are classed as low field sensors, sensors with a range of 1 µG to 10 G are considered Earth’s field sensors, and sensors that detect fields above 10 G are considered bias magnet field sensors in this book. Only those magnetic field sensors and switches used in industrial control are described below; low field magnetic sensors are omitted because they are primarily used for medical applications and laboratory research.
(1) Earth’s field sensors (1 µG to 10 G). The magnetic range of the medium field sensors lends itself well to using the Earth’s magnetic field. Several ways to use the Earth’s field are to determine compass headings for navigation, to detect anomalies in it for vehicle detection, and to measure the derivative of the change in field to determine yaw rate.
(a) Fluxgate. Fluxgate magnetometers are the most widely used sensors for compass navigation systems. Fluxgate sensors have also been used for geophysical prospecting and airborne magnetic field mapping. The most common type of fluxgate magnetometer is called the second harmonic device. This device involves two coils, a primary and a secondary, wrapped around a high-permeability ferromagnetic core. The magnetic induction of this core changes in the presence of an external magnetic field. Another way of looking at the fluxgate operating principle is to sense the ease, or resistance, of saturating the core caused by the change in its magnetic flux; the difference is due to the external magnetic field.
(b) Magnetoinductive. Magnetoinductive magnetometers are relatively new, with the first patent issued in 1989. The sensor is simply a single-winding coil on a ferromagnetic core that changes permeability within the Earth’s field. The sense coil is the inductance element in an L/R relaxation oscillator, and the frequency of the oscillator is proportional to the field being measured. A static DC current is used to bias the coil in a linear region of operation.
(c) Anisotropic magnetoresistive (AMR). Magnetoresistive (MR) sensors come in a variety of shapes and forms. The newest market growth for MR sensors is high density read heads for tape and disk drives. Most AMR sensors are made of Permalloy (NiFe) thin film deposited onto a silicon substrate and patterned to form a Wheatstone resistor bridge. AMR sensors provide an excellent means of measuring both linear and angular position and displacement in the Earth’s magnetic field. Permalloy thin films deposited on a silicon substrate in various resistor bridge configurations provide highly predictable outputs when subjected to magnetic fields. Low cost, high sensitivity, small size, noise immunity, and reliability are advantages over mechanical or other electrical alternatives. Highly adaptable and easy to assemble, these sensors solve a variety of problems in custom applications.
(2) Bias magnet field sensors (above 10 G). Most industrial sensors use permanent magnets as a source of the detected magnetic field. These permanent magnets magnetize, or bias, ferromagnetic objects close to the sensor, and the sensor then detects the change in the total field at the sensor. Bias field sensors must not only detect fields which are typically larger than the Earth’s field, but must also not be permanently affected or temporarily upset by a large field. Sensors in this category include reed switches, InSb magnetoresistors, Hall devices, and GMR sensors.
(a) Reed switch. The reed switch can be considered the simplest magnetic sensor used to produce a usable output for industrial control. Reed switches are maintenance free and highly immune to dirt and contamination. Rhodium-plated contacts ensure long contact life. Typical capabilities are 0.1–0.2 A switching current and 100–200 V switching voltage. Contact life is measured at 10⁶–10⁷ operations at 10 mA. Reed switches are available with normally open, normally closed, and class C contacts. Latching reed switches are also available. Mercury-wetted reed switches can switch currents as high as 1 A and have no contact bounce.
Low cost, simplicity, reliability, and zero power consumption make reed switches popular in many applications. The addition of a separate small permanent magnet yields a simple proximity switch often used in security systems to monitor the opening of doors or windows. The magnet, affixed to the movable part, activates the reed switch when it comes close enough. The desire to sense almost everything in cars is increasing the number of reed switch sensing applications in the automotive industry.
(b) Lorentz force devices. Several sensors utilize the Lorentz force, or Hall effect, on charge carriers in a semiconductor. The Lorentz force equation describes the force FL experienced by a charged particle with charge q moving with velocity v in a magnetic field B: FL = q(v × B). The Hall effect is a consequence of the Lorentz force in semiconductor materials. The following sensors work with the Lorentz force and the Hall effect.
(i) Magnetoresistors. The simplest Lorentz force devices are magnetoresistors using semiconductors with high room-temperature carrier mobility. If a voltage is applied along the length of a thin slab of semiconductor material, a current will flow and a resistance can be measured. When a magnetic field is applied perpendicular to the slab, the Lorentz force deflects the charge carriers. If the width of the slab is greater than the length, the charge carriers will cross the slab without a significant number of them collecting along the sides. The effect of the magnetic field is to increase the length of their path and, therefore, the resistance. An increase in resistance of several hundred percent is possible in large fields. In order to produce sensors with hundreds to thousands of ohms of resistance, long, narrow semiconductor stripes a few microns wide are produced using photolithography. The required length-to-width ratio is accomplished by forming periodic low-resistance metal shorting bars across the traces. Each shorting bar produces an equipotential across the semiconductor stripe. The result is, in effect, a number of small semiconductor elements with the proper length-to-width ratio connected in series.
(ii) Hall effect sensors. The second type of sensor that utilizes the Lorentz force on charge carriers is the Hall sensor.
The Hall resistance and Hall voltage increase linearly with applied field up to several teslas (tens of kilogauss). The temperature dependence of the Hall voltage and the input resistance of Hall sensors are governed by the temperature dependence of the carrier mobility and that of the Hall coefficient. The Hall voltage is measured between electrodes placed at the middle of each side. This differential voltage is proportional to the magnetic field perpendicular to the slab, and it changes sign when the sign of the magnetic field changes. The ratio of the Hall voltage to the input current is called the Hall resistance, and the ratio of the applied voltage to the input current is called the input resistance.
(iii) Integrated Hall sensors. Hall devices are often combined with semiconductor elements to make integrated sensors. By adding comparators and output devices to a Hall element, manufacturers provide unipolar and bipolar digital switches. Adding an amplifier increases the relatively low voltage signals from a Hall device to produce ratiometric linear Hall sensors with an output centered on one-half the supply voltage.
(c) Giant magnetoresistive (GMR) devices. Large magnetic-field-dependent changes in resistance are possible in thin-film ferromagnetic and/or nonmagnetic metallic multilayers. This phenomenon was first observed in 1988, with changes in resistance with magnetic field of up to 70%. Compared to the few percent change in resistance observed in anisotropic magnetoresistance (AMR), this phenomenon was truly giant magnetoresistance (GMR). GMR devices involve a sandwich of two outer layers of a ferromagnetic material, such as cobalt or iron, with a center of a nonmagnetic metal. One of the ferromagnetic layers is kept under a constant magnetic field, while the other layer is exposed to the variable magnetic field to be sensed. Maximum current flows when both ferromagnetic layers are magnetized in the same direction; minimum current flows when the layers are magnetized in opposite directions. GMR sensors are currently used in read–write heads for disk drives.
(i) Unpinned sandwich GMR. These materials consist of two soft magnetic layers of iron, nickel, and cobalt alloys separated by a layer of a nonmagnetic conductor such as copper. With magnetic layers 4–6 nm (40–60 Å) thick separated by a conductor layer typically 3–5 nm thick, there is relatively little magnetic coupling between the layers.
For use in sensors, sandwich material is usually patterned into narrow stripes. The magnetic field caused by a current of a few milliamperes per micron of stripe width flowing along the stripe is sufficient to rotate the magnetic layers into antiparallel, or high-resistance, alignment. An external magnetic field of 3–4 kA/m applied along the length of the stripe is sufficient to overcome the field from the current and rotate the magnetic moments of both layers parallel to the external field. A positive or negative external field parallel to the stripe will produce the same change in resistance. An external field applied perpendicular to the stripe will have little effect due to the demagnetizing fields associated with the extremely narrow dimensions of these magnetic objects. The value usually associated with the GMR effect is the percentage change in resistance normalized by the saturated, or minimum, resistance. Sandwich materials have GMR values of typically 4–9% and saturate with a 2.4–5 kA/m applied field. Figure 1.21 shows a typical resistance vs field plot for sandwich GMR material.
(ii) Antiferromagnetic multilayer. These materials consist of multiple repetitions of alternating conducting magnetic layers and nonmagnetic layers. Since multilayers have more interfaces than sandwiches do, the size of the GMR effect is larger. The thickness of the nonmagnetic layers is less than that for sandwich material (typically 1.5–2.0 nm), and the thickness is critical. Only for certain thicknesses do the polarized conduction electrons cause antiferromagnetic coupling between the magnetic layers. Each magnetic layer then has its magnetic moment antiparallel to the moments of the magnetic layers on each side, exactly the condition needed for maximum spin-dependent scattering. A large external field can overcome the coupling which causes this alignment and can align the moments so that all the layers are parallel: the low-resistance state. If the conducting layer is not of the proper thickness, the same coupling mechanism can cause ferromagnetic coupling between the magnetic layers, resulting in no GMR effect.
(iii) Spin valves. These materials, also called antiferromagnetically pinned spin valves, are similar to the unpinned spin valves or sandwich materials described earlier. An additional layer of an antiferromagnetic material is provided on the top or the bottom.
The antiferromagnetic material couples to the adjacent magnetic layer and pins it in a fixed direction. The other magnetic layer is free to rotate. These materials do not require the field from a current to achieve antiparallel alignment.
(iv) Colossal magnetoresistance (CMR). CMR occurs in crystals of manganese oxide known as manganites. Under certain conditions these mixed oxides undergo a semiconductor-to-metal transition with the application of a magnetic field of a few teslas (tens of kilogauss). The size of the resistance ratios, measured at 10³–10⁸%, has generated considerable excitement even though they require high fields and liquid nitrogen temperatures. The MR of these crystals actually decreases in the presence of a magnetic field, and CMR requires cryogenic cooling.
1.1.7 Limit Switches
A limit switch is an electromechanical device that can be used to determine the physical position of equipment. For example, an extension on a valve shaft mechanically trips a limit switch as it moves from open to shut or shut to open. The limit switch is designed to give a signal to an industrial control system when a moving component like an overhead door or piece of machinery has reached the limit (end point) for its travel or just a specific point on its journey. The primary purpose of the limit switch is to control the intermediate or end limits of a linear or rotary motion. In industrial control systems, the limit switch is often used as a safety device to protect against accidental damage to equipment.
1.1.7.1 Operating Principle
A linear limit switch is an electromechanical device that requires physical contact between an object and the actuator of the switch to make the contacts change state. As an object (target) makes contact with the actuator of the switch, it moves the actuator to the “limit” where the contacts change state. Limit switches can be used in almost any industrial environment because of their typically rugged design. However, the device uses mechanical parts that can wear over time, and it is “slower” when compared to noncontact electrical devices such as proximity sensors and photoelectric sensors. Rotary limit switches are similar to relays in that they are used to allow or prevent current flow when in the closed or open position. The groupings within this family are usually defined based on the manner in which the switch is actuated (e.g., rocker, foot, reed, lever, etc.).
Switches can range from simple push button devices, usually used to delineate between ON and OFF, to rotary and toggle devices for varying levels, through to multiple-entry keypads for multiple control functions. In addition to maintaining or interrupting flow and maintaining flow levels, switches are used in safety applications as security devices (locker switches) and as functionary actuators when controlled by sensors or computer systems. Normally, the limit switch gives an ON/OFF output that corresponds to valve position. Limit switches are used to provide full open or full shut indications, as illustrated in Fig. 1.17, which shows a typical linear limit switch operation. Many limit switches are of the push button variety. In Fig. 1.17, when the valve extension comes in contact with the limit switch, the switch depresses to complete, or turn on, the electrical circuit. As the valve extension moves away from the limit switch, spring pressure opens the switch, turning off the circuit. Limit switch failures are normally mechanical in nature. If the proper indication or control function is not achieved, the limit switch is probably faulty. In this case, local position indication should be used to verify equipment position.
Figure 1.17 Operations of a linear limit switch.
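To make the indicating-circuit behavior of Fig. 1.17 concrete, the short Python sketch below derives a valve position indication from the two limit switch contacts. It is only an illustration under stated assumptions: the read_input() call and the channel names are hypothetical stand-ins for whatever digital-input interface the control system actually provides.

    # Minimal sketch: deriving a valve position indication from two limit
    # switches (full open, full closed), as in Fig. 1.17. read_input() is a
    # hypothetical stand-in for the real digital-input API.

    def read_input(channel):
        """Hypothetical digital-input read; returns True when the switch
        contact is closed (depressed by the valve stem extension)."""
        raise NotImplementedError("replace with the real I/O call")

    def valve_position(open_channel="LS_FULL_OPEN", closed_channel="LS_FULL_CLOSED"):
        full_open = read_input(open_channel)
        full_closed = read_input(closed_channel)
        if full_open and full_closed:
            # Both end-of-travel switches made at once normally indicates a
            # faulty switch; verify with local position indication.
            return "FAULT"
        if full_open:
            return "OPEN"
        if full_closed:
            return "CLOSED"
        return "INTERMEDIATE"   # valve is travelling between its end limits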
1.1.7.2 Basic Types and Application Guide
Limit switches come in many different forms, from small, enclosed switches to large, heavy-duty multicircuit switches, and are divided into several types as given below. The basic types of limit switches are (1) linear limit switches, where an object moves a lever (or simply depresses a plunger) on the switch far enough for the contact in the switch to change state; (2) rotary limit switches, where a shaft must turn a preset number of revolutions before the contact changes state, used in cranes, overhead doors, etc.; and (3) magnetic limit switches, or reed switches, where the object is not touched but sensed. One heavy-duty application would be in mine hoists, where a stationary switch senses a strong magnet mounted to a car going up or down the mine shaft.

(1) Linear limit switches. Linear limit switches basically have these three types:

(a) Safety limit switches. Safety limit switches are designed for use with movable machine guards/access gates, which must be closed for operator safety, and for any other presence- and position-sensing application normally addressed with conventional limit switches. Their positive-opening normally closed (NC) contacts provide a higher degree of reliability than conventional spring-driven switches, whose contacts can weld or stick shut. These limit switches are offered in multiple actuator styles and four 90° head positions, so they provide application versatility.

(b) Comprehensive range limit switches. Comprehensive range limit switches are provided in the most popular industrial sizes, shapes, and contact configurations, as well as an innovative series of encapsulated limit switches, available with a connection cable or plug. Their output contacts are offered in snap action, slow action, or overlapping configurations. Various types of push button and roller actuators, roller levers, and multidirectional levers are available. These switches are ideal for manufacturers of material handling, packaging, conveying, and machine tool equipment.

(c) Mechanical limit switches. Mechanical limit switches are frequently used in position detection on doors and machinery and in parts detection on conveyors and assembly lines. Their range comprises various housing and actuator styles, a choice of slow or fast action contacts, and various contact arrangements.
(2) Rotary limit switches. Rotary switches move in a circle and can stop at several positions in their range; they are provided in single-deck and multiple-deck configurations.

(a) Single-deck rotary switches. Single-deck rotary switches can control several circuits at a time. Actuator choices for single-deck rotary switches include flush actuator, bare shaft actuator, knobbed shaft, and locker. In a flush actuator configuration the actuator does not project above the switch body; typically it requires a screwdriver for operation. In a bare shaft actuator configuration the shaft has no knob, but may be notched to accept various knob configurations. A knobbed shaft comes with an integral knob. In a locker configuration the actuation is done with a key or other security or tamperproof method.

Important physical switch specifications to consider when searching for single-deck rotary switches include angle between positions, mechanical life, number of decks, number of poles per deck, and number of poles. The angle between positions is the angular distance (in degrees) between adjacent positions. For example, for a 4-position switch the angle of throw is 90°, and for a 100-position switch the angle of throw is 3.6°. The mechanical life is the maximum life expectancy of the switch. Often, electrical life expectancy is less than mechanical life (please consult the manufacturer). The number of decks specifies the maximum number of decks that can be attached to a common actuating shaft. The number of poles per deck refers to the number of separate circuits that can be activated through a switch at any given time per deck. The number of poles refers to the number of separate circuits that can be activated through a switch at any given time.

Important electrical switch specifications to consider include maximum current rating, maximum AC voltage rating, and maximum DC voltage rating. Common materials of construction for the base and actuator include plastics and metals. Other specifications to consider for single-deck rotary switches include stop style, contact style, actuator features, terminal type, features, and environmental parameters. Stop style choices include fixed stop, adjustable stop, and continuous or no stops. Contact style choices are shorting or nonshorting. Actuator features include integral potentiometer, actuator detents, tease proof, and guarded positions. Terminal type choices include wire leads, solder terminals, screw terminals, and PCB pins. An important environmental parameter to consider is the operating temperature.
(b) Multiple-deck rotary switches. Multiple-deck rotary switches can control several circuits simultaneously. Actuator choices for multiple-deck rotary switches include flush actuator, bare shaft actuator, knobbed shaft, and locker. In a flush actuator configuration the actuator does not project above the switch body, and typically requires a screwdriver for operation. In a bare shaft actuator configuration the shaft has no knob, but may be notched to accept various knob configurations. A knobbed shaft comes with an integral knob. In a locker configuration the actuation is done with a key or other secure or tamperproof method.

Important physical switch specifications to consider when searching for multiple-deck rotary switches include angle between positions, mechanical life, number of decks, and number of poles per deck. The angle between positions is the angular distance (in degrees) between adjacent positions. For example, for a 4-position switch the angle of throw is 90°; for a 100-position switch the angle of throw is 3.6°. The mechanical life is the maximum life expectancy of the switch. Often, electrical life expectancy is less than mechanical life (please consult the manufacturer). The number of decks specifies the maximum number of decks that can be attached to a common actuating shaft. The number of poles per deck refers to the number of separate circuits that can be activated through a switch at any given time per deck.

Important electrical switch specifications to consider include maximum current rating, maximum AC voltage rating, and maximum DC voltage rating. Common materials of construction for the base and actuator include plastics and metals. Other specifications to consider for multiple-deck rotary switches include stop style, contact style, actuator features, terminal type, features, and environmental parameters. Stop style choices include fixed stop, adjustable stop, and continuous or no stops. Contact style choices are shorting or nonshorting. Actuator features include integral potentiometer, actuator detents, tease proof, and guarded positions. Terminal type choices include wire leads, solder terminals, screw terminals, and PCB pins. Common features for multiple-deck rotary switches include optional coded outputs, momentary on, wiping contacts, CE certification, CSA certification, UL listing, dustproofing, and weather-resistant or waterproof construction. An important environmental parameter to consider is the operating temperature.
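Since the angle-of-throw figures quoted for both single- and multiple-deck switches come straight from dividing a full revolution by the number of positions, a one-line helper (purely illustrative) makes the arithmetic explicit.

    def angle_of_throw(num_positions):
        """Angle between adjacent positions of a rotary switch, in degrees."""
        return 360.0 / num_positions

    # angle_of_throw(4)   -> 90.0 degrees
    # angle_of_throw(100) -> 3.6 degrees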
(3) Magnetic limit switches. Magnetic limit switches work based on the operation of reed switches (see Section 1.2.8), and are more reliable than mechanical limit switches because of their simplified construction. The switches are constructed of flexible ferrous strips (reeds) and are placed near the intended travel of the valve stem or control rod extension. When using reed switches, the extension used is a permanent magnet. As the magnet approaches the reed switch, the switch shuts. When the magnet moves away, the reed switch opens. This ON/OFF indication is similar to that of a mechanical limit switch. By using a large number of magnetic reed switches, incremental position can be measured. Failures are normally limited to a reed switch that is stuck open or stuck shut. If a reed switch is stuck shut, the open (closed) indication will be continuously illuminated. If a reed switch is stuck open, the position indicator for that switch remains extinguished regardless of valve position.
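As a rough illustration of the incremental position measurement just described, the sketch below scans a hypothetical array of reed switches and reports the magnet (stem) position, flagging the stuck-shut failure mode; read_reed(), the switch count, and the mounting pitch are all assumptions made for the example.

    # Illustrative sketch: incremental position from an array of reed switches.
    # read_reed(i) is a hypothetical call returning True when reed switch i is
    # closed (magnet nearby); SPACING_MM is an assumed mounting pitch.

    SPACING_MM = 25.0
    NUM_SWITCHES = 10

    def read_reed(index):
        raise NotImplementedError("replace with the real I/O call")

    def estimate_position():
        """Return the approximate stem position in mm, or None if no switch
        is closed (possible stuck-open failure or magnet out of range)."""
        closed = [i for i in range(NUM_SWITCHES) if read_reed(i)]
        if not closed:
            return None
        if len(closed) > 2:
            # More than two adjacent switches closed suggests a stuck-shut reed.
            raise RuntimeError("suspect stuck-shut reed switch(es): %s" % closed)
        # Take the mean index of the closed switches as the magnet location.
        return sum(closed) / len(closed) * SPACING_MM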
1.1.7.3 Calibration
Limit switches are not adjusted at the factory. Limit switches should be set during installation according to the specifications from the factories and vendors. In most cases, these adjustments are performed by turning a few screws. Figure 1.18 is an example of a rotary limit switch that provides eight positions. By turning the adjustment screws, each of these positions can be set to suit the installation.
Figure 1.18 The positions of a rotary limit switch.
1.1.8 Photoelectric Devices
Photoelectric sensors and switches represent perhaps the largest variety of problem-solving choices in the industrial sensor market. Today, photoelectric technology has advanced to the point where it is common to find a sensor or a switch that can detect a target less than 1 mm in diameter, while other units have a sensing range up to 60 m. These factors make them extremely adaptable in an endless array of applications, and photoelectric sensors and switches and their future successors will continue to be key detection devices that can be depended on.

Photoelectric sensors and switches are used extensively on packaging machinery and automatic door systems, in the automotive and metal industries, and in the food processing industry. For example, they are used for detecting the presence of automobile wheel rims and tires on an assembly line in the automotive and metal industries, or for detecting part cases on an assembly line in food processing and packaging. A very familiar application of photoelectric switches is turning a trap on at dark and off at daylight; this enables the user to set the trap out earlier and retrieve it later in the morning while reducing wear on the battery. Quite often, a garage door opener has a through-beam photoelectric sensor mounted near the floor, across the width of the door. This sensor makes sure nothing is in the path of the door when it is closing.

A more industrial application for a photoelectric device is detecting objects on a conveyor. An object will be detected anywhere on a conveyor running between the emitter and the receiver, as long as there is a gap between the objects and the sensor’s light does not “burn through” the object. This is more a figurative term than a literal one: it refers to an object that is thin or light in color and allows the light emitted from the emitter to penetrate the target, so the receiver never detects the object.
1.1.8.1 Operating Principle
Almost all photoelectric sensors and switches contain an emitter, which is a light source such as a light-emitting diode or laser diode; a receiver, such as a photodiode or phototransistor, to detect the light; and the supporting electronics designed to amplify the signal relayed from the receiver. Photoelectric sensing uses a beam of light to detect the presence or absence of an object: the emitter transmits a beam of light, either visible or infrared, which in some fashion is directed to and detected by the receiver. All photoelectric sensors and switches identify their output as “dark on” or “light on,” which refers to the output of the sensor or switch in
relation to when the light source is hitting the receiver. If an output is present while no light is received, this is called a “dark-on” output. Conversely, if the output is ON while the receiver is detecting the light from the emitter, the sensor or switch has a “light-on” output. Either way, a light-on or dark-on output needs to be selected prior to purchasing the sensor unless it is user adjustable; in that case it can be decided upon during installation by either flipping a switch or wiring the sensor accordingly.

The method in which light is emitted and delivered to the receiver is the way to categorize the different photoelectric configurations. They fall into three main categories: through beam, retroreflective, and proximity.

(1) Through-beam photoelectric sensors or switches are configured with the emitter and detector opposite the path of the target and sense presence when the beam is broken.

(2) Retroreflective photoelectric sensors or switches are configured with the emitter and detector in the same housing and rely on a reflector to bounce the beam back across the path of the target. This type may be polarized to minimize false reflections.

(3) Proximity photoelectric sensors or switches have the emitter and detector in the same housing and rely upon reflection from the surface of the target. This mode can include presence sensing and distance measurement via analog output. The proximity category can be further broken down into five submodes: diffuse, divergent, convergent, fixed field, and adjustable field. With a diffuse sensor, the presence of an object is detected when any portion of the diffusely reflected signal bounces back from the detected object. Divergent beam sensors and switches are short-range, diffuse-type sensors or switches without collimating lenses. Convergent, fixed focus, or fixed distance optics (such as lenses) are used to focus the emitter beam at a fixed distance from the sensor or switch. Fixed-field sensors or switches are designed to have a distance limit beyond which they will not detect objects, no matter how reflective. Adjustable field sensors or switches utilize a cut-off distance beyond which objects will not be detected, even if they are more reflective than the target. Some photoelectric sensors and switches can be set for multiple different optical sensing modes. Reflective properties of the target and environment are important considerations in the choice and use of photoelectric sensors and switches.
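The light-on/dark-on convention described above maps one received-light state onto two opposite output behaviors, which the following minimal Python sketch captures; it is an illustration of the naming convention only, not any vendor's interface.

    def output_state(light_received, mode="light-on"):
        """Return True when the sensor output is energized.

        light_received -- True if the receiver currently sees the emitter beam
        mode           -- "light-on" or "dark-on"
        """
        if mode == "light-on":
            return light_received          # output ON while beam reaches receiver
        if mode == "dark-on":
            return not light_received      # output ON while beam is blocked
        raise ValueError("mode must be 'light-on' or 'dark-on'")

    # A through-beam sensor guarding a doorway is typically wired dark-on:
    # output_state(light_received=False, mode="dark-on") -> True (object present)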
Diffuse sensors or switches operate in a somewhat different manner than retroreflective and through-beam devices, although the operating principle remains the same: diffuse photoelectric sensors and switches actually use the target as the “reflector,” such that detection occurs upon reflection of the light off the object back onto the receiver, as opposed to an interruption of the beam. The emitter sends out a beam of light, most often a pulsed infrared, visible red, or laser beam, which is reflected by the target when it enters the detectable area. The beam is diffused off the target in all directions. Part of the beam will actually return to the receiver inside the same housing from which it was originally emitted. Detection occurs, and the output turns on or off (depending on whether it is light on or dark on) when sufficient light is reflected to the receiver. This can commonly be witnessed in airport washrooms, where a diffuse photoelectric sensor detects your hands as they are placed under the faucet and the attending output turns the water on. In this application, your hands act as the reflector.
To ensure repeatability and reliability, photoelectric sensors and switches are available with three different types of operating principles: fixed-field sensing, adjustable field sensing, and background suppression through triangulation. In the simplest terms, these sensors and switches are focused on a specific point in the foreground, ignoring anything beyond that point.

(1) Standard fixed-field sensors and switches operate optimally at their preset “sweet spot,” the distance at which the foreground receiver will detect the target. As a result, these sensors and switches must be mounted within a certain fixed distance of the target. In fixed-field technology, when the emitter sends out a beam of light, two receivers sense the light on its return. The short-range receiver is focused on the target object’s location; the long-range receiver is focused on the background. If the long-range receiver detects a higher intensity of reflected light than the short-range receiver, the output will not turn on. If the short-range receiver detects a higher intensity of reflected light than the long-range receiver, an output occurs and the object is detected (a simple illustration of this comparison follows item (3) below).

(2) Adjustable field sensors and switches operate under the same principle as fixed-field sensors or switches, but the sensitivity of the receivers can be electrically adjusted using a potentiometer. By adjusting the level of light needed to trigger an output, the range and sensitivity of the sensor or switch can be altered to fit the application.
(3) Background suppression by triangulation also emits a beam of light that is deflected back to the sensor or switch. Unlike fixed- and adjustable field sensors and switches, which rely on the intensity of the light reflected back to them, background suppression sensors rely completely on the angle at which the beam of light returns. Like fixed- and adjustable field sensors or switches, background suppression sensors or switches feature short-range and long-range receivers in fixed positions. In addition, background suppression sensors or switches have a pair of lenses that are mechanically adjusted to precisely focus the reflected beam onto the appropriate receiver, changing the angle of the light received. The long-range receiver is focused, through one lens, on the background; deflected light returning along that focal plane will not trigger an output. The short-range receiver is focused, through a second lens, on the target; any deflected light returning along that focal plane will trigger an output: an object will be detected.
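A minimal sketch of the fixed-field intensity comparison referenced in item (1) is given below, assuming normalized intensity readings from the short-range and long-range receivers. Background suppression by triangulation, item (3), replaces this intensity test with an angle (focal-plane) test and is not modelled here.

    def fixed_field_detect(short_range_intensity, long_range_intensity):
        """Fixed-field rule: the target is reported only when the short-range
        receiver (focused on the target zone) sees more reflected light than
        the long-range receiver (focused on the background)."""
        return short_range_intensity > long_range_intensity

    # Example: a dull target in front of a bright background.
    # fixed_field_detect(0.30, 0.55) -> False  (background wins, no output)
    # fixed_field_detect(0.62, 0.40) -> True   (target detected)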
1.1.8.2 Application Guide

In industrial control systems that require photoelectric sensors and switches, it is important to check whether the parameters and operating principle of the sensors and switches satisfy the application.

(1) Choosing the right parameters. Important parameters to consider when looking for photoelectric sensors and switches include sensing mode, detecting range, position measurement window, minimum detectable object, and response time. Sensing modes can be used for presence or absence detection and for position measurement. With a presence or absence sensor, the sensor detects presence or absence in an ON or OFF mode. In position measurement, the sensing mode of the sensor can detect position in a linear region by the intensity of reflected light; analog output is linear with position in the measurement range. The detecting range is the range of sensor detection. For presence sensors, this goes up to the maximum distance for which the signal is stable. For position measurement sensors, this is the distance or range over which the position vs output response is linear and stable. The position measurement window is the width of the linear region for the sensor. For example, if the sensor could measure between 14 and 24 cm, this window would be 10 cm. The minimum detectable object is the smallest object detectable by the sensor. The response time is the time from the target object entering the detection zone to the production of the detection signal.
Common configuration features for photoelectric sensors and switches include beam visibility, light-on or dark-on modes, light and dark programmability, adjustable sensitivity, self-teaching, laser source, fiber-optic glass, and fiber-optic plastic. The body style of the sensor can be threaded barrel, cylindrical, limit switch, rectangular, slot, ring, and window or frame. The sensor may be self-contained and may have a remote head.

Repeatability and reliability are critical to the overall performance of a photoelectric sensor or switch in an industrial processing line. One of the most common types of sensors and switches used for object detection is the diffuse photoelectric sensor, which performs well in a wide range of industrial processing applications. However, these sensors and switches can experience problems with some target or background materials. The accuracy of diffuse sensors is often at the mercy of the surface properties of the target and background materials. A nonreflective target, such as a matte or dull black material, is difficult for diffuse photoelectric sensors to detect, because it reflects much less light than a brightly colored or white target. Similarly, if the target is presented against a light-colored or reflective background, the sensor can be falsely triggered by light reflected from the background material rather than the target object. To overcome these challenges, a variety of technologies have been developed to allow the sensors to see an object while ignoring background materials.

(2) Choosing the right principle. Photoelectric sensors and switches are reliable, versatile, and able to sense and take on or off action with objects of almost any material, size, and shape at distances ranging from 5 mm to 40 m depending on the type and configuration used. The addition of fiber cables considerably extends application opportunities for photoelectric sensors, allowing their installation in confined areas, as well as in areas in which intrinsic safety would normally disallow the use of electronics. For applications where backgrounds are not within sensing range and target color is consistent, a standard diffuse sensor or switch is completely sufficient for object detection. These sensors and switches are also quite effective for detecting large objects. Similarly, if a background within the sensing range is not particularly reflective and the color and reflectivity of the target will remain relatively constant, a fixed- or adjustable field sensor will likely provide trouble-free performance. These sensors are also appropriate for smaller targets than standard diffuse sensors. When reliable sensing is challenging due to shiny backgrounds and targets, shifting colors, and reflectivity, background
suppression by triangulation is the most repeatable and reliable solution. These sensors and switches, especially laser varieties, are very effective for detecting ultra-small targets; the precisely focused, finely collimated beam allows them to detect extremely small items. When an application requires the durability and performance of a full-featured photoelectric sensor or switch but distances are shorter and space is tight, you can take a 90° turn to right-sight photoelectric sensors and switches. Right-sight photoelectric sensors and switches take many of the features of the Series 9000 and put them in a smaller, more adaptable package to deliver excellent detection capabilities where size and shape matter. These photoelectric sensors and switches combine universal voltages (24–264 V AC/DC) with short-circuit protection across the full voltage range.
1.1.8.3 Basic Types
(1) Diffuse photoelectric sensors

(a) Standard diffuse sensors. The emitter and receiver are in the same housing. The emitter sends out a beam of pulsed red or infrared light that is reflected directly by the target. When the beam of light hits the target at any angle, it is diffused in all directions and some light is reflected back. The receiver sees only a small portion of the original light, switching the sensor when a target is detected within the effective scan range. The simplest diffuse photoelectric sensors use the target object as the reflective surface for object detection. Detection occurs when a beam of infrared, visible red, or laser light emitted from the sensor is deflected off the target material in all directions and detected by the receiver. Standard diffuse sensors have these features: (1) the sensing range depends largely on the reflective properties of the target’s surface, (2) suitable for distinguishing between black and white targets, (3) relatively large active range, and (4) positioning and monitoring with only one sensor. The typical applications of standard diffuse sensors are (1) distinguishing and sorting of objects according to their volume or degree of reflection, (2) counting of objects, and (3) presence detection of boxes.

(b) Diffuse sensors with background suppression. These sensors are a special development of the diffuse sensor. The beam of light is closely focused, and therefore able to distinguish a
target within a precisely defined scan range and ignore targets outside the range. Diffuse sensors with background suppression have these features: (1) sensing range largely independent of the color and surface of the target and (2) detection of small objects. Their typical applications are (a) sorting objects without concern for the background color, purely on their distance from the sensor, and (b) sensing contents within transparent packaging.

(2) Retroreflective photoelectric sensors

(a) Retroreflective sensors. With the emitter and receiver in the same housing, this sensor transmits a pulsed infrared or red light beam which is reflected back from a “triple prism” reflector or reflective tape. The sensor switches when the light beam is interrupted. These devices recognize objects independent of their surface qualities, as long as they are not too shiny. Retroreflective sensors offer these features: (1) large sensing range and (2) matte-finished objects are recognized independent of their surface properties. The typical applications are (1) height detection of stacked objects and (2) control of randomly positioned objects on a conveyor.

(b) Retroreflective sensors with polarization filter. Retroreflective sensors with polarization filters correctly recognize highly reflective objects. The polarizing filter prevents false switching with shiny objects; only the stray and nonpolarized light from the reflector actuates the sensor. Features are similar to retroreflective sensors, but with the added advantage of being able to accurately distinguish shiny objects. A typical application is monitoring shiny cans on a conveyor belt.

(3) Through-beam photoelectric sensors. Emitter and receiver are in two separate housings facing each other. The sensor switches whenever the light beam is interrupted. The features of through-beam photoelectric sensors are (1) they offer the largest sensing ranges, (2) the switching point is independent of the surface nature of the object, and (3) due to the narrow effective beam, they have excellent repeatability. Typical applications are (1) monitoring doors and gates and (2) counting and monitoring of objects over large distances.

(4) Fiber-optic photoelectric sensors. The front surfaces of the fibers, which are set and glued into the sensing head, are precisely
ground and polished to provide outstanding optics. Even difficult multiple sensing problems can be solved by using fiber optics. There are two types of fibers: reflective fiber types may be applied using the same guidelines as diffuse sensors, and through-beam type fiber optics may be applied using the same guidelines as through-beam sensors. Typical applications are (1) sorting various objects, (2) measuring of diameters and heights, (3) checking for double sheets, (4) detecting small objects, (5) monitoring flow of bread in an oven, and (6) detecting absence/presence of lids on a process filling line.

(5) Light barrier photoelectric sensors. Infrared light barrier photoelectric sensors are a series of infrared through beams, mounted in a tower-type emitter and receiver housing. The beam arrays can be used to recognize objects or to make continuous measurements. The system consists of individual emitters and receivers separately mounted in two towers, placed opposite each other. In addition, a microcontroller is mounted in one of the housings to control the light beams. During a measuring cycle, individual emitting diodes are activated in sequence, and at the same time the corresponding receiving diode is enabled. Each light beam is defined as interrupted as soon as the imaginary line from transmitter to receiver is blocked. The same principle is followed for all subsequent beams; hence a light barrier is created. During a measuring cycle all interrupted beams are registered (a short software sketch of such a cycle follows item (9) below). Light grid sensors with retroreflective technology are simple in installation and setup.

(6) Level monitoring photoelectric sensors. Levels can be measured simply and accurately using infrared light, without the need for any electrical or thermal connection between the target medium and the sensor. The ratio of refractive indices changes depending on whether the tip of the sensor is surrounded by liquid or air. If the sensor tip is immersed in liquid, the light rays will be deflected into the liquid and the electronics of the receiver changes its switching status. The operating principle remains the same irrespective of whether the liquid medium conducts electricity or not. The medium can also be clear or cloudy.

(7) Line photoelectric sensors. Line photoelectric sensors can easily detect positions, edges, or widths of objects. The measured information can be read out either as a 4–20 mA signal or via a serial interface. The integrated illumination allows reliable operation without spending too much time on maintenance and installation.
The three different measuring principles (width, edge, middle) are selectable by a button; the current measuring mode is retained even if the sensor is left unpowered for a certain time.

(8) Photoelectric safety switches. Single-beam or through-beam photoelectric safety switches are used as noncontact access protection for hazardous zones. Photoelectric safety switches consist of testable sender and receiver units in combination with a safety evaluation unit, and are mainly used on robot systems and processing machines.

(9) Photoelectric proximity switches. Photoelectric proximity switches are chosen for reliable detection of objects within a defined scanning range on a conveyor system. Objects which reduce the distance from the scanning plane to the sensor are detected. Adjusting the sensitivity sets the scanning range and switching point. In the case of photoelectric switches with fiber-optic cable, the sender and receiver are contained in a single housing. A separate fiber-optic cable is used for the sender and the receiver for operation as a through-beam system. For use as a proximity switch, the sender and receiver fiber-optic cables are combined in one cable.
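The measuring cycle of the light barrier (light grid) sensors in item (5) can be mimicked in software as a simple strobe-and-read loop over the beams. In the sketch below, beam_clear() is a hypothetical stand-in for the per-beam emitter/receiver test, and the beam count and pitch are assumed values for illustration only.

    # Illustrative light-grid measuring cycle: strobe each emitter in turn,
    # read the matching receiver, and record which beams are interrupted.

    PITCH_MM = 10.0      # assumed beam spacing
    NUM_BEAMS = 32       # assumed number of beams in the tower

    def beam_clear(index):
        raise NotImplementedError("replace with the real emitter/receiver test")

    def measuring_cycle():
        """Return the list of interrupted beam indices for one scan."""
        return [i for i in range(NUM_BEAMS) if not beam_clear(i)]

    def object_height_mm(interrupted):
        """Rough object height estimate from the highest interrupted beam."""
        return (max(interrupted) + 1) * PITCH_MM if interrupted else 0.0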
1.1.9 Proximity Devices
Proximity sensing is the ability of a device to tell when it is near an object or when something is near it. This sense keeps a device from running into things, and it can also be used to measure the distance from a device to some object. Proximity sensors detect the presence of an object without physical contact, and proximity switches execute the necessary responses when sensing the presence of targets or of critical positions located by the targets. A position sensor determines an object’s coordinates (linear or angular) with respect to a reference; displacement means moving from one position to another for a specified distance (or angle). In effect, a proximity sensor is a threshold version of a position sensor: proximity sensing is the technique of detecting the presence or absence of an object using a critical distance. Typical applications include the control, detection, positioning, inspection, and automation of machine tools and manufacturing systems. Proximity devices are also used in packaging, production, printing, plastic molding, metal working, and food processing machinery, etc. The measurement of proximity, position, and displacement of objects is essential in many different applications: valve position, level detection, process control, machine control, security, etc. Special purpose proximity sensors perform in extreme environments (exposure to high temperatures or harsh chemicals) and address specific
needs in automotive and welding environments. Inductive proximity sensors are ideal for virtually all metal sensing applications, including detecting all metals or nonferrous metals only.
1.1.9.1 Operating Principle
In terms of physics, proximity sensors and switches operate with capacitive, inductive, photoelectric, ultrasonic, and magnetic mechanisms to achieve both the proximity sensing and proximity switching in industrial control. The sensing principle with ultrasonic waves is given in Section 1.1.3; the operating principle with magnetism is given in Section 1.1.6 and with photoelectric physics is given in Section 1.1.8; this subsection therefore introduces the working principle for capacitive and inductive proximity sensors and switches. (1) Capacitive proximity sensors and switches make use of the variation of the parasitic capacitance that develops between the sensor and the object to be detected. When the object is at a predetermined distance from the sensitive side of the sensor, an electronic circuit inside the sensor begins to oscillate. A capacitive sensor can detect metallic and nonmetallic objects like wood, plastic, and liquid materials. The operating distance can be trimmed, making the sensor useful for each specific application. Capacitive proximity sensors are housed in smooth or threaded cylindrical metallic cases. Capacitive proximity sensors sense “target” objects, owing to the target’s ability to be electrically charged. Since even nonconductors can hold charges, this means that just about any object can be detected with this type of sensor. Figure 1.19 demonstrates the working principle of capacitive proximity sensing. As given in Fig. 1.19, a capacitive proximity sensor or switch normally contains four essential components: an electrode assembly, an oscillator circuit, an evaluation circuit, and an output circuit. When the increase in capacitance is large enough, an oscillation is set up. This oscillation is detected by the evaluation circuit, which then changes the state of the output circuit. An electrode assembly is designed so that an electrostatic field is formed between the active electrode and the earth electrode. Any object entering this field will increase the capacitance. The increase in capacitance depends on the following factors: the distance and position of the object in front of the active electrode, the dimensions of the object, and the dielectric constant of the object.
Figure 1.19 Capacitive proximity sensor and its sensing principle.
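The capacitance increase that trips the oscillator and evaluation circuit in Fig. 1.19 follows the ideal parallel-plate relation C = ε0 εr A/d. The sketch below writes that relation out and applies a simple threshold to mimic the evaluation circuit; the plate area, permittivity, and threshold figures are illustrative assumptions, not data for any particular sensor.

    EPSILON_0 = 8.854e-12           # permittivity of free space, F/m

    def plate_capacitance(area_m2, gap_m, epsilon_r=1.0):
        """Ideal parallel-plate capacitance: C = eps0 * eps_r * A / d."""
        return EPSILON_0 * epsilon_r * area_m2 / gap_m

    def target_detected(gap_m, area_m2=1e-4, epsilon_r=1.0, threshold_f=2e-13):
        """Mimic the evaluation circuit: report a target when the sensor/target
        capacitance exceeds a set threshold (all values illustrative only)."""
        return plate_capacitance(area_m2, gap_m, epsilon_r) > threshold_f

    # Capacitance roughly doubles when the gap is halved, so a nearby or
    # high-permittivity target is what sets up the oscillation.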
(2) Inductive proximity sensors and switches comprise an oscillating circuit, a signal evaluator, and a switching amplifier. The coil of this oscillating circuit generates a high-frequency electromagnetic alternating field. This field is emitted at the sensing face of the sensor. If a metallic object (switching trigger) nears the sensing face, eddy currents are generated. The resultant losses draw energy from the oscillating circuit and reduce the oscillations. The signal evaluator behind the oscillating circuit converts this information into a clear signal. Figure 1.20 demonstrates its operating principle. The supply DC is used to generate AC in an internal coil, which in turn causes an alternating magnetic field. If no conductive materials are near the face of the sensor, the only impedance to the internal AC is due to the inductance of the coil. If, however, a conductive material enters the changing magnetic field, eddy currents are generated in that conductive material, and there is a resultant increase in the impedance to the
Figure 1.20 Inductive proximity sensor and its sensing principle.
AC in the proximity sensor. A current sensor, also built into the proximity sensor, detects when there is a drop in the internal AC current due to increased impedance. The current sensor controls a switch providing the output.

The operating principle for both capacitive and inductive devices is based on a high-frequency oscillator that creates a field in the close surroundings of the sensing surface. The presence of a metallic object (inductive) or any material (capacitive) in the operating area causes a change in the oscillation amplitude. The rise or fall of such oscillation is identified by a threshold circuit that changes the output state of the sensor. The operating distance of the sensor depends on the shape and size of both capacitive and inductive proximity sensing devices and is strictly linked to the nature of the material. Table 1.1 gives the sensitivity of capacitive proximity sensing devices with respect to several typical materials, and Table 1.2 gives the sensitivity of inductive proximity sensing devices with respect to several typical metals. Normally, both capacitive and inductive proximity sensors and switches have a screw that allows regulation of the operating distance. This sensitivity regulation is useful in applications such as detection of full containers and nondetection of empty containers.

Table 1.1 The Sensitivity of Capacitive Proximity Sensors for Different Materials

Metal      1 × Sn
Water      1 × Sn
Plastic    0.5 × Sn
Glass      0.5 × Sn
Wood       0.4 × Sn

Note: Sn is the operating distance.

Table 1.2 The Sensitivity of Inductive Proximity Sensors for Different Metals

Fe37 (iron)        1 × Sn
Stainless steel    0.9 × Sn
Brass–bronze       0.5 × Sn
Aluminum           0.4 × Sn
Copper             0.4 × Sn

Note: Sn is the operating distance.
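Tables 1.1 and 1.2 are, in effect, correction (derating) factors applied to the rated operating distance Sn. The small helper below applies them as written, and refuses to guess for materials the tables do not list; it is a sketch for illustration, not vendor data.

    # Effective operating distance = correction factor x rated distance Sn,
    # using the factors of Tables 1.1 (capacitive) and 1.2 (inductive).

    CAPACITIVE_FACTORS = {"metal": 1.0, "water": 1.0, "plastic": 0.5,
                          "glass": 0.5, "wood": 0.4}
    INDUCTIVE_FACTORS = {"fe37": 1.0, "stainless steel": 0.9,
                         "brass-bronze": 0.5, "aluminum": 0.4, "copper": 0.4}

    def effective_distance(sn_mm, material, technology="inductive"):
        factors = INDUCTIVE_FACTORS if technology == "inductive" else CAPACITIVE_FACTORS
        try:
            return sn_mm * factors[material.lower()]
        except KeyError:
            raise ValueError("no correction factor tabulated for %r" % material)

    # Example: an inductive sensor rated Sn = 8 mm detects aluminum only out
    # to about effective_distance(8, "aluminum") = 3.2 mm.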
1.1.9.2 Application Guide

The most important parameter to consider when specifying proximity sensors is the operating distance. The rated operating distance is the distance at which switching takes place. Common body styles for proximity sensors are barrel, limit switch, rectangular, slot style, and ring. Important dimensions to consider when specifying proximity sensors include barrel diameter, length, width, and height.

Proximity sensors can be a sensor element or chip, a sensor or transducer, an instrument or meter, a gauge or indicator, a recorder or totalizer, or a controller. A sensor element or chip denotes a “raw” device such as a strain gage, or one with no integral signal conditioning or packaging. A sensor or transducer is a more complex device with packaging and/or signal conditioning that is powered and provides an output such as a DC voltage, a current loop, etc. An instrument or meter is a self-contained unit that provides an output such as a display locally at or near the device; typically it also includes signal processing and/or conditioning. A gauge or indicator is a device that has a (usually analog) display and no electronic output, such as a tension gauge. A recorder or totalizer is an instrument that records, totalizes, or tracks the measurement over time. It includes simple data logging capability or advanced features such as mathematical functions, graphing, etc.

Load configurations are also important parameters to consider. Proximity sensors may switch an AC load or a DC load. DC load configurations can be NPN or PNP. NPN is a transistor output that switches the common or negative voltage to the load; the load is connected between the sensor output and the positive voltage supply. PNP is a transistor output that switches the positive voltage to the load; the load is connected between the sensor output and the voltage supply common or negative. Wire configurations are 2-wire, 3-wire NPN, 3-wire PNP, 4-wire NPN, and 4-wire PNP. Switch types can be normally open or normally closed.
1.1.9.3 Basic Types and Specifications
Proximity sensors and switches can be classified by their physics or by their technology. The physics types of proximity sensors and switches include capacitive, inductive, photoelectric, ultrasonic, and magnetic proximity sensors or switches. Common technology types of proximity sensing devices include eddy current, air, capacitance, infrared, and fiber-optic devices, etc. Proximity sensors and switches can be of the contact or noncontact type.
(1) Physics types of proximity sensors and switches

(a) Capacitive proximity sensors and switches. Capacitive sensing devices utilize the face or surface of the sensor as one plate of a capacitor and the surface of a conductive or dielectric target object as the other. Capacitive proximity sensors can be a sensor element or chip, a sensor or transducer, an instrument or meter, a gauge or indicator, a recorder or totalizer, or a controller. Common body styles for capacitive proximity sensors are barrel, limit switch, rectangular, slot style, and ring. A barrel body style is cylindrical in shape, typically threaded. A limit switch body style is similar in appearance to a contact limit switch; the sensor is separated from the switching mechanism and provides a limit-of-travel detection signal. A rectangular or block body style is a one-piece rectangular or block-shaped sensor. A slot style body is designed to detect the presence of a vane or tab as it passes through a sensing slot, or “U” channel. A ring-shaped body style is a doughnut-shaped sensor, where the object passes through the center of the ring. Electrical connections for capacitive proximity sensors can be fixed cable, connector(s), and terminals. A fixed cable is an integral part of a sensor and often includes “bare” stripped leads. A sensor with connectors has an integral connector for attaching into an existing system. A sensor with terminals has the ability to screw or clamp down. Important specifications for capacitive proximity sensors include operating distance, repeatability, and switching frequency. Rated operating distance is the distance at which switching takes place. Repeatability is the distance within which the sensor repeatably switches; it is a measure of precision. The switching frequency is the frequency at which the switch may be turned ON and OFF. Other important parameters to consider when specifying capacitive proximity sensors include housing materials, dimensions, whether or not the sensor is shielded and intrinsically safe, and environmental operating condition parameters.

(b) Inductive proximity sensors and switches. Inductive proximity sensors are noncontact proximity devices that set up a radio frequency field with an oscillator and a coil. The presence of an object alters this field and the sensor is able to detect this alteration. The body style of inductive proximity sensors can be barrel, limit switch, rectangular, slot, or ring. Electrical connections
for inductive proximity sensors can be fixed cable, connector(s), and terminals. A fixed cable is an integral part of the sensor and often includes “bare” stripped leads. A sensor with connectors has an integral connector for attaching into an existing system. A sensor with terminals has the ability to screw or clamp down. Important specifications for inductive proximity sensors include operating distance, repeatability, field adjustability, and minimum target distance. Rated operating distance is the distance at which switching takes place. Repeatability is the distance within which the sensor repeatably switches; it is a measure of precision. Field adjustable sensors can be adjusted while in use. Depending on the sensor’s technology, there can be minimum target size requirements. Other important parameters to consider when specifying inductive proximity sensors include power requirements, housing materials, dimensions, special features, and environmental operating conditions.

(c) Photoelectric proximity sensors and switches. These sensors utilize photoelectric emitters and receivers to detect distance, presence, or absence of target objects. Proximity photoelectric sensors have the emitter and detector in the same housing and rely upon reflection from the surface of the target. This mode can include presence sensing and distance measurement via analog output. The proximity category can be further broken down into five submodes: diffuse, divergent, convergent, fixed field, and adjustable field. With a diffuse sensor, presence is detected when any portion of the diffusely reflected signal bounces back from the detected object. Divergent beam sensors are short-range, diffuse-type sensors without any collimating lenses. Convergent, fixed focus, or fixed distance optics (such as lenses) are used to focus the emitter beam at a fixed distance from the sensor. Fixed-field sensors are designed to have a distance limit beyond which they will not detect objects, no matter how reflective. Adjustable field sensors utilize a cutoff distance beyond which objects will not be detected, even if they are more reflective than the target. Some photoelectric sensors can be set for multiple different optical sensing modes. Reflective properties of the target and environment are important considerations in the choice and use of photoelectric sensors. Important parameters to consider when looking for photoelectric sensors include sensing mode, detecting range, position measurement window, minimum detectable object, and
response time. The modes can be presence or absence sensing and position measurement. With a presence or absence sensor, the sensor detects presence or absence in an ON/OFF mode. In a position measurement sensing mode, the sensor can detect position in a linear region by the intensity of reflected light; analog output is linear with position in the measurement range. The detecting range is the range of sensor detection. For presence sensors, this goes up to the maximum distance for which the signal is stable. For position measurement sensors, this is the distance range over which the position vs output response is linear and stable. The position measurement window is the width of the linear region for the sensor. The minimum detectable object is the smallest sized object detectable by the sensor. The response time is the time from the target object entering the detection zone to the production of the detection signal. Common configuration features for photoelectric proximity sensors include beam visibility, light-on or dark-on modes, light and dark programmability, adjustable sensitivity, self-teaching, laser source, fiber-optic glass, and fiber-optic plastic. The body style of the sensor can be threaded barrel, cylindrical, limit switch, rectangular, slot, ring, and window or frame. The sensor may be self-contained and may have a remote head.

(d) Ultrasonic proximity sensors and switches. Ultrasonic proximity sensing devices can be a sensor element or chip, a sensor or transducer, an instrument or meter, a gauge or indicator, a recorder or totalizer, or a controller. The body style of ultrasonic proximity sensors can be barrel, limit switch, rectangular, slot, or ring. Electrical connections can be fixed cable, connector(s), or terminals. Intrinsically safe ultrasonic proximity sensors are incapable of releasing sufficient electrical or thermal energy under normal or abnormal conditions to cause ignition of a specific hazardous atmospheric mixture in its most easily ignited concentration. Important performance specifications to consider when searching for ultrasonic proximity sensors include maximum operating distance, repeatability, sonic cone angle, impulse frequency, and transmitter frequency. Load configurations are also important parameters to consider; ultrasonic proximity sensors may switch an AC load or a DC load. Additional parameters that are important to consider when searching for ultrasonic proximity sensors include switch types, housing materials, dimensions, and environmental operating parameters.
(e) Magnetic proximity sensors and switches. These noncontact proximity devices utilize inductance, Hall effect principles, variable reluctance, or magnetoresistive technology. Magnetic proximity sensors are characterized by the possibility of large switching distances, available from sensors with small dimensions. They detect magnetic objects (usually permanent magnets), which are used to trigger the switching process. As the magnetic fields are able to pass through many nonmagnetic materials, the switching process can also be triggered without the need for direct exposure to the target object. By using magnetic conductors (e.g., iron), the magnetic field can be transmitted over greater distances so that, for example, the signal can be carried away from high-temperature areas. Important specifications for magnetic proximity sensors include operating distance, repeatability, field adjustability, and minimum target distance. Rated operating distance is the distance at which switching takes place. Repeatability is the distance within which the sensor repeatably switches; it is a measure of precision. Field adjustable sensors can be adjusted while in use. Depending on the sensor’s technology, there can be minimum target size requirements. Other important parameters to consider when specifying magnetic proximity sensors include power requirements, housing materials, dimensions, special features, and environmental operating conditions.

(2) Technical types of proximity sensors and switches

(a) Eddy current proximity sensor or switch. In an eddy current proximity sensor, electrical currents are generated in a conductive material by an induced magnetic field. Interruptions in the flow of the electric currents (eddy currents), which are caused by imperfections or changes in a material’s conductive properties, will cause changes in the induced magnetic field. These changes, when detected, indicate the presence of a change in the test object. Eddy current proximity sensors and switches detect the proximity or presence of a target by sensing the magnetic fields generated by a reference coil, and they measure variations in the field due to the presence of nearby conductive objects. Field generation and detection operate in the kilohertz to megahertz range. They can be used as proximity sensors to detect the presence of a target, or they can be configured to measure the position or displacement of a target. Target materials for eddy current proximity sensors can be magnetic, nonmagnetic, ferrous, and nonferrous. Magnetic
target materials are magnetized, usually with a permanent magnet component. Nonmagnetic detection targets do not require magnetization. Ferrous targets for position detection include iron or iron-based materials such as steel, stainless steel, etc. Nonferrous target materials are metallic but are not iron- or steel-based, such as aluminum, brass, and copper. Electrical connections for eddy current proximity sensors can be fixed cable, connector(s), and terminals. A fixed cable is an integral part of a sensor and often includes “bare” stripped leads. A sensor with connectors has an integral connector for attaching into an existing system. A sensor with terminals has the ability to screw or clamp down. Other important parameters to consider include switched output types, position or distance output type, housing materials, dimensions, and environmental operating parameters.

(b) Air proximity sensor or switch. The air proximity sensor is a noncontact sensor with no moving parts. In the absence of an object, air flows freely from the sensor, resulting in a near-zero output signal. The presence of an object within the sensing range deflects the normal air flow and results in a positive output signal. At low supply pressure, flow from the sensor exerts only minute forces on the object being sensed and is consequently appropriate for use where the object is lightweight or easily marred by mechanical sensors. Since there are no moving mechanical parts in the air proximity sensor, there are no inherent wear mechanisms or life limitations. In this regard, the sensor is not cycle dependent and is particularly appropriate for applications requiring large numbers of cycles. Also, the air proximity sensor is inherently explosion-proof and self-purging; consequently, it is suitable for many adverse environments.

(c) Capacitance proximity sensor or switch. Many industrial capacitance sensors or switches work by means of the physics of capacitance: in this arrangement the capacitance varies inversely with the distance between the capacitor plates, and a threshold value can be set to trigger target detection. Note that the capacitance is proportional to the plate area but is inversely proportional to the distance between the plates. When the plates are close to each other, even a small change in the distance between the plates can result in a sizeable change in capacitance. Some capacitance proximity switches are tiny 1-in. cube electronic modules that operate using a capacitance
change technique. These sensors or switches contain two switch outputs: a latched output, which toggles the output ON and OFF with each capacitive input activation, and a momentary output, which remains activated as long as the sensor input capacitance is higher than the level set by the module’s adjustment screw.

(d) Infrared proximity sensor or switch. Infrared light is beyond the range visible to the naked human eye and falls between the visible light and microwave spectra (its wavelength is longer than that of visible light). The longest visible wavelengths are red, which is where infrared got its name (“beyond red”). Infrared waves are electromagnetic waves and may also be detected as heat; heat from campfires, sunlight, etc., is actually infrared radiation. Infrared proximity sensors work by sending out a beam of infrared light and then computing the distance to any nearby objects employing the characteristics of the returned signal.

(e) Fiber-optic proximity sensor or switch. Fiber-optic proximity sensors are used to detect the proximity of target objects. Light is supplied and returned via fiber-optic cables. Fiber-optic cables can fit in small spaces, are not susceptible to electrical noise, and exhibit no danger of sparking or shorting. Glass fiber exhibits very good optical qualities and typically carries high-temperature ratings. Plastic fiber can be cut to length in the field and can be flexible enough to accommodate various routing configurations. Important parameters to consider when specifying fiber-optic proximity sensors include detecting range, position measurement window, minimum detectable object, and response time. The detecting range is the range of sensor detection. For presence sensors, this goes up to the maximum distance for which the signal is stable. For position measurement sensors, this is the distance range over which the position vs output response is linear and stable. The position measurement window is the width of the linear region for the sensor. For example, if the sensor could measure between 14 and 24 cm, this window would be 10 cm. The minimum detectable object is the smallest sized object detectable by the sensor. The response time is the time from the target object entering the detection zone to the production of the detection signal. Other important parameters to consider include output options, dimensions, electrical connections, and environmental operating conditions.
1.1.10 Scan Sensors
Scan sensors, also called image sensors or vision sensors, are built for industrial applications. Common applications for these sensors in industrial control include alignment or guidance, assembly quality, bar or matrix code, biotechnology or medical, color mark or color recognition, container or product counting, edge detection, electronics or semiconductor inspection, electronics rework, flaw detection, food and beverage, gauging, scanning and dimensioning, ID detection or verification, materials analysis, noncontact profilometry, optical character recognition, parcel or baggage sorting, pattern recognition, pharmaceutical packaging, presence or absence, production and quality control, seal integrity, security and biometrics, tool and die monitoring, and web inspection.
1.1.10.1 Operating Principle
A scan or vision or image sensor can be thought of as an electronic input device that converts analog information of a document such as a map, photograph, or overlay into an electronic image in a digital format that can be used by the computer. Scanning is the main operation of a scan or vision or image sensor, which automatically captures document features, text, and symbols as individual cells, or pixels, and produces an electronic image. While scanning, a bright white light strikes the image and is reflected onto the photosensitive surface of the sensor. Each pixel transfers a gray value (a value given to the different shades between black and white in the image, ranging from 0 (black) to 255 (white), that is, 256 values) to the chipset (software). The software interprets the value as 0 (black) or 1 (white), thereby forming a monochrome image of the scanned portion. As the sensor moves ahead, it scans the image in tiny strips and continues to store the information in a sequential fashion. The software running the scanner pieces together the information from the sensor into a digital form of the image. This type of scanning is known as one-pass scanning. Scanning a color image is slightly different, as the sensor has to scan the same image for three different colors: red, green, and blue (RGB). Nowadays most color scan, vision, or image sensors operate in one pass, scanning all three colors in one go by using color filters. In principle, a color sensor works in the same way as a monochrome sensor, but each color is constructed by mixing red, green, and blue, as shown in Fig. 1.21. Thus, a 24-bit RGB sensor represents each pixel by 24 bits of information. Usually, a sensor using these three colors (in full 24-bit RGB mode) can create up to 16.8 million colors.
Figure 1.21 The scan sensor operation of scanning a color image. In this figure, a pixel with red = 85, green = 43, and blue = 6 is being scanned and identified as "brown."
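A short sketch can make the gray-value and 24-bit RGB arithmetic above concrete. It is a simplified illustration only; the sample pixel values follow Fig. 1.21, while the 0.299/0.587/0.114 luminance weights are a common convention assumed here, not something specified in the text.

```python
# Sketch of the pixel arithmetic described above: a 24-bit RGB pixel carries
# 8 bits per channel, and a monochrome sensor reduces a pixel to one gray value.

def pack_rgb24(r: int, g: int, b: int) -> int:
    """Pack three 8-bit channel values into one 24-bit pixel word."""
    return (r << 16) | (g << 8) | b

def to_gray(r: int, g: int, b: int) -> int:
    """Approximate gray value (0 = black, 255 = white) from an RGB pixel.
    The luminance weights are a common convention, assumed for illustration."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def binarize(gray: int, threshold: int = 128) -> int:
    """Interpret the gray value as 0 (black) or 1 (white), as in one-pass scanning."""
    return 1 if gray >= threshold else 0

if __name__ == "__main__":
    r, g, b = 85, 43, 6          # the "brown" pixel of Fig. 1.21
    print(f"24-bit word : {pack_rgb24(r, g, b):#08x}")
    print(f"gray value  : {to_gray(r, g, b)}")
    print(f"binary pixel: {binarize(to_gray(r, g, b))}")
    print(f"total colors: {256 ** 3:,}")   # about 16.8 million
```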
A new technology, full-width single-line contact sensor array scanning, has emerged in which the document to be scanned passes under a line of chips that captures the image. In this technology, a scanned line can be considered a map of the luminosity of the points on the line observed by the sensor. This new technology enables the scanner to operate at previously unattainable speeds. (1) CCD image sensors. A charge-coupled device (CCD) gets its name from the way the charges on its pixels are read after an exposure. After the exposure the charges on the first row are transferred to a place on the sensor called the readout register. From there, the signals are fed to an amplifier and then on to an analog-to-digital converter. Once the row has been read, its charges on the readout register row are deleted, the next row enters, and all of the rows above march down one row. The charges on each row are "coupled" to those on the row above, so when one moves down, the next moves down to fill its old space. In this way, each row can be read one row at a time. Figure 1.22 illustrates the CCD scanning process. (2) CMOS image sensors. A complementary metal oxide semiconductor (CMOS) sensor typically has an electronic rolling shutter design. In a CMOS sensor the data is not literally passed from bucket to bucket. Instead, each bucket can be read independently to the output. This has enabled designers to build an electronic rolling slit shutter. This shutter is typically implemented by resetting an entire row and then, some time later, reading the row out.
Figure 1.22 The CCD image sensor shifts one whole row at a time into the readout register. The readout register then shifts one pixel at a time to the output amplifier.
The readout speed limits the speed of the read wave that passes over the sensor from top to bottom. If the readout wave is preceded by a similar wave of resets, then a uniform exposure time for all rows is achieved (albeit not at the same time). With this type of electronic rolling shutter there is no need for a mechanical shutter except in certain cases. With these advantages, CMOS image sensors are used in some of the finest industrial control devices and cameras. Figure 1.23 gives a typical architecture of an industrial control device with a CMOS image sensor.
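The row-by-row "bucket brigade" readout described for CCDs and the reset/read wave described for the CMOS rolling shutter can both be mimicked with a few lines of code. The sketch below is a conceptual illustration of the ordering of operations only, with assumed timing values; it does not model any real device.

```python
# Conceptual sketch of CCD readout order: each row is shifted into the readout
# register, then shifted out one pixel at a time to the output amplifier.

def ccd_readout(frame):
    """Yield pixels in CCD readout order (row by row, pixel by pixel)."""
    for row in frame:                 # rows "march down" one at a time
        readout_register = list(row)  # whole row moves into the readout register
        for pixel in readout_register:
            yield pixel               # one pixel at a time to the output amplifier

def rolling_shutter_schedule(n_rows, readout_time, exposure_time):
    """Start-of-exposure and readout instants for a CMOS rolling shutter:
    a reset wave precedes the read wave so every row gets the same exposure."""
    return [(r * readout_time, r * readout_time + exposure_time) for r in range(n_rows)]

if __name__ == "__main__":
    frame = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(list(ccd_readout(frame)))           # [1, 2, 3, ..., 9]
    for row, (reset_t, read_t) in enumerate(rolling_shutter_schedule(3, 0.1, 5.0)):
        print(f"row {row}: reset at {reset_t:.1f} ms, read at {read_t:.1f} ms")
```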
1.1.10.2 Basic Types
All scan (or image or vision) sensors can be monochrome or color sensors. Monochrome sensors present the image in black and white or grayscales. Color sensors are able to read the spectrum range using varying combinations of different discrete colors. One common technique is sensing the red, green, and blue components (RGB) and combining them to create a wide spectrum of colors. Multiple-chip color is available on some scan (image or vision) sensors. In a widely used method, the colors are captured in multiple chips, each of them dedicated to capturing part of the color image, such as one color, and the results are combined to generate the full color image. Such sensors typically employ color separation devices such as beam-splitters rather than having integral filters on the sensors.
Figure 1.23 Block diagram of an industrial control device having a CMOS image sensor (CMOS image sensor, programmable logic, and RAM, with pixel data, pixel clock, line valid, frame valid, and I2C control signals).
The imaging technology used in scan or image or vision sensors includes CCD, CMOS, tube, and film. (1) CCD image sensors (charge-coupled devices) are electronic devices that are capable of transforming a light pattern (image) into an electric charge pattern (an electronic image). The CCD consists of several individual elements that have the capability of collecting, storing, and transporting electrical charges from one element to another. This, together with the photosensitive properties of silicon, is used to design image sensors. Each photosensitive element then represents a picture element (pixel). With semiconductor technologies and design rules, structures are made that form lines or matrices of pixels. One or more output amplifiers at the edge of the chip collect the signals from the CCD. An electronic image can be obtained in this way: after having exposed the sensor with a light pattern, apply a series of pulses that transfer the charge of one pixel after another to the output amplifier, line after line. The output amplifier converts the charge into a voltage. External electronics then transform this output signal into a form suitable for monitors or frame grabbers.
CCD image sensors have extremely low noise figures and can be either color or monochrome sensors. Choices for array type include linear array, frame transfer area array, full-frame area array, and interline transfer area array. Digital imaging optical format is a measure of the size of the imaging area. Optical format is used to determine what size lens is necessary for use with the imager. Optical format refers to the length of the diagonal of the imaging area. Optical format choices include 1/7, 1/6, 1/5, 1/4, 1/3, 1/2, 2/3, 3/4, and 1 in. The number of pixels and pixel size are important to consider. Horizontal pixels refer to the number of pixels in a row of the image sensor. Vertical pixels refer to the number of pixels in a column of the image sensor. The greater the number of pixels, the higher the resolution of the image. Important image sensor performance specifications to consider when searching for CCD image sensors include spectral response, data rate, quantum efficiency, dynamic range, and number of outputs. The spectral response is the spectral range (wavelength range) for which the detector is designed. The data rate is the speed of a data transfer process, normally expressed in megahertz. Quantum efficiency is the ratio of photon-generated electrons that the pixel captures to the photons incident on the pixel area. This value is wavelength dependent, so the given value for quantum efficiency is generally for the peak sensitivity wavelength of the CCD. Dynamic range is the logarithmic ratio of well depth to the readout noise in decibels; the higher the number, the better (a worked example follows Table 1.3). Common features for CCD image sensors include antiblooming and cooling. Some arrays for CCD image sensors offer an optional antiblooming gate designed to bleed off overflow from a saturated pixel. Without this feature, a bright spot that has saturated the pixels will cause a vertical streak. Some arrays are cooled for lower noise and higher sensitivity. An important environmental parameter to consider is the operating temperature. (2) CMOS image sensors operate at lower voltages than CCD image sensors, reducing power consumption for portable applications. In addition to their lower power consumption when compared with CCD image sensors, CMOS image sensors are generally of a much simpler design: often just a crystal and decoupling. For this reason, they are easier to design with, generally smaller, and require less support circuitry. Each CMOS active pixel sensor cell has its own buffer amplifier and can be addressed and read individually. A commonly
used cell has four transistors and a photo-sensing element. The cell has a transfer gate separating the photo sensor from a capacitive "floating diffusion," a reset gate between the floating diffusion and power supply, a source-follower transistor to buffer the floating diffusion from readout-line capacitance, and a row-select gate to connect the cell to the readout line. All pixels on a column connect to a common sense amplifier. In addition to being a color sensor or a monochrome sensor, CMOS sensors fall into two categories defined by their manner of output: analog and digital. Analog sensors feed their encoded signal in a video format that can be fed directly to standard video equipment. Digital CMOS image sensors provide digital output, typically via a 4-, 8-, or 16-bit bus. The digital signal is direct, requiring no transfer or conversion via a video capture card. CMOS image sensors can offer many advantages over CCD image sensors. Just some of the technical advantages of CMOS sensors are (1) no blooming, (2) low power consumption, ideal for battery-operated devices, (3) direct digital output (incorporates ADC and associated circuitry), (4) small size and little support circuitry, and (5) simple to design. (3) A tube camera is an electronic device in which the image is formed on a fluorescent screen. It is then read by an electron beam in a raster scan pattern and converted to a voltage proportional to the image light intensity. (4) Film technology exposes the image onto photosensitive film, which is developed for display or storage. The shutter, a mechanical door that admits light to the film, typically controls exposure.
CCD and CMOS are the important types of image sensors. A comparison of CCD and CMOS features is given in Table 1.3.
Table 1.3 Comparison of CCD and CMOS Image Sensors

CCD                                        CMOS
Smallest pixel size                        Single power supply
Low noise                                  Single master clock
Lowest dark current                        Low power consumption
100% fill factor for full-frame CCD        X,Y addressing and subsampling
Established technology market base         Smallest system size
Highest sensitivity                        Easy integration of circuitry
Electron shutter without artifacts
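The CCD specification text above defines dynamic range as the logarithmic ratio of well depth to readout noise in decibels, and quantum efficiency as captured electrons per incident photon. A minimal worked sketch follows; the 20,000-electron well depth, 10-electron read noise, and photon counts are assumed example numbers, not specifications of any real sensor.

```python
import math

# Worked sketch of two image sensor figures of merit defined above.
# All numbers are assumed examples, not specifications of any real sensor.

def dynamic_range_db(well_depth_e: float, read_noise_e: float) -> float:
    """Dynamic range as the logarithmic ratio of well depth to readout noise, in dB."""
    return 20.0 * math.log10(well_depth_e / read_noise_e)

def quantum_efficiency(electrons_generated: float, photons_incident: float) -> float:
    """Ratio of photon-generated electrons captured by the pixel to incident photons."""
    return electrons_generated / photons_incident

if __name__ == "__main__":
    print(f"dynamic range     : {dynamic_range_db(20000, 10):.1f} dB")     # ~66 dB
    print(f"quantum efficiency: {quantum_efficiency(450, 1000):.0%}")      # 45%
```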
1.1.10.3 Technical Specifications
Inspection functions include object detection, edge detection, image direction, alignment, object measurement, object position, bar or matrix code, optical character recognition (OCR), and color mark or color recognition. Other parameters to consider when specifying scan or vision or image sensors include performance features, physical features, lens mounting, shutter control, sensor specifications, dimensions, and operating environment parameters. Sensor specifications to consider when searching for scan or image or vision sensors include the number of images stored and the maximum inspection rate. The number of images stored represents captured images that can be stored in on-board memory or nonvolatile storage. The maximum inspection rate is the maximum number of parts or process steps that can be inspected or evaluated per unit time. This is usually given in units of inspections per second. Other important parameters of the sensor specifications include (1) Image sensor resolution. Image resolution is a way of expressing how sharp or detailed images are. There are two kinds of resolution: optical and interpolated. The optical resolution of an image sensor is an absolute number because an image sensor's pixels or photo-elements are physical devices that can be counted. To improve resolution in certain limited respects, the apparent resolution can be increased using software. This process, called interpolated resolution, adds pixels to the image to increase the total number of pixels. To do so, software evaluates the pixels surrounding each new pixel to determine what its color should be. For example, if all of the pixels around a newly inserted pixel are red, the new pixel will be made red. It is important to keep in mind that interpolated resolution does not add any new information to the image; it just adds pixels and enlarges the file. The same thing can be done in a photo-editing program such as Photoshop by resizing the image. (2) Color depth. Resolution is not the only factor governing the quality of your images. Equally important is color. When you view a natural scene, or a well done photographic color print, you are able to distinguish millions of colors. Digital images can approximate this color realism, but whether they do so depends on their capabilities and settings. For example, almost all newer computer systems can display what's called 24-bit True Color. It's called True Color because these systems display 16 million colors, about the number the human eye can distinguish.
(3) Shutter control. There is much confusion about the need for a mechanical shutter with scan and image sensors. The discussion below clarifies the differences. Most progressive scan and image sensors can be considered to be long chains of pixels, or buckets, that are each sensitive to light. Sensing begins by clearing the sensor and then exposing it to light. After the proper exposure time, the charges pass from pixel to pixel in bucket-brigade fashion to the output pin. If the sensor is being exposed to light while it is passing the data, the pixels will be polluted with additional light that confuses the image. Therefore, this type of progressive scan sensor requires a mechanical shutter to block the sensor while it is shifting the data out. Alternatively, some other progressive scan and image sensors have an entire frame store. This frame store is a place to save the contents of each pixel in a shielded bucket that is insensitive to light. The shielded buckets are then passed to the output pin and are, by their nature, not corrupted by additional light. Unfortunately, these sensors must be almost twice the size of their counterparts without the shielded frame storage area. So, it is important to compare the cost of the larger sensor to the savings associated with not needing a mechanical shutter. Both of these implementations can be used for flash photography since the flash can be fired when the sensor is in integration mode. Interlaced scan and image sensors are similar to progressive sensors except that there is a shielded row store located between each pair of odd and even rows. For normal use, alternating frames copy either the odd or even rows into the shielded store prior to shifting while the other row (even or odd) is being exposed. In this way there is the advantage of not requiring a mechanical shutter. However, note that each frame has only half the vertical resolution of the entire sensor. This is the type of sensor commonly found in television cameras. When the flash fires, only one-half of the rows will be in integration mode. So, this type of sensor cannot be used for flash photography unless you are willing to limit the vertical resolution. (4) Sensitivity. An International Organization for Standardization (ISO) number that appears on the film package specifies the speed, or sensitivity, of a silver-based film. The higher the number, the "faster" or more sensitive the film is to light. Each doubling of the ISO number indicates a doubling in film speed. Image sensors are also rated using equivalent ISO numbers. Just as with film, an image sensor with a lower ISO needs more light for a good exposure than one with a higher ISO. All things
being equal, it is better to get an image sensor with a higher ISO because it helps freeze motion and allows shooting in low light. Typically, ISOs range from 100 (fairly slow) to 3200 or higher (very fast). Some cameras have more than one ISO rating. In low-light situations, you can increase the sensor's ISO by amplifying the image sensor's signal (increasing its gain). Some cameras even increase the gain automatically. This not only increases the sensor's sensitivity, it also increases the noise or "grain," making the images softer and less sharp. (5) Aspect ratio. Image sensors have different aspect ratios, that is, the ratio of image width to height. The ratio of a square is 1:1 (equal width and height) and that of 35 mm film is 1.5:1 (1.5 times wider than it is high). Most image sensors fall in between these extremes. The aspect ratio of a sensor is important because it determines the shape and proportions of the photographs you create. When an image has a different aspect ratio than the device it's displayed or printed on, it has to be cropped or resized to fit. Your choice is to lose part of the image or waste part of the paper. To imagine this better, try fitting a square image on a rectangular piece of paper. The aspect ratio of an image sensor determines the shape of your prints. An image will only perfectly fill a sheet of paper if both have the same aspect ratio. If the ratios are different, you have to choose between losing part of the image or leaving some white space on the paper. To calculate the aspect ratio of any camera, divide the largest number in its resolution by the smallest number. For example, if a sensor has a resolution of 3000 × 2000, divide 3000 by 2000. In this case the aspect ratio is 1.5, the same as 35 mm film (see the sketch following this list). (6) Dynamic range. Dynamic range is the ratio of signal to noise of an image sensor. (7) Package. The sensor package is often neglected in sensor selection. It is common in smaller sensors for the package cost to be at least half of the total cost of the product. Sensor packages are costly because they are produced in relatively low volumes, have optical-quality glass tops, must be dirt free and kept at low humidity, and require precise die positioning. (8) Image quality. The size of an image file depends in part on the resolution of the image. The higher the resolution, the more pixels there are to store, so the larger the image file becomes. To make large image files smaller and more manageable most cameras store images in a format called JPEG after its developer, the Joint Photographic Experts Group. This file format not
only compresses images, but it also allows you to specify how much they are compressed. This is a useful feature because there is a trade-off between compression and image quality. Less compression gives better images, but fewer of them can be stored. More compression allows storing more images, but image quality will not be as good. (9) Frame rate. Most digital cameras have automatic exposure controls. There are two delays built into digital cameras that affect the ability to respond to fast action when taking pictures. The first brief delay is between pressing the shutter button and actually capturing the image. This delay, called the refresh rate, occurs because the camera clears the image sensor, sets white balance to correct for color, sets the exposure, and focuses the image. Finally, it fires the flash if needed and takes the picture. The second delay, the recycle time, occurs when the captured image is processed and stored. This delay can range from a few seconds to half a minute. Both of these delays affect how quickly a series of photos can be taken one after another, called the frame rate, shot-to-shot rate, or click-to-click rate. (10) Clocking and power supply design. This specifies whether the device requires a wider variety of power supply voltages and clocks or a single power supply voltage, and whether clocking is included.
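The aspect-ratio rule in item (5) above (divide the larger resolution number by the smaller) and the shot-to-shot timing in item (9) are simple arithmetic. The sketch below illustrates both; the refresh and recycle delays used are assumed example values.

```python
# Sketch of two calculations described in the list above: sensor aspect ratio
# and an approximate shot-to-shot (frame) rate. Delay values are assumed examples.

def aspect_ratio(width_px: int, height_px: int) -> float:
    """Divide the larger resolution number by the smaller."""
    return max(width_px, height_px) / min(width_px, height_px)

def shots_per_minute(refresh_s: float, recycle_s: float) -> float:
    """Rough shot-to-shot rate from the refresh and recycle delays."""
    return 60.0 / (refresh_s + recycle_s)

if __name__ == "__main__":
    print(f"aspect ratio of 3000 x 2000: {aspect_ratio(3000, 2000):.1f}")  # 1.5, like 35 mm film
    print(f"shots per minute: {shots_per_minute(0.5, 3.0):.1f}")           # assumed delays
```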
1.1.11 Force and Load Sensors
Force and load sensors cover electrical sensing devices that are used to measure tension, compression, and shear forces. Tension cells are used for measurement of a straight-line force "pulling apart" along a single axis, typically annotated as positive force. Compression cells are used for measurement of a straight-line force "pushing together" along a single axis, typically annotated as negative force. Shear is induced by tension or compression along offset axes. These sensors are manufactured in many different packages and mounting configurations. In industry, force and load sensors are used for security devices, packaging devices, production automation, etc.
1.1.11.1 Operating Principle
The most common technologies for the operation of force and load sensors are various types of strain gauges. Strain gauges are measuring elements
that convert force, pressure, tension, etc., into an electrical signal. They are the most universally used measuring devices for electrical measurement of mechanical quantities. A strain gauge is a resistive elastic sensor whose resistance is a function of applied strain (unit deformation). Many types of strain gauges exist, differing in how the electrical resistance responds to strain. These types include piezoresistive or semiconductor, carbon-resistive, bonded metallic wire, and foil gauges. The most widely used characteristic that varies in proportion to strain is electrical resistance. In these gauges the electrical resistance varies linearly with strain. The resistance of an electrically conductive material changes with dimensional changes that take place when the conductor is deformed elastically. When such a material is stretched, the conductors become longer and narrower, which causes an increase in resistance. A Wheatstone bridge then converts this change in resistance to an absolute voltage. The resulting value is linearly related to strain by a constant called the gauge factor. Capacitance devices, which depend on geometric features, can be used to measure strain. Changing the plate area or the gap can vary the capacitance. The electrical properties of the materials used to form the capacitor are relatively unimportant, so capacitance strain gauge materials can be chosen to meet the mechanical requirements. This allows the gauges to be more rugged, providing a significant advantage over resistance strain gauges. (1) The strain gauge. Strain is the amount of deformation of a body due to an applied force. More specifically, strain is defined as the fractional change in length. While there are several methods of measuring strain, the most common is with a strain gauge, a device whose electrical resistance varies in proportion to the amount of strain in the device. The most widely used gauge, however, is the bonded metallic strain gauge. The metallic strain gauge consists of a very fine wire or, more commonly, metallic foil arranged in a grid pattern. The grid pattern maximizes the amount of metallic wire or foil subject to strain in the parallel direction (Fig. 1.24). The cross-sectional area of the grid is minimized to reduce the effect of shear strain and Poisson strain. The grid is bonded to a thin backing, called the carrier, which is attached directly to the test specimen. Therefore, the strain experienced by the test specimen is transferred directly to the strain gauge, which responds with a linear change in electrical resistance. Strain gauges are available commercially with nominal resistance values from 30 to 3000 Ω, with 120, 350, and 1000 Ω being the most common values.
Figure 1.24 Bonded metallic strain gauge (showing alignment marks, solder tabs, active grid length, and carrier).
It is very important that the strain gauge be properly mounted onto the test specimen so that the strain is accurately transferred from the test specimen, through the adhesive and strain gauge backing, to the foil itself. Manufacturers of strain gauges are the best source of information on proper mounting of strain gauges. (2) Strain gauge measurement. In practice, the strain measurements rarely involve quantities larger than a few millistrain ε × 10−3. To measure such small changes in resistance, and compensate for the temperature sensitivity, proper selection and use of the bridge, signal conditioning, wiring, and data acquisition components are required for reliable measurements. (a) Bridge completion. Unless you are using a full-bridge strain gauge sensor with four active gauges, you will need to complete the bridge with reference resistors. Therefore, strain gauge signal conditioners typically provide half-bridge completion networks consisting of two high-precision reference resistors. Figure 1.25 diagrams the wiring of a half-bridge strain gauge circuit to a conditioner with completion resistors R1 and R2. The nominal resistance of the completion resistors is less important than how well the two resistors are matched. Ideally, the resistors are well matched and provide a stable reference voltage of VEX/2 to the negative input lead of the measurement channel. For example, the half-bridge
Figure 1.25 Connection of half-bridge strain gauge circuit (strain gauges RG, completion resistors R1 and R2, and excitation VEX at the signal conditioner).
completion resistors provided on the SCXI-1122 signal conditioning module are 2.5 kΩ resistors, with a ratio tolerance of 0.02%. The high resistance of the completion resistors helps minimize the current drawn from the excitation voltage. (b) Bridge excitation. Strain gauge signal conditioners typically provide a constant voltage source to power the bridge. While there is no standard voltage level that is recognized industry-wide, excitation voltage levels of around 3 and 10 V are common. While a higher excitation voltage generates a proportionately higher output voltage, the higher voltage can also cause larger errors due to self-heating. Again, it is very important that the excitation voltage be very accurate and stable. Alternatively, one can use a less accurate or stable voltage, and accurately measure, or sense, the excitation voltage so the correct strain can be calculated. (c) Excitation sensing. If the strain gauge circuit is located at a distance away from the signal conditioner and excitation source, a possible source of error is voltage drop caused by resistance in the wires connecting the excitation voltage to the bridge. Therefore, some signal conditioners include a feature called remote sensing to compensate for this error. There are two common methods of remote sensing. With feedback remote sensing, you connect extra sense wires to the point where the excitation voltage wires connect to the bridge circuit. The extra sense wires serve to regulate the excitation supply to compensate for lead losses and deliver the needed voltage at the bridge. This scheme is used with the SCXI-1122. An alternative remote sensing scheme uses a separate measurement channel to measure directly the excitation voltage
delivered across the bridge. Because the measurement channel leads carry very little current, the lead resistance has negligible effect on the measurement. The measured excitation voltage is then used in the voltage-to-strain conversion to compensate for lead losses. (d) Signal amplification. The output of strain gauges and bridges is relatively small. Therefore, strain gauge signal conditioners usually include amplifiers to boost the signal level to increase measurement resolution and improve signal-to-noise ratios. SCXI signal conditioning modules, for example, include configurable gain amplifiers with gains up to 2000. (e) Bridge balancing, offset nulling. When a bridge is installed, it is very unlikely that the bridge will output exactly 0 V when no strain is applied. Rather, slight variations in resistance among the bridge arms and lead resistance will generate some nonzero initial offset voltage. There are a few different ways that a system can handle this initial offset voltage: (i) Software compensation. The first method compensates for the initial voltage in software. With this method, you take an initial measurement before strain input is applied. This initial voltage is then used in the strain equations listed at the end of this application note. This method is simple, fast, and requires no manual adjustments. The disadvantage of the software compensation method is that the offset of the bridge is not removed. If the offset is large enough, it limits the amplifier gain you can apply to the output voltage, thus limiting the dynamic range of the measurement. (ii) Offset nulling circuit. The second balancing method uses an adjustable resistance, or potentiometer, to physically adjust the output of the bridge to zero. For example, Fig. 1.26 illustrates the offset nulling circuit of the SCXI-1321 terminal block. By varying the position of
Figure 1.26 Offset nulling circuit of SCXI-1321 terminal block (bridge with nulling potentiometer RPOT and range-setting resistor RNULL).
the potentiometer (RPOT), you can control the level of the bridge output and set the initial output to 0 V. The value of RNULL sets the range that the circuit can balance. On the SCXI-1321, this resistor is socketed for easy adjustment of the balancing range. (iii) Buffered offset nulling. The third method, like the software method, does not affect the bridge directly. With buffered nulling, a nulling circuit adds an adjustable DC voltage to the output of the instrumentation amplifier. For example, the SC-2043-SG strain gauge accessory uses this method. The SC-2043-SG includes a user-adjustable potentiometer that can add ±50 mV to the output of an instrumentation amplifier that has a fixed gain of 10. Therefore, the nulling range, referred to input, is ±5 mV. (f) Shunt calibration. The normal procedure to verify the output of a strain gauge measurement system relative to some predetermined mechanical input or strain is called shunt calibration. Shunt calibration involves simulating the input of strain by changing the resistance of an arm in the bridge by some known amount. Shunting, or connecting, a large resistor of known value across one arm of the bridge accomplishes this, creating a known change in resistance. The output of the bridge can then be measured and compared to the expected voltage value. The results can then be used to correct span errors in the entire measurement path, or to simply verify general operation to gain confidence in the setup.
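Shunt calibration, as described above, simulates a known strain by placing a large resistor of known value across one bridge arm. The sketch below is a minimal illustration of the arithmetic with assumed example values (350 Ω gauge, gauge factor 2.0, 100 kΩ shunt); sign conventions and lead-resistance corrections are omitted.

```python
# Sketch of shunt-calibration arithmetic: a known shunt resistor across one
# bridge arm produces a known equivalent strain. Values are assumed examples.

def shunted_resistance(r_gauge: float, r_shunt: float) -> float:
    """Parallel combination of the gauge and the shunt resistor."""
    return r_gauge * r_shunt / (r_gauge + r_shunt)

def simulated_strain(r_gauge: float, r_shunt: float, gauge_factor: float) -> float:
    """Equivalent strain magnitude from the known resistance change (signs omitted)."""
    delta_r = r_gauge - shunted_resistance(r_gauge, r_shunt)
    return delta_r / (gauge_factor * r_gauge)

if __name__ == "__main__":
    r_g, r_s, gf = 350.0, 100_000.0, 2.0      # assumed example values
    eps = simulated_strain(r_g, r_s, gf)
    print(f"simulated strain: {eps*1e6:.0f} microstrain")   # ~1744 microstrain
```

The measured bridge output at this simulated strain can then be compared with the expected value to correct span errors, as the text describes.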
1.1.11.2 Basic Types
(1) Force and load sensors. Force and load sensors can be devices of many different types, including sensor element or chip, sensor or transducer, instrument or meter, gauge or indicator, and recorder or totalizer. A sensor element or chip denotes a "raw" device such as a strain gauge, or one with no integral signal conditioning or packaging. A sensor or transducer is a more complex device with packaging and/or signal conditioning that is powered and provides an output such as a DC voltage, a 4–20 mA current loop, etc. An instrument or meter is a self-contained unit that provides an output such as a display locally at or near the device. Typically it also includes signal processing and/or conditioning. A gauge or indicator is a device that has a (usually analog) display and no electronic output, such as a tension gauge. A recorder
or totalizer is an instrument that records, totalizes, or tracks force measurement over time. It includes simple data logging capability or advanced features such as mathematical functions, graphing, etc. Features common to force and load sensors include biaxial measurement, triaxial measurement, and temperature compensation. Biaxial load cells can provide load measurements along two, typically orthogonal, axes. Triaxial load cells can provide load measurements along three, typically orthogonal, axes. Temperature-compensated load cells provide special circuitry to reduce or eliminate sensing errors due to temperature variations. Other parameters to consider include operating temperature, maximum shock, and maximum vibration. (2) Strain gauges. The most common technology used by force and load sensors is the principle of strain gauges. In a photoelectric strain gauge a beam of light is passed through a variable slit, actuated by the extensometer, and directed to a photoelectric cell. As the gap opening changes, the amount of light reaching the cell varies, causing a varying intensity in the current generated by the cell. Semiconductor or piezoelectric strain gauges are constructed of ferroelectric materials. In ferroelectric materials, such as crystalline quartz, a change in the electronic charge across the faces of the crystal occurs when the material is mechanically stressed. The piezoresistive effect is defined as the change in resistance of a material due to an applied stress, and this term is used commonly in connection with semiconducting materials. Optical strain gauge types include photoelastic, moiré interferometer, and holographic interferometer strain gauges. In a fiber-optic strain gauge, the sensor measures the strain through the shift in frequency of the light reflected down the fiber from the Bragg grating, which is embedded inside the fiber itself. The gauge pattern refers cumulatively to the shape of the grid, the number and orientation of the grids in a multiple grid (rosette) gauge, the solder tab configuration, and various construction features that are standard for a particular pattern. Arrangement types include uniaxial, dual linear, strip gauges, diaphragm, tee rosette, rectangular rosette, and delta rosette. Specialty applications for strain gauges include crack detection, crack propagation, extensometer, temperature measurement, residual stress, shear modulus gauge, and transducer gauge. The three primary specifications when selecting strain gauges are the operating temperature, the state of the strain (including
gradient, magnitude, and time dependence), and the stability required by the application. The operating temperature range is the range of ambient temperature over which the strain gauge may be used without permanent changes of the measurement properties. Other important parameters to consider include the active gauge length, the gauge factor, nominal resistance, and strain-sensitive material. The gauge length of a strain gauge is the active or strain-sensitive length of the grid. The end loops and solder tabs are considered insensitive to strain because of their relatively large cross-sectional area and low electrical resistance. The strain sensitivity, k, of a strain gauge is the proportionality factor between the relative change of the resistance and the strain. The strain sensitivity is a dimensionless figure and is generally called the gauge factor. The resistance of a strain gauge is defined as the electrical resistance measured between the two metal ribbons or contact areas intended for the connection of measurement cables. The principal component that determines the operating characteristics of a strain gauge is the strain-sensitive material used in the foil grid.
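The gauge-factor definition above reduces to simple arithmetic: dR/R = k * strain, and for a single active gauge in a Wheatstone bridge the output is, to first order, VEX * k * strain / 4. The sketch below is a minimal illustration of this small-strain approximation with assumed example values (k = 2.0, 120 Ω gauge, 1000 microstrain, 5 V excitation), not data for any particular gauge.

```python
# Small-strain sketch of the gauge-factor and bridge relations described above.
# Gauge factor, resistance, excitation, and strain are assumed example values.

def resistance_change(r_nominal: float, gauge_factor: float, strain: float) -> float:
    """Delta-R from the gauge-factor definition: dR/R = k * strain."""
    return r_nominal * gauge_factor * strain

def quarter_bridge_output(v_excitation: float, gauge_factor: float, strain: float) -> float:
    """First-order Wheatstone bridge output for one active gauge: Vo ~ Vex * k * strain / 4."""
    return v_excitation * gauge_factor * strain / 4.0

if __name__ == "__main__":
    gf, r0, eps, vex = 2.0, 120.0, 1000e-6, 5.0   # assumed values; 1000 microstrain
    print(f"delta R   : {resistance_change(r0, gf, eps)*1000:.1f} mOhm")        # ~240 mOhm
    print(f"bridge out: {quarter_bridge_output(vex, gf, eps)*1000:.2f} mV")      # ~2.5 mV
```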
1.1.11.3 Technical Specifications
Important parameters for force and load sensors include the force and load measurement range and the accuracy. The measurement range is the range of required linear output. Most force sensors actually measure the displacement of a structural element to determine force. The force is associated with a deflection as a result of calibration. There are many form factors or packages to choose from: S-beam, pancake, donut or washer, plate or platform, bolt, link, miniature, cantilever, canister, load pin, rod end, and tank weighing. Shear cell type can be shear beam, bending beam, or single point bending beam. Force and load sensors can have one of many output types. These include analog voltage, analog current, analog frequency, switch or alarm, serial, and parallel. (1) Force and load sensor specifications (a) Force to measure (i) Force/load measurement range. The range required of linear output. Search Logic: User may specify either, both, or neither of the "At Least" and "No More Than" values.
Products returned as matches will meet all specified criteria. (ii) Accuracy. The accuracy required of the device. Search Logic: All matching products will have a value less than or equal to the specified value. (b) Force sensor type (i) Tension. A tension cell measures a straight-line force "pulling apart" along a single axis, typically annotated as positive force. (ii) Compression. A compression cell measures a straight-line force "pushing together" along a single axis, typically annotated as negative force. (iii) Shear. Shear is induced by tension or compression along offset axes. Search Logic: All products with ANY of the selected attributes will be returned as matches. Leaving all boxes unchecked will not limit the search criteria for this question; products with all attribute options will be returned as matches. (c) Load cell package. Most force sensors actually measure the displacement of a structural element to determine force. The force is associated with a deflection as a result of calibration. There are many form factors or packages to choose from: S-beam, pancake, donut or washer, plate or platform, bolt, link, miniature, cantilever, canister, load pin, rod end, and tank weighing. Your choices are (i) S-beam. S-beam units are shaped like a squared-off S. Variable resistors (whose resistance is a function of strain induced by the load, e.g., strain gauges or piezoresistive elements) are bonded to the regions of maximum strain and change resistance as a load is applied. These resistance changes are typically measured in a Wheatstone bridge circuit. (ii) Pancake. Pancake cells are similarly instrumented short, low-profile cylinders. They are quite popular and are capable of measuring very small through very large loads. (iii) Donut/washer. Donut cells are like pancake cells but with a through bore or hole. (iv) Plate/platform (v) Bolt. Load sensor with one or two threaded ends for attachment to the measured system; measures force along the long axis of the bolt.
(vi) Link. Sensor for inline measurement of tension or compression. (vii) Miniature. Miniature is a characteristic assigned by the supplier. If the supplier considers the load cell to be miniature, it will be carried in the database as such. (viii) Cantilever. Cantilever units are designed to have the load applied to a cantilever that is typically instrumented at the base. (ix) Canister. Canister cells are instrumented cylinders. They are of considerably higher aspect ratio than pancake cells. (x) Load pin. Load pins are typically instrumented shear elements that undergo a strain when a load is applied. A typical application is an instrumented wrist pin that attaches a hook to a cable on a crane. (xi) Rod end—male. Rod end—male is an instrumented rod with male threads. (xii) Rod end—female. Rod end—female is an instrumented rod with female threads. (xiii) Tank weighing. Tank weighing load cells are specially designed to support tanks. Search Logic: All products with ANY of the selected attributes will be returned as matches. Leaving all boxes unchecked will not limit the search criteria for this question; products with all attribute options will be returned as matches. (d) Sensor output. It includes the type of electrical signal that will be produced. (i) Analog voltage. Output voltage is a simple (usually linear) function of the measurement, including voltage ranges such as 0–10 V, 5 V, and voltage ratios dependent upon excitation, typically expressed as millivolts per volt. (ii) Analog current. Often called a transmitter, a current is imposed on the output circuit proportional to the measurement; typical ranges are 4–20 mA, 0–50 mA, etc. Feedback is used to provide the appropriate current regardless of line noise, impedance, etc. Current outputs are often useful when sending signals over long distances. (iii) Analog frequency. The output signal is encoded via amplitude modulation (AM), frequency modulation
(FM), or some other modulation scheme; the signal is analog in nature. (iv) Switch/alarm. The sensor triggers on a sensed force level to close or open a switch, or provide a signal to an alarm or interlock. (v) Serial is a standard serial digital output protocol such as RS232, RS422, RS485, USB, etc. (vi) Parallel is a standard parallel digital output protocol such as IEEE 488, a Centronics or printer port, etc. (vii) Other. Any digital outputs other than the standard serial or parallel signals. Simple TTL logic signals are an example. Search Logic: All products with ANY of the selected attributes will be returned as matches. Leaving all boxes unchecked will not limit the search criteria for this question; products with all attribute options will be returned as matches. (e) Category of devices. You can think of products as belonging to general categories based on what they are designed to do and what you have to do to use them. The "category" criterion attempts to distinguish "unpackaged" sensors that might be used as part of a larger sensor from, say, a gauge which can be read just by looking at it. (i) Sensor element/chip denotes a "raw" device such as a strain gauge or one with no integral signal conditioning or packaging. (ii) Sensor/transducer. A more complex device with packaging and/or signal conditioning that is powered and provides an output such as a DC voltage, a 4–20 mA current loop, etc. (iii) Instrument/meter is a self-contained unit that provides an output such as a display locally at or near the device. Typically it also includes signal processing and/or conditioning. (iv) Gauge/indicator. A device that has a (usually analog) display and no electronic output, such as a tension gauge. (v) Recorder/totalizer is an instrument that records, totalizes, or tracks force measurement over time, including simple data logging capability or advanced features such as mathematical functions, graphing, etc. Search Logic: All products with ANY of the selected attributes will be returned as matches. Leaving all boxes unchecked will not limit the search criteria for this
question; products with all attribute options will be returned as matches. (f) Sensor technology (i) Piezoelectric. For piezoelectric devices, a piezoelectric material is compressed and generates a charge that is conditioned by a charge amplifier. (ii) Strain gauge. For strain gauge devices, strain gauges (strain-sensitive variable resistors) are bonded to parts of the structure that deform when making the measurement. These strain gauges are typically used as elements in a Wheatstone bridge circuit, which is used to make the measurement. Strain gauges typically require an excitation voltage and provide output sensitivity proportional to that excitation. Search Logic: All products with ANY of the selected attributes will be returned as matches. Leaving all boxes unchecked will not limit the search criteria for this question; products with all attribute options will be returned as matches. (g) Features (i) Biaxial measurement. Biaxial load cells can provide load measurements along two, typically orthogonal, axes. Search Logic: “Required” and “Must Not Have” criteria limit returned matches as specified. Products with optional attributes will be returned for either choice. (ii) Triaxial measurement. Triaxial load cells can provide load measurements along three, typically orthogonal, axes. Search Logic: “Required” and “Must Not Have” criteria limit returned matches as specified. Products with optional attributes will be returned for either choice. (iii) Temperature compensation. Temperature compensated load cells provide special circuitry to reduce/eliminate sensing errors due to temperature variations. Search Logic: “Required” and “Must Not Have” criteria limit returned matches as specified. Products with optional attributes will be returned for either choice. (2) Strain gauge specifications (a) Construction (i) Electrical resistance. The resistance of an electrically conductive material changes with dimensional changes that take place when the conductor is deformed elastically. When such a material is stretched, the conductors
become longer and narrower, which causes an increase in resistance. A Wheatstone bridge then converts this change in resistance to an absolute voltage. The resulting value is linearly related to strain by a constant called the gauge factor. (ii) Capacitance. Capacitance devices, which depend on geometric features, can be used to measure strain. Changing the plate area or the gap can vary the capacitance. The electrical properties of the materials used to form the capacitor are relatively unimportant, so capacitance strain gauge materials can be chosen to meet the mechanical requirements. This allows the gauges to be more rugged, providing a significant advantage over resistance strain gauges. (iii) Photoelectric. A beam of light is passed through a variable slit, actuated by the extensometer, and directed to a photoelectric cell. As the gap opening changes, the amount of light reaching the cell varies, causing a varying intensity in the current generated by the cell. (iv) Semiconductor (piezoresistive). In ferroelectric materials, such as crystalline quartz, a change in the electronic charge across the faces of the crystal occurs when the material is mechanically stressed. The piezoresistive effect is defined as the change in resistance of a material due to an applied stress, and this term is used commonly in connection with semiconducting materials. The resistivity of a semiconductor is inversely proportional to the product of the electronic charge, the number of charge carriers, and their average mobility. The effect of applied stress is to change both the number and average mobility of the charge carriers. By choosing the correct crystallographic orientation and dopant type, both positive and negative gauge factors may be obtained. Silicon is now almost universally used for the manufacture of semiconductor strain gauges. (v) Optical photoelastic strain gauges. When a photoelastic material is subjected to a load and illuminated with polarized light from the measurement instrumentation (called a reflection polariscope), patterns of color appear which are directly proportional to the stresses
and strains within the material. The sequence of colors observed as stress increases is black (zero stress), yellow, red, blue, green, etc. The transition lines seen between the red and green bands are known as "fringes." The stresses in the material increase proportionally as the number of fringes increases. A closely spaced fringe means a steeper stress gradient, and uniform color represents a uniformly stressed area. Hence, the overall stress distribution can easily be studied by observing the numerical order and spacing of the fringes. Furthermore, a quantitative analysis of the direction and magnitude of the strain at any point on the coated surface can be performed with the reflection polariscope and a digital strain indicator. (vi) Moiré interferometry strain gauges. Moiré interferometry is an optical technique that uses coherent laser light to produce a high contrast, two-beam optical interference pattern. Moiré interferometry reveals planar displacement fields on a part's surface, which are caused by external loading or other sources of deformation. It responds only to geometric changes of the specimen and is effective for diverse engineering materials. Contour maps of planar deformation fields can be generated from x and y components of displacements. (vii) Holographic interferometry strain gauges. Holographic interferometry allows the evaluation of strain, rotation, bending, and torsion of an object in three dimensions. Since holography is sensitive to the surface effects of an opaque body, extrapolation into the interior of the body is possible in some circumstances. In one or more double-exposure holograms, changes in the object are recorded. From the fringe patterns in the reconstructed image of the object, the interference phase-shifts for different sensitivity vectors are measured. A computer is then used to calculate the strain and other deformations. (viii) Fiber optic. The sensor measures the strain through the shift in frequency of the light reflected down the fiber from the Bragg grating, which is embedded inside the fiber itself. Since it is possible to put several sensors on the same fiber, the amount of cabling required is reduced significantly compared to other types of strain gauges.
Also, since the signal is optical rather than electronic, it is not affected by electromagnetic interference (a numerical sketch of the Bragg-wavelength relation follows this listing). (ix) Other. Other is unlisted, specialized, or proprietary strain gauge construction. Search Logic: All products with ANY of the selected attributes will be returned as matches. Leaving all boxes unchecked will not limit the search criteria for this question; products with all attribute options will be returned as matches. (b) Physical specifications (i) Active gauge length (grid length). The gauge length of a strain gauge is the active or strain-sensitive length of the grid. The end loops and solder tabs are considered insensitive to strain because of their relatively large cross-sectional area and low electrical resistance. Search Logic: User may specify either, both, or neither of the "At Least" and "No More Than" values. Products returned as matches will meet all specified criteria. (ii) Number of gauges in gauge pattern. The total number of strain gauges in the gauge pattern. Search Logic: User may specify either, both, or neither of the "At Least" and "No More Than" values. Products returned as matches will meet all specified criteria. (iii) Operating temperature. The operating temperature range is the range of ambient temperature where the use of the strain gauge is permitted without permanent changes of the measurement properties. Search Logic: User may specify either, both, or neither of the limits in a "From–To" range; when both are specified, matching products will cover the entire range. Products returned as matches will meet all specified criteria. (iv) Gauge factor. The strain sensitivity, k, of a strain gauge is the proportionality factor between the relative change of the resistance and the strain. The strain sensitivity is a dimensionless figure and is generally called the gauge factor. Search Logic: All matching products will have a value greater than or equal to the specified value.
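For the fiber-optic (Bragg grating) construction in item (viii) above, strain is recovered from the shift of the reflected Bragg wavelength. A minimal sketch of the commonly used first-order relation, delta-lambda/lambda = (1 - p_e) * strain, is given below; the photoelastic coefficient p_e of about 0.22 and the 1550 nm Bragg wavelength are typical assumed values, not figures from the text.

```python
# Sketch of first-order fiber Bragg grating strain recovery.
# The photoelastic coefficient and Bragg wavelength are assumed typical values.

def strain_from_bragg_shift(delta_lambda_nm: float, bragg_lambda_nm: float,
                            photoelastic_coeff: float = 0.22) -> float:
    """Strain from the relative Bragg wavelength shift: eps = (dl / l) / (1 - p_e)."""
    return (delta_lambda_nm / bragg_lambda_nm) / (1.0 - photoelastic_coeff)

if __name__ == "__main__":
    shift_nm = 1.2          # observed wavelength shift (assumed example)
    bragg_nm = 1550.0       # nominal Bragg wavelength (assumed example)
    eps = strain_from_bragg_shift(shift_nm, bragg_nm)
    print(f"strain: {eps*1e6:.0f} microstrain")   # roughly 990 microstrain
```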
1.1.11.4 Calibration
Calibration and temperature compensation of strain gauge–based weight sensors using the precision sensor signal conditioner integrated circuits is
becoming increasingly popular. Leading automotive suppliers of safety products are increasingly turning to the use of force sensors in their quest to optimize airbag deployment forces appropriate for the mass of the occupant and severity of the deployment situation. A typical weight sensor characterization requires the use of a deadweight test stand (also sometimes called a creep tester) in order to obtain accurate and repeatable sensor loading. Several points must be observed in order to achieve the desired results. (1) fixture orientation for positive and negative force loads on the test stand; (2) sensor and cable orientation; (3) torque of the sensor mounting bolts; (4) weight numbering; (5) shaft-to-weight clearance (when using an automatic weight lift for removing weights from load); (6) preconditioning sensor and fixtures after sensor mounting; (7) monotonic application and removal of weights; (8) golden samples and data tracking.
1.2 Actuators
An actuator is simply defined as a device that produces linear or rotary motion from a source of power under the action of a source of control. The sources of power driving actuators can be electric, pneumatic, hydraulic (fluid), or piezoelectric, or others such as our hands. Actuators are accordingly classed into electric actuators, pneumatic actuators, hydraulic actuators, piezoelectric actuators, as well as manual actuators, based on the applied source of power. Basic actuators are used to move valves or switches to either fully opened or fully closed positions. Actuators for control or position-regulating valves or switches are also given a positioning signal to move to any intermediate position with a high degree of accuracy. Although the most common and important use of an actuator is to open and close valves or switches, current actuator designs go far beyond the basic open and close function to implement more and more positioning functions. Therefore, in some instances of industrial control technology, actuators are also termed positioners. In addition to directly positioning, an actuator can be packaged together with position sensing equipment, torque sensing, motor protection, logic control, digital communication capacity,
and even PID control to play a role as a position detector or position indicator.
1.2.1 Electric Actuators
Electric actuators, utilizing the simplicity of electrical operation, provide the most reliable means of positioning a valve in a safe condition, including fail-safe to close or open, or lock in position, on power or system failure. However, electric actuators are not restricted to open or close applications; with the addition of one or more of the available kit options, the requirements for fully fledged control units can often be met. For example, with both weatherproof and flameproof models, the range simplifies process automation by providing true electronic control from process variable to valve and supplies a totally electric system for all environments. The unit can be supplied with the appropriate electronic controls to match any process control system requirement. Electric actuators are actively marketed and considered as replacements for pneumatic actuators. Although pneumatic actuators are still an important method of actuation, more and more often electric actuators provide a superior solution, especially when high accuracy, high duty cycle, excellent reliability, long life expectancy, and low maintenance are required, or when extra switches, speed controllers, potentiometers, position transmitters, positioners, and a local control station are needed. These options may be added to factory-built units, or supplied in kit form. When supplied as kits, all parts are included together with an easy-to-follow installation sheet.
1.2.1.1 Operating Principle
The architecture of an electric actuator is given in Fig. 1.27; it consists basically of gears, a motor, and switches. Among these components, the motor plays the key role: in most applications, the motor is the primary torque-generating component. Motors are available for a variety of supply voltages, including standard single-phase alternating current, three-phase alternating current, and DC voltages. In some applications, the three-phase current for the asynchronous motor is generated by the power circuit module in the electronics, regardless of whether the power supply is single- or three-phase. Frequency converters and microcontrollers allow different speeds and precise tripping torques to be set (no overtorque). When an electric actuator is running, the phase angle is checked and automatically adjusted so that the rotation is always correct.

Figure 1.27 Basic components of an electric actuator: motor, gears, switches, and hand wheel (courtesy of Emerson).

To prevent heat damage due to excessive current draw in a stalled
condition, or due to overwork, electric actuator motors usually include a thermal overload sensor or switch embedded in the winding of the stator. The sensor or switch is installed in series with the power source, and opens the circuit when the motor is overheated, and then closes the circuit once it has cooled to a safe operating temperature. Electric actuators rely on a gear train (a series of interconnected gears) to enhance the motor torque and to regulate the output speed of the actuator. Some gear styles are inherently self-locking. This is particularly important in the automation of butterfly valves or when an electric actuator is used in modulating control applications. In these situations, seat and disc contact, or fluid velocity, act upon the closure element of the valve and cause a reverse force that can reverse the motor and camshaft. This causes a reenergization of the motor through the limit switch when the cam position is changed. This undesirable cycling will continue to occur unless a motor brake is installed, and usually leads to an overheated motor. Spur gears are sometimes used in rotary electric actuators, but are not self-locking. They require the addition of an electromechanical motor brake for these applications. A few of the self-locking gear styles include the worm and wheel and some configurations of planetary gears. A basic worm gear system operates as follows. A motor applies a force through the primary worm gear to the worm wheel. This, in turn, rotates the secondary worm gear, which applies a force to the larger radius of the secondary worm wheel to increase the torque.
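To make the gearing argument concrete, a two-stage reduction can be sketched numerically. The ratios and efficiencies below are hypothetical, chosen only to show the arithmetic; worm stages in particular have low efficiency, which is part of what makes them self-locking.

    # Torque and speed through a two-stage worm reduction (hypothetical figures).
    motor_torque_nm = 0.8      # torque at the motor shaft, N·m
    motor_speed_rpm = 1400.0   # motor speed, rev/min

    stages = [                 # (ratio, efficiency) for each worm stage
        (40.0, 0.55),          # primary worm gear driving the worm wheel
        (30.0, 0.60),          # secondary worm gear driving the secondary worm wheel
    ]

    torque, speed = motor_torque_nm, motor_speed_rpm
    for ratio, eff in stages:
        torque *= ratio * eff  # torque multiplied by the ratio, reduced by stage losses
        speed /= ratio         # output speed divided by the same ratio

    print(f"output torque is roughly {torque:.0f} N·m at {speed:.2f} rpm")

With these assumed numbers the motor's 0.8 N·m becomes a few hundred newton-metres at a fraction of a revolution per minute, which is the trade the gear train is there to make.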
An electric actuator system replacing the traditional hydraulic piston system is given in Fig. 1.28: it is made up of a motor, a reducer, and a ball screw. The motor receives its power from an electrical unit (no longer generated hydraulically) situated in the aircraft's avionics bay. Electric cables that carry the power to the motor have replaced hydraulic pipes. The control unit is able to directly and individually transmit the braking order to each brake, thus optimizing both braking and the use of each brake in operation. Electric brake technology offers aircraft manufacturers and airline companies significant savings in mass, installation costs (optimized aircraft assembly line integration), and operating costs (maintenance costs).

Figure 1.28 Working principle of an electric actuator system: electric motor (rotor and stator), reduction gear, and ball screw and nut.
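For a motor-plus-reducer-plus-ball-screw arrangement of this kind, the available axial force can be estimated from the screw lead using the usual power-screw relation F = 2*pi*eta*T/L. The figures below are hypothetical and the calculation is only a sizing sketch, not a description of any particular brake unit.

    import math

    # Linear force delivered by a ball screw driven through a reducer (hypothetical values).
    motor_torque_nm = 2.0     # continuous motor torque, N·m
    gear_ratio      = 5.0     # reducer ratio
    gear_eff        = 0.90    # reducer efficiency
    screw_lead_m    = 0.005   # ball screw lead, metres per revolution
    screw_eff       = 0.90    # ball screw efficiency

    screw_torque = motor_torque_nm * gear_ratio * gear_eff
    force_n = 2.0 * math.pi * screw_torque * screw_eff / screw_lead_m

    print(f"axial force is roughly {force_n / 1000.0:.1f} kN")

The same relation, run in reverse, gives the motor torque needed for a required clamping force.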
1.2.1.2 Basic Types
Electric actuators are divided into two different types: rotary and linear. Rotary electric actuators rotate a valve, such as a butterfly, ball, or plug valve, from open to closed. With the use of rotary electric actuators, the electromagnetic power from the motor causes the components to rotate, allowing for numerous stops during each stroke. Either a circular shaft or a table can be used as the rotational element. When selecting an electric rotary actuator, important factors to consider include actuator torque and range of motion. The actuator torque is the turning force available to cause the rotation, while the
full range of motion can be either nominal, quarter-turn, or multiturn. Linear electric actuators, in contrast, open and close pinch, globe, diaphragm, gate, or angle valves. Linear electric actuators are often used when tight tolerances are required. These electric actuators use an acme screw assembly or motor-driven ball screw to supply linear motion. Within linear electric actuators, the load is connected to the end of a screw that is belt or gear driven. Important factors to consider when selecting linear electric actuators include the number of turns, actuating force, and the length of the valve stem stroke.

(1) Linear electric actuators provide linear motion via a motor-driven ball screw or screw assembly. The linear actuator's load is attached to the end of a screw, or rod, and is unsupported. The screw can be direct, belt, or gear driven. Important performance specifications to consider when searching for linear actuators include stroke, maximum rated load or force, maximum rated speed, continuous power, and system backlash. Stroke is the distance between the fully extended and fully retracted rod positions. The maximum rated load or force is not the maximum static load. The maximum rated speed is the maximum actuator linear speed, typically rated at low or no load. Continuous power is sustainable power; it does not include short-term peak power ratings. Backlash is position error due to direction change. Motor choices include DC, DC servo, DC brushless, DC brushless servo, AC, AC servo, and stepper. Input power can be specified for DC, AC, or stepper motors. Drive screw specifications to consider for linear actuators include drive screw type and screw lead. Features include self-locking, limit switches, motor encoder feedback, and linear position feedback. Screw choices include acme screws and ball screws. Acme screws will typically hold loads without power but are usually less efficient than ball screws. They also typically have a shorter life but are more robust to shock loads. If backlash is a concern, it is usually better to select a ball screw. Ball screws exhibit lower friction and therefore higher efficiency than "lead screws." Screw lead is the distance the rod advances with one revolution of the screw. Other features for linear actuators to consider include holding brakes, integrated overload slip clutch or torque limiters, water-resistant construction, protective boot, and thermal overload protection. Design units can be English or metric. Some manufacturers specify both. Dimensions to consider when specifying linear actuators include retracted length, width, height, and weight. The housing can have flanges, rear
clevis, side angle brackets, side lugs, tapped holes, and spherical bearings. Rod ends can be clevis, female eye, female thread, male thread, and spherical bearing. An important environmental parameter to consider is the operating temperature. (2) Rotary electric actuators provide incremental rotational movement of the output shaft. In its most simple form, a rotary actuator consists of a motor with a speed reducer. These AC and DC motors can be fabricated to the exact voltage, frequency, power, and performance specified. The speed reducer is matched with the ratio to the speed, torque, and acceleration required. Life, duty cycle, limit load, and accuracy are considerations that further define the selection of the speed reducer. Hardened, precision spur gears are supported by antifriction bearings as a standard practice in these speed reducers. Compound gear reduction is accomplished in compact, multiple load path configurations, as well as in planetary forms. The specifications for a rotary actuator include angular rotation, torque, and speed, as well as control signals and feedback signals, and the environment temperature. Rotary actuators can incorporate a variety of auxiliary components such as brakes, clutches, antibacklash gears, and/or special seals. Redundant schemes involving velocity or torque summing of two or more motors can also be employed. Today the linear motion in actuators is converted to a rotary one in many applications. By delivering the rotary motion directly, some fittings can be saved in the bed, for example, which enables the bed manufacturer to build in a rotary actuator far more elegantly than a linear actuator. The result is a more "pure" design because the actuator is not experienced as a product hanging under the bed, but as a part of the bed. These rotary electric actuators are used for modulating valves, and are divided, based on the range of travel, from multiturn to quarter-turn. Electrically powered multiturn actuators are one of the most common and dependable configurations of actuators. A single- or three-phase electric motor drives a combination of spur and/or bevel gears, which in turn drive a stem nut. The stem nut engages the stem of the valve to open or close it, frequently via an acme threaded shaft. Electric multiturn actuators are capable of quickly operating very large valves. To protect the valve, the limit switch turns off the motor at the ends of travel. The torque sensing mechanism of the actuator switches off the electric motor when a safe torque level is exceeded. Position indicating switches are utilized to indicate the open and closed position of the valve.
Typically a declutching mechanism and hand wheel are included so that the valve can be operated manually should a power failure occur. The main advantage of this type of actuator is that all of the accessories are incorporated in the package and are physically and environmentally protected. It has all the basic and advanced functions incorporated in a compact housing which can be watertight, explosion proof, and, in some circumstances, submersible. The primary disadvantage of an electric multiturn actuator is that, should a power failure occur, the valve remains in the last position and the fail-safe position cannot be obtained easily unless there is a convenient source of stored electrical energy. Electric quarter-turn actuators are very similar to electric multiturn actuators. The main difference is that the final drive element is usually in one quadrant that puts out a 90° motion. The newer generation of quarter-turn actuators incorporates many of the features found in the most sophisticated multiturn actuators, for example, a nonintrusive, infrared, human-machine interface for setup, diagnostics, etc. Quarter-turn electric actuators are compact and can be used on smaller valves. They are typically rated to around 1500 foot-pounds. An added advantage of smaller quarter-turn actuators is that, because of their lower power requirements, they can be fitted with an emergency power source such as a battery to provide fail-safe operation. Thrust actuators can be fitted to valves which require a linear movement. Thrust actuators transform the torque of a multiturn actuator into an axial thrust by means of an integrated thrust unit. The required (switch-off) actuating force (thrust and traction) can be adjusted continuously and reproducibly. Linear actuators are mainly used to operate globe valves. Thrust units, fitted to the output drive of a multiturn actuator, consist mainly of a threaded spindle, a metric screw bolt to join the valve shaft, and a housing to protect the spindle against environmental influences. The version described is used for "direct mounting" of the actuator to the valve. However, thrust actuators in the "fork joint" version (indirect mounting) can also operate butterfly valves or dampers, when direct mounting of a part-turn actuator is not possible or efficient. Thrust units for modulating duty also comply with the high demands of that duty: high-quality materials and accurate tolerances secure perfect function for many years of operation. The thrust units are operated by modulating actuators.
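As a quick sizing check for a multiturn actuator on a rising-stem valve, the number of stem-nut turns and the stroking time follow directly from the stem lead and the actuator output speed. All figures below are hypothetical.

    # Stroking time of a multiturn actuator on a rising-stem valve (hypothetical values).
    valve_stroke_mm  = 300.0   # total stem travel
    stem_lead_mm     = 6.0     # stem advance per stem-nut revolution
    output_speed_rpm = 24.0    # actuator output speed

    turns = valve_stroke_mm / stem_lead_mm
    stroke_time_s = turns / output_speed_rpm * 60.0

    print(f"{turns:.0f} turns, about {stroke_time_s:.0f} s to stroke the valve")

If the resulting stroking time is too long for the process, a higher-speed (and higher-power) actuator or a coarser stem lead is needed, which is exactly the speed-versus-power trade discussed in the sizing criteria that follow.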
1.2.1.3 Technical Specification

When selecting an electric actuator, the specification, including the power source and the correct type and size, can be determined using the following criteria: (1) Power source. When electric power is selected, whether a three-phase supply is needed depends upon the valve driven by the actuator; it is usually required for large valves, whereas small valves can be operated on a single-phase supply. Usually an electric valve actuator can accommodate any of the common voltages. Sometimes a DC supply is available; this is often an emergency back-up power supply. Variations of fluid power are much greater. First there is a variety of fluid media such as compressed air, nitrogen, hydraulic fluid, or natural gas. Then, there are the variations in the available pressures of those media. With a variety of cylinder sizes, most of the variations can be accommodated for a particular valve size. (2) Type of valve. Whenever sizing an actuator for a valve, the type of valve has to be known, so that the correct type of actuator can be selected. There are some valves that need multiturn input, whereas others need quarter-turn. This has a great impact on the type of actuator that is required. When combined with the available power supply, the size and type of actuator quickly come into focus. Generally multiturn fluid power actuators are more expensive than multiturn electric actuators. However, for rising nonrotating stem valves a linear fluid power actuator may be less expensive. A definitive selection cannot be made until the power requirements of the valve are determined. After that decision has been made, the torque requirement of the valve will be the next selection criterion. The next task is to calculate the torque required by the valve. For a quarter-turn valve, the best way of determining the torque required is by obtaining the valve maker's torque data. Most valve makers have measured the torque required to operate their valves over the range of operating line pressures, and they make this information available to customers. The situation is different for multiturn valves. These can be subdivided into several groups: the rising rotating, rising nonrotating, and nonrising rotating valves. In each of these cases the measurement of the stem diameter, together with the lead and pitch of the valve stem thread, is required in order to size the automation for the valve. This information, coupled with the size of the valve and the differential pressure across the valve, can be used to calculate torque demand (a worked torque estimate is sketched at the end of this list).
(3) Type and size of the electric actuator. The type and size of the actuator can be determined after the power supply, the type of valve, and the torque demand of that valve have been defined. Once the actuator type has been selected and the torque requirement of the valve has been determined, then the actuator can be sized using one of the actuator manufacturer’s sizing programs or tables. A further consideration in sizing the actuator is the required speed of operation of the valve. As speed has a direct relationship to the power required from the actuator, more horsepower would be needed to operate a valve at a faster speed. The electric motor operators of the three-phase type have a fixed speed of operation. Smaller, quarter-turn actuators utilize DC motors and may have adjustable speed of operation. (4) Predictive maintenance. Motor operators can utilize built-in data loggers coupled with highly accurate torque sensing mechanisms to record data on the valve as it moves through its stroke. The torque profiles can be used to monitor changes in the operating conditions of the valve and to predict when maintenance is required. They can also be used to troubleshoot valves. Forces on a valve can include the following: (1) valve seal or packing friction; (2) valve shaft, bearing friction; (3) valve closure element seat friction; (4) closure element travel friction; (5) hydrodynamic forces on closure elements; (6) stem piston effect; (7) valve stem thread friction. Most of these are present in all types of valves, but in varying degrees of magnitude. For example, closure element travel friction in a butterfly valve is negligible, whereas a nonlubricated plug valve has significant travel friction. Valve actuators are designed to limit their torque to a preset level using a torque switch, usually in a closing direction. An increase in torque above this level will stop the actuator. In the opening direction, the torque switch is frequently bypassed for the initial unseating operation. The resulting torque profile is useful in analyzing the valve condition. Different types of valves have different profiles. For example, a wedge gate valve has significant torque at the opening and closing positions. During the remaining portion of the stroke the torque demand is made up of packing and thread friction on the acme threaded shaft. On seating, the hydrostatic force on the closure element increases the seating friction, and finally the wedging effect of the closure element in the seat causes a rapid increase in torque demand until seating is completed. Changes in torque profile can, therefore, give a good indication of pending problems and can provide valuable information for an effective predictive valve maintenance program.
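The torque-demand calculation referred to under item (2) above can be sketched with the standard square-thread power-screw relation. This is only a rough estimate under assumed friction and thread geometry; every number below is hypothetical, and the valve maker's own data should always take precedence where it exists.

    import math

    # Rough stem-torque estimate for a rising, nonrotating stem valve,
    # using the standard power-screw relation. All inputs are hypothetical.
    seat_area_m2 = math.pi * (0.100 / 2) ** 2   # 100 mm seat bore
    dp_pa        = 1.0e6                        # differential pressure across the valve, 1 MPa
    packing_n    = 2000.0                       # packing/seal friction expressed as an equivalent thrust
    thrust_n     = seat_area_m2 * dp_pa + packing_n

    d_m  = 0.030                                # mean thread diameter of the stem, m
    lead = 0.006                                # thread lead, m
    mu   = 0.15                                 # thread friction coefficient

    # T = F * (d/2) * (lead + pi*mu*d) / (pi*d - mu*lead), the raising-load case
    torque_nm = thrust_n * (d_m / 2) * (lead + math.pi * mu * d_m) / (math.pi * d_m - mu * lead)
    print(f"stem thrust is roughly {thrust_n:.0f} N, required torque roughly {torque_nm:.0f} N·m")

An acme thread adds a small correction for the thread angle, and seating or unseating torque (the wedging effect described above) comes on top of this running figure.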
1.2.1.4 Application Guides

There are many applications where an electric actuator may be considered for process control. Although electric actuators may be used anywhere a power source (electricity) is available, there are many applications where they are particularly well suited. For instance, in many remote installations, it may be impractical to run an air supply and to maintain it. Air lines that freeze up may clog and render the equipment inoperable or damage more delicate instruments. If only a few actuators are to be installed in an area, electric actuators offer a simple means of automation for these smaller systems. Perhaps one of the most important reasons for the trend toward using electric actuators has been the decreasing cost of using computers as system controllers and the ease and economy with which the actuators can be interfaced to such systems. This trend can be expected to accelerate with the advent of new microprocessors, or smart controllers, based on versatile, low-cost microprocessor chips. Because of the increased speed and decision- and control-making capability that the computer adds to a process system, there is less need for final control elements with high control capability, such as those characterized by globe and plug valves. As a result, the simpler and less expensive electric actuators and ball and butterfly valves have become more acceptable and are proving more than adequate for many applications. Probably the most important reason for the widespread use of electric actuators is their control circuit versatility. As an electric device, an electric actuator naturally lends itself to use as an enclosure for a variety of control and feedback devices. Furthermore, the switches and cams may be set and wired for almost any contact development for process and valve control. It is more economical to install and maintain electric valve actuators than pneumatic ones. A pneumatic system includes not only actuators, but also compressors, piping, filters, air lubrication systems, and dryers. Electric actuators eliminate the need for air, an expensive source of energy, and do not require energy when not in motion. There is no need to be concerned with compressor noise, housing to shield against noise, air venting, or other operating restrictions associated with pneumatic systems. One important advantage of an electric actuator is its ability to manually "jog" the valve position when it is used in filling operations. By installing a feedback potentiometer, an operator can monitor the exact position of the actuator and stop it at any point between open and closed with a manual control switch.
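The "jog to position" behaviour described above amounts to a very small control loop around the feedback potentiometer. The sketch below uses a simulated actuator; on real hardware the read and drive calls would be replaced by whatever I/O the installation actually provides, so the interface names here are placeholders.

    # Minimal jog-to-position sketch around a feedback potentiometer (simulated I/O).
    class SimulatedActuator:
        def __init__(self):
            self.position = 0.0              # percent open, as read from the potentiometer

        def read_position(self):
            return self.position

        def drive(self, direction):          # 'open', 'close', or 'stop'; one step per call
            if direction == "open":
                self.position = min(100.0, self.position + 0.5)
            elif direction == "close":
                self.position = max(0.0, self.position - 0.5)

    def jog_to(act, target, deadband=1.0):
        """Drive until the potentiometer reading is within the deadband of the target."""
        while abs(target - act.read_position()) > deadband:
            act.drive("open" if target > act.read_position() else "close")
        act.drive("stop")
        return act.read_position()

    act = SimulatedActuator()
    print(f"stopped at {jog_to(act, 37.5):.1f} % open")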
A few examples of the control circuit versatility of electric actuators are three-position control and interposing relay. Three-position control is especially useful for the automation of multiported (three- or four-way) valves. A simple ON/OFF switch powers an interposing relay. This circuit is similar to the type used in energizing single-acting solenoid valves in pneumatic actuator installations. (1) Controls. The great advantage of having an automated valve is that it can be remotely controlled. This means that operators can sit in a control room and control a process without having to physically go to the valve and give it an open or close command, the most basic type of control for an automated valve. The ability to remotely control a valve is easily achieved by running a pair of wires out to the actuator from the control room. Applying power across the wires can energize a coil, initiating motion in an electric or fluid power actuator. Positioning a valve in an intermediate position can be done using this type of control. However, feedback would be needed to verify whether the actuator is at the desired position. A more common method of positioning an actuator is to feed a proportional signal to the actuator, such as 4–20 mA, so that the actuator, using a comparator device, can position itself in direct proportion to the received signal. (2) Modulating control. In some workplaces where an actuator is required to control a level, flow, or pressure in a system, it may be required to move frequently. Modulating or positioning control can be achieved using the same 4–20 mA signal. However, the signal would change as frequently as the process required. If very high rates of modulation are required then special modulating control valve actuators are needed that can accommodate the frequent starts required for such duty. Where there are many actuators on a process, the capital cost of installation can be reduced by utilizing digital communication over a communicating loop that passes from one actuator to another (Fig. 1.29). A digital communication loop can deliver commands and collect actuator status rapidly and cost effectively. There are many types of digital communication such as Foundation Fieldbus, Profibus, DeviceNet, Hart, as well as proprietary communication systems custom designed for valve actuator use such as Pakscan. Digital communication systems have many advantages over and above the saving in capital cost. They are able to collect a lot of data about the condition of the valve, and as such can be used for predictive maintenance programs.
Figure 1.29 Digital communication systems for remote control of valves (courtesy of Rotork Controls, Inc.).
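For the proportional (4-20 mA) positioning described above, the comparator logic amounts to scaling the loop current to a demanded position and comparing it with the feedback. A minimal sketch, with hypothetical signal ranges and deadband:

    # Map a 4-20 mA command to a demanded valve position and decide the drive action.
    def demand_from_ma(current_ma, low=4.0, high=20.0):
        """Scale loop current linearly to a 0-100 % demanded position (clamped)."""
        pct = 100.0 * (current_ma - low) / (high - low)
        return max(0.0, min(100.0, pct))

    def comparator(demand_pct, feedback_pct, deadband=0.5):
        """Return the drive action a proportional positioner would take."""
        error = demand_pct - feedback_pct
        if abs(error) <= deadband:
            return "hold"
        return "open" if error > 0 else "close"

    print(demand_from_ma(12.0))       # 50.0, i.e., a mid-travel demand
    print(comparator(50.0, 47.0))     # 'open'

The deadband is what keeps a modulating actuator from hunting; special modulating-duty actuators simply tolerate many more of these start/stop decisions per hour.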
1.2.1.5 Calibrations
Here, the VA-7202 Electric Valve Actuator (Johnson Controls, Inc.) is taken as an example to introduce a method and a process for calibrating electric actuators. (1) Set the stroke jumper to approximate the stroke of the valve; see Fig. 1.30 for the jumper location. The settings are (1) Jumper 8: 5/16 in. or 8 mm; (2) Jumper 10: 3/8 in. or 10 mm; (3) Jumper 13: 1/2 in. or 13 mm; (4) Jumper 19: 3/4 in. or 19 mm.
Figure 1.30 VA-7202 electric actuator (Johnson Controls, Inc.) components.
(2) Set the direct/reverse action jumper so that the valve stem travels in the desired direction (per changes in control signal) including (1) DA (top jumper) = stroke down on signal increase; (2) RA (bottom jumper) = stroke up on signal increase. (3) Set the input signal selection jumper for voltage input or current (mA) input to match the controller output; see Fig. 1.30. (If the input signal selection jumper is removed, the actuator defaults to voltage input.) However, if mA input is selected, multiply the start and span scales by 2. (4) Set the signal fail position jumper to select default position of fully up or fully down. If the signal is lost at the actuator (open connection), the actuator will default to the predesignated position of full up or full down. However, if mA input is selected, the actuator will default to the low input signal position. (5) Adjust the potentiometers to the nominal values. Set the stroke adjustment to the midpoint as shown in Fig. 1.30. Set the starting point (offset) to the low input signal using the scale printed on the circuit board as a reference. Set the span value to the high input signal minus the offset and then use the scales for reference. (6) Apply voltage specified by application (RA/DA) requirements to drive the actuator to the full up position. If mA input is selected, multiply all values by 2. (7) Slowly turn the starting point potentiometer (shown in Fig. 1.30) CW (clockwise) until the valve stem reaches the end of stroke to ensure that the valve stem is in the full-up position. LED will be on; there should be no gear movement. (8) Slowly turn the starting point potentiometer counterclockwise (CCW); stop when the LED flashes or goes out. If the LED does not flash or go out, verify Dimension “A” gaps. Excessive gap may not allow full-up calibration. The actuator circuit contains a time-out feature. If calibration takes longer than 3–10 min, the LED will go out giving a false satisfied condition. If this occurs, cycle the power to the actuator and readjust the starting point. (9) Apply the input voltage specified by application (RA/DA) requirements to drive the valve stem to the full-down position per chart in Step 6. (10) To ensure that the valve stem is in the full-down position, slowly turn the stroke potentiometer CW until the valve stem reaches the end of stroke. LED will be on, and there should be no gear movement. (11) Slowly turn the stroke potentiometer CCW until the LED goes off.
(12) If the full-down position cannot be reached, return the stroke potentiometer to the nominal position and slowly turn the span potentiometer CCW until full down is reached. Then, repeat Step 11. (13) Adjust voltage to drive the actuator to the full-up position. Verify starting point adjustment. (14) Check for proper operation using the desired minimum and maximum operating voltages. Allow the actuator to operate through several complete cycles. The LED will remain on for 3–10 min after the actuator has completed the operation cycle. (15) Replace the cover and secure with the screws. At this point, the unit is ready for operation.
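The start (offset) and span settings used in the procedure above define a simple linear map from the control signal to the expected stem position. The sketch below is only an interpretation of that idea for illustration: the scale values, the direct/reverse handling, and the factor of 2 applied when mA input is selected are assumptions, not figures taken from the manufacturer's documentation.

    # Expected stroke fraction for a given control signal, start (offset) and span settings.
    def expected_position(signal, start, span, ma_input=False, reverse_acting=False):
        """Return the expected stroke fraction, 0.0 (start) to 1.0 (full span)."""
        if ma_input:                      # scales printed in volts; for mA multiply both by 2
            start, span = 2.0 * start, 2.0 * span
        frac = (signal - start) / span
        frac = max(0.0, min(1.0, frac))
        return 1.0 - frac if reverse_acting else frac

    print(expected_position(6.0, start=2.0, span=8.0))                  # 0.5 of stroke
    print(expected_position(12.0, start=2.0, span=8.0, ma_input=True))  # also 0.5 of stroke

A check of this kind is useful when verifying Step 14, since it gives the position the stem should settle at for each test signal.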
1.2.2 Pneumatic Actuators
Pneumatics has found a variety of applications: in manufacturing to control industrial processes, in automotive and aircraft settings to modulate valves, and even in medical equipment such as dentistry drills to actuate torque movements. In contrast with electric and hydraulic drives, the operating torque of pneumatics makes possible a compact actuator that is economical to both install and operate. Pneumatic devices are also used where electric motors cannot be used for safety reasons and where no water supply is available, such as mining applications where rock drills are powered by air motors to preclude the need for electric motors deep in the mine where explosive gases may be present. In many cases, it is easier to use a liquid or gas at high pressure rather than electricity to provide power for the actuator. Pneumatic actuators provide a very fast response but little power, whereas hydraulic systems can provide very great forces, but act more slowly. This is partly because gases are compressible and liquids are not. The pneumatic actuators of Fig. 1.31 offer the latest technology: a premium quality ball valve, a quality actuator designed to meet the torque requirements of the valve, and a mounting system which ensures alignment and rigidity.

Figure 1.31 Components of a spring-type pneumatic actuator (courtesy of Forbes Marshall).
1.2.2.1 Operating Principle
Industrial pneumatics may be contrasted with hydraulics, which uses incompressible liquid media such as oil, or water combined with soluble oil, instead of air. Air, like a liquid, is a fluid, but unlike hydraulic liquids it is compressible, and the pressure stored in compressed air can be harnessed as a substantial source of power. This gives us new potential for several pneumatic-powered operations and
hence creates many new developments. Both pneumatics and hydraulics are applications of fluid power. Both pneumatic linear and rotary actuators use pressurized air to drive or rotate mechanical components. The flow of pressurized air produces the shift or rotation of moving components via a stem and spring, rack and pinion, cams, direct air or fluid pressure on a chamber or rotary vanes, or other mechanical linkage. A valve actuator is a device mounted on a valve that, in response to a signal, automatically moves the valve to the desired position using an outside power source. Pneumatic valve actuators convert air pressure into motion. (1) Linear pneumatic actuators. A simplified diagram of a pneumatic linear actuator is shown in Fig. 1.32. It operates with a combination of force created by air and spring force. The actuator shifts the positions of a control valve by transmitting its motion through the stem. A rubber diaphragm separates the actuator housing into two air chambers. The left or upper chamber receives a supply of air through an opening in the top of the housing. The right or bottom chamber contains a spring that forces the diaphragm against mechanical stops in the upper chamber. Finally, a local indicator is connected to the stem to indicate the position of the valve.
Figure 1.32 Operation principle of a simplified linear pneumatic actuator.
The position of the valve is controlled by varying supply air pressure in the left or upper chamber, which results in a varying force on the top of the diaphragm. At the beginning, with no supply air, the spring forces the diaphragm upward against the mechanical stops and holds the valve fully open. As supply air pressure is increased from zero, its force on top of the diaphragm begins to overcome the opposing force of the spring. This causes the diaphragm to move rightward or downward and the control valve to close. With increasing supply air pressure, the diaphragm will continue to move rightward or downward and compress the spring until the control valve is fully closed. Conversely, if supply air pressure is decreased, the spring will force the diaphragm leftward or upward and open the control valve. Additionally, if supply pressure is held constant at some value between zero and maximum, the valve will position at an intermediate point. The valve can hence be positioned anywhere between fully open and fully closed in response to changes in supply air pressure. A positioner is a device that regulates the supply air pressure to a pneumatic actuator. It does this by comparing the position demanded by the controller with the actual position of the valve. The requested position is transmitted by a pneumatic or electrical control signal from a controller to the positioner. The controller generates an output signal that represents the requested position; this signal is sent to the positioner. Externally, the positioner consists of an input connection for the control signal, a supply air input connection, a supply air output connection, a supply air vent connection, and a feedback linkage. Internally, it contains an intricate network of electrical transducers, air lines, valves, linkages, and necessary adjustments.
Other positioners may also provide controls for local valve positioning and gauges to indicate supply air pressure and controller pressure. (2) Rotary pneumatic actuators. Pneumatic rotary actuators may have fixed or adjustable angular strokes and can include such features as mechanical cushioning, closed-loop hydraulic dampening (oil), and magnetic features for reading by a switch. When the compressed air enters the actuator from the first tube nozzle, the air pushes the double pistons toward both ends (the cylinder ends) in a straight-line movement. The rack on each piston drives the gear on the rotary shaft to rotate counterclockwise, and the valve can then be opened. At the same time, the air at both ends of the pneumatic actuator is exhausted through the other tube nozzle. Conversely, when the compressed air enters the actuator from the second tube nozzle, the air pushes the double pistons toward the middle in a straight-line movement. The rack on each piston drives the gear on the rotary shaft to rotate clockwise, and the valve can then be closed. At the same time, the air in the middle of the pneumatic actuator is exhausted through the first tube nozzle. The above is the standard driving arrangement. According to the user's requirements, the pneumatic actuator can also be assembled in the opposite sense, that is, opening the valve while rotating the rotary shaft clockwise and closing the valve while rotating it counterclockwise. Figure 1.33(a) is for the double-acting type. For clockwise operation, Port 2 (P2) is open to atmosphere, and air pressure is directed to Port 1 (P1). As the pistons move apart, the pinion rotates clockwise. The linear movement of the pistons is converted to rotary motion by the piston racks and the output pinion gear. For counterclockwise operation, Port 1 is open to atmosphere and air pressure is directed to Port 2. The pressure differential moves the pistons together, rotating the pinion counterclockwise. Figure 1.33(b) is for the spring-return type. For clockwise operation, Port 2 is open to atmosphere and air pressure is directed to Port 1. The air pressure compresses the springs and moves the pistons outward. As the pistons move apart, the pinion rotates clockwise. The linear movement of the pistons is converted to rotary motion by the piston racks and the output pinion gear. For counterclockwise operation, Port 1 is open to atmosphere and air pressure is directed to Port 2. Air pressure and/or spring force moves the pistons inward, rotating the pinion counterclockwise.
Figure 1.33 Operation principle of rotary pneumatic actuators: (a) double acting and (b) spring return.
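The torque available from a double-acting rack-and-pinion arrangement such as Fig. 1.33(a) can be estimated from the piston area, the supply pressure, and the pinion pitch radius; both pistons act on the pinion. The numbers below are hypothetical and ignore the torque variation over the stroke that some geometries exhibit.

    import math

    # Output torque of a double-acting rack-and-pinion pneumatic actuator (hypothetical values).
    supply_pa     = 5.5e5     # supply pressure, Pa (about 5.5 bar)
    bore_m        = 0.063     # piston bore, m
    pinion_radius = 0.020     # pitch radius of the pinion, m
    efficiency    = 0.85      # friction and seal losses
    pistons       = 2         # both pistons drive the pinion

    piston_area = math.pi * (bore_m / 2) ** 2
    torque_nm = pistons * supply_pa * piston_area * pinion_radius * efficiency
    print(f"output torque is roughly {torque_nm:.0f} N·m")

For a spring-return unit the usable air-stroke torque is lower, since part of the piston force goes into compressing the springs, and the spring-stroke torque falls as the springs relax.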
1.2.2.2 Basic Types and Specifications
In terms of the actuated movement, pneumatic actuators can be of two types: linear and rotary. (1) Pneumatic linear motion specifications: (a) breakaway torque (in.-lb); (b) actuation type: spring return or double acting; (c) fail safe: no or yes; (d) positioner: no or yes; if yes, pneumatic or electropneumatic; (e) limit switch: none, two, or four. (2) Pneumatic rotary valve specifications: (a) output torque (in.-lb); (b) pipe size (specify); (c) product flowing in pipe (specify); (d) temperature (°F); (e) pressure (psig); (f) flow (GPM); (g) valve material (specify); (h) connections (specify).
In terms of operators, pneumatic actuators can be of four types: (1) Piston-style pneumatic actuators. Piston-type, air-operated valves offer a unique, reliable design providing for a long and dependable life. These valves are more compact than diaphragm valves and are appropriate for applications such as high-flow gas and liquid delivery systems to reactors, mixers, and vaporizers. The technical data for the piston-style pneumatic actuators include (a) area of piston (sq. in.); (b) maximum allowable working pressure (psi or bar); (c) allowable piston temperature range (°F); (d) approximate air usage per air cycle (SCF per 100 psi); (e) tested to 100,000 cycles at 100 psi (6.89 bar) with no leakage or signs of wear or fatigue. (2) Diaphragm-style pneumatic actuators. Diaphragm-type, air-operated valves are an efficient and economical means for "remote ON-OFF" control of a wide range of process requirements. Diaphragm-type actuators are designed to provide a dependable alternative to piston-type actuators. The technical data for the diaphragm-style pneumatic actuators include (a) area of diaphragm (sq. in.); (b) maximum allowable working pressure (psi or bar); (c) allowable diaphragm temperature range (°F); (d) approximate air usage per air cycle (SCF per 100 psi). (3) Solenoid valve packages. Solenoid valves are used to supply ON-OFF control of air to the valve actuator. They are normally mounted on the valve actuator, but they can be mounted remotely as well. Solenoid valves can be supplied in different voltages and configurations if required. Solenoid manifolds are available and are used when a large number of valve actuators require control, or to contain power to a local area, reducing assembly time. Remote-mounted solenoids also permit the use of actuated valves in hazardous or sensitive locations. These indices are assigned to solenoid valve packages of actuators: (a) piston and diaphragm operators: light and heavy-light duty; (b) piston and diaphragm operators: medium and heavy duty; (c) piston and diaphragm operators: extra heavy duty. (4) Servo pneumatic actuator package. The servo pneumatic actuator has an integral displacement transducer and is designed to operate on the standard compressed air available in most factories and laboratories. The package is aimed at engineers building special-purpose systems, including (a) Displacement control. The servo pneumatic actuator can be used in systems configured for displacement control using
INDUSTRIAL CONTROL TECHNOLOGY feedback from the integral displacement transducer and signal conditioning card supplied. (b) Load control. Load control may be implemented by providing a load cell and signal conditioning. (c) Strain control. Strain control may be implemented by providing a clip gauge and signal conditioning.
1.2.2.3 Application Guide and Assembly on Valve

(1) Preparation for installation. (a) Actuator checks. When the actuator arrives, checks should be carried out; which checks are needed depends on whether the actuator is already mounted on the valve. If the actuator arrives already assembled onto the valve, the setting of the mechanical stops and of the electric limit switches (if present) has already been made by the person who assembled the actuator onto the valve. If the actuator arrives separately from the valve, the settings of the mechanical stops and of the electric limit switches (if present) must be checked and, if necessary, carried out while assembling the actuator onto the valve. Furthermore, it is often necessary to check that the actuator has not been damaged during transport. If necessary, repair all damage to the paint coat, etc. Thereafter, it is often required to check that the model, the serial number of the actuator, and the performance data written on the data plate are in accordance with those described on the order acknowledgment, test certificate, and delivery note. Then, check that the fitted accessories comply with those listed in the order acknowledgment and the delivery note. (b) Storage checks. The actuators leave the factory in excellent working condition and with an excellent finish (these conditions are guaranteed by an individual inspection certificate); in order to maintain these characteristics until the actuator is installed, it is necessary to observe a few rules and take appropriate measures during the storage period. It should be ensured that plugs are fitted in the air connections and in the cable entries. The plastic plugs which close the inlets do not have a weatherproof function, but are only a means of protection against the entry of foreign matter during transport. If long-term storage is necessary, and especially if the storage is out of doors, the plastic protection plugs must be replaced by metal plugs, which guarantee complete weatherproof protection.
If the actuators are supplied separately from the valves, they must be placed onto a wooden pallet so as not to damage the coupling flange to the valve. In case of long-term storage, the coupling parts (flange, drive sleeve, insert bush) must be coated with protective oil or grease. If possible, blank off the flange with a protection disk. In case of long-term storage, it is advisable to keep the actuators in a dry place or to provide some means of weather protection. If possible, it is also advisable to periodically operate the actuator with filtered, dehydrated, and lubricated air; after such operations all the threaded connections of the actuator and the valves of the control panel (if present) should be carefully plugged. (2) Installation and start-up. (a) Pneumatic connections. Pneumatic connections connect the actuator to the pneumatic feed line with fittings and pipes in accordance with the plant specifications. They must be sized correctly in order to guarantee the necessary air flow for the operation of the actuator, with pressure drops not exceeding the maximum allowable value. The shape of the connecting piping must not cause excessive stress to the inlets of the actuator. The piping must be suitably fastened so as not to cause excessive stress or loosening of threaded connections if the system undergoes strong vibrations. Every precaution must be taken to ensure that any solid or liquid contaminants which may be present in the pneumatic pipework to the actuator are removed, to avoid possible damage to the unit or loss of performance. The inside of the pipes used for the connections must be well cleaned before use, which includes washing with suitable substances and blowing through them with air or nitrogen. The ends of the tubes must be well deburred and cleaned. Once the connections are completed, operate the actuator and ensure that it functions correctly, that the operation times meet the plant requirements, and that there are no leaks in the pneumatic connections. (b) Electrical connections. Electrical connections connect the electrical feed, control, and signal lines to the actuator, by linking them up with the terminal blocks of the electrical components. In order to do this, the housing covers must be removed without damaging the coupling surfaces, the O-rings, or the gaskets. Then, the plugs should be removed from the cable entries. For electrical connections, we can use components (cable
glands, cables, hoses, conduits) which meet the requirements and codes applicable to the plant specifications (mechanical protection and/or explosion-proof protection). The cable glands should be screwed tightly into the threaded inlets so as to guarantee weatherproof and explosion-proof protection (when applicable). The connection cables are inserted into the electrical enclosures through the cable glands, and the cable wires are connected to the terminals according to the applicable wiring diagram. If conduits are used, it is advisable to carry out the connection to the electrical enclosures by inserting hoses so as not to cause anomalous stress on the housing cable entries. Replace the plastic plugs of the unused enclosure entries with metal ones to guarantee perfect weatherproof tightness and to comply with the explosion-proof protection codes where applicable. Once the connections are completed, check that the controls and signals work properly. After the installation is complete, the start-up of the actuator proceeds as follows: (i) Check that the pressure and quality of the air supply (filtering degree, dehydration) are as prescribed. Check that the feed voltage values of the electrical components (solenoid valve coils, microswitches, pressure switches, etc.) are as prescribed. (ii) Check that the actuator controls work properly (remote control, local control, emergency controls, etc.). (iii) Check that the required remote signals (valve position, air pressure, etc.) are correct. (iv) Check that the settings of the components of the actuator control unit (pressure regulators, pressure switches, flow control valves, etc.) meet the plant requirements. (v) Check that there are no leaks in the pneumatic connections. If necessary, tighten the nuts of the pipe fittings. (vi) Remove all rust and, in accordance with the applicable painting specifications, repair any paint coat that has been damaged during transport, storage, or assembly.
parallel or perpendicular to the pipeline axis). To assemble the actuator onto the valve, proceed as follows: (a) Check that the coupling dimensions of the valve flange and stem, or of the relevant extension, meet the actuator coupling dimensions. (b) Bring the valve to the “closed” position. (c) Lubricate the valve stem with oil or grease in order to make the assembly easier. Be careful not to pour any of it onto the flange. (d) Clean the valve flange and remove anything that might prevent a perfect adherence to the actuator flange, especially all traces of grease, since the torque is transmitted by friction. (e) If an insert bush or stem extension for the connection to the valve stem is supplied separately, assemble it onto the valve stem and fasten it by tightening the proper stop dowels. (f) Bring the actuator to the “closed” position. (g) Connect a sling to the support points of the actuator and lift it: make sure the sling is suitable for the actuator weight. When possible, it is easier to assemble the actuator to the valve if the valve stem is in the vertical position. In this case the actuator must be lifted while keeping the flange in the horizontal position. (h) Clean the actuator flange and remove anything that might prevent a perfect adherence to the valve flange, especially all traces of grease. (i) Lower the actuator onto the valve in such a way that the insert bush, assembled on the valve stem, enters the actuator drive sleeve. This coupling must take place without forcing and only with the weight of the actuator. When the insert bush has entered the actuator drive sleeve, check the holes of the valve flange. If they do not meet with the holes of the actuator flange or the stud bolts screwed into them, the actuator drive sleeve must be rotated; feed the actuator cylinder with air at the proper pressure or actuate the manual override, if existing, until coupling is possible. (j) Tighten the nuts of the connecting stud bolts evenly with the torque prescribed in the table. (k) If possible, operate the actuator to check that it moves the valve smoothly. It is important that the mechanical stops of the actuator (and not those of the valve) stop the angular stroke at both extreme valve positions (fully open and fully closed), except when otherwise required by the valve operation
INDUSTRIAL CONTROL TECHNOLOGY (e.g., metal-seated butterfly valves). The setting of the open valve position is performed by adjusting the travel stop screw in the left wall of the mechanism housing, or in the end flange of the manual override, if that exists. The setting of the closed valve position is performed by adjusting the travel stop screw in the cylinder end flange. Proceed as follows: (i) Loosen the lock nut. (ii) If the actuator angular stroke is stopped before reaching the end position (fully open or closed), unscrew the stop screw by turning it counterclockwise, until the valve reaches the correct position. When unscrewing the stop screw, keep the lock nut still with a wrench so that the sealing washer does not withdraw together with the screw. (iii) Tighten the lock nut. (iv) If the actuator angular stroke is stopped beyond the end position (fully open or closed), screw the stop screw by turning it clockwise until the valve reaches the correct position. (v) Tighten the lock nut. (4) Maintenance. Before carrying out any maintenance operation, it is necessary to close the pneumatic feed line and exhaust the pressure from the actuator cylinder and from the control unit, to ensure the safety of maintenance staff. (a) Routine maintenance. Most actuators have been designed to work for long periods in the severest conditions with no need for maintenance. It is, however, advisable to periodically check the actuator as follows: (i) Check that the actuator operates the valve correctly and with the required operating times. If the actuator operation is very infrequent, we can carry out a few opening and closing operations with all the existing controls (remote, local, emergency controls, etc.), if this is allowed by the plant conditions. (ii) Check that the signals to the remote control desk are correct. (iii) Check that the air supply pressure value is within the required range. (iv) If there is an air filter on the actuator, bleed the condensed water accumulated in the cup by opening the drain cock. Disassemble the cup periodically and wash it with soap and water; disassemble the filter: if this is made of a sintered cartridge, wash it with nitrate solvent and blow through it with air. If the filter is made of cellulose, it must be replaced when clogged.
(v) Check that the external components of the actuator are in good condition. (vi) Check all the paint coat of the actuator. If some areas are damaged, repair the paint coat according to the applicable specification. (vii) Check that there are no leaks in the pneumatic connections. If necessary tighten the nuts of the pipe fittings. (b) Special maintenance. If there are air leaks in the pneumatic cylinder or a malfunction in the mechanical components, or in case of scheduled preventative maintenance, the actuator must be disassembled and seals must be replaced with reference to the sectional drawing, adopting the following procedures: (i) disassemble the actuator correctly; (ii) carry out the seal replacement in the actuator; (iii) reassemble the actuator correctly.
1.2.3 Hydraulic Actuators
Pneumatic actuators are normally used to control processes requiring quick and accurate response, as they do not require a large amount of motive force. However, when a large amount of force is required to operate a valve such as the main steam system valves, hydraulic actuators are normally used. A hydraulic actuator receives pressure energy and converts it to mechanical force and motion. Fluid power systems are manufactured by many organizations for a very wide range of applications, which often embody differing arrangements of components to fulfill a given task. Hydraulic components are manufactured to provide the control functions required for the operation of systems.
1.2.3.1 Operating Principle
(1) Hydraulic cylinders. A cylinder is a hydraulic actuator that is constructed of a piston or plunger that operates in a cylindrical housing by the action of liquid under pressure. Cylinder housing is a tube in which a plunger (piston) operates. In a ram-type cylinder, a ram actuates a load directly. In a piston cylinder, a piston rod is connected to a piston to actuate a load. At the end of a cylinder from which a rod or plunger protrudes is a rod end. Its opposite end is the head end. The hydraulic connections are a head-end port and a rod-end port (fluid supply). (a) Single-acting cylinder. This cylinder has only a head-end port and is operated hydraulically in one direction. When oil
is pumped into a port, it pushes on a plunger, thus extending it. To return or retract a cylinder, oil must be released to a reservoir. A plunger returns either because of the weight of a load or from some mechanical force such as a spring. In mobile equipment, a reversing directional valve of a single-acting type controls flow to and from a single-acting cylinder. (b) Double-acting cylinder. This cylinder must have ports at the head and rod ends. Pumping oil into the head end moves a piston to extend a rod while any oil in the rod end is pushed out and returned to a reservoir. To retract a rod, flow is reversed. Oil from a pump goes into the rod end, and the head-end port is connected to allow return flow. The flow direction to and from a double-acting cylinder can be controlled by a double-acting directional valve or by actuating control of a reversible pump. (c) Differential cylinder. In a differential cylinder, the areas where pressure is applied on a piston are not equal. On the head end, the full piston area is available for applying pressure. At the rod end, only an annular area is available for applying pressure. The area of the rod is not a factor, and what space it does take up reduces the volume of oil the rod end will hold. There are two general rules about a differential cylinder: with an equal GPM delivery to either end, a cylinder will move faster when retracting because of the reduced volume capacity; with equal pressure at either end, a cylinder can exert more force when extending because of the greater piston area. In fact, if equal pressure is applied to both ports at the same time, a cylinder will extend because of the higher resulting force on the head end. (These rules are checked numerically in the short sketch at the end of this subsection.) (d) Nondifferential cylinder. This cylinder has a piston rod extending from each end. It has equal thrust and speed either way, provided that pressure and flow are unchanged. A nondifferential cylinder is rarely used on mobile equipment. (e) Ram-type cylinder. A ram-type cylinder is a cylinder in which the cross-sectional area of the piston rod is more than one-half the cross-sectional area of the piston head. In many cylinders of this type, the rod and piston heads have equal areas. A ram-type actuating cylinder is used mainly for push rather than pull functions. (f) Piston-type cylinder. A cylinder in which the cross-sectional area of the piston rod is less than one-half the cross-sectional area of the piston head is referred to as a piston-type cylinder. A piston-type cylinder is used mainly when both push and pull functions are needed.
A single-acting, piston-type cylinder uses fluid pressure to apply force in one direction. In some designs, the force of gravity moves a piston in the opposite direction. However, most cylinders of this type apply force in both directions. Fluid pressure provides force in one direction and spring tension provides force in the opposite direction. Most piston-type cylinders are double acting, which means that fluid under pressure can be applied to either side of a piston to provide movement and apply force in a corresponding direction. This cylinder contains one piston and pistonrod assembly and operates from fluid flow in either direction. The two fluid ports, one near each end of a cylinder, alternate as inlet and outlet, depending on the directional-control valve flow direction. This is an unbalanced cylinder, which means that there is a difference in the effective working area on the two sides of a piston. A cylinder is normally installed so that the head end of a piston carries the greater load; that is, a cylinder carries the greater load during a piston-rod extension stroke. (g) Cushioned cylinder. To slow an action and prevent shock at the end of a piston stroke, some actuating cylinders are constructed with a cushioning device at either or both ends of a cylinder. This cushion is usually a metering device built into a cylinder to restrict the flow at an outlet port, thereby slowing down the motion of a piston. (h) Lockout cylinders. A lockout cylinder is used to lock a suspension mechanism of a tracked vehicle when a vehicle functions as a stable platform. A cylinder also serves as a shock absorber when a vehicle is moving. Each lockout cylinder is connected to a road arm by a control lever. When each road wheel moves up, a control lever forces the respective cylinder to compress. Hydraulic fluid is forced around a piston head through restrictor ports causing a cylinder to act as a shock absorber. When hydraulic pressure is applied to an inlet port on each cylinder’s connecting eye, an inner control-valve piston is forced against a spring in each cylinder. This action closes the restrictor ports, blocks the main piston’s motion in each cylinder, and locks the suspension system. (2) Hydraulic motors. Hydraulic motors convert hydraulic energy into mechanical energy. In industrial hydraulic circuits, pumps and motors are normally combined with a proper valve and pipe to form a hydraulic-powered transmission. A pump, which is mechanically linked to a prime mover, draws fluid from a reservoir
and forces it to a motor. A motor, which is mechanically linked to the workload, is actuated by this flow so that motion or torque, or both, are conveyed to the work.
(a) Gear-type motors. Both gears are driven gears, but only one is connected to the output shaft. Operation is essentially the reverse of that of a gear pump. Flow from the pump enters chamber A and flows in either direction around the inside surface of the casing, forcing the gears to rotate as indicated. This rotary motion is then available for work at the output shaft.
(b) Vane-type motors. Flow from the pump enters the inlet, forces the rotor and vanes to rotate, and passes out through the outlet. Motor rotation causes the output shaft to rotate. Since no centrifugal force exists until the motor begins to rotate, something, usually springs, must be used to initially hold the vanes against the casing contour. (Springs usually are not necessary in vane-type pumps, because a drive shaft initially supplies centrifugal force to ensure vane-to-casing contact.) Vane motors are balanced hydraulically to prevent the rotor from side-loading the shaft, and the shaft is supported by two ball bearings. Torque is developed by a pressure difference as oil from a pump is forced through a motor. On the trailing side, open to the inlet port, a vane is subject to full system pressure, while the chamber leading the vane is subject to the much lower outlet pressure. The difference in pressure exerts a force on the vane that is, in effect, tangential to the rotor. This pressure difference is effective across the working vanes; the other vanes are subject to essentially equal force on both sides. Each will develop torque as the rotor turns. The body port is the inlet, and the cover port is the outlet; reverse the flow and the rotation becomes clockwise, otherwise the rotation is counterclockwise. In a vane-type pump, the vanes are pushed out against a cam ring by centrifugal force when the pump is started up. One motor design instead uses steel-wire rocker arms to push the vanes against the cam ring. The arms pivot on pins attached to the rotor, and the ends of each arm support two vanes that are 90° apart: when the cam ring pushes vane A into its slot, vane B slides out, and the reverse also happens. The pressure plate of a motor functions the same as a pump's. It seals the side of the rotor and ring against internal leakage, and it feeds system pressure under the vanes to hold them out against the ring. This is a simple operation in a pump because the pressure plate is right by the high-pressure port in the cover.
(c) Piston-type motors. Although some piston-type motors are controlled by directional-control valves, they are often used in
combination with variable-displacement pumps. This pump–motor combination (hydraulic transmission) is used to provide a transfer of power between a driving element, such as an electric motor, and a driven element. Piston-type motors can be inline-axis or bent-axis types:
(i) Inline-axis piston-type motors. These motors are almost identical to the pumps. They are built in fixed- and variable-displacement models in several sizes. Torque is developed by a pressure drop through the motor. Pressure exerts a force on the ends of the pistons, which is translated into shaft rotation. Shaft rotation of most models can be reversed at any time by reversing the flow direction. Oil from a pump is forced into the cylinder bores through a motor's inlet port. Force on the pistons at this point pushes them against a swash plate. They can move only by sliding along the swash plate to a point farther away from the cylinder barrel, which causes the barrel to rotate. The barrel is splined to a shaft, so the shaft must turn with it. The displacement of a motor depends on the angle of the swash plate. At maximum angle, displacement is at its highest because the pistons travel their maximum length. When the angle is reduced, piston travel shortens, reducing displacement. If flow remains constant, the motor runs faster, but torque is decreased. Torque is greatest at maximum displacement because the component of piston force parallel to the swash plate is greatest.
(ii) Bent-axis piston-type motors. These motors are almost identical to the pumps. They are available in fixed- and variable-displacement models, in several sizes. Variable-displacement motors can be controlled mechanically or by pressure compensation. These motors operate similarly to inline motors except that the piston thrust is against a drive-shaft flange; the component of thrust parallel to the flange causes the flange to turn. Torque is greatest at maximum displacement, where speed is at a minimum. This design of piston motor is very heavy and bulky, particularly in the variable-displacement version, so use of these motors on mobile equipment is limited.
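The two general rules for a differential cylinder quoted above follow directly from the head-end and rod-end areas. The short Python sketch below works the numbers both ways for a hypothetical cylinder; the bore, rod, pressure, and flow figures are invented purely for illustration and are not data from this handbook.

import math

def cylinder_figures(bore_m, rod_m, pressure_pa, flow_m3_s):
    """Extend/retract force and speed for a differential cylinder."""
    cap_area = math.pi * bore_m ** 2 / 4.0            # full piston area (head end)
    annulus = cap_area - math.pi * rod_m ** 2 / 4.0   # annular area (rod end)
    return {
        "extend_force_N": pressure_pa * cap_area,
        "retract_force_N": pressure_pa * annulus,
        "extend_speed_m_per_s": flow_m3_s / cap_area,
        "retract_speed_m_per_s": flow_m3_s / annulus,
    }

# Hypothetical 50 mm bore, 28 mm rod, 150 bar supply, 20 L/min delivery.
for name, value in cylinder_figures(0.050, 0.028, 150e5, 20e-3 / 60.0).items():
    print(f"{name}: {value:,.2f}")
# Retraction comes out faster than extension, and the extension force exceeds the
# retraction force, which is exactly what the two general rules state.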
1.2.3.2 Basic Types and Specifications
Table 1.4 gives the basic types of hydraulic actuators in three columns (cylinder, valve, and motor actuators) and the valve-controlling source in three rows (electro, servo, and piezoelectric). The motion type for
Table 1.4 Basic Types of Hydraulic Actuators

Types of Valve Controller   Hydraulic Cylinder Actuators   Hydraulic Valve Actuators   Hydraulic Motor Actuators
Electro                     Linear                         Linear, Rotary             Rotary
Servo                       Linear                         Linear, Rotary             Rotary
Piezoelectric               Linear                         Linear, Rotary             Rotary
each type of hydraulic actuator device is specified in each cell of this table, which indicates that cylinders have linear motion only, valves can have both linear and rotary motions, and motors rotary motion only. An actuator can be linear or rotary. A linear actuator gives force and motion outputs in a straight line. It is more commonly called a cylinder but is also referred to as a ram, reciprocating motor, or linear motor. A rotary actuator produces torque and rotating motion. It is more commonly called a hydraulic motor. (1) Hydraulic cylinders and linear actuators. Hydraulic cylinders are actuation devices that utilize pressurized hydraulic fluid to produce linear motion and force. Hydraulic cylinders are used in a variety of power transfer applications. Hydraulic cylinders can be single action or double action. A single action hydraulic cylinder is pressurized for motion in only one direction. A double action hydraulic cylinder can move along the horizontal (x-axis) plane, the vertical (y-axis) plane, or along any other plane of motion. Operating specifications, configuration or mounting, materials of construction, and features are all important parameters to consider when searching for hydraulic cylinders. Important operating specifications for hydraulic cylinders include the cylinder type, stroke, maximum operating pressure, bore diameter, and rod diameter. Choices for cylinder type include tie-rod, welded, and ram. A tie-rod cylinder is a hydraulic cylinder that uses one or more tie-rods to provide additional stability. Tie-rods are typically installed on the outside diameter of the cylinder housing. In many applications, the cylinder tierod bears the majority of the applied load. A welded cylinder is a smooth hydraulic cylinder that uses a heavy-duty welded cylinder housing to provide stability. A ram cylinder is a type of hydraulic cylinder that acts as a ram. A hydraulic ram is a device in which the cross-sectional area of the piston rod is more than
one-half the cross-sectional area of the moving component. Hydraulic rams are primarily used to push rather than pull and are most commonly used in high-pressure applications. Stroke is the distance that the piston travels through the cylinder. Hydraulic cylinders can have a variety of stroke lengths, from fractions of an inch to many feet. The maximum operating pressure is the maximum working pressure the cylinder can sustain. The bore diameter refers to the diameter at the cylinder bore. The rod diameter refers to the diameter of the rod or piston used in the cylinder. Choices for cylinder configuration are simple configuration or telescopic configuration. A simple configuration hydraulic cylinder consists of a single cylindrical housing and internal components. A telescopic configuration hydraulic cylinder uses "telescoping" cylindrical housings to extend the length of the cylinder. Telescopic configuration cylinders are used in a variety of applications that require the use of a long cylinder in a space-constrained environment. Choices for mounting method include flange, trunnion, threaded, clevis or eye, and foot. The mount location can be cap, head, or intermediate. Materials of construction include steel, stainless steel, and aluminum. Common features for hydraulic cylinders include integral sensors, double end rod, electrohydraulic cylinders, and adjustable stroke.
(2) Hydraulic valves. Hydraulic valve actuators convert a fluid pressure supply into a motion. A valve actuator is a hydraulic actuator mounted on a valve that, in response to a signal, automatically moves the valve to the desired position using an outside power source. The hydraulic actuators in hydraulic valves can be either linear like cylinders or rotary like motors. The hydraulic actuator operates under servo-valve control; this provides regulated hydraulic fluid flow in a closed-loop system having upper and lower cushions to protect the actuator from the effects of high speed and high mass loads. Piston movement is monitored via a linear variable differential transformer (LVDT), which provides an output voltage proportional to the displacement of the movable core extension attached to the actuator. The outside power sources used by hydraulic valve actuators are normally of these types: (a) electronic, (b) servo, and (c) piezoelectric.
(3) Hydraulic motors and rotary actuators. Hydraulic motors are powered by pressurized hydraulic fluid and transfer rotational kinetic energy to mechanical devices. Hydraulic motors, when
powered by a mechanical source, can rotate in reverse direction and act as a pump. Hydraulic rotary actuators use pressurized fluid to rotate mechanical components. The flow of pressurized hydraulic fluid produces the rotation of moving components via a rack and pinion, cams, direct fluid pressure on rotary vanes, or other mechanical linkage. Hydraulic rotary actuators and pneumatic rotary actuators may have fixed or adjustable angular strokes and can include such features as mechanical cushioning, closed-loop hydraulic dampening (oil), and magnetic features for reading by a switch.
Motor type is the most important consideration when searching for hydraulic motors. Choices for motor type include axial piston, radial piston, internal gear, external gear, and vane. An axial piston motor uses an axially mounted piston to generate mechanical energy. High-pressure flow into the motor forces the piston to move in the chamber, generating output torque. A radial piston hydraulic motor uses pistons mounted radially about a central axis to generate energy. An alternate-form radial piston motor uses multiple interconnected pistons, usually in a star pattern, to generate energy. Oil supply enters the piston chambers, moving each individual piston and generating torque. Multiple pistons increase the displacement per revolution through the motor, increasing the output torque. An internal gear motor uses internal gears to produce mechanical energy. Pressurized fluid turns the internal gears, producing output torque. An external gear motor uses externally mounted gears to produce mechanical energy. Pressurized fluid forces the external gears to turn, producing output torque. A vane motor uses a vane to generate mechanical energy. Pressurized fluid strikes the blades in the vane, causing it to rotate and produce output torque.
Additional operating specifications to consider for hydraulic motors include operating torque, operating pressure, operating speed, operating temperature, power, maximum fluid flow, maximum fluid viscosity, displacement per revolution, and motor weight. The operating torque is the torque that the motor is capable of delivering. Operating torque depends directly on the pressure of the working fluid delivered to the motor. The operating pressure is the pressure of the working fluid delivered to the hydraulic motor. Working fluid is pressurized by an outside source before it is delivered to the motor. Working pressure affects operating torque, speed, flow, and horsepower of the motor. The operating speed is the speed at which the hydraulic motors' moving parts rotate. Operating speed is expressed in
revolutions per minute or similar terms. The operating temperature is the fluid temperature range the motor can accommodate. Minimum and maximum operating temperatures are dependent on the internal component materials of the motor and can vary greatly between products. The power the motor is capable of delivering is dependent on the pressure and flow of the fluid through the motor. The maximum volumetric flow through the motor is expressed in terms of gallons per minute or similar units. The maximum fluid viscosity the motor can accommodate is a measure of the fluid's resistance to shear, and is measured in centipoise (cP), a common metric unit of dynamic viscosity equal to 0.01 poise or 1 mPa·s; the dynamic viscosity of water at 20°C is about 1 cP (the correct abbreviation is cP, although cPs and cPo are sometimes seen). The fluid volume displaced per revolution of the motor is measured in cubic centimeters (cc) per revolution, or similar units. The weight of the motor is measured in pounds or similar units.
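The dependences just described (torque on pressure and displacement, power on pressure and flow) can be put into numbers with the standard idealized relationships T = Δp·D/(2π) and P = Δp·Q, ignoring mechanical and volumetric losses. The sketch below uses invented figures for illustration only; it is not data for any particular motor.

import math

def motor_torque_nm(delta_p_pa, disp_m3_per_rev):
    """Ideal output torque: pressure drop times displacement per radian of rotation."""
    return delta_p_pa * disp_m3_per_rev / (2.0 * math.pi)

def motor_speed_rpm(flow_m3_s, disp_m3_per_rev):
    """Ideal shaft speed: volumetric flow divided by displacement per revolution."""
    return flow_m3_s / disp_m3_per_rev * 60.0

def motor_power_w(delta_p_pa, flow_m3_s):
    """Ideal hydraulic power converted by the motor."""
    return delta_p_pa * flow_m3_s

disp = 100e-6       # m^3 per revolution (100 cc/rev), assumed
dp = 200e5          # Pa (200 bar pressure drop), assumed
q = 60e-3 / 60.0    # m^3/s (60 L/min), assumed
print(f"torque ~ {motor_torque_nm(dp, disp):.0f} N*m")    # about 318 N*m
print(f"speed  ~ {motor_speed_rpm(q, disp):.0f} rpm")     # 600 rpm
print(f"power  ~ {motor_power_w(dp, q) / 1000:.0f} kW")   # 20 kW

Note how halving the displacement per revolution would double the speed and halve the torque at the same flow and pressure, matching the swash-plate discussion earlier.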
1.2.3.3 Application Guide

(1) Proactive maintenance for hydraulic cylinders. Damaged hydraulic cylinder rods and wiper seals are an eternal problem for users of hydraulic machinery. Dents and gouges on the surface of hydraulic cylinder rods reduce seal life and give dust and other contaminants an easy path into the hydraulic system. These silt-sized particles act like lapping compound, initiating a chain of wear in hydraulic components. The top four causes of hydraulic seal failure in cylinders are the following:
(a) Improper installation is a major cause of hydraulic seal failure. The important things to watch during seal installation are (1) cleanliness, (2) protecting the seal from nicks and cuts, and (3) proper lubrication. Other problem areas are overtightening of the seal gland where there is an adjustable gland follower, or folding over a seal lip during installation. Installing the seal upside down is a common occurrence, too. The solution to these problems is common sense and taking reasonable care during assembly.
(b) Hydraulic system contamination is another major factor in hydraulic seal failure. It is usually caused by external elements such as dirt, grit, mud, dust, and ice, and by internal contamination from circulating metal chips, breakdown products of fluid, hoses, or other degradable system components. As most external contamination enters the system during rod
retraction, the proper installation of a rod wiper and scraper is the best solution. Proper filtering of system fluid can prevent internal contamination. Contamination is indicated by scored rod and cylinder bore surfaces, excessive seal wear and leakage, and sometimes tiny pieces of metal embedded in the seal.
(c) Chemical breakdown of the seal material is most often the result of incorrect material selection in the first place, or a change of hydraulic system fluid. Misapplication or use of noncompatible materials can lead to chemical attack by fluid additives, hydrolysis, and oxidation–reduction of seal elements. Chemical breakdown can result in loss of the seal lip interface, softening of seal durometer, excessive swelling, or shrinkage. Discoloration of hydraulic seals can also be an indicator of chemical attack.
(d) Heat degradation is to be suspected when the failed seal exhibits a hard, brittle appearance and/or shows a breaking away of parts of the seal lip or body. Heat degradation results in loss of sealing-lip effectiveness through excessive compression set and/or loss of seal material. Causes of this condition may be use of incorrect seal material, high dynamic friction, excessive lip loading, no heel clearance, and proximity to an outside heat source. Correction of heat degradation problems may involve reducing seal lip interference, increasing lubrication, or a change of the seal material. In borderline situations, allow for seal-interface temperatures in hydraulic cylinders running roughly 50°F above the fluid temperature because of the running friction caused by the sliding action of the lips.
In response to this problem, a protective cylinder rod cover called Seal Saver has been developed and patented. Seal Saver is a continuous piece of durable material which wraps around the cylinder and is closed with Velcro. It is then clamped onto the cylinder body and rod end. This makes installation simple, with no disassembly of hydraulic cylinder components required. Seal Saver forms a protective shroud over the cylinder rod as it strokes and prevents build-up of contaminants around the wiper seal, which is a common cause of rod scoring, seal damage, and contaminant ingress. Research has shown that the cost to remove contaminants is 10 times the cost of exclusion. This, combined with the benefits of extended hydraulic cylinder rod and seal life, makes Seal Saver a cost-effective, proactive maintenance solution.
(2) Hydraulic cylinder rods maintenance. As a product group, hydraulic cylinders are almost as common as pumps and motors combined. They are less complicated than other types of hydraulic components and are therefore relatively easy to repair. As a result, many hydraulic equipment owners or their maintenance personnel repair hydraulic cylinders in-house. An important step in the repair process that is often skipped is the checking of rod straightness. Bent rods load the rod seals causing distortion, and ultimately premature failure of the hydraulic cylinder seals. Rod straightness should always be checked when hydraulic cylinders are being resealed or repaired. This is done by placing the rod on rollers and measuring the run-out with a dial gauge. The rod should be as straight as possible, but a run-out of 0.5 mm per linear meter of rod is generally considered acceptable. In most cases, bent rods can be straightened in a press. It is sometimes possible to straighten hydraulic cylinder rods without damaging the hard-chrome plating; however, if the chrome is damaged, the rod must be either rechromed or replaced. Black nitride is a relatively recent alternative to the hard chrome-plated hydraulic cylinder rod. With reports of achieved service life three times that of conventional chrome, longer seal life, and comparable cost, black nitride rods for hydraulic cylinders are an option that all hydraulic equipment users should be aware of. Black nitride is an atmospheric furnace treatment developed and patented in the early 1980s. It combines the high surface hardness and corrosion resistance of nitride with additional corrosion resistance gained by oxidation. The process begins with the cleaning and superpolishing of the material to a surface roughness of 6–10 Ra. The steel bars or tubes are then fixed vertically and lowered into an electrically heated pit furnace. (3) Other maintenances for hydraulic cylinders. Hydraulic cylinders are compact and relatively simple. The key points to watch are the seals and pivots. The following lists service tips in maintaining cylinders: (a) External leakage. If the end caps of a cylinder are leaking, tighten them. If the leaks still do not stop, replace the gasket. If a cylinder leaks around a piston rod, replace the packing. Make sure that a seal lip faces toward the pressure oil. (b) Internal leakage. Leakage past the piston seals inside a cylinder can cause sluggish movement or settling under load. Piston leakage can be caused by worn piston seals or rings or scored cylinder walls. The latter may be caused by dirt and
(c) (d)
(e)
(f)
(g) (h)
(i)
(j)
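The rod run-out tolerance quoted in the rod-maintenance discussion above (roughly 0.5 mm per linear meter of rod, measured on rollers with a dial gauge) reduces to a one-line acceptance check. The following sketch only illustrates that arithmetic; the function name and default limit are assumptions, not part of any standard.

def rod_runout_ok(dial_runout_mm, rod_length_m, limit_mm_per_m=0.5):
    """Compare the measured dial-gauge run-out against the rule-of-thumb straightness limit."""
    allowed_mm = limit_mm_per_m * rod_length_m
    return dial_runout_mm <= allowed_mm, allowed_mm

ok, allowed = rod_runout_ok(dial_runout_mm=0.8, rod_length_m=1.2)
print(f"allowed run-out {allowed:.2f} mm -> {'accept' if ok else 'straighten or replace the rod'}")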
1.2.3.4 Calibration
Many applications in modern industrial control require numerous hydraulic actuators that direct the flow of hydraulic fluid between the system components when necessary. For example, a typical antilock brake system in cars can include several hydraulic actuators to control the fluid pressure in individual components such as a master cylinder and a plurality of wheel cylinders. The memory of an actuator control system includes numerous look-up tables which allow the control system to know what electric signals, such as current values, to apply to the actuators in order to yield specific actuation pressures. Typically, these look-up tables are generic tables that are not tailored to the individual actuators in the fluid system. These generic tables are created to account for worst-case part-to-part variances, manufacturing variances, and system variances. Thus, the tolerances of the values contained in the tables are relatively large and result in less than optimal performance of the actuators. Although very expensive actuators can be used to decrease part-to-part variances, the overall system tolerances remain relatively large. However, tighter tolerances can be obtained by individually calibrating less expensive actuators to create customized look-up tables for each actuator. (A small interpolation sketch based on such a customized table appears at the end of this calibration subsection.) The following introduces two calibration examples for hydraulic actuators:
(1) Stroke calibration for an actuator valve. The first example is stroke calibration for a hydraulic actuator for valves. Figure 1.34 illustrates the working block for this kind of hydraulic actuator, which consists of a piston and spring. The spring, which can be preloaded, tends to keep the piston at its initial position. As the pressure applied to the piston develops enough force to overcome the spring preload, the piston moves toward the opposite end of its travel until it reaches its maximum stroke.
Figure 1.34 The working block of a hydraulic actuator for valve (courtesy of Siemens).
To determine the stroke positions 0 and 1 in the valve, calibration is required when the valve and actuator are commissioned for the first time. For this purpose, the actuator must be mechanically connected to the valve and supplied with a standard voltage of electric power. The calibration procedure can be repeated as often as necessary. Normally, there is a slot on the printed circuit boards of many actuators. In most cases, to initiate the calibration procedure, the contacts inside this slot must be short-circuited by, say, a screwdriver. The calibration can then proceed automatically:
(a) the actuator runs to the zero stroke position: the valve closes and the green LED flashes;
(b) the actuator then runs to the 100% stroke position;
(c) the measured values are stored;
(d) the calibration procedure is finished, and the green LED flashes;
(e) the actuator now moves to the position defined by the control signal from its controller.
(2) Resistance calibration for a hydraulic pump. Figure 1.35 is the schematic diagram of a hydraulic actuator for pumping oil.
Figure 1.35 Schematic diagram of a hydraulic actuator. The pump (1) supplies a steady flow of oil to the supply point of a hydraulic Wheatstone bridge labeled as S. The oil continuously flows through the bridge to the return point labeled as R and is finally returned to the pump station. The four variable flow restrictors in this bridge are contained in a valve unit. In this diagram, 2 indicates a pneumatic valve; 4 gives two bellows; 5 is the actuator plate.
The goal of the calibration is for all four nozzles to have a nominal flow resistance, measured in pascal-seconds per cubic meter (Pa·s/m³). The test should be done at the nominal pressure and flow rate and with the fluid running from the supply to the return (Fig. 1.35). Before starting calibration, ensure that (1) the resistance of each nozzle is equal; (2) the pressure at the two control ports is the same when the electric drive is 0 A; (3) at 0 A drive, the pressure at each control port is half the pressure drop between the supply and the return; (4) the flow through the two sides of the bridge is the same; (5) the valves are all the same; and (6) calibration proceeds independently for each side of the valve.
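The customized look-up tables discussed at the start of this calibration subsection are, in practice, just lists of calibration points (for example, drive current against measured pressure) that the controller interpolates at run time. The sketch below illustrates the idea with invented calibration data; it is not the procedure of any particular actuator or controller.

from bisect import bisect_left

# Hypothetical calibration points for one individual actuator: drive current (A) -> pressure (bar).
CAL_TABLE = [(0.0, 0.0), (0.2, 12.0), (0.4, 31.0), (0.6, 55.0), (0.8, 83.0), (1.0, 115.0)]

def current_for_pressure(target_bar, table=CAL_TABLE):
    """Invert the calibrated table by linear interpolation to find the required drive current."""
    pressures = [p for _, p in table]
    if not table[0][1] <= target_bar <= table[-1][1]:
        raise ValueError("target pressure outside the calibrated range")
    i = bisect_left(pressures, target_bar)
    if pressures[i] == target_bar:
        return table[i][0]
    (c0, p0), (c1, p1) = table[i - 1], table[i]
    return c0 + (c1 - c0) * (target_bar - p0) / (p1 - p0)

print(f"drive current for 40 bar: {current_for_pressure(40.0):.3f} A")  # interpolated between the 0.4 A and 0.6 A points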
1.2.4 Piezoelectric Actuators
Piezoelectric actuators represent an important new group of actuators for active control of mechanical systems. Although the magnitudes of piezoelectric voltages, movements, or forces are small, and often require amplification (e.g., a typical disc of piezoelectric ceramic will increase or decrease in thickness by only a small fraction of a millimeter), piezoelectric materials have been adapted to an impressive range of applications requiring small amounts of displacement (typically less than a few thousandths of an inch of displacement). Today, modern polycrystalline piezoelectric ceramic is mass produced for applications including underwater transducers, point level sensors, medical products, ultrasonic cleaners, actuators, fish finders, and motors. The piezoelectric effect of the piezoelectric ceramic is used in sensing applications, such as accelerometers, sensors, flow meters, level detectors, and hydrophones as well as in force or displacement sensors. Its inverse piezoelectric effect is used in actuation applications, such as in motors and devices that precisely control positioning, and in generating sonic and ultrasonic signals. Piezoelectric actuators can be used for the conversion of electrical energy to mechanical movement, for accurate positioning down to nanometer levels, for producing ultrasonic energy and sonar signals, and for the conversion of pressure and vibration into electrical energy. Piezoelectric actuators can also be manufactured in a variety of configurations and fabrication techniques. The industry recognizes these devices as monomorphs, bimorphs, stacks, cofired actuators, and flexure elements. Piezoelectric actuators are found in telephones, stereo music systems, and musical instruments such as guitars and drums. The use of piezoelectric actuators is beginning to appear in endoscope lenses used in medical treatment. Piezoelectric actuators are also being used for valves in drilling equipment
at offshore oil fields. Piezoelectric actuators are also used to control hydraulic valves, act as small-volume pumps or special-purpose motors, and in other applications. The use of piezoelectric actuators is thus steadily growing in advanced fields where conventional actuators are no longer effective. At present, however, this is no more than just a beginning. Piezoelectric actuators that combine a number of superior characteristics will continue to evolve into powerful devices that support our society in the future.
1.2.4.1 Operating Principle
(1) Piezoelectricity. In 1880, Jacques and Pierre Curie discovered an unusual characteristic of certain crystalline minerals: when subjected to a mechanical force, the crystals became electrically polarized. Tension and compression generated voltages of opposite polarity, in proportion to the applied force. Subsequently, the converse of this relationship was confirmed: if one of these voltage-generating crystals was exposed to an electric field, it lengthened or shortened according to the polarity of the field, and in proportion to the strength of the field. These behaviors were called the piezoelectric effect and the inverse piezoelectric effect, respectively. The Curies' findings have been confirmed many times since then.
Many polymers, ceramics, and molecules such as water are permanently polarized; some parts of the molecule are positively charged, whereas other parts are negatively charged. This behavior of piezoelectric materials is depicted in Fig. 1.36(a). When the material changes dimensions as a result of an imposed mechanical force, a permanently polarized material such as quartz (SiO2) or barium titanate (BaTiO3) will produce an electric field. This behavior of piezoelectric materials when subject to an imposed force is depicted in Fig. 1.36(b). Furthermore, when an electric field is applied to these materials, the polarized molecules within them will align themselves with the electric field, resulting in induced dipoles within the molecular or crystal structure of the material, as illustrated in Fig. 1.36(c).
Piezoelectricity involves the interaction between the electrical and mechanical behaviors of the material. This interaction is commonly approximated by static linear relations between the electrical and mechanical variables:

S = s^E T + d E,    D = d T + ε^T E,
Figure 1.36 Behaviors of piezoelectric materials: (a) nonpolarized state when no force and no electricity are applied, (b) polarized state when compression stresses are imposed, and (c) polarized state when electric field is applied after poling.
where S is the strain tensor, T is the stress tensor, E is the electric field vector, D is the electric displacement vector, s^E is the elastic compliance matrix at constant electric field (the superscript E denotes that the electric field is constant), d is the matrix of piezoelectric constants, and ε^T is the permittivity measured at constant stress. The piezoelectric effect is, however, very nonlinear in nature. Piezoelectric materials exhibit, for example, a strong hysteresis and drift that are not included in the above model. It should be noted, too, that the dynamics of the material are not described by the two equations above.
(2) Piezoelectric actuator. The three basic types of piezoelectric actuators are stacks, linear motors, and benders.
(a) Piezoelectric stack actuators. The linear motion produced by the piezoelectric effect has been used for making a stack actuator, which is a multilayer construction: each stack is composed of several piezoelectric layers, as depicted in Fig. 1.37. The required dimensions of the stack can easily be determined from the requirements of the application in question: the height is determined with respect to the desired movement and the cross-sectional area with respect to the desired force. The main problem of stack actuators is the relatively small strain (0.1–0.2%) obtained. Using, for example, levers or hydraulic amplifiers can increase the movement. (A rough numeric illustration of stack stroke and strain appears at the end of this subsection.) It is noticeable that, in addition to the desired longitudinal movement,
Figure 1.37 Structure of a piezoelectric stack.
some lateral movement typically also occurs, which causes the piezoelectric stack to deviate from a straight line. Therefore, a guide has to be used if only longitudinal motion is desired. Figure 1.38 illustrates deviations from straight-line accuracy.
(b) Linear motors. Since the strain of piezoelectric ceramics is relatively small, displacement amplifiers or hybrid structures are needed. There are many amplification techniques, such as levers, hydraulic systems, and piezoelectric motors. In the lever system, amplification is achieved with lever arms that magnify the displacement; the output force of the lever system is significantly smaller than the actuator force. Hydraulic systems generally use a piston for amplification. The principle of the piezohydraulic actuator is illustrated in Fig. 1.39, which shows a hydraulic amplifier based on the use of bellows. This kind of piezohydraulic motor uses a linear piezoelectric actuator to control the liquid input to the fluid chamber which drives the bellows. Piezoelectric motors increase displacement by providing many small steps. There are many different types of linear piezoelectric motors: the main categories are linear stepper motors and ultrasonic motors. The linear steppers include an inchworm motor, a stick and slip actuator, and an impact
Figure 1.38 Straight line accuracy of piezoelectric stack.
Figure 1.39 Schematic of piezohydraulic actuator.
drive motor. The ultrasonic motors can be divided into standing wave and traveling wave ultrasonic motors. The operating principles of the inchworm motor, the stick and slip actuator, and the traveling wave ultrasonic motor are described below. (i) Inchworm motors. Inchworm motors are a kind of linear motor in which the linear movement is achieved by using three piezoelectric elements. The operation principle is illustrated in Fig. 1.40. The outer piezoelectric elements work as clamps. The contractions and expansions of the middle element generate the movement of the motor rod. (ii) Stick and slip actuators. The stick and slip actuator is a type of an inertia device that uses inertia of the moving mass. The actuator consists of particular legs and a slider. Each step consists of a slow deformation of the legs and fast jump backward. In slow deformation of the legs, the moving mass follows the legs due to friction (the frictional force is higher than the force caused by the slider inertia). In the sudden jump backward, the slider cannot follow the legs due to its inertia. Figure 1.41 shows the operating principle of stick and slip actuator. (iii) Traveling wave ultrasonic motors. A voltage having two phases drives the traveling wave ultrasonic motor. The voltage is applied to the piezoelectric element at the resonance frequency. The resonance frequency produces a traveling wave. The particles on the surface move along the elliptical trajectories. The motion of the
Figure 1.40 Operation processes of the inchworm motor: starting from the off state, clamp element 1; extend element 2; clamp element 3; unclamp element 1; contract element 2; clamp element 1; unclamp element 3.
Figure 1.41 Operation principle of the stick and slip actuator.
particles is on the opposite direction of the wave. When a moving body (rotor) is placed in contact with the surface, it moves in the same direction as the particles due to the frictional force produced between the moving body and the elastic body. The ultrasonic piezoelectric motor’s faster response times, higher precision, hard brake with no backlash, high power-to-weight ratio, and smaller packaging envelope more than compensate for the lack of brute horsepower and speed associated with its electromagnetic motor counterparts. (c) Piezoelectric benders. Piezoelectric bending actuators (or piezoelectric cantilevers or piezoelectric bimorphs) bear a close resemblance to bimetallic benders. The application of an electric field across the two layers of the bender results in the expansion of one layer while the other contracts. The net result is a curvature much greater than the length or thickness deformation of the individual layers. With a piezoelectric bender, relatively high displacements can be achieved, but at the cost of force and speed. There are some benders that have only one piezoelectric layer on top of a metal layer, but generally there are two piezoelectric layers and no metal. Two piezoelectric layers make the displacement double in comparison to a single layer version. If the number of piezoelectric layers exceeds two, the bender is referred to as a multilayer. With thinner piezoelectric layers, a smaller voltage is required to produce the same electric field strength, and the benefit of the multilayer benders is, therefore, lower operating voltage. Multilayer benders can be built into one of these two types: a serial or parallel bender. In a serial
bender, there are two piezoelectric layers with an antiparallel polarization connected to each other, and two surface electrodes. In this arrangement, one of the electrodes is connected to the ground and the other to the output of a bipolar amplifier. Figure 1.42 gives the schematic of a parallel bender in operation. Parallel benders can be distinguished from serial benders by their three electrodes. In between the two parallel-polarized piezoelectric layers is a middle electrode to which the actual control signal is supplied. The two surface electrodes are connected to the ground and to a fixed voltage. The control voltage is applied to the middle electrode, and it varies between zero and a fixed voltage (Fig. 1.42). The parallel bender can also be connected in such a way that the two surface electrodes are connected to the ground and a bipolar signal is applied to the middle electrode.
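As a rough numeric illustration of the stack behavior described in this subsection, the free (unloaded) stroke of an ideal multilayer stack is approximately the number of layers times the piezoelectric constant d33 times the voltage applied across each layer. The constants below are typical textbook orders of magnitude for a PZT-type ceramic, assumed only for illustration; they are not figures quoted by this handbook.

D33 = 500e-12       # m/V, assumed piezoelectric charge constant of a PZT-like ceramic
N_LAYERS = 200      # assumed number of ceramic layers in the stack
LAYER_T = 100e-6    # m, assumed thickness of one layer
VOLTAGE = 120.0     # V applied across each layer

stroke_m = N_LAYERS * D33 * VOLTAGE     # free stroke of the whole stack
height_m = N_LAYERS * LAYER_T           # overall stack height
strain = stroke_m / height_m            # dimensionless strain

print(f"stroke ~ {stroke_m * 1e6:.1f} um on a {height_m * 1e3:.0f} mm stack "
      f"(strain ~ {strain * 100:.2f}%)")
# About 12 um on a 20 mm stack, i.e. roughly 0.06% strain -- the same order of
# magnitude as the 0.1-0.2% strain quoted for stack actuators earlier.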
1.2.4.2 Basic Types
Piezoelectric devices make use of direct and inverse piezoelectric effects to perform a function. Both these piezoelectric effects are found in the crystal structures of some materials. For example, ceramics acquire a charge when compressed, twisted, or distorted, and produce physical displacements when electric voltages are imposed. Several types of devices are available. Some of the most important are listed here:
(1) Piezoelectric actuators. Piezoelectric actuators are devices that produce a small displacement with a high force capability when voltage is applied. There are many applications where a piezoelectric actuator may be used, such as ultra-precise positioning and the generation and handling of high forces or pressures in static or dynamic situations.
Figure 1.42 Schematic of a parallel bender in operation.
Actuator configuration can vary greatly depending on application. Piezoelectric stack or multilayer actuators are manufactured by stacking up piezoelectric disks or plates, the axis of the stack being the axis of linear motion when a voltage is applied. Tube actuators are monolithic devices that contract laterally and longitudinally when a voltage is applied between the inner and outer electrodes. A disk actuator is a device in the shape of a planar disk. Ring actuators are disk actuators with a center bore, making the actuator axis accessible for optical, mechanical, or electrical purposes. Other less common configurations include block, disk, bender, and bimorph styles. These devices can also be ultrasonic. Ultrasonic actuators are specifically designed to produce strokes of several micrometers at ultrasonic (>20 kHz) frequencies. They are especially useful for controlling vibration, positioning applications, and quick switching. In addition, piezoelectric actuators can be either direct or amplified. The effect of amplification is not only larger displacement, but it can also result in slower response times. The critical specifications for piezoelectric actuators are displacement, force, and operating voltage of the actuator. Other factors to consider are stiffness, resonant frequency, and capacitance. Stiffness is a term used to describe the force needed to achieve a certain deformation of a structure. For piezoelectric actuators, it is the force needed to elongate the device by a certain amount. It is normally specified in terms of newtons per micrometer. Resonance is the frequency at which the actuators respond with maximum output amplitude. The capacitance is a function of the excitation voltage frequency. (2) Piezoelectric motors. Piezoelectric motors use a piezoelectric ceramic element to produce ultrasonic vibrations of an appropriate type in a stator structure. The elliptical movements of the stator are converted into the movement of a slider pressed into frictional contact with the stator. The consequent movement may either be rotational or linear depending on the design of the structure. Linear piezoelectric motors typically offer one degree of freedom, such as in linear stages. However, these devices can be combined to provide more complex positioning factors. Rotating piezoelectric motors are commonly used in submicrometric positioning devices. Large mechanical torque can be achieved by combining several of these rotational units. Piezoelectric motors have a number of potential advantages over conventional electromagnetic motors. They are generally small and compact when compared with their power output, and provide greater force and torque than their dimensions would
seem to indicate. In addition to a very positive size-to-power ratio, piezoelectric motors have high holding torque maintained at zero input power, and they offer low inertia from their rotors, providing rapid start and stop characteristics. Additionally, they are unaffected by electromagnetic fields, which can hamper other motor types. Piezoelectric motors usually do not produce magnetic fields and they are not affected by external magnetic fields either. Because they operate at ultrasonic frequencies, these motors do not produce audible sound during operation. However, piezoelectric motors do have some disadvantages. These include the need for high-voltage, high-frequency power sources, and the possibility of wear at the rotor/stator interface, which tends to shorten their service life. Piezoelectric motors have been in industrial use for years, but have not been popular due to what was perceived as an exorbitant cost of production and use. However, recent advances have significantly reduced the per-channel cost of this technology for closed-loop systems that require high positioning accuracy. With the use of a wide range of controllers and/or position sensors, the list of piezoelectric motor product applications is constantly growing. Some of the common applications for piezoelectric motors include camera focus systems, computer disk drives, material handling, robotics, and semiconductor testing and production systems.
(3) Multilayer piezoelectric benders. High-efficiency, low-voltage multilayer benders have been developed to meet the growing demand for precise, controllable, and repeatable deflection devices in the millimeter and micrometer range. Multilayer piezoelectric ceramic benders are devices capable of rapid (<10 ms) millimeter movements with micrometer precision. They utilize the inverse piezoelectric effect, in which an electric field creates a cantilever bending effect. By making the ceramic layers very thin, between 20 and 40 µm, deflections can be generated with low power consumption at operating voltages from –10 to +60 V. With an electrical field of <3 kV/mm, large deflections per unit volume can be achieved with high reliability. Note that a single bender cannot combine maximum deflection and maximum blocking force. Typical applications are proportional valves, low-energy-consuming switches, and pumps.
(4) Piezoelectric drivers and piezoelectric amplifiers. Piezoelectric drivers and piezoelectric amplifiers are developed to match the requirements for driving and controlling piezoelectric actuators and stages in some applications. Standard linear amplifier products are simple voltage followers that amplify a low-voltage
input signal, while others are recommended for use as integrated or stand-alone systems in applications that require advanced closed-loop servo control capabilities. A voltage amplifier is typically needed to control piezoelectric actuators because of the high operating voltage such actuators require; in other words, the low-voltage control signal provided by the computer through a D/A converter must be amplified before it reaches the actuator. This section describes the most important piezoelectric amplifier characteristics, such as voltage range, peak and average currents, slew rate, power efficiency, and noise.
For bench-top products, menu-driven user interfaces through a front-panel LCD and input dial enable amplifier setup, monitoring, and configuration. A built-in sinusoidal function generator capability is available for these models. Standard amplifier bandwidth is <1.2 kHz (–3 dB). Their features include RS-232 serial communication, digital I/O, analog or digital feedback for closed-loop input signals, a 24-bit analog input, and user-configurable PID gain parameter settings. The serial communication capability allows the user to configure and monitor system parameters, to command the desired target position, and to query the actual position.
The output voltage range is perhaps the most important property of the amplifier, because it either limits the range of displacement when too small or decreases the displacement resolution when too large. In addition to the supply voltage range, an important property is the current-driving capability of the amplifier. This, together with the capacitance of the piezoelectric actuator, determines the maximum operating frequency. For most amplifiers, both the peak and the average current limits are given. With capacitive loads, such as piezoelectric actuators, the peak current is more important, but the average current cannot be forgotten; the required ratio of peak to average current is approximately 3:1 for a sine oscillation, for example. Another aspect to consider is power efficiency, which is especially important in portable devices, in devices that have a wireless power supply, and in devices operating at high frequencies. Finally, piezoelectric actuators theoretically have unlimited resolution, so every infinitesimally small voltage step caused by, for example, the noise of the amplifier is transformed into a correspondingly small mechanical shift. An important property of the amplifier when designing a precision positioning system is therefore its noise characteristics.
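The remark above, that the amplifier's current capability together with the actuator capacitance determines the maximum operating frequency, can be made concrete for a sinusoidal drive: the peak charging current of a capacitance C swung through Vpp volts at frequency f is roughly π·f·C·Vpp. The sketch below simply rearranges that relation; the numbers are illustrative assumptions, not specifications of any amplifier.

import math

def peak_current_a(cap_f, freq_hz, vpp):
    """Peak charging current for a full sinusoidal voltage swing across a capacitive load."""
    return math.pi * freq_hz * cap_f * vpp

def max_sine_freq_hz(cap_f, vpp, amplifier_peak_current_a):
    """Highest full-swing sine frequency that the amplifier's peak current allows."""
    return amplifier_peak_current_a / (math.pi * cap_f * vpp)

cap = 2.2e-6   # F, assumed actuator capacitance
vpp = 100.0    # V, assumed full voltage swing
print(f"peak current at 1 kHz: {peak_current_a(cap, 1e3, vpp) * 1e3:.0f} mA")                 # ~690 mA
print(f"max full-swing frequency with a 0.2 A amplifier: {max_sine_freq_hz(cap, vpp, 0.2):.0f} Hz")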
1.2.4.3 Technical Specifications

(1) Piezoelectric actuator configuration. Your choices are:
(a) Stack. Piezoelectric stack actuators are manufactured by stacking up piezoelectric disks or plates. These disks are electrically connected. The stack axis is the axis of the linear motion. When a voltage is applied, the thickness of the layers increases and thereby the total stack lengthens.
(b) Tube. Piezoelectric tube actuators are monolithic devices that contract laterally and longitudinally when a voltage is applied between the inner and outer electrodes. With quadrature electrodes the tubes can be operated as XY scanners.
(c) Disk. A disk actuator is a device in the shape of a planar disk.
(d) Ring. Ring actuators are disk actuators with a center bore. This makes the actuator axis accessible for optical, mechanical, or electrical purposes.
(e) Other. Other unlisted configurations are specialized or proprietary configurations, such as block, disk, bender, bimorph, etc.
(2) Performance specifications
(a) Maximum displacement. The maximum elongation (normally specified in meters) that the actuator will produce when the maximum operating voltage is applied.
(b) Blocked force. The maximum force (normally specified in newtons, N) that the actuator will produce when the maximum operating voltage is applied.
(c) Maximum operating voltage. The maximum voltage that can be applied to the actuator without impairing its functionality.
(d) Stiffness. Stiffness is a term used to describe the force needed to achieve a certain deformation or deflection of a structure. For piezoelectric actuators, it is the force needed to elongate the device by a certain amount. It is normally specified in terms of newtons per meter, or N/m.
(e) Resonance frequency. Resonance is the frequency at which the actuator responds with maximum output amplitude.
(f) Capacitance. The capacitance is the electrical capacitance that the actuator exhibits; it is a function of the excitation voltage frequency.
(3) Ultrasonic operation. Ultrasonic actuators are specifically designed to produce strokes of several micrometers at ultrasonic (>20 kHz) frequencies.
(4) Electrical connectors. Your choices are:
(a) DB-9. Similar in appearance to a capital letter D, D-subminiature connectors are generally referred to by the number of pins or sockets they have, for example, DB-9, DB-25, etc. The design
of these connectors varies little among manufacturers, except for the color of the shell.
(b) BNC. The BNC is essentially a miniature version of the C connector, which is a bayonet version of the N connector. BNC connectors are available in both 50 and 75 Ω versions; both versions will mate together. The 50 Ω designs operate up to a frequency of 4 GHz. BNC connectors are used in many applications, some of which are flexible networks, instrumentation, and computer peripheral interconnections.
(c) Two wires AWG26. Two 15.9-mils diameter (AWG26) wires.
(d) Two wires AWG30. Two 10.0-mils diameter (AWG30) wires.
(e) LEMO connector. LEMO is a precision push–pull locking connector for demanding applications. LEMO is a registered trademark of LEMO.
(f) Other. Other includes those unlisted, specialized, or proprietary connector types.

1.2.4.4 Calibration
Under normal environmental conditions the piezoelectric effect is physically very stable. Devices that work with piezoelectricity, including actuators, sensors, and motors, are therefore stable, and their calibrated performance characteristics do not change over time under normal environmental conditions. However, these devices are often exposed to harsh environmental conditions, such as mechanical shock, temperature changes, and humidity, which basically generate three groups of errors in piezoelectric devices:
(1) Sensitivity errors, which include calibration errors, linearity errors, frequency and phase response errors, aging errors, and temperature coefficients.
(2) Coupling errors, which include the influence of transducer weight, the quality of the coupling surfaces, and transverse sensitivity.
(3) Noise and environmental influences, which include noise, base strain, magnetic fields, temperature transients, sound pressure, cable motion, and electromagnetic interference in cables.
In order to correct these errors it is necessary to establish a recalibration cycle. For applications where high accuracy is required, we recommend recalibrating the piezoelectric devices every time after use under severe conditions, or at least every 2 years. Nevertheless, in some less critical applications, for example in machine monitoring, recalibration may be unnecessary.
To recalibrate these piezoelectric devices, many companies choose to purchase their own calibration equipment to perform recalibration themselves. This may save calibration cost, particularly if a large number of piezoelectric devices is used. When no calibrator is at hand, a measuring chain can be calibrated by one of the following techniques: (1) adjusting the amplifier gain to the required sensitivity of the piezoelectric devices; (2) typing in the stated sensitivity when using a computer-based data acquisition system; (3) replacing the piezoelectric device by a generator signal and measuring the equivalent magnitude. However, due to the limitations of calibration, the uncertainty of calibration may not be better than ±2% by means of these three techniques. These errors also cause systematic errors in industrial control. For the evaluation of systematic errors it is very important to assess their contribution from all relevant error sources. This is of particular importance for unknown and undetectable systematic errors. Most errors, however, will occur accidentally in an unpredictable manner. They cannot be compensated for by a simple mathematical model since their amount and their process of formation are unknown. For practical measurements, systematic errors and accidental errors are combined in one quantity called measuring uncertainty. The following example illustrates the contribution of several error components and their typical amounts: (1) Accelerometer: (a) calibration error 2% (b) frequency error (band limits at 5% deviation) 5% (c) linearity error 2% (d) external influences 5%. (2) Instrument with mathematical model calculation: (a) basic error 1% (b) frequency error (band limits at 5% deviation) 5% (c) linearity error 1% (d) waveform error 1%. Piezoelectric actuators have exceptional linearity when properly mounted, with typical 0.5% value of full scale output (FSO). New users may sometimes become confused about how to mount and calibrate the piezoelectric actuators. Piezoelectric actuators may be used at multiple incremental ranges
up to their maximum measuring range. Therefore, selecting the proper actuator really depends on the size and mechanical constraints of the apparatus under test. In general, there are two types of piezoelectric actuator: (1) internally preloaded actuators and (2) ring-style actuators that require external preloading. Internally preloaded actuators do not require any preloading, whereas ring-style actuators require preloading during installation. Ring-style actuators must be preloaded to approximately 20% or more of their measuring range in order to obtain the best possible linearity. This linearity is achieved by tightly clamping the internal components (piezoelectric material and housing) together (see Fig. 1.43). The preload also acts to limit slippage of the actuator caused by side loads experienced during use. Tension measurements are also possible if the actuator has been mounted with proper preload; this makes it possible to measure tensile and compressive loads with one actuator, and is a consequence of how these actuators are constructed.
Preloading is also required for shear force measurements using three-axis actuators. The preload generates the required friction between the actuator and mating surfaces in order to transmit the shear forces. The required preload force is calculated as: force of preload = force of shear / coefficient of friction. A typical value for the coefficient of friction is 0.13; thus, the required preload is at least 7.7 times the desired shear force. (A short numeric sketch of this preload calculation follows Fig. 1.44.) Some manufacturers of
Figure 1.43 Sensor output versus applied load, showing the linear portion of the output between approximately 20% of FSO and FSO. Internally preloaded actuators (sensors) do not require any preloading (courtesy of PCB PIEZOTRONICS, Inc.).
Some manufacturers of piezoelectric actuators recommend 10 times the desired shear force. Force rings that require preload are calibrated and shipped with a standard mounting stud. This stud is specially designed to stretch, yet still maintain a very high tensile strength beyond the force ring measurement range. The stretching action of the stud is designed to allow the force ring to maintain the best possible sensitivity. The stiffer the stud, the more force it takes away from the actuator, effectively reducing the force ring output. The standard stud is normally made of beryllium copper and shunts approximately 5% of the force. Steel bolts can take away approximately 20–50% of the applied force. Different bolt materials may be used, but the actuator then requires recalibration with the new bolt. A properly mounted and preloaded force ring is depicted in Fig. 1.44. Proper alignment and orientation of the actuator are also critical to long-term performance and calibration values. The general guideline is to mount the actuator between flat, parallel, and rigid supports that are at least twice the thickness of the piezoelectric actuator. This aligns the actuator and contact surfaces to prevent edge loading or bending moments, resulting in better dynamic measurements. Loading the entire force-sensing surface is also important for good measurements. This can be difficult if the surface being brought into contact with the actuator is not parallel to the actuator surface.
Figure 1.44 A properly mounted force ring under an applied force F, showing the preload stud, antifriction washer, force ring sensor, and pilot bushing (for centering) (courtesy of PCB PIEZOTRONICS, Inc.).
The unique quasistatic nature of piezoelectric actuators allows static calibrations to be performed. As long as the calibration engineer applies the following three rules, the results will closely match the factory calibration:
(1) The actuator must have a discharge time constant of at least 50 s, or be a charge-mode actuator.
(2) The signal conditioning must be DC coupled.
(3) For force rings, the factory-supplied beryllium copper preload stud must be used.
The calibration engineer simply places a known weight on the actuator and waits for the signal to decay to zero (or resets the charge amplifier). The next step is to remove the weight from the actuator and record the voltage output; this produces a negative voltage step. The magnitude of this value, divided by the applied weight, equals the actuator sensitivity in volts per pound.
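The sensitivity computation described above reduces to a single division. The sketch below assumes the output step is recorded in volts and the dead weight in pounds; the numbers are purely illustrative.

    def actuator_sensitivity(voltage_step_v, applied_weight_lb):
        # Dead-weight static calibration: sensitivity in volts per pound.
        # The raw step recorded when the weight is removed is negative,
        # so the magnitude of the step is used.
        return abs(voltage_step_v) / applied_weight_lb

    # Example: removing a 10 lb weight produces a -0.25 V output step.
    print("Sensitivity: %.4f V/lb" % actuator_sensitivity(-0.25, 10.0))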
1.2.5 Manual Actuators
A manual actuator employs levers, gears, or wheels to facilitate movement. Some manual gear actuators are recommended for use on ball and butterfly valves; for the sake of convenience, and where lever handles present space problems, gear actuators are the ideal choice. Fully enclosed, weatherproof manual gear actuators of all cast iron and carbon steel construction are factory lubricated for their lifetime, requiring no further lubrication. Each unit is supplied with a pointer to indicate valve position. A manual actuator, by definition, is an actuator that requires no outside power source. Handwheels, chainwheels, and levers are examples of manual actuators. A handwheel or lever is used to drive a series of gears (typically worm gears) whose ratio results in a higher output torque compared with the input (manual) torque (a numerical sketch of this follows at the end of this paragraph). Manually operated actuators use a screw mechanism and employ large-diameter wheels or long levers; for high-pressure applications they are equipped with a reduction gear to ease operation. Manual actuators can also be fitted with a chainwheel and extended stem for tank outlet valves or process valves where lack of space demands extended handles. An automatic actuator has an external power source to provide the force and motion to operate a valve remotely or automatically. Power actuators are a necessity on valves in pipelines located in remote areas; they are also used on valves that are frequently operated or throttled. Valves that are particularly large may be impossible or impractical to operate manually
simply because of the sheer horsepower requirements. Some valves may be located in extremely hostile or toxic environments that preclude manual operation. Additionally, as a safety feature, certain types of power actuators may be required to operate quickly, shutting down a valve in case of emergency.
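The torque multiplication provided by the gearing of a manual actuator can be sketched as below. The gear ratio and, especially, the efficiency are assumptions for illustration; worm-gear sets are far from 100% efficient, and real values come from the operator's data sheet.

    def handwheel_output_torque(input_torque, gear_ratio, efficiency=0.4):
        # Approximate output torque of a manual (e.g., worm-gear) valve operator.
        # The 0.4 efficiency is only an assumed, typical-order value.
        return input_torque * gear_ratio * efficiency

    # Example: 50 N*m applied at the handwheel through an assumed 40:1 gear set.
    print("Output torque: %.0f N*m" % handwheel_output_torque(50.0, 40.0))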
1.3 Valves
The valve is one of the most basic and indispensable components of modern industry. It is essential to industrial control technology and is included in virtually all manufacturing processes and every energy production and supply system. The valve is one of the oldest products known to man, with a history of thousands of years. The modern history of the valve industry parallels the Industrial Revolution, which began in 1705 when Thomas Newcomen invented the first industrial steam engine. Because steam built up pressures that had to be contained and regulated, valves acquired a new importance. As Newcomen's steam engine was improved upon by James Watt and other inventors, designers and manufacturers also improved the valves for these steam engines. Their interest, however, was in the whole project, and the manufacture of valves as a separate product was not undertaken on a large scale for a number of years. A valve is a device that controls not only the flow of a fluid, but also the rate, the volume, the pressure, or the direction of liquids, gases, slurries, or dry materials through a pipeline, chute, or similar passageway. With valves, the flow of a fluid in various passageways can be turned ON and OFF, regulated, modulated, or isolated. Valves range in size from a fraction of an inch to as large as 30 ft in diameter, and they vary in complexity from a simple brass valve available at the local hardware store to a precision-designed, highly sophisticated coolant-system control valve, made of an exotic metal alloy, in a nuclear reactor. Valves can control the flow of all types of fluid, from the thinnest gas to highly corrosive chemicals, superheated steam, abrasive slurries, toxic gases, and radioactive materials. They can handle temperatures from the cryogenic region to that of molten metal, and pressures from high vacuum to thousands of pounds per square inch.
1.3.1 Control Valves
The final control element is the device that implements the control strategy determined by the output of the controller. While the final control element can be a damper, a variable-speed drive pump, or an ON–OFF
switching device, the most common final control element in the process control industries is the control valve. The control valve is the final element that manipulates a flowing fluid, such as gas, steam, water, or chemical compounds, to compensate for the load disturbance and keep the regulated process variable as close as possible to the desired set point. The terms "control valve" or "valve" really refer to a control valve assembly. The control valve assembly typically consists of the valve body, the internal trim parts, an actuator to provide the motive power to operate the valve, and a variety of additional valve accessories that can include positioners, transducers, supply pressure regulators, manual operators, snubbers, or limit switches. The control valve regulates the rate of fluid flow as the position of the valve plug or disk is changed by force from the actuator. To do this, the valve must contain the fluid without external leakage; have adequate capacity for the intended service; be capable of withstanding the erosive, corrosive, and temperature influences of the process; and incorporate appropriate end connections to mate with adjacent pipelines, as well as actuator attachment means to permit transmission of actuator thrust to the valve plug stem or rotary shaft.
1.3.1.1 Basic Types
Many styles of control valve bodies have been developed through the years. Some have found wide application; others meet specific service conditions and are used less frequently. The following summary describes some popular control valve body styles in use today, some special application valves, steam conditioning valves, and the ancillary devices of the control valve, including valve actuators, positioners, and accessories. (1) Linear globe valves. Linear globe valves are valves with a linear-motion closure member, one or more ports, and a body distinguished by a globular-shaped cavity around the port region. Globe valves can be further classified as single-ported valve bodies, balanced-plug cage-guided valve bodies, high-capacity cage-guided valve bodies, port-guided single-port valve bodies, double-ported valve bodies, and three-way valve bodies. (2) Rotary shaft valves. Rotary shaft valves are valves of the style in which the flow closure member (full ball, partial ball, disk, or plug) is rotated in the flow stream to control the capacity of the valve. Rotary shaft valves can be further classified as butterfly valve bodies, V-notch ball control valve bodies, eccentric-disk control valve bodies, and eccentric-plug control valve bodies.
(3) Special valves. Standard control valves can handle a wide range of control applications. Certainly, corrosiveness and viscosity of the fluid, leakage rates, and many other factors demand consideration even for standard applications. The following discusses some special control valve modifications useful in severe controlling applications. (a) High capacity control valves. This special valve category covers globe-style valves larger than 12 in., ball valves over 24 in., and high-performance butterfly valves larger than 48 in. As valve sizes increase arithmetically, static pressure loads at shutoff increase geometrically. Consequently, shaft strength, bearing loads, unbalance forces, and available actuator thrust all become more significant with increasing valve size. Normally, the maximum allowable pressure drop is reduced on large valves to keep design and actuator requirements within reasonable limits. Even with lowered working pressure ratings, the flow capacity of some large-flow valves remains tremendous. (b) Low flow control valves. Many applications exist in laboratories and pilot plants, in addition to the general processing industries, where control of extremely low flow rates is required. These applications are commonly handled in one of two ways. First, special trims are often available in standard control valve bodies. The special trim is typically made up of a seat ring and valve plug that have been designed and machined to very close tolerances to allow accurate control of very small flows. These types of control valves are specially designed for the accurate control of very low-flowing liquid or gaseous fluid applications. (c) High temperature control valves. These control valves are specially designed for service at temperatures above 450°F (232°C), such as may be encountered in boiler feedwater systems and superheater bypass systems. (d) Cryogenic service valves. Cryogenic service valves are for dealing with materials and processes at temperatures below –150°F (–101°C). For control valve applications in cryogenic services, many of the same issues need consideration as with high temperature control valves. Packing is a concern in cryogenic applications. Plastic and elastomeric components often cease to function appropriately at temperatures below 0°F (–18°C). In these temperature ranges, components such as packing and plug seals require special consideration. For plug seals, a standard soft seal will become
very hard and less pliable, and thus will not provide the shut-off required of a soft seat. Special elastomers have been applied at these temperatures but require special loading to achieve a tight seal. (4) Steam conditioning valves. A steam conditioning valve is used for the simultaneous reduction of steam pressure and temperature to the level required for a given application. Frequently, these applications deal with high inlet pressures and temperatures and require significant reductions of both properties. They are, therefore, best manufactured in a forged and fabricated body that can better withstand steam loads at elevated pressures and temperatures. Forged materials permit higher design stresses, improved grain structure, and an inherent material integrity over cast valve bodies. The forged construction also allows the manufacturer to provide up to Class 4500, as well as intermediate and special class ratings, with greater ease than with cast valve bodies. Because of frequent extreme changes in steam properties as a result of the temperature and pressure reduction, the forged and fabricated valve body design allows for the addition of an expanded outlet to control outlet steam velocity at the lower pressure. Similarly, with reduced outlet pressure, the forged and fabricated design allows the manufacturer to provide different pressure class ratings for the inlet and outlet connections to more closely match the adjacent piping. The latest versions of steam conditioning valves include the following designs: feed-forward design, manifold design, pressure-reduction-only design, turbine bypass design, etc. The turbine bypass system has evolved over the past few decades as the mode of power plant operations has changed. It is employed routinely in utility power plants where operations require quick response to wide swings in energy demands. A typical power plant operation might start at minimum load, increase to full capacity for most of the day, rapidly reduce back to minimum output, and then go up again to full load, all within a 24-h period. Boilers, turbines, condensers, and other associated equipment cannot respond properly to such rapid changes without some form of turbine bypass system. The turbine bypass system allows operation of the boiler independent of the turbine. In the start-up mode, or during a rapid reduction of generation requirement, the turbine bypass not only supplies an alternate flow path for steam, but conditions the steam to the same pressure and temperature normally produced by the turbine expansion process. By providing an alternate flow path for the steam, the turbine bypass system protects the turbine, boiler, and condenser from
damage that may occur from thermal and pressure excursions. For this reason, many turbine bypass systems require extremely rapid open/close response times for maximum equipment protection. This is accomplished with an electrohydraulic actuation system that provides both the forces and the controls for such operation. Additionally, when commissioning a new plant, the turbine bypass system allows start-up and checkout of the boiler separately from the turbine. This means quicker plant start-ups, which results in attractive economic gains. It also means that this closed-loop system can prevent atmospheric loss of treated feed water and reduce ambient noise emissions. (5) Valve actuators. Pneumatically operated control valve actuators are the most popular type in use, but electric, hydraulic, and manual actuators are also widely used; all of these actuators are introduced in Section 1.2 of this chapter. The spring-and-diaphragm pneumatic actuator is most commonly specified because of its dependability and simplicity of design. Pneumatically operated piston actuators provide high stem force output for demanding service conditions. Adaptations of both spring-and-diaphragm and pneumatic piston actuators are available for direct installation on rotary shaft control valves. Electric and electrohydraulic actuators are more complex and more expensive than pneumatic actuators, but they offer advantages where no air supply source is available, where low ambient temperatures could freeze condensed water in pneumatic supply lines, or where unusually large stem forces are needed. Pneumatically operated diaphragm actuators use an air supply from the controller, positioner, or other source. Various styles include: direct action, in which increasing air pressure pushes down on the diaphragm and extends the actuator stem; reverse action, in which increasing air pressure pushes up on the diaphragm and retracts the actuator stem; reversible actuators, which can be assembled for either direct or reverse action; and direct-acting units for rotary valves, in which increasing air pressure pushes down on the diaphragm, which may either open or close the valve, depending on the orientation of the actuator lever on the valve shaft. Piston actuators are pneumatically operated using high-pressure plant air up to 150 psig, often eliminating the need for a supply pressure regulator. Piston actuators furnish maximum thrust output and fast stroking speeds. Piston actuators are double acting to give maximum force in either direction, or spring return to provide fail-open or fail-closed operation. Electrohydraulic actuators require only electrical power to the motor and an electrical input signal from the controller.
Electrohydraulic actuators are ideal for isolated locations where pneumatic supply pressure is not available but where precise control of valve plug position is needed. Units are normally reversible by making minor adjustments and might be self-contained, including motor, pump, and double-acting hydraulically operated piston within a weatherproof or explosion-proof casing. Rack-and-pinion designs provide a compact and economical solution for rotary shaft valves. Because of backlash, they are typically used for ON–OFF applications or where process variability is not a concern. Traditional electric actuator designs use an electric motor and some form of gear reduction to move the valve. Through adaptation, these mechanisms have been used for continuous control with varying degrees of success. To date, electric actuators have been much more expensive than pneumatic actuators for the same performance levels. This is an area of rapid technological change, and future designs may cause a shift toward greater use of electric actuators. Manual actuators are useful where automatic control is not required, but where ease of operation and good manual control are still necessary. They are often used to actuate the bypass valve in a three-valve bypass loop around control valves for manual control of the process during maintenance or shutdown of the automatic system. Manual actuators are available in various sizes for both globe-style valves and rotary shaft valves. Manual actuators are much less expensive than automatic actuators. (6) Positioners. Pneumatically operated valves depend on a positioner to take an input signal from a process controller and convert it to valve travel. Positioners are mostly available in three configurations: (a) Pneumatic. A pneumatic signal (usually 3–15 psig) is supplied to the positioner. The positioner translates this to a required valve position and supplies the valve actuator with the required air pressure to move the valve to the correct position. (b) Analog I/P. This positioner performs the same function as the one above, but uses electrical current (usually 4–20 mA) instead of air as the input signal. (c) Digital. Although this positioner functions very much like the analog I/P described above, it differs in that the electronic signal conversion is digital rather than analog. The digital products cover three categories: (i) Digital noncommunicating. A current signal (4–20 mA) is supplied to the positioner, which both powers the electronics and controls the output.
(ii) HART. This is the same as the digital noncommunicating type but is also capable of two-way digital communication over the same wires used for the analog signal. (iii) Fieldbus. This type receives digitally based signals and positions the valve using digital electronic circuitry coupled to mechanical components. (7) Valve accessories. Valve accessories include (a) Limit switches. Limit switches operate discrete inputs to a distributed control system, signal lights, small solenoid valves, electric relays, or alarms. An assembly that mounts on the side of the actuator houses the switches. Each switch adjusts individually and can be supplied for either alternating current or direct current systems. Other styles of valve-mounted limit switches are also available. (b) Solenoid valve manifold. The actuator type and the desired fail-safe operation determine the selection of the proper solenoid valve. The solenoids can be used on double-acting piston or single-acting diaphragm actuators. (c) Supply pressure regulator. Supply pressure regulators, commonly called airsets, reduce the plant air supply to valve positioners and other control equipment. Common reduced air supply pressures are 20, 35, and 60 psig. The regulator mounts integrally to the positioner, or nipple-mounts or bolts to the actuator. (d) Pneumatic lock-up systems. Pneumatic lock-up systems are used with control valves to lock in the existing actuator loading pressure in the event of supply pressure failure. These devices can be used with volume tanks to move the valve to the fully open or closed position on loss of pneumatic air supply. Normal operation resumes automatically with restored supply pressure. Functionally similar arrangements are available for control valves using diaphragm actuators. (e) Fail-safe systems for piston actuators. In these fail-safe systems, the actuator piston moves to the top or bottom of the cylinder when supply pressure falls below a predetermined value. The volume tank, charged with supply pressure, provides loading pressure for the actuator piston when supply pressure fails, thus moving the piston to the desired position. Automatic operation resumes, and the volume tank is recharged, when supply pressure is restored to normal. (f) PC diagnostic software. PC diagnostic software provides a consistent, easy-to-use interface to every field instrument within a plant. For the first time, a single resource can be used to communicate with and analyze electronic "smart" field
devices such as pressure transmitters and flow transmitters (but not pneumatic positioners or boosters). Users can benefit from reduced training requirements and reduced software expense. A single purchase provides the configuration environment for all products. Products and services are available that were not possible with stand-alone applications. The integrated product suite makes higher-level applications and services possible. (g) Electropneumatic transducers. The transducer receives a direct current input signal and uses a torque motor, nozzle flapper, and pneumatic relay to convert the electric signal to a proportional pneumatic output signal. Nozzle pressure operates the relay and is piped to the torque motor feedback bellows to provide a comparison between the input signal and the nozzle pressure.
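At steady state, the electropneumatic (I/P) conversion in item (g) amounts to a linear mapping between the standard 4–20 mA input range and the 3–15 psig pneumatic range mentioned under positioners; a minimal sketch:

    def ip_output_psig(current_ma, i_min=4.0, i_max=20.0, p_min=3.0, p_max=15.0):
        # Linearly map a 4-20 mA signal to a 3-15 psig pneumatic output.
        if not i_min <= current_ma <= i_max:
            raise ValueError("input current out of range")
        span_fraction = (current_ma - i_min) / (i_max - i_min)
        return p_min + span_fraction * (p_max - p_min)

    for ma in (4.0, 12.0, 20.0):
        print("%.0f mA -> %.1f psig" % (ma, ip_output_psig(ma)))
    # Prints 3.0, 9.0, and 15.0 psig for 4, 12, and 20 mA respectively.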
1.3.1.2 Technical Specifications
Control valves handle all kinds of fluids at temperatures from the cryogenic range to well over 1000°F (538°C). Selection of a control valve body assembly requires particular consideration to provide the best available combination of valve body style, material, and trim construction design for the intended service. Capacity requirements and system operating pressure ranges must also be considered in selecting a control valve to ensure satisfactory operation without undue initial expense. Reputable control valve manufacturers and their representatives are dedicated to helping in selecting the control valve most appropriate for the existing service conditions. Because there are often several possible correct choices for an application, it is important that all the following information be provided:
(1) type of fluid to be controlled;
(2) temperature of fluid;
(3) viscosity of fluid;
(4) specific gravity of fluid;
(5) flow capacity required (maximum and minimum);
(6) inlet pressure at valve (maximum and minimum);
(7) outlet pressure (maximum and minimum);
(8) pressure drop during normal flowing conditions;
(9) pressure drop at shutoff;
(10) maximum permissible noise level, if pertinent, and the measurement reference point;
(11) degrees of superheat or existence of flashing, if known;
(12) inlet and outlet pipeline size and schedule;
(13) special tagging information required;
(14) body material;
(15) end connections and valve rating;
(16) action desired on air failure;
(17) instrument air supply available;
(18) instrument signal.
The following information will require the agreement of the user and the manufacturer depending on the purchasing and engineering practices being followed:
(1) valve type number;
(2) valve size;
(3) valve body construction (angle, double port, butterfly, etc.);
(4) valve plug guiding (cage style, port guided, etc.);
(5) valve plug action (push down to close or push down to open);
(6) port size (full or restricted);
(7) valve trim materials required;
(8) flow action (flow tends to open valve or flow tends to close valve);
(9) actuator size required;
(10) bonnet style (plain, extension, etc.);
(11) packing material (laminated graphite, environmental sealing systems, etc.);
(12) accessories required (positioner, handwheel, etc.).
The following steps need to be undertaken for the selection of a valve (see the sizing sketch after this list for step 2):
(1) determine the service conditions;
(2) calculate the preliminary Cv required;
(3) select the trim type;
(4) select the valve body and trim size;
(5) select the trim materials;
(6) consider other factors such as shutoff, stem packing, etc.
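Step (2), the preliminary Cv, is commonly estimated for liquid service from the basic relationship Q = Cv * sqrt(dP/SG), where Q is flow in U.S. gpm, dP the pressure drop in psi, and SG the specific gravity relative to water. The sketch below simply rearranges that relationship; it is not the author's procedure and ignores the corrections (piping geometry, choked flow, viscosity) that a full sizing standard would apply.

    import math

    def preliminary_cv(flow_gpm, pressure_drop_psi, specific_gravity=1.0):
        # Preliminary liquid-service flow coefficient: Cv = Q * sqrt(SG / dP).
        return flow_gpm * math.sqrt(specific_gravity / pressure_drop_psi)

    # Example: 200 gpm of water with a 25 psi drop across the valve.
    print("Preliminary Cv: %.1f" % preliminary_cv(200.0, 25.0))   # -> 40.0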
1.3.1.3 Application Guide
The performance of control valves can be affected by the following factors:
(1) Dead band. Dead band is a range, or band, of controller output values that fails to produce a change in the measured process
variable when the input signal reverses direction. When a load disturbance occurs, the process variable deviates from the set point; the deviation initiates a corrective action through the controller and back through the process. However, an initial change in controller output may produce no corresponding corrective change in the process variable. Only when the controller output has been changed enough to progress through the dead band does a corresponding change in the process variable occur. Any time the controller output reverses direction, the controller signal must pass through the dead band before any corrective change in the process variable will occur. Therefore, dead band is a major contributor to excess process variability, and control valve assemblies can be a primary source of dead band in an instrumentation loop due to a variety of causes such as friction, backlash, shaft wind-up, and relay or spool valve dead zone. Its presence in the process ensures that the process variable deviation from the set point will have to increase until it is big enough to get through the dead band; only then can a corrective action occur. (A small numerical illustration of this effect is given at the end of this application guide.) Some of the most common causes of dead band are friction and backlash in the control valve, along with shaft wind-up in rotary valves and relay dead zone. Because most control actions for regulatory control consist of small changes (1% or less), a control valve with excessive dead band might not even respond to many of these small changes. A well-engineered valve should respond to signals of 1% or less to provide effective reduction in process variability. However, it is not uncommon for some valves to exhibit dead band as great as 5% or more. In a recent plant audit, 30% of the valves had dead bands in excess of 4%, and over 65% of the loops audited had dead bands greater than 2%.
Friction is a major cause of dead band in control valves. Rotary valves are often very susceptible to friction caused by the high seat loads required to obtain shut-off with some seal designs. Because of the high seal friction and poor drive-train stiffness, the valve shaft winds up and does not translate motion to the control element. As a result, an improperly designed rotary valve can exhibit significant dead band that clearly has a detrimental effect on process variability. Manufacturers usually lubricate rotary valve seals during manufacture, but after only a few hundred cycles this lubrication wears off. In addition, pressure-induced loads also cause seal wear. As a result, the valve friction can increase by 400% or more for some valve designs. This illustrates the misleading performance conclusions that can result from evaluating products using bench-type data before the torque has stabilized. Packing friction is the primary source of friction
in sliding stem valves. In these types of valves, the measured friction can vary significantly between valve styles and packing arrangements. Actuator style also has a profound impact on control valve assembly friction. Generally, spring-and-diaphragm actuators contribute less friction to the control valve assembly than piston actuators. Piston actuator friction will probably increase significantly with use as guide surfaces and the O-rings wear, lubrication fails, and the elastomer degrades. Backlash is the name given to slack, or looseness, of a mechanical connection. This slack results in a discontinuity of motion when the device changes direction. Backlash commonly occurs in gear drives of various configurations. Rack-and-pinion actuators are particularly prone to dead band due to backlash. Some valve shaft connections also exhibit dead band effects. Spline connections generally have much less dead band than keyed shafts or double-D designs. While friction can be reduced significantly through good valve design, it is a difficult phenomenon to eliminate entirely. A well-engineered control valve should be able to virtually eliminate dead band due to backlash and shaft wind-up. For best performance in reducing process variability, the total dead band for the entire valve assembly should be 1% or less; ideally, it should be as low as 0.25%. (2) Actuator/positioner design. Both the actuator and positioner designs greatly affect the static performance (dead band), as well as the dynamic response of the control valve assembly and the overall air consumption of the valve instrumentation. Static gain is related to the sensitivity of the device to the detection of small (0.125% or less) changes of the input signal. Unless the device is sensitive to these small signal changes, it cannot respond to minor upsets in the process variable. This high static gain of the positioner is obtained through a preamplifier, similar in function to the preamplifier contained in high-fidelity sound systems. In many pneumatic positioners, a nozzle flapper or a similar device serves as this high static gain preamplifier. Once the high static gain positioner preamplifier has detected a change in the process variable, the positioner must then be capable of making the valve closure member move rapidly to provide a timely corrective action to the process variable. This requires much power to make the actuator and valve assembly move quickly to a new position, which means that the positioner must rapidly supply a large volume of air to the actuator to make it respond promptly. The ability to do this comes from the high dynamic gain of the positioner. Although the positioner
preamplifier can have high static gain, it typically has little ability to supply the power needed. Thus, the preamplifier function must be supplemented by a high dynamic gain power amplifier that supplies the required air flow as rapidly as needed. A relay or a spool valve typically provides this power amplifier function. In summary, high-performance positioners with both high static and dynamic gain provide the best overall process variability performance for any given valve assembly. (3) Valve response time. It is important that the valve reach a specific position quickly in control. A quick response to small signal changes (1% or less) is one of the most important factors in providing optimum process control. Valve response time is measured by a parameter called T63 (Tee-63). T63 is the time measured from initiation of the input signal change to when the output reaches 63% of the corresponding change. It includes both the valve assembly dead time, which is a static time, and the dynamic time of the valve assembly. The dynamic time is a measure of how long the actuator takes to get to the 63% point once it starts moving. Dead band, whether it comes from friction in the valve body and actuator or from the positioner, can significantly affect the dead time of the valve assembly. It is important to keep the dead time as small as possible. Generally dead time should be no more than one-third of the overall valve response time. However, the relative relationship between the dead time and the process time constant is critical. If the valve assembly is in a fast loop where the process time constant approaches the dead time, the dead time can dramatically affect loop performance. On these fast loops, it is critical to select control equipment with dead time as small as possible because some valve assembly designs can have dead times that are 3–5 times longer in one stroking direction than the other. Once the dead time has passed and the valve begins to respond, the remainder of the valve response time comes from the dynamic time of the valve assembly. This dynamic time will be determined primarily by the dynamic characteristics of the positioner and actuator combination. These two components must be carefully matched to minimize the total valve response time. This dynamic gain comes mainly from the power amplifier stage in the positioner. However, this high dynamic gain power amplifier will have little effect on the dead time unless it has some intentional dead band designed into it to reduce static air consumption. The design of the actuator significantly affects the dynamic time. For example, the greater the volume of the actuator air chamber to be filled, the slower the valve response time.
To minimize the valve assembly dead time, minimize the dead band of the valve assembly, whether it comes from friction in the valve seal design, packing friction, shaft wind-up, the actuator, or the positioner design. As indicated, friction is a major cause of dead band in control valves. On rotary valve styles, shaft wind-up can also contribute significantly to dead band. Actuator style also has a profound impact on control valve assembly friction. Regarding the impact of the actuator, generally speaking, spring-and-diaphragm actuators contribute less friction to the control valve assembly than piston actuators over an extended time. As mentioned, this is caused by increasing friction from the piston O-ring, misalignment problems, and failed lubrication. Having a positioner design with a high static gain preamplifier can make a significant difference in reducing dead band. This can also make a significant improvement in the valve assembly resolution. Valve assemblies with dead band and resolution of 1% or less are no longer adequate for many process variability reduction needs. Many processes require the valve assembly to have dead band and resolution as low as 0.25%, especially where the valve assembly is installed in a fast process loop. Selecting the proper valve, actuator, and positioner combination is not easy. It is not simply a matter of finding a combination that is physically compatible. Good engineering judgment must go into the practice of valve assembly sizing and selection to achieve the best dynamic performance from the loop. (4) Valve types and sizing. The style of valve used and the sizing of the valve can have a large impact on the performance of the control valve assembly in the system. Although a valve must be of sufficient size to pass the required flow under all possible contingencies, a valve that is too large for the application is a detriment to process optimization. The flow capacity of the valve is also related to the style of valve through the inherent characteristic of the valve. The inherent characteristic is the relationship between the valve flow capacity and the valve travel when the differential pressure drop across the valve is held constant. The best process performance occurs when the required flow characteristic is obtained through changes in the valve trim rather than through the use of cams or other methods. Proper selection of a control valve designed to produce a reasonably linear installed flow characteristic over the operating range of the system is a critical step in ensuring optimum process performance. Oversizing of valves sometimes occurs when trying to optimize process performance through a reduction of process variability.
This results from using line-size valves, especially with high-capacity rotary valves, as well as from the conservative addition of multiple safety factors at different stages in the process design. Oversizing the valve hurts process variability in two ways. First, the oversized valve puts too much gain in the valve, leaving less flexibility in adjusting the controller; the best performance results when most of the loop gain comes from the controller. The second way oversized valves hurt process variability is that an oversized valve is likely to operate more frequently at lower valve openings, where seal friction can be greater, particularly in rotary valves. Because an oversized valve produces a disproportionately large flow change for a given increment of valve travel, this phenomenon can greatly exaggerate the process variability associated with dead band due to friction. When the valve is oversized, the valve tends to reach system capacity at relatively low travel, making the flow curve flatten out at higher valve travels. When selecting a valve, it is important to consider the valve style, inherent characteristic, and valve size that will provide the broadest possible control range for the application.
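To make the dead band effect of item (1) concrete, the following minimal simulation (an illustrative backlash-style model, not a physical valve model) shows a valve that only follows the controller output after the signal has traveled through the dead band in the new direction:

    def valve_with_dead_band(signal, dead_band):
        # Backlash-style dead band: the valve position follows the controller
        # output only after the signal has moved through half the band.
        position = signal[0]
        positions = [position]
        for s in signal[1:]:
            if s > position + dead_band / 2.0:
                position = s - dead_band / 2.0
            elif s < position - dead_band / 2.0:
                position = s + dead_band / 2.0
            positions.append(position)
        return positions

    # Controller output (percent) ramping up in 0.5% steps, then reversing.
    controller_output = [50.0, 50.5, 51.0, 51.5, 52.0, 51.5, 51.0, 50.5, 50.0]
    print(valve_with_dead_band(controller_output, dead_band=2.0))
    # -> [50.0, 50.0, 50.0, 50.5, 51.0, 51.0, 51.0, 51.0, 51.0]
    # The valve lags the upward ramp and ignores the 2% reversal entirely.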
1.3.2 Self-Actuated Valves
Self-actuated valves are valves that use the fluid or gas existing in a system to position the valve. Check valves and relief valves are two important examples of self-actuated valves. In addition to check valves and relief valves, safety valves and steam traps are also defined as self-actuated valves. All of these valves are actuated by the system fluid or gas; no source of power outside the system fluid or gas energy is necessary for their operation.
1.3.2.1 Check Valves
Check valves are self-activating safety valves that permit gases and liquids to flow in only one direction, preventing process flow from reversing. When open and under flow pressure, the checking mechanism will move freely in the medium, offering very little resistance and minimal pressure drop. Check valves are classified as one-way directional valves: fluid flow in the desired direction opens the valve, while backflow forces the valve to close. (1) Operating principle. A check valve is a one-way valve for fluid flow. There are many ways to achieve one-way flow. Most check valves contain a ball that sits freely above the seat, which has
only one through-hole. The ball has a slightly larger diameter than that of the through-hole. When the pressure behind the seat exceeds that above the ball, liquid is allowed to flow through the valve; however, once the pressure above the ball exceeds the pressure below the seat, the ball returns to rest in the seat, forming a seal that prevents backflow. Figure 1.45 is used here to describe briefly the principles of check valves. Device A consists of a ball bearing retained by a spring. Fluid flowing to the right will push the ball bearing against the spring and open the valve to permit flow. This device requires some pressure to compress the spring and open the valve. If an attempt is made to flow fluid to the left, the ball bearing seals against the opening and no flow is allowed. This is a modern design that requires round balls. Device B is simply a flapper that is anchored on one side. The flapper can be a hinged metallic door, a thin piece of metal, or a piece of rubber or polymer. This is the simplest design and was used in early pumps. Two methods of incorporating check valves into pumps are also shown in Fig. 1.46. In schematic A, the check valves permit expulsion of the fluid to the right on the downward stroke while denying flow to the left. On the upward stroke, the pump fills from the left, while denying reverse flow from the right. The design in schematic B is somewhat different. The piston has one or more holes drilled through it, with a check valve on each hole. (One hole is illustrated.) On the downward stroke, the fluid moves from below the piston to the chamber above the piston and is denied exit to the left.
Figure 1.45 Device A consists of a ball bearing retained by a spring; device B is simply a flapper anchored on one side (courtesy of Michigan State University).
Figure 1.46 Two methods of incorporating check valves into pumps: (a) the downward stroke forces fluid out of this device; (b) the upward stroke forces fluid out of this device through a hole drilled in the piston (courtesy of Michigan State University).
On the upward stroke, fluid is pushed out the exit on the right and simultaneously more fluid is drawn in from the entrance on the left. Case B is the design illustrated by Watt in his patent and described as the air pump, since it pumps air as well as water. (2) Basic types. Check valves use a variety of technologies to allow and stop the flow of liquids and gases. Single-disk swing valves are designed with the closure element attached to the top of the cap. Double-disk or wafer check valves consist of two half-circle disks hinged together that fold together upon positive flow and retract to a full circle to close against reverse flow. Lift-check valves feature a guided disk. Spring-loaded devices can be mounted vertically or horizontally. Silent or center-guide valves are similar to lift check valves, with a center guide extending from the inlet to the outlet port. The valve stopper is spring and bushing actuated to keep the movement "quiet." Ball check valves use a free-floating or spring-loaded ball resting in a seat ring as the closure element. Cone check valves use a free-floating or spring-loaded cone resting in the seat ring as the closure element. Although there are many types of check valves, two basic types are most popular in industrial control: swing check valves and ball check valves. Both types of valves may be installed vertically or horizontally. (a) Swing check valves. Swing check valves are used to prevent flow reversal in horizontal or vertical upward pipelines (vertical pipes, or pipes at any angle from horizontal to vertical,
with upward flow only). Swing check valves have disks that swing open and closed. The disks are typically designed to close under their own weight, and may be in a state of constant movement if the velocity pressure is not sufficient to hold the valve in a wide-open position. Premature wear or noisy operation of swing check valves can be avoided by selecting the correct size on the basis of flow conditions. The minimum velocity required to hold a swing check valve in the open position is expressed by the empirical formula given in Fig. 1.47, where V is the liquid velocity measured in m/s or ft/s; v is the specific volume of the liquid measured in m3/N or ft3/lb; and j equals 133.7 (35) for the Y-pattern design, 229.1 (60) for the bolted-cap design, or 381.9 (100) for the U/L listed design. Tilting disk check valves are pivoted circular disks mounted in a cylindrical housing. These check valves have the ability to close rapidly, thereby minimizing slamming and vibrations. Tilting disk checks are used to prevent reversal in horizontal or vertical-up lines, similar to swing check valves. The minimum velocity required for holding a tilting disk check valve wide open can be determined by the same empirical formula given in Fig. 1.47, where V is the liquid velocity measured in m/s or ft/s; v is the specific volume of the liquid measured in m3/N or ft3/lb; and j equals 305.5 (80) for a 5-degree disk angle (typical for steel), or 114.6 (30) for a 15-degree disk angle (typical for iron). Lift check valves also operate automatically by line pressure. They are installed with pressure under the disk. A lift check valve typically has a disk that is free floating and is lifted by the flow. Liquid has an indirect line of flow, so the lift check restricts the flow. Because of this, lift check valves are similar to globe valves and are generally used as a companion to globe valves. (b) Ball check valves. A ball check valve is a type of check valve in which the movable part is a spherical ball, as illustrated in Fig. 1.48. Ball check valves are used in spray devices, dispenser spigots, manual and other pumps, and refillable dispensing syringes. Ball check valves may use either a free-floating or a spring-loaded ball. Ball check valves are generally simple, inexpensive metallic parts, although specialized ball check valves are also available.
Figure 1.47 The minimum velocity formula for swing and tilting disk check valves: V = j√v, where V is the liquid velocity (m/s or ft/s), v is the specific volume of the liquid (m3/N or ft3/lb), and j is the empirical coefficient given in the text.
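A sketch of the empirical minimum-velocity calculation of Fig. 1.47, using the U.S.-unit coefficients quoted above (the result is only as meaningful as the empirical coefficient chosen for the valve pattern):

    import math

    # Coefficients j for the U.S.-unit form (V in ft/s, specific volume v in ft3/lb),
    # as quoted in the text for swing check valves.
    J_SWING = {"y_pattern": 35.0, "bolted_cap": 60.0, "ul_listed": 100.0}

    def min_velocity_ft_s(specific_volume_ft3_lb, j):
        # Minimum velocity to hold a swing check valve wide open: V = j * sqrt(v).
        return j * math.sqrt(specific_volume_ft3_lb)

    # Example: water at roughly 0.016 ft3/lb through a bolted-cap swing check valve.
    print("V_min = %.1f ft/s" % min_velocity_ft_s(0.016, J_SWING["bolted_cap"]))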
Figure 1.48 The working block of a ball check valve. In this figure, 1 is the seat body, 2 is the cap, 3 is the ball, 4 is the angle body, 5 is the body clamp, 6 is the body gasket, and 8 is the cap gasket (courtesy of VNE Corporation).
For example, ball check valves in high-pressure pumps used in analytical chemistry have a ball of synthetic ruby, a hard and chemically resistant substance. A ball check valve is not to be confused with a ball valve, which is a quarter-turn valve similar to a butterfly valve in which the ball acts as a controllable rotor. (3) Specifications. The following are the basic specifications of check valves: (a) Technical types of check valves include (i) Ball and cone check valves. Ball and cone check valves use a free-floating or spring-loaded ball or cone resting in a seat ring as the closure element. Upon reverse flow, the ball or cone is forced back into its seat, preventing backflow. (ii) Double check valves. Double check valves are assemblies that contain two distinct check valves. (iii) Duckbill check valves. Duckbill valves are flow-sensitive, variable-area check valves. They get their name from their shape, which consists of two flaps shaped like a duck's bill. In zero-flow conditions, the valve remains closed. As the flow increases, the pressure on the flaps increases and the valve opens. (iv) Foot check valves. Foot valves are a type of check valve with a built-in strainer. They are used at the point of liquid intake to retain liquid in the system.
(v) Lift check valves. Lift check valves use a free-floating closure element, consisting of a piston or poppet and a seat ring. The piston or poppet moves either horizontally or vertically relative to the flow, depending on the valve construction. (vi) Swing check valves. Swing check valves are designed with the closure element attached to the top of the cap. The closure element can be pushed aside by the flow, but swings back into the closed position upon flow reversal. (vii) Umbrella check valves. Umbrella check valves are elastomeric self-activating devices. These valves simply press into a hole and can be designed to function within a specified pressure range. The valve gets its name from the umbrella-like shape of the device. (viii) Wafer/split disk check valves. Wafer or split disk check valves have two half-circle disks hinged together that fold together upon positive flow and retract to a full circle to close against reverse flow. (b) Technical parameters of check valves include (i) Valve size. This is the designated size of the valve as specified by the manufacturer, which typically represents the size of the passage opening. (ii) Pressure rating. This is the maximum safe pressure value for which the valve is rated. (iii) Media temperature. This is the maximum temperature of the media the valve is designed to accommodate. (iv) Flow. The valve flow coefficient is the number of U.S. gallons per minute of 60°F water that will flow through a valve at a specified opening with a pressure drop of 1 psi across the valve. It is used to predict flow rates. (c) Connection methods for check valves can be (i) Threaded. The valve has internal or external threads for the inlet or outlet connection(s). (ii) Compression fitting. This is a sealed pipe connection without soldering or threading; as the nut on one fitting is tightened, it compresses a washer around the second pipe, forming a watertight closure. (iii) Bolt flange. The valve has a bolt flange(s) for the inlet or outlet connection. (iv) Clamp flange. The valve has a clamp flange(s) for the inlet or outlet connection. (v) Union. The valve has a union connection for the inlet or outlet connection(s).
(vi) Tube fitting. The valve has a connection for directly joining tubing at the inlet and/or outlet connections. (vii) Butt weld. The valve has a butt-weld-sized connection for the inlet or outlet connection. (viii) Socket weld/solder. The valve has a socket weld connection for the inlet or outlet connection.
1.3.2.2 Relief Valves
The relief valve is a valve mechanism that ensures system fluid flow when a preselected differential pressure across the filter element is exceeded; the valve allows all or part of the flow to bypass the filter element. Relief valves are used on oil and gas production systems, compressor stations, gas transmission (pipeline) facilities, and storage systems, in all gas processing plants, and wherever there is a need to exhaust the overpressure volume of gas, vapor, and/or liquid. (1) Operating principle. Figure 1.49 illustrates the working blocks of a relief valve. Relief valves operate on the principle of unequal areas exposed to the same pressure. When the relief valve is closed, the system pressure pushes upward against the piston seat seal over an area defined by the inside diameter of the seat.
Figure 1.49 The working blocks of a relief valve (courtesy of P.C. McKenzie Company).
Simultaneously, the same system pressure, passing through the pilot, exerts a downward force on the piston over an area approximately 50% greater than that of the seat. The resulting differential force holds the valve tightly closed. As the system approaches the discharge set pressure of the valve, the piston seal becomes tighter, until the system pressure reaches the relief valve discharge set pressure. At that moment, and not before, the pilot cuts off the supply of system pressure to the top of the piston and vents the system pressure trapped in the chamber above the piston of the relief valve. At the same instant, the relief valve pops open. When the predetermined blowdown pressure is reached (either fixed or adjustable), the pilot shuts off the exhaust and reopens the flow of system pressure to the top of the piston, effectively closing the relief valve. Figure 1.50 is a drawing of a direct-operating pressure relief valve. This pressure relief valve is mounted at the pressure side of the hydraulic pump, which is located below it. The task of this pressure relief valve is to limit the pressure in the system to an acceptable value. In fact, a pressure relief valve has the same construction as a spring-operated check valve. When the system becomes overloaded, the pressure relief valve opens and the pump flow is led directly into the hydraulic reservoir. The pressure in the system remains at the value determined by the spring of the pressure relief valve.
Figure 1.50 A drawing of a direct-operating pressure relief valve, showing the pressure port P and the tank (return) port T.
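Two quick force-balance sketches follow from the principles described above, with purely illustrative numbers: the pilot-operated valve stays closed because the piston area exposed to system pressure is roughly 50% larger than the seat area, and a direct-operating valve cracks open when system pressure acting on the seat area overcomes the spring force.

    import math

    def net_seating_force(pressure_psi, seat_diameter_in, piston_area_factor=1.5):
        # Net closing force (lbf) on a pilot-operated relief valve piston.
        seat_area = math.pi * (seat_diameter_in / 2.0) ** 2
        piston_area = piston_area_factor * seat_area      # ~50% larger than the seat
        return pressure_psi * (piston_area - seat_area)

    def cracking_pressure(spring_force_lbf, seat_diameter_in):
        # Approximate set pressure (psi) of a direct-operating relief valve.
        seat_area = math.pi * (seat_diameter_in / 2.0) ** 2
        return spring_force_lbf / seat_area

    print("Net seating force:  %.0f lbf" % net_seating_force(1000.0, 1.0))   # ~393 lbf
    print("Cracking pressure:  %.0f psi" % cracking_pressure(400.0, 1.0))    # ~509 psi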
In the pressure relief valve, the pressure energy of the diverted flow is converted into heat. For this reason, pressure relief valves should not be operated for long durations. (2) Basic types and specifications. Table 1.5 lists several important types of relief valve, categorized in terms of their applications. In industrial control, typical features of relief valves include the following: (1) Pressure settings of relief valves are externally adjustable while the valve is in operation; most vendors of relief valves offer eight different spring ranges to provide greater system sensitivity and enhanced performance. (2) A manual override option with positive stem retraction is available; this option permits the user to relieve upstream pressure while maintaining the predetermined cracking pressure. (3) Color-coded springs and labels indicate the spring cracking range. (4) A lock-wire feature secures a given pressure setting. Typical specifications of a relief valve include (a) Working pressure. It can be up to 6000 psig (414 bar), or up to 8000 psig (552 bar) during relief, with no internal seal damage.
Table 1.5 Important Types of Relief Valves
(1) Temperature and pressure relief valves: used in water heater and hot water storage tank applications to provide automatic temperature and pressure protection to hot water supply tanks and hot water heaters.
(2) Reseating temperature and pressure relief valves: automatic reseating temperature and pressure relief valves are used in commercial water heater applications to provide automatic temperature and pressure protection to domestic hot water supply tanks and hot water heaters.
(3) Pressure relief valves: used in hot water heating and domestic supply boiler applications to protect against excessive pressures on all types of hot water heating supply boiler equipment.
(4) Poppet style relief valves: calibrated pressure relief valves used in commercial, residential, and industrial applications to protect against excessive pressure in systems containing water, oil, or air.
(b) Cracking pressure. With eight springs, it normally ranges from 50 to 6000 psig, in the following ranges: 50–350, 350–750, 750–1500, 1500–2250, 2250–3000, 3000–4000, 4000–5000, and 5000–6000 psig. (c) Temperature rating. For Buna-N rubber it can be –30°F to +225°F (–34°C to +107°C); for highly fluorinated fluorocarbon rubber, –20°F to +200°F (–29°C to +93°C); for ethylene propylene rubber, –70°F to +275°F (–57°C to +135°C); for fluorocarbon rubber, –10°F to +400°F (–23°C to +204°C); and for neoprene rubber, –45°F to +250°F (–43°C to +121°C). The American Society of Mechanical Engineers (ASME) has issued the code for valves (refer to ASME Code Section I or Section VIII for the proper code). (a) Pressure setting points. The ASME Code stipulates that the pressure setting of a safety relief valve does not exceed 10 psig or 20% of the operating pressure of the system or vessel. (b) Capacity guidelines. (i) ASME Code, Section I. The total relieving capacity of the valve shall not be less than the maximum operating pressure of the vessel or line, as designed by the manufacturer. (ii) ASME Code, Section VIII. The minimum relieving capability of the valve shall discharge the total amount of the maximum operating pressure of the system or vessel, without a rise in the vessel pressure in the event of overpowering the system. (c) About sizing. It is important not to oversize a relief valve. Typically, oversizing will result in valve chatter, the rapid opening and closing of the valve seat and disk. If chattering is present, it is often more economical to use two valves. (3) Installation and maintenance of pressure relief valves. It should be noted that only a qualified engineer who is familiar with pressure relief valves should be allowed to perform installation and maintenance. The steps below are routinely followed in all installations and maintenance of pressure relief valves: (a) First, turn off the operating system and allow all pressure to bleed off prior to installation. (b) Then remove all thread protectors and plugs from the valve. (c) Note that valves should only be installed in an upright position, allowing for correct reseating of the valve disk upon opening or popping. (d) Clean the connecting area of all dirt and grime; then apply a small amount of piping compound to the valve inlet side.
(e) Keeping the compound away from the first few threads, tighten the valve by hand to ensure proper thread alignment. (f) Using a properly sized, padded wrench on the hex-shaped valve body, tighten the valve to a firm, snug fit as recommended. (g) Note that discharge piping must be equal to or larger than the outlet size of the valve, so that rated flow characteristics are not compromised. (h) Discharge piping must be anchored and secured in a manner that prevents swaying, rattling, or vibration. (i) If the valve is venting to the atmosphere, take all necessary steps to ensure that the outlet (discharge) is pointed in a direction away from personnel and critical equipment. (j) When testing a valve, the lift lever is designed to be opened only when the system pressure is at 80% of the set pressure (popping or cracking) point. Keep the valve in the open position long enough to ensure that the seating area is flushed clean.
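To make the spring-range and working-pressure figures above concrete, the short sketch below picks the color-coded spring range that covers a requested cracking pressure and checks the request against the 6000 psig working-pressure limit quoted earlier. It is only an illustration of the selection logic; the function and variable names are invented for the example and do not come from any vendor catalog.

```python
# Minimal sketch: choose a cracking-pressure spring range for a relief valve.
# The eight ranges (psig) and the 6000 psig limit are the figures quoted in the
# specification discussion above; all names here are illustrative.

SPRING_RANGES_PSIG = [
    (50, 350), (350, 750), (750, 1500), (1500, 2250),
    (2250, 3000), (3000, 4000), (4000, 5000), (5000, 6000),
]
MAX_WORKING_PRESSURE_PSIG = 6000

def select_spring_range(cracking_pressure_psig):
    """Return the first listed spring range that covers the requested cracking pressure."""
    if not 50 <= cracking_pressure_psig <= MAX_WORKING_PRESSURE_PSIG:
        raise ValueError("Cracking pressure outside the 50-6000 psig spring coverage")
    for low, high in SPRING_RANGES_PSIG:
        if low <= cracking_pressure_psig <= high:
            return (low, high)

if __name__ == "__main__":
    print(select_spring_range(1200))   # -> (750, 1500)
```

In practice the chosen range would still be cross-checked against the applicable ASME code requirements and the manufacturer's data sheet.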
1.3.3 Solenoid Valves
A solenoid control valve is a kind of isolation valve: an electromechanical device that allows an electrical device to control the flow of gas or liquid. The electrical device causes a current to flow through a coil located on the solenoid valve, which in turn produces a magnetic field that displaces a metal actuator. The actuator is mechanically linked to a mechanical valve inside the solenoid valve. This mechanical valve then opens or closes to allow a liquid or gas either to flow through or to be blocked by the solenoid valve. In this control system, a spring returns the actuator and valve to their resting states when the current flow is removed. Figure 1.51 shows a typical control system with a solenoid valve. A coil inside the solenoid valve generates a magnetic field once an electric current flows through it. The magnetic field actuates the ball valve, which changes state to open or close the flow path in the fluid direction indicated by the arrow. Solenoid valves are used wherever fluid flow has to be controlled automatically. Factory automation is a typical example of the frequent use of solenoid valves. A computer running a factory automation program to fill a container with liquid can send a signal to the solenoid valve to open, allowing the container to fill, and then remove the signal to close the solenoid valve and stop the flow of liquid until the next container is in place. A gripper for grasping items on a robot is frequently an
Figure 1.51 A typical flow control system with solenoid valve (courtesy of Z-Tide Valves).
air-controlled device. A solenoid valve can be used to allow air pressure to close the gripper, and a second solenoid valve can be used to open the gripper. If a two-way solenoid valve is used, two separate valves are not needed in this application. Solenoid valve connectors are used to connect solenoid valves and pressure switches.
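The filling sequence described above, where a controller energizes the solenoid valve to fill a container and then removes the signal so the spring closes the valve, can be sketched as a small routine. The digital-output wrapper, channel number, and fill time below are hypothetical; a real installation would drive the coil through a PLC output or relay rated for the valve.

```python
import time

# Hypothetical digital-output wrapper; on a real controller this would write to
# a PLC output, relay board, or GPIO pin driving the solenoid coil.
class SolenoidOutput:
    def __init__(self, channel):
        self.channel = channel
        self.energized = False

    def energize(self):        # current flows through the coil, the valve opens
        self.energized = True
        print(f"channel {self.channel}: coil energized (valve open)")

    def deenergize(self):      # the spring returns the valve to its resting state
        self.energized = False
        print(f"channel {self.channel}: coil deenergized (valve closed)")

def fill_container(valve, fill_time_s):
    """Open the solenoid valve long enough to fill one container, then close it."""
    valve.energize()
    try:
        time.sleep(fill_time_s)   # in practice a level or flow sensor would end the fill
    finally:
        valve.deenergize()        # always remove the signal so the spring closes the valve

if __name__ == "__main__":
    fill_container(SolenoidOutput(channel=3), fill_time_s=2.5)
```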
1.3.3.1 Operating Principles
(1) Solenoid. Solenoid valves are control units which, when electrically energized or deenergized, either shut off or allow fluid flow. The actuator inside a solenoid valve takes the form of an electromagnet. When energized, a magnetic field builds up which pulls a plunger or pivoted armature against the action of a spring. When deenergized, the plunger or pivoted armature is returned to its original position by the spring action. According to the mode of actuation, a distinction is made between direct-acting valves, internally piloted valves, and externally piloted valves. A further distinguishing feature is the number of port connections or the number of flow paths or “ways.” With a direct-acting solenoid valve, the seat seal is attached to the solenoid core. In the deenergized condition, a seat orifice is closed, which opens when the valve is energized. With direct-acting
valves, the static pressure forces increase with increasing orifice diameter, which means that the magnetic forces required for overcoming the pressure force become correspondingly larger. Internally piloted solenoid valves are, therefore, employed for switching higher pressures in conjunction with larger orifice sizes; in this case, the differential fluid pressure performs the main work in opening and closing the valve. The two-way solenoid valves are shut-off valves with one inlet port and one outlet port as shown in Fig. 1.52(a). In the deenergized condition, the core spring, assisted by the fluid pressure, holds the valve seal on the valve seat to shut off the flow. When energized, the core and seal are pulled into the solenoid coil and the valve opens. The electromagnetic force is greater than the combined spring force and the static and dynamic pressure forces of the medium. The three-way solenoid valves have three port connections and two valve seats. One valve seal always remains open and the other closed in the deenergized mode. When the coil is energized, the mode reverses. The three-way solenoid valve shown in Fig. 1.52(b) is designed with a plunger-type core. Various valve operations are available according to how the fluid medium is connected to the working ports in Fig. 1.52(b). The fluid pressure builds up under the valve seat. With the coil deenergized, a conical spring holds the lower core seal tightly against the valve seat and shuts off the fluid flow. Port A is exhausted through R. When the coil is energized the core is pulled in, and the valve seat at Port R is sealed off by the spring-loaded upper core seal. The fluid medium now flows from P to A. Unlike the versions with plunger-type cores, pivoted-armature solenoid valves have all port connections in the valve body. An isolating diaphragm ensures that the fluid medium does not come into contact with the coil chamber. Pivoted-armature valves can be used to obtain any three-way solenoid valve operation. The basic design principle is shown in Fig. 1.52(c). Pivoted-armature valves are provided with manual override as a standard feature. Internally piloted solenoid valves are fitted with either a two-way or a three-way pilot solenoid valve. A diaphragm or a piston provides the seal for the main valve seat. The operation of such a valve is indicated in Fig. 1.52(d). When the pilot valve is closed, the fluid pressure builds up on both sides of the diaphragm via a bleed orifice. As long as there is a pressure differential between the inlet and outlet ports, a shut-off force is available by virtue of the larger effective area on the top of the diaphragm. When the pilot valve is opened, the pressure is relieved from the upper side
Figure 1.52 The operating principle of solenoid valve (courtesy of OMEGA).
of the diaphragm. The greater effective net pressure force from below now raises the diaphragm and opens the valve. In general, internally piloted valves require a minimum pressure differential to ensure satisfactory opening and closing. Internally piloted four-way solenoid valves are used mainly in hydraulic and pneumatic applications to actuate double-acting cylinders. These valves have four port connections: a pressure inlet P, two cylinder port connections A and B, and one exhaust port connection R. An internally piloted four/two-way poppet solenoid valve is shown in Fig. 1.52(e). When deenergized, the pilot valve opens at the connection from the pressure inlet to the pilot channel. Both poppets in the main valve are now pressurized and switch over. Now port connection P is connected to A, and B can exhaust via a second restrictor through R. With externally piloted valves, an independent pilot medium is used to actuate the valve. Figure 1.52(f) shows a piston-operated angle-seat valve with closure spring. In the unpressurized condition, the valve seat is closed. A three-way solenoid valve, which can be mounted on the actuator, controls the independent pilot medium. When the solenoid valve is energized, the piston is raised against the action of the spring and the valve opens. A normally open valve version can be obtained if the spring is placed on the opposite side of the actuator piston. In these cases, the independent pilot medium is connected to the top of the actuator. Double-acting versions controlled by four/two-way valves do not contain any spring. (2) Manifold. The manifold of the solenoid valves consists of a matrix of solenoid valves mounted in modules on a skid with adjustable legs along one direction (Fig. 1.53). The quantity of the mounted solenoid valves depends on the elements to be connected, such as tanks or lines, and on the functions of each of these elements. A plurality of solenoid valves is arranged on the solenoid valve mounting face of the manifold, together with a board carrying an electric circuit for feeding these solenoid valves (Fig. 1.53). Each solenoid valve includes a valve portion containing a valve member and a solenoid operating portion for driving the valve member. The board is mounted on the first side face of the manifold under the solenoid operating portion. The board can be attached and detached while leaving the solenoid valves mounted on the manifold, feeding connectors and indicating lights being respectively provided in positions on the board corresponding to the respective solenoid valves. Each feeding connector is disposed in a position where it connects to a receiving terminal of the solenoid valve in a plug-in manner simultaneously with
Figure 1.53 Several types of manifold of solenoid valves (courtesy of KIP Inc.).
mounting of the solenoid valve to the manifold. Each indicating light is disposed in a position where it can be visually recognized from above the solenoid valve while the solenoid valve remains mounted on the manifold. This manifold allows the functions of one or more tanks to be centralized in a modular way, enhancing the efficiency of the system and control over the process. A manifold of solenoid valves is an automated alternative to flexible hoses and flow-diverting panels with changeover bends. As many valves as the number of functions the element has to perform are connected to the tank or working line. No manual operation is required. The operation is automated, preventing any risk of accidents.
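As a compact summary of the plunger-type three-way valve of Fig. 1.52(b) described above (coil deenergized: working port A is exhausted through R; coil energized: the fluid flows from P to A), the sketch below records the two states in a small lookup table. It is purely illustrative and not a vendor data structure.

```python
# Illustrative state map for the plunger-type three-way valve of Fig. 1.52(b):
# deenergized -> working port A is exhausted through R; energized -> P feeds A.
THREE_WAY_STATES = {
    False: {"P": "blocked", "A": "connected to R (exhaust)"},   # coil deenergized
    True:  {"P": "connected to A", "R": "sealed off"},          # coil energized
}

def describe_flow_path(coil_energized):
    """Return the flow-path description for the given coil state."""
    return THREE_WAY_STATES[coil_energized]

if __name__ == "__main__":
    for state in (False, True):
        print("energized" if state else "deenergized", describe_flow_path(state))
```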
1.3.3.2 Basic Types
Solenoid valves are opened and closed via a solenoid activated by an electrical signal; they include all types of flow paths as well as proportional solenoid valves. In most industrial applications, solenoid valves are arranged as the following five types: (1) Two-way solenoid valves. This type of solenoid valve normally has one inlet and one outlet and is used to permit and shut off fluid flow. The two types of operation for this type are “normally closed” and “normally open.”
(2) Three-way solenoid valves. These valves normally have three pipe connections and two orifices. When one orifice is open, the other is closed and vice versa. They are commonly used to alternately apply pressure to and exhaust pressure from a valve actuator or a single-acting cylinder. These valves can be normally closed, normally open, or universal. (3) Four-way solenoid valves. These valves have four or five pipe connections, commonly called ports. One is a pressure inlet port, and two others are cylinder ports providing pressure to the double-acting cylinder or actuator, and one or two outlets exhaust pressure from the cylinders. They have three types of construction: single solenoid, dual solenoid, or single air operator. (4) Direct mount solenoid valves. These series are two-way, three-way, and four-way solenoid valves that are designed for gang mounting into different quantities of valves. Any combination of normally closed, normally open, or universal valves may be grouped together. These series are standard solenoid valves whose pipe connections and mounting configurations have been replaced by a mounting configuration that allows each valve to be mounted directly to an actuator without the use of hard piping or tubing. (5) Manifolds. Manifolds are fluid distribution devices. They range from simple supply chambers with several outlets to multichambered flow control units including integral valves and interfaces to electronic networks. Manifolds are generally configured for several outlets sharing one inlet or supply chamber; exhaust manifolds can have several inlets sharing one exhaust port. They may have one or more shared supply chambers and any number of outlets. The manifold circuit style can be series or parallel. In a series manifold the pressure supply is ported through one valve to get to the next. In a parallel manifold the inlet ports all share a common pressure supply. Valve specifications to consider for manifolds include integral manifold valves, integral valve types, and solenoid valve power input. Integral valves are integrally assembled with the manifold, as opposed to a base or subplate to which separate valves are attached. Integral valve choice types include manual, solenoid, and air pilot. Manual valves are manually adjusted or actuated via knob, lever, or other manual device.
1.3.3.3 Technical Specifications
Solenoid valves are composed of several parts such as the solenoid coil, electrical connector, bonnet nut, seal cartridge, O-rings, end connector, body, and union nut. All these components are critical to the overall
performance of solenoid valves. If there is any malfunction, it will affect the entire operation of the system, whether an automotive starter system, an industrial air hammer, or an electric bell assembly. That is why solenoid valves should always be maintained and regularly checked in order to keep them functioning at their best. Performance specifications for solenoid valve connectors include connection voltage, nominal power, jacket material, conductor size, insulation group, clamping voltage, and bend radius. Captive screw solenoid valve connectors prevent the attachment screw from being lost. Low-profile solenoid valve connectors and right angle solenoid valve connectors allow for installation in tight spaces. Options for solenoid valve connectors include indicator lights and surge suppression. Solenoid valve connectors can also be part of a molded assembly, which saves installation time. Solenoid valve connectors vary in terms of applications and approvals. Some products are suitable for applications such as steam, air, gas, water, pure water, light oil, heavy oil, or high-temperature fluids. Others are designed for cryogenics or corrosive fluids. Complex pneumatic and hydraulic circuits can utilize manifolds with interfaces to sophisticated electronic networks. Applications, port specifications, flow and pressure specifications, manifold circuit style, and valve specifications are all important parameters to consider when searching for manifolds. Additional specifications to consider for manifolds include communication network, body materials, features, and operating temperature. Common applications for manifolds include general purpose, gas, pneumatic or compressed air, pneumatic or vacuum, water, steam, marine, coolant, refrigerant, cryogenic, high temperature, hydraulic fluid, oil or fuel, slurry, high viscosity, general chemical, corrosive or solvent chemical, sanitary, food processing, and medical or pharmaceutical. Important port specifications to consider when searching for manifolds include supply ports, outlet ports, and port types. Supply ports are the number of independent fluid supplies that can be interfaced with the manifold. Outlet ports specify the number of outlets. This is frequently specified as the number of ports or valves that are or can be attached to the manifold. Port type choices include quick connect and metric thread. Flow and pressure specifications that are important to consider when selecting manifolds include maximum flow for gas or air, maximum flow for liquid, and maximum pressure.
1.3.4 Float Valves
Float control systems monitoring the liquid or powder levels of containers such as tanks and reservoirs are installed with two kinds of sensors:
float switches and float valves. In a float control system, float switches are used to detect liquid or powder levels, or interfaces between liquids; float valves control the liquid or powder level in elevated tanks, standpipes, or storage reservoirs and modulate the reservoir flow to maintain a constant level. Figure 1.54 is an assumed float control system monitoring the liquid level of a high-temperature or high-pressure tank with both float switches and float valves. Three switches are located along the right side of this tank to detect three different levels of liquid by turning on either alarms or signals once the liquid reaches them, respectively. The two valves set at the top and bottom of this tank are used to input or output the liquid according to the control requirements to maintain the appropriate levels in this tank.
1.3.4.1 Operating Principle
Float control systems require both float switches and float valves to accomplish their functions. Figure 1.55(a) is the diagram of a sample float switch, and Fig. 1.55(b) is the diagram of a sample float valve. (1) Float switch. (Fig. 1.55(a)) Float switches can be used either as alarm devices or as control switches, turning something ON or OFF, such as a pump, or sending a signal to a valve actuator. What makes level switches special is that they have a switched
Figure 1.54 An assumed float control system for a tank installed with three float switches along the right side and two float valves at the top and the bottom.
Figure 1.55 (a) A float switch and (b) a float valve.
output and can be either electromechanical or solid state, either normally open or normally closed. Float switches provide industrial control for motors that pump liquids from a sump or into a tank. For the tank operation, a float operator assembly is attached to the float switch by a rod, chain, or cable. The float switch is actuated based on the location of the float in the liquid. The float switch contacts are open when the float forces the operating lever to the Up position. As the liquid level falls, the float and operating lever move downward. The contacts can directly activate a motor or provide input for a logic system to fill the tank. As the liquid level rises, the float and operating lever move upward. When the float reaches a preset high level, the float switch contacts open, deactivating the circuit and stopping the motor. However, sump operation is exactly the opposite of tank operation. (2) Float valve. (Fig. 1.55(b)) A float valve is mounted on the tank or reservoir inlet, below or above the requested water level. The float pilot can either be assembled on the main valve (for abovelevel installation) or be connected to the main valve by a command tube. The valve closes when the water level rises by filter discharge pressure acting with the tension spring on the top of the diaphragm in the valve cover chamber, thus raising the float to its closed position. It opens when the float descends due to a
drop in water level by filter discharge pressure acting under the valve disk. The difference between maximum and minimum levels is very small and is affected by the length of the float pilot arm. In practice, the water level is maintained continuously at the maximum point as long as the upstream flow exceeds the downstream flow.
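The tank-filling behavior described for the float switch in item (1) above, where the pump runs until the float reaches the preset high level and restarts when the level falls, is a simple hysteresis control. The sketch below illustrates that logic with hypothetical level thresholds; sump operation would invert the pump decision, as noted above.

```python
# Minimal sketch of float-switch tank control: pump ON below the low level,
# pump OFF once the float reaches the preset high level (hysteresis band).
# Thresholds and the pump flag are illustrative, not tied to real hardware.

LOW_LEVEL = 0.20    # fraction of tank height at which the contacts call for filling
HIGH_LEVEL = 0.90   # fraction of tank height at which the contacts open

def update_pump(level, pump_running):
    """Return the new pump state for a tank-filling application."""
    if level <= LOW_LEVEL:
        return True          # low level: energize the pump motor
    if level >= HIGH_LEVEL:
        return False         # float at the preset high level: stop the motor
    return pump_running      # between thresholds: keep the previous state

if __name__ == "__main__":
    state = False
    for level in (0.95, 0.60, 0.15, 0.50, 0.92):
        state = update_pump(level, state)
        print(f"level={level:.2f}  pump={'ON' if state else 'OFF'}")
```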
1.3.4.2 Specifications and Application Guide
Typical options for poles and throws are available: most float switches and valves have either one or two poles and one or two throws, but some manufacturers will produce custom level switches to specification. The measuring range is probably the most important specification to examine when choosing switches and valves. Also of critical concern are the ratings for the current and voltage the switches require. Depending on the needs of the application, float switches and valves can be mounted in different ways: on the top, bottom, or side of the container holding the substance to be measured. Among the technologies for measuring level are air bubbler technology, capacitive or RF admittance, differential pressure, electrical conductivity or resistivity, mechanical or magnetic floats, optical units, pressure membrane, radar or microwave, radio frequency, rotation paddle, ultrasonic or sonic, and vibration or tuning fork technology. Analog outputs from level switches can be current or voltage signals; pulse or frequency outputs are also possible. Computer signal outputs are usually serial or parallel. Float switches and valves can have analog, digital, or video displays. Control of the devices can be analog with switches, dials, and potentiometers; digital with menus, keypads, and buttons; or handled by a computer. Features that can make float switches and valves more desirable include being programmable, having controller, recorder, or totalizer functions, and having a built-in alarm indicator, whether audible or visible. Also important for some applications are sanitary ratings and the ability to handle slurries with suspended solids, such as wastewater or sewage.
1.3.4.3 Calibration
Many calibration laboratories are finding themselves facing more stringent accuracy requirements. Local gravity, the effects of air buoyancy on the piston gauge, and masses and temperature all affect the accuracy of the
results as well as the uncertainty of the pressure being generated and should be calibrated or measured. The error may in fact be greater—in some cases much greater—than anticipated. The quality of the calibration output is dependent on the skill and knowledge of the operator. A digital transfer standard can improve accuracy in these cases, because it does not generate pressure and is not affected by local gravity, the effects of air buoyancy, or the age and condition of masses. If a high-accuracy digital transfer standard is used, these error sources are not present. The pressure generated for the calibration is more likely closer to the true pressure, resulting in better calibration. Automated digital transfer standards deliver equal or higher precision than that provided by many industrial dead-weight gauges. The precision of digital transfer standards usually ranges from 0.01% F.S. (full scale) to 0.003% F.S., with total accuracies depending on the calibration standard used. The total accuracy of the digital transfer standard includes the accuracy of the primary standard used to calibrate it. Interfaces to PCs can be either through calibration software and gauge monitoring devices, or through the PC interfaces found in digital transfer standards. The interfaces to PCs dramatically improve the timeliness of reports and simplify the reporting process. Templates created in off-the-shelf word-processing or spreadsheet programs and accessed by calibration systems can include organization logos, addresses and contact information, and calibration standard information (e.g., the pressure range and the serial number and other identifying information for the device under test). Data from calibrations can be automatically incorporated into the template files. This process allows managers to report as-found/as-left data and a host of other parameters and then save the files for retrieval later. A calibration sequence (a set of pressure points) is sent to the Model 2492 either via the IEEE 488 interface using a remote computer interface or locally via the system keyboard. The system’s microprocessor calculates the masses required to generate a requested pressure, automatically correcting for environmental factors (e.g., local gravity, air density, head pressure, and temperature). The automated mass loading system selects and loads the amount of mass needed to generate the desired pressure. Once the masses are loaded, a self-recharging pump pressurizes the system to float the masses. When the masses reach the proper float position, the pump is turned off to ensure that a static pressure condition is met. A float position indicator and resistance temperature devices evaluate float position, sink rate, and temperature in determining a valid float condition. When conditions are met, the system signals the user (or host PC in remote operations), and the balance is maintained until a new command is entered.
If desired, the remote host computer can automatically acquire data from the device under test without an operator present.
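The mass-loading computation mentioned above, in which the system works out the masses required to generate a requested pressure while correcting for local gravity and air buoyancy, follows the usual dead-weight piston-gauge relation P = m g (1 − ρ_air/ρ_mass) / A_eff. The sketch below evaluates that textbook relation with illustrative values; it is not the algorithm of the Model 2492 or any other specific instrument.

```python
# Illustrative dead-weight (piston gauge) mass calculation:
#   P = m * g_local * (1 - rho_air / rho_mass) / A_eff
# so the mass needed for a requested pressure is
#   m = P * A_eff / (g_local * (1 - rho_air / rho_mass))
# All numbers below are example values, not instrument data.

def mass_for_pressure(pressure_pa, area_m2, g_local=9.80665,
                      rho_air=1.2, rho_mass=7920.0):
    """Mass (kg) to load so the piston generates the requested pressure."""
    buoyancy_correction = 1.0 - rho_air / rho_mass
    return pressure_pa * area_m2 / (g_local * buoyancy_correction)

if __name__ == "__main__":
    # 1 MPa on a piston of 1 cm^2 effective area (example values only)
    m = mass_for_pressure(pressure_pa=1.0e6, area_m2=1.0e-4)
    print(f"required mass: {m:.3f} kg")
```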
1.3.5 Flow Valves
Measuring the flow of liquids and gases is a critical need in many industrial production processes such as chemical packaging, wastewater treatment, and aircraft manufacturing. In some industrial operations, the ability to conduct accurate flow measurements is so important that it can make the difference between making a profit and taking a loss. In other cases, failure in flow control can cause serious (or even disastrous) results. Flow control systems are designed specifically for operation with a wide range of liquids and gases in both safe and hazardous areas, in hygienic, high-temperature, and high-pressure environments, and for use with aggressive media. Some applications for this purpose are pump protection, cooling circuit protection, high and low flow rate alarms, and general flow monitoring.
1.3.5.1 Operating Principle
The technologies for gas flow meters and liquid flow meters vary widely. The most common types of operation principles are inferential flow measurement, positive displacement measurement, velocity measurement, true mass flow measurement, and thermodynamic loss measurement. The main types of operating principles for gas flow meters and liquid flow meters are given below: (1) Gas flow switches and liquid flow switches, velocity. Gas flow switches and liquid flow switches, velocity, are used to measure the flow or quantity of a moving fluid in terms of velocity, such as feet per minute. The most common types are inferential flow measurement, positive displacement, velocity meters, and true mass flow meters. Inferential measurement refers to the indirect measurement of flow by directly measuring another value and inferring the flow based on well-known relationships between the directly measured value and flow. The use of differential pressure as an inferred measurement of a liquid’s rate of flow is the most common type of unit in use today. Positive displacement meters take direct measurements of liquid flows. These devices divide the fluid into specific increments and move it on. The total flow is an accumulation of the measured increments, which can be counted by mechanical or electronic techniques. They are often used for high-viscosity fluids.
Velocity-type gas flow switches and liquid flow switches are devices that operate linearly with respect to volume flow rate. Because there is no square-root relationship, as with differential pressure devices, their rangeability is greater. (2) Liquid flow switches and gas flow switches, mass. Liquid flow switches and gas flow switches, mass, are devices used for measuring the flow or quantity of a moving liquid or gas, respectively, in terms of unit of mass per unit time, such as pounds per minute. These may be sensors with electrical output or may be standalone instruments with local displays and controls. The most common types of liquid flow switches and gas flow switches, mass, are true mass flow meters. True mass flow meters are devices that measure mass rate of flow directly, such as thermal meters or Coriolis meters. The most important specifications for mass gas flow meters and liquid flow meters and sensors are the flow range to be measured and whether liquids or gases will be the measured fluids. Also important are operating pressure, the fluid temperature, and accuracy. Typical electrical outputs for mass gas flow meters and liquid flow meters are analog current, voltage, frequency, or switched output. Computer output options can include serial and parallel interfaces. These sensors can be mounted either as inline or insertion devices. Inline sensors can be held in place by using flanges, threaded connections, or clamps. Insertion style sensors are typically threaded through a pipe wall and stick directly in the process flow. (3) Gas flow switches and liquid flow switches, volumetric. Gas flow switches, volumetric, provide output based on the measured flow of a moving gas in terms of volume per unit time, such as cubic feet per minute. Liquid flow switches, volumetric, are devices with a switch output used for measuring the flow or quantity of a moving fluid in terms of a unit of volume per unit time, such as liters per minute. The basis of volumetric gas flow switch and liquid flow switch selection is a clear understanding of the requirements of the particular application. With most liquid flow measurement instruments, the flow rate is determined inferentially by measuring the liquid’s velocity or the change in kinetic energy. Velocity depends on the pressure differential that is forcing the liquid through a pipe or conduit. Because the pipe’s cross-sectional area is known and remains constant, the average velocity is an indication of the flow rate. The basic relationship for determining the liquid’s flow rate in such cases is Q = V × A, where Q is the liquid flow through
a pipe, V is the average velocity of the flow, and A is the cross-sectional area of the pipe. Other factors that affect liquid flow rate include the liquid’s viscosity and density. Specific applications should be discussed with a volumetric gas flow switch manufacturer before purchasing to ensure proper fit, form, and function. Both volumetric gas flow switches and volumetric liquid flow switches are available with four different meter types: inferential flow meters, positive displacement meters, velocity meters, and true mass flow meters. The basic operating principle of differential pressure flow meters is that the pressure drop across the meter is proportional to the square of the flow rate. The flow rate is obtained by measuring the pressure differential and extracting the square root (a short computational sketch of these relations follows this list). Direct measurements of liquid flows can be made with positive-displacement flow meters. These devices divide the liquid into specific increments and move it on. The total flow is an accumulation of the measured increments, which can be counted by mechanical or electronic techniques. They are often used for high-viscosity fluids. (4) Pneumatic relays. Pneumatic relays control output air flow and pressure in response to a pneumatic input signal. They can perform simple functions such as boosting or scaling the output, or complex reversal, biasing, and math functions.
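The two relations quoted in item (3) above, Q = V × A for velocity-based measurement and the square-root dependence of flow on differential pressure, are evaluated numerically in the sketch below. The differential-pressure version is written as a ratio against a known calibration point, since the full meter coefficient depends on the particular device; all numbers are illustrative.

```python
import math

def volumetric_flow(velocity_m_s, pipe_diameter_m):
    """Q = V * A for a full circular pipe (m^3/s)."""
    area = math.pi * (pipe_diameter_m / 2.0) ** 2
    return velocity_m_s * area

def dp_meter_flow(dp_pa, q_ref_m3_s, dp_ref_pa):
    """Flow from a differential-pressure meter, scaled from a known
    calibration point using Q proportional to sqrt(dP)."""
    return q_ref_m3_s * math.sqrt(dp_pa / dp_ref_pa)

if __name__ == "__main__":
    # Example: 2 m/s average velocity in a 100 mm pipe
    print(f"Q = {volumetric_flow(2.0, 0.100):.4f} m^3/s")
    # Example: meter calibrated at 0.01 m^3/s for 10 kPa, now reading 2.5 kPa
    print(f"Q = {dp_meter_flow(2_500, 0.01, 10_000):.4f} m^3/s")
```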
1.3.5.2 Specifications and Application Guide
The most common specifications for flow control systems are pressure drop, system efficiency, and process modifications. (1) Pressure drop. Pressure drop is addressed by the following items: (a) Reduce piping system pressure loss by increasing the line size and rerouting the pipes. (b) Optimize process equipment pressure loss in the flow lines. (c) Reduce or eliminate control valve pressure loss. (2) System efficiency. System efficiency includes the following operations: (a) Operate rotating equipment at or close to its best efficiency point. (b) Split a single system into more than one to achieve higher aggregate operation and maintenance (O&M) efficiency. (c) Downsize the pump (or trim the impeller or modify compressor wheels) and motors; this is only an end result of system improvement or optimization.
(d) Modify a process control scheme, including loading and unloading and spillback controls on positive displacement compressors, and the location of throttling control. (e) Install variable frequency drives (VFDs); investigate needs and implications carefully, as the variable frequency drive is not always the best solution. (f) Improve operating efficiency by implementing an energy recovery strategy, for example, recovering the heat of compression for heating a building or for replacing low-pressure steam heating. (g) Replace standard efficiency motors with premium efficiency motors. (3) Process modifications. Process modifications include the following items: (a) Lower the delivery pressure (without impacting process requirements) at the user(s). (b) Lower the pressure profile of the system. (c) Incorporate an advanced process control scheme and algorithm to eliminate operator intervention. (d) Reroute or resequence the process streams. (e) Reduce the compressor inlet temperature. (f) Regenerate or replace catalyst (same for inline filters) to eliminate an excessive pressure drop due to carbon build-up. (g) Reduce or eliminate the minimum flow bypass recirculation flow.
1.3.5.3 Calibration
The calibration of a flow control system is facilitated with the following equipment: (1) Gas flow calibrator. The gas flow calibrator (GFC) is an automated, scalable, sonic-nozzle-based, state-of-the-art test system providing exceptional metrology for all gas metering technologies. The customizable, turnkey system includes hardware, software, and accessories, and supports testing on different meter types at pressures ranging from atmospheric to over 100 psig with closed-loop control. (2) Flow controller. The flow controller (FC) was the first commercially available flow computer that accommodates combinations of sonic nozzles configured in a multiple sonic nozzle array. This unique functionality allows the flow controller to act as a standalone controller. The flow controller can interface with a single sonic nozzle and/or subsonic Venturi, or with up to several sonic nozzles, thus offering a wide ratio range in flow.
(3) Sonic nozzles. The sonic nozzle, also known as a “critical flow Venturi” or “critical flow nozzle,” has rapidly gained acceptance as a flow measurement standard and flow meter. Sonic nozzles are now utilized in many diverse applications by the aerospace, automotive, energy, and metrology industries. Sonic nozzles can be used as a calibration standard for gas flow meters or any flow measurement device. By design, sonic nozzles are constant volumetric flow meters. However, with the use of a regulated pressure supply, the sonic nozzle becomes a state-of-the-art mass flow meter (a rough computational sketch is given after this list). (4) Flow computer. Flow computers work with a single sonic nozzle and/or subsonic Venturi and can also accommodate combinations of sonic nozzles configured in a multiple sonic nozzle array. With the increasing popularity of installing multiple sonic nozzles with binary throat areas in a common inlet plenum, Flow Systems introduced the flow computer to meet the challenges of this application. Flow computers are able to interface with single, interchangeable sonic nozzles and/or subsonic Venturis, or with several sonic nozzles in a multiple sonic nozzle array. This unique solution (hardware and software) combines instrumentation, data acquisition and control, computation, monitoring, and data logging to meet the needs of the high-end flow measurement user.
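To illustrate why a sonic nozzle with a regulated supply behaves as a mass flow meter, as noted in item (3), the sketch below evaluates the commonly used critical-flow relation, in which the choked mass flow depends only on the throat area, the upstream stagnation pressure and temperature, the gas properties, and a discharge coefficient. The ideal-gas critical flow function is used, and the discharge coefficient and operating conditions are illustrative assumptions rather than data from any standard or calibration.

```python
import math

def critical_flow_function(k):
    """Ideal-gas critical flow function C* for isentropic exponent k."""
    return math.sqrt(k) * (2.0 / (k + 1.0)) ** ((k + 1.0) / (2.0 * (k - 1.0)))

def sonic_nozzle_mass_flow(p0_pa, t0_k, throat_area_m2,
                           k=1.4, r_specific=287.05, cd=0.99):
    """Choked (critical) mass flow through a sonic nozzle, kg/s:
    m_dot = Cd * A* * C* * p0 / sqrt(R_specific * T0)."""
    return (cd * throat_area_m2 * critical_flow_function(k) * p0_pa
            / math.sqrt(r_specific * t0_k))

if __name__ == "__main__":
    # Example: air at 700 kPa, 293 K through a 2 mm diameter throat (illustrative)
    area = math.pi * (0.002 / 2.0) ** 2
    print(f"mass flow ~ {sonic_nozzle_mass_flow(700e3, 293.0, area):.5f} kg/s")
```

Because the upstream pressure and temperature fix the mass flow once the nozzle is choked, holding the supply pressure constant turns the nozzle into a stable mass flow reference.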
Bibliography AccesIO (http://www.accesio.com). 2000. http://www.accesio.com/manuals/lvdt-8 .pdf. Accessed date: April 2005. Alibaba (http://www.alibaba.com). 2005. http://www.alibaba.com/productsearch/ Ultrasonic_Distance_Meter.html. Accessed date: April. AQUATIC (http://www.aquaticeco.com). 2005. http://www.aquaticeco.com/index .cfm/fuseaction/listings.categories/ssid/353. Accessed date: May. Asahi-America (http://asahi-america.com). 2005a. http://asahi-america.com/ documents/documents/Pneumatic%20Actuator%20Intro.pdf. Accessed date: May. Automation Direct (http://web5.automationdirect.com). 2005. http://web5 .automationdirect.com/adc/Overview/Catalog/Sensors_-z-_Encoders/ Ultrasonic_Sensors. Accessed date: April. BEAMEX (http://www.beamex.com). 2005. http://www.beamex.com/services/ services.html. Accessed date: May. Calibrator Depot (http://www.calibratordepot.com). 2005. Calibrators and Calibration Services. http://www.calibratordepot.com/index.asp. Accessed date: June. Carl T. Lira (
[email protected]). 2001. Check Valve and Pump Description. http:// www.egr.msu.edu/~lira/supp/Check Valves and Pumps.mht. Accessed date: September 2007.
Chris Warnett (Rotork Controls Inc.). 2004. A descriptive definition of valve actuators. http://www.valve-world.net/actuation/ShowPage.aspx?pageID=557. Accessed date: September 2007. COMPACT (http://www.compact4.com). 2005a. http://www.compact4.com/ technical.html. Accessed date: May. Crane Valve (http://www.cranevalve.com/index.htm). 2007. Electric Actuators. http://www.cranevalve.com/Act_Elec.htm. Accessed date: September. DALSA (http://www.dalsa.com). 2005a. http://www.dalsa.com/markets/ccd_vs_ cmos.asp. Accessed date: April. Direct Industry (http://www.directindustry.com). 2005a. http://www.directindustry .com/industrial-manufacturer/magnetic-sensor-70932.html. Accessed date: April. Direct Industry (http://www.directindustry.com). 2005b. http://www.directindustry .com/industrial-manufacturer/magnetic-switch-72332.html. Accessed date: April. Direct Industry (http://www.directindustry.com). 2005c. http://www.directindustry .com/industrial-manufacturer/hall-effect-sensor-71708.html. Accessed date: April. Direct Industry (http://www.directindustry.com). 2005d. http://www.directindustry .com/industrial-manufacturer/reed-switch-74782.html. Accessed date: April. DMN Digital (http://www.digitalmedianet.com). 2005. http://cad.digitalmedianet .com/articles/viewarticle.jsp?id=25291. Accessed date: April. EFLOTS-INFO (http://efloatswitch.info/). 2005. Accessed date: May. EMERSON (http://www.emersonprocess.com). 2005a. http://www.documentation .emersonprocess.com/groups/public/documents/book/cvh99.pdf. Accessed date: May. EMERSON (http://www.emersonprocess.com). 2005b. http://www.emersonprocess .com/valveautomation/bettis/Downloads/Hydraulic_Manuals.htm. Accessed date: May. FineMech (http://www.finemech.com). 2005. http://www.finemech.com/ Sensors%20category%202.shtml?gclid=COuHoPvhsosCFQPilAodU1qWvw. Accessed date: April. Forbes Marshall Inc. (http://www.forbesmarshall-inc.com). 2007. Pneumatic Actuator Manual. http://www.forbesmarshall-inc.com/ProductPage .asp?ProdGrpId=29&ProdId=79. Accessed date: September. Fraunhofer-ILT (http://www.ilt.fraunhofer.de). 2005. http://www.ilt.fraunhofer.de/ eng/ilt/pdf/eng/products/HZ_LTS-2D.pdf. Accessed date: December. Free Study (http://www.freestudy.co.uk). 2005a. http://www.freestudy.co.uk/ control/t2.pdf. Accessed date: May. GlobalSpec (http://www.globalspec.com). 2005a. http://sensors-transducers .globalspec.com/SpecSearch/Suppliers?QID = 8900153 & Comp = 106 . Accessed date: April. GlobalSpec (http://www.globalspec.com). 2005b. http://search.globalspec.com/ productfinder/findproducts?query=color%20sensor. Accessed date: April. GlobalSpec (http://www.globalspec.com). 2005c. http://search.globalspec.com/ productfinder/findproducts?query = Ultrasonic%20distance%20sensors . Accessed date: April. GlobalSpec (http://www.globalspec.com). 2005d. http://sensors-transducers .globalspec.com/LearnMore/Sensors_Transducers_Detectors/Linear_ Position_Sensing/LVDT_Position_Sensors. Accessed date: April.
GlobalSpec (http://www.globalspec.com). 2005e. http://sensors-transducers .globalspec.com/LearnMore/Sensors_Transducers_Detectors/Rotary_ Position_Sensing/Rotary_Position_Sensors. Accessed date: April. GlobalSpec (http://www.globalspec.com). 2005f. http://search.globalspec.com/ Search?query=hydraulic&show=total. Accessed date: May. GlobalSpec (http://www.globalspec.com). 2005g. http://flow-control.globalspec .com/Specifications/Flow_Transfer_Control/Valve_Actuators_Positioners/ Electric_Electronic_Motor_Actuators. Accessed date: June. GlobalSpec (http://www.globalspec.com). 2005h. http://flow-control.globalspec .com/LearnMore/Flow_Control_Flow_Transfer/Valves/Check_Valves . Accessed date: June. Grainger (http://www.grainger.com). 2005. http://www.grainger.com/production/ info/limit-switch.htm. Accessed date: April. Hamamatsu (http://www.sales.hamamatsu.com). 2005. http://www.sales.hamamatsu .com/en/products/solid-state-division/color_sensors.php. Accessed date: April. HASCO (http://www.hascorelays.com). 2003. http://www.hascorelays.com/reed_ switches.asp. Accessed date: April 2005. Honey Well (http://hpsweb.honeywell.com). 2005a. http://hpsweb.honeywell.com/ Cultures/en-US/IndustrySolutions/PulpPaperPrinting/PaperMeasurement/ ColourMeasurement/default.htm. Accessed date: April. Honey Well (http://hpsweb.honeywell.com). 2005b. http://content.honeywell.com/ sensing/prodinfo/solidstate/technical/chapter2.pdf. Accessed date: April. Honey Well (http://hpsweb.honeywell.com). 2005c. http://www.ssec.honeywell .com/magnetic/datasheets/sae.pdf. Accessed date: April. Honey Well (http://hpsweb.honeywell.com). 2005d. http://sensing.honeywell.com/. Accessed date: April. Honey Well (http://hpsweb.honeywell.com). 2005e. http://www.honeywell.com .pl/pdf/automatyka_domow/palniki_do_kotlow/golden/Silowniki/V4062.pdf. Accessed date: April. Hydraulic-Tutorial (http://www.hydraulicsupermarket.com). 2005a. http://www .hydraulicsupermarket.com/technical.html. Accessed date: May. IQS Directory (http://www.iqsdirectory.com). 2007. Actuators. http://www .iqsdirectory.com/actuators/. Accessed date: September. Johnson Controls, Inc. 1997. Product/Technical Bulletin for VA-7200 Electric Valve Actuator. KEYENCE-America (http://www.keyance.com). 2005. http://www.keyence.com/ products/sensors/rgb/rgb.php. Accessed date: April. KIP Inc. (http://www.kipinc.com). 2005. Solenoid Valves. http://www.norgren .com/kip/. Accessed date: June. LESLIECONTROLS (http://www.lesliecontrols.com). 2005. http://www .lesliecontrols.com/Handbooks/Handbooks.html. Accessed date: May. Leuze Electronics (http://www.leuze.de). 2003. http://www.leuze.de/downloads/ los/03/brusds_e.pdf. Accessed date: April of 2005. Liteon-Semi (http://www.liteon-semi.com). 2005. http://www.liteon-semi.com/_ en/02_cis/00_overview.php. Accessed date: May. Lvdt.co.uk (http://www.lvdt.co.uk). 2005a. http://www.lvdt.co.uk/howtheywork .html. Accessed date: April. Lvdt.co.uk (http://www.lvdt.co.uk). 2005b. http://www.lvdt.co.uk/selection.html. Accessed date: April.
Machine Design (http://www.machinedesign.com). 2003. http://www.machinedesign .com/Switch Tips Bimetallic switches.mht. Accessed date: February 2005. Machine Design (http://www.machinedesign.com). 2005a. http://productsearch .machinedesign.com/browse/Motion_Controls/Limit_Switches. Accessed date: April. Machine Design (http://www.machinedesign.com). 2005b. http://productsearch .machinedesign.com/browse/Motion_Controls/Linear_Rotary_Motion_ Components. Accessed date: May. Macro Sensors (http://www.macrosensors.com). 2005a. http://www.macrosensors .com/lvdt_macro_sensors/lvdt_tutorial/why_use_lvdt.html. Accessed date: April. Macro Sensors (http://www.macrosensors.com). 2005b. http://www.macrosensors .com/ms-lvdt_faq-tutorial.html. Accessed date: April. Maintenance-World (http://www.maintenanceworld.com). 2005. http://www .maintenanceworld.com/Articles/wmaorg/Valve.pdf. Accessed date: May. Making Things (http://www.makingthings.com). 2005. http://www.makingthings .com/teleo/products/acc_datasheets/acc_usd_001.htm. Accessed date: April. Metra Mess- und Frequenztechnik (http://www.mmf.de). 2005. Piezoelectric Accuracy and Calibration. http://www.mmf.de/introduction.htm. Accessed date: June. Microchip (http://www.microchip.com). 2005. http://ww1.microchip.com/ downloads/en/DeviceDoc/39757a.pdf. Accessed date: May. Migatron (http://www.migatron.com). 2005. http://www.migatron.com/ understanding_ultrasonics.htm. Accessed date: April. MORGAN (http://www.morganelectroceramics.com). 2005. http://www.morgan electroceramics.com/tutorials/piezoguide1.html. Accessed date: May. NewArk (http://www.newark.com). 2005. http://www.newark.com/pdfs/techarticles/ pepperl.pdf. Accesses date: April. NEWPORT (http://www.newport.com). 2005. http://www.newport.com/ Manual-Actuator-Selection-Guide/168530/1033/catalog.aspx. Accessed date: May. NI (http://zone.ni.com). 2005. http://zone.ni.com/devzone/cda/tut/p/id/3602. Accessed date: May. Nook Industries (http;//www.nookindustries.com). 2004. http://www.nookindustries .com/pdf/CylinderLimitSwitch.pdf. Accessed date: April 2005. NPL (http://www.npl.co.uk). 2005. http://www.npl.co.uk/pressure/guidance/ nonprescals.html. Accessed date: May. OMEGA (http://www.omega.com). 2005a. http://www.omega.com/techref/pdf/ STRAIN_GAGE_TECHNICAL_DATA.pdf. Accessed date: May. OMEGA (http://www.omega.com). 2005b. http://www.omega.com/techref/ flowmetertutorial.html. Accessed date: May. OMEGA (http://www.omega.com). 2005c. http://www.omega.com/techref/ techprinc.html. Accessed date: May. OMRON (http://www.omoron.com). 2005a. http://www.simcotech.com/sensors/ omronrgb.pdf. Accessed date: April. OMRON (http://www.omron.com). 2005b. http://www.mikrokontrol.co.yu/katalog/ datasheets/senzori/foto/e3mc.pdf. Accessed date: April. OMRON (http://www.omron.com). 2005c. http://www.sti.com/switches/swdatash .htm. Accessed date: April.
PAControl (http://electricalequipment.pacontrol.com). 2005a. http://electricalequip ment.pacontrol.com/proximitysensors.html. Accessed date: April. PAControl (http://electricalequipment.pacontrol.com). 2005b. http://electricalequip ment.pacontrol.com/capacitiveproximitysensors.html. Accessed date: April. PAControl (http://electricalequipment.pacontrol.com). 2005c. http://electricalequip ment.pacontrol.com/inductiveproximitysensors.html. Accessed date: April. PAControl (http://electricalequipment.pacontrol.com). 2005d. http://electricalequip ment.pacontrol.com/magneticproximitysensors.html. Accessed date: April. PCB PIEZOTRONICS (http://www.pcb.com). 2005. Mounting_force_sensors.pdf. Accessed date: June. PEPPERL+FUCHS (http://www.am.pepperl-fuchs.com). 2005a. http://www.am .pepperl-fuchs.com/products/productfamily.jsp?division=FA&productfamily_ id=1455. Accessed date: April. PEPPERL+FUCHS (http://www.am.pepperl-fuchs.com). 2005b. http://www.am .pepperl-fuchs.com/products/productfamily.jsp?division=FA&productfamily_ id=1575. Accessed date: April. PHILIPS. 2000. http://www.nxp.com/acrobat_download/various/SC17_GENERAL_ MAG_2-1.pdf. Accessed date: April 2005. Physics-psu.edu. 2004. http://class.phys.psu.edu/p457/experiments/html/hall_ effect_2004.htm. Accessed date: April 2005. PI (http://www.physikinstrumente.com). 2005. http://www.physikinstrumente .com/en/products/piezo_tutorial.php. Accessed date: May. PIEZO (http://www.piezo.com). 2005. http://www.piezo.com/tech2intropiezotrans .html. Accessed date: May. Plant-Maintenance (http://www.plant-maintenance.com). 2005. http://www .plant-maintenance.com/maintenance_articles_valves.shtml. Accessed date: May. Red Soft (http://www.redsofts.com). 2005a. http://www.redsofts.com/articles/ read/386/70578/Electric_Linear_Actuators_The_Rotary_Motion_Producer .html. Accessed date: May. Red Valve (http://www.redvalve.com). 2005. http://www.redvalve.com/control.html. Accessed date: May. Rheodyne Corporation (http://www.rheodyne.com). 2007. Operating Instruction for Pneumatic Actuators. http://www.rheodyne.com/support/product/instructions/. Accessed date: September. Robotica (http://www.robotica.co.uk). 2005. http://www.robotica.co.uk/robotica/ ramc/products/sensors/ultra_sensor.htm. Accessed date: April. SENSORS (http://www.sensorsmag.com). 2005a. http://www.sensorsmag.com/ articles/1298/mag1298/main.shtml. Accessed date: April. SENSORS (http://www.sensorsmag.com). 2005b. http://www.sensorsmag.com/ sensors/article/articleDetail.jsp?id=179165. Accessed date: May. Short Courses (http://www.shortcourses.com). 2005. http://www.shortcourses .com/choosing/sensors/05.htm. Accessed date: April. SICK (http://www.sick.com). 2005a. http://www.sick.com/home/factory/catalogues/ industrial/coloursensors/en.html. Accessed date: April. SICK (http://www.sick.com). 2005b. http://englisch.meyle.de/contract_partner/ sick_magnetic_proximity_sensors.php. Accessed date: April. SIEMENS (http://www.sbt.siemens.com). 2005. Actuators and Valves. http://www .sbt.siemens.com/hvp/components/products/damperactuators/default.asp. Accessed date: June.
SONY (http://www.sony.com). 2005. http://www.docs.sony.com/release/ GDMC520Kguide.pdf. Accessed date: April. SukHamburg (http://www.SukHamburg.de). 2005. http://www.silicon-software .de/download/archive/Laser_Light_Section_e.pdf. Accessed date: December. VNE (http://www.vnestainless.com/default.aspx). 2005. Valves. http://www .vnestainless.com/default.aspx. Accessed date: September 2007. WATTS (http://www.watts.com). 2005. http://www.watts.com/pro/_products_sub .asp?catId=69&parCat=125. Accessed date: May. Young (http://www.youngcalibration.co.uk). 2005. http://www.youngcalibration .co.uk/calibration.htm?gclid=CMSGqLDX9IsCFQ7dlAodBHewVQ. Accessed date: May. Z-Tide Valves (http://www.z-tide.com.tw). 2005. Valves. http://www.z-tide.com.tw/ multi/index-e.htm. Accessed date: June.
2 Computer Hardware for Industrial Control
2.1 Microprocessor Unit Chipset
The microprocessor within personal computers and industrial controllers has evolved continuously, with each newer version being compatible with the previous ones. The major producer of microprocessors has been Intel Corporation (Intel is a contraction of Integrated Electronics) since the early 1970s. Intel marketed the first microprocessor in 1971, named the 4004, which caused a revolution in the electronics industry. With this processor, the functionality started to be programmed by software. However, it could only handle 4 bits of data at a time (a nibble), contained 2000 transistors, had 46 instructions, and allowed 4 kB of program code and 1 kB of data. From this starting point, personal computers and industrial controllers have evolved with the use of Intel microprocessors. (1) First generation. The next generation of Intel microprocessors arrived in 1974, which could handle 8 bits (a byte) of data at a time and were named the 8008, 8080, and 8085. The 8008 had a 14-bit address bus and could thus address up to 16 kB of memory; the 8080 had a 16-bit address bus giving it a 64 kB limit. (2) Second generation. The next generation came with the launch of the 16-bit processors. Intel released the 8086 microprocessor, which was mainly an extension to the original 8080 processor, and thus retained a degree of software compatibility. It had a 16-bit data bus and a 20-bit address bus, and thus had a maximum addressable capacity of 1 MB. The 8086 could handle either 8 or 16 bits of data at a time, although in a messy way. A stripped-down, 8-bit external data bus version called the 8088 was also available. This stripped-down processor allowed designers to produce less complicated systems. An improved architecture version, the 80286, was launched in 1982 and was used in the IBM Advanced Technology (AT). (3) Third generation. In 1985, Intel introduced its first 32-bit microprocessor, the 80386DX. This device was compatible with the previous 8088/8086/80286 (80x86) processors and gave excellent performance, handling 8, 16, or 32 bits at a time. It had full 32-bit data and address buses and could thus address up to 4 GB of physical memory. A stripped-down 16-bit external data bus
and 24-bit address bus version called the 80386SX was released in 1988, which could only access up to 16 MB of physical memory. (4) Fourth generation. In 1989, Intel introduced the 80486DX, which was basically an improved 80386DX with a memory cache and math coprocessor integrated onto the chip. It had an improved internal structure making it around 50% faster than a compatible 80386. The 80486SX was also introduced, which was merely an 80486DX with the link to the math coprocessor broken. As processor speeds increased, the system clock speed became a limiting factor, so the system clock was doubled or tripled to produce the processor clock. Typically, a system with a clock-doubled processor is around 75% faster than a comparable non-doubled processor. Intel also produced a range of 80486 microprocessors which run at three or four times the system clock speed and are referred to as DX4 processors. These include the Intel DX4-100 and Intel DX4-75, both with a 25 MHz system clock. (5) Fifth generation. The Pentium (or P-5) is a 64-bit superscalar processor. It can execute more than one instruction at a time and has a full 64-bit (8-byte) data bus and a 32-bit address bus. In terms of performance, it operates almost twice as fast as the equivalent 80486. It also has improved floating-point operations (roughly three times faster) and is fully compatible with previous 80x86 processors. (6) Sixth generation. The Pentium II/III and Pentium Pro (or P-6) are enhancements of the P-5 and have a bus that supports up to four processors without extra supporting logic, with clock-multiplied speeds of over 1 GHz. They also offer major savings in electrical power and minimize electromagnetic interference. A great enhancement of the P-6 bus is that it detects and corrects all single-bit data bus errors and also detects multiple-bit errors on the data bus. (7) Seventh generation. New features added with the AMD K7 Athlon include (1) ultrahigh clock speeds of over 1 GHz, (2) 128 kB of level 1 cache and up to 8 MB of level 2 cache, (3) the capability to use and rearrange up to 72 instructions simultaneously, and (4) the ability to execute up to nine instructions simultaneously.
Processors come and go, but most manufacturers know the important differences are often processor clock speeds and cache sizes. The future is likely to involve an increase in real-time audio and video over the Internet. Although the performance of today’s processors continues to improve, existing architectures based on an out-of-order execution model require increasingly complex hardware mechanisms and are impeded increasingly by performance limiters such as branches and memory latency. Some new
architectures, like the IA-64 architecture and the x86-64 architecture, are a unique combination of innovative features, including explicit parallelism, predication, and speculation, which are described below: (1) Parallelism. In today’s processor architectures, the compiler creates sequential machine codes that attempt to imply parallelism to hardware. The processor’s hardware must then reinterpret this machine code and try to identify opportunities for parallel execution, which is the key to higher performance. This process is inefficient not only because the hardware does not always interpret the compiler’s intentions correctly, but also because it uses valuable die area that could be better used to do real work like executing instructions. Even today’s fastest and most efficient processors devote a significant percentage of hardware resources to this task of extracting more parallelism from software code. The use of explicit parallelism enables far more effective parallel execution of software instructions. In the new architecture models, the compiler analyzes and explicitly identifies parallelism in the software at compile time. This allows the most optimal structuring of the machine code to deliver the maximum performance before the processor executes it, rather than potentially wasting valuable processor cycles at run time. The result is significantly improved processor utilization. Also, there is no wasting of precious die area for the hardware reorder engine used in out-of-order reduced instruction set computer (RISC) processors. (2) Predication. Simple decision structures, or code branches, are a hard performance challenge to out-of-order RISC architectures. In the simple if-then-else decision code sequence, traditional architectures view the code in four basic blocks. In order to continuously feed instructions into the processor’s instruction pipeline, a technique called branch prediction is commonly used to predict the correct path. With this technique, mispredicts commonly occur 5–10% of the time, causing the entire pipeline to be purged and the correct path to be reloaded. A misprediction rate of just 5–10% can slow processing speed as much as 30–40%. To address this problem and to improve performance, the new architectures use a technique known as predication. Predication begins by assigning special flags called predicate registers to both branch paths: p1 to the “then” path and p2 to the “else” path. At run time, the compare statement stores either a true or a false value in the 1-bit predicate registers. The processor then executes both paths but only the results from the path with a true predicate flag
are used. Branches, and the possibility of associated mispredicts, are removed, the pipeline remains full, and performance is increased accordingly. (3) Speculation. Memory latency is another big problem for current processor architectures. Because memory speed is significantly slower than processor speed, the processor must attempt to load data from memory as early as possible to ensure that data is available when needed. Traditional architectures allow compilers and processors to schedule loads before data is needed, but branches act as barriers to this load hoisting. These new architectures employ a technique known as "speculation" to initiate loads from memory earlier in the instruction stream, even before a branch. Because a load can generate exceptions, a mechanism to ensure that exceptions are properly handled is needed to support speculation that hoists loads before branches. The memory load is scheduled speculatively above the branch in the instruction stream so as to start the memory access as early as possible. If an exception occurs, this event is stored and a later "check" instruction causes the exception to be processed. The elevation of the load allows more time to account for memory latency, without stalling the processor pipeline. Branches occur with great frequency in common software code sequences. The unique ability of these architectures to schedule loads before branches significantly increases the number of loads that can be speculated relative to traditional architectures.
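To make the predication idea concrete, the short C fragment below contrasts a branchy if-then-else with an equivalent branch-free form. On an architecture with predication (or conditional-move support), a compiler can perform this kind of if-conversion automatically so that the pipeline never takes a mispredicted branch. This is only an illustrative sketch in C, not IA-64 code; the function names are invented for the example.

    /* Branchy form: the hardware must predict which path follows the compare. */
    int select_branchy(int a, int b, int x, int y)
    {
        if (a > b)        /* compare decides the path                    */
            return x;     /* "then" path (predicate p1 in the text)      */
        else
            return y;     /* "else" path (predicate p2 in the text)      */
    }

    /* Branch-free form: both candidate results exist and one is selected by the
       condition value, loosely mirroring how predicated execution keeps only the
       results guarded by a true predicate flag. */
    int select_predicated(int a, int b, int x, int y)
    {
        int cond = (a > b);                 /* 1 if the "then" path applies */
        return cond * x + (1 - cond) * y;   /* keep only the selected result */
    }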
2.1.1 Microprocessor Unit Organization
The microprocessor plays a significant role in the functioning of industries everywhere. Nowadays, the microprocessor is used in a wide range of devices or systems as a digital data processing unit, or as the computing unit of an intelligent controller or a computer, to control processes or to turn devices ON or OFF. The microprocessor is a multipurpose, programmable, clock-driven, register-based electronic device that reads binary instructions from a storage device called memory, accepts binary data as input, processes the data according to those instructions, and provides results as output. At a very elementary level, an analogy can be drawn between microprocessor operations and the functions of the human brain, which processes information according to understandings (instructions) stored in its memory. The brain gets input from the eyes and ears and sends processed information to output "devices," such as the face with its capacity to register expression, the hands, or the feet.
A typical programmable machine can be represented with five components: microprocessor, memory, input, output, and bus. These five components work together and interact with each other to perform a given task; thus they comprise a system. The physical components are called hardware. A set of instructions written for the microprocessor to perform a task is called a program, and a group of programs is called software. Assuming that a program and data have already been entered into memory, the microprocessor executes the program by reading instructions and data from memory via the bus, processing the data according to the instructions, and then writing the results back into memory via the bus.
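To make this read-process-write cycle concrete, the toy sketch below models a machine with a small memory, a single accumulator register, and a three-instruction program. The instruction encoding is invented purely for illustration and corresponds to no real processor.

    #include <stdio.h>
    #include <stdint.h>

    /* Invented 2-byte instructions: opcode followed by a memory address. */
    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

    int main(void)
    {
        /* "Memory" holds both the program (from address 0) and data (address 16 up). */
        uint8_t mem[32] = {
            OP_LOAD,  16,   /* acc = mem[16]   */
            OP_ADD,   17,   /* acc += mem[17]  */
            OP_STORE, 18,   /* mem[18] = acc   */
            OP_HALT,  0,
        };
        mem[16] = 30; mem[17] = 12;

        uint8_t  acc = 0;   /* accumulator register */
        unsigned pc  = 0;   /* program counter      */

        for (;;) {          /* fetch, decode, execute, write back */
            uint8_t op  = mem[pc++];
            uint8_t arg = mem[pc++];
            if (op == OP_HALT)       break;
            else if (op == OP_LOAD)  acc = mem[arg];
            else if (op == OP_ADD)   acc = (uint8_t)(acc + mem[arg]);
            else if (op == OP_STORE) mem[arg] = acc;
        }
        printf("result at mem[18] = %u\n", (unsigned)mem[18]);  /* prints 42 */
        return 0;
    }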
2.1.1.1 Function Block Diagram of a Microprocessor Unit
Figure 2.1 depicts the function block diagram of the Intel486 GX processor, which gives the microarchitecture of this Intel processor. Figure 2.2 is the block diagram for the microarchitecture of the Intel Pentium-4 processor.
Figure 2.1 The function block diagram of the Intel486 GX Processor (courtesy of Intel Corporation).
Figure 2.2 The function block diagram of the Intel Pentium-4 processor (courtesy of Intel Corporation).
2.1.1.2 Microprocessor
The Pentium series of microprocessors made by Intel includes, to date, the Pentium Pro, the Pentium II, the Pentium III, the Pentium 4, etc. As given in Fig. 2.3(a), the Pentium Pro consists of the following basic hardware elements: (1) Intel Architecture registers. The Intel Architecture register set implemented in the earlier 80x86 is extremely small. The small number of registers means that the processor (and the programmer) can keep only a small number of data operands close to the execution units where they can be accessed quickly. Instead, the programmer is frequently forced to write back the contents of one or more of the processor's registers to memory when he or she needs to read additional data operands from memory to be operated on. Later, when the programmer requires access to the original set of data operands, they must again be read from memory. This juggling of data between the register set and memory takes time and exacts a penalty on the performance of the program. Figure 2.3(b) illustrates the Intel Architecture general register set. (2) External bus unit. This unit performs bus transactions when requested to do so by the L2 cache or the processor core. (3) Backside bus unit. This unit interfaces the processor core to the unified L2 cache.
Figure 2.3 Intel Pentium II processor: (a) simplified processor block diagram, (b) Intel Architecture general register set (courtesy of Intel Corporation).
(4) Unified L2 cache. It services misses on the L1 data and code caches. When necessary, it issues requests to the external bus unit. (5) L1 data cache. It services data load and store requests issued by the load and store execution units. When a miss occurs, it forwards a request to the L2 cache.
(6) L1 code cache. It services instruction fetch requests issued by the instruction prefetcher. (7) Processor core. The processor logic is responsible for the following: (1) instruction fetch, (2) branch prediction, (3) parsing of the Intel Architecture instruction stream, (4) decoding of Intel Architecture instructions into RISC instructions that are referred to as micro-ops or uops, (5) mapping accesses for the Intel Architecture register set to a large physical register set, and (6) dispatch, execution, and retirement of micro-ops. (8) Local Advanced Programmable Interrupt Controller (APIC) unit. The APIC is responsible for receiving interrupt requests from other processors, the processor's local interrupt pins, the APIC timer, APIC error conditions, performance monitor logic, and the IO APIC module. These requests are then prioritized and forwarded to the processor core for execution. (a) Processor startup. Refer to Fig. 2.4. At startup, upon the deassertion of reset and the completion of each processor's built-in self-test (BIST), the processors within the cluster must negotiate amongst themselves to select the processor that will wake up and start fetching, decoding, and executing the Power-On Self Test (POST) code from the ROM. This processor is referred to as the BootStrap processor, or BSP. After the BSP is identified, the other processors, referred to as the Application processors, or APs, remain dormant until they receive a startup message from the BSP via the APIC bus.
Figure 2.4 Pentium multiprocessor system block diagram (courtesy of Intel Corporation).
The Intel multiprocessing specification (available for download at the Intel developers' web site) dictates that the startup code executed by the BSP is responsible for detecting the presence of processors other than the BSP. When the available APs have been detected, the startup code stores this information as a table in nonvolatile memory. According to this specification, both the BIOS code and the POST code are responsible for detecting the presence of and initializing the APs. Intel recommends that this be accomplished in the following manner: (i) Both the POST code and the BIOS code executing on the BSP initialize a predefined RAM location to 1hex (to represent the fact that one processor, the BSP, is known to be present and functioning). This location is referred to as the Central Processing Unit (CPU) counter. (ii) Both the POST code and the BIOS code executing on the BSP clear a memory semaphore location to 00hex to permit one of the APs to execute the body of the FindAndInitAllCPU routine. (iii) Both the POST code and the BIOS code executing on the BSP broadcast a startup message to all APs (assuming any are present). The vector field in this message selects a slot in the interrupt table that points to the FindAndInitAllCPU routine. (iv) Upon receipt of this message, all of the APs simultaneously request ownership of the Pentium Pro bus to begin fetching and executing the FindAndInitAllCPU routine. (v) Both the POST code and the BIOS code executing on the BSP then wait for all the APs that may be present to complete execution of the FindAndInitAllCPU routine. The wait loop can be implemented using a long, software-enforced delay, the chipset's Timer 0 (refer to the relevant Intel specification available for download on the Intel web site), or the timer built into the BSP's local APIC. Alternatively, the following wait procedure can be used: (1) Using a locked read, both the POST code and the BIOS code executing on the BSP examine the CPU counter RAM location every 2 s and compare it with the counter value read 2 s earlier. If the value has not changed (i.e., has not been incremented), all APs have completed their execution of the FindAndInitAllCPU routine. This assumes that an AP takes less than 2 s to
complete execution of the routine. (2) The counter value at the end of the wait indicates the total number of processors in the system, including the BSP. (vi) Once all of the APs have completed execution of the FindAndInitAllCPU routine, they have all made entries in the Multiprocessor Table in the CMOS memory. (vii) Both the POST code and the BIOS code executing on the BSP read the user-selected setup parameters from CMOS to determine how many of the available processors to utilize during this Operating System (OS) session. Both the POST and the BIOS codes then complete building the multiprocessor table in CMOS, removing or disabling the entries associated with the processors not to be used in this session. A new checksum value is computed for the adjusted multiprocessor table, and its length and number-of-entry fields are updated. (viii) As each of the APs completes execution of the FindAndInitAllCPU routine, it either halts or enters a program loop. Both the POST and the BIOS codes instruct the BSP's local APIC to broadcast an INIT message to the APs. This causes them to enter (or remain in) the halted state and await receipt of a startup message that will be issued by the multiprocessor OS once it has been loaded and control passed to it. (b) The fetch, decode, execute engine. At the heart of the processor are the execution units that execute instructions. As given in Fig. 2.3(a), the processor includes a fetch engine that attempts to properly predict the path of program execution and generates an ongoing series of memory-read operations to fetch the desired instructions. Without caching, the high-speed (e.g., 150 or 200 MHz) processor execution engine would be bound by the speed of external memory accesses. It should be obvious that it is extremely advantageous to include a very high-speed cache memory on board so that the processor keeps copies of recently used information, both code and data. Memory-read requests generated by the processor core are first submitted to the cache for a lookup before being propagated to the external bus in the event of a cache miss. The Pentium processors include both a code and a data cache in the level 1 cache. In addition, they include a level 2 cache tightly coupled to the processor core via a private bus. The processors' caches are disabled at power-up time, however.
In order to realize the processors' full potential, the caches must be enabled. The steps for the Pentium processors to execute instructions are briefly described below: (i) fetch Intel Architecture instructions from memory in strict program order; (ii) decode, or translate, them in strict program order into one or more fixed-length RISC instructions known as micro-ops or u-ops; (iii) place the micro-ops into an instruction pool in strict program order; (iv) up to this point, the instructions have been kept in original program order; this part of the pipeline is known as the in-order front end. The processor then executes the micro-ops in any order possible as the data and execution units required for each micro-op become available. This is known as the out-of-order portion of the pipeline; (v) finally, the processor commits the results of each micro-op execution to the processor's register set in the order of the original program flow. This is the in-order rear end. The new Pentium processors implement a dynamic execution microarchitecture, a combination of multiple branch prediction, speculative execution, and data flow analysis. These Pentium processors execute MMX technology instructions (detailed in a subsequent paragraph) for enhanced media and communication performance. Multiple branch prediction predicts the flow of the program through several branches: using a branch prediction algorithm, the processor can anticipate jumps in instruction flow. It predicts where the next instruction can be found in memory with 90% or greater accuracy. This is made possible because, while the processor is fetching instructions, it is also looking at instructions further ahead in the program. Data flow analysis analyzes and schedules instructions to be executed in an optimal sequence, independent of the original program order; the processor looks at decoded software instructions and determines whether they are available for processing or whether they are dependent on other instructions. Speculative execution increases the rate of execution by looking ahead of the program counter and executing instructions that are likely to be needed later. When the processor executes several instructions at a time, it does so using speculative execution. The instructions being processed are
based on predicted branches and the results are stored as speculative results. Once their final state can be determined, the instructions are returned to their proper order and committed to permanent machine state. (c) Processor cache. Figure 2.3(a) also provides an overview of the processor's cache, which shows that the processor cache mainly contains two types: data cache and code cache. The L1 code cache services the requests for instructions generated by the instruction prefetcher (the prefetcher is the only unit that accesses the code cache and it only reads from it, so the code cache is read only), whereas the L1 data cache services memory data read and write requests generated by the processor's execution units when they are executing any instruction that requires a memory data access. The unified L2 cache resides on a dedicated bus referred to as the backside bus. It services misses on the L1 caches, and, in the event of an L2 miss, it issues a transaction request to the external memory. The information is placed in the L2 cache and is also forwarded to the appropriate L1 cache for storage. The L1 data cache in the processor services memory data read and write requests initiated by the processor execution units. The size and structure of the L1 data cache is processor implementation–specific. As processor core speeds increase, the cache sizes may also be increased because the faster core can process code and data faster. Each of the data cache's cache banks, or ways, is further divided into two banks. When performing a lookup, the data cache views memory as divided into pages equal to the size of one of its cache banks (or ways). Furthermore, it views each memory page as having the same structure as one of its cache ways. The target number is used to index into the data cache directory and select a set of two entries to compare against. If the target page number matches the tag field in one of the entries in the E, S, or M state given below, it is a cache hit: the data cache has a copy of the target line from the target page. The action taken by the data cache depends on whether the data access is a read or a write, the current state of the line, and the rules of conduct defined for this area of memory. Each line storage location within the data cache can currently be in one of four possible states: (i) invalid state (I); there is no valid line in the entry. (ii) exclusive state (E); the line in the entry is valid, is still the same as memory, and no other processor has a copy of the line in its caches.
(iii) shared state (S); the line in the entry is valid and still the same as memory, and one or more other processors may also have copies of the line (or may not, because the processor cannot discriminate between reads performed by other processors and reads performed by other, noncaching entities such as a host/PCI bridge). (iv) modified state (M); the line in the entry is valid, has been updated by this processor since it was read into the cache, and no other processor has a copy of the line in its caches. The line in memory is stale. The L1 code cache exists for only one reason: to supply requested code to the instruction prefetcher. The prefetcher issues only read requests to the code cache, so it is a read-only cache. A line stored in the code cache can only be in one of two possible states, valid or invalid, implemented as the S and I states. When a line of code is fetched from memory and is stored in the code cache, it consists of raw code. The designers could have chosen to prescan the code stream as it is fetched from memory and store boundary markers in the code cache to demark the boundaries between instructions within the cache line. This would preclude the need to scan the code line as it enters the instruction pipeline for decode so that each of the variable-length Intel Architecture instructions can be aligned with the appropriate decoder. However, this would bloat the size of the code cache. Note that the Pentium's code cache stores boundary markers. When performing a lookup, the code cache views memory as divided into pages equal to the size of one of its cache banks (or ways). Furthermore, it views each memory page as having the same structure as one of its cache ways. (d) MMX technology. Intel's Matrix Math Extensions (MMX) technology is designed to accelerate multimedia and communication applications. The MMX technology retains full compatibility with the original Pentium processor. It contains five architectural design enhancements: (i) New instructions. (ii) Single Instruction Multiple Data (SIMD). The new instructions use a SIMD model, operating on several values at a time. Using the 64-bit MMX registers, these instructions can operate on eight bytes, four words, or two double words at once, greatly increasing throughput. (iii) More cache. Intel has doubled the on-chip cache size to 32k. That way, more instructions and data can be stored on
the chip, reducing the number of times the processor has to access the slower, off-chip memory for information. (iv) Improved branch prediction. The MMX processor contains four prefetch buffers that can hold up to four successive code streams. (v) Enhanced pipeline and deeper write buffers. An additional pipeline stage has been added, and four write buffers are shared between the dual pipelines to improve memory write performance. MMX technology uses general-purpose basic instructions that are fast and easily assigned to the parallel pipelines in Intel processors. By using this general-purpose approach, MMX technology provides performance that will scale well across current and future generations of Intel processors. The MMX instructions cover several functional areas, including: (i) basic arithmetic operations such as add, subtract, multiply, arithmetic shift, and multiply-add; (ii) comparison operations; (iii) conversion instructions to convert between the new data types—pack data together and unpack from smaller to larger data types; (iv) logical operations such as AND, NOT, OR, and XOR; (v) shift operations; (vi) data transfer (MOV) instructions for MMX register-to-register transfers, or 64-bit and 32-bit loads and stores to memory. The principal data type of the MMX instruction set is the packed, fixed-point integer, where multiple integer words are grouped into single 64-bit quantities. These 64-bit quantities are moved to the 64-bit MMX registers. The decimal point of the fixed-point values is implicit and is left for the programmer to control for maximum flexibility. Arithmetic and logical instructions are designed to support the different packed integer data types. These instructions have a different op code for each data type supported. As a result, the new MMX technology instructions are implemented with 57 op codes. The supported data types are signed and unsigned fixed-point integers, bytes, words, double words, and quad words. The four MMX technology data types are (i) packed bytes: 8 bytes packed into one 64-bit quantity; (ii) packed words: four 16-bit words packed into one 64-bit quantity;
(iii) packed double word: two 32-bit double words packed into one 64-bit quantity; (iv) quad word: one 64-bit quantity. From the programmer’s view, there are eight new MMX registers (MM0–MM7) along with new instructions that operate on these registers. But to avoid adding new states, these registers are mapped onto the existing floating-point registers (FP0–FP7). When a multitasking operating system (or application) executes an FSAVE instruction, as it does today to save state, the contents of MM0–MM7 are saved in place of FP0–FP7 if MMX instructions are in use. Detecting the existence of MMX technology on an Intel microprocessor is done by executing the CPUID instruction and checking a set bit. Therefore, when installing or running, the software can query the microprocessor to determine whether MMX technology is supported and install or execute the code that includes, or does not include, MMX instructions based on the result.
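As a concrete illustration of the SIMD model and the CPUID feature check described above, the sketch below uses GCC-style headers (<cpuid.h> and <mmintrin.h>, compiled for an x86 target with MMX enabled) to test the MMX feature bit reported by CPUID and then add four 16-bit words in a single packed operation. Treat it as a minimal sketch of the mechanism, not as production feature-detection code.

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>      /* __get_cpuid(), bit_MMX (EDX bit 23 of leaf 1) */
    #include <mmintrin.h>   /* MMX intrinsics: __m64, _mm_add_pi16, ...      */

    static int mmx_supported(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 0;                    /* CPUID leaf 1 not available */
        return (edx & bit_MMX) != 0;     /* MMX feature flag           */
    }

    int main(void)
    {
        if (!mmx_supported()) {
            puts("MMX not reported by CPUID");
            return 1;
        }
        /* Packed-word add: four 16-bit additions performed by one instruction. */
        __m64 a   = _mm_set_pi16(4, 3, 2, 1);
        __m64 b   = _mm_set_pi16(40, 30, 20, 10);
        __m64 sum = _mm_add_pi16(a, b);

        short out[4];
        memcpy(out, &sum, sizeof out);   /* out = {11, 22, 33, 44} */
        _mm_empty();                     /* clear MMX state before any x87 FP use */
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        return 0;
    }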
2.1.1.3 Internal Bus System
Figures 2.1, 2.3(a), and 2.4 show that the internal bus system of an Intel microprocessor comprises, according to function, the following types, each of them monitored by a corresponding bus controller or bus unit:
(1) Backside bus.
(2) Displacement bus.
(3) APIC bus.
(4) Cache buses, which are divided into a data bus and an address bus.
(5) CPU cluster bus.

2.1.1.4 Memories
Memory can be classified into two groups: prime (system or main) memory and storage memory. The R/WM and ROM are examples of prime memory; this is the memory the microprocessor uses in executing and storing programs. This memory should be able to respond fast enough to keep up with the execution speed of the microprocessor. Therefore, it should be random access memory, meaning that the microprocessor should be able to access information from any register with the same speed
(independent of its place in the chip). The size of a memory chip is specified in terms of bits. For example, a 1k memory chip means it can store 1k (1024) bits (not bytes). On the other hand, memory in a system such as a PC is specified in bytes; for example, 4M memory means a capacity of 4 megabytes. The other group is storage memory, such as magnetic disks and tapes (see Fig. 2.5). This memory is used to store programs and results after the completion of program execution. Information stored in these memories is nonvolatile, meaning it remains intact even if the system is turned off. The microprocessor cannot directly execute or process programs stored in these devices; programs need to be copied into the R/W prime memory first. Therefore, the size of the prime memory, such as 512k or 8M (megabytes), determines how large a program the system can process. The size of the storage memory is unlimited; when one disk is full, the next one can be used. Figure 2.5 also shows two groups in storage memory: secondary storage and backup storage, which include devices such as disks, magnetic tapes, etc. Figure 2.5 shows that the prime (system) memory is divided into two main groups: read/write memory (R/WM) and read-only memory (ROM); each group includes several different types of memory, as discussed below.
Figure 2.5 The classification of microprocessors' memories.
(1) Read/write memory (R/WM). As the name suggests, the microprocessor can write into or read from this memory; it is popularly
known as Random Access Memory (RAM). It is used primarily for information that is likely to be altered, such as writing programs or receiving data. This memory is volatile, meaning that when the power is turned off, all the contents are destroyed. Two types of R/W memories, static and dynamic, are available; they are described in the following paragraphs. (a) Static memory (SRAM). This memory is made up of flip-flops, and it stores the bit as a voltage. Each memory cell requires six transistors; therefore, the memory chip has low density but high speed. SRAM, used as cache memory, is included on the processor chip. In addition, high-speed cache memory is also included external to the processor to improve the performance of a system. (b) Dynamic memory (DRAM). This memory is made up of MOS transistor gates, and it stores the bit as a charge. For the DRAM, stored information needs to be read and then written again every few milliseconds. It is generally economical to use dynamic memory when system memory is at least 8k; for small systems, the static memory is appropriate. To increase the speed of DRAM, various techniques are being used. These techniques have resulted in the production of high-speed memory chips, such as Extended Data Out (EDO), Synchronous DRAM (SDRAM), and Rambus DRAM (RDRAM). (2) Read-only memory (ROM). The ROM is a nonvolatile memory; it retains stored information even if the power is turned off. This memory is used for programs and data that need not be altered. As the name suggests, the information can be read only, which means once a bit pattern is stored, it is permanent or at least semi-permanent. The permanent group includes two types of memory: masked ROM and PROM. The semi-permanent group includes two types of memory: EPROM and EE-PROM, as shown in Fig. 2.5. Five types of ROM—masked ROM, PROM, EPROM, EE-PROM, and flash memory—are described in the following paragraphs. (a) Masked ROM. In this ROM, a bit pattern is permanently recorded by the masking and metallization process. Memory manufacturers are generally equipped to do this process. It is an expensive and specialized process, but economical for large production quantities. (b) Programmable read-only memory (PROM). This memory has nichrome or polysilicon wires arranged in a matrix; these wires can be functionally viewed as diodes or fuses. This memory can be programmed by the user with a special PROM
programmer that selectively burns the fuses according to the bit pattern to be stored. The process is known as "burning the PROM," and the information stored is permanent. (c) Erasable programmable read-only memory (EPROM). This memory stores a bit by charging the floating gate of an FET. Information is stored by using an EPROM programmer, which applies high voltages to charge the gate. All the information can be erased by exposing the chip to ultraviolet light through its quartz window, and the chip can be reprogrammed. Because the chip can be reused many times, this memory is ideally suited for product development and experimental projects. The disadvantages of EPROM are (1) it must be taken out of the circuit to erase it, (2) the entire chip must be erased, and (3) the erasing process could take 15 or 20 min. (d) Electrically erasable PROM (EE-PROM). This memory is functionally similar to EPROM, except that information can be altered by using electrical signals at the register level rather than erasing all the information. This has an advantage in field and remote control applications. In microprocessor systems, software update is a common occurrence. If EE-PROMs are used in the systems, they can be updated from a central computer by using a remote link over a serial bus. This memory also includes a Chip Erase mode, whereby the entire chip can be erased in 10 ms rather than the 20 min taken to erase an EPROM. (e) Flash memory. This is a variation of EE-PROM that is becoming popular. The major difference between the flash memory and EE-PROM is in the erasure procedure: The EE-PROM can be erased at a register level, but the flash memory chip must be erased either in its entirety or at the sector (block) level. These memory chips can be erased and programmed at least a million times.
In a microprocessor-based device, programs are generally written in ROM, and data that are likely to vary are stored in R/WM. Memory technology has advanced considerably in recent years. In addition to static and dynamic R/W memory, other options are also available in memory devices. Examples include zero power RAM, nonvolatile RAM, and integrated RAM. The zero power RAM is a CMOS read/write memory with battery backup built internally. It includes lithium cells and voltage-sensing circuitry.
When the external power supply voltage falls below 3 V, the power-switching circuitry connects the lithium battery; thus, this memory provides the advantages of both R/W and read-only memory. The nonvolatile RAM is a high-speed static R/W memory array backed up, bit for bit, by an EE-PROM array for nonvolatile storage. When the power is about to go off, the contents of the R/W memory are quickly stored in the EE-PROM by activating the store signal of the memory chip, and the stored data can be read back into the R/W memory segment when the power is again turned on. This memory chip combines the flexibility of static R/W memory with the nonvolatility of EE-PROM. The integrated RAM (iRAM) is a dynamic memory with the refresh circuitry built on the chip. For the user, it is similar to the static R/W memory. The user can derive the advantages of dynamic memory without having to build the external refresh circuitry.
2.1.1.5 Input/Output Pins
To allow for easy upgrades and to save space, the 80486 and Pentium processors are available in a pin-grid array (PGA) form. For all the Intel microprocessors, the PGA pin-out lists are provided in the corresponding Intel specifications. The block diagram of the 168-pin 80486 GX is illustrated in Fig. 2.1; it can be seen that the 80486 processor has a 32-bit address bus and a 32-bit data bus (D0–D31). Table 2.1 defines how the 80486 control signals are interpreted.
Table 2.1 Intel 80486 Processor Control Signals

M/IO  D/C  W/R  Description
0     0    0    Interrupt acknowledge sequence
0     0    1    STOP/special bus cycle
0     1    0    Reading from an I/O port
0     1    1    Writing to an I/O port
1     0    0    Reading an instruction from memory
1     0    1    Reserved
1     1    0    Reading data from memory
1     1    1    Writing data to memory
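The decoding in Table 2.1 can be captured in a few lines of C; the sketch below simply maps the three control bits onto the cycle types listed in the table (the function name is illustrative).

    /* Decode the 80486 bus-cycle type from the M/IO, D/C, and W/R control bits,
       following Table 2.1. Each argument must be 0 or 1. */
    static const char *bus_cycle_type(int m_io, int d_c, int w_r)
    {
        static const char *type[8] = {
            "Interrupt acknowledge sequence",        /* 0 0 0 */
            "STOP/special bus cycle",                /* 0 0 1 */
            "Reading from an I/O port",              /* 0 1 0 */
            "Writing to an I/O port",                /* 0 1 1 */
            "Reading an instruction from memory",    /* 1 0 0 */
            "Reserved",                              /* 1 0 1 */
            "Reading data from memory",              /* 1 1 0 */
            "Writing data to memory"                 /* 1 1 1 */
        };
        return type[(m_io << 2) | (d_c << 1) | w_r];
    }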
The main 80486 pin connections are as follows:
(1) A2–A31 (I/O): the 30 most significant bits of the address bus.
(2) A20M (I): when active low, the processor internally masks the address bit A20 before every memory access.
(3) ADS (O): indicates that the processor has valid control signals and valid address signals.
(4) AHOLD (I): when active, a different bus controller can have access to the address bus. This is typically used in a multiprocessor system.
(5) BE0–BE3 (O): the byte enable lines indicate which of the bytes of the 32-bit data bus are active.
(6) BLAST (O): indicates that the current burst cycle will end after the next BRDY signal.
(7) BOFF (I): the backoff signal informs the processor to deactivate the bus on the next clock cycle.
(8) BRDY (I): the burst ready signal is used by an addressed system that has sent data on the data bus or read data from the bus.
(9) BREQ (O): indicates that the processor has internally requested the bus.
(10) BS16, BS8 (I): the BS16 signal indicates that a 16-bit data bus is used; the BS8 signal indicates that an 8-bit data bus is used. If both are high, then a 32-bit data bus is used.
(11) DP0–DP3 (I/O): the data parity bits give a parity check for each byte of the 32-bit data bus. The parity bits are always even parity.
(12) EADS (I): indicates that an external bus controller has put a valid address on the address bus.
(13) FERR (O): indicates that the processor has detected an error in the internal floating-point unit.
(14) FLUSH (I): when active, the processor writes the complete contents of the cache to memory.
(15) HOLD, HOLDA (I/O): the bus hold (HOLD) and acknowledge (HOLDA) signals are used for bus arbitration and allow other bus controllers to take control of the buses.
(16) IGNNE (I): when active, the processor ignores any numeric errors.
(17) INTR (I): external devices use the interrupt request line to interrupt the processor.
(18) KEN (I): this signal stops caching of a specific address.
(19) LOCK (O): if active, the processor will not pass control to an external bus controller when it receives a HOLD signal.
(20) M/IO, D/C, W/R (O): see Table 2.1.
(21) NMI (I): the nonmaskable interrupt signal causes a type 2 interrupt.
(22) PCHK (O): if set active, a data parity error has occurred.
(23) PLOCK (O): the active pseudo lock signal identifies that the current data transfer requires more than one bus cycle.
(24) PWT, PCD (O): the page write-through (PWT) and page cache disable (PCD) signals are used with cache control.
(25) RDY (I): when active, the addressed system has sent data on the data bus or read data from the bus.
(26) RESET (I): if the reset signal is high for more than 15 clock cycles, the processor will reset itself.

2.1.1.6 Interrupt System
In the Intel microprocessors, the interrupt lines are the interrupt request pin (INTR), the nonmaskable interrupt request pin (NMI), and the system reset pin (RESET), all of which are active-high signals. The INTR pin is activated when an external device, such as a hard disk or a serial port, wishes to communicate with the processor. This interrupt is maskable, and the processor can ignore it if it wants. The NMI pin is a nonmaskable interrupt and is always acted on. When it becomes active, the processor calls the nonmaskable interrupt service routine. The RESET pin signal causes a hardware reset and is normally made active when the processor is powered up.
2.1.2 Microprocessor Unit Interrupt Operations
Interrupt I/O is a process of data transfer whereby an external device or a peripheral can inform the processor that it is ready for communication and that it requests attention. The process is initiated by an external device and is asynchronous, meaning that it can be initiated at any time without reference to the system clock. However, the response to an interrupt request is directed or controlled by the microprocessor. Unlike the polling technique, interrupt processing allows a program or an external device to interrupt the task currently being executed by the microprocessor. The generation of
an interrupt can occur by hardware (hardware interrupt) or by software (software interrupt). When an interrupt occurs, an interrupt service routine (ISR) is called. For a hardware interrupt, the ISR communicates with the device and processes data; when it has finished, execution returns to the original program. A software interrupt causes the program to interrupt its execution and go to an ISR. Software interrupts include the processor-generated interrupts that normally occur either when a program causes a certain type of error or when it is being used in a debug mode. In debug mode the program can be made to break from its execution when a breakpoint occurs. Software interrupts, in most cases, do not require the program to return when the ISR task is complete. Apart from this difference, both software interrupts and hardware interrupts use the same mechanisms, methodologies, and processes to handle interrupts. Interrupt requests are classified in two categories: maskable interrupts and nonmaskable interrupts. The microprocessor can ignore or delay a maskable interrupt request if it is performing some critical task; however, it must respond to a nonmaskable interrupt immediately.
2.1.2.1 Interrupt Process
(1) The operation of a real mode interrupt. When the microprocessor completes executing the current instruction, it determines whether an interrupt is active by checking the following: (1) instruction executions, (2) single step, (3) NMI pin, (4) coprocessor segment overrun, (5) INTR pin, and (6) INT instruction, in the order presented. If one or more of these interrupt conditions are present, the following sequence of events occurs: (a) The contents of the flag register are pushed onto the stack. (b) Both the interrupt (IF) and trap (TF) flags are cleared. This disables the INTR pin and the trap or single-step feature. (c) The contents of the code segment register (CS) are pushed onto the stack. (d) The contents of the instruction pointer (IP) are pushed onto the stack. (e) The interrupt vector contents are fetched and then placed into both IP and CS so that the next instruction executes the ISR addressed by the vector. Whenever an interrupt is accepted, the microprocessor stacks the contents of the flag register, CS and IP; clears both IF and TF; and jumps to the procedure addressed by the interrupt vector. After the flags are pushed onto the stack,
IF and TF are cleared. These flags are returned to the state prior to the interrupt when the IRET instruction is encountered at the end of the ISR. Therefore, if interrupts were enabled prior to the ISR, they are automatically reenabled by the IRET instruction at the end of the interrupt service routine. The return address (stored in CS and IP) is pushed onto the stack during the interrupt. Sometimes the return address points to the next instruction in the program; sometimes it points to the instruction or point in the program where the interrupt occurred. Interrupt type numbers 0, 5, 6, 7, 8, 10, 11, 12, and 13 push a return address that points to the offending instruction, instead of to the next instruction in the program. This allows the ISR to possibly retry the instruction that caused the fault in certain error cases. Some of the protected mode interrupts (types 8, 10, 11, 12, and 13) place an error code on the stack following the return address. The error code identifies the selector that caused the interrupt. In case no selector is involved, the error code is 0. (2) The operation of a protected mode interrupt. In the protected mode, interrupts have exactly the same assignments as in the real mode, but the interrupt vector table is different. In place of interrupt vectors, protected mode uses a set of 256 interrupt descriptors that are stored in an interrupt descriptor table (IDT). The interrupt descriptor table is normally 256 × 8 (2k) bytes long, with each descriptor containing 8 bytes. The IDT can be located at any memory location in the system; its location is given by the IDT address register (IDTR). Each entry in the IDT contains the address of the ISR in the form of a segment selector and a 32-bit offset address. It also contains the P bit (present) and DPL bits to describe the privilege level of the interrupt. Real mode interrupt vectors can be converted into protected mode interrupts by copying the interrupt procedure addresses from the interrupt vector table and converting them to 32-bit offset addresses that are stored in the interrupt descriptors. A single selector and segment descriptor can be placed in the global descriptor table that identifies the first 1M byte of memory as the interrupt segment. Other than the IDT and interrupt descriptors, the protected mode interrupt functions like the real mode interrupt. Both return by using the IRET or IRETD instruction. The only difference is that in protected mode the microprocessor accesses the IDT instead of the interrupt vector table. (3) Interrupt flag bits. The interrupt flag (IF) and trap flag (TF) are both cleared after the contents of the flag register are stacked
during an interrupt. When the IF bit is set, it allows the INTR pin to cause an interrupt; when the IF bit is cleared, it prevents the INTR pin from causing an interrupt. When TF = 1, a trap interrupt (interrupt type number 1) occurs after each instruction executes. This is why the trap is often called single step. When TF = 0, normal program execution occurs. The interrupt flag is set and cleared by the STI and CLI instructions, respectively. There are no special instructions that set or clear the trap flag.
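The STI and CLI instructions mentioned above map directly onto one-line inline-assembly wrappers; a minimal sketch in GCC syntax is shown below. These instructions only succeed in code running at a privilege level that permits changing IF (e.g., real mode or kernel code), so the wrappers are purely illustrative.

    /* Set and clear the interrupt flag (IF). User-mode code would normally
       fault when executing these instructions; GCC inline-assembly syntax. */
    static inline void enable_intr(void)  { __asm__ volatile ("sti"); }
    static inline void disable_intr(void) { __asm__ volatile ("cli"); }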
2.1.2.2 Interrupt Vectors
The interrupt vectors and vector table are crucial to the understanding of hardware and software interrupts. Interrupt vectors are addresses that inform the interrupt handler as to where to find the ISR (also called interrupt service procedure). All interrupts are assigned a number from 0 to 255, with each of these interrupts being associated with a specific interrupt vector. The interrupt vector table is normally located in the first 1024 bytes of memory at addresses 000000H–0003FFH. It contains 256 different interrupt vectors. Each vector is 4 bytes long and contains the starting address of the ISR. This starting address consists of segment and offset of the ISR. Figure 2.6 illustrates the interrupt vector table used for the Intel microprocessors. Remember that in order to install an interrupt vector (sometimes called a hook), the assembler must address absolute memory. In an interrupt vector table, the first five interrupt vectors are identical in all Intel microprocessor family members, from the 8086 to the Pentium. Other interrupt vectors exist for the 80286 that are upward-compatible to 80386, 80486, and Pentium to Pentium 4, but not downward-compatible to the 8086 or 8088. Intel reserves the first 32 interrupt vectors for its use in various microprocessor family members. The last 224 vectors are available as user interrupt vectors.
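Because each real-mode vector is simply a 4-byte offset:segment pair stored at linear address 4 × type, the lookup can be sketched in C as below. The sketch assumes a real-mode or emulator context in which linear address 0 is directly addressable; the structure layout follows the segment/offset arrangement shown in Fig. 2.6(b).

    #include <stdint.h>

    /* One real-mode interrupt vector: 16-bit offset (IP) in the low word,
       16-bit segment (CS) in the high word. */
    typedef struct {
        uint16_t offset;
        uint16_t segment;
    } rm_vector;

    /* The vector table occupies linear addresses 000000H-0003FFH. */
    #define IVT_BASE  ((volatile rm_vector *)0x00000000UL)

    /* Return the linear address of the ISR for interrupt type n (0-255). */
    static uint32_t isr_linear_address(uint8_t n)
    {
        volatile rm_vector *v = &IVT_BASE[n];            /* vector lives at 4*n */
        return ((uint32_t)v->segment << 4) + v->offset;  /* segment*16 + offset */
    }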
2.1.2.3 Interrupt Service Routine (ISR)
The interrupts of the entire Intel family of microprocessors include two hardware pins that request interrupts (INTR pin and NMI pin), and one hardware pin (INTA) that acknowledges the interrupt requested through INTR. In addition to the pins, the Intel microprocessor also has software interrupt instructions: INT, INTO, INT 3, and BOUND. Two flag bits, IF (interrupt flag) and TF (trap flag), are also used with the interrupt structure
Figure 2.6 (a) The interrupt vector table for the Intel microprocessor, and (b) the contents of an interrupt vector.
and with a special return instruction IRET (or IRETD in the 80386, 80486, or Pentium-Pentium 4). (1) Software interrupts. Intel microprocessors provide five software interrupt instructions: BOUND, INTO, INT, INT 3, and IRET. Of these five software interrupt instructions, INT and INT 3 are very similar, BOUND and INTO are conditional, and IRET is a special interrupt return instruction. The INT n instruction calls the ISR that begins at the address represented in vector number n. The only exception to this is the "INT 3" instruction, a 1-byte instruction. The INT 3 instruction is often used as a breakpoint interrupt, because it is easy to insert a 1-byte instruction into a program. As mentioned previously, breakpoints are often used to debug faulty software. The BOUND instruction, which has two operands, compares a register with two words of memory data. The INTO instruction checks the overflow flag (OF); if OF = 1, the INTO instruction calls the ISR whose address is stored in interrupt vector type number 4. If OF = 0, then the INTO instruction performs no operation and the next sequential instruction in the program executes. The IRET instruction is a special return instruction used to return from both software and hardware interrupts. The IRET instruction is much like a "far RET" because it retrieves the return address from the stack. It is unlike the "near return" because it also retrieves a copy of the flag register from the stack. An IRET instruction removes six bytes from the stack: two for the IP, two for CS, and two for flags. In the 80386 to Pentium 4, there is also an IRETD instruction because these microprocessors can push the 32-bit EFLAGS register on the stack, as well as the 32-bit EIP, in the protected mode. If operated in the real mode, we use the IRET instruction with the 80386 to Pentium 4 microprocessors. (2) Hardware interrupts. The microprocessor has two hardware inputs: nonmaskable interrupt (NMI) and interrupt request (INTR). Whenever the NMI input is activated, a type 2 interrupt occurs because NMI is internally decoded. The INTR input must be externally decoded to select a vector. Any interrupt vector can be chosen for the INTR pin, but we usually use an interrupt type number between 20H and FFH. Intel has reserved interrupts 00H through 1FH for internal and future expansion. The INTA signal is also an interrupt pin on the microprocessor, but it is an output that is used in response to the INTR input to apply a vector-type number to the data bus connections D7–D0.
The NMI is an edge-triggered input that requests an interrupt on the positive edge (0-to-1 transition). After a positive edge, the NMI pin must remain logic 1 until it is recognized by the microprocessor. The NMI input is often used for parity errors and other major system faults, such as power failure. Power failures are easily detected by monitoring the AC power line and causing an NMI interrupt whenever AC power drops out. The interrupt request input (INTR) is level sensitive, which means that it must be held at logic 1 level until it is recognized. The INTR pin is set by an external event and cleared inside the ISR. This input is automatically disabled once it is accepted by the microprocessor and reenabled by the IRET instruction at the end of the ISR. The microprocessor responds to the INTR input by pulsing the INTA output in anticipation of receiving an interrupt vector-type number on data bus connection D7–D0. There are two INTA pulses generated by the system that are used to insert the vector-type number on the data bus.
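For completeness, a software interrupt such as the INT n and INT 3 instructions described earlier can be issued directly from inline assembly; a minimal GCC-style sketch is shown below. The 21H vector used in the second helper is only an illustrative type number, not a claim about what is installed at that vector on any particular system.

    /* Trigger the 1-byte breakpoint interrupt (type 3). */
    static inline void breakpoint(void)   { __asm__ volatile ("int3"); }

    /* Trigger an arbitrary software interrupt. The vector number is an immediate
       operand of INT, so it is fixed at compile time (21H here is just an example). */
    static inline void soft_int_21h(void) { __asm__ volatile ("int $0x21"); }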
2.1.3 Microprocessor Unit Input/Output Rationale
The I/O devices can be interfaced with a microprocessor using either of two techniques: isolated I/O (also called peripheral-mapped I/O) and memory-mapped I/O. The process of data transfer in both is identical. Each device is assigned a binary address, called a device address or port number, through its interface circuit. When the microprocessor executes a data transfer instruction for an I/O device, it places the appropriate address on the address bus, sends the control signals, enables the interfacing device, and then transfers data. The interface device is like a gate for data bits, which is opened by the microprocessor whenever it intends to transfer data.
2.1.3.1 Basic Input/Output Techniques
As previously mentioned, there are two main methods of communicating with external equipment: either the equipment is mapped into the physical memory and given a real address on the address bus of the microprocessor (memory mapped I/O), or it is mapped into a special area of input/output memory (isolated I/O). Devices mapped into memory are accessed by reading or writing to the physical address of the memory. Isolated I/O provides ports that are gateways between the interface device and the processor. They are isolated from the system using a buffering system and are accessed by four machine code instructions: IN, INS, OUT, OUTS. The IN (INS) instruction inputs a byte, or a word, and the
OUT (OUTS) instruction outputs a byte, or a word. A high-level compiler interprets the equivalent high-level functions and produces machine code that uses these instructions. Figure 2.7 shows the two methods. This figure also tells us that devices are not directly connected onto the address and data bus, because they might use part of the memory that a program uses or they could cause a hardware fault. An interface device interprets the microprocessor signals and generates the required memory signals. Two main output lines differentiate between a read and a write operation (R/W) and between memory-mapped and isolated access (M/IO). The R/W line is low when data is being written to memory and high when data is being read. When M/IO is high, memory-mapped access is selected, and when low, isolated I/O is selected. (1) Isolated I/O. The most common I/O transfer technique used in Intel microprocessor-based systems is isolated I/O. The term "isolated" describes how the I/O locations are isolated from the memory system in a separate I/O address space. The addresses for isolated I/O devices, called ports, are separate from the memory. Because the ports are separate, the user can expand the memory to its full size without using any of the memory space for I/O devices. A disadvantage of isolated I/O is that the data transferred between I/O and the microprocessor must be accessed
Figure 2.7 Access memory-mapped and isolated I/O.
by the IN, INS, OUT, and OUTS instructions. Separate control signals for the I/O space are developed (using M/IO and R/W), which indicate an I/O read (IORC) or an I/O write (IOWC) operation. These signals indicate that an I/O port address, which appears on the address bus, is used to select the I/O device. In the personal computer, isolated I/O ports are used for controlling peripheral devices such as the direct memory access (DMA) controller, NMI reset, game I/O adaptor, floppy disk controller, second serial port (COM2), and primary serial port (COM1). An 8-bit port address is used to access devices located on the system board, such as the timer and keyboard interface, while a 16-bit port address is used to access serial and parallel ports as well as video and disk drive systems. (2) Memory-mapped I/O. Interface devices can map directly onto the system address and data bus. Unlike isolated I/O, memory-mapped I/O does not use the IN, INS, OUT, or OUTS instructions. Instead, it uses any instruction that transfers data between the microprocessor and memory. A memory-mapped I/O device is treated as a memory location in the memory map. The main advantage of memory-mapped I/O is that any memory transfer instruction can be used to access the I/O device. The main disadvantage is that a portion of the memory system is used as the I/O map, which reduces the amount of usable memory available to applications. In a PC-compatible system the address bus is 20 bits wide, from address 00000h to FFFFFh (1 MB). Figure 2.8 gives a typical memory allocation in a PC.
Figure 2.8 Typical PC memory map.
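The two access styles differ only in the instructions (and, in C, the constructs) used to reach the device. The sketch below shows an isolated-I/O read/write pair built on the IN and OUT instructions via GCC inline assembly, and a memory-mapped register reached through an ordinary volatile pointer. The port number format follows x86 conventions, but the specific register address used here is a placeholder for illustration, not a real device assignment.

    #include <stdint.h>

    /* Isolated I/O: the OUT and IN instructions reach the separate I/O space. */
    static inline void outb(uint16_t port, uint8_t value)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port)
    {
        uint8_t value;
        __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    }

    /* Memory-mapped I/O: the device register is just an address in the memory map,
       so ordinary load/store instructions (via a volatile pointer) are used.
       0x000A0000 is a placeholder address for illustration only. */
    #define MMIO_REG  ((volatile uint8_t *)0x000A0000UL)

    static inline void    mmio_write(uint8_t value) { *MMIO_REG = value; }
    static inline uint8_t mmio_read(void)           { return *MMIO_REG;  }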
2.1.3.2 Basic Input/Output Interfaces
The basic input device is a set of three-state buffers. The basic output device is a set of data latches. The term IN refers to moving data from the I/O device into the microprocessor, and the term OUT refers to moving data out of the microprocessor to the I/O device. Many I/O devices accept or release information at a much slower rate than the microprocessor. Another method of I/O control, called "handshaking" or "polling," synchronizes the I/O device with the microprocessor. An example device that requires handshaking is a parallel printer that prints 100 characters per second (CPS). It is obvious that the microprocessor can send more than 100 CPS to the printer, so handshaking must be used to slow the microprocessor down to match speeds with the printer. (1) The basic input interface. 74ALS244 three-state buffers are used to construct the 8-bit input port depicted in Fig. 2.9(a). The external TTL data (simple toggle switches in this example) are connected to the inputs of the buffers. The outputs of the buffers connect to the data bus. The exact data bus connections depend on the version of the microprocessor. For example, the 8088 has data bus connections D7–D0, the 80486 has D31–D0, and the Pentium to Pentium 4 have D63–D0. The circuit of Fig. 2.9(a) allows the microprocessor to read the contents of the eight switches that connect to any 8-bit section of the data bus when the select signal SEL becomes logic 0. Thus, whenever the IN instruction executes, the contents of the switches are copied into the AL register. When the microprocessor executes an IN instruction, the I/O port address is decoded to generate the logic 0 on SEL. A 0 placed on the output control inputs (1G and 2G) of the 74ALS244 buffer causes the data input connections (A) to be connected to the data output (Y) connections. If a logic 1 is placed on the output control inputs of the 74ALS244 buffer, the device enters the three-state high-impedance mode that effectively disconnects the switches from the data bus. The basic input circuit is not optional and must appear any time that input data are interfaced to the microprocessor. Sometimes it appears as a discrete part of the circuit, as shown in Fig. 2.9(a); sometimes it is built into a programmable I/O device. It is possible to interface 16- or 32-bit data to various versions of the microprocessor, but this is not nearly as common as using 8-bit data. To interface 16 bits of data, the circuit in Fig. 2.9(a) is doubled to include two 74ALS244 buffers that connect 16 bits of
Figure 2.9 The basic input and output interfaces. (a) The basic input interface illustrating the connection of eight switches. Note that the 74ALS244 is a three-state buffer that controls the application of switch data to the data bus. (b) The basic output interface connected to a set of LED displays.
input data to the 16-bit data bus. To interface 32 bits of data, the circuit is expanded by a factor of 4.
(2) The basic output interface. The basic output interface receives data from the microprocessor and must usually hold it for some external device. Its latches or flip-flops, like the buffers found in the input device, are often built into the I/O device. Figure 2.9(b) shows how eight simple light-emitting diodes (LEDs) connect to the microprocessor through a set of eight data latches. The latch stores the number output by the microprocessor from the data bus so that the LEDs can be lit with any 8-bit binary number. Latches are needed to hold the data because, when the microprocessor executes an OUT instruction, the data are present on the data bus for less than 1.0 µs. Without a latch, the viewer would never see the LEDs illuminate. When the OUT instruction executes, the data from AL, AX, or EAX are transferred to the latch via the data bus. Here, the D inputs of a 74ALS374 octal latch are connected to the data bus to capture the output data, and the Q outputs of the latch are attached to the LEDs. When a Q output becomes logic 0, its LED lights. Each time the OUT instruction executes, the SEL signal to the latch activates, capturing the data output to the latch from any 8-bit section of the data bus. The data are held until the next OUT instruction executes. Thus, whenever the output instruction is executed in this circuit, the data from the AL register appear on the LEDs.
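The handshaking (polling) idea described above can be sketched in a few lines of C. The port addresses and the BUSY status bit below follow the conventional PC parallel printer port (data register at 378h, status register at 379h); they are stated only as assumptions for illustration and should be checked against the actual interface being used.

/* Sketch: polled ("handshaking") output to a slow device such as a printer. */
#include <stdint.h>
#include <sys/io.h>               /* inb(), outb(), ioperm() - x86 Linux only */

#define LPT_DATA    0x378
#define LPT_STATUS  0x379
#define NOT_BUSY    0x80          /* status bit 7 = 1 when the printer is ready */

static void lpt_putc(uint8_t ch)
{
    while ((inb(LPT_STATUS) & NOT_BUSY) == 0)
        ;                         /* spin until the printer can accept a byte   */
    outb(ch, LPT_DATA);           /* latch the byte into the output port        */
    /* A complete driver would also pulse the STROBE line in the control
     * port (base + 2) to tell the printer that new data are valid. */
}

int main(void)
{
    const char *msg = "HELLO\r\n";
    if (ioperm(LPT_DATA, 3, 1) != 0)
        return 1;                 /* need port-access permission (root)         */
    for (const char *p = msg; *p; ++p)
        lpt_putc((uint8_t)*p);
    return 0;
}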
2.1.4 Microprocessor Unit Bus System Operations
This subsection uses the Peripheral Component Interconnect (PCI) bus to introduce microprocessor unit bus system operations. The PCI bus was developed by Intel for its Pentium processors. This bus can be populated with adapters that require fast access to each other and/or to system memory and that can be accessed by the processor at speeds approaching the processor's full native bus speed. A PCI physical device package may take the form of a component integrated onto the system board or may be implemented on a PCI add-in card. Each PCI package (referred to in the specification as a device) may incorporate from one to eight separate functions. A function is a logical device. Each function contains its own, individually addressable configuration space, 64 double words in size. Its configuration registers are implemented in this space. Using these registers, the configuration software can automatically detect the presence of a function, determine its resource requirements including memory space, I/O space, interrupt lines, etc., and can then assign resources
to the function that are guaranteed not to conflict with the resources assigned to other devices.
2.1.4.1 Bus Operations
The PCI bus operates in the multiplexing mode (also called normal mode) and/or in the burst mode. In multiplexing mode, the address and data lines are used alternately. First the address is sent, followed by a data read or write. Unfortunately, this mode requires two or three clock cycles for a single transfer: an address phase followed by a read or write cycle. The multiplex mode obviously slows down the maximum transfer rate. Additionally, a PCI bus can be operated in burst mode. A burst transfer is one consisting of a single address phase followed by two or more data phases. In the burst mode, the bus master only has to arbitrate for bus ownership one time. The start address and transaction type are issued during the address phase. All devices on the bus latch the address and transaction type and decode them to determine which device is the target. The target device latches the start address into an address counter and is responsible for incrementing the address from data phase to data phase. Figure 2.10 shows an example of the burst data transfer. There are two participants in every PCI burst transfer: the initiator and the target. The initiator, or bus master, is the device that initiates a transfer. The target is the device currently addressed by the initiator for the purpose of performing a data transfer. PCI initiator and target devices are commonly referred to as PCI-compliant agents in the specifications. It should be noted that a PCI target may be designed such that it can only handle single data phase transactions. When a bus master attempts to perform a burst transaction, such a target forces the master to terminate the transaction at the completion of the first data phase. The master must rearbitrate for
Figure 2.10 Example of the burst data transfer.
the bus to attempt resumption of the burst when the next data phase completes. Each burst transfer consists of the following basic components: (1) the address and the transfer type are output during the address phase; (2) a data object may then be transferred during each subsequent data phase. Assuming that neither the initiator nor the target device inserts wait states in each data phase, a data object may be transferred on the rising edge of each PCI clock cycle. At a PCI bus clock frequency of 33 MHz, a transfer rate of 132 MB/s may be achieved. A transfer rate of 264 MB/s may be achieved in a 64-bit implementation when performing 64-bit transfers during each data phase. (1) Address phase. Refer to Fig. 2.11. Every PCI transaction (with the exception of a transaction using 64-bit addressing) starts off with an address phase one PCI clock period in duration. During the address phase, the initiator identifies the target device and the type of transaction (also referred to as command type). The target device is identified by driving a start address within its assigned range onto the PCI address and data bus. At the same time, the initiator identifies the type of transaction by driving the command type onto the 4-bit wide PCI Command/Byte Enable bus. The initiator also asserts the FRAME# signal to indicate the presence of a valid start address or transaction type on the bus. Since the initiator only presents the start address and
Figure 2.11 Typical PCI bus transactions.
command for one PCI clock cycle, it is the responsibility of every PCI target device to latch the address and command on the next rising edge of the clock so that they may be decoded subsequently. By decoding the address latched from the address bus and the command type latched from the Command/Byte Enable bus, a target device can determine whether it is being addressed and the type of transaction in progress. It is important to note that the initiator only supplies a start address to the target during the address phase. Upon completion of the address phase, the address/data bus becomes the data bus for the duration of the transaction and is used to transfer data in each of the data phases. It is the responsibility of the target to latch the start address and to autoincrement it to point to the next group of locations during each subsequent data transfer.
(2) Data phase. Refer to Fig. 2.11. The data phase of a transaction is the period during which a data object is transferred between the initiator and the target. The number of data bytes to be transferred during a data phase is determined by the number of Command/Byte Enable signals that are asserted by the initiator during the data phase. Each data phase is at least one PCI clock period in duration. Both the initiator and the target must indicate that they are ready to complete a data phase, or the data phase is extended by a wait state one PCI CLK period in duration. The PCI bus defines ready signal lines used by both the initiator (IRDY#) and the target (TRDY#) for this purpose. The initiator does not issue a transfer count to the target. Rather, in each data phase it indicates whether it is ready to transfer the current data item and, if it is, whether it is the final data item. FRAME# is asserted at the start of the address phase and remains asserted until the initiator is ready (asserts IRDY#) to complete the final data phase. When the target samples IRDY# asserted and FRAME# deasserted, it realizes that this is the final data phase. Refer to Fig. 2.11. The initiator indicates that the last data transfer (of a burst transfer) is in progress by deasserting FRAME# and asserting IRDY#. When the last data transfer has been completed, the initiator returns the PCI bus to the idle state by deasserting its ready line (IRDY#). If another bus master had previously been granted ownership of the bus by the PCI bus arbiter and was waiting for the current initiator to surrender the bus, it can detect that the bus has returned to the idle state by detecting FRAME# and IRDY# both deasserted on the same rising edge of the PCI clock.
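As a quick check on the peak figures quoted above, the burst-mode bandwidth is simply the bus width multiplied by the clock rate, assuming one data phase per clock and no wait states:

\[
33\times10^{6}\ \tfrac{\text{data phases}}{\text{s}} \times 4\ \tfrac{\text{bytes}}{\text{phase}} = 132\ \text{MB/s}, \qquad
33\times10^{6} \times 8\ \text{bytes} = 264\ \text{MB/s (64-bit bus)}.
\]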
2.1.4.2 Bus System Arbitration
Bus masters are devices on a PCI bus that are allowed to take control of the bus. A component named the bus arbiter serves this purpose. An arbiter is usually integrated into the PCI chip set; specifically, it is typically integrated into the host/PCI or the PCI/expansion bus bridge chip. Each master device is physically connected to the arbiter via a separate pair of lines, one carrying the REQ# (request) signal and the other the GNT# (grant) signal. Ideally, the bus arbiter should be programmable by the system. If it is, the startup configuration software can determine the priority to be assigned to each master by reading from the maximum latency (Max_Lat) configuration register associated with each bus master (see Fig. 2.12). The bus designer hardwires this register to indicate, in increments of 250 ns, how quickly the master requires access to the bus in order to achieve adequate performance. At a given instant in time, one or more PCI bus master devices may require use of the PCI bus to perform a data transfer with another PCI device. Each requesting master asserts its REQ# output to confirm to the bus arbiter its pending request for the use of the bus. In order to grant the PCI bus to a bus master, the arbiter asserts the device's respective GNT# signal. This grants the bus to a bus master for one transaction as given in Fig. 2.11. If a master generates a request, is subsequently granted the bus, and does not initiate a transaction by asserting the FRAME# signal within 16 PCI clocks after the bus goes idle, the arbiter may assume that this bus master is malfunctioning. In this case, the action taken by the arbiter would depend upon the system design. If a bus master has another transaction to
Figure 2.12 PCI configuration space.
perform immediately after the one it just initiated, it should keep its REQ# line asserted when it asserts the FRAME# signal to begin the current transaction. This informs the arbiter of its desire to maintain ownership of the bus after completion of the current transaction. In the event that ownership is not maintained, the master should keep its REQ# line asserted until it is successful in acquiring bus ownership again. However, at a given instant in time, only one bus master may use the bus. This means that no more than one GNT# line will be asserted by the arbiter during any PCI clock cycle. On the other hand, a master must only assert its REQ# output to signal a current need for the bus. This means that a master must not use its REQ# line to “park” the bus on itself. If a system designer implements a bus parking scheme, the bus arbiter design should indicate a default bus owner by asserting that device's GNT# signal when no requests from any bus masters are currently pending. In this manner, the default master is granted the bus immediately once no other bus master requires the use of the PCI bus. The PCI specification does not define the scheme used by the PCI bus arbiter to decide the winner of the competition when multiple masters simultaneously request bus ownership. The arbiter may utilize any scheme, such as one based on fixed or rotational priority or a combination of these two, to avoid deadlocks. However, the central arbiter is required to implement a fairness algorithm to avoid deadlocks. Fairness means that each potential bus master must be granted access to the bus independent of other requests. Fairness is defined as a policy that ensures that high-priority masters will not dominate the bus to the exclusion of lower priority masters when they are continually requesting the bus. However, this does not mean that all agents are required to have equal access to the bus. By requiring a fairness algorithm there are no special conditions to handle when the signal LOCK# is active (assuming a resource lock) or when cacheable memory is located on PCI. A system that uses a fairness algorithm is still considered fair if it implements a complete bus lock instead of a resource lock. However, the arbiter must advance to a new agent if the initial transaction attempting to establish a lock is terminated with retry.
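The PCI specification leaves the arbitration algorithm to the designer, so the following C sketch shows only one possible fairness scheme, a simple rotating-priority pass of the kind mentioned above; it is an illustrative model, not an algorithm required by the specification.

/* Sketch: one rotating-priority arbitration pass. The master granted last
 * drops to lowest priority, so no requester can be starved indefinitely. */
#include <stdio.h>

#define MASTERS 4

/* Returns the master to grant (-1 if none), given the REQ# requests and
 * the previous winner. */
static int arbitrate(const int req[MASTERS], int last_grant)
{
    for (int i = 1; i <= MASTERS; ++i) {
        int candidate = (last_grant + i) % MASTERS;
        if (req[candidate])
            return candidate;
    }
    return -1;
}

int main(void)
{
    int req[MASTERS] = {1, 0, 1, 1};     /* masters 0, 2, and 3 requesting   */
    int grant = MASTERS - 1;             /* so master 0 starts with priority */
    for (int transaction = 0; transaction < 6; ++transaction) {
        grant = arbitrate(req, grant);
        printf("transaction %d granted to master %d\n", transaction, grant);
    }
    return 0;
}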
2.1.4.3 Interrupt Routing
The host/PCI bus bridge transfers the interrupt acknowledgment cycle from the processor to the PCI bus, which requires that the microprocessor chipset have interrupt routing functionality. This router could be implemented using an Intel APIC I/O module, as given in Figs 2.3(a) and 2.4. The APIC I/O module can be programmed to
assign a separate interrupt vector (interrupt table entry number) for each of the PCI interrupt request lines. It can also be programmed so that it realizes that one of its inputs is connected to an Intel programmable interrupt controller. If a system does not have this kind of programmable interrupt controller, the microprocessor chipset should incorporate a software-programmable interrupt routing device. In this case, the startup configuration software of the microprocessor attempts to program the router to distribute the PCI interrupts in an optimal fashion. Whenever any of the PCI interrupt request lines is asserted, the APIC I/O module supplies the vector (see Fig. 2.6 for an interrupt vector table) associated with that input to the processor's embedded local APIC module. Whenever the programmable interrupt controller generates a request, the APIC I/O module informs the processor that it must poll the programmable interrupt controller to get the vector. In response, the Intel processor can generate two back-to-back Interrupt Acknowledge transactions. The first Interrupt Acknowledge forces the programmable interrupt controller to prioritize the interrupts pending, while the second Interrupt Acknowledge requests that the interrupt controller send the vector to the processor. For a detailed discussion of APIC operation, refer to the MindShare book entitled Pentium Processor System Architecture (published by Addison-Wesley). For a detailed description of the Programmable Interrupt Controller chipset, refer to Section 2.2.2. Figure 2.11 can also be used to explain an interrupt acknowledgment cycle on the PCI bus, where a single byte enable is asserted. The PCI bus performs only one interrupt acknowledgment cycle per interrupt. Only one device may respond to the interrupt acknowledgment; that device must assert DEVSEL# to indicate that it is claiming the interrupt acknowledgment. The sequence is as follows: (1) During the address phase, the AD signals do not contain a valid address; they must be driven with stable data so that parity can be checked. The C/BE# signals contain the interrupt acknowledge command code (not shown). (2) IRDY# and the BE#s are driven by the host/PCI bus bridge to indicate that the bridge (master) is ready for the response. (3) The target will drive DEVSEL# and TRDY# along with the vector on the data bus (not shown).
2.1.4.4 Configuration Registers
Each PCI device has 256 bytes of configuration data, which is arranged as 64 registers of 32 bits. It contains a 64-byte predefined header followed
by a further 192 bytes of additional configuration data. Figure 2.12 shows the arrangement of the header. The definitions of the fields in this header are as follows:
(1) Unit ID and Man. ID. A Unit ID of FFFFh indicates that there is no unit installed, while any other value identifies the unit. The PCI SIG, which is the governing body for the PCI specification, allocates the Man. ID. This ID is normally shown at BIOS start-up.
(2) Status and command.
(3) Class code and revision. The class code defines the PCI device type. It is split into two 8-bit values, with a further 8-bit value that defines the programming interface for the unit. The first value defines the unit classification, followed by a subcode that defines the actual type.
(a) BIST, header, latency, CLS. The built-in self-test (BIST) is an 8-bit field, where the most significant bit defines whether the device can carry out a BIST, the next bit defines whether a BIST is to be performed (a 1 in this position indicates that it should be performed), and bits 3–0 define the status code after the BIST has been performed (a value of zero indicates no error). The header field defines the layout of the 48 bytes after the standard 16-byte header. The most significant bit of the header field defines whether the device is a multifunction device or not; a 1 defines a multifunction unit. The cache line size (CLS) field defines the system cache line size in units of 32-bit words. Latency indicates the length of time for a PCI bus operation, where the amount of time is the latency + 8 PCI clock cycles.
(b) Base address register. This area allows the device to be programmed with an I/O or memory address area. It can contain a number of 32- or 64-bit addresses. The format of a memory address is (i) Bits 63–4: base address; (ii) Bit 3: PRF. Prefetching, 0 indicates not possible, 1 indicates possible; (iii) Bits 2, 1: Type. 00—any 32-bit address, 01—less than 1 MB, 10—any 64-bit address, and 11—reserved; (iv) Bit 0: 0. Always 0 for a memory address. For an I/O address space it is defined as: (i) Bits 31–2: base address; (ii) Bits 1, 0: 01. Always 01 for an I/O address.
(c) Expansion ROM base address. This allows a ROM expansion to be placed at any position in the 32-bit memory address area.
(d) Max_Lat, Min_GNT, INT-pin, INT-line. The Min_GNT and Max_Lat registers are read-only registers that define minimum
and maximum latency values. The INT-line field is a 4-bit field that defines the interrupt line used (IRQ0–IRQ15). A value of 0 corresponds to IRQ0 and a value of 15 corresponds to IRQ15. The PCI bridge can then redirect this interrupt to the correct IRQ line. The 4-bit INT-pin field defines the interrupt pin that the device is using. A value of 0 defines no interrupt pin, 1 defines INTA, 2 defines INTB, and so on.
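The following C sketch shows how configuration software might read the header of Fig. 2.12 on an x86 platform using configuration mechanism #1 (I/O ports CF8h and CFCh). The mechanism, the use of iopl(), and the bus/device numbers are assumptions made for illustration; many newer systems reach configuration space through a memory-mapped window instead.

/* Sketch: reading PCI configuration registers via ports CF8h/CFCh. */
#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>                  /* outl(), inl(), iopl() - x86 Linux only */

#define PCI_CONFIG_ADDRESS  0xCF8
#define PCI_CONFIG_DATA     0xCFC

static uint32_t pci_config_read32(uint8_t bus, uint8_t dev,
                                  uint8_t func, uint8_t reg)
{
    uint32_t addr = (1u << 31)               /* enable bit                 */
                  | ((uint32_t)bus  << 16)
                  | ((uint32_t)dev  << 11)
                  | ((uint32_t)func << 8)
                  | (reg & 0xFC);            /* dword-aligned register     */
    outl(addr, PCI_CONFIG_ADDRESS);
    return inl(PCI_CONFIG_DATA);
}

int main(void)
{
    if (iopl(3) != 0)                        /* need I/O privilege (root)  */
        return 1;
    uint32_t id = pci_config_read32(0, 0, 0, 0x00);  /* Man. ID / Unit ID  */
    if ((id & 0xFFFF) != 0xFFFF)             /* FFFFh means no unit there  */
        printf("bus 0, device 0: manufacturer %04Xh, unit %04Xh, class %06Xh\n",
               (unsigned)(id & 0xFFFF), (unsigned)(id >> 16),
               (unsigned)(pci_config_read32(0, 0, 0, 0x08) >> 8));
    return 0;
}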
2.2 Programmable Peripheral Devices

A programmable peripheral device is designed to perform various interface functions. Such a device can be set up to perform specific functions by writing an instruction (or instructions) into its internal register, called the control register. Furthermore, the function can be changed at any time during execution of the program by writing a new instruction into the control register. These devices are flexible, versatile, and economical; they are widely used in microprocessor-based products. In a programmable device, in contrast with hardwired interface logic, the functions are determined through software instructions. A programmable peripheral device can not only be viewed as a multiple I/O device, but it also performs many other functions, such as time delay, interrupt handling, and graphical user–machine interaction. In fact, it consists of many devices on a single chip, interconnected through a common bus. This is a hardware approach, under software control, to performing the I/O functions discussed earlier in this chapter. This approach, a trade-off between hardware and software, should reduce programming effort. This section describes five typical programmable peripheral devices: the programmable I/O ports, the interrupt controller, the timer controller, the CMOS chipset, and the DMA controller.
2.2.1 Programmable Peripheral I/O Ports
The programmable peripheral interface (PPI), especially the 8255 Programmable Peripheral I/O Interface, is a very popular and versatile input/output chip that is easily configured to operate in several different configurations. The 8255 is used on several ranges of cards that plug into an available slot in controllers or computers. This chip allows the use of both digital input and output (DIO) with controllers or computers. As illustrated in Fig. 2.13, each 8255 Programmable Peripheral I/O Interface has three 8-bit TTL-compatible I/O ports that allow the
Figure 2.13 8255 Programmable peripheral I/O interface functional blocks.
control of up to 24 individual outputs, 24 individual inputs, or a mixture of inputs and outputs. For example, the ports can be attached to a robotic device, using motors to control motion and switches to detect position. Addressing ports is different from addressing memory: ports have port addresses and memory has memory addresses, so port address 1234 is different from memory address 1234. The 8255 Programmable Peripheral I/O Interface cards use port addresses and cannot be set to use memory addresses (see Table 2.2). The cards plug into any available 8- or 16-bit slot (also known as an AT or ISA slot) on the motherboard of a controller or a computer, just as a sound card or disk drive controller card does in a personal computer. The CPU of the motherboard communicates with a card by knowing the card's address and sending data to it. By physically setting jumpers on the card, we can assign a set of addresses to the card; then, in software, we tell the CPU what these addresses are (more about this in the programming section). The first thing that must be done, before the chip can be used, is to tell it which configuration is required. The configuration tells the 8255 whether ports are inputs or outputs, and it can even select more unusual arrangements called bidirectional and strobed operation. The 8255 allows for three distinct operating modes (modes 0, 1, and 2) as follows:
(1) Mode 0—Basic input/output. Ports A and B operate as either inputs or outputs, and Port C is divided into two 4-bit groups, either of which can be operated as inputs or outputs.
Table 2.2 DC-0600 Address Options

8255 Port                  Option 1: Default (JP2 linked)   Option 2 (JP2 open)
                           Address [Hex (decimal)]          Address [Hex (decimal)]
Port 1A                    300H (768)                       360H (864)
Port 1B                    301H (769)                       361H (865)
Port 1C                    302H (770)                       362H (866)
Port 1 Control register    303H (771)                       363H (867)
Port 2A                    304H (772)                       364H (868)
Port 2B                    305H (773)                       365H (869)
Port 2C                    306H (774)                       366H (870)
Port 2 Control register    307H (775)                       367H (871)
(2) Mode 1—Strobed input/output. Same as Mode 0, but Port C is used for handshaking and control.
(3) Mode 2—Bidirectional bus. Port A is bidirectional (both input and output) and Port C is used for handshaking. Port B is not used.
For most applications using this range of cards, mode 0 will be used. Each of the three ports has 8 bits, and each of these bits can be individually set ON or OFF, somewhat like having three banks of eight light switches. These bits are configured in groups to be inputs or outputs, allowing their function to either read data into the computer or control data out of the computer. The various modes can be set by sending a value to the control port. The control port is Base Address + 3 (i.e., 768 + 3 = 771 decimal). Table 2.3 shows the different arrangements that can be configured and the values to be sent to the configuration port.

Table 2.3 8255 Control Register Configuration (Mode 0)

Control word [Hex (Dec)]   Port A   Port B
80H (128)                  OUT      OUT
82H (130)                  OUT      IN
85H (133)                  OUT      OUT
87H (135)                  OUT      IN
88H (136)                  IN       OUT
8AH (138)                  IN       IN
8CH (140)                  IN       OUT
8FH (143)                  IN       IN
As mentioned, the control port is Base Address + 3. Port A is always at Base Address; Port B is Base Address + 1; Port C is Base Address + 2. Thus, in our example, Ports A, B, and C are at 768, 769, and 770 (decimal), respectively. Writing, say, 128 to the control port then configures the 8255 with all three ports set for output.
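A minimal C sketch of this programming sequence is given below, assuming the Option 1 base address of 300h from Table 2.2 and Linux-style port I/O; both are assumptions for illustration, since the real base address is set by the card's jumpers.

/* Sketch: mode-0 use of an 8255 card at the Option 1 base address of Table 2.2. */
#include <stdint.h>
#include <unistd.h>
#include <sys/io.h>                  /* inb(), outb(), ioperm() - x86 Linux only */

#define BASE    0x300                /* Option 1 default (JP2 linked) */
#define PORT_A  (BASE + 0)
#define PORT_B  (BASE + 1)
#define PORT_C  (BASE + 2)
#define CTRL    (BASE + 3)           /* control register              */

int main(void)
{
    if (ioperm(BASE, 4, 1) != 0)
        return 1;                    /* need port-access permission   */

    outb(0x80, CTRL);                /* control word 80h: mode 0, all ports output */
    for (int i = 0; i < 8; ++i) {    /* walk a single bit across Port A            */
        outb((uint8_t)(1u << i), PORT_A);
        usleep(100000);
    }

    outb(0x82, CTRL);                /* control word 82h: Port A output, Port B input */
    uint8_t switches = inb(PORT_B);  /* read whatever is wired to Port B              */
    outb(switches, PORT_A);          /* echo it on Port A                             */
    return 0;
}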
2.2.2 Programmable Interrupt Controller Chipset
Both microcontroller and microcomputer system designs require that I/O devices such as keyboards, displays, sensors, and other components receive servicing in an efficient manner so that large amounts of the total system tasks can be assumed by the microcomputer with little or no effect on throughput. As mentioned in Section 2.1.2, there are two common methods of servicing such devices: the first is the polled approach; the second, and more desirable, method is the interrupt, which allows the microprocessor to continue executing its main program and only stop to service peripheral devices when it is told to do so by the device itself. The programmable interrupt controller (PIC) functions as an overall manager in an interrupt-driven system environment. It accepts requests from the peripheral equipment, determines which of the incoming requests is of the highest importance (priority), ascertains whether the incoming request has a higher priority value than the level currently being serviced, and issues an interrupt to the CPU based on this determination. The PIC, after issuing an interrupt to the CPU, must somehow input information into the CPU that can “point” the program counter to the service routine associated with the requesting device. This “pointer” is an address in a vectoring table and will often be referred to, in this document, as vectoring data. The 8259A is taken as an example of the PIC. It manages eight levels of requests and has built-in features for expandability to other 8259As (up to 64 levels). It is programmed by the system's software as an I/O peripheral. A selection of priority modes is available to the programmer so that the manner in which the requests are processed by the 8259A can be configured to match system requirements. The priority modes can be changed or reconfigured dynamically at any time while the main program is executing. This means that the complete interrupt structure can be defined based on the requirements of the total system environment. Figure 2.14 gives the block function diagram of the 8259A PIC, which includes these function blocks and pins:
(1) Interrupt request register (IRR) and in-service register (ISR). The interrupts at the IR input lines are handled by two registers
Figure 2.14 8259A PIC block diagram.
in cascade, the IRR and the ISR. The IRR is used to store all the interrupt levels that are requesting service, and the ISR is used to store all the interrupt levels that are being serviced.
(2) Priority resolver. This logic block determines the priorities of the bits set in the IRR. The highest priority is selected and strobed into the corresponding bit of the ISR during the INTA pulse.
(3) Interrupt mask register (IMR). The IMR stores the bits that mask the interrupt request lines. The IMR operates on the IRR. Masking of a higher priority input will not affect the interrupt request lines of lower priority.
(4) INT (interrupt). This output goes directly to the CPU interrupt input. The VOH level on this line is designed to be fully compatible with the 8080A, 8085A, and 8086 input levels.
(5) INTA (interrupt acknowledge). INTA pulses will cause the 8259A to release vectoring information onto the data bus. The format of this data depends on the system mode (µPM) of the 8259A.
(6) Data bus buffer. This three-state, bidirectional 8-bit buffer is used to interface the 8259A to the system data bus. Control words and status information are transferred through the data bus buffer.
(7) Read/write control logic. The function of this block is to accept OUTPUT commands from the CPU. It contains the initialization command word (ICW) registers and operation command word (OCW) registers that store the various control formats for device operation. This function block also allows the status of the 8259A to be transferred onto the data bus.
(8) CS (chip select). A LOW on this input enables the 8259A. No reading or writing of the chip will occur unless the device is selected.
(9) WR (write). A LOW on this input enables the CPU to write control words (ICWs and OCWs) to the 8259A.
(10) RD (read). A LOW on this input enables the 8259A to send the status of the IRR, ISR, IMR, or the interrupt level onto the data bus.
(11) A0. This input signal is used in conjunction with the WR and RD signals to write commands into the various command registers, as well as to read the various status registers of the chip. This line can be tied directly to one of the address lines.
(12) The cascade buffer/comparator. This function block stores and compares the IDs of all 8259As used in the system. The associated three I/O pins (CAS0–2) are outputs when the 8259A is used as a master and are inputs when the 8259A is used as a slave. As a master, the 8259A sends the ID of the interrupting slave device onto the CAS0–CAS2 lines. The slave thus selected will send its preprogrammed subroutine address onto the data bus during the next one or two consecutive INTA pulses.
The powerful features of the 8259A in a microcomputer system are its programmability and the interrupt routine addressing capability. The latter allows direct or indirect jumping to the specific interrupt routine requested without any polling of the interrupting devices. The normal sequence of events during an interrupt depends on the type of CPU being used. The events occur as follows in an MCS-80/85 system:
(a) One or more of the INTERRUPT REQUEST lines (IR7–IR0) are raised high, setting the corresponding IRR bit(s).
(b) The 8259A evaluates these requests and sends an INT to the CPU, if appropriate.
(c) The CPU acknowledges the INT and responds with an INTA pulse.
(d) Upon receiving an INTA from the CPU group, the highest priority ISR bit is set, and the corresponding IRR bit is reset. The 8259A will also release a CALL instruction code (11001101) onto the 8-bit data bus through its D7–D0 pins.
(e) This CALL instruction will initiate two more INTA pulses to be sent to the 8259A from the CPU group.
(f) These two INTA pulses allow the 8259A to release its preprogrammed subroutine address onto the data bus. The lower 8-bit address is released at the first INTA pulse and the higher 8-bit address is released at the second INTA pulse.
(g) This completes the 3-byte CALL instruction released by the 8259A. In the AEOI mode the ISR bit is reset at the end of the third INTA pulse. Otherwise, the ISR bit remains set until an appropriate EOI command is issued at the end of the interrupt sequence.
The events occurring in an 8086 system are the same until the fourth step; from the fourth step onward:
(d) Upon receiving an INTA from the CPU group, the highest priority ISR bit is set and the corresponding IRR bit is reset. The 8259A does not drive the data bus during this cycle.
(e) The 8086 will initiate a second INTA pulse. During this pulse, the 8259A releases an 8-bit pointer onto the data bus, where it is read by the CPU.
(f) This completes the interrupt cycle. In the AEOI mode the ISR bit is reset at the end of the second INTA pulse. Otherwise, the ISR bit remains set until an appropriate EOI command is issued at the end of the interrupt subroutine.
If no interrupt request is present at step (d) of either sequence (i.e., the request was too short in duration), the 8259A will issue interrupt level 7. Both the vectoring bytes and the CAS lines will look as if interrupt level 7 had been requested. When the 8259A PIC receives an interrupt, INT becomes active and an interrupt acknowledge cycle is started. If a higher priority interrupt occurs between the two INTA pulses, the INT line goes inactive immediately after the second INTA pulse. After an unspecified amount of time the INT line is activated again to signify the higher priority interrupt waiting for service. This inactive time is not specified and can vary between parts. The designer should be aware of this consideration when designing a system that uses the 8259A, and it is recommended that proper asynchronous design techniques be followed.
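For orientation, the following C sketch shows the classic PC-style initialization of a master/slave 8259A pair using the four initialization command words (ICW1–ICW4). The port addresses (20h/21h and A0h/A1h), the vector offsets, and the cascade connection on IR2 are PC conventions assumed for the example rather than requirements of the 8259A itself, and this sequence would normally be issued by firmware or an operating system, not by an application program.

/* Sketch: PC-convention initialization of a master/slave 8259A pair. */
#include <sys/io.h>            /* outb() - x86 Linux only */

#define PIC1_CMD   0x20        /* master 8259A, A0 = 0 */
#define PIC1_DATA  0x21        /* master 8259A, A0 = 1 */
#define PIC2_CMD   0xA0        /* slave 8259A          */
#define PIC2_DATA  0xA1

static void pic_init(void)
{
    outb(0x11, PIC1_CMD);      /* ICW1: edge-triggered, cascade mode, ICW4 needed */
    outb(0x11, PIC2_CMD);
    outb(0x20, PIC1_DATA);     /* ICW2: master vectors start at 20h               */
    outb(0x28, PIC2_DATA);     /* ICW2: slave vectors start at 28h                */
    outb(0x04, PIC1_DATA);     /* ICW3: a slave is attached to master IR2         */
    outb(0x02, PIC2_DATA);     /* ICW3: slave cascade identity = 2                */
    outb(0x01, PIC1_DATA);     /* ICW4: 8086/8088 mode, normal EOI                */
    outb(0x01, PIC2_DATA);
    outb(0xFB, PIC1_DATA);     /* OCW1 (IMR): unmask only IR2, the cascade input  */
    outb(0xFF, PIC2_DATA);     /* OCW1: mask every slave input for now            */
}

At the end of a service routine a nonspecific end-of-interrupt (OCW2 = 20h) is written to the command port, outb(0x20, PIC1_CMD), so that the in-service bit is cleared.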
Advanced programmable interrupt controllers (APICs) are designed to solve interrupt routing efficiency issues in multiprocessor computer systems. There are two components in the Intel APIC system: the local APIC (LAPIC) and the IOAPIC. The LAPIC is integrated into each CPU in the system, and the IOAPIC is used throughout the system's peripheral buses; there is typically one IOAPIC for each peripheral bus in the system. In the original system designs, LAPICs and IOAPICs were connected by a dedicated APIC bus. Newer systems use the system bus for communication between all APIC components. Each LAPIC manages all external interrupts for the processor it is part of. In addition, LAPICs are able to accept and generate interprocessor interrupts (IPIs) between LAPICs. LAPICs may support up to 224 usable IRQ vectors
from an IOAPIC. Vector numbers 0–31, out of 0–255, are reserved for exception handling by x86 processors. Each IOAPIC contains a redirection table, which is used to route the interrupts it receives from peripheral buses to one or more LAPICs.
2.2.3 Programmable Timer Controller Chipset
The programmable timer controller provides a programmable interval timer and counter designed for use with microcomputer systems to solve one of the most common problems in any microcomputer system: the generation of accurate time delays under software control. Instead of setting up timing loops in software, the programmer configures the programmable timer controller to match the requirements and programs one of the counters for the desired delay. After the desired delay, the programmable timer controller will interrupt the CPU. Software overhead is minimal and variable-length delays can easily be accommodated. Some of the other counting and timing functions common to microcomputers that can be implemented with it are the real-time clock, event counter, digital one-shot, programmable rate generator, square wave generator, binary rate multiplier, complex waveform generator, and complex motor controller. Figure 2.15 gives the typical function blocks for an 82C54 programmable interval timer controller, which has these main blocks:
(1) Data bus buffer. This three-state, bidirectional 8-bit buffer is used to interface the 82C54 to the system bus.
Figure 2.15 82C54 Programmable timer controller function blocks.
(2) Read/write logic. The read/write logic accepts inputs from the system bus and generates control signals for the other functional blocks of the 82C54. A1 and A0 select one of the three counters or the control word register to be read from or written into. A “low” on the RD input tells the 82C54 that the CPU is reading one of the counters. A “low” on the WR input tells the 82C54 that the CPU is writing either a control word or an initial count. Both RD and WR are qualified by CS; RD and WR are ignored unless the 82C54 has been selected by holding CS low.
(3) Control word register. The control word register is selected by the read/write logic when A1, A0 = 11. If the CPU then does a write operation to the 82C54, the data is stored in the control word register and is interpreted as a control word used to define the counter operation. The control word register can only be written to.
(4) Counter 0, Counter 1, Counter 2. These three functional blocks are identical in operation, so only a single counter will be described. The counters are fully independent. Each counter may operate in a different mode.
The programmable timer is normally treated by the system software as an array of peripheral I/O ports; three are counters and the fourth is a control register for mode programming. Basically, the select inputs A0, A1 connect to the A0, A1 address bus signals of the CPU. The CS signal can be derived directly from the address bus using a linear select method, or it can be connected to the output of a decoder. After power-up, the state of the programmable timer is undefined: the mode, count value, and output of all counters are undefined. How each counter operates is determined when it is programmed. Each counter must be programmed before it can be used; unused counters need not be programmed. Counters are programmed by writing a control word and then an initial count. All control words are written into the control word register, which is selected when A1, A0 = 11. The control word specifies which counter is being programmed. By contrast, initial counts are written into the counters, not the control word register; the A1, A0 inputs are used to select the counter to be written into. The format of the initial count is determined by the control word used.
(a) Write operations. A new initial count may be written to a counter at any time without affecting the counter's programmed mode in any way. Counting will be affected as described in the mode definitions. The new count must follow the programmed count format. If a counter is programmed to read and write 2-byte counts,
the following precaution applies: a program must not transfer control, between writing the first and second byte, to another routine that also writes into that same counter; otherwise, the counter will be loaded with an incorrect count.
(b) Read operations. There are three possible methods for reading the counters. The first is through the read-back command. The second is a simple read operation of the counter, which is selected with the A1, A0 inputs; the only requirement is that the CLK input of the selected counter must be inhibited by using either the GATE input or external logic, since otherwise the count may be in the process of changing when it is read, giving an undefined result. (The third is the counter latch command, which latches the current count so that it can be read without disturbing the counting in progress.)
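As an illustration of the write sequence just described, the sketch below programs counter 0 as a rate generator (mode 2), writing the control word and then the 2-byte initial count, low byte first. The counter addresses (40h–43h) and the 1.193182 MHz input clock are PC conventions assumed for the example; on another board they are fixed by how CS, A1, and A0 are decoded, and writing the PC's own timer from an application would disturb the operating system.

/* Sketch: programming 82C54 counter 0 as a rate generator (mode 2). */
#include <stdint.h>
#include <sys/io.h>            /* outb() - x86 Linux only */

#define CTR0  0x40             /* A1,A0 = 00 : counter 0             */
#define CTRL  0x43             /* A1,A0 = 11 : control word register */

static void timer_set_rate(uint32_t hz)
{
    uint16_t count = (uint16_t)(1193182u / hz);   /* initial count value */

    /* Control word 34h = 0011 0100b:
     * counter 0, read/write LSB then MSB, mode 2 (rate generator), binary. */
    outb(0x34, CTRL);

    /* Per the precaution above, nothing else may write this counter
     * between the two count bytes. */
    outb(count & 0xFF, CTR0);  /* low byte of the initial count  */
    outb(count >> 8,  CTR0);   /* high byte of the initial count */
}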
2.2.4 CMOS Chipset
The complementary metal oxide semiconductor (CMOS) chip is battery-powered and stores the hard drive's configuration and other information. In a microcomputer or a microcontroller, CMOS chips normally provide two functions: a real-time clock (RTC) and CMOS memory. The real-time clock provides the board with a time-of-day clock, a periodic interrupt, and system configuration information. With respect to personal computers, the CMOS chipset typically contains 64 (00h–3Fh) 8-bit locations of battery-backed CMOS RAM (random access memory). The split is (1) 00h–0Eh, used for real-time clock functions (time of day); (2) 0Fh–35h, used for system configuration information, for example, hard drive type, memory size, etc.; and (3) 36h–3Fh, used for power-on password storage. The CMOS memory is an accessible set of memory locations on the same chip as the RTC, and the battery backup allows both functions to retain their contents even when the computer is turned off. Battery-powered CMOS and RTCs did not originally exist, and the current time had to be entered manually every time the system was turned on. The CMOS memory is separate from the RTC registers, and there are several ways to update it. Specifically, the BIOS can update the century information, as can many operating systems, network time systems, and applications, or the user can set it using the appropriate commands.
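The index/data access style of the CMOS chip can be sketched as follows, assuming the conventional PC ports 70h (index) and 71h (data) and BCD-coded time registers; a robust driver would first read status register B (0Bh) to confirm the data format, so treat this only as an illustration.

/* Sketch: reading the time of day from the battery-backed RTC/CMOS chip. */
#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>                      /* inb(), outb(), ioperm() - x86 Linux only */

#define CMOS_INDEX  0x70
#define CMOS_DATA   0x71

static uint8_t cmos_read(uint8_t reg)
{
    outb(reg, CMOS_INDEX);               /* select the RTC/CMOS register */
    return inb(CMOS_DATA);               /* read its contents            */
}

static unsigned bcd_to_bin(uint8_t v)
{
    return (v >> 4) * 10u + (v & 0x0F);  /* registers are BCD by default */
}

int main(void)
{
    if (ioperm(CMOS_INDEX, 2, 1) != 0)
        return 1;                        /* need port-access permission  */
    printf("%02u:%02u:%02u\n",
           bcd_to_bin(cmos_read(0x04)),  /* hours   */
           bcd_to_bin(cmos_read(0x02)),  /* minutes */
           bcd_to_bin(cmos_read(0x00))); /* seconds */
    return 0;
}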
2.2.5 Direct Memory Access Controller Chipset
Direct memory access (DMA) is an I/O technique commonly used for high-speed data transfer among internal memories, I/O ports, and peripherals, and also between the memories and I/O devices on different chipsets.
The DMA technique allows the microprocessor to release control of the buses to a device called a DMA controller. The DMA controller manages data transfer between memory and a peripheral under its control, thus bypassing the microprocessor. The microprocessor communicates with the controller by using the chip select line, buses, and control signals. However, once the controller has gained control, it plays the role of a microprocessor for data transfer. For all practical purposes, the DMA controller is a microprocessor capable only of copying data at high speed from one location to another. As an illustration, the Intel 8237A programmable DMA controller is described below. The 8237A block diagram given in Fig. 2.16 includes the major logic blocks and all of the internal registers. The data interconnection paths are also shown; not shown are the various control signals between the blocks. The 8237A contains 344 bits of internal memory in the form of registers. Table 2.4 lists these registers by name and shows the size of each. The 8237A contains three basic blocks of control logic. The Timing Control block generates internal timing and external control signals for the 8237A. The Program Command Control block decodes the various commands
Figure 2.16 Intel 8237A DMA controller block diagram.
Table 2.4 8237A DMA Controller Internal Registers

Name                            Size (bits)   Number
Base address registers          16            4
Base word count registers       16            4
Current address registers       16            4
Current word count registers    16            4
Temporary address register      16            1
Temporary word count register   16            1
Status register                 8             1
Command register                8             1
Temporary register              8             1
Mode registers                  6             4
Mask register                   4             1
Request register                4             1
given to the 8237A by the microprocessor prior to servicing a DMA request. It also decodes the mode control word used to select the type of DMA during the servicing. The Priority Encoder block resolves priority contention between DMA channels requesting service simultaneously. To perform block moves of data from one memory address space to another with a minimum of program effort and time, the 8237A includes a memory-to-memory transfer feature. Programming a bit in the command register selects channels 0 and 1 to operate as memory-to-memory transfer channels. The transfer is initiated by setting the software DREQ for channel 0. The 8237A requests a DMA service in the normal manner. After HLDA is true, the device, using four-state transfers in block transfer mode, reads data from the memory. The channel 0 current address register is the source for the address used and is decremented or incremented in the normal manner. The data byte read from the memory is stored in the 8237A internal temporary register. Channel 1 then performs a four-state transfer of the data from the temporary register to memory using the address in its current address register and incrementing or decrementing it in the normal manner. The channel 1 current word count is decremented. When the word count of channel 1 goes to FFFFH, a TC is generated, causing an EOP output that terminates the service. Channel 0 may be programmed to retain the same address for all transfers; this allows a single word to be written to a block of memory. The 8237A will respond to external EOP signals during memory-to-memory transfers. Data comparators in block search schemes may use this input to terminate the service when a match is found.
The 8237A will accept programming from the host processor any time that HLDA is inactive; this is true even if HRQ is active. The responsibility of the host is to ensure that programming and HLDA are mutually exclusive. Note that a problem can occur if a DMA request occurs on an unmasked channel while the 8237A is being programmed. For instance, the CPU may be starting to reprogram the 2-byte address register of channel 1 when channel 1 receives a DMA request. If the 8237A is enabled (bit 2 in the command register is 0) and channel 1 is unmasked, a DMA service will occur after only 1 byte of the address register has been reprogrammed. This can be avoided by disabling the controller (setting bit 2 in the command register) or by masking the channel before programming any other registers. Once the programming is complete, the controller can be enabled and unmasked. After power-up it is suggested that all internal locations, especially the mode registers, be loaded with some valid value. This should be done even if some channels are unused; an invalid mode may force all control signals to go active at the same time. The 8237A is designed to operate in two major cycles, called the idle and active cycles. Each device cycle is made up of a number of states. The 8237A can assume seven separate states, each composed of one full clock period. State SI is the inactive state. It is entered when the 8237A has no valid DMA requests pending. While in SI, the DMA controller is inactive but may be in the program condition, being programmed by the processor. State S0 is the first state of a DMA service; the 8237A has requested a hold but the processor has not yet returned an acknowledge. The 8237A may still be programmed until it receives HLDA from the CPU. An acknowledge from the CPU will signal that DMA transfers may begin. S1, S2, S3, and S4 are the working states of the DMA service. If more time is needed to complete a transfer than is available with normal timing, wait states (SW) can be inserted between S2 or S3 and S4 by the use of the ready line on the 8237A. Eight states are required for a single memory-to-memory transfer: the first four states (S11, S12, S13, S14) are used for the read-from-memory half and the last four states (S21, S22, S23, S24) for the write-to-memory half of the transfer.
2.2.5.1 Idle Cycle
When no channel is requesting service, the 8237A will enter the idle cycle and perform “SI” states. In this cycle the 8237A will sample the DREQ lines every clock cycle to determine whether any channel is requesting a DMA service. The device will also sample CS, looking for an attempt by the microprocessor to write or read the internal registers of the 8237A.
When CS is low and HLDA is low, the 8237A enters the program condition. The CPU can now establish, change, or inspect the internal definition of the part by reading from or writing to the internal registers. Address lines A0–A3 are inputs to the device and select which registers will be read or written. The IOR and IOW lines are used to select and time reads or writes. Special software commands can be executed by the 8237A in the program condition. These commands are decoded as sets of addresses with the CS and IOW. The commands do not make use of the data bus. Instructions include Clear First/Last Flip-Flop and Master Clear.
2.2.5.2 Active Cycle

When the 8237A is in the idle cycle and a nonmasked channel requests a DMA service, the device will output an HRQ to the microprocessor and enter the active cycle. It is in this cycle that the DMA service will take place, in one of four modes:
(1) Single transfer mode. In single transfer mode the device is programmed to make one transfer only. The word count will be decremented and the address decremented or incremented following each transfer. When the word count “rolls over” from 0 to FFFFH, a terminal count (TC) will cause an autoinitialize if the channel has been programmed to do so. DREQ must be held active until DACK becomes active in order to be recognized. If DREQ is held active throughout the single transfer, HRQ will go inactive and release the bus to the system. It will again go active and, upon receipt of a new HLDA, another single transfer will be performed. Details of timing between the 8237A and other bus control protocols will depend upon the characteristics of the microprocessor involved.
(2) Block transfer mode. In block transfer mode the device is activated by DREQ to continue making transfers during the service until a TC, caused by word count going to FFFFH, or an external end of process (EOP) is encountered. DREQ need only be held active until DACK becomes active. Again, an autoinitialization will occur at the end of the service if the channel has been programmed for it.
(3) Demand transfer mode. In demand transfer mode the device is programmed to continue making transfers until a TC or external EOP is encountered or until DREQ goes inactive. Thus, transfers may continue until the I/O device has exhausted its data capacity. After the I/O device has had a chance to catch up, the DMA service is reestablished by means of a DREQ. During the time
between services when the microprocessor is allowed to operate, the intermediate values of address and word count are stored in the 8237A current address and current word count registers. Only an EOP can cause an autoinitialize at the end of the service. EOP is generated either by TC or by an external signal. DREQ has to be low before S4 to prevent another transfer.
(4) Cascade mode. This mode is used to cascade more than one 8237A together for simple system expansion. The HRQ and HLDA signals from the additional 8237A are connected to the DREQ and DACK signals of a channel of the initial 8237A. This allows the DMA requests of the additional device to propagate through the priority network circuitry of the preceding device. The priority chain is preserved and the new device must wait for its turn to acknowledge requests. Since the cascade channel of the initial 8237A is used only for prioritizing the additional device, it does not output any address or control signals of its own. These could conflict with the outputs of the active channel in the added device. The 8237A will respond to DREQ and DACK but all other outputs except HRQ will be disabled. The ready input is ignored.
Each of the first three transfer modes above (single, block, and demand) can perform three different types of transfers: read, write, and verify. Write transfers move data from an I/O device to memory by activating MEMW and IOR. Read transfers move data from memory to an I/O device by activating MEMR and IOW. Verify transfers are pseudotransfers: the 8237A operates as in read or write transfers, generating addresses and responding to EOP, etc., but the memory and I/O control lines all remain inactive. The ready input is ignored in verify mode.
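The register-level programming sequence implied above can be sketched for a single-transfer “write” service (I/O device to memory) on channel 2, using the register addresses of the original PC implementation. Those addresses, the page register, and the mode value are PC conventions assumed for illustration rather than properties of the bare 8237A, and in that scheme the buffer must not cross a 64 kB physical boundary.

/* Sketch: preparing 8237A channel 2 for one single-mode write transfer. */
#include <stdint.h>
#include <sys/io.h>              /* outb() - x86 Linux only */

#define DMA_MASK      0x0A       /* single channel mask register                 */
#define DMA_MODE      0x0B       /* mode register                                */
#define DMA_CLEAR_FF  0x0C       /* clear first/last (byte-pointer) flip-flop    */
#define DMA2_ADDR     0x04       /* channel 2 current/base address               */
#define DMA2_COUNT    0x05       /* channel 2 current/base word count            */
#define DMA2_PAGE     0x81       /* channel 2 page register (address bits 16-23) */

static void dma2_setup_write(uint32_t phys, uint16_t bytes)
{
    uint16_t count = bytes - 1;  /* TC occurs when the count rolls over to FFFFH */

    outb(0x06, DMA_MASK);        /* mask channel 2 while it is being programmed  */
    outb(0x00, DMA_CLEAR_FF);    /* reset the byte-pointer flip-flop             */
    outb(0x46, DMA_MODE);        /* single transfer, write (device to memory),
                                    address increment, no autoinit, channel 2    */
    outb(phys & 0xFF, DMA2_ADDR);         /* address low byte   */
    outb((phys >> 8) & 0xFF, DMA2_ADDR);  /* address high byte  */
    outb((phys >> 16) & 0xFF, DMA2_PAGE); /* 64 kB page         */
    outb(count & 0xFF, DMA2_COUNT);       /* count low byte     */
    outb(count >> 8, DMA2_COUNT);         /* count high byte    */
    outb(0x02, DMA_MASK);        /* unmask channel 2: it now waits for DREQ      */
}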
2.3 Application-Specific Integrated Circuit (ASIC)

An ASIC is basically an integrated circuit designed specifically for a special purpose or application. Strictly speaking, this also implies that an ASIC is built for one and only one customer. The opposite of an ASIC is a standard product or general purpose IC, such as a logic gate or a general purpose microcontroller, both of which can be used in any electronic application by anybody. ASICs are usually classified into one of three categories: full custom, semi-custom, and structured, as listed below.
(1) Full-custom ASICs are those that are entirely tailor-fitted to a particular application from the very start. Since the ultimate design
and functionality are prespecified by the user, they are manufactured with all the photolithographic layers of the device already fully defined, just like most off-the-shelf general purpose ICs. A full-custom ASIC cannot be modified to suit different applications, and it is generally produced as a single, specific product for a particular application only.
(2) Semi-custom ASICs, on the other hand, can be partly customized to serve different functions within their general area of application. Unlike full-custom ASICs, semi-custom ASICs are designed to allow a certain degree of modification during the manufacturing process. A semi-custom ASIC is manufactured with the masks for the diffused layers already fully defined, so the transistors and other active components of the circuit are already fixed for that semi-custom ASIC design. The customization of the final ASIC product to the intended application is done by varying the masks of the interconnection layers, for example, the metallization layers.
(3) Structured or platform ASICs, which belong to a relatively new ASIC classification, are those that have been designed and produced from a tightly defined set of (1) design methodologies, (2) intellectual properties, and (3) well-characterized silicon, aimed at shortening the design cycle and minimizing the development costs of the ASIC. A platform ASIC is built from a group of “platform slices,” with a platform slice being defined as a premanufactured device, system, or logic for that platform. Each slice used by the ASIC may be customized by varying its metal layers. The reuse of premanufactured and precharacterized platform slices simply means that platform ASICs are not built from scratch, thereby minimizing design cycle time and costs.
There are two types of programmable ASIC: programmable logic devices (PLD) and field-programmable gate arrays (FPGA). The distinction between the two is blurred; the only real difference is their heritage. In this section, these two types of programmable ASICs are discussed in turn. ASICs have been widely used in various industrial control applications. Examples of ASICs include (1) an IC that encodes and decodes digital data using a proprietary encoding and decoding algorithm, (2) a medical IC designed to monitor a specific human biometric parameter, (3) an IC designed to serve a special function within a factory automation system, (4) an amplifier IC designed to meet certain specifications not available in standard amplifier products, (5) a proprietary system-on-a-chip, and (6) an IC that is custom-made for particular automated test equipment.
2.3.1 ASIC Designs
ASICs are used to design entire systems on a single chip. An ASIC is built as an interconnection of standard cells that have been standardized by fabrication houses. With the integration of more and more system components on a single IC, the complexity of IC fabrication has increased, and an advanced system design involves complex layout issues. Specifications of the cells are provided by the vendors in the form of a technology library that contains information about the geometry, delay, and power characteristics of each cell. The ASIC design flow is highly automated, and the automation tools provide reasonable performance and a cost advantage over a manual design process. Broadly, the ASIC design flow can be divided into the phases described below, which are also illustrated in Fig. 2.17.
2.3.1.1 ASIC Specification

ASIC design specifications are written by designers at different levels of abstraction. The most common hardware specification languages used by designers are Verilog and VHDL; both are equally capable of providing complex constructs to describe complex functionality. Behavioral modeling forms the highest level of abstraction.

Figure 2.17 ASIC design flow (design specification in HDL (Verilog and VHDL) with the technology library and design rule constraints, followed by synthesis, place and route, extraction, simulation, post-layout analysis, and manufacture).

(1) Behavioral specification. At the initial stage of the design process, the designer provides a behavioral specification of the intended functionality. The behavioral model is not concerned with the structure of the design, the combinational or sequential elements used, the clock signal, or the timing constraints involved. It captures the intended behavior of the design. It is important to note that this specification does not capture timing information.

(2) RTL specification. RTL stands for register transfer level. In this model the entire design is split into registers, with information flowing between these registers at each clock cycle. An RTL specification captures the change in the design at each clock cycle, and all the registers are updated at the same time within a clock cycle (a minimal sketch of this follows at the end of this subsection). Typically an RTL specification divides a design into registers and the logic blocks that join those registers together. RTL captures the data flow but fails to give a good specification of control flow.

(3) Structural specification. A structural specification consists of a network of instances of logic gates and registers described by a technology library. The technology library is provided by the fabrication house and specifies both simple cells (AND, OR, NOT) and more complicated, multiple-functionality cells. The specification of a cell includes its geometry, delay, and power characteristics. Structural modeling describes circuits in the form of instances of cells and the interconnects between those cells.
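To make the register transfer level view in item (2) concrete, the following C sketch models a small invented design (a pipeline register feeding an accumulator) in RTL style: next-state values are computed from the current register values and then all registers are updated together, as on one clock edge. The design and its signal names are hypothetical, not taken from the text.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical RTL-style model: next-state values are computed from the
 * current register values, then all registers are updated together, as if
 * on a single clock edge. */
struct regs { uint8_t stage1; uint8_t acc; };

static struct regs clock_edge(struct regs cur, uint8_t in)
{
    struct regs next;
    next.stage1 = in;                                   /* stage1 <= in           */
    next.acc    = (uint8_t)(cur.acc + cur.stage1);      /* acc    <= acc + stage1 */
    return next;                                        /* simultaneous update    */
}

int main(void)
{
    struct regs r = { 0, 0 };
    const uint8_t samples[] = { 1, 2, 3, 4 };
    for (unsigned cycle = 0; cycle < 4; cycle++) {
        r = clock_edge(r, samples[cycle]);
        printf("cycle %u: stage1=%u acc=%u\n", cycle,
               (unsigned)r.stage1, (unsigned)r.acc);
    }
    return 0;
}
```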
2.3.1.2 ASIC Functional Simulation

Logic simulation is an essential part of digital circuit design. Logic simulation and verification are used to check the functionality described by a design specification against the output values expected at the output ports of a digital integrated circuit. There are three main classes of logic simulators, given below:

(1) Compiled-code logic simulator. A compiled-code logic simulation algorithm evaluates every logic element in the design at each time step. The earliest logic simulators were compiled-code simulators. In a compiled-code simulator a combinational circuit is topologically ordered and an equation is generated for each gate output in terms of its inputs using the Boolean operators AND, OR, and NOT. A compiled-code simulator evaluates every circuit element for every new input pattern. Two limitations of compiled-code simulators are (1) their inability to handle asynchronous feedback and (2) the lack of accurate timing and delay information in their models. Also, since circuit activity at each element is very low, the run time of such an algorithm is huge for large circuits, and a large part of the design cycle is taken up by circuit simulation.

(2) Interpretive event-driven logic simulator. An event-driven logic simulator works on the principle that the output of a logic element
changes only when one of its inputs changes; the simulator therefore evaluates a logic element only when an event occurs on one of its inputs (a minimal sketch of this scheme appears after this list). Statistical data show that event activity in large circuits is very low, and as the circuit grows the percentage of activity per logic element decreases. Each concurrent process is converted to an abstract syntax tree. When a process is executed its tree is traversed, and the nodes of the tree are interpreted and acted upon until the process suspends. A process may suspend when it finishes or because one of its inputs is not yet available. A suspended process is scheduled by the scheduler to wait for some event to wake it up, and the execution of some statements in a process may resume other processes. There are two major classes of event-driven logic simulation algorithms: (1) Synchronous simulation algorithms. These algorithms are centrally timed and are used on single-processor machines. The algorithm follows the path of events in the circuit, and simultaneous events are handled through centralized control of time: simulation does not advance until all the events that occurred at the current simulation time have been processed. To implement this algorithm one stores events in a global ordered queue that is circular in nature; each slot in the queue represents a simulation time and stores a linked list of the events that occur at that time. As the events of the current time slot are processed, the output of each evaluated element is compared with its previous output, and if they differ new events are generated on the logic elements whose inputs are driven by that output. (2) Asynchronous simulation algorithms. In asynchronous simulation there is no global centralized time. Instead each data item carries a time stamp indicating the time up to which the data are valid, and the evaluation of an event depends on the availability of a token. An asynchronous algorithm can process events that occur at different time instances, and hence can extract more parallelism than synchronous simulation algorithms.

(3) Compiled-code event-driven simulation. Modern simulators employ the best of both of the above. Compiled-code event-driven simulators compile the hardware description language (HDL) description of the design to machine code, and the generated code is then linked with the simulator kernel.
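The C sketch below, referenced in item (2) above, illustrates the event-driven, centrally timed scheme on a tiny invented netlist of two AND gates: an element is re-evaluated only when an event arrives on one of its inputs, and a new event is scheduled only when its output actually changes. It is a toy model, not a production simulator.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy event-driven simulation of an invented two-gate netlist:
 *   n2 = n0 AND n1,   n3 = n2 AND n1    (unit gate delay)
 * A gate is re-evaluated only when an event occurs on one of its inputs,
 * and a new event is scheduled only if its output value actually changes. */
#define MAXEV 64
#define TMAX  10

struct gate  { int a, b, out; };
struct event { int time, sig; bool value; };

static const struct gate gates[] = { {0, 1, 2}, {2, 1, 3} };
static bool sig[4];
static struct event q[MAXEV];
static int nq;

static void schedule(int t, int s, bool v) { q[nq++] = (struct event){ t, s, v }; }

int main(void)
{
    /* Primary-input stimulus: n0 and n1 rise at t = 0, n1 falls at t = 5. */
    schedule(0, 0, true);
    schedule(0, 1, true);
    schedule(5, 1, false);

    for (int t = 0; t <= TMAX; t++)                 /* centralized simulation time */
        for (int i = 0; i < nq; i++) {
            if (q[i].time != t) continue;           /* process current time slot   */
            if (sig[q[i].sig] == q[i].value) continue;   /* no change -> no event  */
            sig[q[i].sig] = q[i].value;
            printf("t=%d  n%d -> %d\n", t, q[i].sig, (int)q[i].value);
            for (int g = 0; g < 2; g++)             /* fan-out of the changed net  */
                if (gates[g].a == q[i].sig || gates[g].b == q[i].sig)
                    schedule(t + 1, gates[g].out,
                             sig[gates[g].a] && sig[gates[g].b]);
        }
    return 0;
}
```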
2.3.1.3 ASIC Synthesis

A design specification that is functionally correct is fed to a synthesis tool, which extracts a finite state machine from the design and performs data
path optimizations on the finite state machine. The resultant hardware specification is mapped to gates, flip-flops, and nets to obtain a gate-level netlist, and user-supplied timing constraints are used to perform timing optimization on that netlist.

The first step in the synthesis process is to convert the given RTL into a finite state machine. Many transformations can be applied to the finite state machine in order to reduce the number of states; common FSM transformations are constant propagation, gate merging, dead code elimination, and arithmetic merging. The next step is to generate hardware. RTL synthesis involves three major steps: translation of the RTL description into gates and flip-flops, optimization of the logic, and placement and routing of the optimized netlist. Most of the intelligence resides in the optimization stage, but modern synthesis tools also apply many smart techniques while converting the RTL description into gates in order to reduce the number of gates in the design. There are broadly two types of optimization: technology-independent optimizations, carried out before the netlist is mapped to the technology cells provided by the fabrication house, and technology-dependent optimizations, carried out after that mapping.

Timing and area constraints are provided by the designer. Slack is defined as the difference between the expected (required) arrival time and the actual arrival time of a signal at a particular output port, and is calculated for input-to-output paths (a small worked example follows at the end of this subsection). The aim of timing optimization is to improve the slack on critical paths. Certain timing optimizations might lead to area escalation; area reclamation algorithms try to reclaim area in places that do not affect timing on critical paths. Other design rule constraints are the maximum fan-out of a logic element, the maximum capacitance, and the slope of a signal from 20% of its target value to 80% of its target value.

The technology library provided by the fabrication house contains basic combinational gates such as AND, OR, NOT, NAND, NOR, XOR, and BUFFER, and sequential elements such as latches, flip-flops, and memories. The information about cell characteristics includes cell delay and area. There are three major quality metrics: area, timing, and power. The designer's quality metric for an IC is driven by the specific application.

(1) Area. With shrinking system sizes, an ASIC should accommodate maximum functionality in minimum area. The designer can specify an area constraint and the synthesis tool will optimize for minimum area. Area can be optimized by using fewer cells and by replacing multiple cells with a single cell that combines their functionality.

(2) Timing. The designer specifies the maximum delay between primary input and primary output. This is taken as the maximum delay across any critical path. There are four types of critical paths: a path between a primary input and a primary output, a path from a primary input to a register, a path from a register to a primary output, and a path from a register to another register.

(3) Power. The development of handheld devices has led to smaller batteries and hence to systems that must consume little power. Low power consumption has become a major requirement for many designs.
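As a small worked example of the slack calculation defined above, the C sketch below propagates an arrival time through a hypothetical three-cell path with made-up delays and compares it against a required time; positive slack means the timing constraint is met.

```c
#include <stdio.h>

/* Hypothetical three-cell path with made-up delays (ns). The arrival time at
 * the output is the input arrival time plus the sum of the cell delays on the
 * path; slack = required (expected) arrival time - actual arrival time. */
int main(void)
{
    const double input_arrival  = 0.2;                /* ns at the primary input */
    const double cell_delay[]   = { 0.9, 1.4, 0.7 };  /* invented cell delays    */
    const double required_time  = 3.5;                /* timing constraint       */

    double arrival = input_arrival;
    for (int i = 0; i < 3; i++)
        arrival += cell_delay[i];

    double slack = required_time - arrival;
    printf("arrival = %.2f ns, required = %.2f ns, slack = %+.2f ns (%s)\n",
           arrival, required_time, slack,
           slack >= 0.0 ? "constraint met" : "critical path violated");
    return 0;
}
```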
2.3.1.4 ASIC Design Verification

The biggest challenge in IC design is verification, because the cost of a single error is huge; verification is time consuming and requires a large amount of resources. Verification tasks can be classified into two categories: (1) functional verification, which checks the functionality of the synthesized and optimized design against a golden representation of the design; and (2) implementation verification, which occurs once placement and routing are over. In implementation verification the design is checked for functional correctness once again, and the timing and power constraints are also verified.

Formal verification methods are used to test the functional correctness of the gate-level netlist. Testing functional correctness involves testing an optimized design against a golden design specification. There are two methods of performing the verification: (1) black-box verification methods, which include simulation, emulation, and hardware acceleration; and (2) white-box verification methods, which involve the use of formal methods, for example, assertion-based verification.

(1) Assertion. Assertion-based verification is aimed at digital designers. It is a white-box verification technique: unlike simulation, it is not applied at the block level once the design is complete but can be applied alongside the design process. In fact, assertion-based verification entities reside in the HDL description of the design. Assertions are active comments embedded within the design; they turn design specifications into verification objects. Assertions can be used to (1) monitor signals on interfaces that connect different blocks, (2) track the expected behavior of a gate, flip-flop, or module, and (3) watch for forbidden behavior within a design block. Assertions can be implicit or explicit: (1) implicit assertions are supported by HDLs like Verilog and VHDL and are added at the time of design analysis, synthesis, and HDL analysis; (2) explicit assertions are user-defined assertions, provided by EDA vendors in the form of libraries such as OVL, while academic languages like CTL, LTL, and automata provide further ways to define explicit assertions (a simple C-style sketch of the assertion idea appears after item (2) below).

(2) Emulation. The emulator is a hardware device that can be used to emulate a piece of hardware functionality. It is commonly used as a debugging tool to test a system under development for functional correctness, and it is a faster solution to the verification problem. In emulation a portion of the design is synthesized, optimized, and compiled, and the compiled design is then loaded onto an emulator. The rest of the design is simulated by the workstations that are connected to the emulator; only the portion of the design that is being tested resides on the emulator. Emulators are able to provide execution speeds close to real time, which allows verification engineers to reduce verification time. An emulation system typically consists of a small number of large FPGAs, providing multimillion ASIC-equivalent gate capacities. Such an emulation system comes as a separate box that can be connected to a collection of workstations using a PCI card, with the workstations connected via an emulation network architecture. A complex IC is typically divided into a number of different modules, each developed by a separate team of designers. Each team verifies the functionality of its own module; the modules then go to an integration team that integrates all the modules and carries out verification. With emulation providing faster methods of design verification, last-minute changes can be incorporated in the design, which significantly reduces time to market.
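Although assertions normally live in the HDL description (for example via OVL or SystemVerilog assertions), the idea can be sketched in C: an implementation is exercised against a golden reference model while an invariant guards against forbidden behavior. The full-adder carry function below is an invented example, not part of the original text.

```c
#include <assert.h>
#include <stdio.h>

/* C analogue of assertion-based checking: an "optimized" implementation of a
 * hypothetical function is checked, input by input, against a golden
 * reference model, and an invariant (forbidden behavior) is asserted. */

/* Golden model: carry-out of a 1-bit full adder. */
static unsigned carry_ref(unsigned a, unsigned b, unsigned cin)
{
    return (a + b + cin) > 1u;
}

/* Implementation under verification. */
static unsigned carry_impl(unsigned a, unsigned b, unsigned cin)
{
    return (a & b) | (cin & (a ^ b));
}

int main(void)
{
    for (unsigned a = 0; a <= 1; a++)
        for (unsigned b = 0; b <= 1; b++)
            for (unsigned cin = 0; cin <= 1; cin++) {
                unsigned got = carry_impl(a, b, cin);
                assert(got == carry_ref(a, b, cin));  /* expected behavior  */
                assert(got <= 1u);                    /* forbidden behavior */
            }
    printf("all assertions passed\n");
    return 0;
}
```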
2.3.1.5 ASIC Integrity Analyses

Due to increasing signal speeds, miniaturization of features, smaller chip sizes, and lower power supply voltages, interconnect signal integrity problems have become more severe. Signal delay due to the interconnect is now more significant than gate delay. As a result, more powerful automation tools are required for layout parameter extraction, timing delay and crosstalk simulation, and power analysis.

(1) Parasitic extraction. Accurate extraction of the on-chip parasitics is crucial because of shrinking feature sizes and the increasing contribution of interconnect delay. The parasitics consist of resistance, inductance, and capacitance. Inductance is not critical for signal propagation until transmission line effects occur. Resistance is easy to compute using algorithms like square counting and the two-dimensional finite-difference approach (a worked square-counting example follows at the end of this subsection); another reason resistance estimation is easy is that only one conductor trace has to be considered at a time. Capacitance extraction, on the other hand, requires that neighboring conductors be considered because of electromagnetic coupling effects.

(2) Signal integrity. The design of a high-speed integrated circuit depends on understanding and predicting interconnect parasitic effects and behavior. The increasing switching speed and complexity of VLSI circuits are becoming crucial factors in determining the reliability and performance of an electronic system, and estimation is complicated by the growing number of metallization layers, increasing material complexity, and higher operating frequencies. The various aspects of signal integrity include the following: (a) Technology scale-down. As technology dips into the deep submicrometer range, lateral coupling effects between interconnects dominate, compared with the vertical coupling effects that dominated in micrometer technology. Aluminum was used until recently to manufacture interconnects, but the increasing contribution of the interconnect to signal propagation has forced IC manufacturers to replace it with materials like copper, which has a lower resistivity; as a result, the gain in propagation delay is almost a factor of two. Technology scale-down has also introduced new problems like complex resistance, three-dimensional capacitance, and inductance. (b) Propagation delay. As technology feature sizes decrease, interconnect delay increases. (c) Crosstalk. When two wire segments are closer to each other than a minimum threshold, they interfere with each other's functioning: a signal on one wire may be degraded by the electromagnetic effects of a signal carried by the other wire. This interference is called crosstalk, and with diminishing technology sizes it is a major contributor to high-speed IC defects. (d) Crosstalk delay. Crosstalk delay is a major source of timing uncertainty. The simultaneous switching of the victim and aggressor signals may lead to a wide variety of phenomena; among the most important is a delay increase when the victim and aggressor signals switch in opposite directions, starting with the victim signal followed by the aggressor.
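As a worked illustration of the square-counting method mentioned for resistance extraction, the C sketch below computes the resistance of a straight trace as the number of squares (length divided by width) times an assumed sheet resistance; all numeric values are invented for the example.

```c
#include <stdio.h>

/* Square counting in its simplest form: a straight interconnect trace of
 * length L and width W contains L/W "squares," and its resistance is the
 * number of squares times the sheet resistance of the metal layer.
 * The dimensions and sheet resistance below are invented for illustration. */
int main(void)
{
    const double sheet_res_ohm_per_sq = 0.08;  /* assumed value for a copper layer */
    const double length_um = 500.0;
    const double width_um  = 0.5;

    double squares    = length_um / width_um;
    double resistance = squares * sheet_res_ohm_per_sq;

    printf("%.0f squares -> R = %.1f ohms\n", squares, resistance);
    return 0;
}
```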
2.3.2 Programmable Logic Devices (PLD)
PLDs are designed with configurable logic and flip-flops linked together by programmable interconnect. PLDs provide specific functions, including
device-to-device interfacing, data communication, signal processing, data display, timing and control operations, and almost every other function a system must perform. Memory cells control and define the function that the logic performs and how the various logic functions are interconnected.

Logic devices can be classified into two broad categories: fixed and programmable. As the name suggests, the circuits in a fixed logic device are permanent; they perform one function or a set of functions, and once manufactured they cannot be changed. With fixed logic devices, the time required to go from design to prototypes to a final manufacturing run can take from several months to more than a year, depending on the complexity of the device. If the device does not work properly, or if the requirements change, a new design must be developed.

PLDs, on the other hand, are standard, off-the-shelf parts that offer customers a wide range of logic capacity, features, speed, and voltage characteristics, and these devices can be changed at any time to perform any number of functions. With programmable logic devices, designers use inexpensive software tools to quickly develop, simulate, and test their designs. A design can then be quickly programmed into a device and immediately tested in a live circuit. The PLD that is used for this prototyping is the exact same PLD that will be used in the final production of a piece of end equipment, such as a network router, a DSL modem, a DVD player, or an automotive navigation system. There are no nonrecurring engineering (NRE) costs, and the final design is completed much faster than that of a custom, fixed logic device.

Another key benefit of using PLDs is that during the design phase customers can change the circuitry as often as they want until the design operates to their satisfaction. That is because PLDs are based on rewriteable memory technology: to change the design, simply reprogram the device. Once the design is final, customers can go into immediate production by simply programming as many PLDs as they need with the final software design file.

PLDs can be described as being one of three different types: simple programmable logic devices (SPLD), complex programmable logic devices (CPLD), or field-programmable gate arrays (FPGA). The FPGA is discussed individually in Section 2.3.3. The distinction between CPLD and FPGA is often a little fuzzy, with manufacturers designing new, improved architectures and frequently muddying the waters for marketing purposes. Together, CPLD and FPGA are often referred to as high-capacity programmable logic devices (HCPLD).

The programming technologies for PLD devices are actually based on the various types of semiconductor memory. As new types of memories
have been developed, the same technology has been applied to the creation of new types of PLD devices. Today, SPLDs are devices that typically contain the equivalent of 600 or fewer gates, while HCPLDs have thousands to hundreds of thousands of gates available.

SPLDs are often used for address decoding, where they have several clear advantages over the 7400-series TTL parts that they replaced. First, of course, is that one chip requires less board area, power, and wiring than several do. Another advantage is that the design inside the chip is flexible, so a change in the logic does not require any rewiring of the board. Rather, the decoding logic can be altered by simply replacing that one PLD with another part that has been programmed with the new design. Hardware designs for these simple PLDs are generally written in languages like ABEL or PALASM (the hardware equivalents of assembly).

Inside each PLD is a set of fully connected macro cells. These macro cells typically comprise some amount of combinatorial logic (e.g., AND and OR gates) and a flip-flop. In other words, a small Boolean logic equation can be built within each macro cell. This equation combines the state of some number of binary inputs into a binary output and, if necessary, stores that output in the flip-flop until the next clock edge (a minimal sketch of this idea follows below). Of course, the particulars of the available logic gates and flip-flops are specific to each manufacturer and product family, but the general idea is always the same. Because these chips are rather small, they do not have much relevance to the remainder of this discussion, but you do need to understand the origin of programmable logic chips before we can go on to talk about the larger devices.

At the low end of the spectrum are the original programmable logic devices. These were the first chips that could be used to implement a flexible digital logic design in hardware. In other words, you could remove a couple of the 7400-series transistor-transistor logic parts (AND, OR, and NOT) from your board and replace them with a single PLD. Other names you might encounter for this class of device are programmable logic array (PLA), programmable array logic (PAL), and generic array logic (GAL).
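A minimal C sketch of the macro cell idea described above, assuming an invented sum-of-products equation: a combinational stage computes (a AND b) OR (NOT c), and a D flip-flop holds the result until the next clock edge.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy model of a PLD macro cell: a sum-of-products combinational stage
 * (the invented equation  out = (a AND b) OR (NOT c))  feeding a
 * D flip-flop that holds the result until the next clock edge. */
struct macrocell { bool q; };                 /* flip-flop state    */

static bool sop(bool a, bool b, bool c)       /* combinational part */
{
    return (a && b) || !c;
}

static void clock_edge(struct macrocell *m, bool a, bool b, bool c)
{
    m->q = sop(a, b, c);                      /* registered output  */
}

int main(void)
{
    struct macrocell m = { false };
    bool stimulus[][3] = { {0, 0, 0}, {1, 1, 1}, {0, 1, 1} };

    for (int i = 0; i < 3; i++) {
        clock_edge(&m, stimulus[i][0], stimulus[i][1], stimulus[i][2]);
        printf("cycle %d: q = %d\n", i, (int)m.q);
    }
    return 0;
}
```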
2.3.3 Field-Programmable Gate Array (FPGA)
Field-programmable gate arrays (FPGA) are integrated circuits (ICs) that contain an array of logic cells surrounded by programmable I/O blocks. FPGAs contain as many as tens of thousands of logic cells and an even greater number of flip-flops. Because of cost, FPGAs do not provide a 100% interconnection between logic cells; however, FPGAs still provide
significantly higher capacities than programmable logic devices (PLD), which are interconnected through a central global routing pool. Often, design engineers use FPGAs to program electrical connections through several iterations in order to minimize nonrecurring costs. FPGAs are used in applications ranging from data processing and storage to instrumentation, telecommunications, and digital signal processing. Other terms for the FPGA include logic cell array and programmable application-specific integrated chip.
2.3.3.1 FPGA Types and Important Data
Selecting field-programmable gate arrays requires an analysis of memory, performance, and I/O interface requirements. Available memory types include content-addressable memory; Flash; random access memory (RAM); dual-port RAM; read-only memory (ROM); electrically erasable programmable read-only memory (EEPROM); first-in, first-out (FIFO); and last-in, first-out (LIFO). Performance considerations include the internal frequency, the number of integrated phase-locked loops and delay-locked loops with clock-frequency-synthesis capabilities, and the total number of I/O ports. I/O interfaces include the accelerated graphics port, bus low-voltage differential signaling, and peripheral component interconnect (PCI).

Field-programmable gate arrays are available with different numbers of system gates, shift registers, logic cells, and lookup tables. Logic blocks or logic cells do not include I/O blocks, but generally contain a lookup table to generate any function of the inputs, a clocked latch (flip-flop) to provide registered outputs, and control logic circuits for configuration purposes. Logic cells are also known as logic array blocks, logic elements, and configurable logic blocks. Lookup tables or truth tables implement a single logic function by storing the correct output logic state in the memory location that corresponds to each particular combination of input variables (a minimal sketch follows below).

Field-programmable gate arrays are available in many logic families. Transistor-transistor logic and related technologies such as Fairchild advanced Schottky use transistors as digital switches. By contrast, emitter-coupled logic uses transistors to steer current through gates that compute logical functions. Another logic family, CMOS, uses a combination of P-type and N-type metal-oxide-semiconductor field effect transistors to implement logic gates and other digital circuits. Logic families for field-programmable gate arrays also include crossbar switch technology, gallium arsenide, integrated injection logic, and silicon on sapphire. Gunning transceiver logic (GTL) and GTL+ are also available.
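To illustrate the lookup-table mechanism just described, the C sketch below models a 4-input LUT as a 16-bit truth table addressed by the four inputs; as an example it is programmed with the truth table of a 4-input XOR (odd parity). The function and the table value are illustrative choices, not vendor data.

```c
#include <stdio.h>
#include <stdint.h>

/* A 4-input lookup table stores one output bit for each of the 16 possible
 * input combinations; the inputs simply form the address into that table. */
static int lut4(uint16_t truth_table, unsigned in3, unsigned in2,
                unsigned in1, unsigned in0)
{
    unsigned addr = (in3 << 3) | (in2 << 2) | (in1 << 1) | in0;
    return (truth_table >> addr) & 1u;
}

int main(void)
{
    const uint16_t parity_table = 0x6996;  /* truth table of a 4-input XOR */
    for (unsigned v = 0; v < 16; v++)
        printf("%u%u%u%u -> %d\n",
               (v >> 3) & 1u, (v >> 2) & 1u, (v >> 1) & 1u, v & 1u,
               lut4(parity_table, (v >> 3) & 1u, (v >> 2) & 1u,
                    (v >> 1) & 1u, v & 1u));
    return 0;
}
```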
Field-programmable gate arrays are available in a variety of IC package types and with different numbers of pins and flip-flops. Basic IC package types for field-programmable gate arrays include ball grid array, quad flat package, single in-line package, and dual in-line package. Many packaging variants are available.
2.3.3.2 FPGA Architecture
The typical basic architecture consists of an array of logic blocks and routing channels. Multiple I/O pads may fit into the height of one row or the width of one column. Generally, all the routing channels have the same width (number of wires). Figure 2.18 illustrates the FPGA structure.

Figure 2.18 FPGA structure (an array of logic blocks surrounded by routing channels and I/O pads).

(1) FPGA logic block. An application circuit must be mapped into an FPGA with adequate resources. The typical FPGA logic block consists of a 4-input lookup table (LUT) and a flip-flop, as shown in Fig. 2.19. There is only one output in each logic block, which can be either the registered or the unregistered LUT output. As shown in Fig. 2.20, the logic block has four inputs for the LUT and a clock input. Since clock signals (and often other high-fanout signals) are normally routed via special-purpose dedicated routing networks in commercial FPGAs, they are accounted for separately from other signals.

Figure 2.19 An FPGA logic block (a 4-input LUT whose output may be registered by a D flip-flop).

Figure 2.20 FPGA logic block pin locations.

Each input is accessible from one side of the logic block, while the output pin can connect to routing wires in both the channel to the right and the channel below the logic block. Each logic block output pin can connect to any of the wiring segments in the channels adjacent to it; Figure 2.21 should make the situation clear. Similarly, an I/O pad can connect to any one of the wiring segments in the channel adjacent to it. For example, an I/O pad at the top of the chip can connect to any of the W wires (where W is the channel width) in the horizontal channel immediately below it.

(2) FPGA routing. Generally, the FPGA routing is unsegmented (Fig. 2.22). That is, each wiring segment spans only one logic block before it terminates in a switch box. By turning on some of the programmable switches within a switch box, longer paths can be constructed. For higher speed interconnect, some FPGA architectures use longer routing lines that span multiple logic blocks.
Figure 2.21 Logic block pin to routing channel interconnect (a logic block pin can make a potential connection to any routing wire in the adjacent channels).
Figure 2.22 Unsegmented FPGA routing mechanism (wire segments span a single logic block and terminate in switch blocks).
Whenever a vertical and a horizontal channel intersect there is a switch box. In this architecture, when a wire enters a switch box, there are three programmable switches that allow it to connect to three other wires in adjacent channel segments. The pattern, or topology, of switches used in this architecture is the planar or domain-based switch box topology. In this switch box topology, a wire in track number 1 connects only to wires in track number 1 in adjacent channel segments, wires in track number 2 connect only to other wires in track number 2, and so on. Figure 2.23 illustrates the connections in a switch box. Modern FPGA families expand upon the above capabilities to include higher level functionality fixed into the silicon. Having these common functions embedded into the silicon reduces the area required and gives those functions increased speed compared to building them from primitives. Examples of these include multipliers, embedded processors, high speed I/O logic, and embedded memories.
Figure 2.23 FPGA switch box topology (programmable switches connect wire segments of the same track number in adjacent channel segments).
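A minimal C sketch of the planar (domain-based) switch box described above: a wire on track t can be connected by programmable switches only to track t on the other sides of the box. The data structure and the example connections are invented for illustration.

```c
#include <stdio.h>

/* Planar (domain-based) switch box: an incoming wire on track t may connect
 * only to track t on the other three sides.  The switch state is modeled as
 * one flag per (side, side, track).  Sides: 0=N, 1=E, 2=S, 3=W. */
#define SIDES  4
#define TRACKS 4

static int sw[SIDES][SIDES][TRACKS];   /* 1 = programmable switch turned on */

static void connect(int from, int to, int track)
{
    sw[from][to][track] = sw[to][from][track] = 1;   /* same track only */
}

int main(void)
{
    connect(3, 1, 2);   /* route track 2 straight through, west to east */
    connect(3, 0, 2);   /* and also turn it north, forming a fan-out    */

    const char *name[SIDES] = { "N", "E", "S", "W" };
    for (int a = 0; a < SIDES; a++)
        for (int b = a + 1; b < SIDES; b++)
            for (int t = 0; t < TRACKS; t++)
                if (sw[a][b][t])
                    printf("track %d: %s <-> %s\n", t, name[a], name[b]);
    return 0;
}
```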
2.3.3.3 FPGA Programming
To define the behavior of the FPGA, the user provides a hardware description language (HDL) description or a schematic design; the common HDLs are VHDL and Verilog. Then, using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fitted to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company's proprietary place-and-route software. The user validates the map, place, and route results via timing analysis, simulation, and other verification methodologies. Once the design and validation process is complete, the binary file generated (also using the FPGA company's proprietary software) is used to reconfigure the FPGA device.

In an attempt to reduce the complexity of designing in HDLs, which have been compared to assembly languages, there are moves to raise the abstraction level of the design. Companies are promoting SystemC as a way to combine high-level languages with concurrency models to allow faster design cycles for FPGAs than is possible using traditional HDLs. Approaches based on standard C or C++ (with libraries or other extensions allowing parallel programming) are found in the Catapult C tools from Mentor Graphics and in the Impulse C tools from Impulse Accelerated Technologies. Languages such as SystemVerilog, SystemVHDL, and Handel-C (from Celoxica) seek to accomplish the same goal, but are aimed at making existing hardware engineers more productive rather than at making FPGAs more accessible to existing software engineers.

To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property blocks and are available from FPGA vendors, from third-party IP suppliers, and in the public domain via OpenCores.org and other sources.

In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially, the RTL description in VHDL or Verilog is simulated by creating test benches to stimulate the system and observe the results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm that synthesis proceeded without errors. Finally, the design is laid out in the FPGA, at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.
3 System Interfaces for Industrial Control
3.1 Actuator–Sensor (AS) Interface

3.1.1 Overview
Industrial automation systems require large numbers of control devices, and the number of binary actuators and sensors on a typical system has increased over the years. Conventional input and output (I/O) wiring methods include point-to-point connection or bus systems. For example, typical batching valve wiring networks attach each of the I/O points to a central location, resulting in multiple wire runs for each field device. Large expenditures are needed for cabling conduit, installation, and I/O points, and space for I/O racks and cabling must be accommodated in order to attach only a few field devices. These methods can prove too complex for networking simple binary devices and too slow for the interactions between the controllers and the controlled devices. Point-to-point wiring is the most common wiring method in industry, but large wire bundles take up valuable space, installation is time consuming, and troubleshooting is complex.

The actuator–sensor interface, or AS interface, was developed by a group of sensor manufacturers and introduced into the market in 1994. Since that time, it has become the standard for discrete sensors in process industries throughout the world. The AS interface is a bus system for low-level field applications in industrial automation that communicates with small binary sensors and actuators using the AS interface standard. The AS interface modernizes automation systems effectively and eliminates wire bundles completely, with only one cable required, compared with one cable from each device with point-to-point wiring. Junction boxes are also eliminated, and the size of the control cabinet needed is significantly reduced. The plug-and-play wiring supports all topologies. Figure 3.1 gives the locations of the AS interface in industrial control networks.

In comparison with conventional I/O wiring methods, the AS interface has many advantages. The most important ones are given below:

(1) Minimum wiring and cost saving. The AS interface offers a single cable with a simple serial connection to the controller, instead of parallel wiring with a multitude of cables.
Figure 3.1 The functionality of the AS interface in industrial control networks. The AS interface can be at two locations in an industrial control network: between the controllers at the control level (master PLC, PC, SCADA, PID) and the actuators–sensors, and between the field level (Profibus, Foundation Fieldbus, CAN bus, Ethernet, etc.) and the actuators–sensors, with the slave devices at the AS interface level.
(2) Fast and safe installation. Sensors and actuators are simply installed with modules on the AS interface cable. Contact pins in the modules penetrate the insulation of the cable and establish contact with the copper wires. Incorrect connections are practically impossible because of the design of the cable and the special piercing method.

(3) Flexible configuration. Owing to the distributed and modular design, plant sections can be tested in parallel even before the overall solution is finished. This permits flexible modification and expansion.

(4) Open system. The AS interface is an open system, which means that it is manufacturer-independent and future-proof.
3.1.2 Architectures and Components
In industrial control networks, as displayed in Fig. 3.1, there are two types of AS interface architectures.
3.1.2.1 AS Interface Architecture: Type 1

In the first type of AS interface architecture, a controller such as a PLC, SCADA system, or PC controls the sensors and actuators via the field-level buses (Fieldbus, PROFIBUS, etc.). As displayed in Fig. 3.2, the AS interface has gateways directly connected to the field-level bus and to the I/O modules. The I/O module is the device in this architecture that contacts the sensors and the actuators. A field-level bus may support several AS interface gateways, depending on the system design, each of which serves a segment of the industrial control system. In this type of architecture, the AS interface requires the following components:

(1) Gateways. Gateways are interface modules between the AS interface and a higher-level bus system. They are used when more complex applications are to be solved using standard products. AS interface gateways are the core of the wiring system; they handle the complete data transfer, cyclically polling (master/slave) all participants connected to the wiring system. The AS interface gateway can be placed anywhere in the AS interface segments. One gateway can handle 124 inputs and 124 outputs over 31 addressable I/O modules. For gateways, setup is accomplished through the setup tools of the respective system.
Figure 3.2 AS interface architecture: Type 1 (a controller or PC on the field-level bus communicates through AS interface gateways, each with a power supply/repeater, to AS interface I/O modules that connect the sensors and actuators).
(2) I/O modules. I/O modules are the interface between standard sensors and actuators and the AS interface. I/O modules are available for any kind of application, including flat modules for limited-space applications, compact modules for a variety of mounting options, field modules that use cord grips instead of quick disconnects, and standard modules that use both the mechanically keyed AS interface cable and standard 16 AWG round cable. For enclosures and junction boxes, enclosure modules connect the AS interface bus to a power rail system, and junction box modules are available for use within junction boxes.

(3) Power supplies and repeaters. With the AS interface, one single cable transmits both power and data. Power supplies contain internal data-separation coils so that the capacitive filtering of the supply does not interfere with the data stream. Adding to the high interference immunity of the AS interface is the power supply data isolation coil between the voltage transformer and the output, so that the data signals are isolated from line noise. Repeaters extend AS interface networks by up to 100 additional meters, and by using two in series an AS interface network can be up to 300 m long. Repeaters do not require a network address and allow I/O modules to be placed anywhere along the network.

(4) AS interface safety at work. This extension of the AS interface allows safety equipment to be wired together on a two-wire cable rather than hardwired back to a panel. A maximum of 31 category 4 inputs, for example E-stops, can be put on one cable. The parts included in this section are safety slaves, the safety monitor, and configuration software. In many applications safety-relevant functions have to be guaranteed, in the form of emergency stop buttons near process lines or the implementation of safe sensors (e.g., safe photo grids and the locking of safety-related doors) to stop machines automatically. Depending on the safety category, there are requirements of differing severity; typically, separate wiring is necessary as well as redundancy or increased protection for the cables. With the integration of the safety technique into the AS interface line under the terminology "AS interface safety at work," these additional costs can be drastically reduced. The concept designates connection of the safety-related switches by safe AS interface modules. There is also a safety monitor on the AS interface line permanently observing the communication. The communication follows a given, predetermined pattern defined by a dynamic code table with an 8 × 4 bit sequence. The safe monitor continuously checks the "must" and "actual" values of the
communication through comparison. In the case of the bit sequence "0000," the safe monitor switches off the safe relay in less than 40 ms. Several safe monitors can be operated on one AS interface line, arranged at any position. Clear benefits of AS interface safety at work include the following: only one AS interface line is required for the communication of safe and nonsafe data; full compatibility with all standard AS interface devices; no specific communication mechanisms are required; mixed applications are possible on one and the same AS interface line; and diagnosis of the safe modules via the standard AS interface master is possible.

(5) AS interface encoders. In order to meet the real-time requirements of many applications, a "multislave" solution was adopted. The position value, up to 16 bits in length, is transferred to the gateways within a single cycle via the four integrated AS interface chips used for control purposes. AS interface rotary encoders include 13-bit single-turn and 16-bit multi-turn versions.

(6) Accessories. To round out the AS interface and make installation as easy as possible, various accessories are offered, ranging from hand-held addressing devices to mounting bases, simulators for higher-level bus systems, sealing for the flat cable, and adaptors to round cable.
3.1.2.2 AS Interface Architecture: Type 2

In the second type of AS interface architecture, as shown in Fig. 3.3, the AS interface master module resides inside a controller such as a PLC, SCADA system, or PC. In this architecture, the AS interface master terminal enables the direct connection of AS interface slaves, and the AS interface compliant interface supports digital and analog slaves. The AS interface master does not manage the sensors and actuators via the field-level buses, but rather via the AS interface slave modules and cables. The slave modules are connected to each other by means of the AS interface cable, which can be branched with a cable branch device; a power supply and repeater are used as well. A group of slave modules forms a segment of an industrial control network on one interface cable, and the AS interface master module is able to support several segments, depending on the designed system capabilities. In this type of architecture, the AS interface requires the following components:

(1) AS interface masters. The AS interface master automatically controls all communication over the AS interface cable without
Figure 3.3 AS interface architecture: Type 2 (the controller or PC contains the AS interface master module, which connects via the AS interface cable, a power supply/repeater, and cable branches to the AS interface slave modules and their sensors/actuators, grouped into segments A and B).
the need for special software. The master can connect the system to a controller such as a PLC, SCADA system, or PC, act as a standalone controller, or serve as a gateway to higher-level bus systems. The following AS interface masters are currently on the market: (a) Standard AS interface masters. Up to 31 standard slaves, or slaves with the extended addressing mode, can be attached to a standard AS interface master. (b) Extended AS interface masters. Extended AS interface masters support 31 addresses that can be used for standard AS interface slaves or for AS interface slaves with the extended addressing mode. AS interface slaves with the extended addressing mode can be connected in pairs (programmed as A or B slaves) to an extended AS interface master and can use the same address. This increases the number of addressable AS interface slaves to a maximum of 62. Due to the address expansion, the number of binary outputs is reduced to three per AS interface slave on slaves using the extended addressing mode.
(2) AS interface slaves. All the nodes that can be addressed by an AS interface master are defined as AS interface slaves. (a) AS interface slave assembly system. AS interface slaves with the following assembly systems are available: (i) AS interface modules. AS interface modules are AS interface slaves to which up to four conventional sensors and up to four conventional actuators can be connected. The standard coupling module, which is the lower section of a standard device, connects the user module to the yellow AS interface cable. The user module connects the sensors and actuators, while the application modules connect via screw terminals or connectors. Sensors and actuators with a built-in AS interface chip can be directly connected to the AS interface cable. (ii) Sensors/actuators with an integrated AS interface connection. Sensors/actuators with an integrated AS interface connection can be connected directly to the AS interface. (b) Addressing mode. AS interface slaves are available with the following addressing modes: (i) Standard slaves. Standard slaves each occupy one address on the AS interface. Up to 31 standard slaves can be connected to the AS interface. (ii) Slaves with an extended addressing mode (A/B slaves). Slaves with an extended addressing mode can be operated in pairs at the same address with an extended AS interface master. This doubles the number of addressable AS interface slaves to 62. One of these AS interface slaves must be programmed as an A slave using the addressing unit and the other as a B slave. Due to the address expansion, the number of binary outputs is reduced to three per AS interface slave. Slaves can also be operated with a standard AS interface master. For more detailed information about these functions, refer to the AS interface master discussion in the previous paragraphs. (c) Analog slaves. Analog slaves are special AS interface standard slaves that exchange analog values with the AS interface master. Analog slaves require special program sections in the user program (drivers, function blocks) that execute the sequential transfer of analog data. Analog slaves are intended for operation with extended AS interface masters. The extended AS interface masters handle the exchange of analog data with these slaves automatically. No special drivers or function blocks are required in the user program.
(3) Further AS interface system components. The remaining AS interface components include the AS interface cable, the AS interface power supply unit, the addressing unit, and SCOPE for AS interface.

(a) AS interface cable. The trapezoidal AS interface cable is recommended over standard two-wire round cable for quick and simple connection of slaves. The AS interface cable is available in different colors to signify its voltage rating, with the following color assignments: (i) Yellow. The yellow AS interface cable carries data and control power between the master and its slaves. (ii) Black. The black cable is the external output power cable for up to 60 VDC. (iii) Red. The red cable is the external output power cable for up to 240 VAC. The AS interface cable, designed as an unshielded two-wire cable, transfers the signals and provides the power supply for the sensors and actuators connected using AS interface modules. Networking is not restricted to one type of cable; if necessary, appropriate modules or "T pieces" can be used to change to a simple two-wire cable.

(b) AS interface power supply unit. The AS interface power supply unit supplies power to the AS interface nodes connected to the AS interface cable. For actuators with particularly high power requirements, the connection of an additional load power supply may be necessary (e.g., using special application modules). Data and control power are normally transmitted simultaneously via the AS interface cable. Power for the electronics and inputs is supplied by a special AS interface power supply that feeds a symmetrical supply voltage into the AS interface cable via a data-decoupling device.

(c) Addressing unit. The addressing unit allows simple programming of AS interface slave addresses.

(d) SCOPE for AS interface. SCOPE for AS interface is a monitoring program for Windows that can record and evaluate the data exchange in AS interface networks during the commissioning phase and during operation. It runs on a PC under Windows in conjunction with an AS interface master communications processor.
3.1.3 Working Principle and Mechanism

AS interface utilizes a single, trapezoidal, unshielded two-wire cable, which eliminates the extensive parallel control wiring required with most installations. In a network with AS interface, a simple gateway interfaces
the network into the field communication bus. Data and power are transferred over the two-wire network to each of the AS interface compatible field devices. The existing controller sees the AS interface as remote I/O; therefore, the AS interface connects to the existing network with minimal programming changes. The AS interface system utilizes only one master per network to control the exchange of data. This allows the master to interrogate up to 31 slaves and update all I/O information within 5 ms (10 ms for 62 slaves). For slave connection, an insulated two-wire cable is recommended to prevent reversed polarity. The electrical connection is made using contacts that pierce the insulation of the cable and contact the two wires, thus eliminating the need to strip the cable and wire it to screw terminals. For data exchange to occur, each slave must be programmed with an address that is stored internally in nonvolatile memory and is retained even after power is removed. The tasks and functions of an AS interface master are described below; understanding them is important for making use of the functions, modes, and interfaces available with the AS interface master modules.
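The C sketch below illustrates the cyclic master-slave polling principle described above: the master exchanges 4 output bits and 4 input bits with each configured slave in turn, refreshing the whole process image once per cycle. It models only the polling idea; the telegram format, timing, and the dummy slave reply are not the real AS interface protocol.

```c
#include <stdio.h>
#include <stdint.h>

/* Conceptual sketch of the AS interface master-slave polling cycle: the
 * master addresses each configured slave in turn, sends its 4 output bits
 * and reads back its 4 input bits, refreshing the whole process image once
 * per cycle (at most 31 slaves per standard master). */
#define MAX_SLAVES 31

struct process_image {
    uint8_t outputs[MAX_SLAVES + 1];   /* 4 bits per slave, index = address */
    uint8_t inputs[MAX_SLAVES + 1];
};

/* Placeholder for the real bus transaction: exchange 4 bits with one slave. */
static uint8_t exchange_with_slave(uint8_t address, uint8_t out4)
{
    return (uint8_t)((out4 ^ address) & 0x0F);   /* dummy reply for the demo */
}

static void polling_cycle(struct process_image *im, int configured_slaves)
{
    for (uint8_t addr = 1; addr <= configured_slaves; addr++)
        im->inputs[addr] = exchange_with_slave(addr, im->outputs[addr] & 0x0F);
}

int main(void)
{
    struct process_image im = { {0}, {0} };
    im.outputs[3] = 0x5;                 /* drive two outputs on slave 3 */
    polling_cycle(&im, 31);              /* one full cycle of all slaves */
    printf("slave 3 inputs: 0x%X\n", (unsigned)im.inputs[3]);
    return 0;
}
```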
3.1.3.1 Master–Slave Principle
The AS interface operates on the master–slave principle. This means that the AS interface master connected to the AS interface cable controls the data exchange with the slaves via the interface to the AS interface cable. Figure 3.4 illustrates the two interfaces of the AS interface master communication processor. (1) The process data and parameter assignment commands are transferred via the interface between the master CPU and the master communication processor. The user programs have suitable function calls and mechanisms available for reading and writing via this interface. (2) Information is exchanged with the AS interface slaves via the interface between the master communication processor and AS interface cable. (1) Tasks and functions of the AS interface master. The AS interface master specification distinguishes masters with different ranges of functions known as a “profile.” For standard AS interface masters and extended AS interface masters, there are three different master classes (M0, M1, M2 for standard masters, and M0e, M1e, M2e for extended masters). The AS interface specification
stipulates the functions a master in a particular class must be able to perform. The profiles have the following practical significance:
(a) Master profile M0/M0e. The AS interface master can exchange I/O data with the individual AS interface slaves. The station configuration on the cable, called the "expected configuration," is used to configure the master.
(b) Master profile M1/M1e. This profile covers all the functions of the AS interface master specification.
(c) Master profile M2/M2e. The functionality of this profile corresponds to master profile M0/M0e, but in this profile the AS interface master can also assign parameters to the AS interface slaves.
The essential difference between extended AS interface masters and standard AS interface masters is that extended masters support the attachment of up to 62 AS interface slaves using the extended addressing mode. Extended AS interface masters also provide particularly simple access to AS interface analog slaves complying with the profile specifications. However, if standard operation (master profile M0) is chosen, the following details can be skipped.
(2) How an AS interface slave functions
(a) Connecting to the AS interface cable. The AS interface slave has an integrated circuit (the AS interface chip) that attaches an AS interface device (sensor/actuator) to the common bus cable leading to the AS interface master. The integrated circuit contains four configurable data inputs and outputs and four parameter outputs.
The operating parameters, configuration data with I/O assignment, identification code, and slave address are stored in additional memory (e.g., EEPROM).
(b) I/O data. The useful data for the automation components that were transferred from the AS interface master to the AS interface slave are available at the data outputs. The values at the data inputs are made available to the AS interface master when the AS interface slave is polled.
(c) Parameters. Using the parameter outputs of the AS interface slave, the AS interface master can transfer values that are not interpreted as simple data. These parameter values can be used to control and switch between internal operating modes of the sensors or actuators. For example, a calibration value could be updated during the various operating phases. This function is possible with slaves with an integrated AS interface connection, provided they support the function in question.
(d) Configuration. The input/output configuration (I/O configuration) indicates which data lines of the AS interface slave are used as inputs, outputs, or bidirectional I/O. The I/O configuration (4 bits) can be found in the description of the AS interface slave. In addition to the I/O configuration, the type of the AS interface slave is described by an identification code; newer AS interface slaves are identified by three identification codes (ID code, ID1 code, ID2 code). For more detailed information on the ID codes, refer to the manufacturer's description.
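As a rough illustration of the data held on a slave, the 4-bit I/O image, 4 parameter bits, the 4-bit I/O configuration, the ID code, and the nonvolatile address, the following Python sketch models that record. The field names are illustrative assumptions and are not taken from the AS interface specification.

from dataclasses import dataclass

@dataclass
class ASiSlaveImage:
    """Illustrative data model of an AS interface slave (names are assumptions)."""
    address: int = 0            # 0 = as-delivered; must be reassigned before data exchange
    io_configuration: int = 0x7 # 4-bit code describing which data lines are inputs/outputs
    id_code: int = 0xF          # identification code (ID1/ID2 codes exist on newer slaves)
    parameters: int = 0xF       # 4 parameter bits, e.g., to switch operating modes
    data_in: int = 0x0          # 4-bit input image
    data_out: int = 0x0         # 4-bit output image

    def is_addressed(self) -> bool:
        """Address 0 is reserved; a slave must be re-addressed before it exchanges data."""
        return self.address != 0


slave = ASiSlaveImage()
slave.address = 5               # normally set with an addressing unit or by the master
slave.parameters = 0b1011       # e.g., select a threshold or operating mode
print(slave.is_addressed(), bin(slave.parameters))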
3.1.3.2 Data Transfer
(1) Information and data structure. Before introducing the operating phases and the functions performed during them, a brief outline of the information structure of the AS interface master/slave system is necessary. Figure 3.5 shows the data fields and lists of the system, arranged in the system structure given in Fig. 3.4.
Figure 3.5 Data transfer between the AS interface master and the AS interface slave.
The following structures are found on the AS interface master:
(a) Data images. These contain temporarily stored information:
(i) actual parameters, which are an image of the parameters currently on the AS interface slaves;
(ii) actual configuration data, which contain the I/O configurations and ID codes of all connected AS interface
slaves once these data have been read from the AS interface slaves;
(iii) the list of detected AS interface slaves (LDS), which specifies which AS interface slaves were detected on the AS interface bus;
(iv) the list of activated AS interface slaves (LAS), which specifies which AS interface slaves were activated by the AS interface master. I/O data are only exchanged with activated AS interface slaves.
(b) I/O data. The I/O data are the process input and output data.
(c) Configuration data. These are nonvolatile data (e.g., stored in an EEPROM) that remain available unchanged even following a power failure:
(i) expected configuration data, which are selectable comparison values against which the configuration data of the detected AS interface slaves can be checked;
(ii) the list of permanent AS interface slaves (LPS), which specifies the AS interface slaves the AS interface master expects on the AS interface cable. The AS interface master checks continuously whether all the AS interface slaves specified in the LPS exist and whether their configuration data match the expected configuration data.
The AS interface slave has the following structures:
(a) I/O data
(b) Parameters
(c) Actual configuration data. The configuration data include the I/O configuration and the ID codes of the AS interface slave.
(d) Address. The AS interface slaves have address "0" when installed. To allow data exchange, the AS interface slaves must be programmed with addresses other than "0." The address "0" is reserved for special functions.
(2) The operating phases. Figure 3.6 illustrates the individual operating phases; a simple state-machine sketch of these phases is given at the end of this subsection.
Figure 3.6 How the individual operating phases work in the data transfer through an AS interface.
(a) Initialization mode. The initialization mode, also known as the offline phase, sets the basic status of the master. The module is initialized after switching on the power supply or following a restart during operation. During initialization, the images of all the slave inputs and the output data are set, from the point of view of the application, to the value "0" (inactive). After switching on the power supply, the configured parameters are copied to the parameter field so that subsequent activation uses the preset parameters. If the AS interface master is reinitialized during operation, the values in the parameter field, which may have changed in the meantime, are retained.
(b) Start-up phase.
(i) Detection phase: detection of AS interface slaves in the startup phase. During startup or after a reset, the AS interface master runs through a startup phase during which it detects which AS interface slaves are connected to the AS interface cable and what type of slaves these are. The "type" of the slaves is specified by the configuration data stored
permanently on the AS interface slave when it is manufactured and can be queried by the master. The configuration data contain the I/O assignment of an AS-I slave and the slave type (ID codes). The master enters detected slaves in the list of detected slaves (LDS).
(ii) Activation phase: activating AS interface slaves. After the AS interface slaves are detected, the master sends a special call which activates them. When activating individual slaves, a distinction is made between two modes on the AS interface master:
Master in the configuration mode: All detected stations (with the exception of the slave with address "0") are activated. In this mode, it is possible to read actual values and to store them as a configuration.
Master in the protected mode: Only the stations corresponding to the expected configuration stored on the AS interface master are activated. If the actual configuration found on the AS interface cable differs from this
expected configuration, the AS interface master indicates this. The master enters activated AS interface slaves in the list of activated slaves (LAS).
(iii) Normal mode. On completion of the startup phase, the AS interface master switches to the normal mode.
(iv) Data exchange phase. In the normal mode, the master sends cyclic data (output data) to the individual AS interface slaves and receives their acknowledgment messages (input data). If an error is detected during the transmission, the master repeats the appropriate poll.
(v) Management phase. During this phase, all existing jobs of the control application are processed and sent. Possible jobs are, for example, as follows:
Parameter transfer: Four parameter bits (three parameter bits with AS interface slaves using the extended addressing mode) are transferred to a slave and are used, for example, for a threshold value setting.
Changing slave addresses: This function allows the addresses of AS interface slaves to be changed by the master if the AS interface slave supports this particular function.
(vi) Inclusion phase. In the inclusion phase, newly added AS interface slaves are included in the list of detected AS interface slaves and, provided the configuration mode is selected, they are also activated (with the exception of slaves with address "0"). If the master is in the protected mode, only the slaves stored in the expected configuration of the AS interface master are activated. With this mechanism, slaves that were temporarily out of service are included again.
(3) Interface functions. To control the master and slave interaction from the user program, various functions are available on the interface; the possibilities are explained below. The possible operations and the direction of data flow are illustrated in Fig. 3.7.
Figure 3.7 How the AS interface functions.
(a) Read/write. When writing, parameters are transferred to the slave and to the parameter image on the communication processor; when reading, parameters are transferred from the slave or from the communication processor parameter image to the CPU.
(b) Read and store (configured) configuration data. Configured parameters or configuration data are read from the nonvolatile memory of the communication processor.
(c) Configure actual. When reading, the parameters and configuration data are read from the slave and stored permanently on the communication processor; when writing, the parameters and configuration data are stored permanently on the communication processor.
(d) Supply slaves with configured parameters. Configured parameters are transferred from the nonvolatile area of the communication processor to the slaves.
(4) Operating extended AS interface slaves with standard AS interface masters. The following applies when extended AS interface slaves are operated with standard AS interface masters:
(a) A slaves connected to standard masters. The most significant slave bit (bit 4) of each A slave must be set to "0." The most significant parameter bit (bit 4) must be set to "1" (default value). Without these settings, the A slave cannot be operated with a standard master.
(b) B slaves must not be connected to standard AS interface masters.
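To summarize the operating phases described in this subsection, here is a minimal state-machine sketch in Python. It is a conceptual illustration only: the phase names mirror Fig. 3.6, but the class structure and method names are assumptions and not part of any AS interface master firmware.

# Conceptual state machine for the AS interface master operating phases (Fig. 3.6).
# Names and structure are illustrative assumptions, not a real master implementation.

class ASiMasterPhases:
    def __init__(self, expected_configuration, configuration_mode=False):
        self.expected = expected_configuration    # {address: (io_config, id_code)}
        self.configuration_mode = configuration_mode
        self.lds = set()                          # list of detected slaves
        self.las = set()                          # list of activated slaves
        self.phase = "offline"

    def _activate(self, address, config):
        """Activate a detected slave according to the current mode."""
        if address == 0:                          # address 0 is never activated
            return
        if self.configuration_mode or self.expected.get(address) == config:
            self.las.add(address)

    def startup(self, slaves_on_cable):
        """Detection and activation phases after power-on or a reset."""
        self.phase = "detection"
        self.lds = set(slaves_on_cable)
        self.phase = "activation"
        for address, config in slaves_on_cable.items():
            self._activate(address, config)
        self.phase = "normal operation"

    def inclusion(self, new_slaves):
        """Inclusion phase: newly added or returning slaves are taken in again."""
        for address, config in new_slaves.items():
            self.lds.add(address)
            self._activate(address, config)


master = ASiMasterPhases({1: (0x7, 0xF), 2: (0x7, 0xF)})   # protected mode
master.startup({1: (0x7, 0xF), 2: (0x0, 0x0)})             # slave 2 mismatches
print(sorted(master.las))                                    # -> [1]
master.inclusion({2: (0x7, 0xF)})                            # slave 2 returns correctly
print(sorted(master.las))                                    # -> [1, 2]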
3.1.4 System Characteristics and Important Data
3.1.4.1 How the AS Interface Functions
The AS interface or AS interface system operates as outlined below:
(1) Master–slave access technique. The AS interface is a "single master system." This means that there is only one master per AS interface network, and it controls the exchange of process data. It polls all AS interface slaves one after the other and waits for a response.
(2) Electronic address setting. The address of an AS interface slave is its identifier. Each address occurs only once within an AS interface system. The setting can be made either using a special addressing unit or by an AS interface master. The address is always stored permanently on the AS interface slave.
(3) Operating reliability and flexibility. The transmission technique used (current modulation) guarantees high operating reliability. The master monitors the voltage on the cable and the transferred data. It detects transmission errors and the failure of slaves and sends a message to the controller, such as a PLC or PC; the user can then react to this message. Replacing or adding AS interface slaves during normal operation does not affect communication with the other AS interface slaves.
3.1.4.2 Physical Characteristics
The most important physical characteristics of the AS interface and its components are as follows:
(1) The two-wire cable for data and power supply. A simple two-wire cable can be used; shielding or twisting is not necessary. Both the data and the power are transferred on this cable. The power available depends on the AS interface power supply unit used. For optimum wiring, the mechanically coded AS interface cable is available, preventing the connections from being reversed and making simple contact with the AS interface application modules using the penetration technique.
(2) Tree structure network with a cable. The "tree structure" of the AS interface allows any point on a cable section to be used as the start of a new branch.
(3) Direct integration. Practically all the electronics required for a slave have been integrated on a special integrated circuit. This allows the AS interface connector to be integrated directly in binary actuators or sensors.
(4) Increased functionality, more uses for the customer. Direct integration allows devices to be equipped with a wide range of functions. Four data and four parameter lines are available. The resulting "intelligent" actuators/sensors increase the possibilities, for example, monitoring, parameter assignment, wear or pollution checks, etc.
(5) Additional power supply for higher power requirements. An external source of power can be provided for slaves with a higher power requirement.
3.1.4.3 System Limits
(1) Cycle time
(a) Maximum 5 ms with standard AS interface slaves.
(b) Maximum 10 ms with AS interface slaves using the extended addressing mode.
AS interface uses constant message lengths. Complicated procedures for controlling transmission and identifying message lengths or data formats are not required. This makes it possible for a master to poll all connected standard slaves within a maximum of 5 ms and to update the data on both the master and the slaves. If only one AS interface slave using the extended addressing mode is located at an address, this slave is polled at least every 5 ms. If two extended slaves (an A and a B slave) share an address, the maximum polling cycle is 10 ms. (B slaves can only be connected to extended masters.)
(2) Number of connectable AS interface slaves
(a) Maximum of 31 standard slaves.
(b) Maximum of 62 slaves with the extended addressing mode.
AS interface slaves are the input and output channels of the AS interface system. They are only active when called by the AS interface master, and they trigger actions or transmit reactions to the master when commanded. Each AS interface slave is identified by its own address (1–31). A maximum of 62 slaves using the extended addressing mode can be connected to an extended master: pairs of slaves using the extended addressing mode occupy one address; in other words, each of the addresses 1–31 can be assigned to two extended slaves. If standard slaves are connected to an extended master, each occupies a complete address; in other words, a maximum of 31 standard slaves can be connected to an extended master.
(3) Number of inputs and outputs
(a) A maximum of 248 binary inputs and outputs with standard modules.
(b) A maximum of 248 inputs and 186 outputs with modules using the extended addressing mode.
Each standard AS interface slave can receive 4 bits of data and send 4 bits of data. Special modules allow each of these bits to be used for a binary actuator or a binary sensor. This means that an AS interface cable with standard AS interface slaves can have a maximum of 248 binary attachments (124 inputs and 124 outputs). All typical actuators or sensors can be connected to the AS interface in this way. The modules are used as distributed inputs/outputs. If modules with the extended addressing mode are used, a maximum of 4 inputs and 3 outputs is available per module; in other words, a maximum of 248 inputs and 186 outputs can be operated with modules using the extended addressing mode.
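As a quick cross-check of these limits, the following sketch computes the I/O capacity for both addressing modes from the per-slave bit counts quoted above. It is plain arithmetic and involves no AS interface library.

# Cross-check of the AS interface I/O capacity figures quoted in the text.

def asi_capacity(extended_addressing: bool) -> dict:
    if extended_addressing:
        slaves = 62          # A/B slaves share the 31 addresses in pairs
        inputs_per_slave, outputs_per_slave = 4, 3
    else:
        slaves = 31          # standard slaves, one per address
        inputs_per_slave, outputs_per_slave = 4, 4
    return {
        "slaves": slaves,
        "inputs": slaves * inputs_per_slave,
        "outputs": slaves * outputs_per_slave,
    }

print(asi_capacity(extended_addressing=False))  # {'slaves': 31, 'inputs': 124, 'outputs': 124}
print(asi_capacity(extended_addressing=True))   # {'slaves': 62, 'inputs': 248, 'outputs': 186}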
3.1.4.4 Range of Functions of the Master Modules
The functions of the AS interface master modules are stipulated in the AS interface master specification. An overview of these functions can be found in the master module manual provided by the vendor or manufacturer.
The AS interface protocol was created in Germany in 1994 by a consortium of factory automation suppliers. Originally developed as a low-cost method for addressing discrete sensors in factory automation applications, AS interface has since gained acceptance in process industries due to its high power capability, its simplicity of installation and operation, and the low added cost per device. Each AS interface segment can network up to 31 devices. This provides for 124 inputs and 124 outputs, giving a maximum capacity of 248 I/O per network on a v2.0 segment. The AS interface v2.1 specification doubles this to 62 devices per segment, providing 248 inputs and 186 outputs for a total network capacity of 434 I/O points. Both signal and power are carried on two wires. Up to 8 A at 30 VDC of power are available for field devices such as solenoid valves.
3.1.4.5 AS Interface in a Real-Time Environment
The system characteristics listed below enable the AS interface to work in a real-time environment:
(1) Optimized system for binary sensors and actuators and for simple analog elements.
(2) Master–slave principle with cyclic polling.
(3) Tree structure of the network.
(4) Both data and power by means of one unshielded two-wire cable.
(5) Flat cable for contacting by piercing technology.
(6) Modules as remote I/O ports for conventional sensors and actuators.
(7) Integrated slaves with their own AS interface capabilities.
(8) No communication software in the slaves, only firmware in the self-configuring master.
(9) Low costs, simple installation, easy handling, flexible networks, high reliability in an industrial environment, open and internationally accepted system with many manufacturers and products.
There are three aspects of the AS interface that are of particular importance in real-time applications: connectivity, cycle time, and availability.
(1) Connectivity. AS interface has two distinct ways of being connected to the first control level. The first and most important way is a direct connection, given as type 2 of the AS interface architectures in Section 3.1.2. In that case, the system's master is part of a controller such as a PLC, SCADA system, or PC, running at its own cycle time. As the AS interface is an open system, any controller manufacturer (PLC, SCADA, or PC) can build a master for its own system. Masters are already available for many systems, with several more in development. The second way is to connect AS interface via a coupler to a higher Fieldbus and to use it as a subsystem, given as type 1 of the AS interface architectures in Section 3.1.2. In that case, all data from the AS interface network are handled in one node of the Fieldbus, which is connected to the host above it together with the other components of the higher Fieldbus. The application program has to handle all data as usual for the particular Fieldbus. For real-time applications, an analysis of the cycle time and the availability of the combination of the two systems has to be done. AS interface is definitely open to such solutions and offers couplers to most well-known higher Fieldbuses, such as PROFIBUS, CAN, etc., with others (e.g., LON, Fieldbus Foundation) in preparation. Together with its tree structure, AS interface thus offers a highly flexible networking solution for applications in automation.
(2) Cycle time. AS interface is a single-master system with cyclic polling. Thus, any slave is addressed in a definite time.
For a complete network with 31 slaves, the cycle time is 5 ms. It may be shorter with fewer slaves. (With very few slaves the cycle time can be shortened to less than 500 µs.) Analog data with more than 4 bits need several cycles depending on their length, but without affecting the basic cycle time for binary sensors and actuators. The cycle time includes all steps from and to the interface to the host system and even includes one repetition. The data exchange with the host takes place via process I/O images that are stored at the end of each cycle in, for example, a dual-ported memory at the interface. Therefore no other steps have to be taken into account for a direct connection to the control device. This coupling is asynchronous; in real-time applications the interplay of the cycle times of the network and the controller may present a restriction, but for many systems and applications it is short enough.
(3) Availability. Availability in this context means that a system will deliver reliable data and diagnostic values continuously and in time under all specified conditions, especially under severe electromagnetic noise. The answers to three questions are of special importance for real-time applications:
(a) Can electromagnetic noise or other faults disturb the reliability of data?
(b) How much time is necessary for the correction of a faulty transmission?
(c) How often does such a fault happen, and can this affect the whole system?
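The cycle-time figures above suggest a simple rule of thumb: the cycle scales roughly linearly with the number of activated slaves, from below 500 µs for very few slaves up to 5 ms for a full set of 31. The sketch below applies that assumption; the per-slave time is derived only from the 5 ms / 31 slaves figure quoted in the text and is not an official formula.

# Rough AS interface cycle-time estimate (rule-of-thumb only, not a normative formula).

FULL_CYCLE_MS = 5.0      # quoted maximum for 31 standard slaves
FULL_SLAVE_COUNT = 31

def estimated_cycle_ms(active_slaves: int, extended_pairs: bool = False) -> float:
    """Linear scaling with the number of polled slaves.

    If A/B slave pairs share addresses (extended addressing), each shared
    address is effectively served in alternate cycles, doubling the worst case.
    """
    per_slave_ms = FULL_CYCLE_MS / FULL_SLAVE_COUNT          # ~0.16 ms per slave
    cycle = max(active_slaves, 1) * per_slave_ms
    return 2 * cycle if extended_pairs else cycle

print(f"{estimated_cycle_ms(3):.2f} ms for 3 slaves")        # ~0.48 ms (< 500 µs)
print(f"{estimated_cycle_ms(31):.2f} ms for a full network") # 5.00 ms
print(f"{estimated_cycle_ms(31, extended_pairs=True):.2f} ms worst case with A/B pairs")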
3.2 Industrial Control System Interface Devices
In reference to Fig. 3.1, there exist two kinds of interface between the control level and the actuator/sensor level in industrial control systems: the AS interface and the field level interface. In addition to these two kinds of interface, the interface between the controller and either the AS interface or the field level interface is also of importance in industrial control systems. This interface normally resides in the controller's microprocessor unit or chipset to bridge the central processing unit (CPU) with the exterior environment; it can therefore be called the controller interface, or simply "interfaces" hereafter. Section 3.1 gives a detailed discussion of the AS interface. This section concentrates on the field level in Section 3.2.1 and on interfaces in Section 3.2.2.
3.2.1 Fieldbus System
In recent years, hundreds of Fieldbuses have emerged, developed by different companies and organizations all over the world. The term Fieldbus covers many different industrial control protocols. The following lists some typical Fieldbuses with their applications as shown in Fig. 3.1.
3.2.1.1 Foundation Fieldbus
The Foundation Fieldbus can be flexibly used in process automation applications. The specification supports bus-powered field devices and allows application in hazardous areas. The Fieldbus Foundation, an independent not-for-profit organization which aims at developing and maintaining an internationally uniform and successful Fieldbus for automation tasks, set out to establish an international, interoperable Fieldbus standard that replaces the expensive, conventional 4–20 mA wiring in the field and enables bidirectional data transmission. The entire communication between the devices and the automation system as well as the process control station takes place over the bus system, and all operating and device data are exclusively transmitted over the Fieldbus. The communication between control station, operating terminals, and field devices simplifies the start-up and parameterization of all components. The communication functions allow diagnostic data, which are provided by up-to-date field devices, to be evaluated. The essential objectives of Fieldbus technology are to reduce installation costs, to save time and costs through simplified planning, and to improve the operating reliability of the system through additional performance features. Fieldbus systems are usually implemented in new plants or in existing plants that must be extended. To convert an existing plant to Fieldbus technology, the conventional wiring can either be modified into a bus line or be replaced with a shielded bus cable, if required.
(1) Performance features. The Foundation Fieldbus provides a broad spectrum of services and functions compared to other Fieldbus systems:
(a) Intrinsic safety for use in hazardous environments
(b) Bus-powered field devices
(c) Line or tree topology
(d) Multimaster-capable communication
(e) Deterministic (predictable) dynamic behavior
(f) Distributed data transfer (DDT)
(g) Standardized block model for uniform device interfaces
(h) Flexible extension options based on device descriptions.
The characteristic feature of distributed data transfer enables individual field devices to execute automation tasks, so that they are no longer just sensors or actuators but contain additional functions. To describe a device's functions and to define uniform access to the data, the Foundation Fieldbus contains predefined function blocks. The function blocks implemented in a device provide information about the tasks the device can perform. Typical function blocks provided by sensors are analog input or discrete input (digital input). Control valves usually contain the function blocks analog output or discrete output (digital output). For process control tasks, Proportional and Derivative (PD controller) or Proportional, Integral, and Derivative (PID controller) blocks exist. If a device contains such a function block, it can control a process variable independently.
The shift of automation tasks from the control level down to the field results in flexible, distributed processing of control tasks. This reduces the load on the central process control station, which can even be replaced entirely in small-scale installations. Therefore, an entire control loop can be implemented as the smallest unit, consisting only of one sensor and one control valve with an integrated process controller, which communicate over the Foundation Fieldbus (see Fig. 3.8). The enhanced functionality of the devices leads to higher requirements on the device hardware and to correspondingly complex software implementations and device interfaces.
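The "control in the field" idea can be illustrated with a short sketch of an AI → PID → AO block chain. This mirrors the block model described above but is not a Foundation Fieldbus API; all class names and parameter values are illustrative assumptions.

# Conceptual "control in the field" loop: AI -> PID -> AO function blocks.

class AnalogInput:
    """AI block: a transmitter publishing a process value (e.g., level in %)."""
    def __init__(self, read_sensor):
        self.read_sensor = read_sensor

    def value(self):
        return self.read_sensor()


class PID:
    """Minimal PI(D) block executing in the field device."""
    def __init__(self, kp, ki, setpoint, period_s):
        self.kp, self.ki, self.setpoint, self.period_s = kp, ki, setpoint, period_s
        self.integral = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.period_s
        return self.kp * error + self.ki * self.integral


class AnalogOutput:
    """AO block: drives the control valve (0..100 % travel)."""
    def __init__(self, write_valve):
        self.write_valve = write_valve

    def update(self, output):
        self.write_valve(max(0.0, min(100.0, output)))


# Wiring the smallest possible loop: one transmitter, one valve with an integrated PID.
level = 40.0
ai = AnalogInput(lambda: level)
pid = PID(kp=2.0, ki=0.5, setpoint=50.0, period_s=1.0)
ao = AnalogOutput(lambda travel: print(f"valve travel = {travel:.1f} %"))

for _ in range(3):                      # three scheduled macrocycles
    ao.update(pid.update(ai.value()))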
Figure 3.8 Foundation Fieldbus control network (H1 bus segments with junction boxes, linked to a high speed Ethernet via bridge and switch).
(2) Layered communications model. The Foundation Fieldbus specification is based on the layered communications model and consists of three major functional elements, as illustrated in Fig. 3.9:
(a) Physical layer
(b) Communication "stack"
(c) User application. The user application is made up of function blocks and the device description; it is directly based on the communication stack. Depending on which blocks are implemented in a device, users can access a variety of services.
System management utilizes the services and functions of the user application and the application layer to execute its tasks (Fig. 3.9(b) and (c)). It ensures proper cooperation between the individual bus components and synchronizes the measurement and control tasks of all field devices with regard to time.
The Foundation Fieldbus layered communications model is based on the ISO/OSI reference model. As is the case for most Fieldbus systems, and in accordance with an IEC specification, layers 3–6 are not used. The comparison in Fig. 3.9 shows that the communication stack covers the tasks of layers 2 and 7 and that layer 7 consists of the Fieldbus Access Sublayer and the Fieldbus Message Specification.
Figure 3.9 Structure and description of the Foundation Fieldbus communication model: (a) major functional elements, (b) comparison with the ISO/OSI reference model, (c) communication stack and user application layers.
(3) Physical layer. The Foundation Fieldbus model solves the pending communication tasks by using two bus systems, the slow, intrinsically safe H1 bus and the fast, higher level H2 bus, as given in Fig. 3.8.
(a) H1 bus. The following summary gives a brief overview of the basic values and features of the H1 bus.
(i) Manchester coding is used for data transfer. The data transfer rate is 31.25 kbit/s.
(ii) Proper communication requires that the field devices have enough voltage; each device should have a minimum of 9 V. To make sure that this requirement is met, software tools are available which calculate the resulting currents and terminal voltages based on the network topology, the line resistance, and the supply voltage (a simplified calculation of this kind is sketched at the end of this subsection).
(iii) The H1 bus allows the field devices to be powered over the bus. The power supply unit is connected to the bus line in the same way (in parallel) as a field device. Field devices powered by supply sources other than the bus must additionally be connected to their own supply sources.
(iv) With the H1 bus it must be ensured that the maximum power consumption of the current-consuming devices is lower than the electric power supplied by the power supply unit.
(v) Network topologies used are usually line topology or, when equipped with junction boxes, star, tree, or a combination of topologies. The devices are best connected via short spurs using tee connectors to enable connection/disconnection of the devices without interrupting communication.
(vi) The maximum length of a spur is limited to 120 m and depends on the number of spurs used as well as the number of devices per spur.
(vii) Without repeaters, the maximum length of an H1 segment can be as long as 1900 m. By using up to four repeaters, a maximum of 5 × 1900 m = 9500 m can be achieved. The short spurs from the field device to the bus are included in this total length calculation.
(viii) The number of bus users per bus segment is limited to 32. In explosion-hazardous (intrinsically safe) areas, this number is reduced to only a few devices due to power supply limitations.
(ix) Various types of cables are usable for Fieldbus. Type A is recommended as the preferred Fieldbus cable, and only this type is specified for the maximum bus length of 1900 m.
(x) In principle, there need to be two terminators per bus segment, one at or near each end of a transmission line.
(xi) It is not imperative that bus cables be shielded; however, shielding is recommended to prevent possible interference and for best performance of the system.
The H1 bus can be designed intrinsically safe (Ex-i) to suit applications in hazardous areas. This requires that proper barriers be installed between the safe and the explosion-hazardous area. In addition, only one device, the power supply unit, must supply the Fieldbus with power. All other devices must always, that is, also when transmitting and receiving data, function as current sinks. Since the capacity of electrical lines is limited in intrinsically safe areas depending on the explosion group (IIB or IIC), the number of devices that can be connected to one segment depends on the effective power consumption of the devices used. Since the Foundation Fieldbus specification is not based on the FISCO model, the plant operator must ensure that intrinsic safety requirements are met when planning and installing the communications network. For instance, the capacitance and inductance of all line segments and devices must be calculated to ensure that the permissible limit values are observed.
(b) High speed Ethernet (HSE). The HSE is based on standard Ethernet technology. The required components are therefore widely used and are available at low cost. The HSE runs at 100 Mbit/s and can be equipped not only with electrical lines but also with optical fiber cables. The Ethernet operates by using random (not deterministic) CSMA bus access. This method can only be applied to a limited number of automation applications because such applications require real-time capability. The extremely high transmission rate enables the bus to respond sufficiently fast when the bus load is low and only a few devices are connected. With respect to process engineering requirements, real-time requirements are met in any case. If the bus load must be reduced due to the many connected devices, or if several HSE partial networks are to be combined
to create a larger network, Ethernet switches must be used (see Fig. 3.8). A switch reads the target address of the data packets that must be forwarded and then passes the packets on to the associated partial network. This way, the bus load and the resulting bus access time can be controlled to best adapt them to the respective requirements.
(c) Bridge for H1–HSE coupling. A communications network that consists of an H1 bus and an HSE network results in a topology as illustrated in Fig. 3.8. To connect the comparatively slow H1 segments to the HSE network, coupling components, so-called bridges, are required. A bridge is used to connect the individual H1 buses to the fast high speed Ethernet. The different data transfer rates and data telegrams must be adapted and converted, considering the direction of transmission. This way, powerful and widely branched networks can be installed in larger plants.
(4) Communication stack. The field devices used with the Foundation Fieldbus are capable of assuming process control functions. This option is based on distributed communication, which ensures that each controlling field device can exchange data with other devices (e.g., reading measured values, forwarding correction values), that all field devices are served in time ("in time" meaning that the processing of the different control loops is not negatively influenced), and that two or more devices never access the bus simultaneously. To meet these requirements, the H1 bus of the Foundation Fieldbus uses a central communication control system.
(a) Link active scheduler (LAS). The LAS controls and schedules the communication on the bus. It controls the bus activities using different commands which it broadcasts to the devices. Since the LAS also continuously polls unassigned device addresses, it is possible to connect devices during operation and to integrate them in the bus communication. Devices that are capable of becoming the LAS are called Link Masters. Basic devices do not have the capability to become the LAS. In a redundant system containing multiple Link Masters, one of the Link Masters will become the LAS if the active LAS fails (fail-operational design).
(b) Communication control. The communication services of the FF specification utilize scheduled and unscheduled data transmission. Time-critical tasks, such as the control of process variables, are exclusively performed by scheduled services, whereas parameterization and diagnostic functions are carried out using unscheduled communication services.
(i) Scheduled data transmission. To solve communication tasks in time and without access conflicts, all time-critical tasks are based on a strict transmission schedule. This schedule is created by the system operator during the configuration of the Foundation Fieldbus system. The LAS periodically broadcasts a synchronization signal (TD: Time Distribution) on the Fieldbus so that all devices have exactly the same data link time. In scheduled transmission, the point in time and the sequence are exactly defined. This is why it is called a deterministic system.
(ii) Unscheduled transmission. Device parameters and diagnostic data must be transmitted when needed, that is, on request. The transmission of these data is not time critical. For such communication tasks, the Foundation Fieldbus is equipped with the option of unscheduled data transmission. Unscheduled data transmission is restricted exclusively to the breaks between scheduled transmissions. The LAS grants permission to a device to use the Fieldbus for unscheduled communication tasks if no scheduled data transmission is active. Permission for a certain device to use the bus is granted by the LAS when it issues a pass token (PT command) to the device. The pass token is sent around to all devices entered in the Live List, which is administered by the LAS. Each device may use the bus as long as required, either until it returns the token or until the maximum granted time to use the token has elapsed. The Live List is continuously updated by the LAS. The LAS sends a special command, the Probe Node (PN), to the addresses not in the Live List, searching for newly added devices. If a device returns a Probe Response (PR) message, the LAS adds the device to the Live List, where it receives the pass token for unscheduled communication in the order maintained in the Live List. Devices which do not respond to the PT command or do not return the token after three successive tries are removed from the Live List. Whenever a device is added to or removed from the Live List, the LAS broadcasts these changes to all devices. This allows all Link Masters to maintain a current copy of the Live List so that they can become the LAS without loss of information.
(c) Communication schedule. The LAS follows a strict schedule to ensure that unscheduled communication using the token as well as the TD or PN commands does not interfere with the scheduled data transmission. Before each operation, the LAS refers to the transmission list to check for any scheduled data transmissions. If there is one, it waits (idle mode) for precisely the scheduled time and then sends a Compel Data (CD) message to activate the operation. In case there are no scheduled transmissions and sufficient time is available for additional operations, the LAS sends one of the other commands: with PN it searches for new devices, or it broadcasts a TD message so that all devices have exactly the same data link time, or it uses the PT message to pass the token for unscheduled communication. Following this, the sequence starts all over again with the abovementioned check of the transmission list entries. This cycle gives scheduled transmission the highest priority, and the scheduled times are strictly observed, regardless of other operations.
(5) User application layer. The Fieldbus Access Sublayer (FAS) and Fieldbus Message Specification (FMS) layer form the interface between the data link layer and the user application (see Fig. 3.9). The services provided by FAS and FMS are invisible to the user. However, the performance and functionality of the communication system depend considerably on these services.
(a) Fieldbus access sublayer (FAS). FAS services create Virtual Communication Relationships (VCR), which are used by the higher level FMS layer to execute its tasks (Fig. 3.10). VCRs describe different types of communication processes and enable the associated activities to be processed more quickly.
Figure 3.10 Virtual Communication Relationships of the FAS: Client/Server (operator communication), Report distribution (event notifications, alarms, trend reports), and Publisher/Subscriber (data publication).
Foundation Fieldbus communication utilizes three different VCR types as follows:
(i) The Publisher/Subscriber VCR type is used to transmit the input and output data of function blocks. As described above, scheduled data transmission with the CD command is based on this type of VCR. However, the Publisher/Subscriber VCR is also available for unscheduled data transmission, for instance, if a subscriber requests measuring or positioning data from a device.
(ii) The Client/Server VCR type is used for unscheduled, user-initiated communication based on the PT command. If a device (client) requests data from another device, the requested device (server) only responds
when it receives a PT from the LAS. The Client/Server communication is the basis for operator-initiated requests, such as set point changes, tuning parameter access and change, diagnosis, device upload and download, etc.
(iii) Report distribution communication is used to send alarm or other event notifications to the operator consoles or similar devices. Data transmission is unscheduled and takes place when the device receives the PT command, together with the report (trend or event notification). Fieldbus devices that are configured to receive the data await and read these data.
(b) Fieldbus message specification (FMS). FMS provides the services for standardized communication. Data types that are communicated over the Fieldbus are assigned to certain communication services. For a uniform and clear assignment, object descriptions are used. Object descriptions not only contain definitions of all standard transmission message formats but also include application-specific data. For each type of object there are special, predefined communication services. Object descriptions are collected together in a structure called an object dictionary. Each object description is identified by its index.
(1) Index 0, called the object dictionary header, provides a description of the dictionary itself.
(2) Indices between 1 and 255 define standard data types that are used to build more complex object descriptions.
(3) The User Application object descriptions can start at any index above 255.
The FMS defines Virtual Field Devices (VFD), which are used to make the object descriptions of a field device as
well as the associated device data available over the entire network. The VFDs and the object description can be used to remotely access all local field device data from any location by using the associated communication services.
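Before leaving the Foundation Fieldbus, the H1 power budget mentioned under the physical layer (each device needs at least 9 V at its terminals) lends itself to a quick check. The sketch below is a simplified DC voltage-drop estimate for a single trunk feeding all devices at its far end; real segment-design tools model the actual topology, and the supply voltage, loop resistance, and per-device current used here are assumed example values, not specification figures.

# Simplified H1 segment voltage-drop check (lumped worst case, example values only).

def terminal_voltage(supply_v, cable_length_m, loop_resistance_ohm_per_km, total_current_a):
    """Voltage left at the far end if all device current flows through the full trunk."""
    loop_resistance = loop_resistance_ohm_per_km * cable_length_m / 1000.0
    return supply_v - total_current_a * loop_resistance

SUPPLY_V = 24.0          # assumed Fieldbus power supply output
R_LOOP = 44.0            # assumed loop resistance of the trunk cable, ohm/km
DEVICE_CURRENT_A = 0.020 # assumed 20 mA quiescent current per device

for devices in (4, 8, 16):
    v = terminal_voltage(SUPPLY_V, cable_length_m=1900,
                         loop_resistance_ohm_per_km=R_LOOP,
                         total_current_a=devices * DEVICE_CURRENT_A)
    status = "OK" if v >= 9.0 else "below the 9 V minimum"
    print(f"{devices:2d} devices on a 1900 m trunk -> {v:5.2f} V at the far end ({status})")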
3.2.1.2 PROFIBUS
PROFIBUS is the largest Fieldbus in the world, with cost-saving solutions in factory automation and process automation plus coverage of safety, drives, and motion control. This Fieldbus approach produces significant cost savings in design, installation, and maintenance expenses over the old approach of point-to-point wiring. For many years, PROFIBUS development has continued with undiminished enthusiasm and energy. With more than 13,000,000 nodes installed, there are significantly more PROFIBUS nodes installed than of any other Fieldbus. Current PROFIBUS activities are targeted at system integration, PROFIBUS engineering development, and application profiles. Because of these application profiles, PROFIBUS today is the only Fieldbus that provides robust engineering solutions for both factory and process automation.
(1) Working mechanism. PROFIBUS is suitable for both fast, time-critical applications and complex communication tasks. PROFIBUS communication is specified in the international standards IEC 61158 and IEC 61784. The application and engineering aspects are specified in the generally available guidelines of the PROFIBUS User Organization. This fulfills user demand for manufacturer independence and openness and ensures communication between devices of various manufacturers.
PROFIBUS can handle large amounts of data at high speed and can serve the needs of large installations. Based on a real-time-capable asynchronous token bus principle, PROFIBUS defines multimaster and master–slave communication relations, with cyclic or acyclic access, allowing transfer rates of up to 500 kbit/s. The physical layer (two-wire RS485), the data link layer, and the application layer are all standardized. PROFIBUS distinguishes between confirmed and unconfirmed services, allowing process communication with both broadcast and multicast protocols.
PROFIBUS DP is a master/slave polling network with the ability to upload/download configuration data and to precisely synchronize multiple devices on the network. Multiple masters are possible in PROFIBUS, but the outputs of any device can only be assigned to one master. There is no power on the bus.
(2) Basic types. PROFIBUS encompasses several industrial bus protocol specifications, including PROFIBUS-DP, PROFIBUS-PA, PROFIBUS-FMS, PROFInet, PROFI-safe, and PROFIBUS for motion control.
(a) PROFIBUS-DP. PROFIBUS-DP is the main emphasis for factory automation; it uses RS485 transmission technology and one of the DP communications protocol versions, and it is in widespread use for such items as remote I/O systems, motor control centers, and variable speed drives. PROFIBUS-DP communicates at speeds from 9.6 kbps to 12 Mbps over distances from 100 to 1200 m. PROFIBUS-DP does not natively support intrinsically safe installations. More than 2500 PROFIBUS-compliant products are available, from which best-in-class devices can be selected to suit individual needs, with alternative sources usually available.
(b) PROFIBUS-PA. PROFIBUS-PA is the main emphasis for process automation, typically with MBP-IS transmission technology, the communications protocol version DP-V1, and the application profile PA devices. PROFIBUS-PA is a full-function Fieldbus that is generally used for process level instrumentation. PROFIBUS-PA communicates at 31.25 kbps and has a maximum distance of 1900 m per segment. PROFIBUS-PA is designed to support intrinsically safe applications. It is tailored to process automation requirements: it is of modular design and comprises the communication protocol PROFIBUS-DP, different transmission technologies, numerous application profiles, and structured device integration tools. Typical PROFIBUS-PA applications are formed by combining the modules suited for or required by the respective applications.
(c) PROFIBUS-FMS. PROFIBUS-FMS is designed for communication at the cell level according to the Fieldbus message specification. At this level programmable controllers (e.g., PLC and PC) communicate primarily with each other. In this application area a high degree of functionality is more important than fast system reaction times. FMS services are a subset of the MMS services (MMS = Manufacturing Message Specification, ISO 9506) which have been optimized for Fieldbus applications and to which functions for communication object administration and network management have been added. Execution of the FMS services via the bus is described by service sequences consisting of several interactions, which are called service primitives. Service primitives describe the interaction between requester and responder.
(d) PROFInet. PROFInet is the leading Industrial Ethernet standard for automation; it includes plant-wide Fieldbus communication and plant-to-office communication. PROFInet is designed to work from I/O to MES and can hence simultaneously handle standard Ethernet transmissions and real-time transmissions at 1 ms speeds. PROFInet embraces industry standards such as TCP/IP, XML, OPC, and ActiveX. Because of the integrated proxy technology, it connects other Fieldbuses in addition to PROFIBUS, thus protecting the existing investment in plant equipment and networks (whether PROFIBUS or another Fieldbus).
(e) Motion control with PROFIBUS. Motion control with PROFIBUS is the main emphasis for drive technology, using RS485 transmission technology, the communications protocol version DP-V2, and the application profile PROFIdrive. The demands of motion control propelled the implementation of functionalities such as clock cycle synchronization and slave-to-slave communication. Decentralized drive applications can be realized economically by means of intelligent drives, since PROFIBUS now also permits highly dynamic distribution of the technological signals among the drives.
(f) PROFI-safe. PROFI-safe is the main emphasis for safety-relevant applications (universal use for almost all industries), using RS485 or MBP-IS transmission technology, one of the available DP versions for communication, and the application profile PROFI-safe. PROFIBUS is the first Fieldbus to merge standard automation and safety automation in one technology, running on the same bus and using the same communication mechanisms, thus providing the highest efficiency to the user. This supports simple and cost-effective installation and operation.
3.2.1.3 Controller Area Network (CAN bus)
CAN is a serial bus system originally developed for automotive applications in the early 1980s. The CAN protocol was internationally standardized in 1993 as ISO 11898-1 and comprises the data link layer of the seven-layer ISO/OSI reference model. A CAN bus system can theoretically link up to 2032 devices (assuming one node with one identifier) on a single network. However, due to the practical limitation of the hardware (transceivers), it can only link up to 110 nodes (with the Philips 82C250) on a single network. It offers a high-speed communication rate of up to 1 Mbit/s, thus allowing real-time control. In addition, the error
confinement and error detection features make it more reliable in noise-critical environments. CAN bus systems provide the following:
(1) A multimaster hierarchy, which allows building intelligent and redundant systems. If one network node is defective, the network is still able to operate.
(2) Broadcast communication. A sender of information transmits to all devices on the bus. All receiving devices read the message and then decide whether it is relevant to them. This guarantees data integrity, as all devices in the system use the same information.
(3) Sophisticated error detecting mechanisms and retransmission of faulty messages. This also guarantees data integrity.
The CAN serial bus system is used in a broad range of embedded as well as automation control systems. It usually links two or more microcontroller-based physical devices. Original equipment manufacturers (OEMs) design embedded control systems; the end user has no or only some knowledge of the embedded network functions and is therefore not responsible for the CAN communication system. However, automation control systems are specified by the end user. The system design, including the CAN network services, may be implemented by the end users themselves or by a system house. The main CAN application fields include (1) passenger cars, (2) trucks and buses, (3) off-highway and off-road vehicles, (4) maritime electronics, (5) aircraft and aerospace electronics, (6) factory automation, (7) industrial machine control, (8) lifts and escalators, (9) building automation, (10) medical equipment and devices, (11) nonindustrial control, and (12) nonindustrial equipment.
(1) CAN basic working mechanism
(a) Principles of data exchange. When data are transmitted by CAN, no stations are addressed; instead, the content of the message (e.g., rpm or engine temperature) is designated by an identifier that is unique throughout the network. The identifier defines not only the content but also the priority of the message. This is important for bus allocation when several stations are competing for bus access. If the CPU of a given station wishes to send a message to one or more stations, it passes the data to be transmitted and their identifiers to the assigned CAN chip ("Make ready"). This is all the CPU has to do to initiate data exchange. The message is constructed and transmitted by the CAN chip. As soon as the CAN chip receives the bus allocation ("Send Message"), all other stations on the CAN network become receivers of this message ("Receive Message"). Each station
in the CAN network, having received the message correctly, performs an acceptance test to determine whether the data received are relevant for that station ("Select"). If the data are of significance for the station concerned, they are processed ("Accept"); otherwise they are ignored. Figure 3.11 illustrates this scenario.
Figure 3.11 Broadcast transmission and acceptance filtering by CAN nodes.
A high degree of system and configuration flexibility is achieved as a result of the content-oriented addressing scheme. It is very easy to add stations to an existing CAN network without making any hardware or software modifications to the existing stations, provided the new stations are purely receivers. Because the data transmission protocol does not require physical destination addresses for the individual components, it supports the concept of modular electronics and also permits multiple reception (broadcast, multicast) and the synchronization of distributed processes: measurements needed as information by several controllers can be transmitted via the network in such a way that it is unnecessary for each controller to have its own sensor.
(b) Nondestructive bitwise arbitration. For the data to be processed in real time, they must be transmitted rapidly. This not only requires a physical data transfer path of up to 1 Mbit/s but also calls for rapid bus allocation when several stations wish to send messages simultaneously (Fig. 3.12).
Figure 3.12 Principle of nondestructive bitwise arbitration.
In real-time processing the urgency of messages to be exchanged over the network can differ greatly: a rapidly changing quantity (e.g., engine load) has to be transmitted
more frequently and therefore with fewer delays than other quantities (e.g., engine temperature) which change relatively slowly. The priority at which a message is transmitted compared with another, less urgent message is specified by the identifier of the message concerned. The priorities are laid down during system design in the form of corresponding binary values and cannot be changed dynamically. The identifier with the lowest binary number has the highest priority.
Bus access conflicts are resolved by bitwise arbitration on the identifiers involved, with each station observing the bus level bit for bit. In accordance with the "wired-AND" mechanism, by which the dominant state (logical 0) overwrites the recessive state (logical 1), the competition for bus allocation is lost by all those stations with recessive transmission and dominant observation. All "losers" automatically become receivers of the message with the highest priority and do not reattempt transmission until the bus is available again.
(c) Destructive bus allocation. Simultaneous bus access by more than one station causes all transmission attempts to be aborted and, therefore, there is no successful bus allocation. More than one bus access may be necessary in order to allocate the bus at all, and the number of attempts before bus allocation succeeds is a purely statistical quantity (examples: CSMA/CD, Ethernet). In order to process all transmission requests of a CAN network while complying with latency constraints at as low a data transfer rate as possible, the CAN protocol must implement a bus allocation method that guarantees that there
is always unambiguous bus allocation even when there are simultaneous bus accesses from different stations. The method of bitwise arbitration using the identifier of the messages to be transmitted uniquely resolves any collision between a number of stations wanting to transmit, and it does this at the latest within 13 (standard format) or 33 (extended format) bit periods for any bus access period. Unlike the message-wise arbitration employed by the CSMA/CD method, this nondestructive method of conflict resolution ensures that no bus capacity is used without transmitting useful information. Even in situations where the bus is overloaded, the linkage of the bus access priority to the content of the message proves to be a beneficial system attribute compared with existing CSMA/CD or token protocols: in spite of the insufficient bus transport capacity, all outstanding transmission requests are processed in order of their importance to the overall system (as determined by the message priority). The available transmission capacity is utilized efficiently for the transmission of useful data since "gaps" in bus allocation are kept very small. The collapse of the whole transmission system due to overload, as can occur with the CSMA/CD protocol, is not possible with CAN. Thus, CAN permits implementation of fast, traffic-dependent bus access which is nondestructive because of bitwise arbitration based on the message priority employed. Nondestructive bus access can be further classified into centralized bus access control or decentralized bus access control depending on whether the control mechanisms are present in the system only once (centralized) or more than once (decentralized). A communication system with a designated station (inter alia for centralized bus access control) must provide a strategy to take effect in the event of a failure of the master station. This concept has the disadvantage that the strategy for failure management is difficult and costly to implement and also that the takeover of the central station by a redundant station can be very time consuming. For these reasons and to circumvent the problem of the reliability of the master station (and thus of the whole communication system), the CAN protocol implements decentralized bus control. All major communication mechanisms, including bus access control, are implemented several times in the system because this is the only way to fulfill the high requirements for the availability of the communication system.
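To make the arbitration rule concrete, the following minimal Python sketch (not taken from the text; the station identifiers and the wired-AND bus model are illustrative assumptions) replays a bus access conflict bit by bit and shows that the lowest identifier, that is, the highest-priority message, wins without any bus capacity being destroyed.

# Minimal simulation of nondestructive bitwise arbitration on a CAN bus.
# Assumptions: 11-bit identifiers, dominant = 0, recessive = 1, and a bus that
# behaves as a wired-AND of everything transmitted in the same bit slot.

def arbitrate(identifiers, bits=11):
    """Return the identifier that wins arbitration (the lowest, highest-priority one)."""
    contenders = set(identifiers)
    for bit in range(bits - 1, -1, -1):                 # the MSB is transmitted first
        sent = {ident: (ident >> bit) & 1 for ident in contenders}
        bus_level = min(sent.values())                  # dominant 0 overwrites recessive 1
        # Stations that sent recessive but observe dominant lose and become receivers.
        contenders = {i for i in contenders if sent[i] == bus_level}
    assert len(contenders) == 1, "identifiers on a CAN bus must be unique"
    return contenders.pop()

# Three stations start transmitting simultaneously; 0x123 has the lowest identifier.
print(hex(arbitrate([0x65A, 0x123, 0x124])))            # -> 0x123

The losing stations are not disturbed in any way; in a fuller model they would simply switch to receiving and retry once the bus becomes free again.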
In summary it can be said that CAN implements a traffic-dependent bus allocation system that permits, by means of a nondestructive bus access with decentralized bus access control, a high useful data rate at the lowest possible bus data rate in terms of the bus busy rate for all stations. The efficiency of the bus arbitration procedure is increased by the fact that the bus is utilized only by those stations with pending transmission requests. These requests are handled in the order of the importance of the messages for the system as a whole. This proves especially advantageous in overload situations. Since bus access is prioritized on the basis of the messages, it is possible to guarantee low individual latency times in real-time systems. (d) Message frame formats. The CAN protocol supports two message frame formats, the only essential difference being in the length of the identifier (ID). In the standard format the length of the ID is 11 bits, and in the extended format the length is 29 bits. The message frame for transmitting messages on the bus comprises seven main fields (Fig. 3.13). A message in the standard format begins with the start bit "start of frame," followed by the "arbitration field," which contains the identifier and the remote transmission request (RTR) bit, which indicates whether it is a data frame or a request frame without any data bytes (remote frame). The "control field" contains the IDE (identifier extension) bit, which indicates either standard format or extended format, a bit reserved for future extensions and, in the last 4 bits, a count of the data bytes in the data field. The "data field" ranges from 0 to 8 bytes in length and is followed by the "CRC field," which is used as a frame security check for detecting bit errors. The "ACK field" comprises the ACK slot (1 bit) and the ACK delimiter (1 recessive bit). The bit in the ACK slot is sent as a recessive bit and is overwritten as a dominant bit by those receivers which have at this time received the data correctly (positive acknowledgment). Correct messages are acknowledged by the receivers regardless of the result of the acceptance test. The end of the
Figure 3.13 Message frame for standard format (CAN Specification 2.0A): SOF | 11-bit identifier | RTR | IDE | r0 | DLC | data field (0-8 bytes) | 15-bit CRC | ACK field | end of frame | intermission | bus idle.
message is indicated by “end of frame.” “Intermission” is the minimum number of bit periods separating consecutive messages. If there is no following bus access by any station, the bus remains idle (“bus idle”). (e) Detecting and signaling errors. Unlike other bus systems, the CAN protocol does not use acknowledgment messages but instead signals any errors that occur. For error detection, the CAN protocol implements three mechanisms at the message level: (i) Cyclic redundancy check (CRC). The CRC safeguards the information in the frame by adding redundant check bits at the transmission end. At the receiver end these bits are recomputed and tested against the received bits. If they do not agree, there has been a CRC error. (ii) Frame check. This mechanism verifies the structure of the transmitted frame by checking the bit fields against the fixed format and the frame size. Errors detected by frame checks are designated “format errors.” (iii) ACK errors. As mentioned above, frames received are acknowledged by all recipients through positive acknowledgment. If no acknowledgment is received by the transmitter of the message (ACK error) this may mean that there is a transmission error which has been detected only by the recipients, that the ACK field has been corrupted, or that there are no receivers. The CAN protocol also implements two mechanisms for error detection at the bit level: (i) Monitoring. The ability of the transmitter to detect errors is based on the monitoring of bus signals: each node which transmits also observes the bus level and thus detects differences between the bit sent and the bit received. This permits reliable detection of all global errors and errors local to the transmitter. (ii) Bit stuffing. The coding of the individual bits is tested at bit level. The bit representation used by CAN is NRZ (nonreturn-to-zero) coding, which guarantees maximum efficiency in bit coding. The synchronization edges are generated by means of bit stuffing, that is, after five consecutive equal bits the sender inserts into the bit stream a stuff bit with the complementary value, which is removed by the receivers. The code check is limited to checking adherence to the stuffing rule. If one or more errors are discovered by at least one station (any station) using the above mechanisms, the
current transmission is aborted by sending an "error flag." This prevents other stations from accepting the message and thus ensures the consistency of data throughout the network. After transmission of an erroneous message has been aborted, the sender automatically reattempts transmission (automatic repeat request). There may again be competition for bus allocation. As a rule, retransmission will begin within 23 bit periods after error detection; in special cases the system recovery time is 31 bit periods. However effective and efficient the method described may be, in the event of a defective station it might lead to all messages (including correct ones) being aborted, thus blocking the bus system if no measures for self-monitoring were taken. The CAN protocol, therefore, provides a mechanism for distinguishing sporadic errors from permanent errors and localizing station failures (fault confinement). This is done by statistical assessment of station error situations with the aim of recognizing a station's own defects and possibly entering an operating mode where the rest of the CAN network is not negatively affected. This may go as far as the station switching itself off to prevent messages erroneously recognized as incorrect from being aborted. (f) Extended format CAN messages. The SAE "Truck and Bus" subcommittee standardized signals and messages as well as data transmission protocols for various data rates. It became apparent that standardization of this kind is easier to implement when a longer identification field is available. To support these efforts, the CAN protocol was extended by the introduction of a 29-bit identifier. This identifier is made up of the existing 11-bit identifier (base ID) and an 18-bit extension (ID extension). Thus, the CAN protocol allows the use of two message formats: StandardCAN (Version 2.0A) and ExtendedCAN (Version 2.0B). As the two formats have to coexist on one bus, it is laid down which message has higher priority on the bus in the case of bus access collisions with differing formats and the same base identifier: the message in standard format always has priority over the message in extended format. CAN controllers that support the messages in extended format can also send and receive messages in standard format.
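As a small illustration of the bit-stuffing rule described under (e) above, the following Python sketch (an assumption-laden toy, not an excerpt from any CAN controller implementation) inserts a complementary stuff bit after five equal consecutive bits on the transmit side and removes it again on the receive side.

# Toy model of CAN bit stuffing: after five consecutive bits of equal value the
# transmitter inserts one complementary stuff bit; the receiver removes it again.

def stuff(bits):
    """Apply the stuffing rule to a list of 0/1 bits (transmitter side)."""
    wire, run_value, run_len = [], None, 0
    for b in bits:
        wire.append(b)
        run_value, run_len = (b, run_len + 1) if b == run_value else (b, 1)
        if run_len == 5:
            wire.append(1 - b)                  # stuff bit with the complementary value
            run_value, run_len = 1 - b, 1       # the stuff bit starts a new run on the wire
    return wire

def unstuff(wire):
    """Remove stuff bits from the received bit stream (receiver side)."""
    out, run_value, run_len, skip_next = [], None, 0, False
    for b in wire:
        if skip_next:                           # this bit is a stuff bit: drop it
            run_value, run_len, skip_next = b, 1, False
            continue
        out.append(b)
        run_value, run_len = (b, run_len + 1) if b == run_value else (b, 1)
        if run_len == 5:
            skip_next = True
    return out

raw = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0]
assert unstuff(stuff(raw)) == raw               # stuffing is transparent to the receiver

The extra edges created by the stuff bits are what the receivers use for resynchronization, which is why the rule is enforced on the wire rather than on the user data.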
When CAN controllers that only cover the standard format (Version 2.0A) are used in one network, then only messages in standard format can be transmitted on the entire network. Messages in extended format would be misunderstood. However, there are CAN controllers that support only standard format but recognize messages in extended format and ignore them (Version 2.0B passive). The distinction between standard format and extended format is made using the IDE bit (Identifier Extension Bit) which is transmitted as dominant in the case of a frame in standard format. For frames in extended format it is recessive. The RTR bit is transmitted dominant or recessive depending on whether data are being transmitted or whether a specific message is being requested from a station. In place of the RTR bit in standard format the substitute remote request (SRR) bit is transmitted for frames with extended ID. The SRR bit is always transmitted as recessive, to ensure that in the case of arbitration the standard frame always has priority bus allocation over an extended frame when both messages have the same base identifier. Unlike the standard format, in the extended format the IDE bit is followed by the 18-bit ID extension, the RTR bit, and a reserved bit (r1). All the following fields are identical with standard format. Conformity between the two formats is ensured by the fact that the CAN controllers which support the extended format can also communicate in standard format. (g) Implementations of the CAN protocol. Communication is identical for all implementations of the CAN protocol. There are differences, however, with regard to the extent to which the implementation takes over message transmission from the microcontrollers which follow it in the circuit. CAN controllers with intermediate buffer (formerly called basicCAN chips) have implemented as hardware the logic necessary to create and verify the bit stream according to protocol. However, the administration of data sets to be sent and received, acceptance filtering in particular, is carried out to only a limited extent by the CAN controller. Typically, CAN controllers with intermediate buffer have two reception and one transmission buffer. The 8-bit code and mask registers allow a limited acceptance filtering (8 MSB of the identifier). Suitable choice of these register values allows groups of identifiers or in borderline cases all IDs to be selected. If more than the 8 ID-MSBs are necessary to
differentiate between messages, then the microcontroller following the CAN controller in the circuit must complement acceptance filtering by software. CAN controllers with intermediate buffer may place a strain on the microcontroller with the acceptance filtering, but they require only a small chip area and can therefore be produced at lower cost. In principle they can accept all objects in a CAN network. CAN objects consist mainly of three components: identifier, data length code, and the actual useful data. CAN controllers with object storage (formerly called FullCAN) function like CAN controllers with intermediate buffers, but also administer certain objects. Where there are several simultaneous requests they determine, for example, which object is to be transmitted first. They also carry out acceptance filtering for incoming objects. The interface to the following microcontroller corresponds to a RAM. Data to be transmitted are written into the appropriate RAM area, and data received are read out correspondingly. The microcontroller has to administer only a few bits (e.g., transmission request). CAN controllers with object storage are designed to take as much strain as possible off the local microcontroller. These CAN controllers require a greater chip area, however, and are therefore more expensive. In addition to this, they can only administer a limited number of objects. CAN controllers are now available which combine both principles of implementation. They have object storage, at least one of which is designed as an intermediate buffer. For this reason there is no longer any point in differentiating between basicCAN and fullCAN. As well as CAN controllers which support all functions of the CAN protocol, there are also CAN chips which do not require a following microcontroller. These CAN chips are called serial linked I/O (SLIO) chips. SLIO chips are CAN slaves and have to be administered by a CAN master. (2) CAN physical layer. (a) Physical CAN connection. Data rates (up to 1 Mbit/s) necessitate a sufficiently steep pulse slope, which can be implemented only by using power elements. A number of physical connections are basically possible. However, the users and manufacturers group, CAN in Automation, recommends the use of driver circuits in accordance with ISO 11898. Integrated driver chips in accordance with ISO 11898 are available from several companies (Bosch, Philips, Siliconix,
and Texas Instruments). The international users and manufacturers group, CAN in Automation (CiA), also specifies several mechanical connections (cable and connectors) (Fig. 3.14). (b) Physical media. The basis for transmitting CAN messages and for competing for bus access is the ability to represent a dominant and a recessive bit value. This is possible for electrical and optical media so far. For electrical media the differential output bus voltages are defined in ISO 11898-2 and ISO 11898-3, in SAE J2411, and ISO 11992. With optical media the recessive level is represented by “dark” and the dominant level by “light.” The physical medium most commonly used to implement CAN networks is a differentially driven pair of wires with common return. For vehicle body electronics, single wire bus lines are also used. Some efforts have been made to develop a solution for the transmission of CAN signals on the same line as the power supply. The parameters of the electrical medium become important when the bus length is increased. Signal propagation,
Figure 3.14 Physical CAN connection according to ISO 11898.
the line resistance, and wire cross-sections are factors when dimensioning a network. In order to achieve the highest possible bit rate at a given length, a high signal speed is required. For long bus lines the voltage drops over the length of the bus line. The necessary wire cross-section is calculated from the permissible voltage drop of the signal level between the two nodes farthest apart in the system and the overall input resistance of all connected receivers. The permissible voltage drop must be such that the signal level can be reliably interpreted at any receiving node. (c) Network topology. Electrical signals on the bus are reflected at the ends of the electrical line unless measures against that have been taken. For the node to read the bus level correctly, it is important that signal reflections are avoided. This is done by terminating the bus line with a termination resistor at both ends of the bus and by avoiding unnecessarily long stub lines of the bus. The highest possible product of transmission rate and bus line length is achieved by keeping as close as possible to a single line structure and by terminating both ends of the line. Specific recommendations for this can be found in the applicable standards (i.e., ISO 11898-2 and -3). It is possible to overcome the limitations of the basic line topology by using repeaters, bridges, or gateways. A repeater transfers an electrical signal from one physical bus segment to another segment. The signal is only refreshed and the repeater can be regarded as a passive component comparable to a cable. The repeater divides a bus into two physically independent segments. This causes an additional signal propagation time. However, it is logically just one bus system. A bridge connects two logically separated networks on the data link layer (OSI Layer 2), so that the CAN identifiers need only be unique within each of the two bus systems. Bridges implement a storage function and can forward messages or parts thereof in an independent time-delayed transmission. Bridges differ from repeaters in that they forward only messages which are not local, whereas repeaters forward all electrical signals including the CAN identifier. A gateway provides the connection of networks with different higher layer protocols. It therefore performs the translation of protocol data between two communication systems. This translation takes place on the application layer (OSI Layer 7). (d) Bus access. For the connection between a CAN controller chip and a two-wire differential bus, a variety of CAN transceiver chips according to different physical layer standards are available (ISO 11898-2 and -3, etc.).
This interface basically consists of a transmitting amplifier and a receiving amplifier (transceiver = transmitter and receiver). Aside from the adaptation of the signal representation between chip and bus medium, the transceiver has to meet a series of additional requirements. As a transmitter it provides sufficient driver output capacity and protects the on-controller-chip driver against overloading. It also reduces electromagnetic radiation. As a receiver the CAN transceiver provides a defined recessive signal level and protects the on-controller-chip input comparator against overvoltages on the bus lines. It also extends the common mode range of the input comparator in the CAN controller and provides sufficient input sensitivity. Furthermore, it detects bus errors such as line breakage, short circuits, shorts to ground, etc. A further function of the transceiver can also be the galvanic isolation of a CAN node and the bus line. (e) Physical CAN protocols. The CAN protocol defines the data link layer and part of the physical layer in the OSI model, which consists of seven layers. The International Standards Organization (ISO) defined a standard which incorporates the CAN specifications as well as a part of the physical layer: the physical signaling, which comprises bit encoding and decoding (non-return-to-zero, NRZ) as well as bit timing and synchronization. (i) Bit encoding. In the chosen NRZ bit coding the signal level remains constant over the bit time and thus just one time slot is required for the representation of a bit (other methods of bit encoding are, e.g., Manchester or pulse-width modulation). The signal level can remain constant over a longer period of time; therefore, measures must be taken to ensure that the maximum permissible interval between two signal edges is not exceeded. This is important for synchronization purposes. Bit stuffing is applied by inserting a complementary bit after five bits of equal value. Of course the receiver has to unstuff the stuff bits so that the original data content is processed. (ii) Bit timing and synchronization. On the bit level (OSI level one, physical layer), CAN uses synchronous bit transmission. This enhances the transmitting capacity but also means that a sophisticated method of bit synchronization is required. While bit synchronization in a character-oriented transmission (asynchronous) is performed upon the reception of the start bit available with each character, in a synchronous transmission protocol there is just one start bit available at the beginning of a frame.
To enable the receiver to read the messages correctly, continuous resynchronization is required. Phase buffer segments are, therefore, inserted before and after the nominal sample point within a bit interval. The CAN protocol regulates bus access by bitwise arbitration. The signal propagation from sender to receiver and back to the sender must be completed within one bit time. For synchronization purposes, a further time segment, the propagation delay segment, is needed in addition to the time reserved for synchronization, the phase buffer segments. The propagation delay segment takes into account the signal propagation on the bus as well as signal delays caused by transmitting and receiving nodes. Two types of synchronization are distinguished: hard synchronization at the start of a frame and resynchronization within a frame. After a hard synchronization the bit time is restarted at the end of the sync segment. Therefore, the edge which caused the hard synchronization lies within the sync segment of the restarted bit time. Resynchronization shortens or lengthens the bit time so that the sample point is shifted according to the detected edge. (iii) Interdependency of data rate and bus length. Depending on the size of the propagation delay segment, the maximum possible bus length at a specific data rate (or the maximum possible data rate at a specific bus length) can be determined. The signal propagation time is determined by the two nodes within the system that are farthest apart from each other. It is the time it takes a signal to travel from one node to the node farthest away (taking into account the delays caused by the transmitting and receiving nodes and by synchronization) plus the time for the signal from that second node to travel back to the first one. Only then can the first node decide whether its own signal level (recessive in this case) is the actual level on the bus or whether it has been replaced by the dominant level by another node. This fact is important for bus arbitration. (3) CAN application layer protocols. In the CAN world there are different standardized application layer protocols. Some are very specific and related to specific application fields. Examples of CAN-based application layer protocols are given below: (a) CANopen. CANopen is a CAN-based higher layer protocol. It was developed as a standardized embedded network with
highly flexible configuration capabilities. CANopen was predeveloped in an Esprit project under the chairmanship of Bosch. In 1995, the CANopen specification was handed over to the CAN in Automation (CiA) international users' and manufacturers' group. Originally, the CANopen communication profile was based on the CAN Application Layer (CAL) protocol. Version 4 of CANopen (CiA DS 301) is standardized as EN 50325-4. The CANopen specifications cover the application layer and communication profile (CiA DS 301), as well as a framework for programmable devices (CiA 302), recommendations for cables and connectors (CiA 303-1), and SI units and prefix representations (CiA 303-2). The application layer as well as the CAN-based profiles is implemented in software. Standardized profiles (device, interface, and application profiles) developed by CiA members simplify the system design job of integrating a CANopen network system. Off-the-shelf devices, tools, and protocol stacks are widely available at reasonable prices. For system designers, it is very important to reuse application software. This requires not only communication compatibility, but also interoperability and interchangeability of devices. In the CANopen device and interface profiles, defined application objects exist to achieve the interchangeability of CANopen devices. CANopen is flexible and open enough to enable manufacturer-specific functionality in devices, which can be added to the generic functionality described in the profiles. CANopen unburdens the developer from dealing with CAN-specific details such as bit-timing and implementation-specific functions. It provides standardized communication objects for real-time data (Process Data Objects, PDO), configuration data (Service Data Objects, SDO), and special functions (Time Stamp, Sync message, and Emergency message) as well as network management data (Boot-up message, NMT message, and Error Control). (b) CAN Kingdom. CAN Kingdom unleashes the full power of CAN. It gives system designers maximum freedom to create their own systems: they are not bound to the CSMA/AMP multimaster protocol of CAN but can create systems using virtually any type of bus management and topology. CAN Kingdom opens the possibility for a module designer to design general modules without knowing which system they will finally be integrated into and what type of higher layer CAN protocol it will have. As the system designer can allow
only specific modules to be used in the system, the cost advantage of an open system can be combined with the security of a proprietary system. Since the identifier in a CAN message not only identifies the message but also governs the bus access, a key factor is the enumeration of the messages. Another important factor is to ensure that the data structure in the data field is the same in both the transmitting and receiving modules. By adopting a few simple design rules these factors can be fully controlled and communication optimized for any system. This is done during a short setup phase at the initialization of the system. It is even possible to include modules that do not follow the CAN Kingdom rules in a CAN Kingdom system. CAN Kingdom also enforces consistent documentation of modules and systems. (c) DeviceNet. DeviceNet is a low-cost communications link to connect industrial devices (such as limit switches, photoelectric sensors, valve manifolds, motor starters, process sensors, bar code readers, variable frequency drives, panel displays, and operator interfaces) to a network and eliminate expensive hard wiring. The direct connectivity provides improved communication between devices as well as important device-level diagnostics not easily accessible or available through hard-wired I/O interfaces. DeviceNet is a simple networking solution that reduces the cost and time to wire and install factory automation devices, while providing interchangeability of "like" components from multiple vendors. DeviceNet specifications have been developed by the Open DeviceNet Vendor Association (ODVA) and are internationally standardized. Buyers of the DeviceNet Specification receive an unlimited, royalty-free license to develop DeviceNet products. (d) J1939-based higher layer protocols. A J1939 network connects electronic control units (ECU) within a truck and trailer system. The J1939 specification, with its engine, transmission, and brake message definitions, is dedicated to diesel engine applications. It is supposed to replace J1587/J1708 networks. Other industries adopted the general J1939 communication functions, in particular the J1939/21 and J1939/31 protocol definitions, which are required for any J1939-compatible system. They added other physical layers and defined other application parameters. The ISO standardized the J1939-based truck and trailer communication (ISO 11992) and the J1939-based communication for agriculture
and forestry vehicles (ISO 11783). The National Marine Electronics Association (NMEA) specified the J1939-based communication for navigation systems in marine applications (NMEA 2000). One reason for the incorporation of J1939 specifications into others is the fact that it makes no sense to reinvent the basic communication services. An industry-specific document defines the particular combination of layers for that industry. CiA has developed several CANopen interface profiles for J1939-based networks (CiA DSP 413). Gateways are defined according to ISO 11992-2 and ISO 11992-3. In addition, the CANopen profile family includes a framework for gateways according to SAE J1939/71. (4) CAN standards. The original specification is the Bosch specification. Version 2.0 of this specification is divided into two parts: (a) Standard CAN (Version 2.0A). Uses 11 bit identifiers. (b) Extended CAN (Version 2.0B). Uses 29 bit identifiers. The two parts define different formats of the message frame, with the main difference being the identifier length. There are two ISO standards for CAN. The difference is in the physical layer, where ISO 11898 handles high speed applications up to 1 Mbit/s. ISO 11519 has an upper limit of 125 kbit/s. (a) Part A and Part B compatibility. There are three types of CAN controllers: Part A, Part B passive, and Part B (Table 3.1). They are able to handle the different parts of the standard as follows: Most 2.0A controllers transmit and receive only standard format messages, although some (known as 2.0B passive) will receive extended format messages but then ignore them. 2.0B controllers can send and receive messages in both formats. Note that if 29 bit identifiers are used on a bus that contains Part A controllers, the bus will not work! (b) CAN bus physical layer. The physical layer is not part of the Bosch CAN standard. However, in the ISO standards transceiver characteristics are included.
Table 3.1 CAN Part A and Part B Compatibility

Message Format    Part A     Part B Passive                        Part B
11 bit ID         OK         OK                                    OK
29 bit ID         Error!     Tolerated on the bus, but ignored     OK
CAN transmits signals on the CAN bus, which consists of two wires, CAN-High and CAN-Low. These two wires operate in differential mode, that is, they carry inverted voltages (to decrease noise interference). The voltage levels, as well as other characteristics of the physical layer, depend on which standard is being used. (i) ISO 11898. The voltage levels for a CAN network which follows the ISO 11898 (CAN High Speed) standard are described in Table 3.2. Note that for the recessive state, the nominal voltage for the two wires is the same. This decreases the power drawn from the nodes through the termination resistors. These resistors are 120 Ω and are located on each end of the wires. Some designs place a single central termination resistor in one place on the bus; this is not recommended, since that configuration will not prevent reflection problems. (ii) ISO 11519. The voltage levels for a CAN network which follows the ISO 11519 (CAN Low Speed) standard are described in Table 3.3. ISO 11519 does not require termination resistors. These are not necessary because the limited bit rates (maximum 125 kbit/s) make the bus insensitive to reflections. The voltage level on the CAN bus is recessive when the bus is idle.
Table 3.2 ISO 11898 Parameters for CAN

            Recessive State (V)          Dominant State (V)
Signal      Min    Nominal    Max        Min     Nominal    Max
CAN-High    2.0    2.5        3.0        2.75    3.5        4.5
CAN-Low     2.0    2.5        3.0        0.5     1.5        2.25

Table 3.3 ISO 11519 Parameters for CAN

            Recessive State (V)          Dominant State (V)
Signal      Min    Nominal    Max        Min     Nominal    Max
CAN-High    1.6    1.75       1.9        3.85    4.0        5.0
CAN-Low     3.1    3.25       3.4        0       1.0        1.15
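Purely as an illustration of how a receiver tells the two bus states apart, the following Python sketch maps the nominal differential voltages from Table 3.2 onto bus levels; the 0.9 V decision threshold used here is an assumption typical of high-speed CAN receivers, not a value stated in the text.

# Map CAN_H/CAN_L voltages onto the dominant/recessive bus level.
# Nominal ISO 11898 values (Table 3.2): dominant CAN_H = 3.5 V, CAN_L = 1.5 V;
# recessive CAN_H = CAN_L = 2.5 V. The 0.9 V threshold is an illustrative assumption.

def bus_level(v_can_h, v_can_l, threshold=0.9):
    """Return 'dominant' (logical 0) or 'recessive' (logical 1)."""
    return "dominant" if (v_can_h - v_can_l) >= threshold else "recessive"

print(bus_level(3.5, 1.5))    # nominal dominant levels  -> dominant
print(bus_level(2.5, 2.5))    # nominal recessive levels -> recessive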
(iii) Bus lengths. The maximum bus length for a CAN network depends on the bit rate used. It is required that the wavefront of the bit signal has time to travel to the most remote node and back again before the bit is sampled. This means that if the bus length is near the maximum for the bit rate used, one should choose the sampling point with utmost care; on the other hand, one should always do that! Table 3.4 gives the different bus lengths and the corresponding maximum bit rates. (iv) Cable. According to the ISO 11898 standard, the impedance of the cable shall be 120 ± 12 Ω. It should be twisted pair, shielded or unshielded. Work is in progress on the single-wire standard SAE J2411.
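The round-trip rule stated under (iii) can be turned into a rough estimate of the attainable bus length at a given bit rate. The Python sketch below is only an order-of-magnitude model: the cable propagation delay of about 5 ns/m and the assumed 250 ns delay per node are illustrative assumptions and not values taken from the text, so the results merely approximate the figures in Table 3.4 below.

# Rough estimate of the maximum CAN bus length for a given bit rate: the signal
# must be able to travel to the farthest node and back within one bit time.
# The cable propagation delay (~5 ns/m) and the per-node delay are assumptions.

def max_bus_length_m(bit_rate_bit_s, cable_delay_ns_per_m=5.0, node_delay_ns=250.0):
    bit_time_ns = 1e9 / bit_rate_bit_s
    cable_budget_ns = bit_time_ns - 2 * node_delay_ns      # round trip through both nodes
    return max(cable_budget_ns, 0.0) / (2 * cable_delay_ns_per_m)

for rate in (1_000_000, 500_000, 250_000, 125_000):
    print(f"{rate // 1000} kbit/s -> roughly {max_bus_length_m(rate):.0f} m")
# Same order of magnitude as Table 3.4; the tabulated values are more conservative.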
Table 3.4 CAN Bus Length

Bus Length    Maximum Bit Rate
40 m          1 Mbit/s
100 m         500 kbit/s
200 m         250 kbit/s
500 m         125 kbit/s
6 km          10 kbit/s

3.2.1.4 Interbus

Interbus was one of the very first Fieldbuses to achieve widespread popularity. It continues to be popular because of its versatility, speed, diagnostic and autoaddressing capabilities. Physically, it has the appearance of being a typical line-and-drop-based network, but in reality it is a serial ring shift register. Each slave node has two connectors, one which receives data and one which passes data onto the next slave. Interbus technology provides an open Fieldbus system, which embraces all the process I/Os required for almost any control system. Interbus is able to fulfill essential requirements of high-performance control concepts, as it is (1) a cost-effective solution with bus systems, which transmits data serially and reduces the amount of parallel cabling required; (2) an open and manufacturer-independent networking system, which can be easily connected with existing control systems; (3) flexible with regard to future modifications or expansions.
With its special features and an extensive product range, Interbus has established itself successfully in all sectors of industry. Its traditional field of application is the automotive industry, but Interbus is also increasingly being used as an automation solution in other areas such as materials handling and conveying, the paper and print industry, the food and beverage industry, building automation, the wood-processing industry, assembly and robotics applications, general mechanical engineering, and, more recently, in process engineering. In addition to standard applications for connecting a large number of sensors and actuators in the field to the higher level control system via a serial bus system, Interbus can also be used to fulfill a variety of special application requirements such as (1) synchronously driving a control loop application in a mill train and (2) alternating and changing bus configurations in a machining center. (1) Operation mechanism. Interbus works with a master/slave access method, in which the master also establishes the connection to the higher level control or bus system. In terms of topology, Interbus is a ring system with an active connection to communication devices. Starting at the Interbus master, the controller board, all devices are actively connected on the ring system. Each Interbus device (slave) has two separate lines for data transmission: one for forward data transfer and one for return data transfer. This eliminates the need for a return line from the last to the first device, necessary in a simple ring system. The forward and return lines run in one cable. From the installation point of view, Interbus is similar to bus or linear structures, as only one bus cable connects one device with the next. To enable the structuring of an Interbus system, subring systems (bus segments) can be formed on the main ring, the source of which is the master. These subring systems are connected with bus couplers (also known as bus terminal modules). Figure 3.15 illustrates the basic structure of an Interbus system with one main ring and two subring systems. The remote bus is installed from the controller board. Remote bus devices and bus couplers are connected to the remote bus. Each bus coupler connects the remote bus with a subring system. There are two different types of subring system, which are available in different installation versions: (a) The local bus (formerly known as the I/O bus) is responsible for local management, connects local bus devices, and is typically used to form local I/O compact stations, for example, in the control cabinet. It is also available as a robust version for direct mounting on machines and systems.
Figure 3.15 Basic structure of an Interbus system (256 remote bus devices maximum, 4096 I/O points maximum, 512 devices in total; remote bus segments up to 400 m).
(b) The remote bus branch connects remote bus devices and connects distributed devices over large distances. Remote bus branches can be used to set up complex network topologies, which are ideal for complex technical processes distributed over large distances. The Interbus remote bus cable forms an RS-485 connection and, because of the ring structure and the additional need for an equalizing conductor between two remote bus devices, it requires five wires. Due to the different physical transmission methods, the local bus is available with nine wires and TTL levels for short distances (up to 1.5 m) and as a two-wire cable with a TTY-based current interface for medium distances (up to 10 m). Due to the integrated amplifier function in each remote bus device, the total expansion of the Interbus system can reach 13 km. To ensure that the system is easy to operate, the number of Interbus devices is limited to a maximum of 512.
Interbus works as a shift register, which is distributed across all bus devices and uses the I/O-based summation frame method for data transmission. Each bus device has data memories, which are combined via the ring connection of the bus system to form a large shift register. Figure 3.16 illustrates the data transmission principle. A data packet in the summation frame is made available in the send shift register by the master. The data packet contains all data that is to be transmitted to the bus devices (OUT data). The corresponding data registers in the bus devices contain the data to be transmitted to the master (IN data) (Fig. 3.16a). The OUT data is now transferred from the master to the devices and the IN data is transferred from the devices to the master in one data cycle. The master starts by sending the loop-back word through the ring. At the end of the data cycle,
Figure 3.16 Principle of data transmission on Interbus: (a) distribution of data before a data cycle and (b) distribution of data after a data cycle.
the master receives the loop-back word. The loop-back word "pulls" the OUT data along behind it while "pushing" the IN data along in front of it. This is called full duplex data transmission (Fig. 3.16b). The devices do not have to be addressed explicitly as the physical position of a device in the ring is known and the master can position the information to be transmitted at this point in the summation frame telegram. In the example, the first data word after the loop-back word is addressed to slave 4. The amount of user data to be pushed through the ring corresponds to the total data length of all bus devices. The bus couplers are integrated into the ring but do not provide any user data. Data widths between 1 bit and 64 bytes per data direction are permitted in one Interbus device. Unlike local bus segments, whose components only really differ in terms of installation technology, Interbus loop (sensor loop, IP 65 local bus) offers a new physical transmission method. The individual devices are connected via a simple two-wire unshielded cable to form a ring. The data and the 24 V power supply for up to 32 sensors are also supplied via the cable. Figure 3.17 shows the configuration of an Interbus loop segment. Data is transmitted as load-independent current signals, which have a higher level of immunity to interference than the voltage signals normally used. The data to be transmitted is modulated using Manchester code on the 24 V supply voltage (Interbus usually uses the NRZ code). The physical bus
Figure 3.17 Interbus loop segment (UL, power supply for the bus logic; US, power supply for the Interbus loop; UA, local power supply for actuators).
characteristics are converted by an appropriate bus terminal module, which can be connected to the Interbus ring at any point in a remote bus segment. One of the main fields of application of Interbus loop is the connection of individual devices with IP 65 and IP 54 connections directly in the system. An extensive range of functions and devices is available as bus devices. The Interbus protocol is not converted in any way in an Interbus loop, which means that complex gateways are not required and an Interbus loop segment can be used in conjunction with any other type of Interbus device. Data scanning is absolutely synchronous in all parts of the Interbus system. Despite this, the high scanning speed is maintained. An Interbus system is configured by connecting the bus devices one after the other in a ring. Bus couplers segment the ring according to the application requirements. With Interbus G4 (Generation 4) and later, it is possible to set up complex network topologies, which can be optimized for the structure of the automation system, by integrating bus couplers with an additional bus connection. There are two ways of structuring the configuration of this type of Interbus network: (i) divide the entire network into various levels; (ii) assign segment-specific device numbers. Both configuration methods are explained using the example of an Interbus network configuration with four levels, as illustrated in Fig. 3.18. The network is split into four different levels starting with the bus master on the main remote bus line as the first level. The branching secondary lines are now assigned a second level. The devices connected to these lines can form additional substructures, etc. In this way, a nesting depth of up to 16 levels can be achieved. The sequence is such that a local bus (formerly known as the I/O bus) in a remote bus segment is always assigned to the next level. Segment-specific device numbers are assigned either automatically according to the physical configuration or they can be freely specified by the user. The numbering comprises two components:
Device number = <bus segment number>.<device position within the segment>
Figure 3.18 Interbus network configuration with four levels (BK: bus coupler with an additional remote bus branch, e.g., IBS ST 24 BK RB-T).
modules) connected downstream of the remote device, for example, 1.1. Bus couplers with an additional remote bus branch appear as two separate remote bus devices with one local bus/remote bus branch, for example, bus coupler 1.0/2.0. When physically assigning this type of remote bus device, the remote bus branch is assigned the next consecutive number, for example, 3.0. Any additional subbranches on this branch are assigned the next consecutive number, for example, 4.0, 5.0, etc. The outgoing remote bus from the branch is counted as the last component, for example, 8.0. Device numbering is a structuring tool and should not be confused with device addressing. Although the device numbers can be used for addressing purposes, this is not absolutely necessary. (2) Interbus system devices (a) Protocol chip. The most important element in the electrical configuration of an Interbus device is the Interbus protocol chip, which manages the complete summation frame protocol and provides the physical interface to the Interbus ring. The bus master and Interbus slave devices use different protocol chips according to their function in the Interbus
system. Hardware solutions tailored to meet specific technical requirements are available for both Interbus master and slave solutions. The parts of the Interbus protocol that correspond to layers 1 and 2 of the OSI reference model are processed entirely in the protocol machine. This means that basic devices require no additional software or processing power. The protocol machine also provides physical access to the incoming (IB IN) and outgoing (IB OUT) Interbus interface. Both shift registers, the ID register and the data register, operate as send and receive buffers in the ID and data cycle. The application and/or higher protocol layers can access this buffer via the multifunction pin (MFP) interface. The MFP interface can be set according to interface requirements. The data registers can be expanded with external registers (ToExR, FromExR). The Interbus register chip SRE 1, which, if required, can expand the shift register width of an Interbus device to 64 bytes, is used for register expansion. By default, the register width of the SUPI 3 is 8 bytes. The diagnostic and report manager constantly monitors the operation (on-chip diagnostics). Any error descriptions that are received, such as CRC errors, transient loss of medium, voltage dips, etc., are saved to the ID send buffer and can be read from there by the master at any time. This means that unique error locations can even be identified for sporadic errors that are difficult to diagnose. The Interbus slave chip enables all Interbus device variants for the remote and local bus to be implemented, with the exception of those for Interbus loop. Interbus loop also works with the Interbus protocol but uses a different physical transmission medium, which requires a protocol chip with a matching physical interface. The standard master protocol chip for Interbus masters is the IPMS microcontroller. The IPMS is designed to work with a wide range of different processors. The master chip is often used together with the Motorola CPU 68332. The master firmware, which manages the Interbus functions, is stored in the EPROM. Only actual bit transmission (Layer 1, parts of Layer 2) takes place via the IPMS. The IPMS is connected to the relevant host system via a shared memory area, which, in its simplest format, is a Dual Port Memory (DPM) or a Multiport
Memory (MPM). Interbus masters with IPMS are available in various formats depending on the functions required. (b) Local bus devices. An I/O bus interface for the two-wire protocol based on the SUPI 3 is one example of an interface used to configure an ST local bus. The ST local bus operates with four transmission signals, which, due to the ring format of Interbus, are available twice at the incoming and outgoing local bus interface as IN and OUT signal lines. In addition, one incoming and one outgoing reset line are also available in the local bus. The bus signals can be connected directly to the local bus connectors, as the SUPI 3 meets the Interbus specification for the local bus even without external drivers and receivers. (c) Remote bus devices. If it is used as a remote bus device, the drivers and receivers required for differential signal transmission in accordance with RS-485 must be added to the SUPI. On the remote bus, transmission takes place via two twisted pair cables (DO+/DO–, DI+/DI–). Unlike the local bus, remote bus devices require a dedicated power supply for the device logic, as this is no longer provided via the bus cable. (d) Interbus loop devices. Although Interbus loop devices also operate with the standardized Interbus protocol, they do not transmit voltage signals in accordance with RS-485, as is usually the case on Interbus. Instead, they use load-independent current signals and Manchester coding to transmit the data and supply voltage on one and the same bus line (loop). Due to the different physical transmission medium, a special protocol chip, the IBS LPC, is available for Interbus loop. This chip is an ASIC with approximately 7000 gate equivalents and is supplied in a QFP-44 package. Special loop diagnostics are integrated into the LPC 2 to extend the familiar diagnostic functions of the SUPI 3. (3) Protocol structure. The Interbus protocol, which has been optimized specifically for the requirements of automation technology, transmits single-bit data from limit switches or to switching devices (process data) and complex programs or data records to intelligent field devices (parameter data) with the same level of efficiency and safety. Process data is transmitted in the fixed and cyclic time slot in real-time conditions, while parameter data comprises the acyclic transmission of larger volumes of data as and when required. The continuity of an Interbus network for very different tasks within an automation system, ensured in essence by the standard protocol, is supported by additional measures: (1) the adaptation of the physical transmission method
"downward," making it easy to install and connect individual sensors and actuators; (2) the provision of "upward" interface couplers to connect Interbus networks directly with factory and/or company networks (Ethernet networks); (3) the guarantee of easy configuration, project planning, and diagnostics with uniform software tools. The Interbus protocol is based on the OSI reference model and for reasons of efficiency only takes into account layers 1, 2, and 7 (Fig. 3.19). Certain functions from layers 3 to 6 have been included in application Layer 7. The physical layer (Layer 1) defines both the time conditions (such as the baud rate, permissible jitter, etc.) and the formats for encoding information. The data link layer (Layer 2) ensures data integrity and manages cyclic data transfer via the bus using the summation frame protocol. The transmission methods and protocols on layers 1 and 2 can be found in DIN 19 258. Following on from the data link layer, data access to the Interbus devices takes place in the application layer as required via two different data channels: (a) The process data channel serves the primary use of Interbus as a sensor and actuator bus. The cyclic exchange of I/O data between the higher level control system and the connected sensors/actuators takes place via this channel. (b) The parameter channel supplements cyclic data exchange with individual I/O points in connection-oriented message exchange. This type of communication requires additional data packing, as large volumes of information are being exchanged between the individual communication partners. Data is transmitted using communication services based on the client/server model. Interbus devices almost always have one process data channel. A parameter channel can also be fitted as an optional extra.
Figure 3.19 Interbus protocol structure.
During operation, an Interbus system requires settings to be made and provides a wide range of diagnostic information. This information is processed by the network management on each layer. More detailed information about readiness for operation, error states, and statistical data can also be accessed and evaluated, and network configuration settings can be made. The hybrid protocol structure of Interbus for the two different data classes (process data and parameter data) and its independent data transmission via two channels is a decisive factor in the performance of the Interbus protocol. The protocol enables the creation of a seamless network comprising control systems and intelligent field devices right down to individual sensors and actuators.
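To make the summation frame mechanism described under (1) Operation mechanism more concrete, here is a minimal Python sketch of one data cycle through the ring shift register; the word-sized registers, string placeholders, and four-slave example are simplifying assumptions for illustration only, not part of the Interbus specification.

# One Interbus data cycle modeled as a shift register formed by the ring of slaves.
# The master shifts the loop-back word followed by the OUT data into the ring;
# each shift pushes one word out of the far end of the ring back to the master.

def data_cycle(out_data, in_data):
    """out_data[i]: OUT word destined for slave i+1 (master -> slaves).
    in_data[i]: IN word currently held in slave i+1's register (slaves -> master).
    Returns (words received by the master, final slave register contents)."""
    stages = list(in_data)                                   # ring stage i = slave i+1
    to_send = ["LOOPBACK"] + list(reversed(out_data))        # loop-back word goes first
    received = []
    for word in to_send:
        stages.insert(0, word)                               # master shifts a word in...
        received.append(stages.pop())                        # ...and one word drops out
    return received, stages

received, registers = data_cycle(["OUT1", "OUT2", "OUT3", "OUT4"],
                                 ["IN1", "IN2", "IN3", "IN4"])
print(received)    # ['IN4', 'IN3', 'IN2', 'IN1', 'LOOPBACK'] - loop-back word arrives last
print(registers)   # ['OUT1', 'OUT2', 'OUT3', 'OUT4'] - each slave now holds its OUT word

As in Fig. 3.16, the first data word sent after the loop-back word ends up in the register of the last slave in the ring, and the master knows the cycle is complete when the loop-back word returns.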
3.2.1.5 Ethernets/Hubs
Ethernet is the major local area network (LAN) technology in use today, and is widely used to connect PCs and workstations to LANs. Ethernet refers to the family of LAN products covered by the IEEE 802.3 standard, and the technology can run over both optical fiber and twisted-pair cables. Over the years, Ethernet has steadily evolved to provide additional performance and network intelligence. More than 300 million switched Ethernet ports have been installed worldwide. Ethernet technology enjoys such wide acceptance because it is easy to understand, deploy, manage, and maintain. Ethernet is low cost and flexible, and supports a variety of network topologies. Although traditional, non-Ethernet-based industrial solutions have data rates of between 500 kbps and 12 Mbps, Ethernet technology can deliver substantially higher performance. Because it is based on industry standards, it can run on and interconnect Ethernet-compliant devices from any vendor. This continual improvement has made Ethernet an excellent solution for industrial applications. Today, the technology can provide four data rates. (1) 10BASE-T Ethernet delivers performance of up to 10 Mbps over twisted-pair copper cable. (2) Fast Ethernet delivers a speed increase of 10 times the 10BASE-T Ethernet specification (100 Mbps) while retaining many of Ethernet's technical specifications. These similarities enable organizations to use 10BASE-T applications and network management tools on Fast Ethernet networks. (3) Gigabit Ethernet extends the Ethernet protocol even further, increasing speed 10-fold over Fast Ethernet to 1000 Mbps, or
1 Gigabit/s. Because it is based upon the current Ethernet standard and compatible with the installed base of Ethernet and Fast Ethernet switches and routers, network managers can support Gigabit Ethernet without needing to retrain or learn a new technology. (4) 10 Gigabit Ethernet, ratified as a standard in June 2002, is an even faster version of Ethernet. It uses the IEEE 802.3 Ethernet media access control (MAC) protocol, the IEEE 802.3 Ethernet frame format, and the IEEE 802.3 frame size. Because 10 Gigabit Ethernet is a type of Ethernet, it can support intelligent Ethernet-based network services, interoperate with existing architectures, and minimize users' learning curves. Its high data rate of 10 Gigabits/s makes it a good solution to deliver high bandwidth in wide area networks (WANs) and metropolitan area networks (MANs). (1) Industrial Ethernet. Recognizing that Ethernet is the leading networking solution, many industry organizations are porting the traditional Fieldbus architectures to Industrial Ethernet. Industrial Ethernet applies the Ethernet standards developed for data communication to manufacturing control networks (Fig. 3.20). Using IEEE standards-based equipment, organizations can migrate all or part of their factory operations to an Ethernet environment at the pace they wish. Instead of using architectures composed of multiple separate networks, Industrial Ethernet can unite a company's administrative, control-level, and device-level networks to run over a single network infrastructure. In an Industrial Ethernet network, Fieldbus-specific information that is used to control I/O devices and other manufacturing components is embedded into Ethernet frames. Because the technology is based
Device profile
Valves
Drivers
Robotics
Application object library Application Layer 2 (Data link) Layer 1 (Physics)
Explicit messages, I/O messages Message routing, connection management TCP
UDP IP Ethernet MAC/LLC Physical layer
QoS parameter
Layer 4(Transport Layer 3 (Network)
Data management services
Other
Fieldbus—Spec
Semiconductor
Figure 3.20 Using intelligent Ethernet for automation control.
on industry standards rather than on custom or proprietary standards, it is more interoperable with other network equipment and networks. Although Industrial Ethernet is based on the same industry standards as traditional Ethernet technology, the implementation of the two solutions is not always identical. Industrial Ethernet usually requires more robust equipment and a very high level of traffic prioritization when compared with traditional Ethernet networks in a corporate data network. The primary difference between Industrial Ethernet and traditional Ethernet is the type of hardware used. Industrial Ethernet equipment is designed to operate in harsh environments. It includes industrial-grade components, convection cooling, and relay output signaling, and it is designed to operate at extreme temperatures and under extreme vibration and shock. Power requirements for industrial environments differ from data networks, so the equipment runs using 24 V of DC power. To maximize network availability, it also includes fault-tolerant features such as redundant power supplies. Industrial Ethernet environments also differ from traditional Ethernet networks in their use of multicasting by hosts for certain applications. Industrial applications often use producer–consumer communication, where information "produced" by one device can be "consumed" by a group of other devices. In a producer–consumer environment, the most important priority for a multicast application is to guarantee that all hosts receive data at the same time. A traditional Ethernet network, on the other hand, focuses more on the efficient utilization of bandwidth in general, and less on synchronous data access. To help optimize synchronous data access, Industrial Ethernet equipment must include the intelligence and Quality of Service (QoS) features needed to enable organizations to appropriately prioritize multicast transmissions. Ethernet technology can provide not only excellent performance for manufacturing applications, but also a wide range of network security measures to provide both confidentiality and data integrity. Confidentiality helps ensure that data cannot be accessed by unauthorized users. Data integrity protects data from intentional or accidental alteration. These network security advantages protect manufacturing devices like programmable logic controllers (PLCs) as well as PCs, and apply to both equipment and data security. Manufacturers can use many methods to help ensure network confidentiality and integrity. These network security measures can be grouped into
several categories, including access control and authentication, and secure connectivity and management. Access control is commonly implemented using firewalls or network-based controls protecting access to critical applications, devices, and data so that only legitimate users and information can pass through the network. However, access-control technology is not limited to dedicated firewall devices. Any device that can make decisions to permit or deny network traffic, such as an intelligent switch, is part of an integrated access-control solution. When designing an access-control solution, network administrators can set up filtering decisions based on a variety of criteria, such as an IP address or TCP/UDP port number. Intelligent switches can provide support for this advanced filtering to limit network access to authorized users. At the same time, they can enable organizations to enforce policy decisions based on the IP or MAC address of a laptop or PLC. Virtual LANs (VLANs) are another access-control solution, providing the ability to create multiple IP subnets within an Ethernet switch. VLANs provide network security and isolation by virtually segmenting factory floor data from other data and users. VLANs can also be used to enhance network performance, separating low-priority end devices from high-priority devices. Access controls can also include a variety of device- or user-authentication services. Authentication services determine who may access a network and what services they are authorized to use. For example, the 802.1x authentication protocol provides port-based authentication so that only legitimate devices can connect to switch ports. Authentication services are an effective complement to other network security measures in a manufacturing environment. To provide additional protection for manufacturing networks, organizations can take several approaches to authenticate and encrypt network traffic. Using virtual private network (VPN) technology, Secure Sockets Layer (SSL) encryption can be applied to application-layer data in an IP network. Organizations can also use IP Security (IPSec) technology to encrypt and authenticate network packets to thwart network attacks such as sniffing and spoofing. VPN client software, together with dedicated VPN network hardware, can be used to encrypt device monitoring and programming sessions, and to support strong authentication. Manufacturers can also use Secure Shell (SSH) Protocol encryption for remote terminal logins to network devices. Version 3 of the Simple Network Management Protocol (SNMP) also offers
support for encryption and authentication of management commands and data. (2) Network topology. Because factory floor applications run in real time, the network must be available to users on a continuous basis, with little or no downtime. Manufacturers can help ensure network reliability using effective network design principles, as well as intelligent networking services. Manufacturers deploying an Ethernet solution should design networks with redundant paths to ensure that a single device outage does not take down the entire network. The two network topologies most often used are ring and hub-and-spoke. In hub-and-spoke designs (Fig. 3.21), three layers of switches are usually installed. The first layer is often referred to as the access layer. These switches provide connections for end-point devices like PLCs, robots, and Human–Machine Interfaces (HMIs). A second layer, called the distribution layer, provides connectivity between the access-layer switches. A third layer, called the core layer, provides connectivity to other networks or to the Internet service provider (ISP) via routers. The distribution layer may include switches with routing functions to provide inter-VLAN routing. Access-layer switches, on the other hand, generally provide only Layer 2 (data link) forwarding services. For optimum performance, network equipment at each of these layers must be aware of the information contained within the Layer 2 through Layer 4 packet headers.
Figure 3.21 Hub-and-spoke network topology.
In ring topologies (Fig. 3.22), all devices are connected in a ring. Each device has a neighbor to its left and right. If a connection on one side of the device is broken, network connectivity can still be maintained over the ring via the opposite side of the device. In some situations, manufacturers install dual counter-rotating rings to maximize availability. In a ring topology, each switch functions as both an access-layer and a distribution-layer switch.
Figure 3.22 Ring topology.
(3) Network protocols. To prevent loops from being formed in the network when devices are interconnected via multiple paths, some organizations use the Spanning Tree Protocol. If a problem occurs on a network node, this protocol enables a redundant alternative link to come online automatically. The traditional Spanning Tree Protocol has been considered too slow for industrial environments. To address these performance concerns, the IEEE standards committee has ratified a new Rapid Spanning Tree Protocol (802.1w). This protocol provides subsecond convergence times that vary between 200 and 800 ms, depending on network topology. Using 802.1w, organizations can enjoy the benefits of Ethernet networks, with the performance and reliability that manufacturing applications demand. Another spanning-tree option is the Multiple Spanning Tree Protocol (802.1s). This enables VLANs to be grouped into spanning-tree instances. Each instance has a spanning-tree topology that is independent from other spanning-tree instances. This architecture provides multiple forwarding paths for data traffic, enables load balancing, and reduces the number of spanning-tree instances needed to support a large number of VLANs. Ethernet switches provide excellent connectivity and performance; however, each switch is another device that must be managed on the factory floor. To make switched Ethernet networks easy to support and maintain, intelligent switches include built-in management capabilities. These intelligent features make it easy to connect manufacturing devices to the network, without creating additional configuration tasks. And they help minimize network downtime if part of the network should fail. One of the most useful intelligent features in a switched Ethernet network is Option 82. In an Ethernet network, Dynamic Host Configuration
Protocol (DHCP) lets devices dynamically acquire their IP addresses from a central server. The DHCP server can be configured to give out the same address each time or generate a dynamic one from a pool of available addresses. Because the interaction of the factory floor devices requires specific addresses, Industrial Ethernet networks usually do not use dynamic address pools. However, static addresses can have drawbacks. Because they are linked to the MAC address of the client, and because the MAC address is often hard-coded in the network interface of the client device, the association is lost when a client device fails and needs to be replaced. Extended fields in the DHCP packet can be filled in by the switch, indicating the location of the device requesting an IP address. The 82nd optional field, called Option 82, carries the specific port number and the MAC address of the switch that received the DHCP request. This modified request is sent on to the DHCP server. If an access server is Option 82-aware, it can use this information to formulate an IP address based on the Option 82 information. Effective use of Option 82 enables manufacturers to minimize administrative demands and maintain maximum network uptime even in the event of the failure of individual devices. Because manufacturing processes depend on the precise synchronization of processes, network determinism must be optimized to deliver the best possible performance. Data must be prioritized using QoS to ensure that critical information is received first. The multicast applications that are prevalent in manufacturing environments must be well managed using Internet Group Management Protocol (IGMP) snooping. Many Industrial Ethernet applications depend on IP multicast technology. IP multicast allows a host, or source, to send packets to another group of hosts called receivers anywhere within the IP network using a special form of IP address called the IP multicast group address. While traditional multicast services, such as video or multimedia, tend to scale with the number of streams, Industrial Ethernet multicast applications do not. Industrial Ethernet environments use a producer–consumer model, where devices generate data called “tags” for consumption by other devices. The devices that generate the data are producers and the devices receiving the information are consumers. Multicast is more efficient than unicast, because consumers will often want the same information from a particular producer. Each device on the network can be both a producer and a consumer of traffic. While most devices generate very little data, networks with a large number of nodes can generate a large amount of multicast
traffic, which can overrun end devices in the network. Using mechanisms like QoS and IGMP snooping, organizations can control and manage multicast traffic in manufacturing environments. Many manufacturing applications depend on multicast traffic, which can introduce performance problems in the network. To address these challenges in an Industrial Ethernet environment, organizations can deploy IGMP snooping. IGMP snooping limits the flooding of multicast traffic by dynamically configuring the interfaces so that multicast traffic is forwarded only to interfaces associated with IP multicast devices. In other words, when a multicast message is sent to the switch, the switch forwards the message only to the interfaces that are interested in the traffic. This is very important because it reduces the load of traffic traversing the network. It also relieves the hosts from processing frames that are not needed. In the producer–consumer model used by Industrial Ethernet, IGMP snooping can limit unnecessary traffic from the I/O device that is producing, so that it only reaches the device consuming that data. Messages delivered to a given device that were intended for other devices consume resources and slow performance, so networks with many multicasting devices will suffer performance issues if IGMP snooping or other multicast limiting schemes are not implemented. The IGMP snooping feature allows Ethernet switches to "listen" to the IGMP conversation between hosts. With IGMP snooping, the Ethernet switch examines the IGMP traffic coming to the switch and keeps track of multicast groups and member ports. When the switch receives an "IGMP join" report from a host for a particular multicast group, the switch adds the host port number to the associated multicast forwarding table entry. When it receives an IGMP "leave group" message from a host, it removes the host port from the table entry. After the switch relays the IGMP queries, it deletes entries periodically if it does not receive any IGMP membership reports from the multicast clients. A Layer 3 router normally performs the querying function. When IGMP snooping is enabled in a network with Layer 3 devices, the multicast router sends out periodic IGMP general queries to all VLANs. The switch responds to the router queries with only one "join" request per MAC multicast group. The switch then creates one entry per VLAN in the Layer 2 forwarding table for each MAC group from which it receives an IGMP join request. All hosts interested in this multicast traffic send "join" requests and are added to the forwarding table entry. Layer 2 multicast groups learned through IGMP snooping are dynamic. However, in a managed switch, organizations
can statically configure MAC multicast groups. This static setting supersedes any automatic manipulation by IGMP snooping. Multicast group membership lists can consist of both user-defined settings and settings learned via IGMP snooping. (4) Quality of service (QoS). An Industrial Ethernet network may transmit many different types of traffic, from routine data to critical control information, or even bandwidth-intensive video or voice. The network must be able to distinguish among and give priority to different types of traffic. To address these issues, organizations can implement QoS using several techniques. QoS involves three important steps. First, different traffic types in the network need to be identified through classification techniques. Second, advanced buffer-management techniques need to be implemented to prevent high-priority traffic from being dropped during congestion. Finally, scheduling techniques need to be incorporated to transmit high-priority traffic from queues as quickly as possible. In Layer 2 switches on an Ethernet network, QoS usually prioritizes native, encapsulated Ethernet frames, or frames tagged with 802.1p class of service (CoS) specifications. More advanced QoS mechanisms take this definition a step further. For example, advanced Ethernet switches can study and interpret the flow of QoS traffic as it is processed through the switch. A switch can be configured to prioritize frames based on given criteria at different layers of the OSI reference model. For example, traffic could be prioritized according to the source MAC address (in Layer 2) or the destination TCP port (in Layer 4). Any traffic traveling through the interface to which this QoS is applied is classified, and tagged with the appropriate priority. Once a packet has been classified, it is then placed in a holding queue in the switch, and scheduled based on the desired scheduling algorithm. In an Industrial Ethernet application, real-time I/O control traffic would share network resources with configuration (FTP) and data-collection flows, as well as other traffic, in the upper layers of the OSI reference model. By using QoS to give high priority to real-time UDP control traffic, organizations can prevent delay or jitter from affecting any control functions.
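To make the classify, buffer, and schedule sequence concrete, the short Python sketch below models strict-priority scheduling on one switch egress port: frames are classified by their 802.1p CoS value into a small set of queues, and the highest-priority nonempty queue is always served first. The CoS-to-queue mapping and the frame representation are illustrative assumptions rather than any particular vendor's implementation.

# Hedged sketch of QoS classification and strict-priority scheduling on one
# egress port. CoS-to-queue mapping and frame fields are illustrative only.
from collections import deque

NUM_QUEUES = 4
# Map 802.1p CoS values (0-7) onto four queues; queue 3 is the highest priority.
COS_TO_QUEUE = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}

class EgressPort:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]

    def classify_and_enqueue(self, frame):
        """Steps 1 and 2: classify by CoS tag, then buffer in the matching queue."""
        queue_id = COS_TO_QUEUE.get(frame.get("cos", 0), 0)
        self.queues[queue_id].append(frame)

    def schedule(self):
        """Step 3: strict priority - always transmit from the highest nonempty queue."""
        for queue in reversed(self.queues):
            if queue:
                return queue.popleft()
        return None  # nothing to send

port = EgressPort()
port.classify_and_enqueue({"cos": 1, "payload": "FTP configuration data"})
port.classify_and_enqueue({"cos": 6, "payload": "real-time I/O control"})
print(port.schedule()["payload"])   # the control frame is transmitted first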
3.2.2 Interfaces
Devices connect to the microprocessor over an interface bus. The bus specification defines the transfer speed between the microprocessor and the connected device, which greatly affects the performance of the overall system.
Peripherals can connect to the controller (or computer) using either an internal or an external interface. This subsection describes the main interfaces popularly used in industrial control.
3.2.2.1 PCI, ISA, and PCMCIA
(1) PCI bus. Intel has developed a standard interface for microprocessors, named the PCI (Peripheral Component Interconnect) local bus. This technology allows fast memory, disk, and video access. The PCI bus is now the main interface bus used in most industrial controllers, and is rapidly replacing the ISA bus for internal interface devices. It is a very adaptable bus, and most external buses, such as SCSI and USB, connect to the processor via the PCI bus. The PCI bus transfers data using the system clock and can operate over a 32- or 64-bit data path. The high transfer rates used in PCI architecture machines limit the number of PCI bus interfaces to two or three, normally the graphics adapter and the hard disk controller. If data is transferred at 64 bits at a rate of 33 MHz, then the maximum transfer rate is 264 MB/s. Detailed descriptions of the PCI bus system architecture, the PCI operation, the bus arbitration, the bus configuration, and PCI bus interrupt handling are given in both Sections 2.1.4 and 7.3.1. (2) ISA bus. IBM developed the Industry Standard Architecture or ISA bus for their 80286-based AT (advanced technology) computer. It had the advantage of being able to deal with 16 bits of data at a time. An extra edge connector gives compatibility with the PC bus. This gives an extra 8 data bits and four address lines. Thus, the ISA bus has a 16-bit data bus and a 24-bit address bus. This gives a maximum of 16 MB of addressable memory and, like the PC bus, it uses a fixed clock rate of 8 MHz. The maximum data rate is thus 2 bytes (16 bits) per clock cycle, giving a maximum throughput of 16 MB/s. IBM's PC/AT was designed with an expansion bus which not only provided for taking advantage of the new technology, but also remained compatible with the older style 8-bit XT add-in boards. Anticipating that advances in processors would again outpace advances in bus technology, IBM designed the PC/AT with two separate oscillators. In this way, the microprocessor and expansion bus could be run on different clocks with different speeds. Therefore, a controller or computer running a newer processor with a 33 MHz clock speed could also run its expansion bus at an 8 MHz clock rate. ISA cards are more cumbersome to
install than other cards because I/O addresses, interrupts, and clock speed must be set using jumpers and switches on the card itself. The other bus options, which use software to set these parameters, are called Plug & Play. While there is nothing inferior about using jumpers and switches, it can be more intimidating for novice users. The ISA system, however, does not have a central registry from which to allocate system resources. Consequently, each device behaves as though it has sole access to system resources such as DMA, I/O ports, IRQs, and memory. Obviously, this can cause problems when using multiple add-in boards in a single system. In practice there is no speed difference between running many serial communication peripherals over a PCI or an ISA bus, though the PCI advantage is obvious for high-speed devices such as video cards. Thus, there is no reason to convert current ISA serial communication systems to PCI, as ISA will provide equivalent functionality, generally at a lower price. However, if you are starting a new installation using a PC with few or no ISA slots (as is increasingly the case today), or you prefer using Plug & Play cards, then you should consider using PCI adapters. Figure 3.23 shows a typical connection to the ISA bus. The ALE (sometimes known as BALE) controls the address latch; when active low, it latches the address lines A2–A19 to the ISA bus. The address is latched when ALE goes from a high to a low. The Pentium's data bus is 64 bits wide, whereas the ISA expansion bus is 16 bits wide. It is the bus controller's function to steer D0–D31 data between the processor and the slave device for either 8-bit or 16-bit communications.
Figure 3.23 ISA bus connections.
The following are the descriptions of the ISA signals. (a) SA19 to SA0. System Address bits 19:0 are used to address memory and I/O devices within the system. These signals may be used along with LA23 to LA17 to address up to 16 MB of memory. Only the lower 16 bits are used during I/O operations to address up to 64K I/O locations. SA19 is the most significant bit. SA0 is the least significant bit. These signals are gated on the system bus when BALE is high and are latched on the falling edge of BALE. They remain valid throughout a read or write command. These signals are normally driven by the system microprocessor or DMA controller, but may also be driven by a bus master on an ISA board that takes ownership of the bus. (b) LA23 to LA17. Unlatched Address bits 23:17 are used to address memory within the system. They are used along with SA19 to SA0 to address up to 16 MB of memory. These signals are valid when BALE is high. They are "unlatched" and do not stay valid for the entire bus cycle. Decodes of these signals should be latched on the falling edge of BALE. (c) AEN. Address Enable is used to de-gate the system microprocessor and other devices from the bus during DMA transfers. When this signal is active, the system DMA controller has control of the address, data, and read/write signals. This signal should be included as part of ISA board select decodes to prevent incorrect board selects during DMA cycles. (d) BALE. Buffered Address Latch Enable is used to latch the LA23 to LA17 signals or decodes of these signals. Addresses are latched on the falling edge of BALE. It is forced high during DMA cycles. When used with AEN, it indicates a valid microprocessor or DMA address. (e) CLK. System Clock is a free-running clock typically in the 8–10 MHz range, although its exact frequency is not guaranteed. It is used in some ISA board applications to allow synchronization with the system microprocessor. (f) SD15 to SD0. System Data serves as the data bus bits for devices on the ISA bus. SD15 is the most significant bit. SD0 is the least significant bit. SD7 to SD0 are used for transfer of data with 8-bit devices. SD15 to SD0 are used for transfer of data with 16-bit devices. Sixteen-bit devices transferring data with eight-bit devices convert the transfer into two eight-bit cycles using SD7 to SD0.
(g) –DACK0 to –DACK3 and –DACK5 to –DACK7. DMA Acknowledge 0 to 3 and 5 to 7 are used to acknowledge DMA requests on DRQ0 to DRQ3 and DRQ5 to DRQ7. (h) DRQ0 to DRQ3 and DRQ5 to DRQ7. DMA Requests are used by ISA boards to request service from the system DMA controller or to request ownership of the bus as a bus master device. These signals may be asserted asynchronously. The requesting device must hold the request signal active until the system board asserts the corresponding DACK signal. (i) –I/O CH CK. The I/O Channel Check signal may be activated by ISA boards to request that a nonmaskable interrupt (NMI) be generated to the system microprocessor. It is driven active to indicate that an error has been detected. (j) I/O CH RDY. I/O Channel Ready allows slower ISA boards to lengthen I/O or memory cycles by inserting wait states. This signal's normal state is active high (ready). ISA boards drive the signal inactive low (not ready) to insert wait states. Devices using this signal to insert wait states should drive it low immediately after detecting a valid address decode and an active read or write command. The signal is released high when the device is ready to complete the cycle. (k) –IOR. I/O Read is driven by the owner of the bus and instructs the selected I/O device to drive read data onto the data bus. (l) –IOW. I/O Write is driven by the owner of the bus and instructs the selected I/O device to capture the write data on the data bus. (m) IRQ3 to IRQ7, IRQ9 to IRQ12, and IRQ14 to IRQ15. Interrupt Requests are used to signal the system microprocessor that an ISA board requires attention. An interrupt request is generated when an IRQ line is raised from low to high. The line must be held high until the microprocessor acknowledges the request through its interrupt service routine. These signals are prioritized, with IRQ9 to IRQ12 and IRQ14 to IRQ15 having the highest priority (IRQ9 is the highest) and IRQ3 to IRQ7 having the lowest priority (IRQ7 is the lowest). (n) –SMEMR. System Memory Read instructs a selected memory device to drive data onto the data bus. It is active only when the memory decode is within the low 1 MB of memory space. SMEMR is derived from MEMR and a decode of the low 1 MB of memory. (o) –SMEMW. System Memory Write instructs a selected memory device to store the data currently on the data bus. It is
active only when the memory decode is within the low 1 MB of memory space. SMEMW is derived from MEMW and a decode of the low 1 MB of memory. (p) –MEMR. Memory Read instructs a selected memory device to drive data onto the data bus. It is active on all memory read cycles. (q) –MEMW. Memory Write instructs a selected memory device to store the data currently on the data bus. It is active on all memory write cycles. (r) –REFRESH. Memory Refresh is driven low to indicate that a memory refresh operation is in progress. (s) OSC. Oscillator is a clock with a 70 ns period (14.31818 MHz). This signal is not synchronous with the system clock (CLK). (t) RESET DRV. Reset Drive is driven high to reset or initialize system logic upon power-up or subsequent system reset. (u) TC. Terminal Count provides a pulse to signal that a terminal count has been reached on a DMA channel operation. (v) –MASTER. Master is used by an ISA board along with a DRQ line to gain ownership of the ISA bus. Upon receiving a –DACK, a device can pull –MASTER low, which allows it to control the system address, data, and control lines. After –MASTER is low, the device should wait one CLK period before driving the address and data lines, and two clock periods before issuing a read or write command. (w) –MEM CS16. Memory Chip Select 16 is driven low by a memory slave device to indicate it is capable of performing a 16-bit memory data transfer. This signal is driven from a decode of the LA23 to LA17 address lines. (x) –I/O CS16. I/O Chip Select 16 is driven low by an I/O slave device to indicate it is capable of performing a 16-bit I/O data transfer. This signal is driven from a decode of the SA15 to SA0 address lines. (y) –0WS. Zero Wait State is driven low by a bus slave device to indicate it is capable of performing a bus cycle without inserting any additional wait states. To perform a 16-bit memory cycle without wait states, –0WS is derived from an address decode. (z) –SBHE. System Byte High Enable is driven low to indicate a transfer of data on the high half of the data bus (D15–D8). (3) PCMCIA (PC Card). The Personal Computer Memory Card International Association (PCMCIA) interface allows small, thin cards to be plugged into laptop, notebook, or palm computers.
The interface was originally designed for memory cards (Version 1.0), but it has since been adopted for many other types of adapters (Version 2.0), such as fax/modems, sound cards, local area network cards, CD-ROM controllers, digital I/O cards, and so on. Most PCMCIA cards comply with either PCMCIA Type II or Type III. Type I cards are 3.3 mm thick, Type II takes cards up to 5 mm thick, and Type III allows cards up to 10.5 mm thick. A newer standard, Type IV, takes cards that are thicker than 10.5 mm. Type II interfaces can accept Type I cards, Type III accepts Type I and Type II, and Type IV interfaces accept Types I, II, and III. The PCMCIA standard uses a 16-bit data bus (D0–D15) and a 26-bit address bus (A0–A25), which gives an addressable memory of 2^26 bytes (64 MB). The memory is arranged as: (1) common memory and attribute memory, which gives a total addressable memory of 128 MB; (2) an I/O addressable space of 65,536 (64K) 8-bit ports. The PCMCIA interface allows the PCMCIA device to map into the main memory or into the I/O address space. For example, a modem PCMCIA device would map its registers into the standard COM port addresses (such as 3F8h–3FFh for COM1 or 2F8h–2FFh for COM2). Any accesses to the mapped memory area will be redirected to the PCMCIA card rather than to the main memory or I/O address space. These mapped areas are called windows. A window is defined with a START address and a LAST address.
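The window mechanism can be pictured with a small Python sketch: each window is a (START, LAST) address range owned by a card, and any access whose address falls inside a window is redirected to that card instead of to main memory or the normal I/O space. The window table and the example COM2 mapping are illustrative assumptions, not the actual socket-controller programming model.

# Hedged sketch of PCMCIA window redirection. Window ranges and the example
# modem card mapped over the COM2 register block are illustrative only.
WINDOWS = [
    # (start, last, owner)
    (0x2F8, 0x2FF, "pcmcia-modem"),
]

def route_access(address):
    """Return which target should service an access to the given address."""
    for start, last, owner in WINDOWS:
        if start <= address <= last:
            return owner          # redirected to the PCMCIA card
    return "system"               # falls through to main memory / normal I/O

print(route_access(0x2F8))  # pcmcia-modem
print(route_access(0x3F8))  # system (COM1 stays on the motherboard UART)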
3.2.2.2 IDE
The most popular interface for hard disk drives is the integrated drive electronics (IDE) interface. Its main advantage is that the hard disk controller is built into the disk drive and the interface to the motherboard consists simply of a stripped-down version of the ISA bus. The most common standard is the ANSI-defined ATA-IDE standard. It uses a 40-way ribbon cable to connect to 40-pin header connectors. The standard allows for the connection of two disk drives in a daisy-chain configuration. This can cause problems because both drives have controllers within their drives. The primary drive is assigned as the master, and the secondary drive is the slave. A drive is set as a master or a slave by setting jumpers on the disk drive. They can also be set by software using the cable select pin on the interface. The specifications for IDE are
(1) Maximum of two devices (hard disks)
(2) Maximum capacity for each disk of 528 MB
(3) Maximum cable length of 18 in.
(4) Data transfer rates of 3.3, 5.2, and 8.3 MB/s.
A newer standard called enhanced IDE (E-IDE) allows for higher capacities than IDE:
(1) Maximum of four devices (hard disks)
(2) Uses two ports (for master and slave pairs)
(3) Maximum capacity for each disk of 8.4 GB
(4) Maximum cable length of 18 in.
(5) Data transfer rates of 3.3, 5.2, 8.3, 11.1, and 16.6 MB/s.
The PC (Personal Computer, in this chapter) is now a highly integrated system. The main elements of modern systems are the processor, the system controller, and the PCI IDE/ISA accelerator, as illustrated in Fig. 3.24. The system controller provides the main interface between the processor and the level-2 cache, the DRAM, and the PCI bus. It is one of the most important devices in the system and allows data to flow to and from the processor in the correct way. The PCI bus links to interface devices and also to the PCI IDE/ISA accelerator (such as the PIIX4 device). The PCI IDE/ISA device then interfaces to other buses, such as IDE and ISA. The IDE interface has separate signals for the primary and secondary IDE drives; these are electrically isolated, which allows drives to be swapped easily without affecting the other port. The PCI IDE/ISA accelerator is a massively integrated device (the PIIX4 has 324 pins) and provides an interface to other buses, such as USB and X-Bus.
Figure 3.24 IDE system connections.
The PCI IDE/ISA accelerator also handles the interrupts from the PCI bus and ISA bus; it thus has two integrated 82C59 interrupt controllers, which support up to 15 interrupts. It also handles DMA transfers (on up to 8 channels), and thus has two integrated 82C37 DMA controllers. Along with this, it has an integrated 82C54, which provides the system timer, the DRAM refresh signal, and the speaker tone output. The IDE (or AT bus) is the de facto standard for most hard disks in PCs. It has the advantage over older interfaces that the controller is integrated into the disk drive. Thus, the computer only has to pass high-level commands to the unit, and actual control is achieved by the integrated controller. Several companies developed a standard command set for the AT attachment (ATA). Commands include
(1) read sector buffer: reads the contents of the controller's sector buffer
(2) write sector buffer: writes data to the controller's sector buffer
(3) check for active
(4) read multiple sectors
(5) lock drive door.
Control of the disk is achieved by passing these high-level commands through a set of I/O port registers.
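As a hedged illustration of this register-level command interface, the sketch below issues an ATA IDENTIFY DEVICE command through the conventional primary-channel task-file registers at 1F0h–1F7h. The inb/outb/inw helpers are hypothetical stand-ins for whatever port-I/O mechanism the platform provides; application code would normally go through an operating-system driver rather than touching these registers directly.

# Hedged sketch: issuing an ATA IDENTIFY DEVICE command through the primary
# channel's task-file registers. inb()/outb()/inw() are hypothetical port-I/O
# helpers supplied by the caller; real code would use an OS driver instead.
ATA_DATA       = 0x1F0   # 16-bit data register
ATA_DRIVE_HEAD = 0x1F6   # drive/head select register
ATA_STATUS     = 0x1F7   # status (read) / command (write) register
CMD_IDENTIFY   = 0xEC    # IDENTIFY DEVICE command code
STATUS_BSY     = 0x80
STATUS_DRQ     = 0x08

def identify_drive(outb, inb, inw, master=True):
    """Ask the selected drive to return its 256-word identification block."""
    outb(ATA_DRIVE_HEAD, 0xA0 if master else 0xB0)   # select master or slave
    outb(ATA_STATUS, CMD_IDENTIFY)                   # write the command code
    while inb(ATA_STATUS) & STATUS_BSY:              # wait until the drive is not busy
        pass
    if not (inb(ATA_STATUS) & STATUS_DRQ):           # no data ready -> no device
        return None
    return [inw(ATA_DATA) for _ in range(256)]       # read the 256 data words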
3.2.2.3 SCSI
The small computer system interface (SCSI) has an intelligent bus subsystem and can support multiple devices cooperating concurrently. Each device is assigned a priority. The main types of SCSI are (1) SCSI-I. SCSI-I transfers at a rate of 5 MB/s with an 8-bit data bus and seven devices per controller. (2) SCSI-II. SCSI-II supports SCSI-I plus one or more of the following: (a) Fast SCSI, which uses a synchronous transfer to give a 10 MB/s transfer rate. The initiator and target initially negotiate to see if they can both support synchronous transfer. If they can, they then go into a synchronous transfer mode. (b) Fast and wide SCSI-2, which doubles the data bus width to 16 bits to give a 20 MB/s transfer rate. (c) Fifteen devices per master device. (d) Tagged command queuing (TCQ), which greatly improves performance and is supported by Windows, NetWare, and OS/2.
(e) Multiple commands sent to each device. (f) Commands executed in whatever sequence will maximize device performance. (3) Ultra SCSI (SCSI-III). Ultra SCSI operates as either 8-bit or 16-bit with either a 20 or 40 MB/s transfer rate (Table 3.5).
The SCSI standard uses a 50-pin header connector and a ribbon cable to connect up to eight devices. It overcomes the problem existing in IDE, where devices have to be assigned as a master and a slave. SCSI and fast SCSI transfer one byte at a time with a parity check on each byte. SCSI-II, wide SCSI, and ultra SCSI use a 16-bit data transfer and a 68-pin connector. An SCSI bus is made up of an SCSI host adapter connected to a number of SCSI units via the common bus. As all units connect to a common bus, only two units can transfer data at a time, either from one SCSI unit to another or from one SCSI unit to the SCSI host. The great advantage of this transfer is that it does not involve the processor. Each unit on an SCSI bus is assigned a SCSI ID address. In the case of SCSI-I, this ranges from 0 to 7 (where 7 is normally reserved for a tape drive). The host adapter takes one of the addresses; thus a maximum of seven units can connect to the bus. Most systems allow the units to take on any SCSI ID address, but older systems required boot drives to be connected to a specific SCSI address. When the system is initially booted, the host adapter sends out a Start Unit command to each SCSI unit. This allows each of the units to start in an orderly manner (and not overload the local power supply). The host starts with the highest priority address (ID = 7) and finishes with the lowest address (ID = 0). Typically, the ID is set with a rotating switch selector or by three jumpers.
Table 3.5 SCSI Types

Types                Data Bus (bits)   Transfer Rate (MB/s)   Tagged Command Queuing   Parity Checking   Maximum Devices   Pins on Cable and Connector
SCSI-I               8                 5                      No                       Option            7                 50
SCSI-II fast         8                 10 (10 MHz)            Yes                      Yes               7                 50
SCSI-III fast/wide   16                20 (10 MHz)            Yes                      Yes               15                68
Ultra SCSI           16                40 (20 MHz)            Yes                      Yes               15                68
SCSI defines an initiator and a target. The initiator requests a function from a target, which then executes the function, as illustrated in Fig. 3.25: the initiator effectively takes over the bus for the time needed to send a command; the target executes the command, then contacts the initiator and transfers any data. The bus is then free for other transfers. Table 3.6 gives the definitions of the main SCSI signals. Each of the control signals can be true or false. They can be either OR-tied driven or non-OR-tied driven. With OR-tied driving, the driver does not drive the signal to the false state. In this case, the bias circuitry of the bus terminators pulls the signal false whenever it is released by the drivers at every SCSI device. If any driver is asserted, then the signal is true. The BSY, SEL, and RST signals are OR-tied. In the ordinary operation of the bus, the BSY and RST signals may be simultaneously driven true by several drivers. With non-OR-tied driving, however, the signal may be actively driven false.
Figure 3.25 Initiator and target in SCSI.
Table 3.6 SCSI Main Signals

Signals                Definitions
BSY                    Indicates whether or not the bus is busy (an OR-tied signal).
ACK                    Activated by the initiator to indicate an acknowledgment for a REQ information transfer handshake.
RST                    When active (low), resets all the SCSI devices (an OR-tied signal).
ATN                    Activated by the initiator to indicate the attention state.
MSG                    Activated by the target to indicate the message phase.
SEL                    Activated by the initiator; it is used to select a particular target device (an OR-tied signal).
C/D (control/data)     Activated by the target to identify whether there is data or control on the SCSI bus.
REQ                    Activated by the target to request an ACK information transfer handshake.
I/O (input/output)     Activated by the target to show the direction of the data on the data bus. Input defines that data is an input to the initiator; otherwise it is an output.
No signals other than BSY, RST, and D(PARITY) are driven simultaneously by two or more drivers. The SCSI bus allows any unit to talk to any other unit, or the host to talk to any unit. Thus there must be some means of arbitration by which units capture the bus. The main phases that the bus goes through are as follows: (1) Free bus. In this state, there are no units that either transfer data or have control of the bus. It is identified by deactivated SEL and BSY (both will be high). Thus, any unit can capture the bus. (2) Arbitration. In this state, a unit can take control of the bus and become an initiator. To do this, it activates the BSY signal and puts its own ID address on the data bus. After a delay, it tests the data bus to determine whether a higher-priority unit has put its own address on the bus. If it has, then it will allow the other unit access to the bus. If its address is still on the bus, then it asserts the SEL line. After a delay, it then has control of the bus. (3) Selection. In this state, the initiator selects a target unit and gets the target to carry out a given function, such as reading or writing data. The initiator outputs the OR value of its SCSI ID and the SCSI ID of the target onto the data bus (e.g., if the initiator is 2 and the target is 5, then the OR-ed ID on the bus will be 00100100). The target then determines that its ID is on the data bus and sets the BSY line active. If this does not happen within a given time, then the initiator deactivates the SEL signal, and the bus will be free. The target determines that it is selected when the SEL signal and its SCSI ID bit are active and the BSY and I/O signals are false. It then asserts the BSY signal within a selection abort time. (4) Reselection. When the arbitration phase is complete, the winning SCSI device asserts the BSY and SEL signals and has delayed at least a bus clear delay plus a bus settle delay. The winning SCSI device sets the DATA BUS to a value that is the logical OR of its SCSI ID bit and the initiator's SCSI ID bit. Sometimes, the target takes some time to reply to the initiator's request. The initiator determines that it is reselected when the SEL and I/O signals and its SCSI ID bit are true and the BSY signal is false. The reselected initiator then asserts the BSY signal within a selection abort time of its most recent detection of being reselected. An initiator does not respond to a reselection phase if other than two SCSI ID bits are on the data bus. After the target detects that the BSY signal is true, it also asserts the BSY signal, waits a given time delay, and then releases the SEL signal. The target may then change the I/O signal and the data bus. After the reselected initiator detects that the SEL signal is false, it releases the BSY signal. The target continues to assert the BSY signal until it gives up the SCSI bus.
(5) Command. The command phase is used by the target to request command information from the initiator. The target asserts the C/D signal and negates the I/O and MSG signals during the REQ/ACK handshake(s) of this phase. The format of the command descriptor block for 6-byte commands is: Byte 0: operation code; Byte 1: logical unit number (MSB, if required); Byte 2: logical block address; Byte 3: logical block address (LSB, if required); Byte 4: transfer length (if required)/parameter list length (if required)/allocation length (if required); Byte 5: control. (6) Data. The data phase covers both the data-in and data-out phases. In the data-in phase, the target requests that data be sent to the initiator from the target. For this purpose, the target asserts the I/O signal and negates the C/D and MSG signals during the REQ/ACK handshake(s) of the phase. In the data-out phase, the target requests that data be sent from the initiator to the target. The target negates the C/D, I/O, and MSG signals during the REQ/ACK handshake(s) of this phase. (7) Message. The message phase covers both the message-out and message-in phases. The first byte transferred in either of these phases can be either a single-byte message or the first byte of a multiple-byte message. Multiple-byte messages are contained completely within a single message phase. The message system allows the initiator and the target to communicate over the interface connection. Each message can be one, two, or more bytes in length. In a single message phase, one or more messages can be transmitted (but a message cannot be split between multiple message phases). (8) Status. The status phase allows the target to request that status information be sent from the target to the initiator. The target asserts the C/D and I/O signals and negates the MSG signal during the REQ/ACK handshake(s) of this phase. The status phase normally occurs at the end of a command (although in some cases it may occur before transferring the command descriptor block).
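To make the arbitration and selection phases described above more concrete, the Python sketch below models the 8-bit data bus as an integer on which each competing device asserts its own ID bit: the highest ID present wins arbitration, and selection then places the logical OR of the initiator's and target's ID bits on the bus (so initiator 2 selecting target 5 yields 00100100, as in the example above). This is a simplified model of the bus logic only, not driver code for a real host adapter.

# Hedged sketch of SCSI-I bus arbitration and selection on an 8-bit data bus.
# This models the protocol logic only; it is not host-adapter driver code.

def arbitrate(requesting_ids):
    """Each device asserts its own ID bit; the highest ID present wins."""
    bus = 0
    for scsi_id in requesting_ids:
        bus |= 1 << scsi_id              # assert this device's ID bit
    winner = max(scsi_id for scsi_id in range(8) if bus & (1 << scsi_id))
    return winner, bus

def select(initiator_id, target_id):
    """Selection phase: the bus carries the OR of initiator and target ID bits."""
    return (1 << initiator_id) | (1 << target_id)

winner, bus = arbitrate([2, 5, 7])
print(winner)                            # 7 - the highest-priority ID wins the bus
print(format(select(2, 5), "08b"))       # 00100100 - ID bits 2 and 5 asserted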
3.2.2.4 USB and Firewire
(1) Universal serial bus (USB). USB is mainly used for the connection of medium-bandwidth peripherals such as keyboards, scanners, modems, video devices, game or graphics controllers, music interfaces, and so on. The great advantage of USB is that it allows peripherals to be added to and removed from the system without causing any system upsets. The system will also automatically sense the connected device and load the required driver. Basically, USB provides these features: (1) easy to use; (2) self-identifying
peripherals with automatic mapping of function to driver and configuration; (3) dynamically attachable and reconfigurable peripherals; (4) low-speed and full-speed transfer rates of 1.5 Mbps and 12 Mbps. USB is a balanced bus architecture that hides the complexity of the operation from the devices connected to the bus. The USB host controller controls system bandwidth. Each device is assigned a default address when the USB device is first powered or reset. Hubs and functions are assigned a unique device address by USB software. All USB devices are attached to the USB via a port on specialized USB devices known as hubs. Hubs indicate the attachment or removal of a USB device in their per-port status. Figure 3.26 shows an example connection of the USB 2.0 system. In this example, a memory hub is used to provide a fast data transfer (GB/s), while the Firewire connection provides an ultrahigh-speed connection for video transfers. The USB connection provides low-, full-, and high-speed connections to most of the peripherals that connect to the system. The USB connections can be internal or can connect to an external hub.
Figure 3.26 An example connection of the USB 2.0 system.
There are two main ways of implementing the USB host controller, as given below. (a) Open host controller interface (OHCI). This method defines the register-level interface that enables the USB controller to communicate with the host computer and the operating system. OHCI is an industry-standard hardware interface for operating systems, device drivers, and the BIOS to manage the USB. It optimizes performance of the USB bus while minimizing the CPU overhead needed to control the USB, with scatter/gather bus-master hardware support. It has efficient isochronous data transfers, allowing for high USB bandwidth without slowing down the host CPU. Furthermore, it ensures full compatibility with all USB devices. (b) Universal host controller interface (UHCI). This method defines how the USB controller talks to the host computer and its operating system. It is optimized to minimize host computer design complexity and uses the host CPU to control the USB bus. This method has the advantage of a simple design, which reduces the transistor count required to implement the USB interface on the host computer, thus reducing the system cost. Furthermore, it can provide full compatibility with all USB devices. In data transmission, USB supports two types of transfers: stream and message. A stream has no defined structure, whereas a message does. At start-up, one message pipe, control pipe 0, always exists, as it provides access to the device's configuration, status, and control information. USB optimizes large data transfers and real-time data transfers. When a pipe is established for an endpoint, most of the pipe's transfer characteristics are determined and remain fixed for the lifetime of the pipe. Each bus transaction of USB involves the transmission of up to three packets, which can be (1) a token packet transmission, (2) a data packet transmission, and (3) a handshake packet transmission. With these transfer characteristics, USB defines four transfer types: (i) Control transfers. This is bursty, nonperiodic, host-software-initiated request/response communication typically used for command/status operations. (ii) Isochronous transfers. This is periodic, continuous communication between host and device typically used for time-relevant information. This transfer type also preserves the concept of time encapsulated in the data. This does not imply, however, that the delivery needs of such data are always time-critical. (iii) Interrupt transfers. This is small-data, nonperiodic, low-frequency, bounded-latency, device-initiated communication typically used to notify the host of device service needs. (iv) Bulk transfers. This is nonperiodic, large, bursty communication typically used for data that can use any available bandwidth and can be delayed until bandwidth is available. As mentioned earlier, a major advantage of USB is the hot attachment and detachment of devices. USB does
this by sensing when a device is attached or detached. When this happens, the host system is notified, and system software interrogates the device. It then determines its capabilities, and automatically configures the device. All the required drivers are then loaded, and applications can immediately make use of the connected device. (1) Attachment of USB devices. All USB devices are addressed using the USB default address when initially connected or after they have been reset. The host determines whether the newly attached USB device is a hub or a function and assigns a unique USB address to the USB device. The host establishes a control pipe for the USB device using the assigned USB address and endpoint number zero. If the attached USB device is a hub and USB devices are attached to its ports, then the above procedure is followed for each of the attached USB devices. If the attached USB device is a function, then attachment notifications will be dispatched by USB software to interested host software. (2) Removal of USB devices. When a USB device has been removed from one of its ports, the hub automatically disables the port and provides an indication of device removal to the host. Then the host removes knowledge of the USB device. If the removed USB device is a hub, then the removal process must be performed for all of the USB devices that were previously attached to the hub. If the removed USB device is a function, removal notifications are sent to interested host software. (2) Firewire. The main competitor to USB is the Firewire standard (the IEEE 1394-1995 bus), which is a high-speed serial bus typically used for video transfers, whereas USB supports low-to-medium-speed peripherals. Firewire supports rates of approximately 100, 200, and 400 Mbps, known as S100, S200, and S400, respectively. Future versions of the standard promise higher data rates, and ultimately it is envisaged that rates of 3.2 Gbps will be achieved when optical fiber is introduced into the system. It uses point-to-point interconnect with a tree topology: 1000 buses with 64 nodes each gives 64,000 nodes. Firewire also supports automatic configuration and hot plugging. In addition to asynchronous transfer, Firewire supports isochronous data transfer, where a fixed bandwidth is dedicated to a particular peripheral. However, it has a maximum cable length limit of 4.5 m. This should
subsequently reduce the costs of production of controller interfaces and peripheral connectors, as well as simplifying the requirements placed on users when setting up their devices. Firewire is a more economical interface bus standard that performs fast and high-bandwidth data transmissions. There are two bus categories in Firewire: (a) Cable. This is a bus that connects external devices via a cable. This cable environment is a noncyclic network with finite branches consisting of bus bridges and nodes (cable devices). Noncyclic networks contain no loops and result in a tree topology, with devices daisy-chained and branched (where more than one device branch is connected to a device). Devices on the bus are identified by node IDs. Configuration of the node IDs is performed by the self-ID and tree-ID processes after every bus reset. This happens every time a device is added to or removed from the bus, and is invisible to the user. (b) Backplane. This type of topology is an internal bus. An internal IEEE-1394 device can be used alone, or incorporated into another backplane bus. Implementation of the backplane specification lags the development of the cable environment, but one could imagine internal IEEE-1394 hard disks in one computer being directly accessed by another IEEE-1394-connected computer. One of the key capabilities of IEEE-1394 is isochronous data transfer. Both asynchronous and isochronous transfers are supported, and are useful for different applications. Isochronous transmission transmits data like real-time speech and video, both of which must be delivered uninterrupted and at the rate expected, whereas asynchronous transmission is used to transfer data that is not tied to a specific transfer time. With IEEE-1394, asynchronous transmission is the conventional transfer method of sending data to an explicit address, and receiving confirmation when it is received. Isochronous, however, is an unacknowledged, guaranteed-bandwidth transmission method, useful for just-in-time delivery of multimedia-type data. An isochronous "talker" requests an amount of bandwidth and a channel number. Once the bandwidth has been allocated, it can transmit data preceded by a channel ID. The isochronous listeners can then listen for the specified channel ID and accept the data that follows. If the data is not intended for a node, that node will not be set to listen on the specific channel ID. Up to 64 isochronous channels are available,
and these must be allocated, along with their respective bandwidths, by an isochronous resource manager on the bus. By comparison, asynchronous transfers are sent to explicit addresses on the 1394 bus. When data is to be sent, it is preceded by a destination address, which each node checks to identify packets for itself. If a node finds a packet addressed to itself, it copies it into its receive buffer. Each node is identified by a 16-bit ID, containing the 10-bit bus ID and 6-bit node or physical ID. The actual packet addressing, however, is 64 bits wide, providing a further 48 bits for addressing a specific offset within a node's memory.
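A small Python sketch of this addressing scheme is given below: a 64-bit 1394 address is composed of a 10-bit bus ID, a 6-bit node (physical) ID, and a 48-bit offset into the node's address space. The helper names are illustrative; only the 10/6/48-bit split follows the description above.

# Hedged sketch of IEEE-1394 64-bit addressing: 10-bit bus ID, 6-bit node ID,
# and a 48-bit offset into the node's memory space. Helper names are illustrative.

def pack_1394_address(bus_id, node_id, offset):
    """Compose a 64-bit address from the 10/6/48-bit fields."""
    assert 0 <= bus_id < (1 << 10)
    assert 0 <= node_id < (1 << 6)
    assert 0 <= offset < (1 << 48)
    return (bus_id << 54) | (node_id << 48) | offset

def unpack_1394_address(address):
    """Split a 64-bit address back into (bus_id, node_id, offset)."""
    return (address >> 54) & 0x3FF, (address >> 48) & 0x3F, address & ((1 << 48) - 1)

addr = pack_1394_address(bus_id=1, node_id=5, offset=0x400)
print(hex(addr))                   # 0x45000000000400
print(unpack_1394_address(addr))   # (1, 5, 1024)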
3.2.2.5 AGP and Parallel Ports
(1) AGP. The accelerated graphics port (AGP) is a major advancement in the connection of three-dimensional graphics applications, and is based on an enhancement of the PCI bus. One of the major improvements is the speed of transfer between the main system memory and the local graphics card. This reduces the need for large areas of memory on the graphics card. The main gain in moving graphics memory from the display buffer (on the graphics card) to the main memory is the handling of texture information, because (1) it is generally read-only, and does not have to be displayed in any special order; (2) shifting texture data does not require a great deal of data transfer and it can be easily cached in memory, thus reducing data transfer, since a shifted texture can be loaded from the cached memory; (3) it is dependent on the graphics quality of the application, rather than the resolution of the display; (4) it is not persistent, as it resides in memory only for the duration that it is required. When the application has finished with it, the main memory can be assigned to another application. A display buffer, on the other hand, is permanent. The Intel 440LX was the first chip set product designed to support the AGP interface. The HOST BRIDGE AGP implementation is compatible with the accelerated graphics port specification 1.0. HOST BRIDGE supports only a synchronous AGP interface, coupled to the host bus frequency. The AGP 1.0 interface can reach a theoretical 528 MB/s transfer rate and AGP 2.0 can achieve a theoretical 1.056 GB/s transfer rate. The actual bandwidth will be limited by the capability of the HOST BRIDGE memory subsystem. (2) Parallel port. The parallel port is hardly the greatest technology. In its standard form, it allows only for simple communications
from the PC outwards. However, like RS-232, the parallel port is a standard port of the PC, and it is inexpensive. All parallel ports use a bidirectional link in either a compatible, nibble, or byte mode. These modes are relatively slow, as the software must monitor the handshaking lines (up to 100 kbps). For higher speeds, the enhanced parallel port (EPP) and extended capabilities port (ECP) protocol modes provide high-speed data transfer using automatic hardware handshaking.
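Returning to the AGP transfer rates quoted above, they follow directly from the nominal 66 MHz AGP clock, the 32-bit (4-byte) data path, and the number of transfers per clock. The short calculation below is a sketch for checking those figures; the constants are nominal values, not measurements of any particular chipset.

    AGP_CLOCK_HZ = 66e6      # nominal AGP clock used in the quoted figures
    BUS_WIDTH_BYTES = 4      # 32-bit data path

    def agp_bandwidth_mb_s(transfers_per_clock: int) -> float:
        """Theoretical AGP bandwidth in MB/s."""
        return AGP_CLOCK_HZ * BUS_WIDTH_BYTES * transfers_per_clock / 1e6

    print(agp_bandwidth_mb_s(2))  # AGP 1.0 (2x):  528 MB/s, as quoted
    print(agp_bandwidth_mb_s(4))  # AGP 2.0 (4x): 1056 MB/s, i.e., 1.056 GB/s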
3.2.2.6 RS-232, RS-422, RS-485, and RS-530
(1) RS-232. RS-232 (Recommended Standard-232) is a TIA/EIA standard for serial transmission between DTE and DCE. Using a 25-pin DB-25 or 9-pin DB-9 connector, its normal cable limitation of 50 ft can be extended to several hundred feet with high-quality cable. RS-232 defines the purpose and signal timing for each of the 25 lines; however, many applications use fewer than a dozen. RS-232 transmits a positive voltage for a 0 bit and a negative voltage for a 1. In 1984, this interface was officially renamed the TIA/EIA-232 standard (the current revision is E, 1991), although most people still call it RS-232. Table 3.7 lists some RS-232 specifications.
(2) RS-422 and RS-485. RS-422 (Recommended Standard-422) is a balanced serial interface for the transmission of digital data. The advantage of a balanced signal is its greater immunity to noise. The EIA describes RS-422 as a DTE to DCE interface for point-to-point connections. RS-422 was designed for greater distances and higher baud rates than RS-232. In its simplest form, a pair of converters from RS-232 to RS-422 (and back again) can be used to form an "RS-232 extension cord." Data rates of up to 100 kbps and distances up to 4000 ft can be accommodated with RS-422. RS-422 is also specified for multidrop (party-line) applications in which only one driver is connected to, and transmits on, a "bus" of up to 10 receivers. RS-485 (Recommended Standard-485) is a standard for sending serial data. It uses a pair of wires to send a differential signal over distances of up to 4000 ft without a repeater. The differential signal makes it very robust; RS-485 is one of the most popular communication methods used in industrial applications, where its noise immunity and long-distance capability are a perfect fit. RS-485 is capable of multidrop communications—up to 32 nodes.
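As a minimal illustration of driving such a serial link from a PC, the sketch below opens a port with common 9600-8-N-1 settings using the pyserial package. The port name and the query string the attached device understands are assumptions made for the example only; the same code applies whether the physical layer is RS-232 or an RS-485 adapter that the operating system presents as a serial port.

    import serial  # pyserial package

    # Assumed port name; on Windows this might be "COM3", on Linux "/dev/ttyUSB0".
    port = serial.Serial(
        "/dev/ttyUSB0",
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=1.0,            # seconds to wait for a reply
    )

    port.write(b"*IDN?\r\n")    # hypothetical query understood by the device
    reply = port.readline()     # read until newline or timeout
    print(reply.decode(errors="replace"))
    port.close()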
Table 3.7 RS-232 and RS-422 Specifications

Specification                                          RS-232                 RS-422
Mode of operation                                      Single-ended           Differential
Total number of drivers and receivers on one line      1 driver, 1 receiver   1 driver, 10 receivers
Maximum cable length (ft)                              50                     4000
Maximum data rate                                      20 kbits/s             10 Mbits/s
Maximum driver output voltage (V)                      +/-25                  -0.25 to +6
Driver output signal level, loaded (minimum) (V)       +/-5 to +/-15          +/-2.0
Driver output signal level, unloaded (maximum) (V)     +/-25                  +/-6
Driver load impedance (Ω)                              3 k–7 k                100
Maximum driver current in power-on high-Z state        N/A                    N/A
Maximum driver current in power-off high-Z state       +/-6 mA @ +/-2 V       +/-100 µA
Slew rate (maximum)                                    30 V/µs                N/A
Receiver input voltage range (V)                       +/-15                  -10 to +10
Receiver input sensitivity                             +/-3 V                 +/-200 mV
Receiver input resistance (Ω)                          3 k–7 k                4 k min
RS-485 can be configured for either half- or full-duplex operation. Half duplex typically uses one pair of wires; full duplex requires two pairs. Both RS-422 and RS-485 use a twisted pair (i.e., two wires) for each signal. They both use the same differential drive with identical voltage swings: 0 to +5 V. The main difference between RS-422 and RS-485 is that while RS-422 is strictly for point-to-point communications (and the driver is always enabled), RS-485 can be used for multiple-drop systems. Since the basic differential receivers of RS-423-A and RS-422-A are electrically identical, it is possible to interconnect equipment using RS-423-A receivers and generators on one side of the interface with equipment using RS-422-A generators and receivers on the other side, provided the leads of the receivers and generators are properly configured to accommodate such an arrangement and the cable is not terminated. Table 3.7 lists some specifications for RS-422. The data is coded as a differential voltage between the wires. The wires are named A (negative) and B (positive). When B > A then the output
is a mark (1, or off), and when A > B it is counted as a space (0, or on). In general, a mark is +1 VDC on the A line and +4 VDC on the B line; a space is +1 VDC on the B line and +4 VDC on the A line. At the transmitter end the voltage difference should be no less than 1.5 VDC and should not exceed 5 VDC. At the receiver end the voltage difference should be no less than 0.2 VDC. The voltage on either line must remain between −7 VDC and +12 VDC.
(3) RS-530. RS-530 (Recommended Standard-530) employs differential signaling on its send, receive, and clocking signals, as well as on its control and handshaking signals. The differential signals for RS-530 are labeled either "A" or "B." At both connectors, wire A always connects to A, and B connects to B. The RS-530 transmitter sends a data 0 (or logic ON) by setting the potential on the A signal 0.3 V (or more) higher than the voltage on the B signal. The transmitter sends a data 1 (or logic OFF) by setting the potential on the B signal 0.3 V (or more) higher than the voltage on the A signal. The voltage offset (from ground reference) is not to exceed 3 V; however, most receivers can handle much more; check the receiver data sheet for exact limits. This approach is relatively immune to noise when the cable is constructed so that the A and B signal wires are a twisted pair. Shielding the cable is generally not required. Data 0: A > B + 0.3 V; Data 1: B > A + 0.3 V. Example: Data 0: A = 2 V, B = 1 V; Data 1: A = 1 V, B = 2 V. Most receivers can handle both positive and negative voltages; again, check the data sheet of the part used to be sure. With the correct receivers it is even possible for the older V.35 (±5 V) signaling to be wired to RS-530 or V.11; this is how Cisco and others provide many different interfaces on their Smart Serial connectors.
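The decision rules above translate directly into a small decoder. The sketch below is illustrative only: it applies the RS-422/RS-485 mark/space convention and the 0.3 V RS-530 threshold to a pair of sampled line voltages.

    from typing import Optional

    def decode_rs422(v_a: float, v_b: float) -> int:
        """RS-422/RS-485 convention: B > A is a mark (1), A > B is a space (0)."""
        return 1 if v_b > v_a else 0

    def decode_rs530(v_a: float, v_b: float) -> Optional[int]:
        """RS-530 convention: data 0 when A exceeds B by at least 0.3 V,
        data 1 when B exceeds A by at least 0.3 V; otherwise undefined."""
        if v_a > v_b + 0.3:
            return 0
        if v_b > v_a + 0.3:
            return 1
        return None  # within the transition region; no valid bit

    # Example from the text: A = 2 V, B = 1 V is a data 0; A = 1 V, B = 2 V is a data 1.
    print(decode_rs530(2.0, 1.0), decode_rs530(1.0, 2.0))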
3.2.2.7 IEEE-488
Hewlett-Packard originally developed the IEEE-488 bus, then called the HP-IB (Hewlett-Packard Interface Bus), to connect and control programmable instruments and to provide a standard interface for communication between instruments from different sources. The interface quickly gained popularity in the computer industry. Because the interface was so versatile, the IEEE committee renamed it the General Purpose Interface Bus (GPIB). Almost any instrument can be used with the IEEE-488 specification, because it says nothing about the function of the instrument itself, or about the form of the instrument's data. Instead, the specification defines a separate component, the interface, which can be added to the instrument.
The signals passing into the interface from the IEEE-488 bus and from the instrument are defined in the standard. The instrument does not have complete control over the interface. Often the bus controller tells the interface what to do. The active controller performs the bus control functions for all the bus instruments. (1) IEEE-488 standards. The IEEE-488.1 standard greatly simplified the interconnection of programmable instruments by clearly defining mechanical, hardware, and electrical protocol specifications. For the first time, instruments from different manufacturers were connected by a standard cable. This standard does not address data formats, status reporting, message exchange protocol, common configuration commands, or device-specific commands. The IEEE-488.2 standard enhances and strengthens the IEEE-488.1 standard by specifying data formats, status reporting, error handling, controller functionality, and common instrument commands. It focuses mainly on the software protocol issues and thus maintains compatibility with the hardware-oriented IEEE-488.1 standard. IEEE-488.2 systems tend to be more compatible and reliable. The IEEE-488 standard allows up to 15 devices to be interconnected on one bus. Each device is assigned a unique primary address, ranging from 0 to 30, by setting the address switches on the device. A secondary address may also be specified, ranging from 0 to 30. See the device documentation for more information on how to set the device primary and optional secondary address. The IEEE-488 bus specifies a maximum total cable length of 20 m with no more than 15 devices connected to the bus and at least two-thirds of the devices powered on. A maximum separation of 4 m between devices and an average separation of 2 m over the full bus should be followed. Bus extenders and expanders are available to overcome these system limits. The standard IEEE-488 cable has both a plug and a receptacle connector at each end. Special adapters and nonstandard cables are available for special interconnect applications. (2) Interface signals and data lines. At power-up time, the IEEE-488 interface that is programmed to be the System Controller becomes the Active Controller in charge. The System Controller has several unique capabilities, including the ability to send Interface Clear (IFC) and Remote Enable (REN) commands. IFC clears all device interfaces and returns control to the System Controller. REN allows devices to respond to bus data once they are addressed to listen. The System Controller may optionally
pass control to another controller, which then becomes Active Controller. There are three types of devices that can be connected to the IEEE-488 bus (listeners, talkers, and controllers). Some devices include more than one of these functions. The standard allows a maximum of 15 devices to be connected on the same bus. A minimum system consists of one controller and one talker or listener device (e.g., an HP 700 with an IEEE-488 interface and a voltmeter). It is possible to have several controllers on the bus, but only one may be active at any given time. The Active Controller may pass control to another controller, which in turn can pass it back or on to another controller. A listener is a device that can receive data from the bus when instructed by the controller, and a talker transmits data onto the bus when instructed. The controller can set up a talker and a group of listeners so that it is possible to send data between groups of devices as well. The IEEE-488 interface system consists of 16 signal lines and 8 ground lines. The 16 signal lines are divided into 3 groups (8 data lines, 3 handshake lines, and 5 interface management lines). The lines DIO1 through DIO8 are used to transfer addresses, control information, and data. The formats for addresses and control bytes are defined by the IEEE-488 standard. Data formats are undefined and may be ASCII (with or without parity) or binary. DIO1 is the Least Significant Bit (note that this will correspond to bit 0 on most computers). (3) Handshake lines and handshaking. The three handshake lines (NRFD, NDAC, DAV) control the transfer of message bytes among the devices and form the method for acknowledging the transfer of data. This handshaking process guarantees that the bytes on the data lines are sent and received without any transmission errors and is one of the unique features of the IEEE-488 bus. (a) The NRFD (Not Ready for Data) handshake line is asserted by a listener to indicate it is not yet ready for the next data or control byte. Note that the controller will not see NRFD released (i.e., ready for data) until all devices have released it. (b) The NDAC (Not Data Accepted) handshake line is asserted by a listener to indicate it has not yet accepted the data or control byte on the data lines. Note that the controller will not see NDAC released (i.e., data accepted) until all devices have released it. (c) The DAV (Data Valid) handshake line is asserted by the talker to indicate that a data or control byte has been placed on
the data lines and has had the minimum specified stabilizing time. The byte can now be safely accepted by the devices. The handshaking process is outlined as follows. When the controller or a talker wishes to transmit data on the bus, it sets the DAV line high (data not valid), checks that the NRFD and NDAC lines are both low, and then puts the data on the data lines. When all the devices that can receive the data are ready, each releases its NRFD (not ready for data) line. When the last receiver releases NRFD, and it goes high, the controller or talker takes DAV low, indicating that valid data is now on the bus. In response, each receiver takes NRFD low again to indicate it is busy and releases NDAC (not data accepted) when it has received the data. When the last receiver has accepted the data, NDAC will go high and the controller or talker can set DAV high again to transmit the next byte of data. Note that if, after setting the DAV line high, the controller or talker senses that both NRFD and NDAC are high, an error will occur. Also, if any device fails to perform its part of the handshake and release either NDAC or NRFD, data cannot be transmitted over the bus; eventually a timeout error will be generated. The speed of the data transfer is controlled by the response of the slowest device on the bus; for this reason it is difficult to estimate data transfer rates on the IEEE-488 bus, as they are always device dependent. (4) Interface management lines. The five interface management lines (ATN, EOI, IFC, REN, and SRQ) manage the flow of control and data bytes across the interface. (a) The ATN (Attention) signal is asserted by the controller to indicate that it is placing an address or control byte on the data bus. ATN is released to allow the assigned talker to place status or data on the data bus. The controller regains control by reasserting ATN; this is normally done synchronously with the handshake to avoid confusion between control and data bytes. (b) The EOI (End or Identify) signal has two uses. A talker may assert EOI simultaneously with the last byte of data to indicate end-of-data. The controller may assert EOI along with ATN to initiate a parallel poll. Although many devices do not use parallel poll, all devices should use EOI to end transfers (many currently available ones do not). (c) The IFC (Interface Clear) signal is asserted only by the System Controller in order to initialize all device interfaces to a known state. After releasing IFC, the System Controller is the Active Controller.
(d) The REN (Remote Enable) signal is asserted only by the System Controller. Its assertion does not place devices into remote control mode; REN only enables a device to go into remote mode when it is addressed to listen. When in remote mode, a device should ignore its local front panel controls. (e) The SRQ (Service Request) line is like an interrupt: it may be asserted by any device to request that the controller take some action. The controller must determine which device is calling for the SRQ by conducting a serial poll. The requesting device releases SRQ when it is polled.
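In practice, most of this protocol is handled by the GPIB interface hardware and its driver, and application code works at the level of messages. The sketch below uses the PyVISA package to query an instrument at an assumed primary address of 22; the resource name is an assumption for the example, and *IDN? is the IEEE-488.2 common identification query.

    import pyvisa

    rm = pyvisa.ResourceManager()
    # Assumed resource name: GPIB board 0, primary address 22.
    inst = rm.open_resource("GPIB0::22::INSTR")
    inst.timeout = 5000  # milliseconds

    # *IDN? is the IEEE-488.2 common identification query.
    print(inst.query("*IDN?"))

    inst.close()
    rm.close()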
3.3 Human–Machine Interface in Industrial Control

3.3.1 Overview
The term “user interface” refers to the methods and devices that are used to accommodate interaction between machines and the human beings who use them. The user interface of a mechanical system, a vehicle, an industrial installation, and so on is often referred to as the human–machine interface. In any industrial control system, the human–machine interface delivers information from the machine to people and allows people to control, monitor, and record the system through means such as images, keyboards, screens, video, radio, Ethernet connections, and software. The human–machine interface can take many forms. Although there are many techniques and methods used in industry, the human–machine interface always accomplishes two fundamental tasks: communicating information from the machine to the user, and communicating information from the user to the machine. Two industrial applications of the human–machine interface are given below to demonstrate its importance in industry and industrial control. Given how essential the human–machine interface has become in industry, however, these two examples are far from representing the full range of its industrial applications. Robot control is a good example of an application that depends on the human–machine interface. Such control can be based on human speech and gestures instead of keyboard or joystick control. In turn, the robot can also respond to the human control orders by using speech and gestures. When both the operator and the robot understand the environment and the work objects, they can communicate more easily and the work task can be completed collaboratively. In most robots, the human–machine interface is a key
component in any work cell. The human–machine interface must allow operators to run the equipment and cell in an intuitive manner. It must be configurable to give each level of personnel appropriate access to various layers of functionality. An autonomous service robot operates in the user's own environment, performing independent tasks to reach user goals. Applications include, for instance, delivery agents in hospitals and factories, and cleaning robots in the home or in supermarkets. The latest robot controllers are now beginning to offer built-in human–machine interface functionality, complete with touch-screen interfaces, status indicators, program selection switches, part counters, and various other functions, as shown in Fig. 3.27. Another example is the SCADA system, in which the human–machine interface is an essential component. In industry, the human–machine interface in SCADA was born out of a need for a user-friendly front end to a control system containing PLCs. While PLCs do provide automated, preprogrammed control over a process, they are usually distributed across a plant, making it difficult to gather data from them manually. Additionally, the PLC information is usually in a crude, user-unfriendly format. The human–machine interface of a SCADA system gathers information from the PLCs via some form of network, and combines and formats the information. A sophisticated human–machine interface may also be linked to a database, to provide instant trending, diagnostic data,
Figure 3.27 A touch-screen shot of the human–machine interface for a robot (courtesy of Siemens).
scheduled maintenance procedures, logistic information, detailed schematics for a particular sensor or machine, and expert-system troubleshooting guides. Since about 1998, many companies, especially all major PLC manufacturers, have offered integrated SCADA and human–machine interface systems covering a comprehensive range of facilities for industrial automation and process control. These companies recognized the benefits of using the same reliable monitoring and control software throughout their business, from the shop floor through to top management. Many of them used open and nonproprietary communications protocols. Numerous specialized third-party SCADA packages with a human–machine interface, offering built-in compatibility with most major PLCs, have also entered the market, allowing mechanical engineers, electrical engineers, and technicians to configure the human–machine interface by themselves, without the need for a custom-made program written by a software developer. Figure 3.28 shows a SCADA system and its Web-based control screen.
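The gathering-and-formatting role described above can be pictured as a simple polling loop. The sketch below is purely illustrative: read_plc_tag is a hypothetical stand-in for whatever protocol driver (Modbus, OPC, or a vendor API) actually fetches a value from a PLC, and the PLC and tag names are invented for the example.

    import time

    def read_plc_tag(plc: str, tag: str) -> float:
        """Hypothetical stand-in for a protocol driver call (Modbus, OPC, etc.)."""
        return 0.0  # dummy value; replace with the real driver for your PLC network

    PLCS = {"pump_station": ["flow_rate", "discharge_pressure"],
            "blower_house": ["motor_current"]}

    def poll_once() -> dict:
        """Gather raw values from every PLC and reformat them for the HMI layer."""
        snapshot = {}
        for plc, tags in PLCS.items():
            for tag in tags:
                value = read_plc_tag(plc, tag)
                snapshot[f"{plc}.{tag}"] = round(value, 2)  # simple formatting step
        snapshot["timestamp"] = time.time()
        return snapshot

    print(poll_once())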
3.3.2 Human–Machine Interactions
Automated systems have penetrated virtually every area of our private life and our work environments. Development work on technical products is accelerating, and the products themselves are becoming increasingly complex and powerful. Human–machine interaction is already playing a vital role along the entire production process, from planning individual links in the production chain right through to designing the finished product. Innovative technology is made for humans, used and monitored by humans. The optimum form for this interaction depends on whether a technical innovation is reliable in operation, is safe, is accepted by personnel, and, last but not least, is cost-effective. This interplay between technology and user, known as human–machine interaction, is hence at the very heart of industrial automation, automated control, and industrial production.
3.3.2.1 The Models for Human–Machine Interactions

Modeling the human–machine interaction is done simply to depict how human and machine interact in a system. The human–machine interaction model illustrates a typical information flow (or process context) between the “human” and “machine” components of a system. Figure 3.29 shows the components involved in each side of the human–machine interaction. The environment side has three components: Machine display component,
Figure 3.28 A SCADA system and its Web-based control screen (courtesy of Siemens).
Figure 3.29 The components involved in the human–machine interaction: the machine (environment) side comprises the Machine display, Machine CPU, and Machine I/O device components; the human side comprises the Human sensory, Human cognitive, and Human musculoskeletal components.
Machine CPU component, and Machine I/O device component. The human side has another three components: Human sensory component, Human cognitive component, and Human musculoskeletal component. In modern control systems, a model is a common architecture for grouping several machine configurations under one label. The set of models in a control system corresponds to a set of unique machine behaviors. The operator interacts with the machine by switching among models manually, or monitoring the automatic switching triggered by the machine. However, our understanding of models and their potential contribution to confusion and error is still far from complete. For example, there is widespread disagreement among user interface designers and researchers about what models are, independent of how they affect users. This blurred vision, found not only in the human–machine interaction domain, impedes our ability to develop methods for representing and evaluating human interaction with control systems. This limitation is magnified in high-risk systems such as automated cockpits, for which there is an urgent need to develop methods that will allow designers to identify, early in the design phase, the potential for error. The errors arising from modeling the human–machine interaction are thus an important issue that cannot be ignored. (1) Definition and construct. The constructs of the human–machine interaction model that will be discussed below are measurable aspects of human interaction with machines. As such, they can be used to form the foundation of a systematic and quantitative analysis. (a) Models’ behaviors. One of the first treatments of models came in the early 1960s from the science of cybernetics, the comparative study of human control systems and complex machines. The first treatments set forth the construct of a machine with different behaviors. The following is a simplified description of the machine behaviors’ construct: A given machine may have several components (e.g., X1, X2, X3). For each component there is a finite set of states. On “Startup,” for example, the machine initializes itself so that the active state of component X1 is “a,” X2 is “f,” and X3 is “k” (see Table 3.8). The vector of states (a, f, k) thus defines the machine’s configuration on “Startup.” Once a built-in test is performed, the machine can move from “Startup” to “Ready.” The transition to “Ready” can be defined so that for component X1, state “a” undergoes a transition and becomes state “b”; for X2 state “f” transitions to “g”, and for X3, “k”
changes to "l". The new configuration (b, g, l) defines the "Ready" model of the machine. Now, there might be a third set of transitions, for example, to "Engaged" (c, h, m), and so on. The set of configurations labeled "Startup," "Ready," and "Engaged," if embedded in the same physical unit, corresponds to a machine with three different ways of behaving. A real machine whose behavior can be so represented is defined as a machine with "input." The input triggers the machine to change its behavior. One source of input to the machine is manual; the user selects the model, for example, by turning a switch, and the corresponding transitions take place. But there can be another type of input: if some other machine selects the model, the input is "automatic." More precisely, the output of the other machine becomes the input to our machine. For example, a separate machine performs a built-in test and outputs a signal that causes the machine in Table 3.8 to transition from "Startup" to "Ready," automatically. Here in this book, we first define a model as a machine configuration that corresponds to a unique behavior. This is a very broad definition of the term. Later on, we constrain this definition from our special perspective: user interaction with control systems that employ models. (b) Model's error and ambiguity. Model errors fall into a class of errors that involve forming and carrying out an intention. That is, when a situation is falsely classified, the resulting action may be one that was intended and appropriate for a perceived or expected situation, but inappropriate for the actual situation. However, there remains an open question as to what kind of situations lead to model error. This issue can be addressed by an example of a word processor. Specifically, we looked at situations in which the user's input has different interpretations, depending on model. For example, in one
Table 3.8 A Machine with Different Behaviors

              X1      X2      X3
Startup       a       f       k
Ready         b       g       l
Engaged       c       h       m
word processing application, the keystroke d can be interpreted either (1) as the literal text "d," or (2) as the command "delete." The interpretation depends on the word processor's active model: either Text or Command. Model error can be linked to model ambiguity by introducing the notion of user expectations. In this view, "model ambiguity" will result in model error only when the user has a false expectation about the result of his or her actions. There are two types of ambiguity: one that leads to model error, and one that does not lead to model error. An example of model ambiguity that does lead to model error is timesharing operating systems in which keystrokes are buffered until a "Return" or "Enter" key is pressed. However, when the buffer gets full, all subsequent keystrokes are ignored. This feature leads to two possible outcomes: all, or only a portion, of the keystrokes will be processed. The two outcomes depend on the state of the buffer, which is either "not full" or "full." Since the state of the buffer is unknown to the user, false expectations may occur. The user's action of hitting the "Return" key and then seeing only part of what was keyed appear on the screen is therefore a "model error," because the buffer had already filled up. An example of model ambiguity that does not lead to model error is a common end-of-line algorithm which determines the number of words in a line. An ambiguity is introduced because the criteria for including the last word on the current line, or wrapping it to the next line, are not known to the user. Nevertheless, as long as the algorithm works reasonably well, the user will not complain, because he or she has not formed any expectation about which word will stay on the line or scroll down, and either outcome is usually acceptable. Therefore, model error will not take place, even though model ambiguity does indeed exist. (c) User factors: Task, knowledge, and ability. One important element that constrains user expectations is the task at hand. If discerning between two or more different machine configurations is not part of the user's task, model error will not occur. Consider, for example, the radiator fan of your car. Do you know what configuration (OFF, ON) it is in? The answer, of course, is no. There is no such indication in most modern cars. The fan mechanism changes its model automatically depending on the output of the water temperature sensor. Model
ambiguity exists because at any point in time, the fan mechanism can change its model or stay in the current model. The configuration of the fan is completely ambiguous to the driver. But does such model ambiguity lead to model error? The answer is obvious: not at all, because monitoring the fan configuration is not part of the driver's task. Therefore, the user's task is an important determinant of which machine configurations must be tracked by the user and which machine configurations need not be tracked. The second element that is part of the assessment of user expectations is user knowledge about the machine's behaviors. By this, we mean that the user constructs some mental model of the machine's "response map." This mental model allows the user to track the machine's configuration and, most importantly, to anticipate the next configuration of the machine. Specifically, our user must be able to predict what the new configuration will be following a manually triggered event or an automatically triggered event. The problem of reliably anticipating the next configuration of the machine becomes difficult when the number of transitions between configurations is large. Another factor in user knowledge is the number of conditions that must be evaluated as TRUE before a transition from one model to another takes place. For example, the automated flight control systems of modern aircraft can execute a fully automatic (hands-off) landing. Several conditions (two engaged autopilots, two navigation receivers tuned to the correct frequency, identical course set, and more) must be TRUE before the aircraft will execute an automatic landing. Therefore, in order to reliably anticipate the next model configuration of the machine, the user must have a complete and accurate model of the machine's behavior, including its configurations, transitions, and associated conditions. This model, however, does not have to be complete in the sense that it describes every configuration and transition of the machine. Instead, as discussed earlier, the details of the user's model must be merely sufficient for the user's task, which is a much weaker requirement. The third element in the assessment of user expectations is the user's ability to sense the conditions that trigger a transition. Specifically, the user must be able to first sense the events (e.g., a flight director is engaged; the aircraft is more than 400 ft above the ground) and then evaluate whether or not the transition to a model (say, vertical navigation) will take place. These events are usually made known to the user
through an interface. Nevertheless, there are more than a few control systems in which the interface does not depict the necessary input events. Such interfaces are said to be incorrect. In large and complex control systems, the user may have to integrate information from several displays in order to evaluate whether the transition will take place or not. For example, one of the conditions for a fully automated landing in a two-engine jetliner is that two separate electrical sources must be online, each one supplying its respective autopilot. This information is external to the automatic flight control system, in the sense that it involves another aircraft system. The user's job of integrating events, some of which are located in different displays, is not trivial. One important requirement for an efficient design is for the interface to integrate these events and provide the user with a succinct cue. In summary, we have discussed three elements that help to determine whether a given model ambiguity will or will not lead to false expectations. First is the relationship between model ambiguity and the user's task. If distinguishing between models (e.g., radiator fan "ON" or "OFF") is not part of the user's task, no meaningful errors will occur. Second, in a case where model ambiguity is relevant to the user's task, we assess the user's knowledge. If the user has an inaccurate and/or incomplete model of the machine's response map, he or she will not be able to anticipate the next configuration and model confusion will occur. Third, we evaluate the user's ability to sense input events that trigger transitions. The interface must provide the user with all the necessary input events. If it does not, no accurate and complete model will help; the user may know what to look for but will never find it. As a result, confusion and model error will occur. (2) Classifications and types. A classification of the human–machine interaction models is proposed here to encompass three types of models in automated control systems: (1) "Interface models" that specify the behavior of the interface, (2) "Functional models" that specify the behavior of the various functions of a machine, and (3) "Supervisory models" that specify the level of user and machine involvement in supervising the process. Before we proceed to discuss this classification, we shall briefly describe a modeling language, "State Charts," that will allow us to represent these models. In Chapter 5 of this book, the Finite State Machine model is presented as a natural medium for describing the behavior of a model-based
system. A basic fragment of such a description is a state transition, which captures the states, conditions or events, and transitions in a system. The State Chart language is a visual formalism for describing states and transitions in a modular fashion by extending the traditional Finite State Machine to include three unique features: "hierarchy," "concurrency," and "broadcast." The "hierarchy" is represented by substates encapsulated within a superstate. The "concurrency" is shown by means of two or more independent processes working in parallel. The "broadcast" mechanism allows for coupling of components, in the sense that an event at one end of the network can trigger transitions at another. These features of the State Chart are further explained in the following three examples. (a) Interface models. Figure 3.30 is a modeling structure of an interface model. It has three concurrently active processes (separated by a broken line): speed knob behavior, speed knob indicator, and speed window display. The behavior of the speed knob (middle process) is either "normal" or "pushed-in." (These two states are depicted, in the State Chart language, by two rounded rectangles.) The initial state of the speed knob is normal (indicated by the small arrow above the state), but when momentarily pushed, the speed knob engages or disengages the Speed Intervene submode of the vertical navigation (VNAV, hereafter) model. The transition between normal and pushed-in is depicted by a solid arrow and the label "push" describes the triggering event. The transition back to normal occurs immediately as the pilot lifts his or her finger (the knob is spring loaded).
Figure 3.30 An interface model: three concurrent processes (speed knob behavior: Normal/Pushed-in; speed knob indicator: blank; speed window display: Closed/Open), with transitions d1 and d2 conditioned on disengaging or engaging VNAV.
The left-most process shown in Fig. 3.30 is the speed knob indicator. In contrast to many such knobs that have indicators, the Boeing-757 speed knob itself has no indicator and therefore is depicted as a single (blank) state. The right-most process is the speed window display, which can be either closed or open. After VNAV is engaged, the speed window display is closed (implying that the source of the speed is another component, the flight management computer). After VNAV is disengaged, and a semiautomatic model such as vertical speed is active, the speed window display is open, and the pilot can observe the current speed value and enter a new speed value. This logic is depicted in the speed window display process in Fig. 3.30: transition d1 from closed to open is conditioned by the event "disengaging VNAV," and d2 will take place when the pilot is "engaging VNAV." When in the vertical navigation model, the pilot can engage the Speed Intervene submodel by pushing the speed knob. This event, "push" (which can be seen in the speed knob behavior process), triggers event b, which is then broadcast to the other processes. Being in VNAV and sensing event b ("in VNAV and b") is another (.OR.) condition on transition d1 from closed to open. Likewise, it is also the condition on transition d2 that takes us back to closed. Thus the behavior of the speed knob is cyclic: the pilot can push the knob to close the window and push it again to open it, ad infinitum. As explained above and seen in Fig. 3.30, there are two sets of conditions on the transitions between closed and open. Of all these conditions, one, namely "disengaging VNAV," is not always directly within the pilot's control; it sometimes takes place automatically (e.g., during a transition from VNAV to the altitude hold model). Manual reengagement of VNAV will cause the speed parameter in the speed window to be replaced by the economy speed computed by the flight management computer. If the speed value in the speed window was a restriction required by air traffic control (ATC), the aircraft will now accelerate/decelerate to the computed speed and the ATC speed restriction will be ignored! (b) Functional models. When we survey the use of models in devices, an additional type of model emerges: the "functional model," which refers to the active function of the machine that produces a distinct behavior. An automatic gearshift mechanism of a car is one example of a machine with different models, each one defining different behaviors.
As we move to a discussion of functional models and their uses in machines that control a timed process, we encounter the concept of "dynamics." In dynamic control systems, the configuration and resulting behavior of the machine are a combination of a model and its associated parameter (e.g., speed, time, etc.). Referring back to our car example, the active model is the engaged gear, say Drive, and the associated parameter is the speed that corresponds to the angle of the accelerator pedal (say, 65 miles/h). Both model (Drive) and parameter (65 miles/h) define the configuration of the mechanism. Figure 3.31 depicts the structure of a functional model in the dynamic automated control system of a modern airliner. Two concurrent processes are depicted in this modeling structure: (1) models, and (2) parameter sources. Three models are depicted in the vertical models superstate in Fig. 3.31: vertical navigation, altitude hold, and vertical speed (the default model). All are functional models related to the vertical aspect of flight. The speed parameter can be obtained from two different sources: the flight management computer or the model control panel. The default source of the speed parameter, indicated by the small arrow
Figure 3.31 A functional model: two concurrent processes, the vertical autopilot models (vertical navigation, altitude hold, vertical speed) and the speed parameter sources (flight management computer, model control panel), with transition m2 triggering event rv1.
in Fig. 3.31, is the model control panel. As mentioned in the discussion on interface models, engagement of vertical navigation via the model control panel will cause a transition to the flight management computer as the source of speed. This can be seen in Fig. 3.31, where transition m2 will trigger event rv1, which, in turn, triggers an automatic transition (depicted as a broken line) from "model control panel" to "flight management computer." In many dynamic control mechanisms, some model transitions trigger a parameter source change while others do not. Such independence appears to be a source of confusion to operators. (c) Supervisory models. The third type of model we discuss here is the "supervisory model," which is sometimes also referred to as a "participatory" or "control" model. Modern automated control mechanisms usually allow the user flexibility in specifying the level of human and machine involvement in controlling the process. That is, the operator may decide to engage a manual model in which he or she is controlling the process; a semiautomatic model in which the operator specifies target values, in real time, and the machine attempts to maintain them; or fully automatic models in which the operator specifies in advance a sequence of target values, that is, parameters, and the machine executes these automatically, one after the other. Figure 3.32 is an example of a supervisory model structure that can be found in many control mechanisms, such as the automated flight control system, the cruise control of a car, and robots on assembly lines. The modeling structure consists of hierarchical layers of superstates, each with its own set of models. The supervisory models in the Automated Flight Control System are organized hierarchically. Three main levels are described in Fig. 3.32. The highest level of automation is the vertical navigation model (level 3), depicted as a superstate at the top of the models pyramid. Two submodels are encapsulated in the vertical navigation model, VNAV Speed and VNAV Path, each exhibiting a somewhat different control behavior. One level below (level 2) are two semiautomatic models: vertical speed and altitude hold. One model in the Automated Flight Control System, altitude capture, can only be engaged automatically; no direct manual engagement is possible. This model engages automatically when the aircraft is beginning the level-off maneuver to capture the selected altitude. When the aircraft is
Figure 3.32 A supervisory model: the automated flight control system models arranged hierarchically, with vertical navigation (VNAV speed, VNAV path) at level 3, the semiautomatic vertical speed and altitude hold models at level 2, and altitude capture at level 1, linked by automatic transitions m3 and m4.
several hundred feet from the selected altitude, an automatic transition from any climb model to altitude capture takes place (m3); for example, a transition from vertical navigation or vertical speed to altitude capture (m3). Finally, when the aircraft reaches the selected altitude, a transition from altitude capture back to the altitude hold model also takes place automatically (m4). In summary, we have illustrated a modeling language, State Charts, for representing human interaction with control systems, and proposed a classification of three different types of models that are employed in computers, devices, and supervisory control systems. The "Interface models" change display format. The "Functional models" allow for different functions and associated parameters. Last are "Supervisory models" that specify the level of supervision (manual, semiautomatic, and fully automatic) in the human–machine system. The three types of models described here are essentially similar, in that they all define the manner in which a certain component of the machine behaves. The component may be the interface
only, a function of the machine, or the level of supervision. This commonality brings us back to our general working definition of the term “model,” a machine configuration that corresponds to unique behavior.
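To make the "machine with different behaviors" construct of Table 3.8 concrete, the sketch below encodes the Startup, Ready, and Engaged configurations as a small finite state machine driven by manual and automatic inputs. It is a minimal illustration under assumed event names (built_in_test_passed, engage_switch, disengage_switch), not a model of any particular controller.

    # Configurations from Table 3.8: each model is a vector of component states.
    CONFIGURATIONS = {
        "Startup": ("a", "f", "k"),
        "Ready":   ("b", "g", "l"),
        "Engaged": ("c", "h", "m"),
    }

    # Allowed transitions and the input (manual or automatic) that triggers each.
    TRANSITIONS = {
        ("Startup", "built_in_test_passed"): "Ready",    # automatic input
        ("Ready", "engage_switch"):          "Engaged",  # manual input
        ("Engaged", "disengage_switch"):     "Ready",    # manual input
    }

    class Machine:
        def __init__(self):
            self.model = "Startup"

        def input(self, event: str) -> None:
            """Apply a manual or automatic input; unknown events leave the model unchanged."""
            self.model = TRANSITIONS.get((self.model, event), self.model)

        @property
        def configuration(self):
            return CONFIGURATIONS[self.model]

    m = Machine()
    m.input("built_in_test_passed")   # automatic transition to Ready -> (b, g, l)
    m.input("engage_switch")          # manual transition to Engaged -> (c, h, m)
    print(m.model, m.configuration)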
3.3.2.2 Systems of Human–Machine Interactions
There are three architectures of human–machine systems that are currently popular in industrial control: (1) the adaptive human–machine interface, (2) the supervisory human–machine interface, and (3) the distributed human–machine interface. (1) Adaptive human–machine interface. In a complex control system, the human–machine interface attempts to give users the means to easily perceive and manipulate huge quantities of information under resource constraints such as time, cognitive workload, devices, etc. Intelligence in the human–machine interface makes the control system more flexible and more adaptable. One subset of intelligent user interfaces is the adaptive interface. An adaptive interface modifies its behavior according to some defined constraints in order to best satisfy all of them, and varies in the ways and means used to achieve adaptation. In the human–machine interface of an industrial control system, the operators, the system, and the context are continuously changing and are sometimes in contradiction with one another. Thus, this kind of interface can be viewed as the result of a balance between these three components and their changing relative importance and priority. An adaptive interface aims at assisting the operator in acquiring the most salient information in any context, in the most appropriate form, and at the most opportune time. In any industrial control system, two main factors are considered: the system that generates the information stream and the operator to whom this stream is presented. The system and the operator share a common goal: to control the process and to solve any problems that may arise. This common objective makes them cooperate, although they may both have their own goals. The role of the interface is to integrate these different goals, with their different levels of importance, and the various constraints that come from the task, the environment, or the interface itself, in order to produce an information presentation that best harmonizes the set of all these parameters. Specifically, an industrial process control system should react consistently and in a timely fashion, without disturbing the operator needlessly in his or her task, while the most salient pieces of information should still be presented in the most appropriate way.
The two main adaptation triggers used to modify the human–machine interface are the following: (a) The process. When the process moves from a normal state to a disturbed state, the streams of information may become denser and more numerous. To avoid any cognitive overload problems, the interface thus acts as a filter that channels the streams of information. To this end, it adapts the presentation of the pieces of information in order to help the operator identify and solve the problem more quickly, more easily, and more efficiently. (b) The operator. It is much more difficult and trickier to drive the adaptation from the operator's state. As a matter of fact, the interface has to infer, based on the operator's actions, whether he or she is reacting incorrectly and needs help. Then, it may decide to adapt itself to highlight the problem and suggest solutions to assist the user. The aim of the adaptation triggered by the process or the operator is to adapt the composition of the streams of information, that is, to adapt the organization and the presentation of the pieces of information on the interface in the best possible way, according to the state of the process and the inferred state of the operator. What is expected is to improve the communication between the system and the user. The means proposed in an adaptive human–machine interface to reach this are the following: (i) Highlight relevant pieces of information. The importance of a piece of information depends on its relevance according to the particular goals and constraints of each of the entities that participates in the communication between the system and the operator. (ii) Optimize space usage. According to the current usage of the resources, it may turn out that it is necessary to reorganize the display space to cope with new constraints and parameters. (iii) Select the best representation. According to the piece of information, its importance, the resources available, and the media currently in use, the most appropriate medium to communicate with the operator should be used. (iv) Timeliness of information. The display of a particular piece of information should be timely with regard to both the process and the operator. This adaptation should follow the evolution of the process over time, but it should also adapt the timing of the displayed information to the inferred needs of the operator.
(v) Perspectives. In traditional interfaces, the operator has to decide what, where, when, and how the information should be presented. Thus, an operator can wonder whether he or she needs an adaptive interface to achieve his or her task, or whether it will be more of a bother than a help. This raises at least four questions: (1) from the client's point of view, whether the cost of developing an adaptive human–machine interface is justifiable; (2) from the human–machine interface designer's point of view, whether this kind of interface is usable and how to evaluate its usability; (3) from the developer's point of view, what are the best technical solutions to implement an efficient system within the required time; and (4) from the operators' point of view, whether they consider such an adaptive interface a collaborator or a competitor. (2) Supervisory human–machine interface. A supervisory human–machine interface can be used in systems where there is a considerable distance between the control room and the machine house of a plant. It is from this machine house that a controller such as a SCADA system or PLC controls objects such as pumps, blowers, and purification monitors. To provide the data communication, the supervisory software of the human–machine interface is linked with the controllers over a network such as Ethernet, Controller Area Network, and so on. The supervisory software is such that only one person is needed at any one time to monitor the whole plant from a single master device. The generated graphics show a clear representation on screen of the current status of any part of the system. A number of alarms are automatically activated directly on screen if parameters deviate from their tight tolerance band. This ensures extremely rapid updating of the control room screen contents. All the calculations for the controllers are performed by the control software, using constant feedback from sensors throughout the production process. Experience from many applications shows that the supervisory human–machine interface is an ideal software package for such cases. Thanks to its interactive configuration and setup assistants, a supervisory human–machine interface allows the system to be brought up, run, and tested straightforwardly. The supervisory human–machine interface is an open architecture that offers all the functions and options necessary for data collection and graphical representation of data on the operator screen. The system provides comprehensive logging of all
measured values with databases. By accessing this database and by using real-time measurements, a wide variety of reports and trend curves can be viewed on the screen or output to a printer. (3) Distributed human–machine interface. The distributed human–machine interface is a component-based approach. In such a system, the human–machine interface can directly access any controller component, which also means that each controller exposes its own human–machine interface. Since all the system components are location-transparent, the human–machine interface can bind to a component anywhere, be it in-process, local-process, or remote-process. The most likely case is remote binding, because the human–machine interface and the controller would normally reside on different platforms. In the distributed human–machine interface, multiple servers are typically used to provide the systems with the flexibility and power of a peer-to-peer architecture. Each controller can have its own human–machine interface server. Assigning a server, or a proxy server, to each controller makes it easy to manage expansion, frequent system changes, maintenance, and replicated automation lines within or across plants. Instead of a single data server, each controller component provides its own data services through a proxy server. The primary drawback to decentralized components is the uncertainty of real-time controller performance, generally resulting from poorly designed proxy agent use, for example, if the human–machine interface samples controller data at too high a frequency. Nonetheless, the distributed human–machine interface is ideal for SCADA applications. Its distributed peer-to-peer architecture, reusable components, and remote deployment and maintenance capabilities make supporting SCADA applications remarkably efficient. The software's network services have been optimized for use over slow and intermittent networks, which significantly enhances application deployment and communications.
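One practical consequence of the drawback noted above is that an HMI client should throttle how often it samples a controller's data server. The sketch below is illustrative only: the ControllerClient class, its address, the tag names, and the read_tags call are hypothetical placeholders for whatever per-controller data service is actually exposed.

    import time

    MIN_POLL_INTERVAL_S = 0.5   # do not sample the controller faster than this

    class ControllerClient:
        """Hypothetical client for one controller's data server (address is assumed)."""

        def __init__(self, address: str):
            self.address = address
            self._last_poll = 0.0

        def read_tags(self, tags):
            # Placeholder: a real client would query the controller's data/proxy server here.
            return {tag: 0.0 for tag in tags}

        def poll(self, tags):
            """Rate-limited read, so the HMI cannot overload the controller."""
            now = time.monotonic()
            wait = MIN_POLL_INTERVAL_S - (now - self._last_poll)
            if wait > 0:
                time.sleep(wait)
            self._last_poll = time.monotonic()
            return self.read_tags(tags)

    client = ControllerClient("192.168.0.10:9500")            # assumed address
    print(client.poll(["line1.speed", "line1.temperature"]))  # hypothetical tag names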
3.3.2.3 Designs of Human–Machine Interactions
The design of a human–machine interface is important, because the human–machine interface of an application will often make or break that application. Although the functionality that an application provides to users is important, the way in which it provides that functionality is just as important. An application that is difficult to use will not be used, so the value of human–machine interface design should not be underestimated.
(1) Design principles. The following describes a collection of principles for improving the quality of human–machine interface design. (a) The structure principle. The human–machine interface design should organize the interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with overall interface architecture. (b) The simplicity principle. The human–machine interface design should make simple, common tasks simple to do, communicating clearly and simply in the user’s own language, and providing good shortcuts that are meaningfully related to longer procedures. (c) The visibility principle. The human–machine interface design should keep all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs do not overwhelm users with too many alternatives or confuse them with unneeded information. (d) The feedback principle. The human–machine interface design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user through clear, concise, and unambiguous language familiar to users. (e) The tolerance principle. The human–machine interface design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions. (f) The reuse principle. The human–machine interface design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember. (2) Design process. (a) Phase one. The design process begins with a task analysis in which we identify all the stakeholders, examine existing control or production systems and control and production processes, whether they are paper-based or computerized, and identify ways and means to streamline and improve the control or production process. Tasks at this phase are to
(b) Phase two. Once there is an agreed-upon objective and set of functional requirements, the next step is a design phase that produces a design meeting all the requirements. The goal of the design process is to develop a coherent, easy-to-understand software front end that makes sense to the eventual users of the system. The design and review cycle should be iterated until we are satisfied with the design.
(c) Phase three. The next phase is implementation and test. We also develop and implement any performance support aids as necessary, for example, on-line help, paper manuals, etc.
(d) Phase four. Once a functional system is complete, we move into the final phase. "Success" means that people using the system, whether it is an intelligent tutoring system or an online decision aid, are able to see solutions that they could not see before and/or better understand the constraints that are in place. We generally conduct formal experiments, comparing performance using the system with different features turned on and off, to contribute to the literature on decision support and human–machine interaction.
(3) Design evaluation. An important aspect of human–machine interaction is the methodology for evaluating user interface techniques. Precision and recall measures have been widely used for comparing the ranking results of noninteractive systems, but they are less appropriate for assessing interactive systems. Standard evaluations emphasize high recall levels; in many interactive settings, however, users require only a few relevant documents and do not care about high recall, so other criteria are needed to evaluate highly interactive information access systems. Useful metrics beyond precision and recall include the time required to learn the system, the time required to achieve goals on benchmark tasks, error rates, and retention of the use of the interface over time. Empirical data involving human users is time consuming to gather and difficult to draw conclusions from. This is due in part to variation in users' characteristics and motivations, and in part to the broad scope of information access activities. Formal psychological studies usually uncover only narrow conclusions within restricted contexts. For example, quantities such as the length of time it takes for a user to select an item from a fixed menu under various conditions have been characterized empirically, but variations in interaction behavior for complex tasks like information access are difficult to account for accurately.
A more informal evaluation approach is the heuristic evaluation, in which user interface affordances are assessed in terms of more general properties and without concern for statistically significant results.
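For reference, the two classical measures being contrasted here can be computed as in the short Python sketch below (the document identifiers and counts are purely illustrative); the interactive-system metrics listed above must instead be gathered from user studies rather than from the result sets themselves:

    def precision_recall(retrieved, relevant):
        """Classical ranking measures for a noninteractive system: the fraction
        of retrieved items that are relevant, and the fraction of relevant
        items that were retrieved."""
        retrieved, relevant = set(retrieved), set(relevant)
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    # Example: 3 of 5 retrieved documents are relevant, out of 10 relevant overall.
    # precision_recall([1, 2, 3, 4, 5], [1, 3, 5, 11, 12, 13, 14, 15, 16, 17]) -> (0.6, 0.3)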
3.3.3 Interfaces
For all industrial control systems, a high degree of user friendliness at the interface between human and machine is a decisive prerequisite for acceptance by the general public. Ambient Intelligence applications are characterized by multimodal interfaces as well as by the proactive behavior of the controller system. Therefore, the various interfaces must be sensibly combined with each other, and the interaction must be adapted to the individual situation of the user. Specific challenges include, among others, the selection of suitable interfaces for specific applications, the dynamic changing of interfaces based on changes in the state of the user such as "experience gained" or "accident," and the experience-based optimization of such interfaces. Regarding selection, methods are currently being developed that can suggest suitable interfaces based on a comprehensive characterization of the requirements. This methodology is very comprehensive and complex, since the requirements involve human properties such as the desire for information or personal preferences. Further evaluation and optimization of the methodology are indispensable, and this work urgently requires the collaboration of psychologists. With respect to the dynamic changing of interfaces, this must be supported at least by semiautomatic generation; experience-based patterns may be a suitable approach. Concerning the optimization of interfaces, an increase in acceptance through experience-based optimization can be envisioned. Such assistance systems already exist in vehicles, where, for example, the type of acceleration can be adapted to the driving style of the respective driver.
3.3.3.1 Devices
(1) Operator interface terminals. These human–machine interfaces are operator interface terminals with which users interact in order to control other devices. Some human–machine interfaces include
knobs, levers, and controls. Others provide programmable function keys or a full keypad. Devices that include a processor or interface to personal computers are also available. Many human–machine interfaces include alphanumeric or graphic displays. For ease of use, these displays are often backlit or use standard messages. When selecting human–machine interfaces, important considerations include the devices supported and the devices controlled. Device dimensions, operating temperature, operating humidity, and vibration and shock ratings are other important factors.

Many human–machine interfaces include flat panel displays (FPD) that use liquid crystal display (LCD) or gas plasma technologies. In an LCD, an electric current passes through a liquid crystal solution that is trapped between two sheets of polarizing material. The crystals align themselves so that light cannot pass, producing an image on the screen. LCDs can be monochrome or color. Color displays can use a passive matrix or an active matrix. Passive matrix displays contain a grid of horizontal and vertical wires with an LCD element at each intersection. In active matrix displays, each pixel has a transistor that is switched directly on or off, improving response times. Unlike LCDs, gas plasma displays consist of an array of pixels, each of which contains red, blue, and green subpixels. In the plasma state, the gas causes the subpixels to display the appropriate color.

These human–machine interfaces differ in terms of performance specifications and I/O ports. Performance specifications include processor type, random access memory (RAM), hard drive capacity, and other drive options. I/O interfaces allow connections to peripherals such as mice, keyboards, and modems. Common I/O interfaces include Ethernet, Fast Ethernet, RS-232, RS-422, RS-485, small computer system interface (SCSI), and universal serial bus (USB). Ethernet is a local area network (LAN) protocol that uses a bus or star topology and supports data transfer rates of 10 Mbps; Fast Ethernet is a 100 Mbps specification. RS-232 is a single-ended serial interface, while RS-422 and RS-485 are balanced serial interfaces for the transmission of digital data. Small computer system interface (SCSI) is an intelligent parallel peripheral bus with a standard, device-independent protocol that allows many peripheral devices to be connected to the SCSI port. Universal serial bus (USB) is a four-wire, 12-Mbps serial bus for low-to-medium speed peripheral device connections.

These human–machine interfaces are available with a variety of features. For example, some devices are web-enabled or networkable. Others include software drivers, a stylus, and support for a keyboard, mouse, and printer. Devices that provide real-time
clock support use a special battery and are not connected to the power supply. Power-over-Ethernet (PoE) equipment eliminates the need for separate power supplies altogether. Human–machine interfaces that offer shielding against electromagnetic interference (EMI) and radio frequency interference (RFI) are commonly available. Devices designed for harsh environments include enclosures that meet standards from the National Electrical Manufacturers Association (NEMA).

(2) Operator interface monitors. Machine controllers and monitors use electronic numeric control and a monitoring interface for programming and calibrating computerized machinery. This product area includes general-purpose machine controllers, embedded machine controllers, machine monitors, CNC stepper motors, and CNC router controllers. A machine controller is a programmable, automatic, computer numerically controlled (CNC) device. An embedded machine controller is part of a larger system. A machine monitor is used to collect and display production data from production equipment such as presses. A CNC stepper motor is used to drive a machine tool with power and precision. A CNC router controller is used to cut tool paths. Many other types of machine controllers and monitors are also available.

Machine controllers and monitors consist of many different components. A machine controller uses a microprocessor to perform predetermined control and logical operations. Memory is added to the processor in order to record data from the machine. Often, an input device is used to provide menus or options. Some embedded machine controllers provide 16-axis pulse motion control capabilities. Others include antivibration design mechanisms. Machine monitors track a machine's uptime, downtime, and idle time. They also allow operators to enter a reason for downtime or nonproductive activities. In some cases, a machine monitor can be programmed to require the entry of a reason code after each downtime event. In this way, machine controllers and monitors can be configured to meet the needs of specific machinery and industries.

Machine controllers and monitors are used in many different applications. Some machine control products are used to regulate medical equipment such as respirators. Others are used in aerospace, automotive, or military applications. An embedded machine controller can be used in a printing machine, pipe bending equipment, a CNC stepper motor, or a CNC router controller. Embedded machine controllers are also used in the manufacture of semiconductors and electronic devices. Machine controllers
and monitors with integral software are used in industries where reliability, quality, and cost are important considerations.

(3) Industrial control pendants. Industrial control pendants are sophisticated, hand-held terminals that are used to control robot or machine movements from point to point within a determined space. They consist of a hanging control console furnished with joysticks, push buttons, or rotary cam switches. Teach pendants, a type of industrial control pendant, are the most popular robotics teaching method and are used widely with all types of robots in many industries. As the robot moves within this determined space, the various points are recorded into its memory and can be located later through subsequent playback.

There are a number of teach pendant types available, depending on the type of application for which they will be used. If the goal is simply to monitor and control a robotics unit, a simple control-box style is suitable. If additional capabilities such as on-the-fly programming are required, more sophisticated pendants should be used. Industrial control pendants are equipped with switches, dials, and pushbuttons through which data is relayed to the robotics unit, and to additional monitoring systems if necessary. The connection between industrial control pendants and their controlled unit is generally established via an interconnecting cable; more advanced wireless devices are also available.

During use, the operator actuates the switches on a manual pendant in a specific order. This, in turn, causes the robot, end effector, or machine to move to and from the desired points. As the end effector reaches the desired point, the operator uses the record pushbutton to enter the location into the memory of the robot or robot controller. This is the most common programming method for playback robots. The use of industrial control pendants is common; however, it has a significant disadvantage in that the operator must divert his or her attention away from the movement of the machine during programming in order to locate the appropriate pushbutton to move the robot. A joystick solves this problem, since moving the stick in a certain direction propels the robot or machine in that direction. This option is available on more advanced industrial control pendant types.

(4) SCADA HMI (human–machine interface) devices. Distributed control systems (DCS) and supervisory control and data acquisition (SCADA) systems are system architectures for process control applications. A distributed control system (DCS) consists of
a programmable logic controller (PLC) that is networked both to other controllers and to field devices such as sensors, actuators, and terminals. A DCS may also interface to a workstation. A SCADA system is a process control application that collects data from sensors or other devices on a factory floor or in remote locations. The data are then sent to a central computer for management and process control. SCADA systems provide shop floor data collection and may allow manual input via bar codes and keyboards. Both DCS and SCADA systems often include integral software for monitoring and reporting.

There are several parts to a supervisory control and data acquisition (SCADA) system. To control SCADA, a SCADA system integrator, SCADA security, and a SCADA HMI are required. A SCADA system integrator is used to interface a SCADA system to an external application. SCADA security uses one or more computers at a remote site to monitor and control sensors or shop floor devices; it includes remote terminal units (RTU), a communications infrastructure, and a central control room where monitoring devices such as workstations are housed. The SCADA HMI is a human–machine interface that accounts for human factors in engineering design.

Distributed control systems and SCADA systems are used in a variety of industries. DCS systems are used to control traffic lights and to manage chemical processing, pharmaceutical, and power generation facilities. SCADA systems are used in warehouses, petrochemical processing, iron and steel production, food processing, and agricultural applications. Providers of distributed control systems and supervisory control and data acquisition systems are located across the United States and around the world.
3.3.3.2 Tools

Operator interface mounts and arms are articulating components used to hold and position industrial computer monitors, keyboards, or other operator interfaces. Operator interface mounts and arms are designed to improve the physical and spatial relationships between machines and the humans that operate them. The science of these relationships, called ergonomics, is the study of human–machine interactions. Ergonomically compatible products are designed to maximize productivity and minimize
operator fatigue, discomfort, and injury. The goal of using operator interface mounts and arms as part of an ergonomics program is to reduce injuries, illnesses, and musculoskeletal disorders in the workplace.

Several of the most common types of ergonomic operator interface mount and arm products include computer accessories (e.g., keyboard drawer, mouse tray, glare screen, wrist rest, and monitor support arm) and workstation accessories (e.g., instrumentation booms, articulating supports, foot rests, chairs, document stands). A monitor support arm is a type of operator interface mount that is designed to support computer screens or monitors in work stations, control centers, and operating theaters. A support arm should combine stability and full adjustability to meet the operator's needs. A support arm can be a desk, wall, ceiling, or mobile mounting arm. Articulating supports are movable support arms that a user can readjust for the height or location of monitors and equipment in relation to the user's eyes or hands. Keyboard drawers are used to store unused keyboards. A monitor drawer mounts an LCD and keyboard within a rack frame or enclosure so that a monitor can be folded down and stored when not in use. Instrumentation booms are another type of operator interface mount and arm; this type of mount is used to support various types of equipment, including computer or industrial monitors, video equipment, and manufacturing equipment.

The U.S. Occupational Safety and Health Administration (OSHA) has a four-pronged, comprehensive approach to ergonomics, including operator interface mounts and arms, that is designed to quickly and effectively address musculoskeletal disorders in the workplace. The OSHA approach includes industry- or task-specific guidelines, enforcement actions, outreach and assistance activities, and a national advisory committee.
3.3.3.3 Software
Human–machine interface (HMI) software enables operators to manage industrial and process control machinery via a computer-based graphical user interface (GUI). There are two basic types of HMI software: supervisory level and machine level. Supervisory-level HMI is designed for control room environments and is used for supervisory control and data acquisition (SCADA), a process control application that collects data from sensors on the shop floor and sends the information to a central computer for processing. Machine-level HMI uses embedded, machine-level devices within the production facility itself. Most human–machine interface software is designed for either the supervisory level or the machine level; however,
applications that are suitable for both types of HMI are also available. These software applications are more expensive, but they can eliminate redundancies and reduce long-term costs.

Selecting human–machine interface software requires an analysis of product specifications and features. Important considerations include system architectures, standards, and platforms; ease of implementation, administration, and use; performance, scalability, and integration; and total costs and pricing. Some human–machine interface software provides data logging, alarms, security, forecasting, OPC (OLE for Process Control) connectivity, and ActiveX technologies. Others support data migration from legacy systems. Communication on multiple networks can support up to four channels. Supported networks include ControlNet and DeviceNet. ControlNet is a real-time, control-layer network that provides high-speed transport of both time-critical I/O data and messaging data. DeviceNet is designed to connect industrial devices such as limit switches, photoelectric cells, valve manifolds, motor starters, drives, and operator displays to programmable logic controllers (PLC) and personal computers (PC).

Some human–machine interface software runs on Microsoft Windows CE, a version of the Windows operating system that is designed for hand-held devices. Microsoft and Windows are registered trademarks of Microsoft Corporation. Windows CE allows users to deploy the same human–machine interface software on distributed HMI servers, machine-level embedded HMI, diskless open-HMI machines, and portable or pocket-sized HMI devices.
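The data logging and alarm features mentioned above can be pictured with the following minimal Python sketch of a machine-level scan loop. Everything here (the tag table, the read callables, the CSV log, the alarm callback) is an illustrative assumption rather than the API of any commercial HMI package:

    import csv
    import time

    def scan_once(tags, log_writer, raise_alarm):
        """Read every tag once, log the value, and evaluate a simple high alarm."""
        for name, (read, high_limit) in tags.items():
            value = read()
            log_writer.writerow([time.time(), name, value])        # data logging
            if high_limit is not None and value > high_limit:      # alarm check
                raise_alarm(name, value)

    def run(tags, raise_alarm, period_s=1.0, logfile="hmi_log.csv"):
        """Scan the tag table at a fixed period, appending samples to a CSV file."""
        with open(logfile, "a", newline="") as f:
            writer = csv.writer(f)
            while True:
                scan_once(tags, writer, raise_alarm)
                time.sleep(period_s)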
3.4 Highway Addressable Remote Transducer (HART) Field Communications

HART is an acronym for "Highway Addressable Remote Transducer," a protocol that provides two-way digital communication simultaneously with the 4–20 mA analog signaling used by traditional instrumentation equipment in industrial process control. HART was developed in the early 1980s by Rosemount Inc. to let the host manage the field devices in industrial systems. In July 1993, the HART Communication Foundation was established to provide worldwide support for application of this technology. The HART Specifications continue to be updated to broaden the range of HART applications. A recent HART development, the Device Description Language (DDL), provides a universal software interface to new and existing devices.
3.4.1 HART Communication
Most industrial control systems include numerous field devices and functions. The host controller, therefore, should be able to communicate with all the field instruments and devices while control processes are running, for (1) device configuration or reconfiguration, (2) device diagnostics, (3) device troubleshooting, (4) reading the values of additional measurements provided by the device, (5) device health and status, and other requirements. A host in the system can be a distributed control system, a PLC, an asset management system, a safety system, or a hand-held device.

By fully using HART communication, industrial control benefits in many respects. Utilizing the full capabilities of HART-enabled devices and systems reduces costs by improving plant operations and increasing efficiency, and helps to avoid the high cost of process disruptions and unplanned shutdowns. Properly utilized, the intelligent capabilities of HART-smart devices are a valuable resource for keeping plants operating at maximum efficiency. Real-time HART integration with plant control, safety, and asset management systems unlocks the value of connected devices and extends the capability of systems to detect any problems with the device, its connection to the process, or interference with accurate communication between the device and system.

The world's leading process automation control systems and instrumentation suppliers all support HART communication in their field device and system products. Most automation system suppliers offer direct HART-enabled I/O and PC-based software applications to leverage the intelligence in HART-smart field devices for continuous device condition monitoring, real-time diagnostics, and multivariable process information.
3.4.1.1 HART Networks
(1) Wired HART networks. There are two kinds of wired HART networks available in industrial control systems. Figure 3.33 displays the architectures of these two wired HART networks; the first (Fig. 3.33(a)) is a point-to-point HART network, and the second (Fig. 3.33(b)) is a multiple-dropped HART network. As shown in Fig. 3.33, a wired HART network includes the host controller and some field devices, which can be transmitters; between the host and the field devices there is an I/O system that serves as the system interface for HART, and a hand-held terminal or hand-held communicator may also be connected. The type of network with a single Field Instrument that performs both HART communication and analog signaling is probably the most common type of wired HART network and is called a point-to-point network.
Figure 3.33 The architecture of HART networks: (a) the point-to-point HART network and (b) the multiple-dropped HART network. Note: Instrument power is provided by an internal or external power source that is not shown.
In some cases the point-to-point network might have a HART Field Instrument but no permanent HART Master. This might occur, for example, if the user intends primarily analog communication and the Field Instrument parameters are set prior to installation.
A HART user might also set up this type of network and then later communicate with the Field Instrument using a hand-held communicator (a HART Secondary Master). This is a device that clips onto the device terminals (or other points in the network) for temporary HART communication with the Field Instrument.

A HART Field Instrument is sometimes configured so that it has no analog signal, only HART function. Several such Field Instruments can be connected together (electrically in parallel) on the same network, as in Fig. 3.34. These Field Instruments are said to be multiple-dropped. The Master is able to talk to and configure each one in turn. When Field Instruments are multidropped there cannot be any analog signaling, and the term "current loop" ceases to have any meaning. Multiple-dropped Field Instruments that are powered from the network draw a small, fixed current (usually 4 mA) so that the number of devices can be maximized. A Field Instrument that has been configured to draw a fixed analog current is said to be "parked." Parking is accomplished by setting the short-form address of the Field Instrument to some number other than 0 (a short illustrative sketch of this parking rule follows Fig. 3.35). A hand-held communicator might also be connected to the network of Fig. 3.34.

There are few restrictions on building wired HART networks. The topology may be loosely described as a bus, with drop attachments forming secondary busses as desired, as illustrated in Fig. 3.35. The whole collection is considered a single network. Except for the intervening lengths of cable, all of the devices are electrically in parallel. The Hand-Held Communicator (HHC) may also be connected virtually anywhere. As a practical matter, however, most of the cable is inaccessible and the HHC has to be connected at the Field Instrument, in junction boxes, or in controllers or marshalling panels. In intrinsically safe (IS) installations there will likely be an IS barrier separating the control and field areas.
Figure 3.34 HART network with multiple-dropped field instruments.
Figure 3.35 HART network showing free arrangement of devices.
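The parking rule described above — polling address 0 for a point-to-point device with a live analog signal, any address from 1 to 15 for a parked, multiple-dropped device drawing a fixed current — can be modeled with the short Python sketch below. The class and signal names are illustrative and are not taken from the HART specification:

    class FieldInstrument:
        """Toy model of a HART field instrument's analog behavior."""

        def __init__(self, tag, polling_address=0):
            self.tag = tag
            self.polling_address = polling_address   # 0 = point-to-point, 1-15 = parked

        @property
        def parked(self):
            return self.polling_address != 0

        def loop_current_mA(self, primary_value_mA):
            # A parked device draws a small fixed current (typically 4 mA);
            # only an unparked device drives the 4-20 mA analog signal.
            return 4.0 if self.parked else primary_value_mA

    # A multiple-dropped network: every device is parked and the Master
    # addresses each polling address in turn.
    network = [FieldInstrument(f"FT-10{n}", polling_address=n) for n in range(1, 4)]
    assert all(device.parked for device in network)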
A Field Instrument may be added or removed, or wiring changes made, while the network is live (powered). This may interrupt an on-going transaction. However, if the network is inadvertently short-circuited, this could reset all devices. The network will recover from the loss of a transaction by retrying the previous communication. If Field Instruments are reset, they will eventually come back to the state they were in prior to the reset; no reprogramming of HART parameters is needed.

Digital signaling brings with it a variety of other possible devices and modes of operation. For example, some Field Instruments are HART-only and have no analog signaling. Others draw no power from the network. In still other cases the network may not be powered (no DC). There also exist other types of HART networks that depart from the conventional one described here; these are covered in another section.

(2) Wireless HART networks. Wireless HART is the first open and interoperable wireless communication standard designed to address the critical needs of the process industry for reliable, robust, and secure wireless communication in real-world industrial plant applications. A Wireless HART network consists of Wireless HART field devices, at least one Wireless HART gateway, and a Wireless HART network manager. These components are connected into a wireless mesh network supporting bidirectional communication from the HART host to the field devices and back.
Figure 3.36 gives the typical Wireless HART network architecture with the principal devices shown:
(a) Network manager. The network manager is an application that manages the mesh network and the network devices. The network manager performs the following functions: (1) forms the mesh network, (2) allows new devices to connect to the network, (3) sets the communication schedule of the devices, (4) establishes the redundant data paths for all communications, and (5) monitors the network.
(b) Gateway devices. The gateway device connects the mesh network with a plant automation network, allowing data to flow between the two networks. The gateway device provides access to the Wireless HART devices by a system or other host application.
(c) Network devices. A network device is a node in the mesh network. It can transmit and receive Wireless HART data and perform the basic functions necessary to support network formation and maintenance. Network devices include field devices, router devices, gateway devices, and mesh hand-held devices.
Figure 3.36 Typical wireless HART architecture (courtesy of the HART Communication Foundation).
(d) Field devices. A field device may be a process-connected instrument, a router, or a hand-held device. The Wireless HART network connects these devices together.
(e) Router device. A router device is used to improve network coverage (to extend a network); it is capable of forwarding messages from other network devices.
(f) Process-connected instrument. Typically a measuring or positioning device used for process monitoring and control, it is also capable of forwarding messages from other network devices.
(g) Adapter. An adapter is a device that allows a HART instrument without wireless capability to be connected to a Wireless HART network.
(h) Hand-held support device. Hand-held devices are used in the commissioning, monitoring, and maintenance of network devices; they are portable and operated by plant personnel.
Wireless HART networks can be configured in a number of different topologies to support various application requirements, including the following:
(a) Star network. Star networks have just one router device that communicates with several end devices. This is one of the simplest network topologies and may be appropriate for small applications.
(b) Mesh network. Mesh networks are formed by network devices that are all router devices. Mesh networks provide a robust network with redundant data paths that is able to adapt to changing RF environments.
(c) Star mesh network. Star mesh networks are a combination of the star network and the mesh network.
3.4.1.2 HART Mechanism
HART communication occurs between two HART-enabled devices, typically a field device and a control or monitoring system. To perform the communication between the host and the field instruments, the analog measurement signal is also used to transmit digital information. For this purpose, an additional signal is modulated onto the measurement signal using the Frequency Shift Keying (FSK) process. The two frequencies of the additional signal, 1200 and 2200 Hz, represent the bit values 1 and 0, respectively. This makes it possible to transfer additional information without affecting the analog measurement signal. As indicated by Fig. 3.37, HART provides two simultaneous communication channels: the 4–20 mA analog signal and a digital signal.
Figure 3.37 HART signaling (digital and analog).
The 4–20 mA signal communicates the primary measured value (in the case of a field instrument) using the 4–20 mA current loop, the fastest and most reliable industry standard. Additional device information is communicated using a digital signal that is superimposed on the analog signal. The digital signal contains information from the device including device status, diagnostics, additional measured or calculated values, etc. Together, the two communication channels provide a complete field communication solution that is easy to use and configure, is low cost, and is very robust.

The HART signal path from the microprocessor in a sending device to the microprocessor in a receiving device is displayed in Fig. 3.38. Amplifiers, filters, and the network between these two interfaces have been omitted from Fig. 3.38 for simplicity. At this level the diagram is the same regardless of whether a Master or Slave is transmitting. Notice that, if the signal starts out as a current, the FSK signal is a voltage; but if it starts out as a voltage, it stays a voltage.

The transmitting device begins by turning on its carrier and loading the first byte to be transmitted into its interface circuits. It waits for the byte to be transmitted and then loads the next one. This is repeated until all the bytes of the message (messages are always defined as commands with a predefined format) are exhausted.
Figure 3.38 HART signaling path.
The transmitter then waits for the last byte to be serialized and finally turns off its carrier. With minor exceptions, the transmitting device does not allow a gap to occur in the serial stream; the start and stop bits are used for synchronization, and the parity bit is part of the HART error detection.

The serial character stream is applied to the modulator of the sending modem. The modulator operates such that a logic 1 applied to the input produces a 1200 Hz periodic signal at the modulator output, while a logic 0 produces 2200 Hz. The type of modulation used is called Continuous Phase Frequency Shift Keying (CPFSK). "Continuous phase" means that there is no discontinuity in the modulator output when the frequency changes. When the sender's interface output (the modulator input) switches from logic 1 to logic 0, the frequency changes from 1200 to 2200 Hz with just a change in slope of the transmitted waveform. A moment's thought reveals that the phase does not change through this transition. Given the chosen shift frequencies and the bit rate, a transition can occur at any phase.

At the receiving end, the demodulator section of the modem in the receiver's interface converts the FSK signal back into a serial bit stream at 1200 bps. Each character is converted back into an 8-bit byte and parity is checked. The receiving microprocessor reads the incoming bytes from its interface and checks parity for each one until there are no more or until parsing of the data stream indicates that this is the last byte of the message.
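A minimal Python sketch of this continuous-phase modulation follows. The sample rate and helper names are assumptions chosen for the example, not values taken from the HART physical-layer specification:

    import math

    BIT_RATE = 1200                       # bits per second
    FREQ = {1: 1200.0, 0: 2200.0}         # Hz: logic 1 -> 1200 Hz, logic 0 -> 2200 Hz

    def cpfsk(bits, sample_rate=48000):
        """Generate a continuous-phase FSK waveform: the phase is accumulated
        sample by sample, so a change of frequency never produces a phase jump."""
        samples_per_bit = sample_rate // BIT_RATE
        phase, samples = 0.0, []
        for bit in bits:
            step = 2 * math.pi * FREQ[bit] / sample_rate
            for _ in range(samples_per_bit):
                phase += step
                samples.append(math.sin(phase))
        return samples

    waveform = cpfsk([1, 1, 0, 1, 0])     # a few bits of an illustrative character

Because the phase accumulator is never reset, switching between 1200 and 2200 Hz changes only the slope of the waveform, which is exactly the continuous-phase property described above.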
The receiving processor accepts the incoming message only if its amplitude is high enough to cause carrier detect to be asserted. In some cases, the receiving processor will have to test an I/O line to make this determination. In others, the carrier detect signal gates the receive data so that nothing (no transitions) reaches the receiving interface unless carrier detect is asserted.

The HART protocol puts most of the responsibility (such as timing and arbitration) into the Masters. This eases Field Instrument software development and puts the complexity into the device that is better suited to deal with it. A Master typically sends a command and then expects a reply; a Slave waits for a command and then sends a reply. The command and associated reply are called a transaction. There are typically periods of silence (no device is allowed to communicate) between transactions. A Slave accesses the network as quickly as possible in response to a Master. Network access by Masters requires arbitration. Masters arbitrate by observing who sent the last transmission (a Slave or the other Master) and by using timers to delay their own transmissions. Thus, a Master allows time for the other Master to start a transmission. The timers constitute dead time when no device is communicating and therefore contribute to "overhead" in HART communication.

Each HART field instrument (a field instrument normally plays the role of Slave) must have a unique address. Each command sent by a Master contains the address of the desired Field Instrument. All Field Instruments examine the command, and the one that recognizes its own address sends back a response. This address is incorporated into the command message sent by a Master and is echoed back in the reply by the Slave. Addresses are either 4 bits or 38 bits and are called short and long, or "short frame" and "long frame," addresses, respectively. A Slave can also be addressed through its tag (an identifier assigned by the user).

Each command or reply is a message that starts with the preamble and ends with the checksum. The preamble is allowed to vary in length, depending on the requirements of the Slave; different Slaves can have different preamble length requirements, so a Master might need to maintain a table of these values. A Master will use the longest possible preamble when talking to a Slave for the first time. Once the Master reads the Slave's preamble length requirement (a stored HART parameter), it will subsequently use this new length when talking to that Slave. The checksum at the end of the message is used for error control. It is the exclusive-OR of all of the preceding bytes, starting with the start delimiter. The checksum, along with the parity bit in each character, creates a message matrix having so-called vertical and longitudinal parity. If a message is in error, this usually necessitates a retry.
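The longitudinal check byte described above is simple to reproduce; the following Python sketch computes it for an arbitrary sequence of frame bytes (the example frame contents are purely illustrative and not a complete, valid HART command):

    def hart_checksum(frame_bytes):
        """Exclusive-OR of every byte from the start delimiter up to,
        but not including, the check byte itself."""
        check = 0
        for b in frame_bytes:
            check ^= b
        return check

    # Illustrative frame: delimiter, short-frame address, command, byte count.
    frame = bytes([0x02, 0x80, 0x00, 0x00])
    message = frame + bytes([hart_checksum(frame)])

The receiver recomputes the same exclusive-OR over the received bytes; together with the per-character parity bit this gives the vertical and longitudinal parity mentioned above.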
One more feature, available in some Field Instruments, is burst mode. A Field Instrument that is burst-mode capable can repeatedly send a HART reply without a repeated command. This is useful in getting the fastest possible updates (about 2–3 times per second) of process variables. If burst mode is to be used, there can be only one bursting Field Instrument on the network. A Field Instrument remembers its mode of operation during power down and returns to this mode on power up. Thus, a Field Instrument that has been parked will remain so through power down. Similarly, a Field Instrument in burst-mode will begin bursting again on power up.
3.4.2 HART System
HART communication in industrial control comprises two parts: the HART connection system and the HART protocol. This section focuses on the HART system; the next section covers the HART protocol. HART system devices support the HART protocol by communicating their data over the transmission lines of the 4–20 mA connections. This enables the field devices to be parameterized and initialized in a flexible manner, and measured and stored data (records) to be read. All these tasks require field devices based on microprocessor technology; such devices are frequently called smart devices.

For building and maintaining a HART-enabled system, the technical kernel is choosing HART-compatible devices and installing, configuring, and calibrating the system's devices. The key is to make sure that the engineers or designers are requesting or specifying devices or systems that are fully compliant with the HART protocol specification and are tested and registered with the HART Communication Foundation (HCF). The engineers or designers need to be assured of these aspects: interoperability with other HART-compatible devices and HART-enabled systems; getting a device that will provide the powerful features of HART technology; and specifying a product that will fully integrate into the HART-enabled applications.
3.4.2.1 HART System Devices
The devices constituting a HART connection system have several features that significantly reduce the time required to fully commission a HART-enabled network; when less time is required for commissioning, substantial cost savings are achieved. Devices that support the HART protocol are grouped into master (host) and slave (field) devices. Master devices include communicators or hand-held terminals as well as
PC-based workplaces located in a control room. HART slave devices, on the other hand, include sensors, transmitters, and various actuators. The variety ranges from two-wire and four-wire devices to intrinsically safe versions for use in hazardous environments. Field devices and communicators, as well as compact hand-held terminals, have an integrated FSK-modem, whereas computers or workstations have a serial interface to connect the modem externally. HART communication is often used for such simple point-to-point connections (Fig. 3.33(a)). Nevertheless, many more connection variants are possible. In extended systems, the number of accessible devices can be increased by using a multiplexer. In addition, HART enables the networking of devices to suit special applications. Network variants include multiple-dropped networks (Fig. 3.39), the FSK-bus, and networks for split-range operation.

(1) HART Communicator. The HART Communicator is the most widely used communicator in industrial control across the world. HART Communicators are portable devices; their weight is evenly distributed for comfortable one-handed operation in the field. The result is a universal, user-upgradeable, intrinsically safe, rugged, and reliable field communicator. In HART-enabled systems, the HART Communicators are often defined by engineers as the Second Master (Fig. 3.33(a)) or as Hand-held Terminals (Fig. 3.33(b)).
Figure 3.39 A HART connection system including the FSK-modem, HART multiplexer, and HART buses (courtesy of SAMSON, Inc.).
With memory and a microprocessor or application-specific integrated circuits, the HART Communicator provides a complete solution for configuring and monitoring all HART devices and all Fieldbus devices of an industrial system. It comprises three main components plus accessories: the hand-held unit, the HART interface hardware, and the application software suite. The Communicator runs on a robust, real-time operating system. This trio of hardware and software makes up a complete HART field communicator that is powerful, multifaceted, and portable all in one.

The hardware for the HART Communicator primarily includes the HART interface and the pinch connectors. The HART interface is designed to mate with the connectors located on the bottom of the hand-held, allowing communication between the hand-held and the HART network. The pinch connectors easily connect to any HART network for instant communication. Most HART interfaces require no batteries, running solely off the hand-held's internal power supply. The interface's compact size and low power consumption make it an ideal solution for portability.

The software suite for the HART Communicator includes several distinct applications, each preloaded onto the hand-held and designed for a particular function. The main application of the suite allows communication, monitoring, and configuration of HART-compatible devices. The software is based upon manufacturer Device Description (DDL) files and thus allows access to all menus and parameters as designed by the manufacturer. A logging application records device variable values over time: a wide range of variables can be logged automatically at a user-selectable sample time, or manually one by one, and these logs can be saved and transferred to a PC for further analysis. A graphing application allows device variables to be trended over time in an easy-to-view graphical format; device parameters can be graphed simultaneously in various colors for easy identification.

The display is easy to read both in bright sunlight and in normal lighting. To make sure all conditions are covered, a multilevel backlight is provided, allowing the display to be viewed in areas of the plant with dim light. The touch-sensitive display and large physical navigation buttons provide for efficient use both on the bench and in the field. User-upgradeable HART and Fieldbus devices, as well as functional updates to existing devices, are introduced continually
by device vendors. Keeping up to date with the required Device Description (DD) drivers for all the devices in a plant can be a real challenge. Nowadays, with an easy-upgrade option, keeping communicators updated with the most current Device Descriptions (DDs) is an easy job.

(2) FSK-Modem. Two kinds of modem are often required in HART-enabled industrial control networks: a USB (Universal Serial Bus) modem and an FSK (Frequency Shift Keying) modem. Both kinds of modem connect to the host PC (personal computer) in HART-enabled networks. The USB modem is an ordinary PC modem used for computer networks, with no special design for HART functions; the FSK-modem, however, must be specifically designed for HART. The following focuses on the FSK-modem.

The FSK-modem is designed to provide HART communication capabilities by implementing Frequency Shift Keying (FSK) techniques to transfer data. The FSK-modem is also required to conform to the HART network's physical layer. For this purpose, most FSK-modems operate to the Bell 202 standard and are built as a chipset containing several integrated circuits. As shown in Fig. 3.37, FSK is the frequency modulation of a carrier by digital data. For simplex or half-duplex operation, the FSK-modem uses a single carrier, and communication can proceed in only one direction at a time. For full duplex, the FSK-modem uses multiple carriers so that data communication can proceed simultaneously in both directions.

The basic block diagram of the FSK-modem chipset is depicted in Fig. 3.40, which illustrates the mechanism of data modulation and demodulation. The chip is divided into three main parts: receive, transmit, and clock recovery. The receive and transmit blocks are separate, and data can be processed in each direction independently.
Figure 3.40 Block diagram of an FSK-modem chipset.
(a) Modulator for transmitting data. The transmit part of the chipset performs modulation, in which the scrambler produces a nearly flat spectrum in the output signal. The output of the scrambler is connected to a long digital FIR filter. This FIR filter compensates for distortion of the transmission line and removes the sharp edges arising from high-to-low or low-to-high logic transitions. The FIR filter narrows the transmitted signal spectrum to fit the bandwidth and compensates the signal for the receiver's side. The transmit wave shapes are stored in an EPROM (an electronic memory; see Chapter 2 of this book). In this way the transmitted waveform is synthesized not only from the present bit's state, but also from the four bits that preceded it and the four to come. The data burnt into the EPROM represent the filter response for each of the 256 combinations.

(b) Demodulator for receiving data. At the receiving side, the audio signal coming from the discriminator of the transceiver is passed through a low-pass filter to eliminate pertinent higher frequencies and remove out-of-band spurious noise and residue. The signal is then limited and detected by sampling at the correct instant. At this point the detected data, still randomized, are passed through a descrambler, where the original data are recovered and the result goes to the terminal. A descrambler, like a scrambler, simply provides the inverse function of the scrambler and perforce requires some number of bits to synchronize.

(c) Clock recovery. The heart of the receiver is a digital phase-locked loop (DPLL), which must extract a clock from the received audio stream. This clock is needed to time the receiver functions, including the all-important data detector. Each waveform has a phase shift of 360/256° from the next and is made up of 16 samples. The received audio signal is limited, and a zero-crossing detector circuit generates one cycle of 9600 Hz for each zero crossing (a proto-clock). This is compared with a locally generated clock in a phase detector based on an up/down counter. The counter increments if one clock is early and decrements otherwise. This count then addresses the EPROM mentioned above. In this way, the local clock slips rapidly into phase with that of the incoming data. The local clock signal is derived from the output of the EPROM. Output data are converted to a sine voltage with maximum amplitude.
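The up/down-counter phase detector in the clock-recovery block can be pictured with the toy Python sketch below; the table size matches the 256-entry waveform table mentioned above, while the function and variable names are illustrative assumptions:

    TABLE_SIZE = 256   # the count addresses a 256-entry waveform table (the EPROM)

    def recover_clock(edge_phases, local_phase=0):
        """For each zero-crossing of the received signal, compare its phase
        (0..255) with the local clock phase and nudge the up/down counter one
        step toward it, so the local clock slips into phase with the data."""
        for edge in edge_phases:
            diff = (edge - local_phase) % TABLE_SIZE
            if diff == 0:
                continue                                   # already in phase
            step = 1 if diff < TABLE_SIZE // 2 else -1     # counter up or down
            local_phase = (local_phase + step) % TABLE_SIZE
        return local_phase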
Some FSK-modems have one more function block in their chipsets: carrier detect. The carrier detect function in the FSK-modem is responsible for checking whether or not the modulated or demodulated waves fall within a certain range of frequencies. The carrier detect output is active low whenever a valid carrier tone between the specified frequencies (inclusive) is detected; detection occurs when timed transitions remain within this band for 10 nanoseconds to 1 bit time. Some FSK-modem manufacturers use CMOS technologies to make the chipset. The FSK-modem is a chipset with a pin-out specification. In a HART-enabled network, this FSK-modem is connected to the host microprocessor or CPU (central processing unit). Figure 3.41 illustrates how the modem's pins connect to the CPU in an FSK-modem.

(3) HART Multiplexer. In HART-enabled industrial networks, the HART Multiplexer acts as a gateway between the network management computer and the HART-compatible field devices. Many field devices are distributed over a wide area in industrial process systems and must be monitored and adapted to changes in the processing environment. A process system with the HART Multiplexer enables on-line communication between an asset management computer or workstation and those intelligent field devices that support the HART protocol, so that the system can adapt to changes in the process. All actions on the field device take place in parallel with the transmission of the 4–20 mA measurement signals and have no influence on the processing of the measurement values by the process system.
Figure 3.41 Typical hardware design of an FSK-modem.
Each HART Multiplexer, regardless of whether it is a slave or a master, provides a connection to a specified number of field instruments. Up to thousands of field units can communicate and exchange data with a computer or workstation. Working with a hand-held terminal (HART communicator) is also possible, since the HART protocol accepts two masters (computer and hand-held terminal) in one system. Systems can be easily expanded and the advantages of HART communication exploited. At the present technical level, the system consists of a maximum of 31 HART Multiplexer masters, which are linked to the computer with an RS-485 interface; each HART Multiplexer master controls up to 15 HART Multiplexer slaves. At present, HART Multiplexers are specified as the following three types:

(a) HART Multiplexer Master. This is a HART Multiplexer that can operate up to 256 analog field instruments. The built-in slave unit operates the first 16 loops; if more than 16 loops are required, additional slave units can be connected. The slave units are connected to the master with a 14-pin flat cable. The connector for the ribbon cable is found on the same housing side as the connectors for the interface and the power supply. The analog signals are separately linked to a termination board with a 26-pin cable for each unit. Sixteen leads are reserved for the HART signals of the analog measurement circuits; the remaining 10 leads are connected to ground. This unit is designed with removable terminals and can be connected to a Power Rail.

(b) HART Multiplexer Slave. This is a HART Multiplexer that can currently operate up to 16 analog field instruments. The slave can only be operated with the HART Multiplexer Master and is powered by the master across a 14-pin flat cable connection. Up to 15 slaves can be connected to the master. The slave address is set with a 16-position rotary switch (addresses 1–16). If only one slave is connected to the master, the slave address should be 1; if multiple slaves are connected, the slaves are assigned addresses in ascending order. The analog signals are fed into the slave by means of a 26-pin ribbon cable. Sixteen leads are reserved for the HART signals of the analog measurement circuits; the remaining 10 leads are assigned to ground.

(c) HART flexible interface. This is a flexible interface board with a HART pick-up connector. This flexible interface
board has 16 terminal blocks to connect up to 16 smart field devices. This board can be used for general-purpose applications or in conjunction with intrinsic safety barriers for hazardous-area applications.

The specification of a HART Multiplexer should include these important technical data:
(i) HART signal channels: (1) leakage current (µA over a stated temperature range), (2) external output termination (in Ω), (3) output voltage (mVpp), (4) output impedance (in Ω, capacitively coupled), (5) input impedance per the HART conventions, (6) input voltage range (mVpp–Vpp), (7) input voltage.
(ii) Power supply: (1) nominal voltage (VDC), (2) power consumption (less than or equal to a stated number of watts).
(iii) Interface: (1) type RS-xxx (multidrop over a stated number of wires), (2) transmission speed of 9600, 19200, or 38400 baud, (3) address selection (32 possible RS-xxx addresses), (4) the transmission speed (in baud) for the "ON" and "OFF" state of every switch.
(iv) Mechanical data: (1) mounting on a DIN rail of a stated size or wall mounted, (2) connection options: ribbon cable (stated pin count) for the analog signals and for the master–slave link, (3) removable terminals, with a maximum stated wire size (AWG) for interface and power supply.

(4) HART connecting buses. In industrial process systems, HART-enabled networks require several kinds of buses for connecting the HART-compatible devices and instruments. A brief description of two of these buses is given below.

(a) Bus for split-range operation. In industrial process systems, there are special applications that require several (usually two) actuators to receive the same control signal. A typical example is the split-range operation of control valves: one valve operates in the nominal current range from 4 to 12 mA, while another valve uses the current range from 12 to 20 mA. In split-range operation, the control valves are connected in series in the current loop (a minimal sketch of this current-to-travel mapping is given after this list). When both valves have a HART interface, the HART host device must be able to distinguish with which valve it must communicate. To achieve this, HART protocol revision 6 and later versions are extended by one more network variant. As is the case for the multiple-dropped mode, each device is assigned an address from 1 to 15. The analog 4–20 mA signal preserves its device-specific function, which is, for control valves, the selection of the required travel.

(b) FSK-bus. The HART protocol can be extended by the FSK-bus. Similar to a device bus, the FSK-bus can connect and address approximately 100 HART-compatible devices at the present technical level. This requires special assembly-type isolating amplifiers (e.g., TET 128). The only reason for the limited number of participants is that each additional participant increases the signal noise; beyond this limit the signal quality is no longer sufficient to properly evaluate the telegram. The HART devices are connected to their analog current signal and to the common FSK-bus line through the isolating amplifier (Fig. 3.42). From the FSK-bus viewpoint, the isolating amplifiers act as impedance converters. This enables devices with a high load to be integrated into the communication network. To address these devices, a special, long form of addressing is used. During the configuration phase, the bus address and the tag number of each device are set over a point-to-point line. During operation, the devices operate with the long addresses. When using HART command 11 (see the subsection below), the host can also address a device via its tag. In this way, the system configuration can be read and checked during the start-up phase.
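The split-range mapping referred to in item (a) can be written out as a small Python function; the 4–12 mA and 12–20 mA spans come from the example above, and the clamping behavior is an assumption of this sketch:

    def split_range(loop_mA):
        """Map one 4-20 mA controller output onto two valves in split range:
        valve 1 strokes over 4-12 mA and valve 2 over 12-20 mA.
        Returns each valve's travel as a fraction between 0.0 and 1.0."""
        def travel(signal, lo, hi):
            return min(1.0, max(0.0, (signal - lo) / (hi - lo)))
        return travel(loop_mA, 4.0, 12.0), travel(loop_mA, 12.0, 20.0)

    # 8 mA  -> valve 1 at 50 %, valve 2 still closed
    # 16 mA -> valve 1 fully open, valve 2 at 50 %
    assert split_range(8.0) == (0.5, 0.0)
    assert split_range(16.0) == (1.0, 0.5)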
(5) HART system interface. HART communication between two or more devices can function properly only when all communication participants are able to interpret the HART sine-wave signals correctly. To ensure this, the transmission lines must fulfill certain requirements; moreover, devices in the current loop that are not part of the HART communication can impede or even prevent the transmission of the data. The reason is that the inputs and outputs of these devices are specified only for 4–20 mA technology. Because their input and output resistances change with the signal frequency, such devices are likely to short-circuit the higher-frequency HART signals (1200–2200 Hz).

Where a HART communication system is connected with other kinds of communication systems, gateways could be the best interface devices to convert the HART protocol into the protocols of the networks to be coupled.
Figure 3.42 The connection architecture of a HART network with the FSK-bus (courtesy of SAMSON, Inc.): a host PC and controller in the safe area communicate through 3780-1 FSK isolating amplifiers (Ex-i) with up to a maximum of 100 control loops on the FSK-bus in the hazardous area.
protocols of the networks to be coupled. In most cases, when complex communications must be performed, Fieldbus systems would be the preferred choice. Even where no complex protocol conversion is involved, a HART-enabled system is capable of communicating over long distances. Furthermore, the HART data signals can be transmitted over telephone lines using HART/CCITT converters, in which the field devices connect directly to dedicated lines owned by the telephone company and can thus communicate with a centralized host located many kilometers away. However, as already mentioned, within a HART-enabled system the HART-compatible field devices also require an appropriate communication interface, which could be, for example, an integrated FSK modem or a HART multiplexer. As mentioned earlier, the HART signals are imposed on the conventional analog current signal. Whether the devices in the networks are designed in four-wire technique, including an
additional power supply, or in two-wire technique, HART communication can be used in both cases. However, it is important to note that the maximum permissible load of a HART device is fixed: the load of a HART device is limited by the HART specification. Another limitation is due to the process controller. The output of the process controller must be able to provide the power for the connected two-wire device. The higher the power consumption of a two-wire device, the higher its load. The additional functions of a HART-communicating device increase its power consumption, and hence its load, compared to non-HART devices. When retrofitting HART devices into an existing installation, the process controller must therefore be checked for its ability to provide the power required by the HART-compatible device: it must be able to drive at least the load impedance of the HART device at 20 mA.
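As a hedged illustration of this last check, the short sketch below works through the kind of loop budget arithmetic implied here. The supply voltage, minimum device terminal voltage, and resistances are illustrative assumptions, not figures taken from this handbook or from the HART specification.

# Illustrative loop budget check for a two-wire HART device (assumed values).

def max_loop_load_ohms(supply_v: float, device_min_v: float,
                       full_scale_ma: float = 20.0) -> float:
    """Largest total loop resistance the controller output can drive while
    still leaving the device its minimum terminal voltage at full scale."""
    return (supply_v - device_min_v) / (full_scale_ma / 1000.0)

# Assumed example: 24 V controller output, device needs at least 11 V at its terminals.
budget = max_loop_load_ohms(supply_v=24.0, device_min_v=11.0)
planned = 250.0 + 40.0   # HART load resistor plus cable resistance (assumed)
print(f"Allowed loop resistance: {budget:.0f} ohm, planned: {planned:.0f} ohm, "
      f"margin: {budget - planned:.0f} ohm")

In this assumed case the budget is 650 Ω against a planned 290 Ω, so the retrofit would leave a comfortable margin; a marginal or negative result would call for a higher supply voltage or a lower-resistance loop.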
3.4.2.2 HART System Installation
The first task before installing a HART-enabled system is to check and verify the HART-compatible devices. Manufacturers implement different levels of HART technology in their devices and systems; in fact, the capabilities of HART-compatible devices and HART-enabled systems vary widely, which requires that engineers, when specifying HART technology, consider such factors and parameters as the following: (1) Is the device registered with the HART Communication Foundation (HCF)? (2) Is its Device Description registered with the HCF? (3) What is the number of variables this device can measure? (4) Does this device comply with the HART specification (per the HCF)? (5) Does the device respond to HART Command 48 (per the HCF specifications)? (6) What unique or special features does this device support? (7) What diagnostic features does the device contain? The following questions are for the suppliers of I/O interface devices: (1) How much HART capability is embedded into the I/O, and how smart is it? (2) Can the I/O validate and secure the 4–20 mA signal?
(3) Is there one HART modem per channel, or is the I/O multiplexed? How fast can it update the HART digital values? (4) In what ways does the system support access to multivariable HART data from multivariable devices? (5) Can you merely "push a button" on the I/O to calibrate the network current or check the range? (6) Does the I/O support multiple-dropped networks? (7) Does the I/O automatically scan and monitor the HART-compatible field devices, or is scanning only possible using "pass through"?
The following questions should be asked of the control system suppliers: (1) Does your host use a "native" device description or does it require a different file type? (2) Does the system make it easy to use all HART capabilities? (3) How much training is required to learn how to get and use HART data? (4) How is the configuration of a HART-compatible device reviewed using the control system? (5) Can the system use secondary digital process variables? (6) Does it understand HART-compatible device status changes? (7) Can the system detect configuration changes? (8) Does the system do notification by exception? (9) How does the system detect changes in configuration and status? (10) How is the HART-compatible device status communicated to the operators? (11) How do you perform tests when there is an error in the device? (12) How open is the system to third-party software? Before installation, it is also necessary to enter device tags and other identification and configuration data into each field instrument. After installation, the instrument identification (tag and descriptor) can be verified in the control room using a configuration tool such as a hand-held communicator or a computer. Some field devices provide information on their physical configuration (e.g., wetted materials). These and other configuration data can also be verified in the control room. The verification process is important for safety. Once a field instrument has been identified and its configuration data confirmed, the analog network integrity can be checked using the network test feature, which is supported by many HART-compatible devices. The network test feature enables the analog signal from a HART transmitter to be fixed at a specific value to verify network integrity and ensure proper
connection to support devices such as indicators, recorders, and DCS displays. Use the HART protocol network test feature to check analog network integrity and ensure a proper physical connection among all network devices. An additional integrity check can be made by comparing the analog value to the digital value being reported by the device. For example, someone might have applied an offset to the 4–20 mA analog value that has not been accounted for in the control system. By comparing the digital value of the Primary Variable to the analog value, the network integrity can be verified (a small illustrative comparison is sketched after the integration list below). There are several ways to integrate HART data and leverage the intelligence in smart field devices. Several simple and cost-effective integration strategies are listed below in order to get more from currently installed HART-compatible devices and instruments (Fig. 3.43). (1) Point-to-point integration. This is the most common way to use HART. The communication capability of HART-compatible devices allows them to be configured and set up for specific applications, reducing costs and saving time in commissioning and maintenance. With a connection to the 4–20 mA wires, a device can be reached from remote locations by connecting anywhere on the current network to obtain device status and diagnostic information. (2) HART-to-analog integration. Signal extractors communicate with HART-compatible devices in real time (simultaneously) to convert the intelligent information in these devices into 4–20 mA signals for input into an existing analog control system. Add this
Figure 3.43 The dataflow diagram for the integration of the HART data: HART data flows between the operator's computer, the HART system/network controller, the HART I/O interface devices, and the field instruments.
capability one device at a time to get more from the intelligent HART-compatible devices. (3) HART-plus-analog integration. New HART multiplexer packaging solutions make it easy to communicate with HART-compatible devices by replacing the existing I/O termination panels. The analog control signal continues on to the control system as it does today, but the HART data is sent to a device asset management system, providing valuable diagnostic information 24/7. Although the control system is not aware of the HART data, this solution provides better access to device diagnostics for asset management improvements. (4) Full HART integration. Upgrading a field or remote I/O system provides an integrated path to continuously feed HART data directly into your control system. Most new control systems are HART-capable, and many suppliers offer software and I/O solutions to make upgrades simple and cost-effective. Continuous communication between the field device and the control system enables problems with the device, its connection to the process, or inaccuracies in the 4–20 mA control signal to be detected automatically, so that corrective action can be taken before there is a negative impact on the process operation.
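The integrity comparison mentioned before the integration list can be pictured with a short, hedged sketch: the expected loop current is recomputed from the digitally reported PV and the configured range and compared with the measured analog current. The range values and the tolerance are illustrative assumptions, not values from this handbook.

# Hedged sketch of the analog-versus-digital integrity comparison described above.

def expected_ma(pv: float, lrv: float, urv: float) -> float:
    """Map a PV in engineering units to the 4-20 mA signal, assuming a linear
    transfer function."""
    fraction = (pv - lrv) / (urv - lrv)
    return 4.0 + 16.0 * fraction

def network_integrity_ok(pv: float, measured_ma: float,
                         lrv: float, urv: float, tol_ma: float = 0.05) -> bool:
    """Flag an unexplained offset between the digital PV and the analog signal."""
    return abs(expected_ma(pv, lrv, urv) - measured_ma) <= tol_ma

# Example: a transmitter ranged 0-200 kPa digitally reporting PV = 150 kPa.
print(expected_ma(150.0, 0.0, 200.0))                   # 16.0 mA expected
print(network_integrity_ok(150.0, 16.7, 0.0, 200.0))    # False: unexplained offset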
3.4.2.3 HART System Configuration
The purpose of device configuration is to gain access to the HART data of a device. There are several methods of accessing the intelligent information in a HART-compatible device on a temporary or a permanent basis. The configuration of a HART-compatible device can be achieved using software and hardware tools. To configure a single device on a temporary basis, a universal hand-held configuration tool is needed, together with a power supply, a load resistor, and the HART-compatible device. Alternatively, configuration can be achieved by using a computer capable of running a device configuration application together with a HART modem. (1) Universal hand-held communicators. HART hand-held communicators are available from major instrumentation suppliers across the world and are supported by the member companies of the HCF. Using Device Description (DD) files, the communicator can fully configure any HART-compatible device for which it has a DD installed. If the communicator does not have the DD for a specific device, it will still communicate with and configure the device using the HART Universal and Common Practice commands.
There are 35–40 standard data items in every registered HART-compatible device. The data can be accessed by any approved configuration tool such as a communicator. These items do not require the use of a Device Description and typically cover the basic functionality of all devices; they are the Universal and Common Practice commands required of every registered HART device. To access the device-specific data, a current Device Description is required; it provides the communicator with the information it needs to fully access all the device-specific capabilities. A HART hand-held communicator, if so equipped, can also facilitate record keeping of device configurations. After a device is installed, its configuration data can be stored in memory or on a disk for later archiving or printing. There are many types of hand-held communicators available today; their features and abilities should be compared to find those that meet your specific requirements. (2) Computer-based device configuration and management tools. A HART-compatible device can be configured with a desktop or laptop computer (or other portable models) by using a computer-based software application and a HART interface modem (Fig. 3.44). The advantages of using a computer include an improved screen display and support for more Device Descriptions and Device Configurations owing to the additional computer-based memory storage capacity. Due to the critical nature of device configurations in the plant environment, these
Figure 3.44 The connection of a computer with a HART-compatible device for configuration (courtesy of the HCF): a PC/host application or hand-held terminal connects through an RS-232 or USB HART interface modem to the field device, which is powered from a power supply across a 250 Ω load resistor.
computers can also be used as backup storage for data from hand-held communicators. Software applications are available from many suppliers. It is important to review their features to determine ease of use, ability to add or download the Device Descriptions, and general functionality.
3.4.2.4 HART System Calibration
In order to take advantage of the digital capabilities of HART-compatible devices and instruments, especially for precisely reporting the data values of process control, it is essential that these devices and instruments be calibrated correctly. Like the calibration procedure for other devices, a calibration procedure for HART-compatible devices and instruments consists of a verification test, an adjustment to within acceptable tolerance if necessary, and a final verification test if an adjustment has been made. Furthermore, data from the calibration is collected and used to complete a report of calibration, documenting instrument performance over time. (1) Functional parts of HART devices. For a HART-compatible device, a multiple-point test between input and output does not provide an accurate representation of its operation. Just like a conventional device, the measurement process begins with a technology that converts a physical quantity into an electrical signal. However, the similarity to a conventional device ends here. Instead of a purely mechanical or electrical path between the input and the resulting 4–20 mA output signal, a HART-compatible device has a microprocessor that manipulates the input data. As shown in Fig. 3.45, there are typically three calculation sections involved, and each of these sections may be individually tested and adjusted in the calibration procedure. Prior to the first box in Fig. 3.45, the microprocessor of the device measures some electrical property that is affected by the process variable of interest. The measured value may be voltage, capacitance, reluctance, inductance, frequency, or some other property. However, before it can be used by the microprocessor, it must be transformed to a digital count by an analog-to-digital (A/D) converter. In the first box, the microprocessor of the device must rely upon some form of equation or table to relate the raw count value of the electrical measurement to the actual property (PV) of interest, such as temperature, pressure, or flow. The principal form of this table is usually established by the manufacturer, but
Figure 3.45 A functional block diagram of HART-compatible devices: the input section converts A/D counts to the process variable (PV), adjusted by the high and low sensor trim (the PV may be read digitally); the conversion section applies the range and transfer function to convert the PV to a milliamp value; and the output section converts that value to D/A counts for the 4–20 mA signal, adjusted by the high and low output trim (the mA value may be set and read digitally).
most HART-compatible devices and instruments include commands to perform field adjustments. This is often referred to as a sensor trim. The output of the first box is a digital representation of the process variable. When engineers read the process variable using a communicator, this is the value that they can see. The second box in Fig. 3.45 is strictly a mathematical conversion from the process variable to the equivalent milliamp representation. The range values of the instrument (related to the zero and span values) are used in conjunction with the transfer function to calculate this value. Although a linear transfer function is the most common, pressure transmitters often have a square-root option. Other special instruments may implement common mathematical transformations or user defined break point tables. The output of the second block is a digital representation of the desired instrument output. When engineers read the network current using a HART-communicator, this is the value that they see. Many HART-compatible instruments support a command which puts the instrument into a fixed output test mode. This overrides the normal output of the second block and substitutes a specified output value. The third box in Fig. 3.45 is the output section where the calculated output value is converted to a count value that can be loaded into a digital to analog converter. This produces the actual analog electrical signal. Once again the microprocessor must rely on some internal calibration factors to get the output correct. Adjusting these factors is often referred to as a current loop trim or 4–20 mA trim. (2) Basic steps of HART calibration. This analysis in (1) above tells us why a proper calibration procedure for a HART-compatible
instrument is significantly different from that for a conventional instrument. The specific calibration requirements depend upon the application. If the application uses the digital representation of the process variable for monitoring or control, then the sensor input section (the first box in Fig. 3.45) must be explicitly tested and adjusted. Please note that this reading is completely independent of the milliamp output, and has nothing to do with the zero or span settings. The PV as read with HART communication continues to be accurate even when it is outside the assigned output range. If the current network output is not used (i.e., the instrument is used as a digital-only device), then the input section calibration is all that is required. If the application uses the milliamp output, then the output section must be explicitly tested and calibrated. Please note that this calibration is independent of the input section and, again, has nothing to do with the zero and span settings. The same basic multiple-point test and adjust technique is employed, but with a new definition for output: to run a test, use a calibrator to measure the applied input, but read the associated output (PV) with a communicator. Error calculations are simpler because there is always a linear relationship between the input and output, and both are recorded in the same engineering units. In general, the desired accuracy for this test will be the manufacturer's accuracy specification. If the test does not pass, then follow the procedure recommended by the manufacturer for trimming the input section. This may be called a sensor trim and typically involves one or two trim points. Pressure transmitters also often have a zero trim, where the input calculation is adjusted to read exactly zero (not low range). Do not confuse a trim with any form of reranging or any procedure that involves using zero and span buttons. The same basic multiple-point test and adjust technique is employed again, but with a new definition for input: to run a test, use a communicator to put the transmitter into a fixed current output mode. The input value for the test is the mA value. The output value is obtained using a calibrator to measure the resulting current. This test also implies a linear relationship between the input and output, and both are recorded in the same engineering units (milliamps). The desired accuracy for this test should also reflect the manufacturer's accuracy specification. If the test does not pass, then follow the procedure recommended by the manufacturer for trimming the output section. This may be called a 4–20 mA trim, a current loop trim, or a D/A trim. The trim procedure should require two trim points close to or just outside
of 4 and 20 mA. Do not confuse this with any form of reranging or any procedure that involves using zero and span buttons. After calibrating both the input and output sections, a HART-compatible device or instrument should operate correctly. The middle block in Fig. 3.45 only involves computations; that is why the range, units, and transfer function can be changed without necessarily affecting the calibration. Notice also that even if the instrument has an unusual transfer function, it only operates in the conversion of the input value to a milliamp output value, and therefore is not involved in the testing or calibration of either the input or output sections. (3) Performance verification of HART calibration. If the goal of the calibration is to validate the overall performance of a HART-compatible device or instrument, it is sufficient to run a zero and span test like that applied to a conventional instrument. However, passing this test does not necessarily indicate that the transmitter is operating correctly, for the following reasons. Many HART-compatible instruments support a parameter called damping. If this is not set to zero, it can have an adverse effect on tests and adjustments. Damping induces a delay between a change in the instrument input and the detection of that change in the digital value of the instrument input reading and the corresponding instrument output value. This damping-induced delay may exceed the settling time used in the test or calibration. The settling time is the amount of time the test or calibration waits between setting the input and reading the resulting output. It is advisable to adjust the instrument damping value to zero prior to performing tests or adjustments. After calibration, be sure the damping constant is returned to its required value. There is a common misconception that changing the range of a HART-compatible instrument by using a communicator somehow calibrates the instrument. Remember that a true calibration requires a reference standard, usually in the form of one or more pieces of calibration equipment that provide an input and measure the resulting output. Therefore, since a range change does not reference any external calibration standards, it is really a configuration change, not a calibration. Please note that in the block diagram of HART-compatible devices (Fig. 3.45), changing the range only affects the second box. It has no effect on the digital process variable as read by a communicator. Using only the zero and span adjustments to calibrate a HART-compatible instrument (the standard practice associated with conventional instruments) often corrupts the internal digital readings. As shown in Fig. 3.45, there is more than one output to consider.
The digital PV and milliamp values read by a communicator are also outputs, just like the analog current network. The proper way to correct a zero drift condition is to use a zero trim, which adjusts the instrument input block so that the digital PV agrees with the calibration standard. If the digital process values are to be used for trending, statistical calculations, or maintenance tracking, the external zero and span buttons should be disabled and avoided entirely.
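To make the conversion-section arithmetic concrete, the hedged sketch below reproduces the range-and-transfer-function step from Fig. 3.45 for a linear and a square-root transfer function. The range values in the example are illustrative assumptions, not values from a particular instrument.

import math

# Illustrative sketch of the conversion section in Fig. 3.45: the digital PV is
# mapped through the configured range and transfer function to the milliamp
# value handed to the output section.

def pv_to_ma(pv: float, lrv: float, urv: float, transfer: str = "linear") -> float:
    """Convert a PV in engineering units to its 4-20 mA representation."""
    fraction = (pv - lrv) / (urv - lrv)        # 0.0 at the LRV, 1.0 at the URV
    if transfer == "square root":              # common option on pressure/flow devices
        fraction = math.sqrt(max(fraction, 0.0))
    return 4.0 + 16.0 * fraction

# Assumed example: a differential-pressure transmitter ranged 0-100 inH2O.
print(pv_to_ma(25.0, 0.0, 100.0))                  # linear: 8.0 mA
print(pv_to_ma(25.0, 0.0, 100.0, "square root"))   # square root: 12.0 mA

Because this block is pure computation, re-ranging (changing the LRV/URV) only changes the mapping; it does not touch the sensor trim of the input section or the current trim of the output section, which is exactly the point made above.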
3.4.3 HART Protocol
HART protocol is widely recognized as the industry standard for digitally enhanced 4–20 mA field instrument communication in process control. In industrial process control, the HART protocol provides a uniquely backward-compatible solution for field instrument communication, as both 4–20 mA analog and digital signals are transmitted simultaneously on the same wiring.
3.4.3.1 HART Protocol Model
HART communication uses a master–slave protocol, which means that a field device, as slave, speaks only when it is spoken to by a master device. In every communication, the master device sends a "command" message first; upon receiving this command message, the slave device processes it and then sends back a "response" message to that master device (Fig. 3.46). Both the "command" and "response" messages include the HART data and must be formatted in accordance with the relevant HCF (HART Communication Foundation) specifications.
Figure 3.46 The HART master–slave protocol model: the master issues a request (command), the slave receives it as an indication and returns a response, and the master receives that response as a confirmation; a time-out bounds the wait for the response.
The HART protocol can be used in various modes for communicating information to and from smart field instruments and central control or monitoring equipment, using either analog-plus-digital signaling or digital-only signaling. Digital master–slave communication simultaneous with the 4–20 mA analog signal is the most common. This mode, depicted in Fig. 3.46, allows digital information from the slave device to be updated twice per second in the master. The 4–20 mA analog signals are continuous and can still carry the primary variable for control. Please note that while a master–slave transaction is in progress, it cannot be interrupted. "Burst" is an optional communication mode (Fig. 3.46) which allows a single slave device to continuously broadcast a standard HART response message. This mode frees the master from having to send repeated command requests to get updated process variable information. The same HART response message (PV or other, see Fig. 3.45) is continuously broadcast by the slave until the master instructs the slave to do otherwise. Data update rates of 3–4 per second are typical with "burst" mode communication and will vary with the chosen command. Please note that "burst" mode should be used only in networks with a single slave device. Two masters (primary and secondary) can communicate with slave devices in a HART network. Secondary masters, such as hand-held communicators, can be connected almost anywhere on the network and communicate with field devices without disturbing communication with the primary master. A primary master is typically a PLC or a computer-based central control or monitoring system. A typical installation with two masters is shown in Fig. 3.33 and Fig. 3.44. From an installation perspective, the same wiring used for conventional 4–20 mA analog instruments carries the HART communication signals. Allowable cable run lengths will vary with the type of cable and the devices connected, but are in general up to 3000 m for a single shielded twisted-pair cable and 1500 m for multiple twisted-pair cables with a common shield. Unshielded cables can be used for short distances. Intrinsic safety barriers and isolators which pass the HART signals are readily available for use in hazardous areas. The HART protocol also has the capability to connect multiple field devices on the same pair of wires in a multiple-dropped network configuration, as shown in Fig. 3.33(b). In multiple-dropped networks, communication is limited to master–slave digital only. The current through each slave device is fixed at a minimum value to power the device (typically 4 mA) and no longer has any meaning relative to the process. The HART protocol utilizes the OSI 7-layer reference model. As is the case for most communication systems at the field level, the HART
protocol implements only layers 1, 2, and 7 of the OSI model. Layers 3–6 remain empty, since their services are either not required or are provided by Application Layer 7 (see Fig. 3.47). The Application Layer defines the commands, responses, data types, and status reporting supported by the protocol. In addition, there are certain conventions in HART (e.g., how to trim the network current) that are also considered part of the Application Layer. While the Command Summary, Common Tables, and Command Response Code Specifications all establish mandatory Application Layer practices (including data types, common definitions of data items, and procedures), the Universal Commands specify the minimum Application Layer content of all HART-compatible devices.
Application layer: provides the user with network-capable applications; HART is command oriented, with predefined data types and application procedures.
Presentation layer: converts application data between network and local machine formats; not used by HART.
Session layer: connection management services for applications; not used by HART.
Transport layer: provides network-independent, transparent, reliable message transfer; used by Wireless HART for auto-segmented transfer of large data sets, reliable stream transport, and negotiated segment sizes.
Network layer: end-to-end routing of packets and resolving of network addresses; used by Wireless HART for a power-optimized, redundant-path, self-healing wireless mesh network.
Data Link layer: establishes data packet structure, framing, error detection, and bus arbitration; wired HART uses a binary, byte-oriented, token-passing, master–slave protocol, while Wireless HART uses secure and reliable, time-synched TDMA/CSMA, frequency agile with ARQ.
Physical layer: mechanical/electrical connection, transmitting the raw bit stream; wired HART uses simultaneous analog and digital signaling over normal 4–20 mA copper wiring, while Wireless HART uses 2.4 GHz, 802.15.4-based radios with 10 dBm transmit power.
Figure 3.47 HART protocol implementing the OSI 7-layer model.
3.4.3.2 HART Protocol Commands
In the communication routines of the application layer of the HART protocol, the master devices and operating programs use HART commands to give instructions or send messages plus data to a field device. Upon receiving the command message, the field device immediately processes it and then responds by sending back a response message which can contain requested status reports and/or the data of the field device. Table 3.9 provides the classes of the HART commands, Table 3.10 gives a summary of the HART commands, and Figure 3.48 shows the standard format of both the HART command and response messages.

Table 3.9 HART Command Classes
Universal Commands: all devices using the HART protocol must recognize and support the universal commands. Universal commands provide access to information useful in normal operations; for example, read primary variable and units, read manufacturer and device type, read current output and percentage of range, and read sensor serial number and limits.
Common Practice Commands: common practice commands provide functions implemented by many, but not necessarily all, HART communication devices. The HART specifications recommend that devices support these commands when applicable. Examples of common practice commands are read a selection of up to four dynamic variables, write damping time constant, write transmitter range, set fixed output current, and perform self-test.
Device-Specific Commands: device-specific commands represent functions that are unique to each field device. These commands access setup and calibration information as well as information about the construction of the device. Information on device-specific commands is available from device manufacturers or in the Field Device Specification document. Examples of device-specific commands are read or write sensor type; start, stop, or clear totalizer; read or write alarm relay set point; etc.
Table 3.10 HART Commands Summary
Universal Commands: read manufacturer and device type; read primary variable (PV) and units; read current output and percentage of range; read up to four predefined dynamic variables; read or write 8-character tag, 16-character descriptor, and date; read or write 32-character message; read device range values, units, and damping time constant; read or write final assembly number; write polling address.
Common Practice Commands: read a selection of up to four dynamic variables; write damping time constant; write device range values; calibrate (set zero, set span); set fixed output current; perform self-test; perform master reset; trim PV zero; write PV unit; trim DAC zero and gain; write transfer function (square root/linear); write sensor serial number; read or write dynamic variable assignments.
Device-Specific Commands (examples): read or write low-flow cut-off; start, stop, or clear totalizer; read or write density calibration factor; choose PV (mass, flow, or density); read or write materials or construction information; trim sensor calibration; PID enable; write PID set point; valve characterization; valve set point; travel limits; user units; local display information.
PREAMBLE | START | ADDR | COMM | BCNT | [STATUS] | [DATA] | CHK
Preamble: 5 to 20 bytes of hex FF. Start character: 1 byte. Addresses (source and destination): 1 or 5 bytes. Command: 1 byte. Byte count (of status and data): 1 byte. Status: 2 bytes, only in slave responses. Data: 0 to 25 bytes (25 bytes is a recommended maximum data length; the maximum number of data bytes is not defined by the protocol specifications). Checksum: 1 byte.
Figure 3.48 The standard format of the HART protocol command and response frames.
In accordance with the HART command specification, this format includes the following fields: (1) First, the preamble, of between 5 and 20 bytes of hex FF (all 1s), helps the receiver to synchronize to the character stream. (2) The start character may have one of several values, indicating the type of message (master to slave, slave to master, or burst message from slave) and the address format (short frame or long frame). (3) The address field includes both the master address (a single bit: 1 for a primary master, 0 for a secondary master) and the slave address. In the short frame format, the slave address is 4 bits containing the "polling address" (0–15). In the long frame format, it is 38 bits containing a "unique identifier" for that particular device. (One bit is also used to indicate whether a slave is in burst mode.) (4) The command byte contains the HART command for this message. Universal commands are in the range 0–30; common-practice commands are in the range 32–126; device-specific commands are in the range 128–253. (5) The byte count byte contains the number of bytes to follow in the status and data bytes. The receiver uses this to know when the message is complete. (There is no special "end of message" character.) (6) The status field (also known as the "response code") is two bytes, present only in the response message from a slave. It contains information about communication errors in the outgoing message, the status of the received command, and the status of the device itself. (7) The data field may or may not be present, depending on the particular command. A maximum length of 25 bytes is recommended, to keep the overall message duration reasonable. (But some devices have device-specific commands using longer data fields.) See also the discussion of HART data below. (8) Finally, the checksum byte contains an "exclusive-OR" or "longitudinal parity" of all previous bytes (from the start character onward). Together with the parity bit attached to each byte, this is used to detect communication errors.
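To make the frame layout concrete, here is a hedged sketch that assembles a short-frame master request and computes the longitudinal-parity checksum as described in items (1) to (8). The preamble length, polling address, and command number are example choices, and the particular start-character and address-bit values follow commonly published HART framing conventions rather than anything taken from this handbook.

# Illustrative assembly of a HART short-frame request (not a vendor library).

def hart_checksum(frame_body: bytes) -> int:
    """Longitudinal parity: exclusive-OR of every byte from the start
    character through the last data byte."""
    chk = 0
    for b in frame_body:
        chk ^= b
    return chk

def build_short_frame(command: int, polling_address: int,
                      data: bytes = b"", primary_master: bool = True,
                      preamble_len: int = 5) -> bytes:
    start = 0x02                                  # master-to-slave, short frame (assumed convention)
    addr = (0x80 if primary_master else 0x00) | (polling_address & 0x0F)
    body = bytes([start, addr, command, len(data)]) + data
    return bytes([0xFF] * preamble_len) + body + bytes([hart_checksum(body)])

# Example: Command 0 to the device at polling address 0.
frame = build_short_frame(command=0, polling_address=0)
print(frame.hex(" "))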
3.4.3.3 HART Protocol Data
(1) HART data. There are several types of data or information that can be communicated from a HART-compatible device. These include: (a) device data, (b) supplier data, (c) measurement data, and (d) calibration data.
The following is a summary of these data items available for communication between HART-compatible devices and a host. (a) Process variable values. (i) Primary process variable (analog): 4–20 mA current signals continuously transmitted to host. (ii) Primary process variable (digital): digital value in engineering units, IEEE floating point, up to 24-bit resolution. (iii) Percent range: primary process variable expressed as percent of calibrated range. (iv) Loop current: loop current value in milliamps. (v) Secondary process variable 1: digital value in engineering units available from multivariable devices. (vi) Secondary process variable 2: digital value in engineering units available from multivariable devices. (vii) Secondary process variable 3: digital value in engineering units available from multivariable devices. (b) Commands from host to device. (i) Set primary variable units. (ii) Set upper range. (iii) Set lower range. (iv) Set damping value. (v) Set message. (vi) Set tag. (vii) Set date. (viii) Set descriptor. (ix) Perform loop test: force loop current to a specific value. (x) Initiate self-test: start device self-test. (xi) Get more status available information. (c) Status and diagnostic alerts. (i) Device malfunction: indicates device self-diagnostic has detected a problem in device operation. (ii) Configuration changed: indicates device configuration has been changed. (iii) Cold start: indicates device has gone through a power cycle. (iv) More status available: indicates additional device status data available. (v) Primary variable analog output fixed: indicates device is in fixed current mode. (vi) Primary variable analog output saturated: indicates 4–20 mA signal is saturated. (vii) Secondary variable out of limits: indicates secondary variable value outside the sensor limits.
(viii) Primary variable out of limits: indicates primary variable value outside the sensor limits. (d) Device identification. (i) Instrument tag: user defined, up to 8 characters. (ii) Descriptor: user defined, up to 16 characters. (iii) Manufacturer name (code): code established by HCF and set by manufacturer. (iv) Device type and revision: set by manufacturer. (v) Device serial number: set by manufacturer. (vi) Sensor serial number: set by manufacturer. (e) Calibration information for 4–20 mA transmission of primary process variable. (i) Date: date of last calibration, set by user. (ii) Upper range value: primary variable value in engineering units for the 20 mA point, set by user. (iii) Lower range value: primary variable value in engineering units for the 4 mA point, set by user. (iv) Upper sensor limit: set by manufacturer. (v) Lower sensor limit: set by manufacturer. (vi) Sensor minimum span: set by manufacturer. (vii) PV damping: primary process variable damping factor, set by user. (viii) Message: scratch pad message area (32 characters), set by user. (ix) Loop current transfer function: relationship between the primary variable digital value and the 4–20 mA current signal. (x) Loop current alarm action: loop current action on device failure (upscale/downscale). (xi) Write protect status: device write-protect indicator. (A small sketch illustrating these data items as a simple record appears at the end of this subsection.) (2) DDL device description. The HART commands in the application layer are based on the services of the lower layers and enable open communication between the master and the field devices. All HART-compatible devices, regardless of manufacturer, are capable of this open intercommunication as long as the field devices operate exclusively with the universal and common-practice commands. In a HART-enabled system, the user then does not need more than the simple HART standard notation for the status and fault messages. When the user wants the messages to contain further device-related information, or when special properties of a field device are also to be used, the common-practice and universal commands are not sufficient. Using and interpreting the data requires that the user know their meaning. However, this knowledge is not available in
systems that are further extended to integrate new components with additional options. To eliminate the adaptation of the master device's software whenever an additional status message is included or a new component is installed, the device description language (DDL) was accordingly developed. The DDL is not limited to HART applications. It was developed and specified for all Fieldbuses, independently of the HART protocol, by the Human–Machine Interface workshop of the International Fieldbus Group (IFG). The developers of the device description language (DDL) aimed at achieving versatile usability, and the DDL also finds use in field networks. The required flexibility is ensured insofar as the DDL does not itself determine the number and functions of the device interfaces and their representation in the control stations. The DDL is simply a language, similar to a programming language, which enables the device manufacturers to describe all communication options in an exact and complete manner. The DDL allows the manufacturer to describe (a) attributes and additional information on communication data elements, (b) all operating states of the device, (c) all device commands and parameters, and (d) the menu structure, thus providing a clear representation of all operating and functional features of the device. Having the device description of a field device and being able to interpret it, a master device is equipped with all the information necessary to make use of the complete performance features of the field device. Device-specific and manufacturer-specific commands can also be executed, and the user is provided with a universally applicable and uniform user interface, enabling him or her to clearly represent and perform all device functions. Thanks to this additional information, clear, exact, and hence safer operation and monitoring of a process is made possible. The master device does not read the device description as readable text in DDL syntax, but as a short, binary-coded Device Description data record specially generated by the DDL encoder (or DDL compiler). For devices with sufficient storage capacity, this short form opens up the possibility of storing the device description in the firmware of the field device itself. During the parameterization phase, it can then be read by the corresponding master device.
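As promised above, here is a hedged, illustrative sketch of the standard identification and calibration items listed in (1). The field names and example values are assumptions made for illustration; they are not names defined by the HART specifications or by any device description.

from dataclasses import dataclass

# Illustrative record of the per-device data items listed in (1) above.

@dataclass
class HartDeviceRecord:
    tag: str                  # user defined, up to 8 characters
    descriptor: str           # user defined, up to 16 characters
    manufacturer_code: int    # established by the HCF, set by manufacturer
    device_type: int          # set by manufacturer
    device_serial: int        # set by manufacturer
    pv_units: str             # primary variable engineering units
    lower_range_value: float  # PV value at the 4 mA point
    upper_range_value: float  # PV value at the 20 mA point
    pv_damping_s: float       # damping factor, set by user
    transfer_function: str    # "linear" or "square root"
    write_protected: bool     # device write-protect indicator

# Example record for a hypothetical pressure transmitter.
pt101 = HartDeviceRecord(tag="PT101", descriptor="REACTOR FEED",
                         manufacturer_code=0x26, device_type=0x05,
                         device_serial=123456, pv_units="kPa",
                         lower_range_value=0.0, upper_range_value=200.0,
                         pv_damping_s=0.5, transfer_function="linear",
                         write_protected=False)
print(pt101.tag, pt101.upper_range_value)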
3.4.4 HART Integration

3.4.4.1 Basic Industrial Field Networks

There are many different field networks available in industrial control for instrument engineers today. Understanding how these networks are classified should allow us to choose the right tools for the control applications. Some common industrial communication protocols are introduced below by focusing on their native applications and placing them in three basic categories: (1) sensor networks are those protocols initially designed to support discrete I/O; (2) device networks are those protocols originally focused on process instrumentation; (3) control networks are those protocols typically used to connect controllers and I/O systems. (1) Sensor networks. Sensor-level protocols have a principal focus on supporting digital communications for discrete sensors and actuators. Sensor-level protocols tend to have very fast cycle times and, since they are often promoted as an alternative to PLC discrete I/O, the cost of a network node should be relatively low. The protocols listed below are the simplest forms of sensor networks available today. (a) AS-i. AS-i acts as a network-based replacement for a discrete I/O card. Consequently, AS-i offers perhaps the simplest network around, consisting of up to 31 slave devices with 248 I/O bits and the following functionality: (1) the master polls each slave, (2) the master message contains four output bits, (3) the slave answers immediately with four input bits, (4) diagnostics are included in each message, and (5) a worst-case scan time of less than 5 ms is achieved. (b) CAN. The Controller Area Network (CAN) defines only basic, low-level signaling and medium access specifications, which are both simple and unique. Even though CAN medium access is technically CSMA/CD, it provides simple, highly reliable, prioritized communication between intelligent devices, sensors, and actuators in automotive applications. Of these advantages, reliability is paramount: network errors while driving a car on a busy interstate highway are unacceptable. Today, CAN is used in a vast number of vehicles and in a variety of other applications.
As a result: (1) a large number of different chips and vendors support CAN, (2) the total chip volume is huge, and (3) the parts cost is small. (c) DeviceNet. DeviceNet is a well-established machine and manufacturing automation network supported by a substantial number of products and vendors in the semiconductor industry. DeviceNet specifies the physical layer (connectors, network terminators, power distribution, and wiring) and application layer operation based on the CAN standard. While all the popular application layer hierarchies are supported (client and server, master and slave, peer-to-peer, publisher and subscriber), in practice most devices support only master–slave operation, which results in significantly lower costs. In comparison with CAN, DeviceNet provides a configurable structure for the operation of the network, allowing the selection of polled, cyclic, or event-driven network operation. (d) Interbus. Interbus is a popular industrial network that uses a ring topology in which each slave has an input and an output connector. Interbus is one of the few protocols that are full duplex: data is transmitted and received at the same time. Interbus communication is cyclical, efficient, fast, and deterministic (e.g., 4096 digital inputs and outputs scanned in 14 ms). Owing to the ring topology, Interbus commissioning offers these advantages: (1) node addresses are not required because the master can automatically identify the nodes on the network; (2) slaves provide identification information that allows the master to determine the quantity of data provided by the slave; (3) using this data, the master explicitly maps the data to and from the slave into the bit stream as it shifts through the network. Interbus works as a large network-based shift register in which a bus cycle begins with the network master transmitting a bit stream. As the first slave receives the bits, they are echoed onward, passing the data to the next slave in the ring. This process is simultaneous with data being shifted from the master to the first slave; in turn, the data from the first slave is shifted into the master. (2) Device networks. Device network protocols support process automation that is fundamentally continuous and analog, with more complex transmitters and valve actuators. Transmitters typically include pressure, level, flow, and temperature. The valve actuators can include intelligent controllers, motorized valves, and pneumatic positioners. Three device networks are prevalent in
the process automation industry: Foundation Fieldbus H1, HART, and PROFIBUS-PA. (a) Foundation Fieldbus H1. In August 1996, the Fieldbus Foundation released its H1 specifications, which focus on the idea that "the network is the control system." In a fundamental departure from the controller-and-I/O approach used in traditional systems, Foundation Fieldbus specifies not only a communication network but also control functions. As a result, the purpose of the communications is to pass data to facilitate proper operation of the distributed control application. Its success relies on synchronized cyclical communication and on a well-defined application layer. In Foundation Fieldbus H1, communications occur within framed intervals of fixed time duration (a fixed repetition rate) and are divided into two phases: scheduled cyclical data exchange and acyclic (e.g., configuration and diagnostics) communication. The communication is controlled by polling the network and thereby prompting the process data to be placed on the bus. This is done by passing a special token to grant the bus to the appropriate device, resulting in the cyclic data being generated at regular intervals. In Foundation Fieldbus H1 networks, the application layer defines function blocks, which include analog input, analog output, transducer, and other function blocks. The data on the network is transferred from one function block to another in the network-based control system. The data is complex and includes the digital value, engineering units, and status of the data (to indicate, for example, that the PID is in manual or that the measured value is suspect). (b) HART. HART is unique among device networks because it is fundamentally an analog communications protocol. All the other protocols use digital signaling, while HART uses modulated communications: the "HART digital communications" modulate analog signals centered in a frequency band separated from the 4–20 mA signaling. HART enhances smart 4–20 mA field devices by providing two-way communication that is backward compatible with existing installations, which allows HART to support two communication channels simultaneously: a one-way channel carrying a single process value (the 4–20 mA signal) and a bidirectional channel to communicate digital process values, status, and diagnostics. Consequently, the HART protocol can be used in traditional 4–20 mA applications, and allows the benefits of digital communication to be realized in existing plant installations.
HART is a simple, easy-to-use protocol because of these facts: (1) two masters are supported, using token passing to provide bus arbitration; (2) a field device is allowed to publish process data ("burst mode"); (3) cyclical process data includes the floating-point digital value, engineering units, and status; (4) operating procedures are standardized (for current loop reranging, loop test, and transducer calibration); (5) standardized identification and diagnostics are provided. (c) PROFIBUS-PA. PROFIBUS-PA (process automation) was introduced to extend PROFIBUS-DP (decentralized peripherals) in order to support process automation. PROFIBUS-PA, which operates over the same H1 physical layer as Foundation Fieldbus H1, is essentially a LAN for communication with process instruments. PROFIBUS-PA networks are fundamentally master–slave, so sophisticated bus arbitration is not necessary. PROFIBUS-PA also defines profiles for common process instruments, including both mandatory and optional properties (data items). When a field device supports a profile, some configuration of the device should be possible without being device-specific. (3) Control networks. Control networks are focused on providing a communication backbone that allows integration of controllers, I/O, and subnetworks. Control networks stand at the crossroads between the growing capabilities of industrial networks and the penetration of enterprise networks into the control system. As such, control networks are able to move large volumes of heterogeneous data and operate at high data rates. A brief overview of four control networks is given below: ControlNet, Industrial Ethernet, Ethernet/IP, and PROFIBUS-DP. (a) ControlNet. ControlNet was developed as a high-performance network suitable for both manufacturing and process automation. ControlNet uses Time Division Multiple Access (TDMA) to control access to the network, which means a network cycle is assigned a fixed repetition rate. Within the bus cycle, data items are assigned a fixed time division for transmission by the corresponding device. Data objects are placed on ControlNet within a designated time slot and at precise intervals. Once the data to be published is identified along with its time slot, any device on the network can be configured to use the data. In the second half of a bus cycle, acyclic communications occur. ControlNet is more efficient than polled or token-passing protocols because its data transmission is very deterministic.
(b) Industrial Ethernet. Many industrial communication protocols specify mechanisms to embed their protocols in Ethernet. Ethernet addresses only the lower layers of communication networks and does not address the meaning of the data it transports. Even though Ethernet effectively communicates many protocols simultaneously over the same wire, it provides no guarantee that data can be exchanged between different protocols. In Industrial Ethernet, the protocol actually being adopted is TCP/IP, not Ethernet alone. TCP/IP is used in two approaches to support the session, presentation, and application layers of the corresponding industrial protocol. First, the industrial protocol is simply encapsulated in TCP/IP, allowing the shortest development time for defining industrial protocol transportation over TCP/IP. The second approach actually maps the industrial protocol to TCP and UDP services. While this strategy takes more time and effort to develop, it results in a more complete implementation of the industrial protocol on top of TCP/IP. UDP is a connectionless, unreliable communication service that works well for broadcasts to multiple recipients and fast, low-level signaling; UDP is used by several industrial protocols (for example, for time synchronization). TCP is a connection-oriented data stream that can be mapped to the data and I/O functions in some industrial protocols. (c) Ethernet/IP. Ethernet/IP is a mapping of the "Control and Information Protocol (CIP)" used in both ControlNet and DeviceNet to TCP/IP (not Ethernet). While all the basic functionality of ControlNet is supported, the hard real-time determinism that ControlNet offers is not present. CIP is being promoted as a common, object-oriented mechanism for supporting both manufacturing and process automation functions. Ethernet/IP is a good contribution to the growing discussion of Industrial Ethernet. However, Ethernet/IP is a recent development and is basically the application of an existing industrial network's application layer to TCP/IP. (d) PROFIBUS-DP. PROFIBUS-DP (Decentralized Peripherals) is a master–slave protocol used primarily to access remote I/O. Each node is polled cyclically, updating the status and data associated with the node. Operation is relatively simple and fast. PROFIBUS-DP also supports the Fieldbus Message Specification (FMS), which is a more complex protocol for demanding applications and includes support for multiple masters and peer-to-peer communication.
3.4.4.2 Choosing the Right Field Networks
Before choosing the most suitable network for an industrial control application, it is necessary to review all the open network types available today. By carefully reviewing each of these types of open communication network, their respective strengths and weaknesses, and in particular some important technical factors, can be understood. Please note that the specific application matters far more than the technology used; specific, measurable benefits must drive the selection process. The first factor that should be considered is the cost of each network. The second is network connectivity: the bottom line is getting your data where you want it. It is also very important to remember that all communication networks cause changes; although the actual changes can be very difficult to foretell accurately, their effect must be realistically considered. In some instances, we have to make a decision between HART and other types of field networks. Before making such a decision, it is important to understand the differences between HART and the other field networks. For example, if the choice is constrained to HART versus Foundation Fieldbus, a comparison of these two types in every category is necessary. Table 3.11 lists the differences in elemental technical features between HART and Foundation Fieldbus, which should be a help in choosing between them.
3.4.4.3 Integrating HART with Other Field Networks
The integration of a HART network with a Foundation Fieldbus network is taken as an example here to demonstrate the strategies and techniques of integrating HART with other field networks. As mentioned above, one fundamental difference between HART and Fieldbus devices is that with Fieldbus devices you can implement control strategies inside the field devices themselves; with HART devices you can implement the same control strategies, but the execution of the actual control algorithms takes place in the control system computer or PLC. A well-designed control system will allow an integrated control strategy to use devices independent of communication protocol. HART and Fieldbus devices both have the capability of providing a wide range of diagnostic data about the device; in fact, the diagnostic capabilities of HART and Fieldbus devices are nearly the same. The types of device diagnostics vary widely, depending on the type of device. Measurement transmitters will have diagnostics related to the status of the
transducer and measurement logic in the device. Control devices such as valves will provide a lot of information about the mechanical condition of the device. Both transmitters and valves will provide diagnostic information about the communication electronics in their respective devices.

Table 3.11 A Comparison between HART and Foundation Fieldbus (FF)
Technology acceptance. HART: well proven, with a large installed base; will continue to be sold as a replacement unit; simple for technicians' competence. FF: proven, with a growing installed base; training required; new investment will occur increasingly in FF.
Power limitation (advances in silicon power consumption are the same for HART and FF, so FF will always have the capability for more functionality). HART: 35 mW at 4 mA available for the HART signal; cannot "mirror" Fieldbus, but may provide an 80% solution. FF: minimum power requirement of 8 mA; no specification limit other than the ultimate FF segment power budget.
Communication performance. HART: 1200 bits/s; additional burden on the host. FF: FF H1 communicates at 31,250 bits/s.
Transmitter diagnostics. HART: device only; includes predictive diagnostics. FF: the device plus other devices.
Advanced diagnostics. HART: no knowledge of other devices; does not have the processing power. FF: for example, Statistical Process Monitoring and Machinery Health monitoring.
Push or poll. HART: polled for HART status periodically; status can be missed. FF: events are latched and time-stamped in the device and sent by the device; there is no chance of missing field problems with FF.
Two-way communication to other devices. HART: no. FF: yes.
Multiple-dropped. HART: very limited; in theory 15 devices, in practice around 3 on a slow series loop; all devices wired individually. FF: true multiple-drop; physically 32 devices, realistically 12–16 devices.
Use in safety instrumented systems. FF: 2007 offers the holding-back technology.
Control in the field and advanced applications. HART: does not support the function block model. FF: the function block model supports interoperable control in the field, where blocks can reside in the field device.
Multiple variables. HART: in digital mode only, and it is limited. FF: yes.
Footprint and hardware reduction. HART: no. FF: renders obsolete all separate signal conditioners, isolation amplifier cards, output cards, CPU cards, I/P converters, etc.
Future-proof devices (typical upgrade capability in the field). HART: no. FF: new versions of firmware can be downloaded over the H1 link without disconnecting the device from the H1 segment, for devices conformant with the Fieldbus specification.
Full specifications in the devices. HART: no. FF: embedded at the factory; travel with the instrument; upload directly to "Smart Instrument" software.
Commissioning speed. HART: hours for individually wired devices. FF: reduces commissioning time and the time to perform diagnostics; the networking capability of FF allows the user to commission a device in tens of seconds.
Zhang_Ch03.indd 422
5/13/2008 5:41:46 PM
3: SYSTEM INTERFACES FOR INDUSTRIAL CONTROL
423
Although the diagnostic data provided by HART and Fieldbus is very similar, the way they get to the control system and the way they get to the operator or technician to see it can be quite different. This has to do with the speed and characteristics of the communication technology used by these two protocols. Fieldbus basically uses point-to-point communication technology. This means when a Fieldbus device detects a diagnostics condition it wants to report, it can send an event out on the bus with the related information. The control system picks up the event and immediately displays or annunciates it on the console. HART devices, on the other hand, have to continually undergo polling to see if there is anything to report. Because the polling occurs at 1200 bps with HART, there are limitations on how many devices it can poll for alters in a specific time frame. An operator can poll a small number of critical devices for alters within seconds or a large number of devices within minutes. However, it is possible to implement an effective diagnostic alert system with HART as long as you understand the restrictions on response and device count. Once the operator or maintenance engineer is aware of a problem in a filed device either through an alert or some other means, the actual display of the status information from HART and Fieldbus devices is very similar. Usually, a record of this status event will automatically log into the control system. The logging of HART and Fieldbus device problems should normally look the same on a well-integrated system. The type of portable maintenance tools required in systems of HART and Fieldbus devices is also an important factor for consideration. Portable tools currently fall into two general categories. The first is intrinsically safe hand-held devices. Several are available for HART only devices. A combined HART and Fieldbus intrinsically hand-held device has become available, too. This integrated tool allows the user to configure and diagnose HART and Fieldbus devices while in the field. The laptop computer is the second type of portable tool. However, these types of computers cannot go in hazardous areas of a plant. As far as small hand-held computers go, how practical these will be in a plant environment remains an open question.
Bibliography Alan R. Dewey in EMERSON. 2005. HART, Fieldbus Work Together in Integrated Environment. http://www.emersonprocess.com/home/library/articles/protocol/ protocol0507_teamwork.pdf. Accessed date: October 2007. Analog Services, Inc. (http://www.analogservices.com). 2006. HART Book. http:// www.analogservices.com/about_part0.htm. Accessed date: October 2007.
Zhang_Ch03.indd 423
5/13/2008 5:41:46 PM
424
INDUSTRIAL CONTROL TECHNOLOGY
AS-INTERFACE (http://www.as-interface.net). 2007a. AS-Interface System. http://www.as-interface.net/System/. Accessed date: May. AS-INTERFACE (http://www.as-interface.net). 2007b. AS-Interface Products. http://www.as-interface.net/Products/. Accessed date: May. CiA (http://www.can-cia.org). 2005a. Registered Free Download; CAN Physical Layer Specification Version 2.0. http://www.can-cia.org/downloads/ ciaspecifications/?557. Accessed date: July. CiA (http://www.can-cia.org). 2005b. Registered Free Download; CAN Application Layer Specification Version 1.1. http://www.can-cia.org/ downloads/ciaspecifications/?1169. Accessed date: July. CiA (http://www.can-cia.org). 2005c. Registered Free Download; CANopen Specification Version 1.3. http://www.can-cia.org/downloads/ciaspeci fications/?1136. Accessed date: July. Commfront (http://www.commfront.com). 2007. RS232, 485,422,530 Buses. http://www.commfront.com/CommFront-Home.htm. Accessed date: July. Cyber (http://cyber.felk.cvut.cz). 2006. Supervisory Human Operation. http:// cyber.felk.cvut.cz/gerstner/biolab/bio_web/projects/iga2002/index.html. Accessed date: October. David Belohrad and Miroslav Kasal. 1999. FSK Modem with GALs. http:// www.isibrno.cz/~belohrad/radioelektronika99-fskmodem.pdf. Accessed date: October 2007. Degani, Asaf, Shafto, Michael, Kirlik, Alex. 2006. Modes in Human–Machine Systems: Review, Classification, and Application. http://ic-www.arc.nasa .gov/people/asaf/interface_design/pdf/Modes%20in%20Human-Machine %20Systems.pdf. Accessed date: October. Degani, Asaf. 2006. Modeling Human–Machine Systems: On Modes, Error, and Patterns of Interaction. http://ase.arc.nasa.gov/people/asaf/hai/pdf/Degani_ Thesis.pdf. Accessed date: October. ESD-Electronics (http://www.esd-electronics.com). 2005. Controller Area Network. http://www.esd-electronics.com/german/PDF-file/CAN/Englisch/intro-e .pdf. Accessed date: July. Fieldbus (http://www.fieldbus.org). 2005a. FOUNDATION Technology. http:// www.fieldbus.org/index.php?option=com_content&task=view&id=45&Ite mid=195. Accessed date: July. Fieldbus (http://www.fieldbus.org). 2005b. Profibus Technology. http://www .pepperl-fuchs.com/pa/interbtob/profibus/default_e.html. Accessed date: July. Fieldbus Centre (http://www.knowthebus.org). 2005a. FOUNDATION Fieldbus. http://www.knowthebus.org/fieldbus/foundation.asp. Accessed date: July. Fuji Electric (http://web1.fujielectric.co.jp). 2007. Fuji AS-I Technologies. http://www.fujielectric.co.jp/fcs/eng/as-interface/as_i/index.html. Accessed date: May. Grid Connect (http://www.industrialethernet.com). 2005. Industrial Ethernet. http://www.industrialethernet.com/etad.html. Accessed date: July. Groover, Mikell P. 2001. Automation, Production Systems, and ComputerIntegrated Manufacturing. Second Edition. New Jersey: Prentice Hall. H. Kirrmann in ABB Research Center of Switzerland. 2006. The HART Protocol. AI_411_HART.ppt. Accessed date: October 2007.
Zhang_Ch03.indd 424
5/13/2008 5:41:46 PM
3: SYSTEM INTERFACES FOR INDUSTRIAL CONTROL
425
Hardware Secrets (http://www.hardwaresecrets.com). 2007a. PCI Bus Tutorial. http://www.hardwaresecrets.com/article/190. Accessed date: May. Hardware Secrets (http://www.hardwaresecrets.com). 2007b. AGP Bus Tutorial. http://www.hardwaresecrets.com/article/155. Accessed date: July. Harris, Don. 2006. Human–Machine Interaction. http://www.cranfield.ac.uk/soe/ postgraduate/hf_module9.htm. Accessed date: October. HCF (HART Communication Foundation). 2007. HCF—Main Pages. http://www .hartcomm2.org/index.html. Accessed date: October. HIT (http://www.hit.bme.hu). 2007. GPIB Tutorial. http://www.hit.bme.hu/~papay/ edu/GPIB/tutor.htm. Accessed date: July. HMS Industrial Networking (http://www.anybus.com). 2005a. AS-Interface Technologies. http://www.anybus.com/technologies/asi.shtml. Accessed date: July. HMS Industrial Networking (http://www.anybus.com). 2005b. AS-Interface Products. http://www.anybus.com/products/asinterface.shtml. Accessed date: July. HMS Industrial Networking (http://www.anybus.com). 2005c. Interbus Connectivity: http://www.anybus.com/products/interbus.shtml?gclid=CLXtoK7UjY wCFT4GQgod3T_GBw. Accessed date: July. Honey Well (http://hpsweb.honeywell.com). 2005a. http://hpsweb.honeywell.com/ Cultures/en-US/Products/Systems/ExperionPKS/FoundationFieldbus Integration/default.htm. Accessed date: July. Honey Well (http://hpsweb.honeywell.com). 2007. Wireless HART. http://hpsweb .honeywell.com/Cultures/en-US/Products/wireless/SecondGeneration Wireless/default.htm. Accessed date: October. IBM (http://www.ibm.com/us). 2005. IDE Subsystem. http://publib.boulder.ibm .com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.kernelext/doc/ kernextc/ide_subsys.htm. Accessed date: May. Interbus Club (http://www.interbusclub.com). 2005a. Interbus Technology. http:// www.interbusclub.com/en/index.html. Accessed date: July. Interbus Training in Web (http://pl.et.fh-duesseldorf.de/prak/prake/index.asp). 2007. Interbus Basic. http://pl.et.fh-duesseldorf.de/prak/prake/download/IBS_ grundlagen.pdf. Accessed date: May. Interface Bus (http://www.interfacebus.com). 2005a. PCI Bus Pins. http://www .interfacebus.com/Design_PCI_Pinout.html. Accessed date: May. Interface Bus (http://www.interfacebus.com). 2005b. PCMCIA 16 bits Bus. http:// www.interfacebus.com/Design_Connector_PCMCIA.html. Accessed date: May. Interface Bus (http://www.interfacebus.com). 2005c. RS-485 Bus. http://www .interfacebus.com/Design_Connector_RS485.html. Accessed date: May. IO Tech (http://www.iotech.com). 2007. IEEE-488 Standard. http://www.iotech .com/an06.html. Accessed date: July. Israel, Johann Habakuk and Anja Naumann. 2006. http://useworld.net/ausgaben/ 4-2007/01-Israel_Naumann.pdf. Accessed date: October. IXXAT (http://www.ixxat.com). 2005. CAN Application Layer. http://www.ixxat .com/can_application_layer_introduction_en,7524,5873.html. Accessed date: July.
Zhang_Ch03.indd 425
5/13/2008 5:41:46 PM
426
INDUSTRIAL CONTROL TECHNOLOGY
Jaffe, David. 2006. Enhancing Human–Machine Interaction. http://ability.stanford .edu/Press/rehabman.pdf. Accessed date: October. Jim Russell. 2007. HART v Foundation Fieldbus. http://www.iceweb.com.au/ Instrument/FieldbusPapers/HART%20v%20FF%20PAPERfinal.pdf . Accessed date: October. Kenneth L. Holladay, P. E. in Southwest Research Institute of the USA. 1991. Calibrating HART Transmitters. http://www.transcat.com/PDF/Hart_ Transmitter_Calibration.pdf. Accessed date: October 2007. Kvaser (http://www.kvaser.com). 2005. Controller Area Network. http://www .kvaser.com/can/. Accessed date: July. Microchip (http://ww1.microchip.com). 2005a. CAN Basics. http://ww1 .microchip.com/downloads/en/AppNotes/00713a.pdf. Accessed date: July. Microchip (http://ww1.microchip.com). 2005b. CAN Physical Layer. http://ww1 .microchip.com/downloads/en/AppNotes/00228a.pdf. Accessed date: July. MOXA (http://www.moxa.com). 2005a. Industrial Ethernet Technologies. http:// www.moxa.com/Zones/Industrial_Ethernet/Tutorial.htm. Accessed date: July. MOXA (http://www.moxa.com). 2005b. Industrial Ethernet Products. http://www .moxa.com/product/Industrial_Ethernet_Switches.htm. Accessed date: July. Murray, Steven A. 2006. Human–Machine Interaction with Multiple Autonomous Sensors. http://www.spawar.navy.mil/robots/research/hmi/ifac.html. Accessed date: October. PC Guide (http://www.pcguide.com). 2005a. IDE Guide. http://www.pcguide .com/ref/hdd/if/ide/unstdIDE-c.html. Accessed date: May. PC Guide (http://www.pcguide.com). 2005b. SCSI Guide. http://www.pcguide .com/ref/hdd/if/scsi/. Accessed date: May. PEPPERL + FUCHS (http://www.am.pepperl-fuchs.com). 2007. HART Multiplexers. http://www.am.pepperl-fuchs.com/products/productsubfamily .jsp?division=PA&productsubfamily_id=1343. Accessed date: October. PEPPERL+FUCHS (http://www.am.pepperl-fuchs.com). 2005a. AS-Interface. http://www.pepperl-fuchs.com/cgi-bin/site_search.pl. Accessed date: July. PEPPERL+FUCHS (http://www.am.pepperl-fuchs.com). 2005b. Fieldbus Technology; Foundation Fieldbus: AS-Interface. http://www.pepperl-fuchs .com/pa/interbtob/communication/default_e.html. Accessed date: July. PHM (http://www.phm.lu). 2007. IDE Pins Out. http://www.phm.lu/Documenta tion/Connectors/IDE.asp. Accessed date: May. PROFIBUS (http://www.profibus.com). 2005a. Profibus Technical Description. http://www.profibus.com/pb/technology/description/. Accessed date: July. PROFIBUS (http://www.profibus.com). 2005b. Profibus Specification. http://www .profibus.com/pall/meta/downloads/. Accessed date: July. QSI (http://www.qsicorp.com). 2006. Human–Machine Interface Devices. http:// www.qsicorp.com/product/industrial/?gclid=CK-apKS1l4wCFSQHE god6iqP5w. Accessed date: October. Quatech (http://www.quatech.com). 2005. ISA Bus Overviews. http://www.quatech .com/support/comm-over-isa.php. Accessed date: July. Samson (http://www.samson.de). 2005. FOUNDATION Fieldbus Technical Information. http://www.samson.de/pdf_en/l454en.pdf. Accessed date: July. SAMSON. 2005. SAMSON Technical Information—HART Communications. http://www.samson.de/pdf_en/l452en.pdf. Accessed date: October 2007.
Zhang_Ch03.indd 426
5/13/2008 5:41:46 PM
3: SYSTEM INTERFACES FOR INDUSTRIAL CONTROL
427
Schneider-Electric (http://www.automation.schneider-electric.com). 2006. Human– Machine Interface. http://www.automation.schneider-electric.com/as-guide/ EN/pdf_files/asg-8-human-machine-interface.pdf. Accessed date: October. SCSI Library (http://www.scsilibrary.com). 2005. SCSI. http://www.scsilibrary .com. Accessed date: May. Semiconductors (http://www.semiconductors.bosch.de). 2005. CAN Specifications. http://www.semiconductors.bosch.de/pdf/can2spec.pdf. Accessed date: July. SIEMENS (http://www.automation.siemens.com). 2005a. Siemens AS-Interface Technologies. http://www.automation.siemens.com/cd/as-interface/html_76/ asisafe.htm. Accessed date: July. SIEMENS (http://www.automation.siemens.com). 2005b. Siemens AS-Interface Products. http://www.automation.siemens.com/infocenter/order_form.aspx? tab=3&nodekey=key_1994569&lang=en. Accessed date: July. SIEMENS (http://www.automation.siemens.com). 2005c. Siemens Profibus: http://www.automation.siemens.com/net/html_76/produkte/020_produkte.htm. Accessed date: July. SIEMENS (http://www.automation.siemens.com). 2005d. Industrial Ethernet Technologies and Products. http://www.automation.siemens.com/net/html_76/ produkte/040_produkte.htm. Accessed date: July. Simons, C. L. and Parmee, L. C. 2006. Human–Machine Interaction Software Design. http://www.ip-cc.org.uk/INTREP-COINT-SIMONS-2006.pdf. Accessed date: October. SMAR International Corporation (http://www.smar.com). 2006. HART Tutorial. http://www.smar.com/PDFs/Catalogues/Hart_Tutorial.pdf. Accessed date: October 2007. Softing (http://www.softing.com). 2005a. FOUNDATION Fieldbus. http://www .softing.com/home/en/industrial-automation/products/foundation-fieldbus/ index.php. Accessed date: July. Softing (http://www.softing.com). 2005b. Profibus. http://www.softing.com/home/ en/industrial-automation/products/profibus-dp/index.php?navanchor= 3010004. Accessed date: July. Tech Soft (http://www.techsoft.de). 2007. IEEEE-488 Tutorial. http://www .techsoft.de/htbasic/tutgpibm.htm?tutgpib.htm. Accessed date: July. Techfest (http://www.techfest.com). 2005. ISA Bus Technology. http://www .techfest.com/hardware/bus/isa.htm. Accessed date: July. Texas Instruments (http://sparc.feri.uni-mb.si). 2005. CAN Introduction. http:// sparc.feri.uni-mb.si/Sistemidaljvodenja/Vaje/pdf/Inroduction%20to%20 CAN.pdf. Accessed date: July. The UK AS-i Expert Alliance (http://www.as-interface.com). 2007a. AS-Interface Technologies. http://www.as-interface.com/asitech.asp. Accessed date: May. The UK AS-i Expert Alliance (http://www.as-interface.com). 2007b. AS-Interface Products. http://www.as-interface.com/asi_literature.asp. Accessed date: May. Vector Germany (http://www.can-solutions.com). 2005. CAN Mechanism. http:// www.can-solutions.com/?gclid=CLWxnai6jIwCFQrlQgodnx3gBg.Accessed date: July. YOKOGAWA (http://www.yokogawa.com). 2007. HART Communicator. http:// www.yokogawa.com/us/mi/MetersandInstruments/us-ykgw-yhcypc.htm. Accessed date: October.
Zhang_Ch03.indd 427
5/13/2008 5:41:46 PM
Zhang_Ch03.indd 428
5/13/2008 5:41:46 PM
4
Digital Controllers for Industrial Control
4.1 Industrial Intelligent Controllers 4.1.1
Programmable Logic Control (PLC) Controllers
The development of Programmable Logic Controllers (PLCs) was driven primarily by the requirements of automobile manufacturers who constantly changed their production line control systems to accommodate their new car models. In the past, this required extensive rewiring of banks of relays—a very expensive procedure. In the 1970s, with the emergence of solid-state electronic logic devices, several auto companies challenged control manufacturers to develop a means of changing control logic without the need to rewire the system totally. The PLC evolved from this requirement. The PLCs are designed to be relatively “user-friendly” so that electricians can easily make the transition from all-relay control to electronic systems. They give users the capability of displaying and troubleshooting ladder logic that shows the logic in real time. The logic can be “rewired” (programmed) and tested, without the need to assemble and rewire banks of relays. A PLC is a computer with a single mission. It usually lacks a monitor, a keyboard, and a mouse, as it is programmed normally to operate a machine or a system using but one program. The machine or system user rarely, if ever, interacts directly with the PLC’s program. When it is necessary to either edit or create the PLC program, a personal computer is usually (but not always) connected to it. The information from the PLCs can be accessed by supervisory control and data acquisition (SCADA) systems and Human–Machine Interfaces (HMIs), to provide a graphical representation of the status of the plant. Figure 4.1 is a schematic of the PLC’s control network resident in industrial systems.
4.1.1.1
Components and Architectures
PLC is actually an industrial microcontroller system (in more recent years we meet microprocessors instead of microcontrollers) where you have hardware and software specifically adapted to industrial environment. Block schema with typical components that a PLC consists of is found in Fig. 4.2. Special attention needs to be given to input and output, because 429
Zhang_Ch04.indd 429
5/13/2008 5:50:42 PM
430
INDUSTRIAL CONTROL TECHNOLOGY SCADA system
Industrial network (High level)
Other part of factory Industrial network (Middle level)
Local PC
Visual and sound signals Central PLC controller
Local PLC controller
Input sensing devices
Output load devices
Local process control system
Figure 4.1 Schematic of the PLC control network. Screw terminals for input lines PLC controller Input port interface
Power supply Communi cation
PC for programming
Extension interface
Memories CPU Internal buses Output port interface
Screw terminals for output lines
Figure 4.2 Basic elements of a PLC controller.
Zhang_Ch04.indd 430
5/13/2008 5:50:42 PM
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL
431
most PLC models feature a vast assortment of interchangeable I/O modules that allow for convenient interfacing with virtually any kind of industrial or laboratory equipment. Program unit is usually a computer used for writing a program (often in ladder diagram). (1) Central Processing Unit (CPU). This unit contains the “brains” of the PLC. It is often referred to as a microprocessor or sequencer. The basic instruction set is a high-level program, installed in Read-Only Memory (ROM). The programmed logic is usually stored in Electrically Erasable Permanent Read-Only Memory (EEPROM). The CPU will save everything in memory, even after a power loss. Since it is “electrically erasable,” the logic can be edited or changed as the need arises. The programming device is connected to the CPU whenever the operator needs to monitor, troubleshoot, edit, or program the system, but it is not required during the normal running operations. (2) Memory. System memory (today mostly implemented in FLASH technology) is used by a PLC for a process control system. Aside from this operating system, it also contains a user program translated from a ladder diagram to a binary form. FLASH memory contents can be changed only in a case where the user program is being changed. PLC controllers were used earlier instead of FLASH memory and have had EPROM memory instead of FLASH memory that had to be erased with UV lamp and programmed on programmers. With the use of FLASH technology this process was greatly shortened. Reprogramming a program memory is done through a serial cable in a program for application development. User memory is divided into blocks having special functions. Some parts of a memory are used for storing input and output status. The real status of an input is stored either as “1” or as “0” in a specific memory bit. Each input or output has one corresponding bit in memory. Other parts of the memory are used to store variable contents for variables used in user programs. For example, timer value, or counter value would be stored in this part of the memory. PLC controller memory consists of several areas given in Table 4.1, some of these having predefined functions. (3) Communication board. Every brand of PLC has its own programming hardware. Sometimes it is a small hand-held device, which resembles an oversized calculator with a liquid crystal display (LCD). However, most of the times it is the computerbased programmers. Computer-based programmers typically use a special communication board, installed in an industrial terminal
Zhang_Ch04.indd 431
5/13/2008 5:50:44 PM
Zhang_Ch04.indd 432
Working area
Output area
Input area
Timer/counter area
LR area
AR area
HR area
TR area
SR area
IR area
Data Area
IR 01000–IR 01915 (160 bits) IR 20000–IR 23115 (512 bits) SR23200–SR25515 (384 bits) TR 0–TR 7 (8 bits)
IR 00000–IR 00915 (160 bits)
Bit(s)
HR 00–HR 19 HR0000–HR1915 (20 words) (320 bits) AR 00–AR 15 AR0000–AR1515 (16 words) (256 bits) LR 00–LR 15 LR0000–LR1515 (16 words) (256 bits) TC 000–TC 127 (timer/counter numbers)
IR 010–IR 019 (10 words) IR 200–IR 231 (32 words) SR 232–SR 255 (24 words) –
IR 000–IR 009 (10 words)
Word(s)
Table 4.1 Memory Structure of PLC
Same numbers are used for both timers and counters
1:1 connection with another PC
Temporary storage of ON/OFF states when jump takes place Data storage; these keep their states when power is off Special functions, such as flags and control bits
Working bits that can be used freely in the program. They are commonly used as swap bits Special functions, such as flags and control bits
These bits may be assigned to an external I/O connection. Some of these have direct output on screw terminal (e.g., IR000.00–IR000.05 and IR010.00–IR010.03 with CPM1A model)
Function
432 INDUSTRIAL CONTROL TECHNOLOGY
5/13/2008 5:50:44 PM
Zhang_Ch04.indd 433
PC setup
Read only
DM 6144–DM 6599 (456 words) DM 6600–DM 6655 (56 words)
DM 0000–DM 0999 and DM 1022–DM 1023 (1002 words) DM 1000–DM 1021 (22 words)
–
–
–
–
Storing various parameters for controlling the PC
Data of DM area may be accessed only in word form. Words keep their contents after the power is off Part of the memory for storing the time and code of error that occurred. When not used for this purpose, they can be used as regular DM words for reading and writing. They cannot be changed from within the program
Notes: 1. IR and LR bits, when not used to their purpose, may be used as working bits. 2. Contents of HR area, LR area, counter, and DM area for reading/writing are stored within backup condenser. On 25C, condenser keeps the memory contents for up to 20 days. 3. When accessing the current value of PV, TC numbers used for data have the form of word. When accessing the Completing flags, they are used as data bits. 4. Data from DM6144 to DM6655 must not be changed from within the program, but can be changed by a peripheral device.
DM area
Error writing
Read/write
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL 433
5/13/2008 5:50:44 PM
434
INDUSTRIAL CONTROL TECHNOLOGY or personal computer, with the appropriate software program installed. Computer-based programming allows “offline” programming, where the programmers develop their logic, store it on a disk, and then “down-load” the program to the CPU at their convenience. In fact, it allows more than one programmer to develop different modules of the program. Programming can be done directly into the CPU if desired. When connected to the CPU the programmer can test the system, and watch the logic operate as each element is intensified in sequence on a cathode ray tube (CRT) when the system is running. Since a PLC can operate without having the programming device attached, one device can be used to service many separate PLC systems. (4) PLC controller inputs. Intelligence of an automated system depends largely on the ability of a PLC controller to read signals from different types of sensors and input devices. Keys, keyboards, and functional switches are a basis for human versus machine relationship. On the other hand, to detect a working piece, view a mechanism in motion, check pressure, or fluid level you need specific automatic devices such as proximity sensors, marginal switches, photoelectric sensors, level sensors, and so on. Thus, input signals can be logical (ON/OFF) or analog. Smaller PLC controllers usually only have digital input lines while larger ones also accept analog inputs through special units attached to a PLC controller. One of the most frequent analog signals is a current signal of 4–20 mA and millivolt voltage signal generated by various sensors. Sensors are usually used as inputs for PLCs. You can obtain sensors for different purposes. They can sense presence of some parts, measure temperature, pressure, or some other physical dimension, and so on (for instance, inductive sensors can register metal objects). Other devices also can serve as inputs to the PLC controller. Intelligent devices such as robots, video systems, and so forth often are capable of sending signals to PLC controller input modules (robot, for instance, can send a signal to PLC controller input as information when it has finished moving an object from one place to the other.) (5) PLC controller output. An industrial control system is incomplete if it is not connected with some output devices. Some of the most frequently used devices are motors, solenoids, relays, indicators, sound signalization, and so forth. By starting a motor, or a relay, PLC can manage or control a simple system such as a system for sorting products all the way up to complex systems such as a service system for positioning the head of a robotic machine.
Zhang_Ch04.indd 434
5/13/2008 5:50:44 PM
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL
435
Output can be of analog or digital type. A digital output signal works as a switch; it connects and disconnects lines. Analog output is used to generate the analog signal (for instance, a motor whose speed is controlled by a voltage that corresponds to a desired speed). (6) Extension lines. Every PLC controller has a limited number of input/output lines. If needed, this number can be increased through certain additional modules by system extension through extension lines. Each module can contain extension both of input and output lines. Also, extension modules can have inputs and outputs of a different nature from those on the PLC controller (for instance, in case relay outputs are on a controller, transistor outputs can be on an extension module). PLC has input and output lines through which it is connected to a system it directs. This is a very important part of the story about PLC controllers because it directly influences what can be connected and how it can be connected to controller inputs or outputs. Two terms most frequently mentioned when discussing connections to inputs or outputs are “sinking” and “sourcing.” These two concepts are very important in connecting a PLC correctly with the external environment. The briefest definition of these two concepts would be Sinking = Common GND line (–) Sourcing = Common VCC line (+), The first thing that catches one’s eye is “+” and “–” supply DC supply. Inputs and outputs that are either sinking or sourcing can conduct electricity only in one direction, so they are only supplied with direct current. According to what we have discussed so far, each input or output has its own return line, so 5 inputs would need 10 screw terminals on a PLC controller housing. Instead, we use a system of connecting several inputs to one return line as in the following picture. These common lines are usually marked “COMM” on the PLC controller housing. (7) Power supply. Electrical supply is used in bringing electrical energy to a CPU. Most PLC controllers work either at 24 VDC or 220 VAC. On some PLC controllers, you will find electrical supply as a separate module. Those are usually bigger PLC controllers, while small and medium series already contain the supply module. The user has to determine how much current to take from the I/O module to ensure that the electrical supply provides the appropriate amount of current. Different types of modules use different amounts of electrical current. This electrical supply is usually not used to start external inputs or outputs. The user has to provide separate supplies in
Zhang_Ch04.indd 435
5/13/2008 5:50:44 PM
436
INDUSTRIAL CONTROL TECHNOLOGY starting PLC controller inputs or outputs to ensure so called pure supply for the PLC controller. With pure supply we mean supply where industrial environment cannot affect it damagingly. Some of the smaller PLC controllers supply their inputs with voltage from a small supply source already incorporated into a PLC. The internal logic and communication circuitry usually operates on 5 and 15 V DC power. The power supply provides filtering and isolation of the low voltage power from the AC power line. Power supply assemblies may be separate modules, or in some cases, plug-in modules in the I/O racks. Separate control transformers are often used to isolate inputs and CPU from output devices. The purpose is to isolate this sensitive circuitry from transient disturbances produced by any highly inductive output devices. (8) Timers and counters. Timers and counters are indispensable in PLC programming. Industry has to number its products, determine a needed action in time, and so on. Timing functions are very important, and cycle periods are critical in many processes. There are two types of timers: delay-off and delay-on. First is late with turn off and another runs late in turning on in relation to a signal that activated timers. Example of a delay-off timer would be staircase lighting. Following its activation, it simply turns off after a few minutes. Each timer has a time basis, or more precisely has several time bases. Typical values are 1, 0.1, and 0.01 s. If the programmer has entered 0.1 as time basis and 50 as a number for delay increase, the timer will have a delay of 5 s (50 × 0.1 s = 5 s). Timers also have to have the value SV set in advance. Value set in advance or ahead of time is a number of increments that the timer has to calculate before it changes the output status. Values set in advance can be constants or variables. If a variable is used, the timer will use a real time value of the variable to determine a delay. This enables delays to vary depending on the conditions during function. An example is a system that has produced two different products, each requiring different timing during process itself. Product A requires a period of 10 s, so number 10 would be assigned to the variable. When product B appears, a variable can change value to what is required by product B. Typically, timers have two inputs. First is timer enable, or conditional input (when this input is activated, timer will start counting). The second input is a reset input. This input has to be in OFF status in order for a timer to be active, or the whole function would be repeated over again. Some PLC models require this input to be low for a timer to be active; other makers require
Zhang_Ch04.indd 436
5/13/2008 5:50:45 PM
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL
437
high status (all of them function in the same way basically). However, if a reset line changes status, the timer erases accumulated value.
4.1.1.2
Control Mechanism
A programmable logic controller is a digital electronic device that uses a programmable memory to store instructions and uses a CPU to implement specific functions such as logic, sequence, timing, counting, and arithmetic to control machines and processes. Figure 4.2 shows a simple schematic of a typical programmable logic controller. When running, the CPU scans the memory continuously from top to bottom, and left to right, checking every input, output, and instruction in sequence. The scan time depends upon the size and complexity of the program, and the number and type of I/O. The scan may be as short as a few milliseconds or less. A few milliseconds per scan would produce tens of scans per second. This short time makes the operation appear as instantaneous, but one must consider the scan sequence when handling critically timed operations and sealing circuits. Complex systems may use interlocked multiple CPUs to minimize total scan time. The parts of the PLC that are quite different from the typical desktop computer are the input and output modules. These modules allow the PLC to communicate with the machine. The inputs may come from limit switches, proximity sensors, temperature sensors, and so on. On the basis of the software program and the combination of inputs, the CPU of the PLC will set the outputs. These outputs may control motor speed and direction, actuate valves, open or close gates, and control all the motions and activities of the machine. (1) System address. The key to getting comfortable with any PLC is to understand the total addressing system. We have to connect our discrete inputs, pushbuttons, limit-switches, and so on, to our controller, interface those points with “electronic ladder diagram” (program), and then bring the results out through another interface to operate motor starters, solenoids, lights, and so forth. Inputs and outputs are wired to interface modules, installed in an I/O rack. Each rack has a two-digit address, each slot has its own address, and each terminal point is numbered. Figure 4.3 shows a PLC product in which all of these addresses are octal. We combine the addresses to form a number that identifies each input and output.
Zhang_Ch04.indd 437
5/13/2008 5:50:45 PM
438
INDUSTRIAL CONTROL TECHNOLOGY I/O Rack Rack no. 00
Output module
7 17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
6 17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
5 17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
4 17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
3 17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
2 17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
Input module
I:000/04
17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
Closed input
Slot numbers
1
17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00
0
Energized output
0:007/15
17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00 0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
Output image table
Word 0:007 17 16 15 14 13 12 11 10 07 06 05 04 03 02 01 00 0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
Input image table
Word 1:000
1:000 ] [ 04
User’s logic rung
Input, rack 00, slot 0, terminal 04
0:007 ( ) 15 Output, rack 00, slot 7, terminal 15
Figure 4.3 Solution of one line of logic.
Some manufacturers use decimal addresses. Some older systems are based on 8-bit words, rather than 16. There are a number of proprietary programmable controllers applied to special industries, such as elevator controls, or energy management, which may not follow the expected pattern, but they will use either 8- or 16-bit word structures. It is very important to identify the addressing system before you attempt to work on any system that uses a programmable controller, because one must know the purpose of each I/O bit before manipulating them in memory.
Zhang_Ch04.indd 438
5/13/2008 5:50:45 PM
439
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL
If you know the address of the input or output, you can immediately check the status of its bit by calling up the equivalent address on a cathode ray tube (CRT) screen for most of PLC products. (2) I/O addresses. Figure 4.4 gives an I/O address scheme, which shows us that the I/O modules are closely linked with the Input and Output image tables, respectively. Figure 4.3 shows a very simple line of logic, where a pushbutton is used to turn on a lamp. The pushbutton and lamp “hardwiring” terminates at I/O terminals, and the logic is carried out in software. We have a pushbutton, wired to an input module (I), installed in rack 00, slot 0, and terminal 04. The address becomes I:000/04. An indicating lamp is wired to an output module (O), installed in rack 00, slot 7, and terminal 15. The address becomes O:007/15. Our input address, I:000/04, becomes memory address
Word # 0 1 2 3 4 5 6 7 0
1
2
3
4
5
6
7
Output image table
I/O group designation
An I/O chassis containing 16-point modules Note: Modules can also be installed like this: I O O I
Input/output designation I O I O I O IO IO IO IO IO
Input image table
Word 0 1 2 3 4 5 6 7
Figure 4.4 I/O addressing scheme.
Zhang_Ch04.indd 439
5/13/2008 5:50:45 PM
440
INDUSTRIAL CONTROL TECHNOLOGY I:000/04, and the output address 0:007/15 becomes memory address 0:007/15. In other words, the type of module, the rack address, and the slot position identifies the word address in memory. The terminal number identifies the bit number. (3) Image table addresses. An output image table is reserved in its IR area of the memory (see Table 4.1) as File format, and an input image table is reserved in the same way. A File in memory contains any number of words. Files are separated by type, according to their functions. In the same way, an input image table is also reserved in its IR area of the memory (See Table 4.1) as File format. Figure 4.4 illustrates the respective mapping relationship of the I/O modules to both Output and Input image tables. (4) Scanning. As the scan reads the input image table, it notes the condition of every input, and then scans the logic diagram, updating all references to the inputs. After the logic is updated, the scanner resets the output image table, to activate the required outputs. Figure 4.4 shows some optional I/O arrangements and addressing. In Fig. 4.3, we show how one line of logic would perform when the input at I:000/04 is energized, it immediately sets input image table bit I:000/04 true (ON). The scanner senses this change of state, and makes the element I:000/04 true in our logic diagram. Bit 0:007/15 is turned on by the logic. The scanner sets 0:007/15 true in the output image table, and then updates the output 0:007/15 to turn the lamp on.
4.1.1.3
PLC Programming
Programmable logic controllers use a variety of software programming languages for control. These include sequential function chart (SFC), function block diagram (FBD), ladder diagram (LD), structured text (ST), instruction list (IL), relay ladder logic (RLL), flow chart, C, C++, and Basic. Among these languages, ladder diagram is the most popular. Almost every program for programming a PLC controller possesses various useful options such as: forced switching on and off of the system inputs/outputs (I/O lines), program follow up in real time as well as documenting a diagram. This documenting is necessary to understand and to define failures and malfunctions. The programmer can add remarks, names of input or output devices, and comments that can be useful when finding errors, or with system maintenance. Adding comments and remarks enables any technician (and not just a person who developed the system) to understand a ladder diagram right away. Comments and remarks can even quote
Zhang_Ch04.indd 440
5/13/2008 5:50:47 PM
441
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL
precisely part numbers if replacements would be needed. This would speed up repair of any problems that come up due to bad parts. The old way was such that a person who developed a system had protection on the program, so nobody aside from this person could understand how it was done. A correctly documented ladder diagram allows any technician to understand thoroughly how the system functions. (1) Relay ladder logic. Ladder logic is the main programming method used for PLCs. As mentioned before, ladder logic has been developed to mimic relay logic. Relays are used to let one power source close a switch for another (often-high current) power source, while keeping them isolated. An example of a relay in a simple control application is shown in Fig. 4.5. In this system, the first relay on the left is used as normally closed and will allow current to flow until a voltage is applied to input A. The second relay is normally open and will not allow current to flow until a voltage is applied to input B. If current is flowing through the first two relays, then current will flow through the coil in the third relay, and close the switch for output C. This circuit would normally be drawn in the ladder logic form. This can be read logically as C will be on if A is off and B is on. 115 VAC wall plug
Relay logic
Input A (Normally closed) A
Input B (Normally open) B
Output C (Normally open) C Ladder logic
Figure 4.5 A simple relay controller.
Zhang_Ch04.indd 441
5/13/2008 5:50:47 PM
442
INDUSTRIAL CONTROL TECHNOLOGY The example in Fig. 4.5 does not show the entire control system, but only the logic. When we consider a PLC there are inputs, outputs, and the logic. Figure 4.6 shows a more complete representation of the PLC. Here there are two inputs from pushbuttons. We can imagine the inputs as activating 24 VDC relay coils in the PLC. This in turn drives an output relay that switches 115 VAC, which will turn on a light. Note, in actual PLCs inputs are never relays, but outputs are often relays. The ladder logic in the PLC is actually a computer program that the user can enter and change. Note that both of the input pushbuttons are normally open, but the ladder logic inside the PLC has one normally open contact and one normally closed. Do not think that the ladder logic in the PLC needs to match the inputs or outputs. Many beginners will get caught trying to make the ladder logic match the input types. Many relays also have multiple outputs (throws) and this allows an output relay to also be an input simultaneously. The circuit shown in Fig. 4.7(a) is an example of this; it is called a seal in circuit. In this circuit, the current can flow through either branch of the circuit, through the contacts labeled A or B.
Power supply +24 V Push buttons PLC Inputs C
A
B
Ladder logic
Outputs
Light 115 VAC power
Figure 4.6 A PLC illustrated with relays.
Zhang_Ch04.indd 442
5/13/2008 5:50:47 PM
443
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL A B
B
X
Normally open
X
Normally closed
Normal output
Normally on output
IIT
X Immediate inputs
One Shot Relay
OSR X
Latch
L
U
IOT (a)
(b)
X
Unlatch
X Immediate Output T (c)
Figure 4.7 Relay Ladder logic representations: (a) a seal-in circuit; (b) Ladder logic inputs; and (c) Ladder logic outputs.
The input B will only be on when the output B is on. If B is off, and A is energized, then B will turn on. If B turns on then the input B will turn on and keep output B on even if input A goes off. After B is turned on the output, B will not turn off. PLC inputs are easily represented in ladder logic. Figure 4.7(b) shows there are three types of inputs. The first two are normally open and closed inputs, discussed previously. Normally open: an active input x will close the contact and allow power to flow. Normally closed: power flows when the input x is not open. The IIT (Immediate InpuT) function allows inputs to be read after the input scan, while the ladder logic is being scanned. This allows ladder logic to examine input values more often than once every cycle. Immediate inputs will take current values, but not those from the previous input scan. In ladder logic, there are multiple types of outputs, but these are not consistently available on all PLCs. Some of the outputs will be externally connected to devices outside the PLC, but it is also possible to use internal memory locations in the PLC. Six types of outputs are shown in Fig. 4.7(c). The first is a normal output; when energized the output will turn on and energize an output. The circle with a diagonal line through is a normally on output. When it is energized, the output will turn off. This type of output is not available on all PLC types. When initially energized, the OSR (one shot relay) instruction will turn on for
Zhang_Ch04.indd 443
5/13/2008 5:50:47 PM
444
INDUSTRIAL CONTROL TECHNOLOGY one scan, but then be off for all scans after, until it is turned off. The L (latch) and U (unlatch) instructions can be used to lock outputs on. When an L output is energized the output will turn on indefinitely, even when the output coil is deenergized. The output can only be turned off using a U output. The last instruction is the IOT (Immediate OutpuT) that will allow outputs to be updated without having to wait for the ladder logic scan to be completed. When power is applied (ON) the output x is activated for the left output, but turned off for the output on the right. An input transition on will cause the output x to go on for one scan (this is also known as a one shot relay). When the L is energized, x will be toggled on, and will stay on until the U coil is energized. This is like a flip-flop and stays set even when the PLC is turned off. In some PLCs, all immediate outputs do not wait for the program scan to end before setting an output. For example, to develop (without looking at the solution) a relay based controller that will allow three switches in a room to control a single light, there are two possible approaches. The first assumes that any one of the switches on will turn on the light, but all three switches must be off for the light to be off. Figure 4.8(a) displays the ladder logic of the first solution. The second solution assumes that each switch can turn the light on or off, regardless of the states of the other switches. This method is more complex and involves thinking through all of the possible combinations of switch positions. You can recognize this problem as an exclusive or problem from Fig. 4.8(b). (2) Programming. An example of ladder logic can be seen in Fig. 4.9. To interpret this diagram, imagine that the power is on the vertical line on the left-hand side; we call this the hot rail. On the Switches
Switch 1
Light
Light Switch 2
Switch 3
(a)
(b)
Figure 4.8 A case study: (a) solution 1 and (b) solution 2.
Zhang_Ch04.indd 444
5/13/2008 5:50:48 PM
445
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL Neutral
Hot A
B
X
C
D
G
E
F
H
Inputs
Y
Outputs
Figure 4.9 A simple Ladder Logic diagram.
right-hand side is the neutral rail. In this figure there are two rungs, and on each rung there are combinations of inputs (two vertical lines) and outputs (circles). If the inputs are opened or closed in the right combination the power can flow from the hot rail, through the inputs, to power the outputs, and finally to the neutral rail. An input can come from a sensor, switch, or any other type of sensor. An output will be some device outside the PLC that is switched ON or OFF, such as lights or motors. In the top rung the contacts are normally open and normally closed, which means if input A is ON and input B is OFF, then power will flow through the output and activate it. Any other combination of input values will result in the output X being off. The second rung of Fig. 4.9 is more complex; there are actually multiple combinations of inputs that will result in the output Y turning on. On the left-most part of the rung, power could flow through the top if C is OFF and D is ON. Power could also (and simultaneously) flow through the bottom if both E and F are true. This would get power half way across the rung, and then if G or H is true the power will be delivered to output Y. There are other methods for programming PLCs. One of the earliest techniques involved mnemonic instructions. These instructions can be derived directly from the ladder logic diagrams and entered into the PLC through a simple programming terminal. An example of mnemonics is shown in Fig. 4.10. In this example, the instructions are read one line at a time from top to bottom. The first line 00000 has the instruction LDN (input load and not) for input 00001. This will examine the input to the PLC, and if it is OFF it will remember a 1 (or true); if it is ON it will remember a 0 (or false). The next line uses an LD (input load) statement to look at the input. If the input is OFF it remembers a 0; if the input is ON it remembers a 1 (note: this is the
Zhang_Ch04.indd 445
5/13/2008 5:50:48 PM
446
INDUSTRIAL CONTROL TECHNOLOGY 00000 00001 00002 00003 00004 00005 00006 00007 00008
LDN LD AND LD LD AND OR ST END
00001 00002 00003 00004
00107
The mnemonic code is equivalent to the ladder logic below
00001 00002
00107
00003 00004 END
Figure 4.10 A mnemonic program and equivalent Ladder Logic.
reverse of the LD). The AND statement recalls the last two numbers remembered and if they are both true the result is a 1; otherwise the result is a 0. This result now replaces the two numbers that were recalled, and there is only one number remembered. The process is repeated for lines 00003 and 00004, but when these are done there are now three numbers remembered. The oldest number is from the AND; the newer numbers are from the two LD instructions. The AND in line 00005 combines the results from the last LD instructions and now there are two numbers remembered. The OR instruction takes the two numbers now remaining and if either one is a 1 the result is a 1, otherwise the result is a 0. This result replaces the two numbers, and there is now a single number there. The last instruction is the ST (store output) that will look at the last value stored and if it is 1, the output will be turned on; if it is 0 the output will be turned off. The ladder logic program in Fig. 4.10 is equivalent to the mnemonic program. Even if you have programmed a PLC with ladder logic, it will be converted to mnemonic form before being used by the PLC. In the past, mnemonic programming was the most common, but now it is uncommon for users to even see mnemonic programs. Sequential Function Charts have been developed to accommodate the programming of more advanced systems. These are similar to flowcharts, but are much more powerful. The example
Zhang_Ch04.indd 446
5/13/2008 5:50:49 PM
447
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL
seen in Fig. 4.11 is doing two different things. To read the chart, start at the top where is says start. Below this there is the double horizontal line that says follow both paths. As a result, the PLC will start to follow the branch on the left- and right-hand sides separately and simultaneously. On the left there are two functions; the first one is the power-up function. This function will run until it decides it is done, and the power-down function will come after. On the right-hand side is the flash function; this will run until it is done. These functions look unexplained, but each function, such as power-up will be a small ladder logic program. This method is much different from flowcharts because it does not have to follow a single path through the flowchart. (3) Ladder diagram instructions. Ladder logic input contacts and output coils allow simple logical decisions. Instructions extend basic ladder logic to allow other types of control. Most of the instructions will use PLC memory locations to get values, store values, and track instruction status. Most instructions will normally become active when the input is true. But, some instructions, such as TOF timers, can remain active when the input is off. Other instructions will only operate when the input goes from false to true; this is known as positive edge triggered. Consider a counter that only counts when the input goes from false to true; the length of time the input is true does not change the instruction behavior. A negative edge-triggered instruction would be triggered when the input goes from true to false. Most instructions are not edge-triggered: unless stated, assume instructions are not edge-triggered. Instructions may be divided into several basic groups according to their operation. Each of these instruction groups is introduced with a brief description in Table 4.2. Start
Power up Flash Multiple path execution flow Power down
End
Figure 4.11 A sequential function chart.
Zhang_Ch04.indd 447
5/13/2008 5:50:49 PM
448
INDUSTRIAL CONTROL TECHNOLOGY
Table 4.2 Ladder Diagram Instructions Group Sequence Input Instructions
Instruction LOAD LOAD NOT AND AND NOT OR OR NOT AND LOAD OR LOAD
Sequence Output Instructions
OUTPUT OUT NOT SET RESET KEEP DIFFERENTIATE UP DIFFERENTIATE DOWN
Sequence Control Instructions
NO OPERATION END INTERLOCK
Function Connects an NO condition to the left bus bar Connects an NC condition to the left bus bar Connects an NO condition in series with the previous condition Connects an NC condition in series with the previous condition Connects an NO condition in parallel with the previous condition Connects an NC condition in parallel with the previous condition Connects two instruction blocks in series Connects two instruction blocks in parallel Outputs the result of logic to a bit Reverses and outputs the result of logic to a bit Force sets (ON) a bit Force resets (OFF) a bit Maintains the status of the designated bit Turns ON a bit for one cycle when the execution condition goes from OFF to ON Turns ON a bit for one cycle when the execution condition goes from ON to OFF — Required at the end of the program It the execution condition for IL(02) is OFF, all outputs are turned OFF and all timer PVs reset between IL(02) and the next ILC(03) (Continued)
Zhang_Ch04.indd 448
5/13/2008 5:50:49 PM
449
4: DIGITAL CONTROLLERS FOR INDUSTRIAL CONTROL Table 4.2 Ladder Diagram Instructions (Continued) Group
Instruction INTERLOCK CLEAR JUMP
JUMP END Timer/Counter Instructions
Data Comparison Instructions
TIMER COUNTER REVERSIBLE COUNTER HIGH-SPEED TIMER COMPARE DOUBLE COMPARE BLOCK COMPARE
TABLE COMPARE Data Movement Instructions
MOVE MOVE NOT
BLOCK TRANSFER BLOCK SET DATA EXCHAGE SINGLE WORD DISTRIBUTE
Function ILC(03) indicates the end of an interlock (beginning at IL(02)) If the execution condition for JMP(04) is ON, all instructions between JMP(04) and JME(05) are treated as NOP(OO) JME(05) indicates the end of a jump (beginning at JMP(04)) An ON-delay (decrementing) timer A decrementing counter Increases or decreases PV by one A high-speed, ON-delay (decrementing) timer Compares two four-digit hexadecimal values Compares two eight-digit hexadecimal values Judges whether the value of a word is within 16 ranges (defined by lower and upper limits) Compares the value of a word to 16 consecutive words Copies a constant or the content of a word to a word Copies the complement of a constant or the content of a word to a word. Copies the content of a block of up to 1000 consecutive words to a block of consecutive words Copies the content of a word to a block of consecutive words Exchanges the content of two words Copies the content of a word to a word (whose address is determined by adding an offset to a word address) (Continued)
DATA COLLECT: Copies the content of a word (whose address is determined by adding an offset to a word address) to a word
MOVE BIT: Copies the specified bit from one word to the specified bit of a word
MOVE DIGIT: Copies the specified digits (4-bit units) from a word to the specified digits of a word

Shift Instructions
SHIFT REGISTER: Copies the specified bit (0 or 1) into the rightmost bit of a shift register and shifts the other bits one bit to the left
WORD SHIFT: Creates a multiple-word shift register that shifts data to the left in one-word units
ASYNCHRONOUS SHIFT REGISTER: Creates a shift register that exchanges the contents of adjacent words when one is zero and the other is not
ARITHMETIC SHIFT LEFT: Shifts a 0 into bit 00 of the specified word and shifts the other bits one bit to the left
ARITHMETIC SHIFT RIGHT: Shifts a 0 into bit 15 of the specified word and shifts the other bits one bit to the right
ROTATE LEFT: Moves the content of CY into bit 00 of the specified word, shifts the other bits one bit to the left, and moves bit 15 to CY
ROTATE RIGHT: Moves the content of CY into bit 15 of the specified word, shifts the other bits one bit to the right, and moves bit 00 to CY
ONE DIGIT SHIFT LEFT: Shifts a 0 into the rightmost digit (4-bit unit) of the shift register and shifts the other digits one digit to the left
ONE DIGIT SHIFT RIGHT: Shifts a 0 into the leftmost digit (4-bit unit) of the shift register and shifts the other digits one digit to the right
REVERSIBLE SHIFT REGISTER: Creates a single- or multiple-word shift register that can shift data to the left or right

Increment/Decrement Instructions
INCREMENT: Increments the BCD content of the specified word by 1
DECREMENT: Decrements the BCD content of the specified word by 1

BCD/Binary Calculation Instructions
BCD ADD: Adds the content of a word (or constant) and CY to the content of a word (or constant)
BCD SUBTRACT: Subtracts the content of a word (or constant) and CY from the content of a word (or constant)
BCD MULTIPLY: Multiplies the contents of two words (or constants)
BCD DIVIDE: Divides the content of a word (or constant) by the content of a word (or constant)
BINARY ADD: Adds the contents of two words (or constants) and CY
BINARY SUBTRACT: Subtracts the content of a word (or constant) and CY from the content of a word (or constant)
BINARY MULTIPLY: Multiplies the contents of two words (or constants)
BINARY DIVIDE: Divides the content of a word (or constant) by the content of a word and obtains the result and remainder
DOUBLE BCD ADD: Adds the 8-digit BCD contents of two pairs of words (or constants) and CY
DOUBLE BCD SUBTRACT: Subtracts the 8-digit BCD contents of a pair of words (or constants) and CY from the 8-digit BCD contents of a pair of words (or constants)
DOUBLE BCD MULTIPLY: Multiplies the 8-digit BCD contents of two pairs of words (or constants)
DOUBLE BCD DIVIDE: Divides the 8-digit BCD contents of a pair of words (or constants) by the 8-digit BCD contents of a pair of words (or constants)

Data Conversion Instructions
BCD TO BINARY: Converts 4-digit BCD data to 4-digit binary data
BINARY TO BCD: Converts 4-digit binary data to 4-digit BCD data
4 to 16 DECODER: Takes the hexadecimal value of the specified digit(s) in a word and turns ON the corresponding bit in a word(s)
16 to 4 DECODER: Identifies the highest ON bit in the specified word(s) and moves the hexadecimal value(s) corresponding to its location to the specified digit(s) in a word
ASCII CODE CONVERT: Converts the designated digit(s) of a word into the equivalent 8-bit ASCII code

Logic Instructions
COMPLEMENT: Turns OFF all ON bits and turns ON all OFF bits in the specified word
LOGICAL AND: Logically ANDs the corresponding bits of two words (or constants)
LOGICAL OR: Logically ORs the corresponding bits of two words (or constants)
EXCLUSIVE OR: Exclusively ORs the corresponding bits of two words (or constants)
EXCLUSIVE NOR: Exclusively NORs the corresponding bits of two words (or constants)

Special Calculation Instructions
BIT COUNTER: Counts the total number of bits that are ON in the specified block
Subroutine Instructions
SUBROUTINE ENTER: Executes a subroutine in the main program
SUBROUTINE ENTRY: Marks the beginning of a subroutine program
SUBROUTINE RETURN: Marks the end of a subroutine program
MACRO: Calls and executes the specified subroutine, substituting the specified input and output words for the input and output words in the subroutine

Interrupt Control Instructions
INTERVAL TIMER: Controls interval timers used to perform scheduled interrupts
INTERRUPT CONTROL: Performs interrupt control, such as masking and unmasking the interrupt bits for I/O interrupts

Step Instructions
STEP DEFINE: Defines the start of a new step and resets the previous step when used with a control bit; defines the end of step execution when used without a control bit
STEP START: Starts the execution of the step when used with a control bit

Peripheral Device Control Instructions
BCD TO BINARY: Converts 4-digit BCD data to 4-digit binary data
BINARY TO BCD: Converts 4-digit binary data to 4-digit BCD data
4 to 16 DECODER: Takes the hexadecimal value of the specified digit(s) in a word and turns ON the corresponding bit in a word(s)
16 to 4 DECODER: Identifies the highest ON bit in the specified word(s) and moves the hexadecimal value(s) corresponding to its location to the specified digit(s) in a word
ASCII CODE CONVERT: Converts the designated digit(s) of a word into the equivalent 8-bit ASCII code
I/O Units Instructions
7-SEGMENT DECODER: Converts the designated digit(s) of a word into an 8-bit, 7-segment display code
I/O REFRESH: Refreshes the specified I/O word

Display Instructions
MESSAGE: Reads up to 8 words of ASCII code (16 characters) from memory and displays the message on the programming console or other peripheral device

High-Speed Counter Control Instructions
MODE CONTROL: Starts and stops counter operation, compares and changes counter PVs, and stops pulse output
PV READ: Reads counter PVs and status data
COMPARE TABLE LOAD: Compares counter PVs and generates a direct table or starts operation

Damage Diagnosis Instructions
FAILURE ALARM: Generates a nonfatal error when executed; the Error/Alarm indicator flashes and the CPU continues operating
SEVERE FAILURE ALARM: Generates a fatal error when executed; the Error/Alarm indicator lights and the CPU stops operating

Special System Instructions
SET CARRY: Sets Carry Flag 25504 to 1
CLEAR CARRY: Sets Carry Flag 25504 to 0
4.1.1.4 Basic Types and Important Data
Programmable logic controller I/O channel specifications include the total number of points, the number of inputs and outputs, the ability to expand, and the maximum number of channels. The number of points is the sum of the inputs and the outputs, and a PLC may be specified by any combination of these values. Expandable units may be stacked or linked together to increase total control capacity. The maximum number of channels refers to the maximum total number of input and output channels in an expanded system. PLC system specifications to be considered include scan time, number of instructions, data memory, and program memory. Scan time is
the time required by the PLC to check the states of its inputs and outputs. Instructions are standard operations (such as mathematical functions) available to the PLC software. Data memory is the capacity for data storage. Program memory is the capacity for control software storage. Available inputs for programmable logic controllers include DC, AC, analog, thermocouple, RTD, frequency or pulse, transistor, and interrupt inputs. Outputs for PLCs include DC, AC, relay, analog, frequency or pulse, transistor, and triac. Programming options for PLCs include front panel, handheld, and computer. Programmable logic controllers can also be specified with a number of computer interface options, network specifications, and features. PLC power options, mounting options, and environmental operating conditions are also important considerations. PLCs are usually available in these three general types: (1) Embedded. The embedded controllers extend a station of field bus terminals and transform it into a modular PLC. All embedded controllers support the same communication standards, such as Ethernet TCP/IP. The industrial PCs and compact operating units belonging to the PLC product spectrum are also identical for all controllers. (2) PC-based. This type of PLC is a slide-in card that extends any PC or industrial PC (IPC) and transforms it into a fully fledged PLC. In the PC, the slide-in card needs only one PCI bus slot and runs fully independently of the operating system, so a PC system crash leaves the machine control unaffected. (3) Compact. The compact PLC unites the functions of an operating unit and a PLC. To some extent, the compact controller already features integrated digital and analog inputs and outputs. Further field bus terminals in the compact PLCs can be connected via an electrically isolated interface such as CANopen.
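As a rough illustration of what "scan time" covers, the following Python sketch (not any vendor's firmware) models the basic PLC scan cycle of reading inputs, solving the control logic, and writing outputs, and measures how long one pass takes. The I/O functions and the seal-in logic are placeholders invented for the example.

```python
import time

def read_inputs():
    # Placeholder: in a real PLC this samples the physical input modules.
    return {"start_pb": True, "stop_pb": False}

def solve_logic(inputs, outputs):
    # Placeholder control program: seal-in (latching) logic for a motor starter.
    outputs["motor"] = (inputs["start_pb"] or outputs.get("motor", False)) and not inputs["stop_pb"]
    return outputs

def write_outputs(outputs):
    # Placeholder: in a real PLC this drives the physical output modules.
    pass

outputs = {}
t0 = time.perf_counter()
outputs = solve_logic(read_inputs(), outputs)   # one scan: read, solve, write
write_outputs(outputs)
scan_time = time.perf_counter() - t0
print(f"approximate scan time: {scan_time * 1000:.3f} ms")
```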
4.1.1.5 Installation and Maintenance
(1) Control design considerations (a) Systematic design for process control. First, you need to select an instrument or a system that you wish to control. An automated system can be a machine or a process and can also be called a process control system. The functioning of a process control system is constantly monitored by input devices (sensors) that send signals to the PLC controller. In response to this, the PLC controller sends a signal to external output devices (operative instruments) that actually control how the
system functions in an assigned manner (for simplification, it is recommended that you draw a block diagram of the operations flow). Second, you need to specify all input and output instruments that will be connected to the PLC controller. Following identification of all input and output instruments, corresponding designations are assigned to the input and output lines of the PLC controller. Allotment of these designations is, in fact, an allocation of inputs and outputs on the PLC controller that corresponds to the inputs and outputs of the system being designed. Third, make a ladder diagram for the program by following the sequence of operations that was determined in the first step, and then program the completed ladder logic diagrams. Finally, the program is entered into the PLC controller memory. When programming is finished, the program code is checked for any errors (using diagnostic functions) and, if possible, the entire operation is simulated. Before the system is started, check once again whether all input and output instruments are connected to the correct inputs and outputs. When supply power is brought in, the system starts working. (b) Memory considerations. The two main factors to consider when choosing memory are the type and the amount. An application may require two types of memory: nonvolatile memory and volatile memory with a battery backup. A nonvolatile memory, such as EPROM, can provide a reliable, permanent storage medium once the program has been created and debugged. If the application will require on-line changes, then the program should probably be stored in read/write memory supported by a battery. Some controllers offer both of these options, which can be used individually or in conjunction with each other. The amount of memory required for a given application is a function of the total number of inputs and outputs to be controlled and the complexity of the control program. The complexity refers to the amount and type of arithmetic and data manipulation functions that the PLC will perform. For each of their products, manufacturers have a rule-of-thumb formula that helps to approximate the memory requirement. This formula involves multiplying the total number of I/O by a constant (usually a number between 3 and 8). If the program involves arithmetic or data manipulation, this memory approximation should be increased by 25–50%.
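The rule of thumb above translates directly into a small calculation. The sketch below is a hypothetical illustration only; the multiplier of 5 memory words per I/O point and the 50% margin are assumptions chosen from the ranges quoted in the text, not figures from any particular manufacturer.

```python
def estimate_program_memory(io_points, words_per_io=5, data_handling=False, margin=0.5):
    """Rough PLC program-memory estimate (in words) per the rule of thumb:
    total I/O multiplied by a constant (typically 3-8), plus 25-50% extra
    if the program performs arithmetic or data manipulation."""
    words = io_points * words_per_io
    if data_handling:
        words *= (1.0 + margin)
    return int(round(words))

# Example: 120 I/O points, program uses arithmetic and data manipulation
print(estimate_program_memory(120, words_per_io=5, data_handling=True))  # about 900 words
```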
(c) Software considerations. During system implementation, the user must program the PLC. Because the programming is so important, the user should be aware of the software capabilities of the product they choose. Generally, the software capability of a system is tailored to handle the control hardware that is available with the controller. However, some applications require special software functions that go beyond the control of the hardware components. For instance, an application may involve special control or data acquisition functions that require complex numerical calculations and data-handling manipulations. The instruction set selected will determine the ease with which these software tasks can be implemented. It will also directly affect the time required to implement and execute the control program. (d) Peripherals. The programming device is the key peripheral in a PLC system. It is of primary importance because it must provide all of the capabilities necessary to accurately and easily enter the control program into the system. The two most common types of programming devices are handheld units and personal computers. Handheld units, which are small and of low cost, are typically used to program relatively small control programs in small PLCs. The amount of information that can be displayed on a handheld unit is normally a single program element or, in some cases, a single program rung. Personal computers provide a better way to program a system if the control program is large. Many PLC manufacturers provide software that allows their PLCs to be programmed using a standard PC. However, expansion boards or special interfacing cables may be required to link the personal computer with the programmable controller. In addition to the programming device, a system may require other types of peripherals such as line printers or color displays at certain control stations to provide an interface between the controller and the operator. If a PC is used as a graphic interface to a PLC system, both systems must have compatible DDE (dynamic data exchange) drivers to properly interface with peripherals. Peripheral requirements should be evaluated along with the CPU, since the CPU will determine the type and number of peripherals that can be interfaced to the system. The CPU also influences the method of interfacing, as well as the distance that peripherals can be placed from the PLC. (2) Installation, wiring, and precautions. Input/output installation is perhaps the biggest and most critical job when installing a
programmable controller system. To minimize errors and simplify installation, the user should follow predefined guidelines. All of the people involved in installing the controller should receive these I/O system installation guidelines, which should have been prepared during the design phase. A complete set of documents with precise information regarding I/O placement and connections will ensure that the system is organized properly. Furthermore, these documents should be constantly updated during every stage of the installation. The following considerations will facilitate an orderly installation. (a) I/O module installation. Placement and installation of the I/O modules is simply a matter of inserting the correct modules in their proper locations. This procedure involves verifying the type of module (115 VAC output, 115 VDC input, etc.) and the slot address as defined by the I/O address assignment document. Each terminal in the module is then wired to the field devices that have been assigned to that address. The user should remove power to the modules (or rack) before installing and wiring any module. (b) Wiring considerations. (i) Wire size. Each I/O terminal can accept one or more conductors of a particular wire size. The user should check that the wire is of the correct gauge and that it is of the proper size to handle the maximum possible current. (ii) Wire and terminal labeling. Each field wire and its termination point should be labeled using a reliable labeling method. Wires should be labeled with shrink tubing or tape, while tape or stick-on labels should identify each terminal block. Color coding of similar signal characteristics (e.g., AC: red, DC: blue, common: white, etc.) can be used in addition to wire labeling. Typical labeling nomenclature includes wire numbers, device names or numbers, and the input or output address assignment. Good wire and terminal identification simplifies maintenance and troubleshooting. (iii) Wire bundling. Wire bundling is a technique commonly used to simplify the connections to each I/O module. In this method, the wires that will be connected to a single module are bundled, generally using a tie wrap, and then routed through the duct with other bundles of wire with the same signal characteristics. Input, power, and output bundles should be kept in separate ducts, when possible, to avoid interference.
(c) PLC start-up and checking procedures. Prior to applying power to the system, the user should make several final inspections of the hardware components and interconnections. These inspections will undoubtedly require extra time. However, this invested time will almost always reduce total start-up time, especially for large systems with many input/output devices. The following checklist pertains to prestartup procedures: (i) Visually inspect the system to ensure that all PLC hardware components are present. Verify correct model numbers for each component. (ii) Inspect all CPU components and I/O modules to ensure that they are installed in the correct slot locations and placed securely in position. (iii) Check that the incoming power is correctly wired to the power supply (and transformer) and that the system power is properly routed and connected to each I/O rack. (iv) Verify that the I/O communication cables linking the processor to the individual I/O racks correspond to the I/O rack address assignment. (v) Verify that all I/O wiring connections at the controller end are in place and securely terminated. Use the I/O address assignment document to verify that each wire is terminated at the correct point. (vi) Check that the output wiring connections are in place and properly terminated at the field device end. (vii) Ensure that the system memory has been cleared of previously stored control programs. If the control program is stored in EPROM, remove the chips temporarily. (3) Troubleshooting the PLC system. (a) Diagnostic indicators. LED status indicators can provide much information about field devices, wiring, and I/O modules. Most input/output modules have at least a single indicator; input modules normally have a power indicator, while output modules normally have a logic indicator. For an input module, a lit power LED indicates that the input device is activated and that its signal is present at the module. This indicator alone cannot isolate malfunctions to the module, so some manufacturers provide an additional diagnostic indicator, a logic indicator. An ON logic LED indicates that the input signal has been recognized by the logic section of the input circuit. If the logic and power indicators do not match, then the module is unable to transfer the incoming signal to the processor correctly. This indicates a module malfunction.
An output module's logic indicator works similarly to an input module's logic indicator. When it is ON, the logic LED indicates that the module's logic circuitry has recognized a command from the processor to turn ON. In addition to the logic indicator, some output modules incorporate either a blown fuse indicator or a power indicator or both. A blown fuse indicator shows the status of the protective fuse in the output circuit, while a power indicator shows that power is being applied to the load. As with the power and logic indicators in an input module, if both LEDs are not ON simultaneously, the output module is malfunctioning. LED indicators greatly assist the troubleshooting process. With power and logic indicators, the user can immediately pinpoint a malfunctioning module or circuit. LED indicators, however, cannot diagnose all possible problems; instead, they serve as preliminary signs of system malfunctions. (b) Troubleshooting PLC inputs. If the field device connected to an input module does not seem to turn ON, a problem may exist somewhere between the L1 connection and the terminal connection to the module. An input module's status indicators can provide information about the field device, the module, and the field device's wiring to the module that will help pinpoint the problem. The first step in diagnosing the problem is to place the PLC in standby mode, so that it is not activating the output. This allows the field device to be manually activated (e.g., a limit switch can be manually closed). When the field device is activated, the module's power status indicator should turn ON, indicating that power continuity exists. If the indicator is ON, then wiring is not the cause of the problem. The next step is to evaluate the PLC's reading of the input module. This can be accomplished using the PLC's test mode, which reads the inputs and executes the program but does not activate the outputs. In this mode, the PLC's display should either show a 1 in the image table bit corresponding to the activated field device or the contact's reference instruction should become highlighted when the device provides continuity. If the PLC is reading the device correctly, then the problem is not located in the input module. If it does not read the device correctly, then the module could be faulty. The logic side of the module may not be operating correctly, or its optical isolator may be blown. Moreover, one of the module's interfacing channels could be faulty. In this case, the module must be replaced. If the module does not read the
field device’s signal, then further tests are required. Bad wiring, a faulty field device, a faulty module, or an improper voltage between the field device and the module could be causing the problem. First, close the field device and measure the voltage to the input module. The meter should display the voltage of the signal (e.g., 120 V AC). If the proper voltage is present, the input module is faulty because it is not recognizing the signal. If the measured voltage is 10–15% below the proper signal voltage, then the problem lies in the source voltage to the field device. If no voltage is present, then either the wiring or the field device is the cause of the problem. Check the wiring connection to the module to ensure that the wire is secured at the terminal or terminal blocks. To further pinpoint the problem, check that voltage is present at the field device. With the device activated, measure the voltage across the device using a voltmeter. If no voltage is present on the load side of the device (the side that connects to the module), then the input device is faulty. If there is power, then the problem lies in the wiring from the input device to the module. In this case, the wiring must be traced to find the problem. (c) Troubleshooting PLC outputs. PLC output interfaces also contain status indicators that provide useful troubleshooting information. Similar to the troubleshooting of PLC inputs, the first step in troubleshooting outputs is to isolate the problem to the module, the field device, or the wiring. At the output module, ensure that the source power for switching the output is at the correct level. Also, examine the output module to see if it has a blown fuse. If it does have a blown fuse, check the fuse’s rated value. Furthermore, check the output device’s current requirements to determine if the device is pulling too much current. If the output module receives the command to turn ON from the processor yet the module’s output status does not turn ON accordingly, then the output module is faulty. If the indicator turns ON but the field device does not energize, check for voltage at the output terminal to ensure that the switching device is operational. If no voltage is present, then the module should be replaced. If voltage is present, then the problem lies in the wiring or the field device. At this point, make sure that the field wiring to the module’s terminal or to the terminal block has a good connection and that no wires are broken. After checking the module, check that the field device is working properly. Measure the voltage coming to the field
device while the output module is ON, making sure that the return line is well connected to the device. If there is power and yet the device does not respond, then the field device is faulty. Another method for checking the field device is to test it without using the output module. Remove the output wiring and connect the field device directly to the power source. If the field device does not respond, then it is faulty. If the field device responds, then the problem lies in the wiring between the device and the output module. Check the wiring, looking for broken wires along the wire path. (d) Troubleshooting the CPU. PLCs also provide diagnostic indicators that show the status of the PLC and the CPU. Such indicators include power OK, memory OK, and communications OK conditions. First, check that the PLC is receiving enough power from the transformer to supply all the loads. If the PLC is still not working, check for voltage supply drop in the control circuit or for blown fuses. If the PLC does not come up even with proper power, then the problem lies in the CPU. The diagnostic indicators on the front of the CPU will show a fault in either memory or communications. If one of these indicators is lit, the CPU may need to be replaced.
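The input-troubleshooting procedure in section (b) above is essentially a decision tree and can be summarized in code. The following Python sketch is an informal restatement of those steps for a single discrete input; the observation values passed in (LED state, test-mode reading, and measured voltages) are assumed to be supplied by the technician, and the 10% threshold approximates the "10–15% below nominal" figure quoted in the text.

```python
def diagnose_input(power_led_on, plc_reads_input, voltage_at_module, nominal_voltage, voltage_at_device):
    """Rough decision tree for a discrete PLC input that will not turn ON,
    following the power-LED, test-mode, and voltage checks described above."""
    if power_led_on:
        if plc_reads_input:
            return "Input path OK: wiring, module, and processor all see the signal."
        return "Module suspect: power LED is ON but the logic side does not report the input."
    # Power LED off: check the signal actually arriving at the module terminals.
    if voltage_at_module >= 0.9 * nominal_voltage:
        return "Module faulty: proper voltage is present but not recognized."
    if voltage_at_module > 0:
        return "Source voltage problem: signal is well below the nominal level."
    # No voltage at the module: isolate between the field device and the wiring.
    if voltage_at_device <= 0:
        return "Field device faulty: no voltage on its load side when activated."
    return "Wiring fault between the field device and the input module."

# Example: LED off, no voltage at the module, but the closed device outputs 120 V
print(diagnose_input(False, False, 0.0, 120.0, 120.0))
```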
4.1.2 Computer Numerical Control (CNC) Controllers
CNC stands for computer numerical control, and refers specifically to the computer control of machine tools for manufacturing complex parts repeatedly. Many types of tools can have a CNC variant: lathes, milling machines, drills, grinding wheels, and so on. In an industrial production environment, all of these machines may be combined into one station to allow the continuous creation of a part involving several operations. CNC controllers are devices that control machines and processes. They range in capability from simple point-to-point linear control to highly complex algorithms that involve multiple axes of control. CNC controllers can be used to control various types of machine shop equipment. These include horizontal mills, vertical mills, lathes and turning centers, grinders, electro discharge machines (EDM), welding machines, and inspection machines. The number of axes controlled by CNC controllers can range anywhere from one to five, with some CNC controllers configured to control greater than six axes. Mounting types for CNC controllers include board, standalone, desktop, pendant, pedestal, and rack mount. Some units may have integral displays, touch screen displays, and keypads for controlling and programming.
The first benefit offered by all forms of CNC machine tools is improved automation. The operator intervention related to producing work pieces can be reduced or eliminated. Many CNC machines can run unattended during their entire machining cycle, freeing the operator to do other tasks. This gives the CNC user several side benefits including reduced operator fatigue, fewer mistakes caused by human error, and consistent and predictable machining time for each work piece. Since the machine will be running under program control, the skill level required of the CNC operator (related to basic machining practice) is also reduced as compared to a machinist producing work pieces with conventional machine tools. The second major benefit of CNC technology is consistent and accurate work pieces. Today’s CNC machines boast almost unbelievable accuracy and repeatability specifications. This means that once a program is verified, two, ten, or one thousand identical work pieces can be easily produced with precision and consistency. A third benefit offered by most forms of CNC machine tools is flexibility. Since these machines are run from programs, running a different work piece is almost as easy as loading a different program. Once a program has been verified and executed for one production run, it can be easily recalled the next time the work piece is to be run. This leads to yet another benefit, fast changeover. Since these machines are very easy to set up and run, and since programs can be easily loaded, they allow very short setup time. This is imperative with today’s just-in-time production requirements.
4.1.2.1 Components and Architectures
(1) CNC system architectures. A computer numerical control (CNC) system consists of three basic components: the CNC software, which is a program of instructions; the machine control unit; and the processing equipment, also called the machine tool. The general relationship among these three components is illustrated in Fig. 4.12. (a) CNC software. Both the controller and the computer in CNC systems operate by means of software. There are three types of software programs required in either of them: operating
Figure 4.12 Elementary components of a CNC system.
system software, machine interface software, and application software. The principal function of the operating system software is to generate and handle the corresponding control signals to drive the machine tool axes. The machine tool interface software is used to operate the communication link between the central processing unit (CPU) and the machine tool axes to accomplish the control functions. Finally, in the application software, the program of instructions is the detailed step-by-step set of commands that direct the actions of the processing equipment. In machine tool applications, this program of instructions is called the part program. In part programming, the individual commands refer to positions of a cutting tool relative to the worktable on which the work part is fixtured. Additional instructions are usually included, such as spindle speed, feed rate, cutting tool selection, and other functions. The program is coded in a suitable medium for submission to the machine control unit, called a controller. (b) Machine control unit. In today's CNC technology, the machine control unit (MCU) consists of a computer with related control hardware that stores and sequentially executes the program of instructions, converting each command into mechanical actions of the processing equipment. The general configuration of the MCU in a CNC system is illustrated in Fig. 4.13. An MCU generally consists of the following components or subsystems: CPU; memory; I/O interfaces; controls of the machine tool axes and spindle speed; and sequence control for other machine tool functions. These subsystems are interconnected inside the MCU by means of an internal bus, as indicated in Fig. 4.13. Among these components or subsystems, the CPU, memory, and I/O interfaces are described in Chapters Two and Three. In hardware,
Figure 4.13 Subsystem blocks of MCU in CNC system.
the MCU has two subsystems, machine tool controls and sequence controls, that distinguish it from normal computers such as personal computers (PCs). The machine tool controls subsystem consists of hardware components that control the position and velocity (feed rate) of each machine axis as well as the rotational speed of the machine tool spindle. The control signals generated by the MCU must be converted to a form and power level suited to the particular position control systems used to drive the machine axes. The positioning system can be classified as open-loop or closed-loop, and different hardware components are required in each case. Depending on the type of machine tool, the spindle is used to drive either the work piece or a rotating cutter. Turning exemplifies the first case, whereas milling and drilling exemplify the second. Spindle speed is a programmed parameter for most CNC machine tools. Spindle speed control components in the MCU usually consist of a drive control circuit and a feedback sensor interface. The particular hardware components depend on the type of spindle drive. In addition to control of table position, feed rate, and spindle speed, several additional functions are accomplished under part program control. These auxiliary functions are generally ON/OFF (binary) actuations, interlocks, and discrete numerical data. (c) Machine tool/processing equipment. The processing equipment accomplishes the processing steps to transform the starting work piece into a complete part. Its operation is directed by the MCU, which in turn is driven by instructions contained in the part program. In most CNC systems, the processing equipment consists of the worktable and spindle as well as the motors and controls to drive them. (d) Auxiliary and peripheral devices. Most CNC systems also contain some auxiliary devices as well as devices called peripherals. Important auxiliary devices may include (1) field buses, (2) servo amplifiers, and (3) power supply devices. Peripherals may include (1) keyboards, (2) graphic display interfaces such as monitors, (3) printers, and (4) disk drives and tape readers. The microprocessor selected is bus oriented, and the peripherals can be connected to the bus via interface modules. (2) Computers and CNC controllers. Computers, especially PCs, are increasingly being used in factories to implement process control, and the same is true in CNC systems. Two
basic configurations between computers and CNC controllers are the following: (a) The PC is used as a separate front-end interface for displaying the control process to operators or for entering and encoding software programs into the CNC controller. In this case, the PC and the controller are interconnected by means of their respective I/O interface modules, mostly with RS232 or RS485 interfaces. (b) The PC contains the motion control chips (or board) and the other hardware required to operate the machine tool. In this case, the CNC control chip fits into a standard slot of the PC, and the selected PC will require additional interface cards and programming. In either configuration, the advantage of using a PC for CNC is its flexibility to execute a variety of user software in addition to, and concurrently with, controlling the machine tool operation. The user software might include programs for shop-floor control, statistical process control, solid modeling, cutting tool management, and other computer-aided manufacturing software. Other benefits include improved ease of use compared with conventional CNC and ease of networking the PCs. Possible disadvantages include (1) lost time to retrofit the PC for CNC, particularly when installing the CNC motion controls inside the PC, and (2) current limitations in applications that require complex five-axis control of the machine tool, for which traditional CNC is still more efficient. It should be mentioned that advances in the technology of PC-based CNC are likely to reduce these disadvantages over time.
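As an illustration of configuration (a), the short Python sketch below opens a serial link from a PC front-end to a CNC controller and sends one line of a part program. It uses the third-party pyserial package; the port name, baud rate, and the expectation that the controller echoes an acknowledgement line are all assumptions for the example, not details of any particular controller.

```python
import serial  # pyserial: pip install pyserial

# Assumed settings: adjust the port ("COM1", "/dev/ttyUSB0", ...) and baud rate
# to match the controller's RS232/RS485 interface documentation.
with serial.Serial(port="COM1", baudrate=9600, timeout=2) as link:
    link.write(b"N010 G01 X25.0 Y10.0 F150\r\n")  # send one program block
    reply = link.readline()                        # hypothetical acknowledgement
    print("controller replied:", reply.decode(errors="replace").strip())
```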
4.1.2.2 Control Mechanism
CNC is the process of “feeding” a set of sequenced instructions of the program into a specially designed programmable CNC controller and then using the controller to direct the motions of a machine tool such as a milling machine, lathe, or flame cutter. The program directs the cutter to follow a predetermined path at a specific spindle speed and feed rate that will result in the production of the desired geometric shape in a work piece. CNC controllers have several choices for operation. These include polar coordinate command, cutter compensation, linear and circular interpolation, stored pitch error, helical interpolation, canned cycles, rigid tapping, and autoscaling. Polar coordinate command is a numerical control system in which all the coordinates are referred to a certain pole. The position is
defined by the polar radius and polar angle. Cutter compensation is the distance you want the CNC control to offset for the tool radius away from the programmed path. Linear and circular interpolation is the programmed path of the machine, which appears to be straight or curved but is actually a series of very small steps along that path. Machine precision can be remarkably improved through such features as stored pitch error compensation, which corrects for lead screw pitch error and other mechanical positioning errors. Helical interpolation is a technique used to make large diameter holes in work pieces. It allows for high metal removal rates with a minimum of tool wear. Canned cycles are machine routines, like drilling, deep drilling, reaming, tapping, boring, and so forth, that involve a series of machine operations but are specified by a single G-code with appropriate parameters. Rigid tapping is a CNC tapping feature where the tap is fed into the work piece at the precise rate needed for a perfect tapped hole. It also needs to retract at the same precise rate; otherwise it will shave the hole and create an out-of-spec tapped hole. Autoscaling translates the parameters of the CNC program to fit the work piece. Many other kinds of manufacturing equipment and manufacturing processes are controlled by other types of programmable CNC controllers. For example, a heat-treating furnace can be equipped with such a controller that will monitor temperature and the furnace's atmospheric oxygen, nitrogen, and carbon and make automatic changes to maintain these parameters within very narrow limits. (1) CNC coordinate system. To program the CNC processing equipment, a standard axis system must be defined by which the position of the workhead relative to the workpart can be specified. There are two axis systems used in CNC, one for flat and prismatic workparts and the other for rotational parts. Both axis systems are based on the Cartesian coordinate system. The axis system for flat and prismatic parts consists of three linear axes (x, y, z) in the Cartesian coordinate system, plus three rotational axes (a, b, c), as shown in Fig. 4.14. In most machine tool applications, the x- and y-axes are used to move and position the worktable to which the part is attached, and the z-axis is used to control the vertical position of the cutting tool. Such a positioning scheme is adequate for simple numerical control applications such as drilling and punching of flat sheet metal. Programming of these machine tools consists of little more than specifying a sequence of x–y coordinates. The a-, b-, and c-rotational axes specify angular positions about the x-, y-, and z-axes, respectively. To distinguish positive from negative angles, the right-hand rule
Figure 4.14 Coordinate systems used in numerical control: (a) for flat and prismatic work and (b) for rotational work.
is used. The rotational axes can be used for one or both of the following: (1) orientation of the workpart to present different surfaces for machining or (2) orientation of the tool or workhead at some angle relative to the part. These additional axes permit machining of complex workpart geometries. Machine tools with rotational axis capability generally have either four or five axes: three linear axes plus one or two rotational axes. Most CNC systems do not require all six axes. The coordinate axes for a rotational numerical control system are illustrated in Fig. 4.14(b). These systems are associated with numerical control lathes and turning centers. Although the work rotates, this is not one of the controlled axes on most of these turning machines. Consequently, the y-axis is not used. The path of a cutting tool relative to the rotating workpiece is defined in the x–z plane, where the x-axis is the radial location of the tool and the z-axis is parallel to the axis of rotation of the part. The part programmer must decide where the origin of the coordinate axis system should be located, which is usually based on programming convenience. After this origin is located, the zero position is communicated to the machine tool operator, who moves the cutting tool under manual control to some target point on the worktable where the tool can be easily and accurately positioned. (2) Motion control—the heart of CNC. The most basic function of any CNC controller is automatic, precise, and consistent motion control. All forms of CNC equipment have two or more directions of motion, called axes. These axes can be precisely and automatically positioned along their lengths of travel. The two most
common axis types, as given before, are linear (driven along a straight path) and rotary (driven along a circular path). Instead of causing motion by manually turning cranks and handwheels as is required on conventional machine tools, CNC machines allow motion to be actuated by servomotors under control of the CNC and guided by the part program. Generally speaking, the motion type (rapid, linear, or circular), the axes to move, the amount of motion, and the motion rate (feed rate) are programmable with almost all CNC machine tools. Figure 4.15 shows the makeup of a linear axis of a CNC controller. In this case, a CNC command executed within the control (commonly through a program) tells the drive motor to rotate a precise number of times. The rotation of the drive motor in turn rotates the ball screw, and the ball screw drives the linear axis. A feedback device at the opposite end of the ball screw allows the control to confirm that the commanded number of rotations has taken place. Although a rather crude analogy, the same basic linear motion can be found on a common table vise. By rotating the vise crank, a lead screw is rotated, which in turn drives the movable jaw on the vise. In comparison, a linear axis on a CNC machine tool is extremely precise. The number of revolutions of the axis drive motor precisely controls the amount of linear motion along the axis.
Figure 4.15 A CNC machine takes the commanded position from the CNC program. The drive motor is rotated a corresponding amount, which in turn drives the ball screw, causing linear motion of the axis. A feedback device confirms that the proper amount of ball screw revolutions has occurred.
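The relationship just described, with motor revolutions commanding linear travel through the ball screw, reduces to simple arithmetic. The sketch below is an illustrative calculation only; the 5 mm screw lead, the direct motor-to-screw coupling, and the 10,000-count feedback device are assumed values, not figures from the text.

```python
def linear_travel_mm(motor_revolutions, screw_lead_mm=5.0):
    """Linear axis travel produced by a given number of ball screw revolutions
    (direct drive assumed: one motor revolution = one screw revolution)."""
    return motor_revolutions * screw_lead_mm

def feedback_counts(motor_revolutions, counts_per_rev=10_000):
    """Encoder counts the control expects back from the feedback device."""
    return int(motor_revolutions * counts_per_rev)

# Command 12.5 revolutions of the drive motor:
revs = 12.5
print(linear_travel_mm(revs))   # 62.5 mm of slide movement
print(feedback_counts(revs))    # 125000 counts confirming the motion
```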
All discussions to this point assume that the absolute mode of programming is used. In the absolute mode, the end points for all motions are specified from the program zero point. However, there is another way of specifying end points for axis motion, the incremental mode. In the incremental mode, end points for motions are specified from the tool's current position, not from program zero. Although the CNC controller must be told the location of the program zero point by one means or another, how this is done varies dramatically from one CNC controller to another. An older method is to assign program zero in the program. With this method, the programmer tells the control how far it is from the program zero point to the starting position of the machine. A newer and better way to assign program zero is through some form of offset. Machining center control manufacturers commonly call the offsets used to assign program zero fixture offsets. Turning center manufacturers commonly call the offsets used to assign program zero for each tool geometry offsets. While a CNC controller may offer more motion types in industrial applications, the three most common types are available in almost all forms of CNC equipment: (a) Rapid motion or positioning. This motion type is used to command motion at the machine's fastest possible rate. It is used to minimize nonproductive time during the machining cycle. Common uses for rapid motion include positioning the tool to and from cutting positions, moving to clear clamps and other obstructions, and, in general, any noncutting motion during the program. (b) Straight line motion. This motion type allows the programmer to command motion along a perfectly straight line. This motion type also allows the programmer to specify the motion rate (feed rate) to be used during the movement. Straight line motion can be used any time a straight cutting movement is required, including when drilling, turning a straight diameter, face, or taper, and when milling straight surfaces. The method by which feed rate is programmed varies from one machine type to the next. Generally speaking, machining centers only allow the feed rate to be specified in per-minute format (inches or millimeters per minute). Turning centers also allow feed rate to be specified in per-revolution format (inches or millimeters per revolution). (c) Circular motion. This motion type causes the machine to make movements in the form of a circular path. This motion type is used to generate radii during machining. All feed-rate
related points for straight-line motion still apply to the circular motion. (3) Interpolation. For the control to move along a perfectly straight line to get to the programmed end point, it must perfectly synchronize the X- and Y-axis movements, as given in Fig. 4.16. Also, if machining is to occur during the motion, a motion rate (feed rate) must also be specified. This requires linear interpolation. Linear interpolation is accomplished by means of the linear interpolation commands, in which the control will precisely and automatically calculate a series of very tiny single-axis departures, keeping the tool as close to the programmed linear path as possible. On the CNC machine tool, it appears that the machine is forming a perfectly straight-line motion. Figure 4.16(a) shows what the CNC control is actually doing during linear interpolation. In similar fashion, many applications for CNC machine tools require that the machine be able to form circular motions. Applications for circular motions include forming radii on turned work pieces between faces and turns, and milling radii on contours on machining centers. This kind of motion requires circular interpolation. As with linear interpolation, the control will do its best to generate as close to a circular path as possible from a series of very small movements. Figure 4.16(b) shows what happens during circular interpolation. Depending on the application, other interpolation types are required on turning centers that have live tooling. For turning centers that can rotate tools (like end mills) in the turret and have a c-axis to rotate the work piece held in the chuck, polar coordinate interpolation can be used to mill contours around the periphery of the work piece. Polar coordinate interpolation allows the programmer to "flatten out" the rotary axis, treating it as a linear axis for the purpose of making motion commands.
Figure 4.16 (a) Interpolation for actual motion generated with linear interpolation. (b) This drawing shows what happens during circular interpolation.
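To make the idea of interpolation concrete, the following Python sketch breaks a programmed straight-line move into a series of small, synchronized X/Y steps, which is the kind of decomposition Fig. 4.16(a) depicts. The step size is an arbitrary assumption for illustration; a real control works at its own, much finer, resolution.

```python
def linear_interpolation(start, end, step=0.1):
    """Generate intermediate (x, y) points along a straight line from start to end,
    keeping both axes synchronized so the tool stays on the programmed path."""
    x0, y0 = start
    x1, y1 = end
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    steps = max(1, int(length / step))
    return [(x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
            for i in range(steps + 1)]

# A short programmed move from (0, 0) to (2.0, 1.0); the control would issue
# each of these tiny coordinated departures in turn.
for point in linear_interpolation((0.0, 0.0), (2.0, 1.0), step=0.5):
    print(f"X{point[0]:.3f} Y{point[1]:.3f}")
```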
(4) Compensation. All types of CNC machine tools require compensation. Though applied for different reasons on different machine types, all forms of compensation allow for unpredictable conditions related to tooling. In many applications, the CNC user will be faced with situations in which it is impossible to predict exactly the result of certain tooling-related problems, so one form of compensation or another has to be used. (a) Tool length compensation. This machining center compensation type allows the programmer to forget about each tool's length as the program is written. Instead of having to know the exact length of each tool and tediously calculating Z-axis positions based on the tool's length, the programmer simply enters tool length compensation on each tool's first Z-axis approach movement to the work piece. At the machine during setup, the operator will input the tool length compensation value for each tool in the corresponding offset. This, of course, means the tool must first be measured. If tool length compensation is used wisely, the tool can be measured offline (in a tool length measurement gauge) to minimize setup time. Figure 4.17 shows one popular method of determining the tool length compensation value. With this method, the value is simply the length of the tool.
Figure 4.17 With tool length compensation, the tool’s length compensation value is stored separate from the program. Many CNC controls allow the length of the tool to be used as the offset value.
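As a simple illustration of what a control does with that stored value, the sketch below adjusts a programmed Z position by a tool's length offset. It is a schematic example under an assumed sign convention (the measured length is added to the programmed position); actual controls differ in how the offset is referenced and applied, and the offset table values are made up.

```python
# Hypothetical offset table built during setup: tool number -> measured length (mm)
tool_length_offsets = {1: 101.625, 2: 87.200, 3: 124.050}

def compensated_z(programmed_z, tool_number, offsets=tool_length_offsets):
    """Apply tool length compensation: the programmer writes Z positions relative
    to the work piece surface, and the measured tool length is added at run time."""
    return programmed_z + offsets[tool_number]

# The program commands Z-2.0 (2 mm into the part) with tool 2 active:
print(compensated_z(-2.0, 2))  # 85.2 -> axis position actually commanded
```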
(b) Cutter radius compensation. Just as tool length compensation allows the machining center programmer to forget about the tool’s length, so does cutter radius compensation allow the programmer to forget about the cutter’s radius as contours are programmed. It may be obvious that cutter radius compensation is only used for milling cutters and only when milling on the periphery of the cutter. A programmer would never consider using cutter radius compensation for a drill, tap, reamer, or other hole-machining tool. Without cutter radius compensation, machining center programmers must program the centerline path of all milling cutters. When programming centerline path, the programmer must know the precise diameter of the milling cutter and calculate program movements accordingly. This can be difficult enough with simple motions, but when contours become complicated, it can be very difficult to calculate centerline path. With cutter radius compensation, the programmer can program the coordinates of the work surface, not the tool’s centerline path. This eliminates the need for many calculations. It is worth mentioning that we are now talking about manual programming. If you have a computer-aided manufacturing (CAM) system, it can probably generate centerline path just as easily as work surface path. (c) Dimensional tool offsets. This compensation type applies only to turning centers. When setting up tools, it is not feasible to expect that each tool will be perfectly in position. It is likely that some minor positioning problems will exist. And even if all tools could be perfectly positioned, as any single-point turning or boring tool begins cutting, it will begin to wear. As a turning or boring tool wears, it will directly affect the size of the work piece being machined. For these reasons, and to allow easy sizing of turned work pieces, dimensional tool offsets are required (also simply called tool offsets). Dimensional tool offsets are installed as part of a four-digit T word. The first two digits indicate the tool station number and the second two digits indicate the offset number to be installed. When a tool offset is installed, the control actually shifts the entire coordinate system by the amount of the offset. It will be as if the operator could actually move the tool in the turret by the amount of the offset. Each dimensional offset has two values, one for X and one for Z. The operator will have control of what the tool does in both axes as the work piece is being machined.
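The four-digit T word convention described above is easy to demonstrate in code. The following Python sketch parses a T word into its station and offset numbers; the example values are hypothetical.

```python
def parse_t_word(t_word):
    """Split a four-digit turning-center T word into (station, offset):
    the first two digits select the tool station, the last two select the
    dimensional offset to be installed."""
    digits = f"{int(t_word):04d}"          # e.g. "203" -> "0203"
    return int(digits[:2]), int(digits[2:])

station, offset = parse_t_word("0112")
print(station, offset)   # 1 12 -> tool station 1, offset number 12
```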
(d) Other types of compensation. The compensation types shown to this point have been for machining centers and turning centers. But all forms of CNC equipment have some form of compensation to allow for unpredictable situations. Here are some other brief examples. CNC wire EDM machines have two kinds of compensation. One, called wire offset, works in a very similar way to cutter radius compensation to keep the wire centerline away from the work surface by the wire radius plus the overburn amount. It is also used to help make trim (finishing) passes using the same coordinates. Laser cutting machines also have a feature like cutter radius compensation to keep the radius of the laser beam away from the surface being machined. CNC press brakes have a form of compensation for bend allowances based on the work piece material and thickness. Generally speaking, if the CNC user is faced with any unpredictable situations during programming, it is likely that the CNC control manufacturer has come up with a form of compensation to deal with the problem.
4.1.2.3 CNC Part Programming
CNC part programming consists of designing and documenting the sequence of processing steps to be performed on a CNC machine. It is crucial that a manual CNC programmer be able to visualize the machining operations that are to be performed during the execution of the program. Then, in step-by-step order, the programmer will give a set of commands that makes the machine behave accordingly. The CNC programmer must be able to visualize the movements the CNC machine will be making before a program can be successfully developed. Without this visualization ability, the programmer will not be able to develop the movements in the program correctly. This is one reason why machinists make the best CNC programmers. An experienced machinist should be able to easily visualize any machining operation taking place. (1) CNC program formats. The program format is the arrangement of the data that make up the program. Commands are fed to the controller in units called blocks or statements. A block is one complete CNC instruction which is made up of one or more commands such as axis commands or feed rate commands. The format of command information within each block is very important. There are five block formats used in CNC programming: (a) Fixed sequential format. Fixed sequential format requires that specific command data items be organized together in a
definite order to form a complete statement or block of information. Every block must have exactly the same number of characters (words), and the significance of each character depends on where it is located in the block. (b) Fixed sequential format with TAB ignored. This is the same as the fixed sequential format except that TAB codes are used to separate the characters for easier reading by humans. (c) Tab sequential format. This is the same as the preceding format except that characters with the same value as in the preceding block can be omitted in the sequence. (d) Word address format. This format uses a single-letter prefix to identify the type of each word (see Table 4.3 for the definitions of the prefixes). It provides an address for each data element, permitting the controller to assign data to the correct register in whatever order they are received. Almost all current CNC controllers use a word address format for programming. (e) Word address format with TAB separation and variable word order. This is the same as the last format, except that characters are separated by TABs and the characters in the block can be listed in any order. Although the word address format allows variations in the order, the words in a block are usually given in the following order: (i) Sequence number (N-word) (ii) Preparatory word (G-word, see Table 4.4 for definition of G-word)
Table 4.3 Common Word Prefixes Used in Word Address Format
Address word: Function
A, B, C: Rotation about the X-, Y-, Z-axis, respectively
F: Feed rate commands
G: Preparatory commands
I, J, K: Circular interpolation X-, Y-, Z-axis offset, respectively
J: Circular interpolation Y-axis offset
K: Circular interpolation Z-axis offset
M: Miscellaneous commands
N: Sequence number
R: Arc radius
S: Spindle speed
T: Tool number
X, Y, Z: X-, Y-, Z-axis data, respectively
Table 4.4 G-Code Commands
G-Codes for movement
G00: It sets the controller for rapid travel mode axis motion used for point-to-point motion. Two-axis X and Y moves may occur simultaneously at the same velocity, resulting in a nonlinear dogleg cutter path. With axis priority, three-axis rapid travel moves will move the Z-axis before the X- and Y-axes if the Z-axis cutter motion is in the positive direction; otherwise the Z-axis will move last
G01: It sets the controller for linear motion at the programmed feed rate. The controller will coordinate (interpolate) the axis motion of two-axis moves to yield a straight-line cutter path at any angle. A feed rate must be in effect. If no feed rate has been specified before entering the axis destination commands, the feed rate may default to zero inches per minute, which will require a time of infinity to complete the cut
G02: It sets the controller for motion along an arc path at the programmed feed rate in the clockwise direction. The controller coordinates the X- and Y-axes (circular interpolation) to produce an arc path
G03: It is the same as G02, but the direction is counterclockwise. G00, G01, G02, and G03 will each cancel any other of the four that might be active
G04: It is used for dwell on some makes of CNC controllers. It acts much like the M00 miscellaneous command in that it interrupts execution of the program. Unlike the M00 command, G04 can be an indefinite dwell or it can be a timed dwell if a time span is specified
G-Codes for offsetting the cutter's center
G40: It deactivates both G41 and G42, eliminating the offsets
G41: It is used for cutter-offset compensation where the cutter is on the left side of the work piece looking in the direction of motion. It permits the cutter to be offset an amount the programmer specifies to compensate for the amount a cutter is undersize or oversize
G42: It is the same as G41 except that the cutter is on the right side looking in the direction of motion. G41 and G42 can be used to permit the size of a milling cutter to be ignored (or set for zero diameter) when writing CNC programs. Milling cut statements can then be written directly in terms of work piece geometry dimensions. Cutting tool centerline offsets required to compensate for the cutter radius can be accommodated for the entire program by including a few G41 and/or G42 statements at appropriate places in the program
G-Codes for setting measurement data units
G70: It sets the controller to accept inch units
G71: It sets the controller to accept millimeter units
G-Codes for calling (executing) canned cycles
G78: It is used by some models of CNC controllers for a canned cycle for milling rectangular pockets. It cancels itself upon completion of the cycle
G79: It is used by some models of N/C controllers for a canned cycle for milling circular pockets. It cancels itself upon completion of the cycle
G80: It deactivates (cancels) any of the G80-series canned Z-axis cycles. Each of these canned cycles is modal. Once put in effect, a hole will be drilled, bored, or tapped each time the spindle is moved to a new location. Eventually the spindle will be moved to a location where no hole is desired. Cancelling the canned cycle terminates its action
G81: It is a canned cycle for drilling holes in a single drill stroke without pecking. Its motion is feed down (into the hole) and rapid up (out of the hole). A Z-depth must be included
G82: It is a canned cycle for counterboring or countersinking holes. Its action is similar to G81, except that it has a timed dwell at the bottom of the Z-stroke. A Z-depth must be included
G83: It is a canned cycle for peck drilling. Peck drilling should be used whenever the hole depth exceeds three times the drill's diameter. Its purpose is to prevent chips from packing in the drill's flutes, resulting in drill breakage. Its action is to drill in at feed rate a small distance (called the peck increment) and then retract at rapid travel. Then the drill advances at rapid travel ("rapids" in machine tool terminology) back down to its previous depth, feeds in another peck increment, and rapids back out again. Then it rapids back in, feeds in another peck increment, etc., until the final Z-depth is achieved. A total Z-depth dimension and peck increment must be included
G84: It is a canned cycle for tapping. Its use is restricted to CNC machines that have a programmable variable-speed spindle with reversible direction of rotation. It coordinates the spindle's rotary motion to the Z-axis motion for feeding the tap into and out of the hole without binding and breaking off the tap. It can also be used with some nonprogrammable spindle machines if a tapping attachment is also used to back the tap out
G85: It is a canned cycle for boring holes with a single-point boring tool. Its action is similar to G81, except that it feeds in and feeds out. A Z-depth must be included
G86: It is also a canned cycle for boring holes with a single-point boring tool. Its action is similar to G81, except that it stops and waits at the bottom of the Z-stroke. Then the cutter rapids out when the operator depresses the START button. It is used to permit the operator to back off the boring tool so it does not score the bore upon withdrawal. A Z-depth must be included
G87: It is a chip breaker canned drill cycle, similar to the G83 canned cycle for peck drilling. Its purpose is to break long, stringy chips. Its action is to drill in at feed rate a small distance, back out a distance of 0.010 in. to break the chip, then continue drilling another peck increment, back off 0.010 in., drill another peck increment, etc., until the final Z-depth is achieved. A total Z-depth dimension and peck increment must be included
G89: It is another canned cycle for boring holes with a single-point boring tool. Its action is similar to G82, except that it feeds out rather than rapids out. It is designed for boring to a shoulder. A Z-depth must be included
G-Codes for setting position frame of reference
G90: It sets the controller for positioning in terms of absolute coordinate location relative to the origin
G91: It sets the controller for incremental positioning relative to the current cutting tool point location
G92: It resets the X-, Y-, and/or Z-axis registers to any number the programmer specifies. In effect it shifts the location of the origin. It is very useful for programming bolt circle hole locations and contour profiling by simplifying trigonometric calculations
G-Codes for modifying operational characteristics
G99: It is a nonmodal deceleration override command used on certain Bridgeport CNC mills to permit a cutting tool to move directly—without decelerating, stopping, and accelerating—from a linear or circular path in one block to a circular or linear path in the following block, provided the paths are tangent or require no sudden change of direction and the feed rates are approximately the same
(iii) Coordinates (X-, Y-, Z-words for linear axes; A-, B-, C-words for rotary axes)
(iv) Feed rate (F-word)
(v) Spindle speed (S-word)
(vi) Tool selection (T-word)
(vii) Miscellaneous word (M-word, see Table 4.5 for definition of M-word)
(viii) End-of-block (EOB symbol).
(2) Programming methodologies. Part programming can be accomplished using a variety of procedures ranging from highly manual to highly automated methods. The methods are (1) manual part programming, (2) computer-assisted part programming, and (3) conversational programming.
Table 4.5 M-Code Commands
Miscellaneous commands
M00: It is a code that interrupts the execution of the program. The CNC machine stops and stays stopped until the operator depresses the START/CONTINUE button. It provides the operator with the opportunity to clear away chips from a pocket, reposition a clamp, or check a measurement
M01: It is a code for a conditional (or optional) program stop. It is similar to M00 but is ignored by the controller unless a control panel switch has been activated. It provides a means to stop the execution of the program at specific points if conditions warrant and the operator has actuated the switch
M02: It is a code that tells the controller that the end of the program has been reached. It may also cause the tape or the memory to rewind in preparation for making the next part. Some controllers use a different code (M30) to rewind the tape
M03: It is a code to start the spindle rotation in the clockwise (forward) direction
M04: It is a code to start the spindle rotation in the counterclockwise (reverse) direction
M05: It is a code to stop the spindle rotation
M06: It is a code to initiate the execution of an automatic or manual tool change. It accesses the tool length offset (TLO) register to offset the Z-axis counter to correspond to the end of the cutting tool, regardless of its length
M07: It turns on the coolant (spray mist)
M08: It turns on the coolant (flood)
M09: It turns off the coolant
M10 & M11: They are used to actuate clamps
M25: It retracts the quill on some vertical spindle N/C mills
M30: It rewinds the tape on some N/C machines. Others use M02 to perform this function
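To make the word address format and the block order listed above concrete, the following is a short, hypothetical drilling job written with the G- and M-codes of Tables 4.4 and 4.5. The coordinates, feed rate, spindle speed, and tool number are illustrative only, and real controllers differ in the exact words they require (many, for example, also expect an R-plane word with canned cycles); the text in parentheses is simply annotation.

N010 G70 G90 (inch units, absolute positioning)
N020 T01 M06 (change to tool number 1)
N030 S1200 M03 (spindle on clockwise at 1200 r/min)
N040 G00 X1.0 Y1.0 (rapid to the first hole location)
N050 G81 Z-0.40 F3.0 (canned drill cycle: feed to depth, rapid out)
N060 X2.0 (G81 is modal, so a second hole is drilled here)
N070 G80 M05 (cancel the canned cycle and stop the spindle)
N080 M30 (end of program)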
In manual part programming, the programmer prepares the CNC code using the low-level machine language previously described. The program is either written by hand on a form, from which a punched tape or other storage medium is subsequently coded, or it is entered directly into a computer equipped with CNC part programming software, which writes the program onto the storage medium. In either case, manual part programming produces a block-by-block listing of the machining instructions for the given job, formatted for a particular machine tool.
Manual part programming can be used for both point-to-point and contouring jobs. It is most suited to point-to-point machining operations such as drilling. It can also be used for simple contouring jobs, such as milling and turning when only two axes are involved. However, for complex three-dimensional machining operations, there is an advantage in using computer-assisted part programming.
Computer-assisted part programming systems allow CNC programming to be accomplished at a much higher level than manual part programming and are becoming very popular. With a computer-assisted part programming system, the programmer has a computer to help with the preparation of the CNC program. The computer actually generates the G-code level program, much like a CNC program created by manual means; once finished, the program is transferred directly to the CNC machine tool. While these systems vary dramatically from one to the next, three basic steps remain remarkably similar among most of them. First, the programmer must give some general information. Second, the work piece geometry must be defined and trimmed to match the work piece shape. Third, the machining operations must be defined.
Information required of the programmer in the first step includes documentation information like part name, part number, date, and program file name. The programmer may also be required to set up the graphic display size for scaling purposes. The work piece material and rough stock shape may also be required.
In the second step, the programmer describes the shape of the work piece by using a series of geometry definition methods. With graphic computer-assisted part programming systems, the programmer is generally shown each geometric element as it is described. The programmer can select from a series of definition methods, choosing the one that makes it easiest to define the work piece shape.
Once geometry is defined, most of these systems require that the geometry be trimmed to match the actual shape of the work piece to be machined. Lines that run off the screen in both directions must be trimmed to form line segments. Circles must be trimmed to form radii.
In the third step, the programmer tells the computer-assisted system how the work piece is to be machined. During this step, a tool path or animation is usually shown, giving the programmer a very good idea of what will happen as the program is run at the machine tool. This ability to visualize a program before it gets to the machine tool is a major advantage of graphic computer-assisted systems. At the completion of all operations, the programmer can command that the G-code level CNC program be created.
With conversational programming, the program is created at the CNC machine. Generally speaking, the conversational program is created using graphic and menu-driven functions. The programmer is able to visually check whether various inputs are correct as the program is created. When finished, most conversational controls will even show the programmer a tool path plot of what will happen during the machining cycle. Conversational controls vary substantially from one manufacturer to the next. In most cases, they can essentially be thought of as a single-purpose computer-assisted system, and thus provide a convenient means to generate part programs for a single machine. Be forewarned, though, that some of these controls, particularly older models, can only be programmed conversationally at the machine, which means you cannot utilize other means such as offline programming with a computer-assisted system. However, most newer models can operate either in a conversational mode or accept externally generated G-code programs.
There has been quite a controversy over the wisdom of employing conversational controls. Some companies use them exclusively and swear by their use. Others consider them wasteful. Everyone involved with CNC seems to have a very strong opinion (pro or con) about them. Generally speaking, conversational controls can dramatically reduce the time it takes the operator to prepare the program as compared to manual part programming.
(3) CNC part programming languages.
(a) G-Code commands and M-Code commands require some elaboration. G-Code commands are called preparatory commands. They consist of two numerical digits following the "G" prefix in the word address format. Table 4.4 explains all the G-Code commands.
M-Code commands are used to specify miscellaneous or auxiliary functions that are available on the machine tool. M-Code commands are explained in Table 4.5.
(b) Automatically programmed tools (APT). APT is a universal computer-assisted programming system for multiaxis contouring programming. The original CNC programming system, developed for the aerospace industry, was first used in building and manufacturing military equipment. The APT code is one of the most widely used software tools for complex numerically controlled machining. APT is a "problem oriented" language that was developed for the explicit purpose of aiding the programming of CNC machine tools. Machine-tool instructions and geometry definitions are written in the APT language to constitute a "part program." The APT part program is processed by the APT software to produce a cutter location file. This file may then be processed by user-supplied post processors that convert the cutter location data into a form suitable for a particular CNC machine tool.
The APT system software is organized into two separate programs: the load complex and the APT processor. The load complex handles the table initiation phase and is usually only run when changes to the APT processor capabilities are made. The APT processor consists of four components: the translator, the execution complex, the subroutine library, and the cutter location editor. The translator examines each APT statement in the part program for recognizable structure and generates a new statement, or series of statements, in an intermediate language. The execution complex processes all of the definition, motion, and related statements to generate cutter location coordinates. The subroutine library contains routines defining the algorithms required to process the sequenced list of intermediate language commands generated by the translator. The cutter location editor reprocesses the cutter location coordinates according to user-supplied commands to generate a final cutter location file.
The APT language is a statement-oriented, sequence-dependent language. With the exception of such programming techniques as looping and macros, statements in an APT program are executed in a strict first-to-last sequence. To provide programming capability for the broadest possible range of parts and machine tools, APT input (and output) is generalized, as represented by 3-dimensional geometry and tools, and is arbitrarily uniform, as represented by the moving tool concept and output data in absolute coordinates.
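The flavor of the APT language can be seen in the following schematic fragment. The geometry values, feeds, speeds, and post-processor words are illustrative only, and different APT implementations and post processors vary in the exact vocabulary they accept; the lines beginning with $$ are comments.

PARTNO SAMPLE PLATE
MACHIN/MILL, 1
CLPRNT
CUTTER/0.500
$$ geometry definitions (illustrative values)
SETPT = POINT/0, 0, 2.0
P1 = POINT/1.0, 1.0, 0
P2 = POINT/5.0, 1.0, 0
P3 = POINT/5.0, 3.0, 0
L1 = LINE/P1, P2
L2 = LINE/P2, P3
PL1 = PLANE/P1, P2, P3
$$ motion statements
SPINDL/1200, CLW
FEDRAT/5.0
COOLNT/ON
FROM/SETPT
GO/TO, L1, TO, PL1
GORGT/L1, PAST, L2
RAPID
GOTO/SETPT
COOLNT/OFF
SPINDL/OFF
FINI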
4.1.2.4
CNC Controller Specifications
Table 4.6 CNC Controller Specifications

Applications
Horizontal Mills: A horizontal milling machine has the cutting tool oriented in the horizontal direction
Vertical Mills: A vertical milling machine has the cutting tool oriented in the vertical direction
Lathes and Turning Centers: Lathes and turning centers are terms sometimes used interchangeably. A lathe has a cutting tool and a rotating work piece. The CNC program controls the position of the cutting tool along the way guides
Grinders: Grinders are used in work piece finishing operations. CNC grinders automatically grind surfaces to a precision finish
Electro Discharge Machine (EDM): Electro discharge machining (EDM) is a method in which voltage is applied through a dielectric medium between the tool electrode and the work piece, using electro discharge generated when the electrode and work piece are positioned close to each other
Welding: Automated welding machines include TIG, MIG, and other capabilities
Inspection: Inspection machines inspect products for a particular attribute (e.g., color, size, mass) and reject items which fall outside preset values. They are sometimes integrated on other CNC controlled machines
Other: Other unlisted, specialized, or proprietary applications

Number of axes
1: The controller controls one axis
2: The controller controls two axes
3: The controller controls three axes
4: The controller controls four axes
5: The controller controls five axes
6+: The controller controls six or more axes

Configuration
Board: A computer board that has the ability to act as a CNC controller
Stand Alone: Standalone cabinets are separate from the machine and contain all the controlling electronics of the machine
Desk Top: A desktop controller allows the operator to control the machine from a separate, nearby office
Pendant: A pendant controller hangs from an arm attached to the machine
Pedestal: Pedestal controllers sit on top of an arm attached to the machine
Rack Mount: A rack-mounted controller has tabs to mount the controller components on vertical rails inside a standard rack

Industrial communications (most factories have a common communications protocol; this allows for easier intercell or machine communications)
ARCNet: ARCNet is an embedded networking technology well suited for real-time control applications in both the industrial and commercial marketplaces
CANBus: Controller Area Network (CANBus) is a high-speed serial data network engineered to exist in harsh electrical environments
ControlNET: Real-time control-layer network that provides high-speed (up to 5 Mbps) transport of message data and I/O data. Especially good for peer-to-peer systems. Specifications are set by ControlNet International (the association of ControlNet users)
Data Highway Plus: The Data Highway Plus network is a local area network designed to support remote programming for factory floor applications
DeviceNet: Utilizing the CAN protocol, DeviceNet is a network designed to connect industrial devices such as limit switches, photoelectric cells, valve manifolds, motor starters, drives, and operator displays to PLCs and PCs
Ethernet (10/100 Base-T): A local area network (LAN) protocol developed by Xerox Corporation in cooperation with DEC and Intel in 1976. Ethernet uses a bus or star topology and supports data transfer rates of 10 Mbps. The Ethernet specification served as the basis for the IEEE 802.3 standard, which specifies the physical and lower software layers. Ethernet uses the CSMA/CD access method to handle simultaneous demands. It is one of the most widely implemented LAN standards
Parallel: The IEEE 1284 parallel interface standard is the prevalent standard for connecting a computer to a printer or certain other devices over a parallel (eight bits of data at a time) physical and electrical interface. The IEEE 1284 standard also allows for bidirectional communications
PROFIBUS: PROFIBUS is a family of industrial communications protocols widely used in Europe for manufacturing and process applications. PROFIBUS focuses on multivendor interchangeability and interoperability of devices
SERCOS: SERCOS (SErial Real-time COmmunications System) is an open controller-to-intelligent digital drive interface specification, designed for high-speed serial communication of standardized closed-loop data in real time over a noise-immune, fiber optic cable
Universal Serial Bus (USB): The 12 megabit serial bus designed to replace virtually all low-to-medium speed peripheral device connections to personal computers, including keyboards, mice, modems, printers, joysticks, audio functions, monitor controls, etc. Heavily supported by hundreds of vendors and Microsoft. Almost all personal computer systems built in 1997 and later use USB ports
Serial (RS232, RS422, RS485): Recommended Standard interfaces approved by the Electronic Industries Association (EIA) for connecting serial devices
Web Enabled: The controller has an interface to the World Wide Web and can communicate with computers or controllers
Other: Other unlisted, specialized, or proprietary communications

Language
Bitmap: A bit map (often spelled "bitmap") defines a display space and the color for each pixel or "bit" in the display space
Conversational: Conversational language is a higher level, easy to learn programming tool. It performs the same functions as the standard G-Code commands
DXF File: Drawing eXchange Format (DXF) is a file format created as a standard to freely exchange two- and three-dimensional drawings between different CAD programs. It basically represents a shape as a wire frame mesh of x, y, z coordinates
G/M-Codes: G-Code is the programming language for Computer Numerically Controlled (CNC) machine tools that can be downloaded to the controller to operate the machine. M-Code is the standard set of machine tool codes normally used to switch on the spindle, coolant, or auxiliary devices
Hewlett Packard Graphics Language (HPGL): HPGL was originally created to send two-dimensional drawing information to pen plotters, but has since become a good standard for the exchange of two-dimensional drawing information between CAD programs
Ladder Logic: A programming language used to program programmable logic controllers (PLCs). This graphical language closely resembles electrical relay logic diagrams
Other: Other unlisted, specialized, or proprietary languages

Operation
Polar Coordinate Command: A numerical control system in which all the coordinates are referred to a certain pole. The position is defined by the polar radius and polar angle
Cutter Compensation: Cutter compensation is the distance you want the CNC control to offset for the tool radius away from the programmed path
Linear/Circular Interpolation: Linear and circular interpolation is the programmed path of the machine, which appears to be straight or curved, but is actually a series of very small steps along that path
Stored Pitch Error: Machine precision can be remarkably improved through such features as stored pitch error compensation, which corrects for lead screw pitch error and other mechanical positioning errors
Helical Interpolation: Helical interpolation is a technique used to make large diameter holes in work pieces. It allows for high metal removal rates with a minimum of tool wear
Canned Cycles: Machine routines like drilling, deep drilling, reaming, tapping, boring, etc., that involve a series of machine operations but are specified by a single G-code with appropriate parameters
Rigid Tapping: Rigid tapping is a CNC tapping feature where the tap is fed into the work piece at the precise rate needed for a perfect tapped hole. It also needs to retract at the same precise rate, otherwise it will shave the hole and create an out-of-spec tapped hole
Autoscaling: Autoscaling translates the parameters of the CNC program to fit the work piece
Other: Other unlisted, specialized, or proprietary operations

Features
Alarm/Event Monitoring: Event monitoring is incorporated into machines to help avoid or reduce the incidence of costly down time. Monitoring could include collision avoidance to prevent broken tools or dies, or other kinds of machine-damaging incidents
Behind Tape Reader: Behind Tape Readers (BTRs) allow the program to be loaded from a computer into the machine's memory without having to go through the tape reader. In older machines this is common because it is the only way to access the machine's memory
Diskette/Floppy Storage: A standard 3½ in. floppy disk
Tape Storage: The machine program is stored on a tape. The tape can either be the conventional paper style or the more current magnetic style
Zip Disk Storage: Zip disk storage is a compact, high capacity form of removable storage
Multiprogram Storage: Many machine controllers allow for multiple part programs to be stored internally. This allows for faster set up and turnaround times
Self Diagnostics: The controller has the ability to run a program and determine if there is a machine fault or error in the part being worked on
Simultaneous Control: Simultaneous control allows for the independent control of multiple axes at the same time
Tape Reader: A device attached to a machine controller that reads paper or magnetic tapes. These tapes contain the program information used to make parts
Teach Mode: Teach mode allows the machine operator to move each axis to the desired position to "teach" the machine how to make the part
Other: Other unlisted, specialized, or proprietary features or options
4.1.3 Supervisory Control and Data Acquisition (SCADA) Controllers
Supervisory control and data acquisition (SCADA) is a system of hardware components and software application programs used for process control and to transfer data in real time from remote locations to control equipment and conditions. SCADA systems are at the heart of the modern industrial enterprise, with applications in power plants, telecommunications, transportation, water and waste control, and so on. SCADA hardware gathers and feeds data into a computer that has SCADA software installed. The computer then processes this data according to customer specifications and displays it on customized screens. These systems allow operators to control equipment from a central location and provide warnings (graphically on screens and with audible alarms) when conditions become hazardous.
4.1.3.1
Components and Architectures
(1) SCADA system architectures. SCADA systems have evolved in parallel with the growth and sophistication of modern computer technology. Most SCADA systems follow one of two topologies: monolithic SCADA systems and distributed SCADA systems.
(a) Monolithic SCADA systems. When SCADA systems were first developed, the concept of computing in general centered on "mainframe" systems. Networks were generally nonexistent, and each centralized system stood alone. As a result, SCADA systems were standalone systems with virtually no connectivity to other systems. This kind of SCADA network is implemented to communicate with remote terminal units (RTUs) only in the field. The communication protocols in use on such SCADA networks were developed by vendors of RTU equipment and are often proprietary. In addition, these protocols are generally very "lean," supporting virtually no functionality beyond that required for scanning and controlling points within the remote device. Also, it is generally not feasible to intermingle other types of data traffic with RTU communications on the network. Connectivity to the SCADA master station itself is very limited. Connections to the master typically are at the bus level via a proprietary adapter or controller plugged into the CPU backplane.
Redundancy in this kind of system is accomplished by the use of two identically equipped mainframe systems, a primary and a backup, connected at the bus level. The standby system's primary function is to monitor the primary and take over in the event of a detected failure. This type of standby operation means that little or no processing is done on the standby system. Figure 4.18 shows a typical architecture of the monolithic SCADA systems.
(b) Distributed SCADA systems. Distributed SCADA systems take advantage of developments and improvements in system miniaturization and Local Area Network (LAN) technology to distribute the processing across multiple systems. Multiple stations, each with a specific function, are connected to a LAN and share information with each other in real time. Some of these distributed stations serve as communications processors, primarily communicating with field devices such as RTUs. Some serve as operator interfaces, providing the human-machine interface (HMI) for system operators. Still others serve as calculation processors or database servers. The distribution of individual SCADA system functions across multiple systems provides more processing power for the system as a whole than would have been available in a single processor. The networks that connect these individual systems are generally based on LAN protocols and are not capable of reaching beyond the limits of the local environment.
Figure 4.18 Monolithic SCADA system architectures.
Some of the LAN protocols are of a proprietary nature, where the vendor creates its own network protocol, or a version thereof, rather than pulling an existing one off the shelf. This allows a vendor to optimize its LAN protocol for real-time traffic, but it limits (or effectively eliminates) the connection of networks from other vendors to the SCADA LAN. Figure 4.19 depicts a typical distributed SCADA architecture.
Distribution of system functionality across network-connected systems serves not only to increase processing power, but also to improve the redundancy and reliability of the system as a whole. Rather than the simple primary and standby failover scheme that is utilized in monolithic SCADA systems, the distributed architecture often keeps all stations on the LAN in an online state all of the time. For example, if an HMI station were to fail, another HMI station could be used to operate the system, without waiting for failover from the primary system to the secondary.
The Wide Area Network (WAN) used to communicate with devices in the field is largely unchanged by the development of LAN connectivity between local stations at the SCADA master. These external communications networks are still limited to RTU protocols and are not available for other types of network traffic. WAN protocols such as the Internet Protocol (IP) are used for communication between the master station and communications equipment. This allows the portion of the master station that is responsible for communications with the field devices to be separated from the master station "proper" across a WAN. Vendors are now producing RTUs that can communicate with the master station using an Ethernet connection, as depicted by Fig. 4.19.
(2) SCADA software architecture. As the name indicates, SCADA is not a full control system, but rather focuses on the supervisory level. As such, it is a purely software package that is positioned on top of hardware to which it is interfaced, in general via Programmable Logic Controllers (PLCs) or other commercial hardware modules such as RTUs. SCADA systems used to run on DOS, VMS, and UNIX; in recent years all SCADA vendors have moved to NT and some also to Linux. The software generically is multitasking and is based upon a real-time database located in one or more servers. Servers are responsible for data acquisition and handling (e.g., polling controllers, alarm checking, calculations, logging, and archiving) on a set of parameters, typically those they are connected to.
Figure 4.19 Distributed SCADA system architectures.
However, it is possible to have dedicated servers for particular tasks, for instance a historian, a data logger, or an alarm handler. Figure 4.20 shows a SCADA software architecture that is generic for most of the SCADA products.

Figure 4.20 SCADA software architecture.

(a) Communications.
(i) Internal communication. Server-client and server-server communication is, in general, on a publish-subscribe and event-driven basis and uses a TCP/IP protocol or Microsoft NT protocol; for example, a client application subscribes to a parameter that is owned by a particular server application, and only changes to that parameter are then communicated to the client application.
(ii) Access to devices. The data servers poll the controllers at a user-defined polling rate. The polling rate may be different for different parameters. The controllers pass the requested parameters to the data servers. Time stamping of the process parameters is typically performed in the controllers and this time-stamp is taken over by the
data server. If the controller and communication protocol used support unsolicited data transfer, then the products will support this too. The products provide communication drivers for most of the common PLCs and widely used field-buses, for example, Modbus. Of the three field-buses that are recommended, both PROFIBUS and WorldFIP are supported but CANbus often is not. Some of the drivers are based on third party products and therefore have additional cost associated with them. A single data server
can support multiple communications protocols: it can generally support as many such protocols as it has slots for interface cards.
(b) Interfacing.
(i) Application interfaces. OLE for Process Control (OPC) client functionality is provided so that the SCADA product can access devices in an open and standard manner. Some SCADA controllers also provide an Open Database Connectivity (ODBC) interface to the data in the archive and logs (but not to the configuration database), and a library of APIs supporting C, C++, and Visual Basic (VB) to access data in the Real-Time Database (RTDB), logs, and archive. The API often does not provide access to the SCADA controller's internal features such as alarm handling, reporting, trending, and so on. The PC products provide support for the Microsoft standards such as Dynamic Data Exchange (DDE), which allows, for example, visualizing data dynamically in an EXCEL spreadsheet, Dynamic Link Library (DLL), and Object Linking and Embedding (OLE).
(ii) Database. The configuration data are stored in a database that is logically centralized but physically distributed and that is generally of a proprietary format. For performance reasons, the RTDB resides in the memory of the servers and is also of proprietary format. The archive and logging format is usually also proprietary for performance reasons, but some products do support logging to a Relational Database Management System (RDBMS) at a slower rate, either directly or via an ODBC interface.
(iii) Scalability. Scalability is understood as the possibility to extend the SCADA-based control system by adding more process variables, more specialized servers (e.g., for alarm handling), or more clients. The products achieve scalability by having multiple data servers connected to multiple controllers. Each data server has its own configuration database and RTDB and is responsible for the handling of a subset of the process variables (acquisition, alarm handling, archiving).
(iv) Redundancy. The SCADA products often have built-in software redundancy at a server level, which is normally transparent to the user. Many of the SCADA products also provide more complete redundancy solutions if required.
(v) Access control. Users are allocated to groups, which have defined read/write access privileges to the process parameters in the system and often also to specific product functionality.
(vi) Human-machine interface (HMI). Some of the SCADA products support multiple screens, which can contain combinations of synoptic diagrams and text. They also support the concept of a "generic" graphical object with links to process variables. These objects can be "dragged and dropped" from a library and included in a synoptic diagram. Most SCADA products decompose the process into "atomic" parameters (e.g., a power supply current, its maximum value, its ON/OFF status, etc.) to which a tag-name is associated. The tag-names used to link graphical objects to devices can be edited as required. These products include a library of standard graphical symbols, although many of them may not be applicable to a particular type of application. Standard windows editing facilities are provided: zooming, resizing, scrolling, and so on. Online configuration and customization of the HMI is possible for users with the appropriate privileges. Links can be created between display pages to navigate from one view to another.
(vii) Trending. Almost all SCADA products provide trending facilities, and one can summarize the common capabilities as follows: the parameters to be trended in a specific chart can be predefined or defined online; a chart may contain more than eight trended parameters or pens, and an unlimited number of charts can be displayed (restricted only by readability); real-time and historical trending are possible, although generally not in the same chart; historical trending is possible for any archived parameter, and zooming and scrolling functions are provided; parameter values at the cursor position can be displayed. The trending feature is either provided as a separate module or as a graphical object (ActiveX), which can then be embedded into a synoptic display. XY and other statistical analysis plots are generally not provided.
(viii) Alarm handling. Alarm handling is based on limit and status checking and is performed in the data servers. More complicated expressions (using arithmetic or logical expressions) can be developed by creating derived parameters on which status or limit checking is then performed. The alarms are handled centrally in a logical sense; the information exists only in one place and all users see the same status, and multiple alarm priority levels are supported. It is generally possible to group alarms and to handle these as an entity (typically filtering on a group or acknowledgement of all alarms in a group). Furthermore, it is possible to suppress alarms either individually or as a complete group. The filtering of alarms seen on the alarm page or when viewing the alarm log is also possible, at least on priority, time, and group. However, relationships between alarms cannot generally be defined in a straightforward manner. E-mails can be generated or predefined actions automatically executed in response to alarm conditions.
(ix) Logging/archiving. The terms logging and archiving are often used to describe the same facility. However, logging can be thought of as medium-term storage of data on disk, whereas archiving is long-term storage of data either on disk or on another permanent storage medium. Logging is typically performed on a cyclic basis; that is, once a certain file size, time period, or number of points is reached, the data is overwritten. Logging of data can be performed at a set frequency, or only initiated if the value changes or when a specific predefined event occurs. Logged data can be transferred to an archive once the log is full. The logged data is time-stamped and can be filtered when viewed by a user. The logging of user actions is, in general, performed together with either a user ID or station ID. There is often also a VCR-style facility to play back archived data.
(x) Report generation. Reports can be produced using SQL-type queries to the archive, RTDB, or logs. Although it is sometimes possible to embed EXCEL charts in the report, a "cut and paste" capability is, in general, not provided. Facilities exist to allow automatic generation, printing, and archiving of reports.
(xi) Automation. The majority of the SCADA controller products allow actions to be automatically triggered by events. A scripting language provided by the SCADA products allows these actions to be defined. In general, one can load a particular display, send an e-mail, run a user-defined application or script, and write to the RTDB. The concept of recipes is supported, whereby a particular system configuration can be saved to a file and then reloaded at a later date. Sequencing is also supported whereby, as the name indicates, it is possible to execute a more complex sequence of actions on one or more devices. Sequences may also react to external events. Some of the products do support an expert system, but none has the concept of a Finite State Machine (FSM).
(3) SCADA system components. A SCADA system normally consists of the following:
(1) A central host computer server or servers (sometimes called a SCADA Center, master station, or Master Terminal Unit (MTU));
(2) One or more field data interface devices, usually Remote Terminal Units (RTUs) or Programmable Logic Controllers (PLCs), which interface to field sensing devices and local control switchboxes and valve actuators;
(3) A communications system used to transfer data between field data interface devices and control units and the computers in the SCADA central host. The system can be radio, telephone, cable, satellite, and so on, or any combination of these;
(4) A collection of standard and/or customized software systems, sometimes called Human Machine Interface (HMI) software or Man Machine Interface (MMI) software, used to provide the SCADA central host and operator terminal application, support the communications system, and monitor and control remotely located field data interface devices.
(a) Master Terminal Unit (MTU). At the heart of the SCADA system is the master terminal unit (MTU). The master terminal unit initiates all communication, gathers data, stores information, sends information to other systems, and interfaces with operators. The major difference between the MTU and RTU is that the MTU initiates virtually all communications between the two. The MTU also communicates with other peripheral devices in the facility like monitors, printers, and other information systems.
The primary interface to the operator is the monitor or CRT that portrays a representation of valves, pumps, and so on. As incoming data changes, the screen is updated.
(b) Remote Terminal Unit (RTU). Remote terminal units gather information from their remote sites, from various input devices such as valves, pumps, alarms, meters, and so on. Essentially, data is either analog (real numbers), digital (ON/OFF), or pulse data (e.g., counting the revolutions of a meter). Many remote terminal units hold the information gathered in their memory and wait for a request from the MTU to transmit the data. Other, more sophisticated remote terminal units have microcomputers and programmable logic controllers (PLCs) that can perform direct control over a remote site without the direction of the MTU. The RTU central processing unit (CPU) receives a binary data stream framed according to the protocol that the communication equipment uses. Protocols can be open, like Transmission Control Protocol and Internet Protocol (TCP/IP), or proprietary. The RTU recognizes its information because it sees its node address embedded in the protocol. The data is then interpreted, and the CPU directs the appropriate action at the site.
(c) Communications equipment. Communication equipment is required for bidirectional communications between an RTU and the MTU. This can be done through public transmission media or atmospheric means. Both Figs 4.18 and 4.19 depict the topology for a SCADA system. Note that it is quite possible that systems employ more than one means to communicate to remote sites. SCADA systems are capable of communicating using a wide variety of media such as fiber optics, dial-up, dedicated voice grade telephone lines, or radio. Recently, some utilities have employed Integrated Services Digital Network (ISDN). Since the amount of information transmitted is relatively small (less than 50k), voice grade phone lines and radio work well.
The topology of a SCADA system is the way a network is physically structured, for example, a ring, bus, or star configuration. It is not possible to define a typical SCADA system topology because it can vary with each system. Some topologies provide redundant operation and others do not. A redundant topology is highly recommended for water treatment plants and other critical control functions. There are many different ways in which SCADA systems can be implemented, but before a SCADA or any other system is rolled out, you need to determine what function the system will perform.
The ways in which SCADA systems are connected can range from fiber optic cable to the use of satellite systems, and include the following: (1) twisted-pair metallic cable, (2) coaxial metallic cable, (3) fiber optic cable, (4) power line carrier, (5) satellites, (6) leased telephone lines, (7) very high frequency radio, (8) ultra high frequency radio, (9) microwave radio.
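To tie the pieces of this subsection together, the sketch below mimics the master-initiated polling cycle described above: the MTU requests a scan from each RTU by node address, applies simple limit checking to the returned analog values, and would normally time-stamp and archive each scan. Everything here (function names, addresses, tags, and limits) is hypothetical; a real system would sit on one of the communication media listed above and use a standard protocol stack such as DNP3 or IEC 60870-5-101 rather than this placeholder transport.

import time

RTU_ADDRESSES = [1, 2, 27]              # node addresses known to the master station database
HIGH_LIMITS = {"tank_level": 95.0}      # engineering limits used for simple alarm checking

def poll_rtu(address):
    """Placeholder for one request/response transaction over the SCADA link.
    A real driver would frame the request, send it to the addressed RTU, and decode the reply."""
    return {"tank_level": 42.0, "pump_running": True}   # fabricated scan data

def check_alarm(tag, value):
    limit = HIGH_LIMITS.get(tag)
    if limit is not None and value > limit:
        print(f"ALARM: {tag} = {value} exceeds high limit {limit}")

while True:                              # the MTU initiates virtually all communication
    for address in RTU_ADDRESSES:
        scan = poll_rtu(address)         # status and analog data returned by the RTU
        for tag, value in scan.items():
            if isinstance(value, (int, float)) and not isinstance(value, bool):
                check_alarm(tag, value)
        # each scan would normally be time-stamped and written to the historian here
    time.sleep(2.0)                      # user-defined polling rate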
4.1.3.2
SCADA Protocols
In a SCADA system, the RTU accepts commands to operate control points, sets analog output levels, and responds to requests. It provides status, analog, and accumulated data to the SCADA master station. The data representations sent are not identified in any fashion other than by unique addressing. The addressing is designed to correlate with the SCADA master station database. The RTU has no knowledge of which unique parameters it is monitoring in the real world. It simply monitors certain points and stores the information in a local addressing scheme. The SCADA master station is the part of the system that should "know" that the first status point of, for example, RTU number 27 is the status of a certain circuit breaker of a given substation. This arrangement represents the predominant SCADA systems and protocols in use in the utility industry today.
Each SCADA protocol consists of two message sets or pairs. One set forms the master protocol, containing the valid statements for master station initiation or response, and the other set is the RTU protocol, containing the valid statements an RTU can initiate and respond to. In most but not all cases, these pairs can be considered a poll or request for information or action and a confirming response. The SCADA protocol between MTU and RTU also forms a viable model for RTU-to-Intelligent Electronic Device (IED) communications. Currently, in industry, there are several different protocols in use. The most popular are the International Electrotechnical Commission (IEC) 60870-5 series, specifically IEC 60870-5-101 (commonly referred to as 101), and the Distributed Network Protocol version 3 (DNP3).
(1) IEC 60870-5-101. IEC 60870-5 specifies a number of frame formats and services that may be provided at different layers. IEC 60870-5 is based on a three-layer Enhanced Performance Architecture (EPA) reference model (see Table 4.7) for efficient implementation within RTUs, meters, relays, and other Intelligent Electronic Devices (IEDs). In addition, IEC 60870-5 defines basic application functionality for a user layer, which is situated between the Open System Interconnection (OSI) application layer and the application program.
Table 4.7 Enhanced Performance Architecture
Application Layer (OSI Layer 7)
Link Interface
Link Layer (OSI Layer 2)
  Logical Link Control (LLC) Sublayer
  Medium Access Control (MAC) Sublayer
Physical Interface
Physical Layer (OSI Layer 1)
This user layer adds interoperability for such functions as clock synchronization and file transfers. The following descriptions provide the basic scope of each of the five documents in the base IEC 60870-5 telecontrol transmission protocol specification set. Standard profiles are necessary for uniform application of the IEC 60870-5 standards. A profile is a set of parameters defining the way a device acts. Such profiles have been and are being created. The 101 profile is described in detail following the description of the applicable standards.
(a) IEC 60870-5-1 (1990-02) specifies the basic requirements for services to be provided by the data link and physical layers for telecontrol applications. In particular, it specifies standards on coding, formatting, and synchronizing data frames of variable and fixed lengths that meet specified data integrity requirements.
(b) IEC 60870-5-2 (1992-04) offers a selection of link transmission procedures using a control field and optional address field; the address field is optional because some point-to-point topologies do not require either source or destination addressing.
(c) IEC 60870-5-3 (1992-09) specifies rules for structuring application data units in transmission frames of telecontrol systems. These rules are presented as generic standards that may be used to support a great variety of present and future telecontrol applications. This section of IEC 60870-5 describes the general structure of application data and basic rules to specify application data units without specifying details about information fields and their contents.
(d) IEC 60870-5-4 (1993-08) provides rules for defining information data elements and a common set of information elements, particularly digital and analog process variables that are frequently used in telecontrol applications.
(e) IEC 60870-5-5 (1995-06) defines basic application functions that perform standard procedures for telecontrol systems, which are procedures that reside beyond Layer 7 (application layer) of the ISO reference model. These utilize standard services of the application layer. The specifications in IEC 60870-5-5 (1995-06) serve as basic standards for application profiles that are then created in detail for specific telecontrol tasks. Each application profile will use a specific selection of the defined functions. Any basic application functions not found in a standards document but necessary for defining certain telecontrol applications should be specified within the profile. Examples of such telecontrol functions include station initialization, cyclic data transmission, data acquisition by polling, clock synchronization, and station configuration.
The Standard 101 Profile provides structures that are also directly applicable to the interface between RTUs and IEDs. It contains all the elements of a protocol necessary to provide an unambiguous profile definition so vendors may create products that interoperate fully. At the physical layer, the Standard 101 Profile additionally allows the selection of International Telecommunication Union—Telecommunication Standardization Sector (ITU-T) standards that are compatible with Electronic Industries Association (EIA) standards RS-232 and RS-485, and also supports fiber optic interfaces.
The Standard 101 Profile specifies frame format FT 1.2, chosen from those offered in IEC 60870-5-1 (1990-02) to provide the required data integrity together with the maximum efficiency available for acceptable convenience of implementation. FT 1.2 is basically asynchronous and can be implemented using standard Universal Asynchronous Receiver/Transmitters (UARTs). Formats with both fixed and variable block length are permitted.
At the data link layer, the Standard 101 Profile specifies whether an unbalanced (includes multidrop) or balanced (includes point-to-point) transmission mode is used, together with which link procedures (and corresponding link function codes) are to be used. Also specified is an unambiguous number (address) for each link. The link transmission procedures selected from IEC 60870-5-2 (1992-04) specify that SEND/NO REPLY, SEND/CONFIRM, and REQUEST/RESPOND message transactions should be supported as necessary for the functionality of the end device.
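As an illustration of the FT 1.2 variable-length frame mentioned above, the sketch below assembles the start character, doubled length field, link user data (control field, link address, and ASDU octets), modulo-256 checksum, and end character. The user-data bytes shown are placeholders rather than a valid 101 message; the actual control field and ASDU encodings are defined by the standard.

def ft12_variable_frame(user_data: bytes) -> bytes:
    # Variable-length FT 1.2 frame: 0x68, length, length, 0x68, user data, checksum, 0x16
    length = len(user_data)              # counts the control field, link address, and ASDU octets
    checksum = sum(user_data) % 256      # arithmetic sum of the user data octets, modulo 256
    return bytes([0x68, length, length, 0x68]) + user_data + bytes([checksum, 0x16])

# Placeholder control field (0x53), one-octet link address (0x01), and dummy ASDU octets.
print(ft12_variable_frame(bytes([0x53, 0x01, 0x64, 0x01, 0x06, 0x00])).hex(" "))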
The Standard 101 Profile defines the necessary rules for devices that will operate in the unbalanced (multidrop) and balanced (point-to-point) transmission modes. The Standard 101 Profile defines appropriate Application Service Data Units (ASDUs) from a given general structure in IEC 60870-5-3 (1992-09). The sizes and the contents of individual information fields of ASDUs are specified according to the declaration rules for information elements defined in the document IEC 60870-5-4 (1993-08). Type information defines structure, type, and format for information object(s), and a set has been predefined for a number of information objects. The predefined information elements and type information do not preclude the addition by vendors of new information elements and types that follow the rules defined by IEC 60870-5-4 (1993-08) and the Standard 101 Profile. Information elements in the Standard 101 Profile have been defined for protection equipment, voltage regulators, and metered values to interface these devices as IEDs to the RTU.
The Standard 101 Profile utilizes the following basic application functions, defined in IEC 60870-5-5 (1995-06), within the user layer:
(i) Station initialization
(ii) Cyclic data transmission
(iii) General interrogation
(iv) Command transmission
(v) Data acquisition by polling
(vi) Acquisition of events
(vii) Parameter loading
(viii) File transfer
(ix) Clock synchronization
(x) Transmission of integrated totals
(xi) Test procedure.
Finally, the Standard 101 Profile defines parameters that support interoperability among multivendor devices within a system. These parameters are defined in 60870-5-102 and 60870-5-105. The Standard 101 Profile provides a checklist that vendors can use to describe their devices from a protocol perspective. These parameters include baud rate, common address of ASDU field length, link transmission procedure, basic application functions, and so on. Also contained in the checklist is the information that should be contained in the ASDU in both the control and monitor directions. This will assist the SCADA engineers in configuring their particular system.
The Standard 101 Profile application layer specifies the structure of the ASDU, as shown in Table 4.7. The fields indicated as being optional per system will be determined by a system level parameter shared by all devices in the system. For instance, the size of the common address of ASDU is determined by a fixed system parameter, in this case one or two octets (bytes). The Standard 101 Profile also defines two new terms not found in the IEC 60870-5-1 through 60870-5-5 base documents. The control direction refers to transmission from the controlling station to a controlled station. The monitor direction is the direction of transmission from a controlled station to the controlling station. Table 4.8 shows the structure of ASDUs as defined in the IEC 60870-5-101 specification.
(2) DNP3. Protocols define the rules by which devices talk with each other, and DNP3 is a protocol for transmission of data from point A to point B using serial communications. It has been used primarily by utilities such as electric companies, but it operates suitably in other areas. DNP3 was developed specifically for interdevice communication involving SCADA RTUs, and provides for both RTU-to-IED and master-to-RTU/IED communication. It is based on the three-layer enhanced performance architecture (EPA) model contained in the IEC 60870-5 standards, with some alterations to meet additional requirements of a variety of users in the electric utility industry.
Table 4.8 Structure of Application Service Data Units (ASDUs)

Application Service Data Unit
  Data Unit Identifier
    Type Identification
    Variable Structure Qualifier
    Cause of Transmission
    Common Address of ASDU
  Information Object 1
    Information Object Address
    Set of Information Elements
    Time Tag milliseconds
    IV | Res | Time Tag minute
  …
  Information Object n
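As a rough illustration of how Table 4.8 maps onto code, the following sketch declares a simplified ASDU in C. It is a teaching aid only: the field and type names are invented, the array sizes are arbitrary, and a conformant implementation must follow the exact octet-by-octet encodings and optional fields defined in IEC 60870-5-101.

#include <stdint.h>

/* Simplified, illustrative ASDU layout (names invented for the example). */
typedef struct {
    uint16_t milliseconds;        /* 0..59999                               */
    uint8_t  iv_res_minute;       /* IV | Res | minute, bit-packed          */
} time_tag_t;

typedef struct {
    uint32_t   object_address;    /* information object address             */
    uint8_t    elements[8];       /* set of information elements (type-dependent) */
    time_tag_t time_tag;          /* present only for time-tagged types     */
} information_object_t;

typedef struct {
    uint8_t  type_identification;    /* structure/format of the objects     */
    uint8_t  variable_structure;     /* number of objects / addressing mode */
    uint8_t  cause_of_transmission;  /* e.g., spontaneous, interrogated     */
    uint16_t common_address;         /* one or two octets, fixed per system */
    information_object_t objects[4]; /* Information Object 1..n             */
    uint8_t  object_count;
} asdu_t;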
DNP3 was developed with the following goals:
(a) High data integrity. The DNP3 data link layer uses a variation of the IEC 60870-5-1 (1990-02) frame format FT3. Both data link layer frames and application layer messages may be transmitted using confirmed service.
(b) Flexible structure. The DNP3 application layer is object-based, with a structure that allows a range of implementations while retaining interoperability.
(c) Multiple applications. DNP3 can be used in several modes, including polled only; polled report-by-exception; unsolicited report-by-exception (quiescent mode); and a mixture of modes 1–3. It can also be used with several physical layers and, as a layered protocol, is suitable for operation over local and some wide area networks.
(d) Minimized overhead. DNP3 was designed for existing wire-pair data links with operating bit rates as low as 1200 bit/s and attempts to use a minimum of overhead while retaining flexibility. Selection of a data reporting method, such as report-by-exception, further reduces overhead.
(e) Open standard. DNP3 is a nonproprietary, evolving standard controlled by a users group whose members include RTU, IED, and master station vendors, and representatives of the electric utility and system consulting community.
A typical organization may have a centralized operations center that monitors the state of all the equipment in each of its substations. In the operations center, a computer stores all of the incoming data and displays the system for the human operators. Substations contain many devices that need monitoring: circuit breakers (are they open or closed?), current sensors (how much current is flowing?), and voltage transducers (what is the line potential?). That only scratches the surface; a utility is interested in monitoring many parameters, too numerous to discuss here. The operations personnel often need to switch sections of the power grid into or out of service. One or more computers are situated in the substation to collect the data for transmission to the master station in the operations center. The substation computers are also called upon to energize or deenergize the breakers and voltage regulators. DNP3 provides the rules for substation computers and master station computers to communicate data and control commands.
DNP3 is a nonproprietary protocol that is available to anyone. Only a nominal fee is charged for documentation, but otherwise it is available worldwide with no restrictions. This means a utility
can purchase master station and substation computing equipment from any manufacturer and be assured that they will reliably talk to each other. Vendors compete based on their computer equipment's features, costs, and quality factors instead of who has the best protocol. Utilities are not stuck with one manufacturer after the initial sale.
The substation computer gathers data for transmission to the master, such as the following:
(a) Binary input data that is useful for monitoring two-state devices. For example, a circuit breaker is closed or tripped, or a pipeline pressure alarm shows normal or excessive.
(b) Analog input data that conveys voltages, currents, power, reservoir water levels, and temperatures.
(c) Count input data that reports kilowatt-hours of energy.
(d) Files that contain configuration data.
The master station issues control commands that take the form of the following: (1) close or trip a circuit breaker, raise or lower a gate, and open or close a valve; (2) analog output values to set a regulated pressure or set a desired voltage level. The computers also talk to each other to synchronize the time and date and to send historical or logged data, waveform data, and so on.
DNP3 was designed to optimize the transmission of data acquisition information and control commands from one computer to another. It is not a general-purpose protocol for transmitting hypertext, multimedia, or huge files.
Figure 4.21 shows the client-server relationship and gives a simplified view of the databases and software processes involved. The master or client is on the left side of Fig. 4.21, and the slave or server is on the right side. A series of square blocks at the top of the server depicts its databases and output devices. The various data types are conceptually organized as arrays. An array of binary input values represents states of physical or logical Boolean devices. Values in the analog input array represent input quantities that the server measured or computed. An array of counters represents count values, such as kilowatt-hours, that are ever increasing (until they reach a maximum and then roll over to zero and start counting again). Control outputs are organized into an array representing physical or logical on-off, raise-lower, and trip-close points. Last, the array of analog outputs represents physical or logical analog quantities such as those used for set points. The elements of the arrays are labeled 0 through N-1, where N is the number of blocks shown for the respective data type. In DNP3 terminology, the element numbers are called the point
[Figure: the master (client) appears on the left and the slave (server) on the right; each runs DNP3 user's code over DNP3 software, joined by the physical media that carries user requests and responses, and the server's binary input, analog input, counter, control output, and analog output points are drawn as zero-indexed arrays.]
Figure 4.21 DNP3 client–server relationship.
indexes. Indexes are zero-based in DNP3; that is, the lowest element is always identified as zero (some protocols use 1-based indexing). Note that the DNP3 client, or master, also has a similar database for the input data types (binary, analog, and counter). The master, or client, uses values in its database for the specific purposes of displaying system states, closed-loop control, alarm notification, billing, and so on. An objective of the client is to keep its database updated. It accomplishes this by sending requests to the server (slave) asking it to return the values in the server's database. This is termed polling. The server responds to the client's request by transmitting the contents of its database. Arrows are drawn at the bottom of Fig. 4.21 showing the direction of the requests (toward the server) and the direction of the responses (toward the client). Later we will discuss systems whereby the slaves transmit responses without being asked.
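The polling idea can be shown in a few lines of C. The sketch below is not a DNP3 implementation; the request/response exchange is collapsed into a simple array copy, and the names are invented. It only illustrates the client refreshing its own zero-indexed database from the server's values.

#include <stdint.h>
#include <stdio.h>

#define NUM_ANALOG_POINTS 6

/* Simulated server database; in a real system these values live in the
 * outstation and reach the client inside DNP3 response messages. */
static int32_t server_analog_db[NUM_ANALOG_POINTS] = {120, 118, 5, 0, 47, 330};

/* Client's local copy of the data, indexed 0 .. N-1 as in DNP3. */
static int32_t client_analog_db[NUM_ANALOG_POINTS];

/* One integrity poll: ask for every point and refresh the local database. */
static void poll_analog_inputs(void)
{
    for (int idx = 0; idx < NUM_ANALOG_POINTS; ++idx) {
        client_analog_db[idx] = server_analog_db[idx];  /* stands in for request/response */
    }
}

int main(void)
{
    poll_analog_inputs();
    for (int idx = 0; idx < NUM_ANALOG_POINTS; ++idx)
        printf("analog point %d = %d\n", idx, (int)client_analog_db[idx]);
    return 0;
}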
The client and the server shown in Fig. 4.21 each have two software layers. The top layer is the DNP3 user layer. In the client, it is the software that interacts with the databases and initiates the requests for the server's data. In the server, it is the software that fetches the requested data from the server's database for responding to client requests. It is interesting to note that if no physical separation of the client and server existed, eliminating DNP3 might be possible by connecting these two upper layers together. However, since physical or possibly logical separation of the client and server exists, DNP3 software is placed at a lower level. The DNP3 user's code uses the DNP3 software for transmission of requests or responses to the matching DNP3 user's code at the other end. Data types and software layers will be discussed later. However, it is important to first examine a few typical system architectures where DNP3 is used. Figure 4.22 shows common system
[Figure: four arrangements are depicted: a one-to-one link between a DNP3 client (master) and a DNP3 server (slave); a multiple-drop link in which one client talks to several servers; a hierarchical design in which a middle device acts as a server to the client above it and as a client to the servers below it; and data concentrator/protocol converter designs in which a device collects data from several servers (DNP3 or another protocol, labeled XYZ) and serves it to a master station client.]
Figure 4.22 Common DNP3 architectures in use today.
architectures in use today. At the top is a simple one-to-one system having one master station and one slave. The physical connection between the two is typically a dedicated or dial-up telephone line. The second type of system is known as a multiple-drop design. One master station communicates with multiple slave devices. Conversations are typically between the client and one server at a time. The master requests data from the first slave, then moves on to the next slave for its data, and continually interrogates each slave in round-robin order. The communication medium is a multidropped telephone line, fiber optic cable, or radio. Each slave can hear messages from the master and is only permitted to respond to messages addressed to itself. Slaves may or may not be able to hear each other. In some multiple-drop forms, communications are peer-to-peer. A station may operate as a client for gathering information or sending commands to the server in another station. Then, it may change roles to become a server to another station. The middle row in Fig. 4.22 shows a hierarchical-type system where the device in the middle is a server to the client at the left and is a client with respect to the server on the right. The middle device is often termed a submaster. Both lines at the bottom of Fig. 4.22 show data concentrator applications and protocol converters. A device may gather data from multiple servers on the right side of the figure and store this data in its database where it is retrievable by a master station client on the left side of the figure. This design is often seen in substations where the data concentrator collects information from local intelligent devices for transmission to the master station. In recent years, several vendors have used Transmission Control Protocol/Internet Protocol (TCP/IP) to transport DNP3 messages in lieu of the media discussed above. Link layer frames, which have not been discussed yet, are embedded into TCP/IP packets. This approach has enabled DNP3 to take advantage of Internet technology and permitted economical data collection and control between widely separated devices. Many communication circuits between the devices are susceptible to noise and signal distortion. The DNP3 software is layered to provide reliable data transmission and to effect an organized approach to the transmission of data and commands. Figure 4.23 shows the DNP3 architecture layers. The link layer has the responsibility of making the physical link reliable. It does this by providing error detection and duplicate frame detection. The link layer sends and receives packets, which
[Figure: the master (client) and slave (server) protocol stacks are shown side by side. Each consists of the DNP3 user's code and its point databases on top, then the DNP3 application layer, the pseudo transport layer, and the DNP3 link layer, with the physical media carrying user requests and responses between the two link layers.]
Figure 4.23 DNP3 layers.
in DNP3 terminology are called frames. Sometimes transmission of more than one frame is necessary to transport all of the information from one device to another. A DNP3 frame consists of a header and a data section. The header specifies the frame size, which DNP3 station should receive the frame, which DNP3 device sent the frame, and data link control information. The data section is commonly called the payload and contains the data passed down from the layers above. Every frame begins with two sync bytes that help the receivers determine where the frame begins. The length specifies the number of octets in the remainder of the frame, not including Cyclic Redundancy Check (CRC) octets. The link control octet is used between sending and receiving link layers to coordinate their activities. A destination address specifies which DNP3 device should process the data, and the source address identifies which DNP3
device sent the message. Having both destination and source addresses satisfies at least one requirement for peer-to-peer communications, because the receiver knows where to direct its responses. Every DNP3 device must have a unique address within the collection of devices sending and receiving messages to and from each other. Three destination addresses are reserved by DNP3 to denote an all-call message; that is, all DNP3 devices should process the frame. Thirteen addresses are reserved for special needs in the future. The data payload in the link frame contains a pair of CRC octets for every 16 data octets. This provides a high degree of assurance that communication errors can be detected. The maximum number of octets in the data payload is 250, not including CRC octets. (The longest link layer frame is 292 octets if all the CRC and header octets are counted.) One often hears the term “link layer confirmation” when DNP3 is discussed. A feature of DNP3’s link layer is the ability of the transmitter of the frame to request the receiver to confirm that the frame arrived. Using this feature is optional, and it is often not employed. It provides an extra degree of assurance of reliable communications. If a confirmation is not received, the link layer may retry the transmission. Some disadvantages are the extra time required for confirmation messages and waiting for multiple timeouts when retries are configured. It is the responsibility of the transport layer to break long messages into smaller frames sized for the link layer to transmit, or when receiving, to reassemble frames into the longer messages. In DNP3, the transport layer is incorporated into the application layer. The transport layer requires only a single octet within the message to do its work. Therefore, since the link layer can handle only 250 data octets, and one of those is used for the transport function, then each link layer frame can hold as many as 249 application layer octets. Application layer messages are broken into fragments. Fragment size is determined by the size of the receiving device’s buffer. It normally falls between 2048 and 4096 bytes. A message that is larger than one fragment requires multiple fragments. Fragmenting messages is the responsibility of the application layer. Note that an application layer fragment of size 2048 must be broken into nine frames by the transport layer, and a fragment size of 4096 needs 17 frames. Interestingly, it has been learned from experience that communications are sometimes more successful for systems operating in high noise environments if the fragment size is significantly reduced.
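The octet arithmetic above is easy to check in code. The short sketch below, a rough illustration rather than part of any DNP3 library, computes how many link layer frames a fragment needs (249 application octets per frame once the transport octet is taken out of the 250-octet payload) and the on-the-wire length of a maximum-size frame, counting the 10 header octets and one CRC pair per 16 data octets.

#include <stdio.h>

/* Figures quoted in the text: up to 250 data octets per link frame, one of
 * which is the transport octet, leaving 249 octets of application data; the
 * frame header is 10 octets and each 16 data octets carry a 2-octet CRC. */
enum {
    LINK_PAYLOAD_MAX     = 250,
    APP_OCTETS_PER_FRAME = 249,
    HEADER_OCTETS        = 10,   /* 2 sync + length + control + dest(2) + src(2) + CRC(2) */
    CRC_BLOCK            = 16
};

static unsigned frames_per_fragment(unsigned fragment_octets)
{
    return (fragment_octets + APP_OCTETS_PER_FRAME - 1) / APP_OCTETS_PER_FRAME;
}

static unsigned frame_length_on_wire(unsigned data_octets)
{
    unsigned crc_pairs = (data_octets + CRC_BLOCK - 1) / CRC_BLOCK;
    return HEADER_OCTETS + data_octets + 2 * crc_pairs;
}

int main(void)
{
    printf("2048-octet fragment -> %u frames\n", frames_per_fragment(2048));              /* 9   */
    printf("4096-octet fragment -> %u frames\n", frames_per_fragment(4096));              /* 17  */
    printf("full frame length   -> %u octets\n", frame_length_on_wire(LINK_PAYLOAD_MAX)); /* 292 */
    return 0;
}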
The application layer works together with the transport and link layers to enable reliable communications. It provides standardized functions and data formatting with which the user layer above can interact. Before functions, data objects, and variations can be discussed, the terms static, event, and class need to be covered. In DNP3, the term static is used with data and refers to the current value. Thus, static binary input data refers to the present on or off state of a bistate device. Static analog input data contains the value of an analog quantity at the instant it is transmitted. DNP3 allows a request for some or all of the static data stored in a slave device.
DNP3 events are associated with something significant happening. Examples are state changes, values exceeding some threshold, snapshots of varying data, transient data, and newly available information. An event occurs when a binary input changes from an "ON" to an "OFF" state or when an analog value changes by more than its configured deadband limit. DNP3 provides the ability to report events with and without time stamps so that the client can generate a time sequence report.
The user layer can direct DNP3 to request events. Usually, a client is updated more rapidly if it mostly polls for events from the server and only occasionally asks for static data as an integrity measure. The reason updates are faster is that the number of events generated between server interrogations is small and, therefore, less data must be returned to the client. DNP3 goes a step further by classifying events into three classes. When DNP3 was conceived, class 1 events were considered as having higher priority than class 2 events, and class 2 events were higher than class 3 events. While that scheme can still be configured, some DNP3 users have developed other strategies more favorable to their operation for assigning events into the classes. The user layer can request the application layer to poll for class 1, 2, or 3 events or any combination of them.
DNP3 has provisions for representing data in different formats. Examination of analog data formats is helpful to understand the flexibility of DNP3. Static, current value, analog data can be represented by the following variation numbers:
(a) A 32-bit integer value with flag
(b) A 16-bit integer value with flag
(c) A 32-bit integer value
(d) A 16-bit integer value
(e) A 32-bit floating point value with flag
(f) A 64-bit floating point value with flag.
The flag referred to is a single octet with bit fields indicating whether the source is online, the value contains a restart value, communications are lost with the source, the data is forced, and the value is over range. Not all DNP3 devices can transmit or interpret all six variations. DNP3 devices must be able to transmit the simplest variations so that any receiver can interpret the contents.
Event analog data can be represented by these variations:
(a) A 32-bit integer value with flag
(b) A 16-bit integer value with flag
(c) A 32-bit integer value with flag and event time
(d) A 16-bit integer value with flag and event time
(e) A 32-bit floating point value with flag
(f) A 64-bit floating point value with flag
(g) A 32-bit floating point value with flag and event time
(h) A 64-bit floating point value with flag and event time.
The flag has the same bit fields as the static variations. At first glance, a variation 1 or 2 analog event cannot be differentiated from a variation 1 or 2 static analog value. DNP3 solves this predicament by assigning object numbers. Static analog values are assigned as object 30, and event analog values are assigned as object 32. Static analog values, object 30, can be formatted in one of six variations, and event analog values, object 32, can be formatted in one of eight variations. When a DNP3 server transmits a message containing response data, the message identifies the object number and variation of every value within the message. Object and variation numbers are also assigned for counters, binary inputs, controls, and analog outputs. In fact, all valid data types and formats in DNP3 are identified by object and variation numbers. Defining the allowable objects and variations helps DNP3 ensure interoperability between devices. DNP3's basic documentation contains a library of valid objects and their variations.
The client's user layer formulates its request for data from the server by telling the application layer what function to perform, such as reading, and specifying which objects it wants from the server. The request can specify how many objects it wants, or it can specify specific objects or a range of objects from index number X through index number Y. The application layer then passes the request down through the transport layer to the link layer that, in turn, sends the message to the server. The link layer at the server checks the frames for errors and passes them up to the transport layer, where the complete message is reassembled and handed to the server's application layer. The application layer then tells the user layer which objects and variations were requested.
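The object and variation numbering can be illustrated with a small lookup table. The sketch below hard-codes the static analog (object 30) and analog event (object 32) variations described above, assuming the lettered lists map to variations 1 through 6 and 1 through 8 in order; the table text and function names are invented for the example, and a real device would consult the full object library in the DNP3 documentation.

#include <stdio.h>

/* Object numbers from the text: 30 = static analog input, 32 = analog event. */
struct variation_desc {
    int group;        /* DNP3 object (group) number */
    int variation;    /* variation within the group */
    const char *text;
};

static const struct variation_desc table[] = {
    {30, 1, "32-bit integer with flag"},
    {30, 2, "16-bit integer with flag"},
    {30, 3, "32-bit integer"},
    {30, 4, "16-bit integer"},
    {30, 5, "32-bit float with flag"},
    {30, 6, "64-bit float with flag"},
    {32, 1, "32-bit integer event with flag"},
    {32, 2, "16-bit integer event with flag"},
    {32, 3, "32-bit integer event with flag and time"},
    {32, 4, "16-bit integer event with flag and time"},
    {32, 5, "32-bit float event with flag"},
    {32, 6, "64-bit float event with flag"},
    {32, 7, "32-bit float event with flag and time"},
    {32, 8, "64-bit float event with flag and time"},
};

static const char *describe(int group, int variation)
{
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; ++i)
        if (table[i].group == group && table[i].variation == variation)
            return table[i].text;
    return "unknown object/variation";
}

int main(void)
{
    printf("30,1: %s\n", describe(30, 1));   /* static analog value          */
    printf("32,3: %s\n", describe(32, 3));   /* analog event with time stamp */
    return 0;
}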
Responses work similarly, in that the server's user layer fetches the desired data and presents it to the application layer, which formats the data into objects and variations. Data is then passed downward, across the communication channel, and upward to the client's application layer. Here the data objects are presented to the user layer in a form that is native to the client's database.
One area that has not been covered yet is transmission of unsolicited messages. This is a mode of operating where the server spontaneously transmits a response, possibly containing data, without having received a specific request for the data. Not all servers have this capability, but those that do must be configured to operate in this mode. This mode is useful when the system has many slaves and the master requires notification as soon as possible after a change occurs. Rather than waiting for a master station polling cycle to get around to it, the slave simply transmits the change. To configure a system for unsolicited messages, a few basics need to be considered. First, spontaneous transmissions should generally occur infrequently; otherwise, too much contention can occur, and controlling media access via master station polling would be better. The second basic issue is that the server should have some way of knowing whether it can transmit without stepping on someone else's message in progress. DNP3 leaves specification of such algorithms to the system implementer (one possible holdoff scheme is sketched at the end of this discussion).
One last area of discussion involves implementation levels. The DNP3 Users Group recognizes that supporting every feature of DNP3 is not necessary for every device. Some devices are limited in memory and speed and do not need specific features, while other devices must have the more advanced features to accomplish their task. DNP3 organizes complexity into three levels. At the lowest level, level 1, only very basic functions must be provided and all others are optional. Level 2 handles more functions, objects, and variations, and level 3 is even more sophisticated. As a result, only certain combinations of request formats and response formats are required.
DNP3 is a protocol that fits well into the data acquisition world. It transports data as generic values, has a rich set of functions, and was designed to work in a wide area communications network. Its standardized approach and public availability make DNP3 well suited to serve as a standard protocol for SCADA applications.
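As noted above, DNP3 leaves the collision-avoidance algorithm for unsolicited responses to the implementer. The sketch below shows one simple, hypothetical holdoff rule: transmit only after the channel has been quiet for a minimum settle time plus a small random backoff. It is not taken from the DNP3 specification, and the timing variables stand in for whatever the platform actually provides.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Simulated platform services; a real device would read a hardware timer and
 * record when the receiver last detected traffic on the shared channel. */
static uint32_t now_ms = 5000;
static uint32_t last_channel_activity_ms = 4100;

/* One possible holdoff rule (illustrative only): send an unsolicited response
 * only if the channel has been idle for a minimum settle time plus a random
 * backoff, so that several outstations are unlikely to key up at once. */
static bool may_send_unsolicited(uint32_t quiet_time_ms)
{
    uint32_t backoff_ms = (uint32_t)(rand() % 250);            /* 0-249 ms jitter */
    uint32_t idle_ms    = now_ms - last_channel_activity_ms;
    return idle_ms >= quiet_time_ms + backoff_ms;
}

int main(void)
{
    srand(42);                                                  /* fixed seed for the demo */
    printf("transmit now? %s\n", may_send_unsolicited(500) ? "yes" : "no");
    return 0;
}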
4.1.3.3 Functions and Administrations
(1) SCADA system functions. SCADA systems vary in their complexity from sophisticated networked systems with real-time
controls linking numerous remote sites equipped with RTUs to several central monitoring systems, to simple alarm feedback systems installed at one or more sites. Online systems can be equipped with control, signal, and alarm systems that notify the operator of abnormal conditions, and some may be tied into feedback loops that can provide real-time control, such as automatically adjusting chemical usage at a treatment plant or activating a back-up pump station. In selecting SCADA systems, important factors to consider include the following.
(a) Number of points, analog and digital. The SCADA system links all RTUs and associated I/O and connection points under a systematic design that meets all of the monitoring requirements and functionality. Therefore, the level of sophistication selected for a particular application is a function of the level of information needed by the operator. Key factors that influence the selection of a SCADA system include the numbers of data points (I/Os and connection points) to be monitored and the method and quantity of data to be archived. For example, one of the most important steps in selecting specific devices and components for a SCADA system is identifying the specific equipment and/or processes that the system will monitor. In building a customized SCADA system, many utilities begin by first preparing a list of assets associated with the particular utility (e.g., pump stations, storage tanks, reservoirs, treatment processes, or any other asset to be monitored), and then identifying the operational parameters, water quality parameters, and types of monitoring that are key to that specific operation. Each of the assets would have at least one input into the SCADA system. This input would allow the pertinent information from that asset to be sent to the CPU, and would identify it as coming from that particular asset. Once the information from a particular asset is input into the CPU, the input information would be processed against internal logic to determine whether outputs were required. Potential outputs would be based on the types of responses that the user chooses to program based on the inputs. For example, there may be output connections that control pumps, lights, alarms, locks, and so on. As described above, a specific input (such as a sensor reporting low chlorine concentration) may be designed to generate a specific output (such as turning on a pump), as illustrated in the sketch below.
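The sketch below gives a minimal, hypothetical version of that input-to-output logic in C: a chlorine residual reading is compared against a configured low limit, and the result drives a dosing pump output and an alarm. The threshold, tag names, and I/O structure are all invented for the illustration; a real system would run equivalent logic in the PLC or the SCADA host.

#include <stdbool.h>
#include <stdio.h>

/* Invented tag names and limit for the illustration. */
#define CL2_LOW_LIMIT_MGL 0.2   /* low chlorine residual limit, mg/L */

struct outputs {
    bool dosing_pump_run;       /* digital output: start chemical feed pump */
    bool low_cl2_alarm;         /* digital output: operator alarm           */
};

/* Process one scan of the input against the internal logic. */
static struct outputs evaluate(double chlorine_mgl)
{
    struct outputs out;
    out.low_cl2_alarm   = (chlorine_mgl < CL2_LOW_LIMIT_MGL);
    out.dosing_pump_run = out.low_cl2_alarm;   /* same condition drives the pump here */
    return out;
}

int main(void)
{
    struct outputs out = evaluate(0.15);       /* simulated analog input, mg/L */
    printf("pump run: %d, low-Cl2 alarm: %d\n", out.dosing_pump_run, out.low_cl2_alarm);
    return 0;
}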
(b) Data transmission and communication network. Data transmission can be provided via two primary channels: hardwired and wireless. Hardwired systems are physically connected to each other through wires or fiber optic cables. Wireless systems are not physically connected. In these systems, the transmissions take place from the originator to the receiver over radio waves, satellite, or microwave frequencies. Both hardwired and wireless systems can transmit voice and data communications. However, security experts recommend not mixing the two traffic types on one line or channel, because current technology prioritizes voice information over data transmission, thus interrupting the data flow. Thus, data transfer will not occur until the lines are clear of voice communications. Three important considerations when choosing the appropriate type of transmission system are the speed of transmission, the amount of data that can be transferred, and the cost. For example, the transmission of photographic images of a plant (e.g., surveillance camera data) may require greater data transmission and storage requirements than the transmission of text data (e.g., the ON/OFF status of a pump). Data transmission and speed are related to the available bandwidth for the transmission. This, in turn, is related to the size of the conduit; the larger the conduit, the more bandwidth is available, and thus the faster the transmission or speed. However, the higher the speed is, the higher the costs are. It should be noted that costs would include both upfront capital improvement costs and recurring monthly fees. Another important consideration in choosing a communications network is its security. Specific security concerns regarding data transmission methods include the following: ensuring that the communication method is available (functioning and free to transmit data), ensuring that the communication is not altered during transmission (integrity), and ensuring that unauthorized parties do not intercept the communication (confidentiality). Specific security aspects of different types of communication technologies, as well as discussions of general security aspects of hardwired versus wireless communications methods, would be given in vendors' Product Guides. As described above, there is a wide range of specific SCADA options available to a system designer once the system's needs have been defined. SCADA systems can be set up to communicate using a range of media, including
telephone lines, radio, Internet, cellular, or satellite interfaces. Whether using hardwired or wireless technology, current SCADA systems can report alarms using prespecified custom voice, pager, fax, or even email communications. If a system uses a portable design, SCADA can use a cellular phone for communication. In addition, an operator can use a laptop computer to access the SCADA system by dial-up means or via the Internet, both of which can be password protected. It should be noted that the topography of the area may be a factor for wireless systems. Any wireless system using a frequency of 900 MHz or more requires a direct line of sight between antennas for data transmission. Hilly terrain requires high antennas that can transmit the signal above interfering topography. This terrain may also require repeater stations to retransmit the signal and keep it strong. These additional requirements add to the capital cost of the system.
(c) Control features. The control function of a SCADA system is achieved through the use of PLCs. In a PLC-based SCADA system, a full set of standard ladder logic instructions is built into the system, along with counters, timers, and even analog capabilities, for performing tasks such as turning pumps and screens ON/OFF, opening and closing valves, and so on. Certain advanced SCADA systems are capable of communication with PLCs that can implement control programs created using both ladder logic programs and more advanced C programs. Building both programs into one system allows information to be shared between components, increasing dynamic control of the system. This can be critical depending on system complexity and the needs of a specific application. For example, SCADA may be designed to control a variable speed pumping application that is to operate at its most efficient point on the pump curve at any given flow in the specified range. This operation requires each pump curve to be entered with the system curve. The program superimposes the system curve on the pump curve to determine the most efficient means of operating with one, two, or even three pumps depending on the size and number of pumps in the system. This part of the program is performed using the C program, while the start/stop and speed control is performed in the ladder logic program. Both types of control programs are displayed on the SCADA software.
(d) Software and security issues. Selecting the appropriate SCADA system software is very important. Security experts
recommend that the software program be capable of performing every task needed to operate and maintain the system. The SCADA software typically runs on IBM PC or compatible computers under Microsoft Windows 95, 98, NT, or 2000 operating systems. Software packages compatible with other operating systems such as UNIX are also available. However, software for these systems would be proprietary and would be more expensive than PC-based systems. These packages would typically be used for SCADA systems at larger facilities. The SCADA software typically provides a graphical user interface (GUI) to program and display all pertinent operational parameters, including, but not limited to, system configuration, polling ("polling" is reading data from, and communicating with, two or more sites individually), datalogging, and alarming. The PLC software includes the ability to build control programs (e.g., using a ladder logic editor or a C program editor). A screen editor is usually included to create dynamic graphical displays capable of showing the value of selected process parameters in real time.
Vendors normally include various security options in their products to deter possible security problems. These include the following:
(i) The ability to set up a password system. When the computer password system is properly used, the system is much more difficult to exploit. This system is similar to the system used for a regular desktop computer, and allows the system administrator to set up private passwords for each system user to allow certain predetermined activities or functions. The password system may also allow the administrator to require a regular renewal or change of passwords.
(ii) The ability to trace log-ins. This will allow a system administrator or other responsible party to determine who has logged on and used the system, and when they have used it.
(iii) Secure data transmission technology. Secure options are available to help ensure that data transmissions are not blocked or intercepted. These include options for radio transmissions, as well as options for the local telephone network. The radio communications are transmitted on a specific frequency and use a standard referred to as "frequency hopping spread spectrum" (FHSS).
In FHSS, the devices communicate over a sequence of frequencies that is known only to the devices actually communicating. The frequency on which the devices are communicating constantly changes based on a pseudorandom pattern, which is known only by the sender and the receiver. Licensed frequency radios are also widely used. Many utilities use leased lines or frame relay connectivity from the local telephone company to connect. This method provides a secure private circuit between sites.
(iv) Automated alarm tracking and monitoring options. Some SCADA systems offer additional alarm pager options that can send an alphanumeric page to the operator on duty when system or operations alarms are triggered. For example, this type of SCADA software could be configured to send a page when operating parameters are not within a certain prespecified range.
(e) General security issues. Because SCADA systems can provide automatic control of a system, system security is an important consideration. The primary security vulnerabilities for SCADA systems are the communication links, the computer software, and the power sources for the various system components. Discussions of security considerations for communications and SCADA software were provided above. Protection of power sources for individual system components will depend on the power sources used in the system. However, security can be improved by ensuring that there are backup power systems for emergency situations.
(2) SCADA system administration. The outline below identifies some of the key SCADA and Automation system administration issues.
(a) Define the boundary where the SCADA or automation system stops. The boundary may be clear or, in increasingly more cases, not so clear. Many SCADA systems are infringing on traditional IT turf by using the IT networks and servers as well as traditional IT technologies such as PCs, Windows, and NT, and even PC databases. As a result, it may be hard to say just where the IT organization's responsibilities stop and the operations organization's start concerning administration and maintenance. Some legacy SCADA and Automation systems are not even connected to IT resources such as the IT network. However, in an effort to push SCADA and Automation information out to business and engineering office users (and not just operations)
more and more SCADA and Automation systems are using the often in-place IT resources to accomplish this task.
(b) Have a system back-up and restore plan. It is crucial to make sure your backup includes a reliable restore. When the time comes to use the backup, you should know it is going to work by testing the restore before you need to use it. Also, know what files and databases need to be included in your backup. Do not forget about your PLC relay ladder logic (RLL) and similar critical support files and documents. Often, SCADA system files and databases change on a continual basis. Make a schedule to run the backup and stick to it. With the advent of online backup systems, it is now feasible to do a backup on a live system in many cases. You probably also should keep an offsite backup of your critical files in the event of fire, flood, or theft.
(c) Develop a PC/server file management system. Where is that RLL file for the safety system? How about that Excel spreadsheet with the PLC's I/O listing? Moreover, how do you find the operations user's manual for the pump control system? You had better know where to find those critical files. In addition, you may need to come up with those files when things are in a crisis management mode. The PLC has gone brain dead, lost the RLL program, and the plant is shut down because the PLC is the plant's safety system. Now just where is that RLL file? In addition, is the documentation for a black start of the plant needed? Again, you had better be prepared because it most definitely can happen. Your file management system should specify file-naming conventions that help identify the file(s) should they be misplaced. In addition, put it in writing and identify several layers of responsibility in case you are not around when the files are needed.
(d) Develop a management of change system. Unmanaged change can create havoc with the users and administrators of the SCADA system. Consider employing a Management of Change (MOC) system that has key players sign off and approve changes to the SCADA system. This is especially important with major upgrades, hardware changes, or anything that risks having the system unavailable for an extended time, or changes that could have a detrimental impact on the system. The review process should be efficient and not so restrictive, time consuming, or difficult to employ that people will want to circumvent the MOC process. The key players who sign off on the MOC requests should include your System Administrator.
(e) Select products that support centralized and/or remote administration. This will allow your Administrator to cover more ground, faster. Remote administration may mean the ability to troubleshoot, diagnose, and remediate problems without actually making a trip to the site where the system is located. It will also help if your Administrator is able to resolve after-hours problems without driving to the site or office in the middle of the night. Moreover, make the system easy to administer by connecting it to your SCADA network.
4.1.4 Proportional-Integral-Derivative (PID) Controllers

A PID controller is a standard feedback loop component in industrial control applications. It measures an "output" of a process and controls an "input," with the goal of maintaining the output at a target value, which is called the "set point." An example of a PID application is the control of a process temperature, although it can be used to control any measurable variable that can be affected by manipulating some other process variable. For example, it can be used to control pressure, flow rate, chemical composition, force, speed, or a number of other variables. Automobile cruise control is an example of a PID application area outside of the process industries.
4.1.4.1 PID Control Mechanism
PID can be described as a set of rules with which precise regulation of a closed-loop feedback control system is obtained. Closed-loop feedback control is a method in which a real-time measurement of the process being controlled is constantly fed back to the controlling device, known as the controller, to ensure that the value that is desired is, in fact, being realized. The mission of the controlling device is to make the measured value, usually known as the "process variable," equal to the desired value, usually known as the "set point." The classic way of accomplishing this task is with the three-mode control algorithm: Proportional + Integral + Derivative. Figure 4.24 is a schematic of a feedback control loop with a PID controller. The most important of these, Proportional Control, determines the magnitude of the difference between the "set point" and the "process variable," which is defined as the "error." Then it applies appropriate proportional changes to the "controller output" to eliminate the "error." Many control systems, in
[Figure: the set point is compared with the feedback (process) variable to form the error; the error feeds parallel proportional, integral, and derivative algorithms inside the controller, their outputs are summed to form the controller output, the controller output drives the plant, and the plant's process variable is fed back to the comparison.]
Figure 4.24 A schematic of a feedback control loop with a PID controller.
fact, will work quite well with only Proportional Control. Integral Control examines the offset between the "set point" and the "process variable" over time and corrects it when and if necessary. Derivative Control monitors the rate of change of the "process variable" and consequently makes changes to the "controller output" to accommodate unusual changes. Each of the three control functions is governed by a user-defined parameter. These parameters vary immensely from one control system to another, and, as such, need to be adjusted to optimize the precision of control. The process of determining the values of these parameters is known as PID Tuning. PID Tuning, although considered "black magic" by many, is in fact a well-defined technical process. Several different methods of PID Tuning are available, any of which can be applied to most systems. Certain PID Tuning methods require more equipment than others, but usually give more accurate results with less effort.
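A minimal discrete-time version of the three-mode calculation is sketched below. This is the generic textbook form, not the specific firmware described in the next subsection: the gains, sample time, and crude plant model are placeholders, and a practical controller would add features discussed later, such as anti-reset windup and derivative filtering.

#include <stdio.h>

/* Generic positional PID: output = Kp*e + Ki*integral(e) + Kd*de/dt.
 * Gains and sample time below are placeholders for the example. */
typedef struct {
    double kp, ki, kd;     /* proportional, integral, derivative gains */
    double dt;             /* sample period, seconds                   */
    double integral;       /* accumulated error                        */
    double prev_error;     /* error from the previous sample           */
} pid_ctrl_t;

static double pid_update(pid_ctrl_t *c, double setpoint, double process_variable)
{
    double error      = setpoint - process_variable;
    double derivative = (error - c->prev_error) / c->dt;

    c->integral  += error * c->dt;
    c->prev_error = error;

    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

int main(void)
{
    pid_ctrl_t ctl = {2.0, 0.5, 0.1, 0.1, 0.0, 0.0};   /* illustrative tuning  */
    double pv = 20.0, sp = 25.0;                       /* e.g., temperatures   */

    for (int step = 0; step < 5; ++step) {
        double out = pid_update(&ctl, sp, pv);
        pv += 0.05 * out;                              /* crude first-order plant model */
        printf("step %d: output %.3f, pv %.3f\n", step, out, pv);
    }
    return 0;
}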
4.1.4.2 PID Controller Implementation
There are several elements within a feedback system as displayed in Fig. 4.24; for discussion purposes, a home heating temperature control system is used as the model in the descriptions below. There is no specific way a PID should be implemented in firmware; the methods described here only touch on a few of the many possibilities. The PID routine is configured in a manner that makes it modular. It is intended to be plugged into an existing piece of firmware, where the PID routine is passed an 8-bit, 16-bit, or 32-bit error value equal to the Desired Plant Response minus the Measured Plant Response. Therefore, the actual error value is calculated outside of the PID routine. If necessary,
the code could be easily modified to do this calculation within the PID routine. The PID can be configured to receive the error in one of two ways: either as a percentage with a range of 0–100% (8-bit), or as a range of 0–4000 (16-bit) or even higher (32-bit). This option is configured by a #define statement at the top of the PID source code with the PID's variable declarations, if the programming language used is C or C++. The gains for proportional, integral, and derivative all have a range of 0–15. For resolution purposes, the gains are scaled by a factor of 16, with an 8-bit maximum of 255. A general flow showing how the PID routine would be implemented in the main application code is presented in Fig. 4.25. There are two methods considered for handling the signed numbers. The first method is to use signed mathematical routines to handle all of the PID calculations. The second is to use unsigned mathematical routines and maintain a sign bit in a status register. If the latter method is implemented, there are five variables that require a sign bit to be maintained: (1) error, (2) a_error, (3) p_error, (4) d_error, (5) pid_error. All of these sign bits are maintained in a status register, defined here as the pid_stat1 register. Flowcharts for the PID main routine and the PID Interrupt Service Routine functions are shown in Figs. 4.26 and 4.27, respectively. The PID
[Figure: the main application calls a PID initialize routine at start-up, computes the error, calls the PID main routine, receives the result in pid_out, and sends the PID result to the plant; the Interrupt Service Routine contains the supporting PID code.]
Figure 4.25 PID firmware implementation.
[Figure: the error and its sign bit in pid_stat1 are passed in from the main application code; if the error is zero, no PID action is required and control returns to the application. Otherwise the proportional term (proportional gain × error), integral term (integral gain × a_error), and derivative term (derivative gain × d_error) are calculated, summed, and scaled down, and the final PID value is placed in pid_out, with its sign in pid_stat1, for return to the main application code.]
Figure 4.26 Main PID routine flow chart.
main routine is intended to be called from the main application code that updates the error variable, as well as the pid_stat1 error sign bit. Once in the PID main routine, the PID value will be calculated and put into the pid_out variable, with its sign bit in pid_stat1. The value in pid_out is converted by the application code to the correct value so that it can be applied to the plant. The PID Interrupt Service Routine is configured for a high priority interrupt. The instructions within this Interrupt Service Routine can be placed into an existing Interrupt Service Routine, or kept as is and plugged into the application code. The proportional term is the simplest. The error is multiplied by the proportional gain: (error) × (proportional gain). The result is stored in the variable prop. This value will be used later in the code to calculate the overall value needed to go to the plant. To obtain the integral term, the accumulated error must be retrieved. The accumulated error, a_error, is the sum of past errors. For this reason, the integral term is said to look at the system's history for correction. The derivative term is calculated in a similar fashion to the integral term. Considering that the derivative term is
[Figure: on a high priority interrupt the routine first checks that a Timer1 interrupt has occurred; if the error is nonzero it saves the mathematical variables, adds the error into a_error (restoring the a_error limit if it is exceeded), decrements the derivative counter and, when it reaches zero, computes d_error (setting the d_error zero bit when appropriate), then restores the variables and returns. The context save/restore steps are optional, depending on how the Interrupt Service Routine is configured.]
Figure 4.27 PID interrupt service routine flow chart.
based on the rate at which the system is changing, the derivative routine calculates d_error. This is the difference between the current error and the previous error. The rate at which this calculation takes place is dependent upon the Timer1 overflow. The derivative term can be extremely aggressive when it is acting on the error of the system. An alternative to this is to calculate the derivative term from the output of the system and not the error. The error will be used here. To keep the derivative term from being too aggressive, a derivative counter variable has been installed. This variable allows d_error to be calculated once every x Timer1 overflows (unlike the accumulated error, which is calculated every Timer1 overflow). To get the derivative term, the previous error is subtracted from the current error (d_error = error − p_error). The difference is then multiplied by the derivative gain, and this result is placed in the variable deriv, which is added to the proportional and integral terms.
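A compact sketch of the timer-driven bookkeeping just described is given below. It follows the variable names used in the text (error, a_error, p_error, d_error, and a derivative counter) but is otherwise an invented, simplified rendering: the original firmware uses unsigned math with sign bits kept in pid_stat1, whereas this sketch uses plain signed integers for clarity, and the limit and interval constants are arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Illustrative constants; real firmware would size these to its data width. */
#define A_ERROR_LIMIT   30000   /* clamp for the accumulated error           */
#define DERIV_INTERVAL  8       /* compute d_error once every 8 Timer1 ticks */

static int32_t error;           /* updated by the main application code      */
static int32_t a_error;         /* accumulated (integral) error              */
static int32_t p_error;         /* previous error, for the derivative        */
static int32_t d_error;         /* change in error over DERIV_INTERVAL ticks */
static uint8_t deriv_count = DERIV_INTERVAL;

/* Called on every Timer1 overflow (the high priority interrupt in Fig. 4.27). */
void pid_timer1_isr(void)
{
    if (error == 0)
        return;                              /* nothing to accumulate this tick */

    /* Integral bookkeeping: accumulate and clamp to the a_error limit. */
    a_error += error;
    if (a_error >  A_ERROR_LIMIT) a_error =  A_ERROR_LIMIT;
    if (a_error < -A_ERROR_LIMIT) a_error = -A_ERROR_LIMIT;

    /* Derivative bookkeeping: only once every DERIV_INTERVAL overflows,
     * so the derivative term does not react too aggressively. */
    if (--deriv_count == 0) {
        d_error     = error - p_error;
        p_error     = error;
        deriv_count = DERIV_INTERVAL;
    }
}

int main(void)
{
    error = 50;                              /* simulate a constant error       */
    for (int tick = 0; tick < 16; ++tick)
        pid_timer1_isr();
    printf("a_error=%ld d_error=%ld\n", (long)a_error, (long)d_error);
    return 0;
}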
4.1.4.3 PID Controller Tuning Rules
There are several rules, sometimes called methods, for the proper tuning of PID controls. Most of the tuning rules or methods require a considerable amount of trial and error as well as a technician endowed with a lot of patience. The following gives three PID tuning methods popularly used in industrial control.
(1) Ziegler–Nichols tuning rules. The two most common are the Process Reaction Curve technique and the Closed-Loop Cycling method. These two methods were first formally described in an article by J.G. Ziegler and N.B. Nichols in 1942. Figure 4.28 describes the Process Reaction Curve technique. It should be understood that "optimal tuning," as defined by J.G. Ziegler and N.B. Nichols, is achieved when the system responds to a perturbation with a 4:1 decay ratio. That is to say that, for example, given an initial perturbation of +40°, the controller's subsequent response would yield an undershoot of −10° followed by an overshoot of +2.5°. This definition of "optimal tuning" may not suit every application, so the trade-offs must be understood.
In the Closed-Loop Cycling method, estimate the ultimate gain Ku and ultimate period Tu while only the proportional part of the controller is acting. Follow these steps:
(a) Monitor the temperature response curve in time using an oscilloscope.
[Figure: the output level step P and the resulting temperature response are plotted against time; a line drawn through the point of inflection of the temperature curve defines the reaction rate R (its slope) and the lag L (its time-axis intercept).]
Figure 4.28 PID control process reaction curve.
(b) Start with the proportional gain low enough to prevent any oscillation in the response (temperature). Record the offset value for each gain setting.
(c) Increase the gain in steps of 2× the previous gain. After each increase, if there is no oscillation, change the set point slightly to trigger any oscillation.
(d) Adjust the gain so that the oscillation is sustained, that is, continues at the same amplitude. If the oscillation is increasing, decrease the gain slightly. If it is decreasing, increase the gain slightly.
(e) Make note of the gain that causes sustained oscillations and the period of oscillation. These are the ultimate gain Ku and the ultimate period Tu, respectively. (A sketch showing one way to turn Ku and Tu into starting controller settings is given at the end of this subsection.)
(2) Tuning an ON–OFF control system. The first step is the tuning of the proportional band. If the controller contains Integral and Derivative adjustments, tune them to zero before adjusting the proportional band. The proportional band adjustment selects the response speed (sometimes called gain) a proportional controller requires to achieve stability in the system. The proportional band must be wider in degrees than the normal oscillations of the system but not so wide as to dampen the system response. Start out with the narrowest setting for the proportional band. If there are oscillations, slowly increase the proportional band in small increments, allowing the system to settle out for a few minutes after each step adjustment, until the point at which the offset droop begins to increase. At this point, the process variable should be in a state of equilibrium at some point under the set point. The next step is to tune the Integral or reset action. If the controller has a manual reset adjustment, simply adjust the reset until the process droop is eliminated. The problem with manual reset adjustments is that once the set point is changed to a value other than the original, the droop will probably return and the reset will once again need to be adjusted. If the control has automatic reset, the reset adjustment adjusts the auto reset time constant (repeats per minute). The initial setting should be at the lowest number of repeats per minute to allow for equilibrium in the system. In other words, adjust the auto reset in small steps, allowing the system to settle after each step, until minor oscillations begin to occur. Then back off on the adjustment to the point at which the oscillations stop and the equilibrium is reestablished. The system will then automatically adjust for offset errors (droop). The last control parameter to adjust is the Rate or Derivative function. It is crucial to remember to always adjust this function last. This is because if the rate
adjustment is turned on before the reset adjustment is made, the reset will be pulled out of adjustment when the rate adjustment is turned on. The function of the rate adjustment is to reduce any overshoot as much as possible. The rate adjustment is a time-based adjustment measured in minutes, which is tuned to work with the overall system response time. The initial rate adjustment should be the minimum number of minutes possible. Increase the adjustment in very small increments. After each adjustment, let it settle out for a few minutes. Then increase the set point a moderate amount. Watch the control action as the set point is reached. If an overshoot occurs, increase the rate adjustment another small amount and repeat the procedure until the overshoot is eliminated. Sometimes the system will become "sluggish" and never reach set point at all. If this occurs, decrease the rate adjustment until the process reaches set point. There may still be a slight overshoot, but this is a trade-off situation. Once the rate adjustment is complete, the tuning procedure is finished.
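As promised above, the sketch below turns the ultimate gain Ku and ultimate period Tu obtained from the Closed-Loop Cycling method into starting values for a three-mode controller using the classic Ziegler–Nichols relations (Kc = 0.6 Ku, integral time Tu/2, derivative time Tu/8). These are textbook starting points only; the settings for a given loop must still be refined on the actual process.

#include <stdio.h>

/* Classic Ziegler-Nichols closed-loop (ultimate cycling) settings for a
 * three-mode controller. Ku and Tu come from the procedure in the text. */
struct pid_settings {
    double kc;   /* proportional gain        */
    double ti;   /* integral time, minutes   */
    double td;   /* derivative time, minutes */
};

static struct pid_settings ziegler_nichols_pid(double ku, double tu_minutes)
{
    struct pid_settings s;
    s.kc = 0.6 * ku;
    s.ti = tu_minutes / 2.0;
    s.td = tu_minutes / 8.0;
    return s;
}

int main(void)
{
    /* Example: sustained oscillation observed at gain 8 with a 4-minute period. */
    struct pid_settings s = ziegler_nichols_pid(8.0, 4.0);
    printf("Kc = %.2f, Ti = %.2f min, Td = %.2f min\n", s.kc, s.ti, s.td);
    return 0;
}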
4.1.4.4 PID Control Technical Specifications
PID (Proportional-Integral-Derivative) control actions allow the process controller to maintain the set point accurately by adjusting the control outputs. There are no industry-wide standards for PID controllers. However, robust and optimal control of process loops requires PID controllers to have certain abilities and features described here.
(1) PID controller specifications.
(a) Control units. The PID controller should work in dimensionless (normalized) units; unit conversions should be done outside the PID algorithm. Engineering units may be available in memory locations within the PID "block" for display or informational purposes. For example, the controller could work on a 0–100% basis or on a 0–1 basis for inputs, outputs, and set points. This makes the controller easier to work with for feed forward, cascade, limits, summers, multipliers, and multivariable situations.
(b) Algorithm type. The PID controller algorithm should produce a positional output (not an increment from the last position), and may be of the series or ideal type:
(i) Laplace representation of the series (interacting) type: m(s)/e(s) = Kc (1 + 1/(Is)) (1 + Ds), where m is the position of the controller output, e is the deviation of the controlled variable from set point, s is
the Laplace operator, Kc is the proportional gain of the controller, I is its integral time, and D is its derivative time.
(ii) Laplace representation of the ideal (noninteracting) type: m(s)/e(s) = Kc (1 + 1/(Is) + Ds). (Filters and other details have been omitted from the above transforms for clarity.)
(c) Sampling and sample time. The PID controller input signals should be sampled at a frequency of at least 10 Hz, reporting the average value of the signal over the previous sample interval. Using the average value for each sample prevents aliasing.
(d) Proportional action or gain. The units of proportional action may be either percent proportional band P or proportional gain Kc, where Kc = 100/P and P = 100/Kc. The proportional band setting should range from 1 to 10,000. If gain is used, the gain range should be from 0.01 to 100. The proportional action should work on the deviation (SP − PV) or the controlled variable PV, depending on the user selection. The user should also be able to adjust the amount of proportional action applied to the set point SP. The proportioning band is the region around the set point where the controller is actually controlling the process; the output is at some level other than 100% or 0%. Proportioning bands are normally expressed in one of three ways: as a percentage of full scale; as a number of degrees (or other process variable units); or as gain, which equals 100%/proportioning band % (e.g., P = 5%; gain = 20). If the proportioning band is too narrow, an oscillation around the set point will result. If the proportioning band is too wide, the control will respond in a sluggish manner, take a long time to settle out at set point, and may not respond adequately to upsets.
(e) Integral action. The units of integral action should be minutes per repeat. The integral action must operate on the deviation signal. The integral time should be adjustable between 0.002 and 1000 min. There should be anti-reset windup logic so that the output of the integral term does not saturate into a limit when the controller output reaches that limit. The method of anti-reset windup should incorporate integral feedback. This allows the secondary measurement signal to be fed back to the primary controller in cascade, feed forward, and constraint control systems, maximizing their effectiveness, operability, and robustness. The controller
should be capable of operation without integral action, through the application of an adjustable output bias.
(f) Derivative action and filter. The units of derivative action should be minutes. Derivative action should be applied only to the process variable. The derivative time should be adjustable over the range of 0–500 min. When the user enters a value for derivative time, the controller should automatically insert a filter on the PV, whose time constant (if first order) should be the derivative setting divided by a number between 8 and 10. The filter has the effect of limiting the dynamic gain from derivative action to between 8 and 10 times the controller gain. Changing the value of the controller gain will not change the value of the filter time constant. The preferred derivative filter, however, is second order. If a simple second-order filter is used, then the time constants in the filter should be set equal, and to a value of the derivative setting divided by a number between 16 and 20. This filter has the effect of limiting the dynamic gain from derivative to between 8 and 10 times the controller gain. The preferred second-order filter is of the Butterworth type, whose transfer function would be 1/(1 + Ds/Kd + 0.5(Ds/Kd)²), where Kd is the desired derivative gain of 8–10.
(g) Deadtime compensation. Deadtime compensation can be added by inserting a deadtime block in the integral feedback path of the controller. It improves controller performance for any process (not only one that is dominated by deadtime). It constitutes a fourth controller mode, requiring tuning like the other three. However, along with increased performance comes reduced robustness, requiring more precise tuning for all four modes than a PID controller without deadtime compensation. Deadtime should be adjustable over the range of 0–500 min. The deadtime register should contain at least 20 elements. The register should be initialized (all elements set to the value of the input signal) whenever the controller is placed in manual.
(h) Auto/Manual transfer. Transfer between the automatic and manual modes should be bumpless in either direction. In the case that integral action has not been selected, bumpless transfer from manual to automatic should be achieved by allowing the output bias to approach its set value through a first-order lag. Set point tracking, which forces the SP to equal the PV during manual operation (or before transfer to automatic),
should be optional, as selected by the user. Output tracking, which forces the output to follow a selected signal whenever the controller is placed in the "track" mode, should be available.
(i) Manual reset. Virtually no process requires precisely 50% output on single-output controls or 0% output on two-output controls. Because of this, many older control designs incorporated an adjustment called manual reset (also called offset on some controls). This adjustment allows the user to redefine the output requirement at the set point. A proportioning control without manual or automatic reset (defined below) will settle out somewhere within the proportioning band, but not likely on the set point. Some newer controls use manual reset (as a digital user-programmable value) in conjunction with automatic reset. This allows the user to preprogram the approximate output requirement at the set point, to allow for quicker settling at set point.
(j) Automatic reset (integral). Automatic reset corrects for any offset (between set point and process variable) automatically over time by shifting the proportioning band. Reset redefines the output requirements at the set point until the process variable and the set point are equal. Most current controls allow the user to adjust how fast reset attempts to correct for the offset. Control manufacturers display the reset value as minutes, minutes per repeat (m/r), or repeats per minute (r/m). This difference is extremely important to note, because repeats per minute is the inverse of minutes (or minutes per repeat). The reset time constant must be slower than the process response (a larger number in m/r, or a smaller number in r/m). If the reset value (in minutes per repeat) is too small, a continuous oscillation will result (reset will over-respond to any offset, causing the oscillation). If the reset value (in minutes per repeat) is too large, the process will take too long to settle out at set point. Automatic reset is disabled whenever the temperature is outside the proportioning band, to prevent problems during start-up.
(k) Rate (derivative). Rate shifts the proportioning band on a slope change of the process variable. Rate in effect applies the "brakes" in an attempt to prevent overshoot (or undershoot) on process upsets or start-ups. Unlike reset, rate operates anywhere within the range of the instrument. Rate usually has an adjustable time constant and should be set much shorter than reset. The larger the time constant, the more effect rate will have. Too large a rate time constant will
cause the process to heat too slowly. With too short a rate time constant, the control will be slow to respond to upsets. The time constant is the amount of time any effects caused by rate will be in effect when rate is activated due to a slope change.
(l) Self-tuning, adaptive tuning, pretuning. Many control manufacturers provide various facilities in their controls that allow the user to tune (adjust) the PID parameters to their process more easily. A description of these tuning facilities is given below:
(i) Tuning on demand with upset. This facility typically determines the PID parameters by inducing an upset in the process. The control's proportioning is shut off (ON–OFF mode), and the control is allowed to oscillate around a set point. This allows the control to measure the response of the process when a loading force such as heating or cooling is applied and removed. From this data the control can calculate and load appropriate PID parameters. Some manufacturers perform this procedure at set point, while others perform it at other values. Caution must be exercised, because substantial swings in the process variable are likely to occur while the control is in this mode.
(ii) Adaptive tuning. This mode tunes the PID parameters without introducing any upsets. When a control is utilizing this function, it constantly monitors the process variable for any oscillation around the set point. If there is an oscillation, the control adjusts the PID parameters in an attempt to eliminate it. This type of tuning is ideal for processes where load characteristics change drastically while the process is running. It cannot be used effectively if the process has externally induced upsets that the control cannot possibly tune out. For example, a press where a cold mold is inserted at some cyclic rate could cause the PID parameters to be adjusted to the point where control would be totally unacceptable. Some manufacturers call tuning on demand self tune, auto tune, or pretune. Adaptive tuning is sometimes called self tune, auto tune, or adaptive tune. Since there is no standardization in the naming of these features, questions must be asked to determine how they operate.
(m) Nomenclature:
D = Derivative time
e = Error or deviation = SP − PV
I = Integral time
Kc = Proportional gain of the controller = 100/P
m = Position of the controller output
P = Proportional band = 100/Kc
PID = Proportional, Integral, Derivative
PV = Process variable
s = Laplace operator
SP = Set point
Tau = Time constant
(2) General PID control types.
(a) ON–OFF controls. ON–OFF control is the most basic form of process control, such as temperature control. (i) It changes the output only after the temperature crosses the set point. (ii) It should only be used on noncritical applications; the process temperature never stabilizes at the set point due to process inertia. (iii) It is also used in alarms and safety circuits. (iv) Most PID controls operate in this mode if the proportioning band is set to "0."
(b) Time proportioning controls. (i) They vary the output by cycling a relay or logic voltage on and off. (ii) Proportioning is achieved by varying the On Time versus the Off Time. (iii) They usually include a parameter such as Cycle Time, which is the total of the On Time and the Off Time.
(c) Linear output controls. (i) They provide a DC voltage or current output related to the required output demand. (ii) They are normally connected to an SCR power control or other solid-state device. The power-handling device then converts this signal to a relative power output.
(d) Closed-loop valve motor controls. (i) These controls are used in conjunction with motor actuators in gas heating applications. The control has two outputs (typically relays): one for clockwise rotation and one for counterclockwise rotation. (ii) Feedback as to motor position is provided by a potentiometer attached to the motor.
(e) Open-loop valve motor controls. (i) These controls are used in conjunction with motor actuators in gas heating applications. The control has two
outputs (typically relays): one for clockwise rotation and one for counterclockwise rotation. (ii) No feedback as to motor position is provided. The user enters a value for motor travel time into the control. This allows the control to determine how long to operate the motor in either direction.
(f) High and low limit controls. (i) These are usually used as safety devices on the failure of the primary control device or some other failure in the system. (ii) Once the process variable goes through the limit set point, the controller's output switches. The output will not revert to normal until the process variable returns to a safe value and a reset button is pressed. (iii) Most insurance companies require approved limit devices on certain applications, particularly on gas-fired applications and applications that are left unattended. (iv) For complete safety, a separate sensor and contactor are required. On electric applications utilizing power controls, a contactor connected to the incoming power should be used to protect against power control failure.
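As a rough illustration of several of the requirements above (a positional, ideal-type algorithm, derivative action applied to the PV through a first-order filter, and anti-reset windup), the following Python sketch implements a minimal positional PID. It is not any particular vendor's algorithm; the names, limits, and the clamping-style windup protection are illustrative choices only.

    class PositionalPID:
        """Minimal sketch of an ideal (noninteracting) positional PID:
        output = bias + Kc*(e + (1/I)*integral(e)) + derivative term on the
        filtered PV. All times are in minutes; dt is the sample interval."""

        def __init__(self, Kc, I, D, dt, out_min=0.0, out_max=100.0, bias=0.0):
            self.Kc, self.I, self.D, self.dt = Kc, I, D, dt
            self.out_min, self.out_max, self.bias = out_min, out_max, bias
            self.integral = 0.0
            self.pv_filt = None
            self.pv_filt_prev = None
            # Derivative filter time constant: derivative setting / 8..10
            self.tf = D / 8.0 if D > 0 else 0.0

        def update(self, sp, pv):
            e = sp - pv
            # First-order filter on the PV used by the derivative term
            if self.pv_filt is None:
                self.pv_filt = self.pv_filt_prev = pv
            alpha = self.dt / (self.tf + self.dt) if self.tf > 0 else 1.0
            self.pv_filt_prev = self.pv_filt
            self.pv_filt += alpha * (pv - self.pv_filt)

            # Derivative on the process variable only (not on the error)
            d_term = -self.Kc * self.D * (self.pv_filt - self.pv_filt_prev) / self.dt
            i_term = self.Kc * self.integral / self.I if self.I > 0 else 0.0
            out = self.bias + self.Kc * e + i_term + d_term

            # Simple anti-reset windup: integrate only while unsaturated
            if self.out_min < out < self.out_max:
                self.integral += e * self.dt
            return min(max(out, self.out_min), self.out_max)

Calling update(sp, pv) once per sample interval returns an output clamped to 0–100%; in a unitless controller of the kind specified above, the SP and PV would likewise be scaled to a 0–100% (or 0–1) basis.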
4.2 Industrial Process Controllers
4.2.1 Batch Controllers
Batch control (or batching control) is used for repeated fill operations leading to the production of finite quantities of material by subjecting quantities of input materials to an ordered set of processing activities over a finite period of time and using one or more pieces of equipment. Batch control systems are engineered to control, measure, and dispense any volume of liquid from drums, plating tanks, and other large storage vessels. They can be used in any industry where batching, chemical packaging, or dilution is required to be accurate and efficient. For example, in the pharmaceutical and consumer products industries, batch control systems provide complete automation of batch processes including the mixing, blending, and reacting of products such as spice blends, dairy products, beverages, and personal care products. Table 4.9 gives some examples of typical applications of batch control. In the first case, the batch controller is ideal for discrete manufacturing as well as repetitive fill operations. In this example, the batch controller counts bottles that it then groups into six packs. Its control capability can be used to track bottles or six packs.
Table 4.9 Some Examples of Typical Applications of Batch Control
(1) Discrete filling and batch counting.
(2) Drum filling application utilizing three relay outputs (flowmeter pulse input; prewarn, batch, and grand total relays driving the pump; RS-232 or RS-485 I/O to a computer).
(3) Controlling the mixing of materials (pulse inputs from flowmeters A and B; batch relays driving pumps A and B; RS-232 or RS-485 I/O to a computer).
In the second case, for the drum filling application, the batch controller utilizes its maximum of three relays to control the pump. The prewarn relay slows down the pump near the preset to avoid overshoot. The batch relay stops the pump at the preset. The third (grand total) relay stops the filling operation once a predetermined number of drums has been filled. The third case uses multiple batch controllers in combination to control the mixing of materials in the proper ratio. Each feed line is equipped with its own pump, flow meter, and Laureate controller. Controller setup and monitoring of the mixing operation are facilitated by optional serial communications. An RS-232 or RS-485 I/O interface allows a single data line to handle multiple controllers.
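A rough sketch of the relay decisions in the drum-filling example is given below in Python. The pulse counts, preset, and prewarn offset are hypothetical; only the logic (prewarn slows the pump, batch stops it, grand total ends the operation) follows the description above.

    def drum_filling_step(total_pulses, batch_preset, prewarn_offset,
                          drums_filled, drum_target):
        """Return the relay states for one scan of a hypothetical drum-filling
        batch controller driven by flowmeter pulses."""
        prewarn = total_pulses >= batch_preset - prewarn_offset   # slow the pump
        batch_done = total_pulses >= batch_preset                 # stop the pump
        all_done = drums_filled >= drum_target                    # stop the operation
        return {"prewarn_relay": prewarn and not batch_done,
                "batch_relay": batch_done,
                "grand_total_relay": all_done}

    # Hypothetical scan: 4,920 of 5,000 pulses into the 3rd of 10 drums
    print(drum_filling_step(4920, 5000, 100, 2, 10))
    # -> {'prewarn_relay': True, 'batch_relay': False, 'grand_total_relay': False}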
4.2.1.1 Batch Control Standards
An ISA standard, widely used for batch control, has been released as an IEC Publicly Available Specification (PAS) in a joint effort between ISA and the International Electrotechnical Commission (IEC). The ANSI/ISA-88 (IEC 61512) batch control standard series provides significant benefits to users and suppliers of batch control systems worldwide. Although the standard is primarily designed for batch processes, it is also being applied successfully in various manufacturing industries. This is because the structure required for flexible manufacturing mirrors the structure required for many batch processes, even though the underlying process is often continuous or discrete. ANSI/ISA-88 (IEC 61512) is also the foundation for the newer ISA-95 standard, which addresses the integration between manufacturing and business systems. ANSI/ISA-88 (IEC 61512) offers several automation benefits, including reduction of costs, implementation time, and cycle times. It also allows for improved (batch-to-batch) product consistency, improved product and process quality management, and better cost accounting capability.
(1) Procedural model descriptions. The procedural model is a multitiered, hierarchical model composed of the following procedural elements:
(a) Procedure. A procedure is the strategy used to carry out a process. It is made up of unit procedures.
(b) Unit procedure. A unit procedure is a strategy used to carry out process activities and functions within a unit. A unit procedure is made up of one or more operations (e.g., a main processing vessel unit procedure or a premix vessel unit procedure).
(c) Operation. An operation is a procedural element that defines an independent processing activity carried out by one or more phases within a unit (e.g., add raw materials operation, react operation).
(d) Phases. A phase is the smallest component of the procedural model in terms of process-specific tasks or functions (e.g., charge surfactant, mix, agitate, temperature control, transfer out).
(2) Physical model descriptions. The physical model describes equipment and is composed of the following pieces:
(a) Enterprise. The enterprise is responsible for determining what products will be manufactured, at which sites, and with what processes. The enterprise is an organization that coordinates the operation of one or more sites. These sites may contain areas, process cells, units, equipment modules, and control modules.
(b) Site. A site is a component of a batch manufacturing enterprise identified by physical, geographical, or logical segmentation within an enterprise. It may contain areas, process cells, units, equipment modules, and control modules.
(c) Area. An area is a component of a batch manufacturing site that is identified by physical, geographical, or logical segmentation within a site. It may contain process cells, units, equipment modules, and control modules.
(d) Process cell. A process cell contains all of the units, equipment modules, and control modules required to make one or more batches. It is a component of an area.
(e) Unit. A unit is a grouping of equipment modules, control modules, and other process equipment in which one or more process functions can be conducted on a batch or part of a batch.
(f) Equipment module. An equipment module is a functional group of devices that can carry out a finite number of minor processing activities. These activities make up such process functions as temperature control, pressure control, dosing, and mixing.
(g) Control module. A control module is the lowest level of equipment in the physical model. It can carry out basic functions such as valve or pump control.
It is important to note that the criteria defining the boundaries for the first three levels of the physical model, Enterprise, Site, and Area, are outside the scope of batch control and of the ANSI/ISA-88 (IEC 61512) standard. They are included merely to identify the relationship between the levels affecting batch control
and the enterprise. All batch server products provide control from the process cell downwards.
(3) Types and structures of recipe. Recipes are the key to producing batches and are built around the plant's unique equipment configuration. Each recipe consists of a set of sequential steps, and each step consists of one or more phases. The phases contain the ingredients and their corresponding percentages or set weights necessary to produce a given product. The phases also contain any run-time parameters required to make a given product, such as mix times and mix speeds. The ANSI/ISA-88 (IEC 61512) batch control standard differentiates among four types of recipes:
(a) General recipe. A general recipe defines raw materials, relative quantities, and the processing required. General recipes do not include specifics about geography or the equipment required for processing.
(b) Site recipe. The site recipe is derived from the general recipe but takes into account the geography of the site (there may be different grades of raw materials in different countries or continents) and the local language.
(c) Master recipe. Master recipes are derived from site recipes and are targeted at the process cell. A master recipe is a required recipe level; without it, control recipes cannot be created and batches cannot be produced. A master recipe takes into account its equipment requirements within a given process cell. It includes the following information categories: header, formula, equipment requirements, and procedure. A typical header contains the recipe name, product identification, version number, author, approval for production, and other administrative information. The formula of a master recipe contains raw materials with their respective target amounts, and process parameters such as temperature and pressure. Recipe procedures are built and implemented on equipment (units or classes of units). Equipment requirements provide the information necessary to constrain the choice of equipment when implementing procedures.
(d) Control recipe. A control recipe is created from the master recipe for a specific batch. A master recipe may have one or more control recipes on the batch list or in running status.
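A minimal data-structure sketch of this recipe hierarchy is shown below in Python: phases nest inside operations, operations inside unit procedures, and a master recipe carries the header, formula, equipment requirement, and procedure categories. All field names and example values are illustrative, not part of the standard's formal schema.

    from dataclasses import dataclass, field
    from typing import List, Dict

    @dataclass
    class Phase:            # smallest procedural element, e.g. "charge surfactant"
        name: str
        parameters: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class Operation:        # independent processing activity within a unit
        name: str
        phases: List[Phase] = field(default_factory=list)

    @dataclass
    class UnitProcedure:    # strategy carried out within one unit
        unit: str
        operations: List[Operation] = field(default_factory=list)

    @dataclass
    class MasterRecipe:     # targeted at a process cell
        header: Dict[str, str]
        formula: Dict[str, float]
        equipment_requirements: List[str]
        procedure: List[UnitProcedure]

    # Illustrative master recipe fragment
    recipe = MasterRecipe(
        header={"name": "Shampoo base", "version": "1.2", "author": "QA"},
        formula={"water_kg": 500.0, "surfactant_kg": 120.0, "temp_C": 60.0},
        equipment_requirements=["main processing vessel", "premix vessel"],
        procedure=[UnitProcedure("main processing vessel", [
            Operation("add raw materials", [Phase("charge surfactant"), Phase("mix")]),
            Operation("react", [Phase("temperature control"), Phase("transfer out")]),
        ])],
    )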
4.2.1.2 Control Mechanism
(1) Recipe-based batch control. Recipe execution requires performing a set of actions on one or many processing units. These actions
may range from automated tasks such as downloading set points to controllers, to manual tasks such as adding ingredients. The recipes often differ based on the product produced, but may also differ based upon available equipment. To provide the most value to operations, the recipe should combine the execution of process actions with process monitoring. Process monitoring generates events that execute process actions. For example, if the rate of change of temperature has exceeded 10°F/min for 2 min, it is appropriate to lower the batch temperature set point. A batch expert system provides highly flexible recipe execution based on the ANSI/ISA-88 (IEC 61512) guideline. The system can incorporate process monitoring and model-based control with process actions. In addition, the system is designed to take into account communication or equipment failures that may lead to operations aborting or stopping the recipe. One of the main contributions of the ANSI/ISA-88 (IEC 61512) standard is the introduction of common terminology for batch manufacturing. Two main models comprise the standard: the physical model and the procedural model. In general terms, the physical model is used to describe equipment, and the procedural model describes recipe process sequencing. To better understand the relationship between these two models and actual equipment control, we must also examine how ANSI/ISA-88 (IEC 61512) defines recipes. Essentially, a recipe provides a way to describe products and how those products are produced. The standard differentiates between four types of recipes: general, site, master, and control, as given above. The most significant contribution the standard makes to batch manufacturing is the separation of recipe procedure and equipment control logic. Recipe procedures reside on a PC, while the programming code running the production equipment resides in the PLC or DCS (distributed control system). Accordingly, recipes can be edited and modified without having to modify the PLC code. When it comes time to make a product, the manufacturing requirements defined by the recipe and its procedure are linked to the required equipment. The batch software provided by most vendors includes a batch server that handles this linking at the phase level. The recipe phase within the batch server communicates with the equipment phases via a set of protocols residing in the batch server. These protocols are based on a set of rules represented by a state transition diagram (see Fig. 4.29). The phase logic interface (PLI) resides in the PLC or DCS system to enforce the state transition diagram rules and the handshaking protocols.
(Figure 4.29 diagram: the batch server's recipe procedure, unit procedures, operations, and phases are linked through the phase logic interface (PLI) to the control system's (PLC or DCS) equipment modules, equipment phases, and control modules; the phase state transition diagram includes the states Idle, Running, Complete, Holding, Held, Restarting, Pausing, Paused, Stopping, Stopped, Aborting, and Aborted, with commands such as Start, Hold, Pause, Resume, Stop, Abort, and Reset.)
Figure 4.29 The recipe phase within the batch server communicates with the equipment phases via a set of protocols residing in the batch server. These protocols are based on a set of rules represented by a state transition diagram.
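A simplified sketch of the state transitions that a phase logic interface might enforce is given below in Python. It covers only a subset of the diagram in Figure 4.29 (the transient Restarting and Pausing/Paused states are omitted for brevity), and the command names are illustrative.

    # Approximate subset of the ISA-88 phase state transition rules.
    TRANSITIONS = {
        ("Idle", "start"):       "Running",
        ("Running", "complete"): "Complete",
        ("Running", "hold"):     "Holding",
        ("Holding", "held"):     "Held",
        ("Held", "restart"):     "Running",
        ("Running", "stop"):     "Stopping",
        ("Stopping", "stopped"): "Stopped",
        ("Running", "abort"):    "Aborting",
        ("Aborting", "aborted"): "Aborted",
        ("Complete", "reset"):   "Idle",
        ("Stopped", "reset"):    "Idle",
        ("Aborted", "reset"):    "Idle",
    }

    def next_state(state, command):
        """Reject commands that are illegal in the current state."""
        try:
            return TRANSITIONS[(state, command)]
        except KeyError:
            raise ValueError(f"command '{command}' not allowed in state '{state}'")

    print(next_state("Idle", "start"))      # -> Running
    print(next_state("Running", "hold"))    # -> Holding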
The standard also includes a “Control Activity Model” that describes the various functions required to manage batch production. These functions, detailed below, have been incorporated into most commercial batch servers as follows: (a) Recipe management functions are responsible for creating, storing, and maintaining recipes. The result of this control activity is a master recipe. (b) Production planning and scheduling includes the decision algorithms used to produce batch production schedules. (c) Production information management functions are responsible for collecting, storing, processing, and reporting production information and, more specifically, batch history. (d) Process management functions include the creation of control recipes from master recipes and the initiation and supervision of batches scheduled for production. Additional process management functions include the allocation of equipment and arbitration of common resources, and the actual collection of batch and equipment event information. (e) Unit supervision refers to the functions associated with executing procedural elements—unit procedures, operations,
and phases—within a control recipe. Also included are the complete management of unit resources and the collection of batch and unit information.
(f) Process control (PLC or DCS) functions encompass the execution of equipment phases and the propagation of modes and states to and from any recipe procedural element and equipment or control module. They also cover the execution of I/O control on field devices and data collection from these devices.
(2) Batch process steps. Batch processes deal with discrete quantities of raw materials or products. Batch processes allow for more than one type of product to be processed simultaneously, as long as the products are separated by the equipment layout. Batch processes entail movement of discrete product from processing area to processing area. Batch processes have recipes (or processing instructions) associated with each load of raw material to be processed into product. Batch processes have more complex logic associated with processing than is found in continuous processes. Batch processes often include normal steps that can fail, and thus also include special steps to be taken in the event of a failure; exception handling is therefore of extreme importance in batch processes. Each step can be simple or complex in nature, consisting of one or more operations. Generally, once a step is started it must be completed to be successful. It is not uncommon to require some operator approval before leaving one step and starting the next. There is frequently provision for non-normal exits to be taken because of operator intervention, equipment failure, or the detection of hazardous conditions. Depending on the recipe for the product being processed, a step may be bypassed for some products. The processing operations for each step are generally under recipe control, but may be modified by operator override action. A typical process step is given in Fig. 4.30.
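The step logic of Fig. 4.30 can be summarized in a short Python sketch: a step may be bypassed, its operations are performed in order, a fault or operator abort takes the non-normal exit, and the step may hold at completion for operator approval. The function and flag names below are hypothetical stand-ins for the decisions shown in the flowchart.

    def run_batch_step(operations, bypass=False, hold_at_completion=False,
                       abort_requested=lambda: False):
        """'operations' is a list of callables returning True on success;
        the keyword flags stand in for operator/recipe decisions."""
        if bypass:
            return "next"                      # step bypassed for this product
        for operation in operations:
            if not operation() or abort_requested():
                return "fault_exit"            # non-normal exit to a predefined step
        if hold_at_completion:
            return "hold"                      # wait for operator approval
        return "next"

    # Hypothetical step with two operations, the second of which fails
    print(run_batch_step([lambda: True, lambda: False]))   # -> fault_exit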
4.2.2 Servo Controllers
Servos are used in many applications all over the world. They are used in many remote control devices, steering mechanisms in cars, wing control in airplanes, and robotic arm control in workshops. They come in all shapes, sizes, and strengths. Servo control is carried out by the servo controller, which provides all of the information the servos need. Commercial vendors have developed a number of products aimed at meeting the demands of precision servo applications, with new products currently in the works.
(Figure 4.30 flowchart: from the previous step, an operator or recipe bypass command can skip the step; otherwise the step operation is performed, with an operator abort command or a detected fault taking a fault exit to a predefined step, and an optional hold at step completion before proceeding to the next step.)
Figure 4.30 A typical batch process step.
4.2.2.1 Components and Architectures
(1) Servo system. Servo is a term that applies to a function or a task (Fig. 4.31). The function, or task, of a servo can be described as follows: In a servo system, a command signal issued from the user interface panel comes into the servo's "positioning controller." The positioning controller is the device that stores information about various jobs or tasks. It has been programmed to activate the motor or load, that is, to change speed or position or both. The signal then passes into the servo control or "amplifier" section. The servo control takes this low-power-level signal and increases, or amplifies, the power up to appropriate levels to actually result in movement of the servomotor and load. These low-power-level signals must be amplified: higher voltage levels are needed to rotate the servomotor at appropriately higher speeds, and higher current levels are required to provide torque to move heavier loads. This power is supplied to the servo control (amplifier) from the "power supply," which simply converts AC power into the required DC level. It also supplies any low-level voltage required for operation of integrated circuits. As power is applied to the servomotor, the load begins to move: speed and position change. As the load moves, so does
(Figure 4.31 diagram: a command signal from the interface panel enters a programmable positioning controller; the servo control (amplifier), fed from an AC-to-DC power supply, delivers high-level power to the servo motor and load, and feedback from the load returns to the controller.)
Figure 4.31 The concept of a servo system.
some other "device." This other "device" is either a tachometer, a resolver, or an encoder (providing a signal that is "sent back" to the controller). This "feedback" signal informs the positioning controller whether the motor is doing the proper job. The positioning controller looks at this feedback signal and determines whether the load is being moved properly by the servo motor; if not, the controller makes appropriate corrections. Therefore, a servo involves several devices. It is a system of devices for controlling some item (load). The item (load) that is controlled (regulated) can be controlled in any manner, that is, in position, direction, or speed. The speed or position is controlled in relation to a reference (command signal), as long as the proper feedback device (error detection device) is used. The feedback and command signals are compared, and corrections are made. Thus, the definition of a servo system is that it consists of several devices that control or regulate the speed and position of a load.
(2) Logic circuits of servo controllers. Radio controlled (R/C) servos have enjoyed a big comeback in recent years due to their adoption by a new generation of robotics enthusiasts. Driving these versatile servos requires the generation of a potentially large number of stable pulse width modulated (PWM) control signals, which can be a daunting task. A simple solution to this problem is to use a dedicated serial servo controller board, which handles all the details of the multichannel PWM signal generation while being controlled through simple commands issued on a standard serial UART. Here we introduce a 32-parallel-channel design that combines the brute force of an FPGA (field-programmable gate array) and the higher-level
intelligence of the MCU (microcontroller unit) to achieve some impressive specifications. Figure 4.32 gives a block diagram of this type of serial servo controller's logic circuits. An array of 32 parallel channels of 16-bit accuracy, 12-bit resolution PWM generation units is implemented inside the FPGA. An MCU is used at the heart of the system. Its external memory bus is put to good use in interfacing with the memory-mapped array of 64 PWM registers (i.e., 32 × 16 bits) inside the FPGA. The many roles of the MCU include initializing the FPGA registers with user-configurable servo startup positions stored in the internal EEPROM. In response to an external interrupt occurring at every PWM cycle, all current servo position values are refreshed in the memory-mapped PWM registers of the FPGA. This is done as a simple memory-to-memory transfer (Figs 4.33 and 4.34).
(3) Open loop and closed loop. In a servo system, the controller is the device that activates motion by providing a command to start or change speed or position or both. This command is amplified and applied to the motor. Thus, motion commences (Fig. 4.35). Systems that assume the motion has taken place (or is in the process of taking place) are termed "open loop." An open loop drive is one in which the signal goes "in one direction only," from the control to the motor. There is no signal returning from the motor or load to inform the control that action or motion has occurred.
(Figure 4.32 diagram: an MCU with RS-232/RS-485 level shifter, clock, and program memory is interfaced to the FPGA over the address/data bus with ALE, READ, WRITE, and interrupt lines; regulators supply +3.3 V and +1.8 V, and the FPGA drives the PWM outputs.)
Figure 4.32 Overall serial servo controller circuit block diagram.
(Figure 4.33 diagram: inside the FPGA, an address latch and address decoder, a 16-bit master counter, a channel multiplexer, and a control register feed 32 PWM modules producing outputs pwm0–pwm31, plus a status LED.)
Figure 4.33 Internal architecture of the FPGA for the serial servo controller circuit given in Figure 4.32.
(Figure 4.34 flowchart: after initialization the MCU waits for the FPGA, loads settings from EEPROM (limits, speed, base address, start position, baud rate), parses received serial bytes into packets, processes packets and executes commands, computes servo positions with a velocity loop and range limiting, transfers the computed array of 32 servo positions to the FPGA registers when the refresh flag is set, and performs housekeeping.)
Figure 4.34 MCU firmware flowchart for this serial servo controller circuit given in Figure 4.32.
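As a rough illustration of the refresh branch in the firmware flowchart of Figure 4.34, the Python sketch below converts commanded servo positions into pulse-width register values and writes them to the memory-mapped FPGA registers. The pulse range, counter resolution, and register-write interface are assumptions for illustration, not the actual design values.

    # Hypothetical constants: 1.0-2.0 ms R/C servo pulse, assumed 0.5 us counter step.
    PULSE_MIN_US = 1000
    PULSE_MAX_US = 2000
    COUNTS_PER_US = 2

    def position_to_register(position):
        """Map a normalized position (0.0..1.0) to a PWM compare register value."""
        position = min(max(position, 0.0), 1.0)            # enforce range limits
        pulse_us = PULSE_MIN_US + position * (PULSE_MAX_US - PULSE_MIN_US)
        return int(pulse_us * COUNTS_PER_US)

    def refresh_fpga(positions, write_register):
        """Transfer the computed array of 32 servo positions to the FPGA,
        as in the refresh branch of the firmware flowchart."""
        for channel, position in enumerate(positions[:32]):
            write_register(channel, position_to_register(position))

    # Example: mid-travel on every channel, with a stand-in register writer
    refresh_fpga([0.5] * 32, lambda ch, val: print(ch, val) if ch == 0 else None)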
(Figure 4.35 diagrams: (a) control to motor, with the signal going in one direction only; (b) control to motor, with a signal returned back to the control.)
Figure 4.35 (a) Open loop drive and (b) close loop drive.
If a signal is returned to provide information that motion has occurred, then the system is described as having a signal that goes in "two directions": the command signal goes out (to move the motor), and a signal is returned (the feedback) to the control to inform the control of what has occurred. The information flows back, or returns. The weaknesses of the open loop approach include the following: it is not good for applications with varying loads; it is possible for a stepper motor to lose steps; its energy efficiency level is low; and it has resonance areas that must be avoided. What applications use the closed loop technique? Those that require control over a variety of complex motion profiles use it. These may involve control of velocity and/or position; high resolution and accuracy; velocities that may be either very slow or very high; and high torque demands in a small package size. Because of additional components such as the feedback device, complexity is considered by some to be a weakness of the closed loop approach. These additional components do add to initial cost (an increase in productivity is typically not considered when investigating cost). A lack of understanding can also give the user the impression of difficulty. In many applications, whether the open loop or closed loop technique is employed often comes down to a basic decision by the user.
4.2.2.2 Control Mechanism
Servo control is the regulation of velocity and position of a motor based on a feedback signal. The most basic servo loop is the velocity loop. The velocity loop produces a torque command to minimize the error between velocity command and velocity feedback. Most servo systems require
position control in addition to velocity control. The most common way to provide position control is to add a position loop in "cascade" or series with a velocity loop. Sometimes a single PID position loop is used to provide position and velocity control without an explicit velocity loop. Servo loops have to be "tuned" for each application. Tuning is the process of setting servo gains. Higher servo gains provide higher levels of performance, but they also move the system closer to instability. Low-pass filters are commonly used in series with the velocity loop to reduce high-frequency stability problems. Filters must be tuned at the same time the servo loops are tuned. Some drive manufacturers deal with demanding applications by providing advanced control algorithms. The algorithms may be necessary because the mechanics of the system do not allow the use of standard servo loops, or because the performance requirements of the application may not be satisfied with standard servo control loops. Motor control describes the process of producing actual torque in response to the torque command from the servo control loops. For brush motors, motor control is simply the control of current in the motor winding, because the torque produced by the motor is approximately proportional to the current in the winding. Most industrial servo controllers rely on current loops. Current loops are similar in structure to velocity loops, but they operate at much higher frequencies. A current loop takes a current command (usually just the output of the velocity loop), compares it to a current feedback signal, and generates an output that is essentially a voltage command. If the system needs more torque, the current loop responds by increasing the voltage applied to the motor until the right amount of current is produced. Tuning current loops is complicated. Manufacturers usually tune current loops for a motor, so users do not have to perform this function. One type of power semiconductor is the silicon controlled rectifier (SCR), which is connected to the AC line voltage (Fig. 4.36). This type of device is usually employed where large amounts of power must be regulated, motor inductance is relatively high, and accuracy in speed is not critical (such as constant speed devices for fans, blowers, and conveyor belts). Power out of the SCR, which is available to run the motor, comes in discrete pulses. At low speeds, a continuous stream of narrow pulses is required to maintain speed. If an increase in speed is desired, the SCR must be turned on to apply large pulses of instant power, and when lower speeds are desired, power is removed and a gradual coasting down in speed occurs. A good example would be when one car is towing a second car. The driver in the first car is the SCR device and the second car, which is being towed, is the motor/load. As long as the chain is taut, the driver in the first car is in control of
(Figure 4.36 diagrams: (a) an SCR control delivering discrete pulses of power to the motor, where narrow pulses maintain speed, wide pulses increase speed, and removing pulses slows the motor; (b) pulse width modulation, where the pulse width within the period (T1, T2) determines the average voltage; (c) pulse frequency modulation, where a variable pulse frequency determines the average voltage.)
Figure 4.36 Servo control types: (a) an SCR control; (b) pulse width determines average voltage; and (c) pulse frequency modulation to determine average voltage.
the second car. However, suppose the first car slows down. There would be slack in the chain and, at that point, the first car is no longer in control (and will not be again until the chain is taut). So, for the periods of time when the first car must slow down, the driver is not in control. This sequence occurs repeatedly, resulting in a jerky, cogging operation. This type of speed control is adequate for many applications. If smoother speed is desired, an electronic network may be introduced. By inserting a "lag" network, the response of the control is slowed so that a large instant power pulse will not suddenly be applied. The filtering action of the lag network gives the motor a sluggish response to a sudden change in load or to speed command changes. This sluggish response is not important in applications with steady loads or extremely large inertia. However, for wide-range, high-performance systems, in which rapid response is important, it becomes extremely desirable to minimize this sluggish reaction, since rapid changes to speed commands are required. Transistors may also be employed to regulate the amount of power applied to a motor. With these devices, there are several "techniques," or design methodologies, used to turn transistors "on" and "off." The "technique" or mode of operation may be "linear," "pulse width modulated" (PWM), or "pulse frequency modulated" (PFM). The "linear" mode uses transistors that are activated, or turned on, all the time, supplying the appropriate amount of power required. Transistors act like a water faucet, regulating the appropriate amount of power to drive the motor. If the transistor is turned on halfway, then half of the power goes to the motor. If the transistor is turned fully on, then all of the power
goes to the motor and it operates harder and faster. Thus, for the linear type of control, power is delivered constantly, not in discrete pulses (as with the SCR control), and better speed stability and control are obtained. Another technique is termed pulse width modulation (PWM). With PWM techniques, power is regulated by applying pulses of variable width, that is, by changing or modulating the pulse widths of the power. In comparison with the SCR control (which applies large pulses of power), the PWM technique applies narrow, discrete (when necessary) power pulses. Operation is as follows: with the pulse width small, the average voltage applied to the motor is low, and the motor's speed is slow. If the width is wide, the average voltage is higher, and therefore motor speed is higher. This technique has an advantage in that the power loss in the transistor is small; that is, the transistor is either fully "ON" or fully "OFF" and, therefore, the transistor has reduced power dissipation. This approach allows for smaller package sizes. The final technique used to turn transistors "ON" and "OFF" is termed pulse frequency modulation (PFM). PFM regulates the power by applying pulses of variable frequency, that is, by changing or modulating the timing of the pulses. The system operates as follows: with very few pulses, the average voltage applied to the motor is low, and motor speed is slow. With many pulses, the average voltage is increased, and motor speed is higher.
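The cascaded structure described at the beginning of this subsection (an outer position loop feeding a velocity loop, whose output becomes the torque or current command) can be sketched in a few lines of Python. The gains, sample time, and torque limit below are illustrative only, not recommended tuning values.

    def servo_cascade_step(pos_cmd, pos_fb, vel_fb, state,
                           kp_pos=20.0, kp_vel=0.5, ki_vel=5.0, dt=0.001,
                           torque_limit=10.0):
        """One sample of a position loop cascaded with a PI velocity loop.
        Returns the torque command passed on to the current loop."""
        vel_cmd = kp_pos * (pos_cmd - pos_fb)              # outer position loop
        vel_err = vel_cmd - vel_fb
        state["vel_integral"] += vel_err * dt              # inner velocity loop (PI)
        torque_cmd = kp_vel * vel_err + ki_vel * state["vel_integral"]
        return max(-torque_limit, min(torque_limit, torque_cmd))

    state = {"vel_integral": 0.0}
    print(servo_cascade_step(1.0, 0.95, 0.8, state))       # -> torque command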
4.2.2.3 Distributed Servo Control
There are seemingly as many ways of putting together servo motion systems (motion control systems that combine hardware and software) as there are applications for them. Some motion controllers operate on multiple platforms and buses, with units providing analog output to a conventional amplifier, as well as units that provide current control and direct pulse width modulation (PWM) output for as many as 32 motors simultaneously. There are amplifiers that still require potentiometers to be adjusted for the digital drives' position, velocity, and current control. With near-limitless mixing and matching of these units, it is possible to achieve a satisfactory solution for an application. Advances in hardware continue to make possible motion control products that operate faster and more precisely. New microprocessors and digital signal processors (DSPs) provide the tools necessary to create better features, functionality, and performance for lower cost. These feature and functionality enhancements, such as increasing the number of axes on a motion controller or adding position controls to a drive, can be traced back
to advances in the electronics involved. As dedicated servo and motion control performance has improved, the system-level requirements have increased. Machines that perform servo actions now routinely operate with more advanced software and more complex functions on their host computers. Companies that provide servo or motion systems are differentiating themselves from their competition by the quality of the operating systems they deliver to their customers. Traditional servo systems often consist of a high-power, front-end computer that communicates with a high-power, DSP-based, multiaxis motion controller card interfacing with a number of drives or amplifiers, which often have their own high-end processors on board. In a number of these cases, the levels of communication between the high-end computer and the motion controller can be broken down into three categories: simple point-to-point moves with noncritical trajectories; moves requiring coordination and blending, with trajectory generation being tied to the operation of the machine; and complex moves with trajectories that are critical to the process or machine. Increasingly, a significant case can be made for integrating motion control functionality into one compact module to enable mounting close to the motors, providing a distributed system. By centralizing the communications link to the computer from this controller/amplifier, such a solution significantly reduces the system wiring, thereby reducing cost and improving reliability. By increasing the number of power stages, the unit can drive more than one motor independently, up to the processing power of the DSP. If size reductions are significant enough, the distributed servo controller/amplifier does not have to be placed inside a control panel, thus potentially reducing system costs even further. To implement such a distributed servo controller and amplifier, the unit must meet the basic system requirements: enough processing power to control all aspects of a multiple-axis move, fast enough communications between the computer and the servo controller/amplifier to do the job, a small enough size to satisfy distributed-system requirements, and a reasonable price.
(1) Levels of motion complexity. The three basic levels of motion complexity require different features and functions from a distributed servo controller and amplifier. It is possible to envision two types of distributed servo controllers and amplifiers for point-to-point moves with noncritical trajectory applications: those for repetitive point-to-point moves with external-event-generated conditions and others for point-to-point destinations that vary
based on user- or machine-generated events. A typical application for this type of system is a "pick and place" robot. The robot's purpose can vary from moving silicon wafers between trays to placing parts from an assembly line into packing boxes. The requirements for a distributed servo controller and amplifier involve determining a trajectory based on the current location and requiring either a user destination or a preprogrammed destination. Effectively, the only user requirement is to set acceleration, velocity, and jerk parameters (for an S-curve), and a series of final points. As a unit, there are minimal communication requirements between host and controller, with simple commands and feedback. Such communication requirements include a resident motion program for path planning; support for I/O functionality; support for a terminal interface via RS-232; use without a resident editor/compiler; and stand-alone motion control. A complete motion control application is programmed into the distributed control module for "stand-alone mode" control of three-axis applications. This mode is typically used for machines that require simple, repetitive sequences operating from an RS-232 terminal. Communication for the stand-alone mode can be simple and is often not time-critical. Simple point-to-point RS-232 communication is often acceptable for applications requiring few motors. Multidrop RS-485 or IEEE 1394 is suitable for applications with many axes of motion and motor networks. The requirements for the "blended moves with important trajectory" option involve additional communications to a host computer, as the moves involve more information, and speed and detail of feedback to the host are critical. A typical application for this requirement would be a computerized numerical control system designed to cut accurate, repeatable paths. The trajectory generation and following must be smooth, as there is a permanent record of the cut. The distributed servo control module must be capable of full coordination of multiple-axis control. Communication to the host is critical to this application. Latency times between the issue of a command and the motor reaction time can cause inconsistencies in the motion. In coordinating the motion between multiple motors, there are several possibilities. With a single distributed servo controller and amplifier module using multiple amplifiers, coordination can be achieved on the same controller. For applications requiring coordination between motors connected to different controllers, it is possible to achieve this with a suitable choice of high-bandwidth network.
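For the simpler point-to-point case described above, the controller essentially turns an acceleration limit, a velocity limit, and a destination into a motion profile. The Python sketch below generates a basic trapezoidal velocity profile for one axis; jerk limiting for a true S-curve is omitted for brevity, and all numbers are illustrative.

    def trapezoidal_profile(distance, v_max, accel, dt=0.001):
        """Yield per-sample positions for a point-to-point move of 'distance'
        with a trapezoidal velocity profile (triangular if the move is short)."""
        t_acc = v_max / accel
        d_acc = 0.5 * accel * t_acc ** 2
        if 2 * d_acc > distance:                  # too short to reach v_max
            t_acc = (distance / accel) ** 0.5
            v_max = accel * t_acc
            d_acc = distance / 2
        t_flat = (distance - 2 * d_acc) / v_max
        t_total = 2 * t_acc + t_flat

        positions, t = [], 0.0
        while t <= t_total:
            if t < t_acc:                         # accelerating
                s = 0.5 * accel * t ** 2
            elif t < t_acc + t_flat:              # constant velocity
                s = d_acc + v_max * (t - t_acc)
            else:                                 # decelerating
                td = t - t_acc - t_flat
                s = d_acc + v_max * t_flat + v_max * td - 0.5 * accel * td ** 2
            positions.append(s)
            t += dt
        return positions

    # Example: 100 mm move at 200 mm/s with 1000 mm/s^2 acceleration
    profile = trapezoidal_profile(100.0, 200.0, 1000.0)
    print(len(profile), round(profile[-1], 2))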
(2) Trajectory planning. Applications that require complex moves with critical trajectories, such as inverse kinematics, often include the trajectory planning as part of their own intellectual property. The generation of the trajectory is done on a computer platform and the information is transferred to the motion controller. Typically, significant engineering effort is put into the top-level software provided with the robot. The software precalculates the trajectory and transfers this information to the controller, with the result that a conventional motion controller will be underused. For a distributed servo controller and amplifier, the interface would be to send a series of set points over the network that the controller would then follow, interpolating the information for higher resolution. The servo drives are configured for torque, speed, or position control. This solution requires the addition of smart cards in the PC, with DeviceNet and FireWire solutions as add-on options. This type of configuration supports synchronized motion profile streaming (Profile Streaming Mode) for users who need to specify an arbitrary trajectory. Meanwhile, system integration continues to advance on all fronts. All major value-adding components of motion control systems will soon have to comply with the demands for faster controllers with high-speed multiaxis capabilities supplying commands in multitasking applications.
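A minimal sketch of this set-point streaming idea is given below in Python: the host streams coarse set points at a fixed period, and the distributed controller linearly interpolates between them at its faster servo update rate. The rates and values are illustrative only.

    def interpolate_setpoints(setpoints, stream_period, servo_period):
        """Expand coarse streamed set points (one every 'stream_period' seconds)
        into fine servo-rate targets by linear interpolation."""
        steps = round(stream_period / servo_period)
        fine = []
        for a, b in zip(setpoints, setpoints[1:]):
            fine.extend(a + (b - a) * i / steps for i in range(steps))
        fine.append(setpoints[-1])
        return fine

    # Host streams one point every 10 ms; controller updates every 1 ms
    print(interpolate_setpoints([0.0, 1.0, 1.5], 0.010, 0.001))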
4.2.2.4 Important Servo Control Devices
(1) Types of motors. Electric motor design is based on the placement of conductors (wires) in a magnetic field. A winding has many conductors, or turns of wire, and the contribution of each individual turn adds to the intensity of the interaction. The force developed from a winding is dependent on the current passing through the winding and the magnetic field strength. If more current is passed through the winding, then more force (torque) is obtained. In effect, two interacting magnetic fields cause movement: the magnetic field from the rotor and the magnetic field from the stator attract each other. This becomes the basis of both AC and DC motor design.
(a) AC motors. AC motors are relatively constant speed devices. The speed of an AC motor is determined by the frequency of the voltage applied (and the number of magnetic poles). There are basically two types of AC motors: induction and synchronous.
(i) Induction motor. If the induction motor is viewed as a type of transformer, it becomes easy to understand.
By applying a voltage to the primary of the transformer winding, a current flow results and induces current in the secondary winding. The primary is the stator assembly, and the secondary is the rotor assembly. One magnetic field is set up in the stator, and a second magnetic field is induced in the rotor. The interaction of these two magnetic fields results in motion. The speed of the magnetic field going around the stator will determine the speed of the rotor. The rotor will try to follow the stator's magnetic field, but will "slip" when a load is attached. Therefore, induction motors always rotate slower than the stator's rotating field. Typical construction of an induction motor consists of (1) a stator with laminations and turns of copper wire and (2) a rotor, constructed of steel laminations with large slots on the periphery, stacked together to form a "squirrel cage" rotor. The rotor slots are filled with conductive material (copper or aluminum) and are short-circuited on themselves by the conductive end pieces. This one-piece casting usually includes integral fan blades to circulate air for cooling purposes. The standard induction motor is operated at a "constant" speed from standard line frequencies. Recently, with the increasing demand for adjustable speed products, controls have been developed that adjust the operating speed of induction motors. Microprocessor drive technology, using methods such as vector or phase angle control (i.e., variable voltage, variable frequency), manipulates the magnitude of the magnetic flux of the fields and thus controls motor speed. With the addition of an appropriate feedback sensor, this becomes a viable consideration for some positioning applications. Controlling the induction motor's speed and torque becomes complex, since motor torque is no longer a simple function of motor current. Motor torque affects the slip frequency, and speed is a function of both stator field frequency and slip frequency.
(ii) Synchronous motor. The synchronous motor is basically the same as the induction motor but with slightly different rotor construction. The rotor construction enables this type of motor to rotate at the same speed (in synchronization) as the stator field. There are basically two types of synchronous motors: self-excited (like the induction motor) and directly excited (as with permanent magnets).
The self-excited motor (sometimes called a reluctance synchronous motor) includes a rotor with notches, or teeth, on the periphery. The number of notches corresponds to the number of poles in the stator. Oftentimes the notches or teeth are termed salient poles. These salient poles create an easy path for the magnetic flux field, thus allowing the rotor to "lock in" and run at the same speed as the rotating field. A directly excited motor (sometimes called a hysteresis synchronous, or AC permanent magnet synchronous, motor) includes a rotor with a cylinder of a permanent magnet alloy. The permanent magnet north and south poles, in effect, are the salient teeth of this design, and therefore prevent slip. In both the self-excited and directly excited types there is a "coupling" angle; that is, the rotor lags a small distance behind the stator field. This angle will increase with load, and if the load is increased beyond the motor's capability, the rotor will pull out of synchronism. The synchronous motor is generally operated in an "open loop" configuration, and within the limitations of the coupling angle (or "pull-out" torque) it will provide absolutely constant speed for a given load. Also, note that this category of motor is not self-starting and employs start windings (split-phase, capacitor start), or controls that slowly ramp up frequency and voltage to start rotation. A synchronous motor can be used in a speed control system, although a feedback device must be added. Vector control approaches will work quite adequately with this motor design. However, in general, the rotor is larger than that of an equivalent servomotor and, therefore, may not provide adequate response for incrementing applications. Other disadvantages are the following: while the synchronous motor may start a high-inertia load, it may not be able to accelerate the load enough to pull it into synchronism. If this occurs, the synchronous motor operates at low frequency and at very irregular speeds, resulting in audible noise. Also, for a given horsepower, synchronous motors are larger and more expensive than nonsynchronous motors.
(b) DC motors. DC motor speeds can easily be varied; therefore they are utilized in applications where speed control, servo control, and/or positioning needs exist. The stator field is
produced by either a field winding or by permanent magnets. This is a stationary field (as opposed to the AC stator field that is rotating). Passing current through a commutator and into the rotor assembly sets up the second field, the rotor field. The rotor field rotates in an effort to align itself with the stator field, but at the appropriate time (due to the commutator) the rotor field is switched. In this method then, the rotor field never catches up to the stator field. Rotational speed (i.e., how fast the rotor turns) is dependent on the strength of the rotor field. In other words, the more voltage on the motor, the faster the rotor will turn. The following will briefly explore the various wound field motors and the permanent magnet (PMDC) motors. (i) Shunt wound motors. With the shunt wound, the rotor and stator (or field windings) are connected in parallel. The field windings can be connected to the same power supply as the rotor, or excited separately. Separate excitation is used to change motor speed (i.e., rotor voltage is varied while stator or field winding is held constant). The parallel connection provides a relatively flat speed-torque curve and good speed regulation over wide load ranges. However, because of demagnetization effects, these motors provide starting torques comparatively lower than other DC winding types. (ii) Series wound motors. In the series wound motor, the two motor fields are connected in series. The result is two strong fields that will produce very high starting torque. The field winding carries the full rotor current. These motors are usually employed where large starting torques are required, such as in cranes and hoists. Series motors should be avoided in applications where they are likely to lose their load because of the tendency to "run away" under no-load conditions. (iii) Compound wound motor. Compound motors use both a series and a shunt stator field. Many speed-torque curves can be created by varying the ratio of series and shunt fields. In general, small compound motors have a strong shunt field and a weak series field to help start the motor. High starting torques are exhibited along with relatively flat speed-torque characteristics. In reversing applications, the polarity of both windings must be switched, thus requiring large, complex circuits.
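The statement that rotor speed tracks applied voltage can be illustrated with the usual steady-state DC motor approximation, in which the applied voltage is split between the resistive drop and the back EMF. The C sketch below is only an illustrative model (it ignores brush drop and armature reaction), and the motor parameters are hypothetical values chosen for the example.

#include <stdio.h>

/* Steady-state speed of a DC motor (rad/s): w = (V - I*R) / Ke,
 * where Ke is the back-EMF constant in V*s/rad. */
static double dc_motor_speed(double volts, double amps, double r_ohms, double ke)
{
    return (volts - amps * r_ohms) / ke;
}

int main(void)
{
    /* Hypothetical small PMDC motor: R = 1.2 ohm, Ke = 0.05 V*s/rad, 2 A load current. */
    for (double v = 6.0; v <= 24.0; v += 6.0) {
        double w = dc_motor_speed(v, 2.0, 1.2, 0.05);
        printf("V = %4.1f V -> speed = %6.1f rad/s (%6.0f rpm)\n",
               v, w, w * 60.0 / 6.2831853);
    }
    return 0;
}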
(iv) Stepper motor. Step motors are electromechanical actuators that convert digital inputs to analog motion. This is possible through the motor's controller electronics. There are various types of step motors such as solenoid activated, variable reluctance, permanent magnet, and synchronous inductor. Independent of stepper type, all are devices that index in fixed angular increments when energized in a programmed manner. Step motors' normal operation consists of discrete angular motions of uniform magnitude rather than continuous motion. A step motor is particularly well suited to applications where the controller signals appear as pulse trains. One pulse causes the motor to increment one angle of motion. This is repeated for each pulse. Most step motors are used in an open-loop system configuration, which can result in oscillations. To overcome this, either complex circuits or feedback is employed, resulting in a closed-loop system. Stepper motors are, however, limited to about one horsepower and 2000 rpm, limiting them in many applications. (v) PMDC motor. The predominant motor configuration utilized in demanding incrementing (start-stop) applications is the permanent magnet DC (PMDC) motor. This type with appropriate feedback is quite an effective device in closed-loop servo system applications. Since the stator field is generated by permanent magnets, no power is used for field generation. The magnets provide constant field flux at all speeds. Therefore, linear speed-torque curves result. This motor type provides relatively high starting, or acceleration, torque, is linear and predictable, has a smaller frame and lighter weight compared to other motor types, and provides rapid positioning. (2) Types of feedback devices. Servos use feedback signals for stabilization, speed, and position information. This information may come from a variety of devices such as the analog tachometer, the digital tachometer (optical encoder), or from a resolver. In the following, each of these devices will be defined and the basics explored. (a) Analog tachometers. Tachometers resemble miniature motors. However, the similarity ceases there. In a tachometer, the gauge of wire is quite fine, so the current handling capability
is small. But the tachometer is not used as a power-delivering device. Instead, the shaft is turned by some mechanical means and a voltage is developed at the terminals (a motor in reverse!). The faster the shaft is turned, the larger the magnitude of voltage developed (i.e., the amplitude of the signal is directly proportional to speed). The output voltage shows a polarity (+ or –) that is dependent on the direction of rotation. Analog, or DC, tachometers, as they are often termed, play an important role in drives because of their ability to provide directional and rotational information. They can be used to provide speed information to a meter (for visual speed readings) or provide velocity feedback (for stabilization purposes). The DC tachometer provides the simplest, most direct method of accomplishing this feat. As an example of a drive utilizing an analog tachometer for velocity information, consider a lead screw assembly that must move a load at a constant speed. The motor is required to rotate the lead screw at 3600 rpm. If the tachometer's output voltage gradient is 2.5 V/Krpm, the voltage read on the tachometer terminals should be
3.600 Krpm × 2.5 V/Krpm = 9 V
If the voltage read is indeed 9 V, then the tachometer (and motor/load) is rotating at 3600 rpm. The servo drive will try to maintain this voltage to ensure the desired speed. Although this example has been simplified, the basic concept of speed regulation via the tachometer is illustrated. Some of the terminologies associated with tachometers that explain the basic characteristics of this device are voltage constant, ripple, and linearity. The following will define each. A tachometer's voltage constant may also be referred to as voltage gradient, or sensitivity. This represents the output voltage generated from a tachometer when operated at 1000 rpm, that is, V/Krpm. It is sometimes converted and expressed in volts per radian per second, that is, V/rad/s. Ripple may be termed voltage ripple or tachometer ripple. Since tachometers are not ideal devices and design and manufacturing tolerances enter into the product, there are deviations from the norm. When the shaft is rotated, a DC signal is produced as well as a small amount of an AC signal that is superimposed on the DC level. In reviewing literature, care must be exercised to determine the definition of ripple since there are three methods of
presenting the data: (1) peak-to-peak: the ratio of peak-to-peak ripple expressed as a percent of the average DC level, (2) RMS: the ratio of the rms of the AC component expressed as a percent of the average DC level, and (3) peak to average: the ratio of maximum deviation from the average DC value expressed as a percent of the average DC level. An ideal tachometer would have a perfectly straight line of voltage versus speed. Again, designing and manufacturing tolerances enter the picture and alter this straight line. Thus, linearity is a measure of how far away from perfect this product or design is. Linearity is specified as the maximum difference between the actual and theoretical curves. (b) Digital tachometers. A digital tachometer, often termed an optical encoder or simply encoder, is a mechanical-to-electrical conversion device. The encoder's shaft is rotated and an output signal results, which is proportional to the distance (i.e., angle) the shaft is rotated through. The output signal may be square waves, or sinusoidal waves, or provide an absolute position. Thus, encoders are classified into two basic types: absolute and incremental. (i) Absolute encoder. The absolute encoder provides a specific address for each shaft position throughout 360°. This type of encoder employs either contact (brush) or noncontact schemes of sensing position. The contact scheme incorporates a brush assembly to make direct electrical contact with the electrically conductive paths of the coded disk to read address information. The noncontact scheme utilizes photoelectric detection to sense position of the coded disk. The number of tracks on the coded disk may be increased until the desired resolution or accuracy is achieved. And since position information is directly on the coded disk assembly, the disk has a built-in "memory system" and a power failure will not cause this information to be lost. Therefore, it will not be required to return to a "home" or "start" position on reenergizing power. (ii) Incremental encoder. The incremental encoder provides either pulses or a sinusoidal output signal as it is rotated throughout 360°. Thus, distance data is obtained by counting this information. The disk is manufactured with opaque lines. A light source passes a beam through the transparent segments onto a photosensor that outputs a sinusoidal waveform. Electronic processing can be used to transform this signal into a square pulse train.
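A common way to turn the incremental encoder's pulse train into a speed reading is to count pulses over a fixed sample interval. The short C sketch below assumes a two-channel (quadrature) encoder decoded at 4x resolution; the line count and sample period are hypothetical values chosen only to illustrate the arithmetic.

#include <stdio.h>

/* Convert a count difference accumulated over one sample period into rpm.
 * counts_per_rev = line count * 4 for a two-channel encoder decoded at 4x. */
static double encoder_rpm(long delta_counts, long counts_per_rev, double sample_period_s)
{
    double revs = (double)delta_counts / (double)counts_per_rev;
    return revs / sample_period_s * 60.0;
}

int main(void)
{
    long line_count = 1000;            /* hypothetical 1000-line encoder           */
    long cpr = line_count * 4;         /* 4000 counts per revolution               */
    long delta = 667;                  /* counts seen in one 10 ms sample period   */
    /* 667 counts / 4000 cpr / 0.010 s * 60 is roughly 1000 rpm */
    printf("speed = %.0f rpm\n", encoder_rpm(delta, cpr, 0.010));
    return 0;
}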
In utilizing this device, the following parameters are important: (1) Line count: This is the number of pulses per revolution. The number of lines is determined by the positional accuracy required in the application. (2) Output signal: The output from the photosensor can be either a sine or square wave signal. (3) Number of channels: Either one or two channel outputs can be provided. The two-channel version provides a signal relationship to obtain motion direction (i.e., clockwise or counterclockwise rotation). In addition, a zero index pulse can be provided to assist in determining the “home” position. A typical application using an incremental encoder is as follows: An input signal loads a counter with positioning information. This represents the position the load must be moved to. As the motor accelerates, the pulses emitted from the incremental (digital) encoder come at an increasing rate until a constant run speed is attained. During the run period, the pulses come at a constant rate that can be directly related to motor speed. The counter, in the meanwhile, is counting the encoder pulses and, at a predetermined location, the motor is commanded to slow down. This is to prevent overshooting the desired position. When the counter is within 1 or 2 pulses of the desired position, the motor is commanded to stop. The load is now in position. (3) Resolvers. Resolvers look similar to small motors—that is, one end has terminal wires, and the other end has a mounting flange and a shaft extension. Internally, a “signal” winding rotor revolves inside a fixed stator. This represents a type of transformer: When one winding is excited with a signal, through transformer action the second winding is excited. As the first winding is moved (the rotor), the output of the second winding changes (the stator). This change is directly proportional to the angle that the rotor has been moved through. As a starting point, the simplest resolver unit contains a single winding on the rotor and two windings on the stator (located 90° apart). A reference signal is applied onto the primary (the rotor), then via transformer action this is coupled to the secondary. The secondary output signal would be a sine wave proportional to the angle (the other winding would be a cosine wave), with one electrical cycle of output voltage produced for each 360° of mechanical rotation. These are fed into the controller. Inside the controller, a resolver to digital (R to D) converter analyzes the signal, producing an output representing the angle
that the rotor has moved through, and an output proportional to speed (how fast the rotor is moving). There are various types of resolvers. The type described above would be termed a single-speed resolver; that is, the output signal goes through only one sine wave as the rotor goes through 360 mechanical degrees. If the output signal went through four sine waves as the rotor goes through 360 mechanical degrees, it would be called a four-speed resolver. Another version utilizes three windings on the stator and would be called a synchronizer. The three windings are located 120° apart. The basic type of resolver discussed thus far may also be termed a "resolver transmitter": one-phase input and two-phase outputs (i.e., a single winding of the rotor is excited and the stator's two windings provide position information). Resolver manufacturers may term this a "CX" unit, or "RCS" unit. Another type of resolver is termed "resolver control transformer": two-phase inputs and one-phase output (i.e., the two stator windings are excited and the rotor single winding provides position information). Resolver manufacturers term this type "CT" or "RCT" or "RT." The third type of resolver is termed a "resolver transmitter": two-phase inputs and two-phase outputs (i.e., two rotor windings are excited, and position information is derived from the two stator windings). This may be referred to as a "differential" resolver, or "RD," or "RC" depending on the manufacturer.
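When the demodulated sine and cosine outputs of a single-speed resolver are available as sampled values, the essential step of the resolver-to-digital conversion described above can be approximated in software: the shaft angle is recovered with a two-argument arctangent. The sketch below shows only this angle-recovery step; it assumes ideal, already-demodulated signals and is illustrative rather than a substitute for a real R-to-D converter.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Shaft angle (radians, 0..2*pi) from demodulated sine and cosine outputs
 * of a single-speed resolver. */
static double resolver_angle(double sine_out, double cosine_out)
{
    double a = atan2(sine_out, cosine_out);
    return (a < 0.0) ? a + 2.0 * M_PI : a;
}

int main(void)
{
    /* Example: outputs corresponding to a 30 degree shaft position. */
    double theta = 30.0 * M_PI / 180.0;
    double angle = resolver_angle(sin(theta), cos(theta));
    printf("recovered angle = %.1f degrees\n", angle * 180.0 / M_PI);
    return 0;
}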
4.2.3 Fuzzy Logic Controllers
Fuzzy systems are showing good promise in consumer products, industrial and commercial systems, and decision support systems. Fuzzy logic is a paradigm for an alternative design methodology that can be applied in developing both linear and nonlinear systems for embedded control. Fuzzy logic can make control engineering easier for many types of tasks. It can also add control where it was previously impractical, as applications such as fuzzy-controlled washing machines have shown. However, fuzzy control need not be a dramatic departure from conventional control techniques such as proportional-integral-derivative (PID) feedback systems. As this section demonstrates, fuzzy logic can be used to simplify the scheduling of two different controllers. Figure 4.37 shows several fuzzy logic control patterns; in most control systems the fuzzy logic controller (FLC) works with the PID in closed-loop feedback control.
Figure 4.37 Fuzzy logic control patterns: (a) classic Fuzzy logic control; (b) FLC and conventional PID as a backup; (c) FLC tuning conventional PID; and (d) FLC in gain scheduling PID.
4.2.3.1 Fuzzy Control Principle
The term "fuzzy" refers to the ability to deal with imprecise or vague inputs. Instead of using complex mathematical equations, fuzzy logic uses linguistic descriptions to define the relationship between the input information and the output action. In engineering systems, fuzzy logic provides a convenient and user-friendly front-end to develop control programs, helping designers to concentrate on the functional objectives, not on the mathematics. Fuzzy control strategies come from experience and experiments rather than from mathematical models and, therefore, linguistic implementations are accomplished much faster. Fuzzy control strategies involve a large number of inputs, most of which are relevant only for some special conditions. Such inputs are activated only when the related condition prevails. In this way, little additional computational overhead is required for adding extra rules. As a result, the rule base structure remains understandable, leading to efficient coding and system documentation. (1) Logical inference. Reasoning makes a connection between cause and effect, or a condition and a consequence. Reasoning can be expressed by a logical inference or by the evaluation of inputs to draw a conclusion. We usually follow rules of inference that have the form: IF cause1 = A and cause2 = B THEN effect = C, where A, B, and C are linguistic variables. For example: IF "room temperature" is Medium THEN "set fan speed to Fast". Here, Medium is a function defining degrees of room temperature, while Fast is a
function defining degrees of speed. The intelligence lies in associating those two terms by means of an inference expressed in heuristic IF...THEN terms. To convert a linguistic term into a computational framework, one needs to use the fundamentals of set theory. For the statement IF "room temperature" is Medium, we have to ask and check the following question: "Is the room temperature Medium?" A traditional logic, also called Boolean logic, would have two answers: YES and NO. The idea of membership of an element x in a set A is therefore captured by a function µA(x) whose value indicates whether that element belongs to the set A. In Boolean logic, µA(x) = 1 means the element belongs to set A, and µA(x) = 0 means the element does not belong to set A. (2) Fuzzy sets. A fuzzy set is represented by a membership function defined on the universe of discourse. The universe of discourse is the space where the fuzzy variables are defined. The membership function gives the grade, or degree, of membership within the set, of any element of the universe of discourse. The membership function maps the elements of the universe onto numerical values in the interval [0, 1]. A membership function value of zero implies that the corresponding element is definitely not an element of the fuzzy set, while a value of unity means that the element fully belongs to the set. A grade of membership in between corresponds to partial (fuzzy) membership in the set. In crisp set theory, if someone is taller than 1.8 m, we can state that such a person belongs to the "set of tall people." However, such a sharp change from a "short person" at 1.7999 m to a "tall person" at 1.8001 m is against common sense. Another example could be given as follows: Suppose a highway has a speed limit of 65 miles/h. Those who drive faster than 65 miles/h belong to the set A whose elements are violators, and their membership function has the value of 1. On the other hand, those who drive slower do not belong to set A. Would the sharp transition between membership and nonmembership be realistic? Should there be a traffic summons issued to drivers who are caught at 65.5 miles/h? Or at 65.9 miles/h? Therefore, in practical situations there is always a natural fuzzification when someone analyzes statements, and a smooth membership curve usually better describes the grade to which an element belongs to a set. (a) Fuzzification. Fuzzification is the process of decomposing a system input and/or output into one or more fuzzy sets. Many types of curves and tables can be used, but triangular or trapezoidal membership functions are the most common because they are easier to represent in embedded controllers.
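Triangular and trapezoidal membership functions of the kind just described are easy to evaluate in an embedded controller. The sketch below is a minimal C implementation; the corner values passed in main() are hypothetical, so the printed degrees are illustrative rather than taken from the text.

#include <stdio.h>

/* Degree of membership (0..1) in a trapezoidal fuzzy set with corners a <= b <= c <= d.
 * A triangular set is the special case b == c; a set with a "shoulder" is obtained by
 * letting a == b (left shoulder) or c == d (right shoulder). */
static double trapezoid(double x, double a, double b, double c, double d)
{
    if (x < a || x > d) return 0.0;
    if (x >= b && x <= c) return 1.0;
    if (x < b)  return (x - a) / (b - a);   /* rising edge  */
    return (d - x) / (d - c);               /* falling edge */
}

int main(void)
{
    /* Hypothetical COOL and GOOD sets roughly in the spirit of Fig. 4.38. */
    double t = 18.0;                                     /* input temperature, deg C  */
    double cool = trapezoid(t, 11.0, 14.0, 17.0, 20.0);  /* trapezoidal COOL          */
    double good = trapezoid(t, 17.0, 20.0, 20.0, 23.0);  /* triangular GOOD (b == c)  */
    printf("COOL = %.2f, GOOD = %.2f\n", cool, good);    /* prints COOL = 0.67, GOOD = 0.33 */
    return 0;
}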
Figure 4.38 Fuzzy sets defining temperature (membership functions for COLD, COOL, GOOD, WARM, and HOT over the range 8°C to 32°C).

Figure 4.38 shows a system of fuzzy sets for an input with trapezoidal and triangular membership functions. Each fuzzy set spans a region of input (or output) value graphed with the membership. Any particular input is interpreted from this fuzzy set, and a degree of membership is interpreted. The membership functions should overlap to allow smooth mapping of the system. The process of fuzzification allows the system inputs and outputs to be expressed in linguistic terms so that rules can be applied in a simple manner to express a complex system. Consider a simplified implementation for an air-conditioning system with a temperature sensor. The temperature might be acquired by a microprocessor that has a fuzzy algorithm to process an output to continuously control the speed of a motor which keeps the room at a "good" temperature; it can also direct a vent upward or downward as necessary. The figure illustrates the process of fuzzification of the air temperature. There are five fuzzy sets for temperature: COLD, COOL, GOOD, WARM, and HOT. The membership functions for fuzzy sets COOL and WARM are trapezoidal, the membership function for GOOD is triangular, and the membership functions for COLD and HOT are half triangular with shoulders indicating the physical limits for such a process (staying in a place with a room temperature lower than 8°C or above 32°C would be quite uncomfortable). The way to design such fuzzy sets is a matter of degree and depends solely on the designer's experience and intuition. The figure shows some nonoverlapping fuzzy sets, which can indicate any nonlinearity in the modeling process. Here, an input temperature of 18°C would be considered COOL with a degree of 0.75 and would be considered GOOD with a degree of 0.25. To build the rules that will control the air conditioning motor, we could watch how a human expert would adjust the settings to speed up and slow down the motor in accordance with the temperature,
obtaining the rules empirically. If the room temperature is good, keep the motor speed medium; if it is warm, turn the speed up to fast; and if the room is hot, blast the speed. On the other hand, if the temperature is cool, slow down the speed, and stop the motor if it is cold. This is the beauty of fuzzy logic: turning common-sense linguistic descriptions into a computer-controlled system. Therefore, it is necessary to understand how to use some logical operations to build the rules. Boolean logic operations must be extended in fuzzy logic to manage the notion of partial truth, that is, truth-values between "completely true" and "completely false." The fuzzy statement "X is LOW" might be combined with the fuzzy statement "Y is HIGH", giving the typical logical operation "X is LOW and Y is HIGH". What is the truth-value of this AND operation? The logic operations with fuzzy sets are performed with the membership functions. Although there are various other interpretations for fuzzy logic operations, the following definitions are very convenient in embedded control applications:
truth(X and Y) = Min(truth(X), truth(Y))
truth(X or Y) = Max(truth(X), truth(Y))
truth(not X) = 1.0 − truth(X)
(b) Defuzzification. After fuzzy reasoning, we have a linguistic output variable that needs to be translated into a crisp value. The objective is to derive a single crisp numeric value that best represents the inferred fuzzy values of the linguistic output variable. Defuzzification is such an inverse transformation that maps the output from the fuzzy domain back into the crisp domain. Some defuzzification methods tend to produce an integral output considering all the elements of the resulting fuzzy set with the corresponding weights. Other methods take into account just the elements corresponding to the maximum points of the resulting membership functions. The following defuzzification methods are of practical importance: (i) Center-of-Area (C-o-A). The C-o-A method is often referred to as the Center-of-Gravity method because it computes the centroid of the composite area representing the output fuzzy term. (ii) Center-of-Maximum (C-o-M). In the C-o-M method only the peaks of the membership functions are used. The defuzzified crisp compromise value is determined
by finding the place where the weights are balanced. Thus, the areas of the membership functions play no role and only the maxima (singleton memberships) are used. The crisp output is computed as a weighted mean of the term membership maxima, weighted by the inference results. (iii) Mean-of-Maximum (M-o-M). The M-o-M is used only in some cases where the C-o-M approach does not work. This occurs whenever the maxima of the membership functions are not unique and the question is which one of the equal choices one should take. (3) Fuzzy controllers. A fuzzy logic system has four blocks as shown in Fig. 4.39. Crisp input information from the device is converted into fuzzy values for each input fuzzy set with the fuzzification block. The universe of discourse of the input variables determines the required scaling for correct per-unit operation. The scaling is very important because the fuzzy system can be retrofitted with other devices or ranges of operation by just changing the scaling of the input and output. The decision-making logic determines how the fuzzy logic operations are performed (Sup-Min inference), and together with the knowledge base determines the outputs of each fuzzy IF–THEN rule. Those are combined and converted to crisp values with the defuzzification block. The output crisp value can be calculated by the center of gravity or the weighted average. To process the input to get the output reasoning, there are six steps involved in the creation of a rule-based fuzzy system: (a) Identify the inputs and their ranges and name them. (b) Identify the outputs and their ranges and name them. (c) Create the degree of fuzzy membership function for each input and output. (d) Construct the rule base that the system will operate under. (e) Decide how the action will be executed by assigning strengths to the rules. (f) Combine the rules and defuzzify the output. A minimal code sketch of these steps is given after Fig. 4.39.
Figure 4.39 Fuzzy controller block diagram: input → fuzzification → logic inference (with knowledge base) → defuzzification → output.
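The six steps above can be collapsed into a few lines of code for the air-conditioning example: fuzzify the temperature, evaluate the rules, and defuzzify with a weighted average of the rule outputs (the Center-of-Maximum idea). This is a minimal sketch under assumed membership corners and assumed fan-speed singleton values, not a production controller.

#include <stdio.h>

/* Membership in a trapezoidal set with corners a <= b <= c <= d (b == c gives a triangle). */
static double trapezoid(double x, double a, double b, double c, double d)
{
    if (x < a || x > d) return 0.0;
    if (x >= b && x <= c) return 1.0;
    if (x < b)  return (x - a) / (b - a);
    return (d - x) / (d - c);
}

int main(void)
{
    double t = 21.5;   /* measured room temperature, deg C (example input) */

    /* Steps (a)-(c): fuzzify the input (hypothetical corners, roughly as in Fig. 4.38). */
    double cold = trapezoid(t,  8.0,  8.0, 11.0, 14.0);
    double cool = trapezoid(t, 11.0, 14.0, 17.0, 20.0);
    double good = trapezoid(t, 17.0, 20.0, 20.0, 23.0);
    double warm = trapezoid(t, 20.0, 23.0, 26.0, 29.0);
    double hot  = trapezoid(t, 26.0, 29.0, 32.0, 32.0);

    /* Steps (d)-(e): rule base - each temperature term drives one fan-speed singleton
     * (percent of full speed, hypothetical values): stop, slow, medium, fast, blast. */
    double strength[5]        = { cold, cool, good, warm, hot };
    double speed_singleton[5] = { 0.0, 25.0, 50.0, 75.0, 100.0 };

    /* Step (f): defuzzify with a weighted average of the singletons. */
    double num = 0.0, den = 0.0;
    for (int i = 0; i < 5; i++) {
        num += strength[i] * speed_singleton[i];
        den += strength[i];
    }
    double fan_speed = (den > 0.0) ? num / den : 0.0;
    printf("t = %.1f C -> fan speed = %.0f %%\n", t, fan_speed);
    return 0;
}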
4.2.3.2 Fuzzy Logic Process Controllers
Fuzzy logic, microprocessor-based temperature and process controllers are state-of-the-art in design and function, yet low in cost for a variety of applications. These controllers are ideal for controlling all process parameters, including temperature, flow, pressure, humidity, pH, and conductivity. The fuzzy logic algorithm makes these controllers smart enough to learn processes and make rapid, accurate adjustments. A variety of choices for number of buttons, DIN size, configuration, and options make them versatile enough to match various individual needs. All of these controllers employ patented fuzzy logic algorithms with PID autotune. These controllers "learn" the industrial process, using the PID parameters as a starting point for all decisions made by the controller. This intelligence allows an industrial process to reach its set point in the shortest time possible while virtually eliminating overshoot. The result is that the process maintains a steady set point.
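One common way to combine fuzzy logic with a conventional PID, in the spirit of patterns (c) and (d) of Fig. 4.37, is to let a small rule base scale the PID gains according to how large the error currently is. The sketch below is a deliberately simplified illustration with hypothetical membership breakpoints and scaling factors; it is not the patented autotune algorithm referred to above.

#include <math.h>
#include <stdio.h>

/* Membership of |error| in SMALL and LARGE (simple shoulder shapes, hypothetical). */
static double err_small(double e) { e = fabs(e); return (e >= 1.0) ? 0.0 : 1.0 - e; }
static double err_large(double e) { e = fabs(e); return (e >= 1.0) ? 1.0 : e; }

/* Fuzzy gain scheduling: when the error is large, boost the proportional gain and
 * cut the integral gain; when it is small, return to the nominal tuning. */
static void schedule_gains(double error, double kp0, double ki0, double *kp, double *ki)
{
    double s = err_small(error), l = err_large(error);
    /* Weighted average of two "rules": SMALL -> (1.0, 1.0), LARGE -> (1.5, 0.5). */
    double kp_scale = (s * 1.0 + l * 1.5) / (s + l);
    double ki_scale = (s * 1.0 + l * 0.5) / (s + l);
    *kp = kp0 * kp_scale;
    *ki = ki0 * ki_scale;
}

int main(void)
{
    double kp, ki;
    for (double e = 0.0; e <= 1.0; e += 0.25) {
        schedule_gains(e, 2.0, 0.8, &kp, &ki);   /* nominal Kp = 2.0, Ki = 0.8 */
        printf("error = %.2f -> Kp = %.2f, Ki = %.2f\n", e, kp, ki);
    }
    return 0;
}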
5 Application Software for Industrial Control
A control system represents a group of electronic, electric, and mechanical equipment and devices that are locally or remotely connected together to monitor and control the target environment. With respect to industrial control systems, the target environment is inevitably associated with hardware devices such as sensors, actuators, and valves. A real-time control system must provide its control responses or actions to stimuli or requests within specified times; its correctness therefore depends not just on what the system does but also on how fast it reacts. In software, each stimulus handler requires a process (or task). A real-time control system normally consists of these three types of processes: (1) Sensor control processes that collect information from sensors and may buffer information collected in response to a sensor stimulus. (2) Data processing processes that carry out processing of the collected information and compute the system response. (3) Actuator control processes that generate control signals for actuators. Real-time control systems are usually designed in software as cooperating processes, with an executive concurrently controlling these processes. Because of the need to respond to timing demands made by different stimuli, an industrial control system must use an architecture allowing for fast switching between stimulus handlers. This architecture should include the following three system components: (1) Real-time operating systems are specialized operating systems that manage the processes in real-time systems. They are mainly responsible for process and resource management. Real-time operating systems may be based on a standard kernel that is used unchanged or modified for a particular application. The following system modules are found in real-time operating systems: (1) Real-time timer that provides information for process scheduling. (2) Interrupt handler that manages aperiodic requests for service. (3) Scheduler that chooses the next process to be run. (4) Resource manager that allocates memory and processor resources. (5) Dispatcher that starts process execution. (2) Monitoring and control systems poll sensors and send control signals to actuators.
(3) Data acquisition systems collect data from sensors for subsequent processing and analysis. Data collection processes and processing processes may have different periods and deadlines, and collection could be faster than processing. The software is usually organized according to a producer-consumer model using circular (ring) buffers, which are the mechanism for smoothing out these speed differences, as sketched below.
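A minimal version of such a circular buffer is shown here in C: the data-acquisition (producer) process writes samples in at its own rate, and the slower processing (consumer) process reads them out, with the buffer absorbing short-term speed differences. The buffer size and sample type are hypothetical, and a real implementation would add whatever locking or lock-free index handling the chosen real-time operating system requires.

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 16                 /* must be a power of two for the index mask below */

struct ring {
    double buf[RING_SIZE];
    unsigned head;                   /* next free slot (written by the producer)   */
    unsigned tail;                   /* next unread slot (read by the consumer)    */
};

static bool ring_put(struct ring *r, double sample)
{
    if (r->head - r->tail == RING_SIZE) return false;        /* buffer full  */
    r->buf[r->head++ & (RING_SIZE - 1)] = sample;
    return true;
}

static bool ring_get(struct ring *r, double *sample)
{
    if (r->head == r->tail) return false;                     /* buffer empty */
    *sample = r->buf[r->tail++ & (RING_SIZE - 1)];
    return true;
}

int main(void)
{
    struct ring r = { .head = 0, .tail = 0 };
    for (int i = 0; i < 5; i++)                  /* producer: sensor samples arrive  */
        ring_put(&r, 20.0 + 0.1 * i);
    double s;
    while (ring_get(&r, &s))                     /* consumer: process when ready     */
        printf("processed sample %.1f\n", s);
    return 0;
}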
5.1 Boot Code for Microprocessor Unit Chipset

5.1.1 Introduction
To boot up a microprocessor unit is to load an operating system into the microprocessor unit's main memory or random access memory (RAM). Once the operating system is loaded, it is ready for users to run applications. Sometimes, an instruction is given to "reboot" the operating system. This simply means to reload the operating system. On larger microprocessor units (including mainframes), the equivalent term for "boot" is "initial program load" and for "reboot" is "reinitial program load." The booting of an operating system works by loading a very small program into the microprocessor unit and then giving that program control so that it in turn loads the entire operating system. Booting or loading an operating system is different from installing it, which is generally an initial one-time activity. Installation stores the operating system code on the hard disk, ready to be booted (loaded) into random access memory. This memory is closer to the microprocessor and faster to work with than the hard disk. Typically, when an operating system is installed, it is set up so that when the microprocessor unit is turned on, the system is automatically booted as well. If storage (memory) runs out or the operating system or an application program encounters an error, it may display an error message or the screen may "freeze." In these events, you may have to reboot the operating system.
5.1.2 Code Structures
The boot code of a microprocessor unit is a complex program set consisting mainly of the BIOS and kernel, the master boot record (MBR), and the boot program.
5.1.2.1 BIOS and Kernel
(1) BIOS. BIOS, in computing, stands for Basic Input and Output System. BIOS refers to the software code run by a controller or a computer when first powered on. The primary function of BIOS is to prepare the machine so other software programs stored on various distributed modules can load, execute, and assume control of the controller or computer. The other main responsibilities of the BIOS include booting the system, and providing the BIOS setup program that allows changing BIOS parameters. When a controller or a personal computer is first turned on, the processor is “raring to go,” but it needs some instructions to execute. However, since the machine has just been turned on, its system memory is empty; there are no programs to run. To make sure that the BIOS program is always available to the processor, even when it is first turned on, it is “hard-wired” into a read-only memory (ROM) chip that is placed on the system’s motherboard. A uniform standard was created between the makers of processors and the makers of BIOS programs, so that the processor would always look in the same place in memory to find the start of the BIOS program. The processor gets its first instructions from this location, and the BIOS program begins executing. The BIOS program then begins the system boot sequence that calls other programs, gets operating system loaded, and the controller or computer up and running. A control system can contain several BIOS firmware chips. The motherboard BIOS typically contains code to access fundamental hardware components such as the keyboard, floppy drives, ATA (IDE) hard disk controllers, USB human interfaces, and storage devices. In addition, plug-in adapter cards such as SCSI, RAID, Network interface cards, and video boards often include their own BIOS, complementing or replacing the system BIOS code for the given component. In some cases, where devices may also be used by add-in adapters, and actually directly integrated on the motherboard, the add-in ROM may also be stored as separate code on the main BIOS flash chip. It may then be possible to upgrade this “add-in” BIOS (sometimes called an “option ROM”) separately from the main BIOS code. (2) Kernel. The kernel is a program that constitutes the central core of an operating system. It has complete control over everything that occurs in the system. The kernel is the first part of the operating system to load into memory during booting, and it remains there for the entire duration of the session because its services are required continuously. Thus, it is important for it to be as small as possible while still providing all the essential services
needed by the other parts of the operating system and by the various application programs. Because of its critical nature, the kernel code is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system, or by application programs. The kernel performs its tasks, such as executing processes and handling interrupts, in kernel space, whereas everything a user normally does, such as writing text in a text editor or running programs in a GUI (graphical user interface), is done in user space. This separation is made to prevent user data and kernel data from interfering with each other and thereby diminishing performance or causing the system to become unstable (and possibly crashing). The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management, and I/O (input/output) management, that is, access to the peripheral devices. These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls. The contents of a kernel vary considerably according to the operating system, but they typically include the following: (a) A scheduler, which determines how the various processes share the kernel's processing time (including in what order); (b) A supervisor, which grants use of the controller or computer to each process when it is scheduled; (c) An interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel's services; (d) A memory manager, which allocates the system's address spaces (i.e., locations in memory) among all users of the kernel's services. (e) Many (but not all) kernels also provide a "Device I/O Supervisor" category of services. These services, if available, provide a uniform framework for organizing and accessing the many hardware device drivers that are typical of an embedded system. The kernel should not be confused with the BIOS. The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting process for such tasks as initializing the hardware and loading the kernel into memory. Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or
upgrading the operating system or, in the case of Linux, by adding a newer kernel or modifying an existing kernel. Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels, and exokernels. Each has its own advocates and detractors. When a controller or computer crashes, it actually means the kernel has crashed. If only a single program has crashed but the rest of the system remains in operation, then the kernel itself has not crashed. A crash is the situation in which a program, either a user application or a part of the operating system, stops performing its expected function(s) and responding to other parts of the system. The program might appear to the user to freeze. If such a program is critical to the operation of the kernel, the entire controller or computer could stall or shut down.
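On a POSIX-style operating system the system calls mentioned above are the doorway from user space into the kernel; even a high-level library routine such as printf ultimately ends in one. The short example below, a minimal sketch, uses the standard POSIX write() wrapper to ask the kernel's I/O management service to move a few bytes to a device.

#include <string.h>
#include <unistd.h>     /* write(): thin wrapper around the kernel's write system call */

int main(void)
{
    const char *msg = "hello from user space\n";
    /* File descriptor 1 is standard output; the kernel performs the actual I/O. */
    ssize_t n = write(1, msg, strlen(msg));
    return (n < 0) ? 1 : 0;
}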
5.1.2.2 Master Boot Record (MBR)
When you turn on a controller or computer, the processor attempts to begin processing data. But, since the system memory is empty, the processor does not really have anything to execute, or even begin to know where to look for it. To ensure that the controller or computer will always boot regardless of the BIOS code, both chip and BIOS manufacturers developed their code so that the processor once turned on, always starts executing at the same address, FFFF0h. Similarly, every hard disk must have a consistent “starting point” where key information is stored about the disk, such as the number of partitions and what type they are. There also must be someplace where the BIOS can load the initial boot program that starts the process of loading the operating system. The place where this information is stored is called the master boot record (MBR), also referred to as the master boot sector or even just the boot sector. The MBR is always located at cylinder 0, head 0, and sector 1, the first sector on the disk. This is the consistent starting point that the disk will always use. When a computer starts and the BIOS boots the machine, it will always look at this first sector for instructions and information on how to proceed with the boot process and load the operating system. As illustrated in Fig. 5.1, the master boot record contains the following structures: (1) Master partition table. This small bit of code that is referred to as a table contains a complete description of the partitions that are contained on the hard disk. When the developers designed
the size of this master partition table, they left just enough room for the description of four partitions, hence the four-partition (four physical partitions) limit. For this reason, and no other, a hard disk may only have four true partitions, also called primary or physical partitions. Any additional partitions must be logical partitions that are linked to (or are part of) one of the primary partitions. One of these partitions is marked as active, indicating that it is the one that the computer should use to continue the boot process.

Figure 5.1 Layout of the master boot record: a 446-byte master boot code area, four 16-byte partition table entries, and a 2-byte MBR signature (0xAA55).

(2) Master boot code. The master boot code is the small bit of computer code that the BIOS loads and executes to start the boot process. This code, when fully executed, transfers control to the boot program stored on the boot (active) partition to load the operating system. The MBR's role in the boot process is as follows: The bootstrapping firmware contained within the ROM BIOS loads and executes the master boot record. The MBR of a drive usually includes the drive's partition table, which the controller or computer uses to load and run the boot record of the partition that is marked with the active flag. This design allows the BIOS to load any operating system without knowing exactly where to start inside its partition. Because the MBR is read almost immediately when the computer is started, many computer viruses made in the era before virus scanner software became widespread operated by changing the code within the MBR. Technically, only partitioned media contain a master boot record, while unpartitioned media only have a boot sector as the first sector. In both cases, the BIOS transfers control to the first sector of the disk after reading it into memory. A legacy MBR contains partition selection code that loads and runs the boot (first) sector of the selected primary partition. That partition boot sector would contain another boot loader. Newer MBRs, however, can directly load the next stage from an arbitrary location on the hard drive.
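The 512-byte layout in Fig. 5.1 can be expressed directly as a data structure, which is roughly how boot loaders and partitioning tools read it. The sketch below assumes a little-endian machine and packed structures (a compiler-specific assumption); it only checks the 0xAA55 signature of an in-memory sector image and prints the four primary partition entries.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#pragma pack(push, 1)
struct partition_entry {
    uint8_t  boot_flag;        /* 0x80 = active (bootable) partition       */
    uint8_t  chs_first[3];     /* CHS address of first sector (legacy)     */
    uint8_t  type;             /* partition type code                      */
    uint8_t  chs_last[3];      /* CHS address of last sector (legacy)      */
    uint32_t lba_first;        /* LBA of first sector                      */
    uint32_t sector_count;     /* number of sectors in the partition       */
};

struct mbr {
    uint8_t  boot_code[446];              /* master boot code area         */
    struct partition_entry part[4];       /* four 16-byte partition entries */
    uint16_t signature;                   /* 0xAA55                         */
};
#pragma pack(pop)

int main(void)
{
    uint8_t sector[512] = {0};            /* in practice: first sector read from the disk */
    sector[510] = 0x55; sector[511] = 0xAA;

    struct mbr m;
    memcpy(&m, sector, sizeof m);
    if (m.signature != 0xAA55) {
        printf("not a valid MBR\n");
        return 1;
    }
    for (int i = 0; i < 4; i++)
        printf("partition %d: type 0x%02X, active %s, start LBA %u, %u sectors\n",
               i + 1, m.part[i].type,
               (m.part[i].boot_flag == 0x80) ? "yes" : "no",
               (unsigned)m.part[i].lba_first, (unsigned)m.part[i].sector_count);
    return 0;
}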
This can pose some problems with dual-booting, as the boot loader whose location is coded into the MBR must be configured to load each operating system. If one operating system must be reinstalled, it may overwrite the MBR such that it will load a different boot loader.
5.1.2.3 Boot Program
Figure 5.2 gives code examples that can be used to boot a CPU and run a program (an operating system or application code) starting at the bottom of the first bank of Flash. This section of the code is copied from RAM into the top of EEPROM. The purpose of these code examples is to allow easy booting of the CPU without having to preload the Flash banks with boot code. The Flash bank select addressing is indeterminate (messed up) following reset because port G defaults to input. The EEPROM is mapped to the top of the address space. It therefore contains the reset and interrupt vectors. The top 288 bytes of EEPROM are normally protected to minimize the possibility of accidentally making the module unbootable. This code occupies the top 256 bytes of EEPROM, most of which is unused. The interrupt vectors are all forced to the boot program in EEPROM. These would normally point to interrupt service routines somewhere. The code examples in Fig. 5.2 are written for the Motorola AS11 freeware assembler. Code samples are provided as examples only; there is no guarantee that the examples will work in a particular environment when applied.
5.1.3 Boot Sequence
The following are the main steps that a typical boot sequence involves. Of course, this will vary by the manufacturer of your hardware, BIOS, and so on, and especially by what peripherals you have in the control system.
5.1.3.1 Power On
The internal power supply turns on and initializes. The power supply takes some time until it can generate reliable power for the rest of the computer, and having it turn on prematurely could potentially lead to damage. Therefore, the chipset will generate a reset signal to the processor until it receives the Power Good signal from the power supply.
BOOT      ORG   2000H           ; first 256 bytes of external RAM

; INITIAL CPU SETUP FOR LOADER
          LDS   #01FFH          ; init stack
          LDX   #1000H          ; register base address
          LDAA  #10010001B
          STAA  OPTION,X        ; adpu, irqe, dly, cop = 65mS
          LDAA  #00000101B
          STAA  CSCTL,X         ; enable program CS for 32K
          LDAA  #00000000B
          STAA  CSGADR,X        ; RAM starts at address 0000H
          LDAA  #00000001B
          STAA  CSGSIZ,X        ; RAM block size is 32K
          LDAA  #00001111B
          STAA  DDRG,X          ; bank select bits = outputs
          CLR   PORTG,X         ; select 1ST bank
          JMP   8000H           ; point to APPLICATION in FLASH

; RESET & INTERRUPT VECTORS
; Make these point to the appropriate service routines within the
; application.
          ORG   20FFH-29H       ; Reset & Interrupt vectors for EEPROM
          DWM   0FF00H          ; SCI   ; D6 SCI (FFD6)
          DWM   0FF00H          ; SPI   ; D8 SPI
          DWM   0FF00H          ; PAIE  ; DA PULSE ACCUMULATOR I/P EDGE
          DWM   0FF00H          ; PAO   ; DC PULSE ACCUMULATOR OVERFLOW
          DWM   0FF00H          ; TOF   ; DE TIMER OVERFLOW
          DWM   0FF00H          ; OC5   ; E0 TIMER O/P COMPARE 5
          DWM   0FF00H          ; OC4   ; E2 TIMER O/P COMPARE 4
          DWM   0FF00H          ; OC3   ; E4 TIMER O/P COMPARE 3
          DWM   0FF00H          ; OC2   ; E6 TIMER O/P COMPARE 2
          DWM   0FF00H          ; OC1   ; E8 TIMER O/P COMPARE 1
          DWM   0FF00H          ; IC3   ; EA TIMER I/P COMPARE 3
          DWM   0FF00H          ; IC2   ; EC TIMER I/P COMPARE 2
          DWM   0FF00H          ; IC1   ; EE TIMER I/P COMPARE 1
          DWM   0FF00H          ; RTI   ; F0 REAL TIME INTERRUPT
          DWM   0FF00H          ; IRQ   ; F2 EXTERNAL IRQ
          DWM   0FF00H          ; XIRQ  ; F4 EXTERNAL XIRQ
          DWM   0FF00H          ; SCI   ; F6 SOFTWARE INTERRUPT (SWI)
          DWM   0FF00H          ; ILLOP ; F8 ILLEGAL OPCODE
          DWM   0FF00H          ; COP   ; FA COP OPERATED
          DWM   0FF00H          ; CLM   ; FC CLOCK MONITOR OPERATED
          DWM   0FF00H          ; START ; FE RESET
BOOTEND                         ; end of boot code loaded into EEPROM
Figure 5.2 An example of a CPU boot program.
5.1.3.2 Load BIOS, MBR, and Boot Program
When the reset signal is released, the processor is ready to start executing, but when it first starts up there is nothing at all in memory to execute. Processor makers know this will happen, so they preprogram the processor to always look at the same place in the system BIOS ROM for the start of the BIOS boot program. This is normally location FFFF0h, right at the end of the system memory. It is put there so that the size of the ROM can be changed without creating compatibility problems. Since there are only 16 bytes left from there to the end of conventional memory, this location just contains a "jump" instruction telling the processor where to go to find the real BIOS start-up program.
5.1.3.3 Initiate Hardware Components
The first thing the BIOS does when it boots the system is to perform the Power-On Self-Test, or POST for short. The POST is a built-in diagnostic program that checks the hardware to ensure that everything is present and functioning properly before the BIOS begins the actual boot; additional tests (such as the memory test) continue as the boot process proceeds. The POST starts with an internal check of the CPU and of the boot code, comparing code at various locations against a fixed template. It then checks the bus, ports, system clock, display adapter memory, RAM, DMA, keyboard, floppy drives, hard drives, and so forth; the CPU sends signals over the system bus to make sure that these devices are functioning. In addition to the POST, the BIOS initialization routines initialize memory refresh and load BIOS routines into memory, supplementing the system BIOS with routines and data from other BIOS chips on installed controllers. The routine then compares the information it has gathered with the information stored in the setup program. If there are any discrepancies, it halts the boot process and informs the operator; if everything is in order, a summary is usually displayed on screen. The POST runs very quickly, and you will normally not even notice that it is happening unless it finds a problem. If any errors are detected during the POST, the system may deliver an error code. Error codes, both visual and audible, differ from manufacturer to manufacturer; to interpret the visual (printed) or audible (beep) codes, you will need a table of these codes from the manufacturer of the system's motherboard.
If any problems are encountered during the POST routine, you know that the fault is hardware related. In some cases, you can find and repair problems by searching for poor connections, damaged cables, seized fans, and power problems. Check that adapter cards and RAM are seated properly in their respective slots. In extreme cases, you may have to strip the system down to its motherboard, RAM, video card, power supply, and CPU; by adding the remaining devices back one at a time and restarting after each one, you can sometimes discover the cause of the problem. The BIOS then runs further tests on the system, including the memory count-up test, and will generally display a text error message on the screen if it encounters an error at this point; these messages and their explanations are documented by the motherboard or BIOS manufacturer. The BIOS also performs a "system inventory" of sorts, doing more tests to determine what hardware is in the system, and displays a summary screen of the system's configuration. Checking this summary can be helpful in diagnosing setup problems.
5.1.3.4 Initiate Interrupt Vectors
During the POST, system initialization creates interrupt vectors that point to the proper interrupt handling routines and sets up registers with parameters. In Fig. 5.2, the interrupt vector table is included in the boot program, so initializing the interrupt vectors means setting up pointers in memory that reach those interrupt handling routines. In addition, interrupt vectors and system timers are reinitialized. In other words, the BIOS code brings the computer system to a state in which it is ready for loading the operating system; the loading of the operating system is then initiated by issuing an interrupt.
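To make the idea concrete, the short C sketch below populates a vector table with handler addresses, which is the high-level equivalent of the DWM entries in Fig. 5.2. It is only an illustrative sketch: the handler names, the ".vectors" section name, and the GCC-style section attribute are assumptions, not part of any particular BIOS or microcontroller.

#include <stdint.h>

typedef void (*vector_t)(void);

/* Hypothetical interrupt service routines; in a real system the names and
 * the vector layout come from the MCU reference manual. */
static void reset_handler(void)          { for (;;) { } }
static void timer_overflow_handler(void) { }
static void sci_handler(void)            { }
static void default_handler(void)        { for (;;) { } }  /* trap unexpected interrupts */

/* A small vector table placed in a dedicated linker section (the section
 * name ".vectors" is an assumption).  Pointing each slot at its service
 * routine is what "initiating the interrupt vectors" amounts to. */
__attribute__((section(".vectors")))
const vector_t vector_table[] = {
    reset_handler,            /* RESET                 */
    timer_overflow_handler,   /* timer overflow        */
    sci_handler,              /* serial interface      */
    default_handler,          /* remaining vectors ... */
};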
5.1.3.5 Transfer to Operating System

Whenever a PC is turned on, the BIOS takes control and performs many operations: it checks the hardware, ports, and so on, and finally loads the MBR program into memory (RAM). The MBR then takes control of the booting process. When only one operating system is installed, the MBR works as follows: (1) the boot process starts by executing the MBR code in the first sector of the disk; (2) the MBR examines the partition table to find the active partition; (3) control is passed to that partition's boot record (PBR) to continue booting; (4) the PBR locates the system-specific boot files; (5) these boot files then continue the process of loading and initializing the rest of the operating system.
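As an illustration of step (2), the following C sketch scans a 512-byte MBR image for the active partition using the classic PC layout (partition table at offset 446, boot flag 0x80, signature 0xAA55). The function name and the assumption that sector 0 has already been read into memory are for illustration only.

#include <stdint.h>

#define MBR_SIZE            512
#define PART_TABLE_OFFSET   446      /* classic MBR: 4 entries of 16 bytes */
#define PART_ENTRY_SIZE     16
#define PART_ENTRY_COUNT    4
#define BOOT_FLAG_ACTIVE    0x80
#define MBR_SIGNATURE       0xAA55

/* Returns the index (0..3) of the active partition, or -1 if none is found.
 * 'mbr' is the 512-byte sector 0 already read from the boot disk. */
int find_active_partition(const uint8_t mbr[MBR_SIZE])
{
    /* Verify the 0x55 0xAA signature in the last two bytes. */
    uint16_t sig = (uint16_t)(mbr[510] | (mbr[511] << 8));
    if (sig != MBR_SIGNATURE)
        return -1;

    for (int i = 0; i < PART_ENTRY_COUNT; i++) {
        const uint8_t *entry = mbr + PART_TABLE_OFFSET + i * PART_ENTRY_SIZE;
        if (entry[0] == BOOT_FLAG_ACTIVE)   /* byte 0 of each entry is the boot flag */
            return i;                       /* control would pass to this partition's PBR */
    }
    return -1;
}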
5.2 Real-Time Operating System

5.2.1 Introduction
A real-time operating system (RTOS) facilitates the creation of real-time systems, but does not guarantee that they are real-time; that requires correct development of the system-level software. Nor does an RTOS necessarily have high throughput; rather, through specialized scheduling algorithms and deterministic behavior, it allows the guarantee that system deadlines can be met. That is, an RTOS is valued more for how quickly it can respond to an event than for the total amount of work it can do. Key factors in evaluating an RTOS are given below. (1) Kernel. As mentioned in Section 5.1.2, the kernel is the program that constitutes the central core of an operating system; for details of the kernel in an RTOS, refer to Section 5.1.2. (2) Multitasking. An integrated RTOS implements cooperative and/or preemptive (time-sliced) multitasking with a task switch time of only a few microseconds, using resource variables and mailboxes for intertask communication and resource sharing. (3) Dynamic memory allocation. Dynamic memory allocation is the allocation of memory storage for use in a program during its runtime. It is a way of distributing ownership of limited memory resources among many pieces of data and code. A dynamically allocated object remains allocated until it is deallocated explicitly, either by the programmer or by a garbage collector; this is notably different from automatic and static memory allocation, and such an object is said to have dynamic lifetime. Fulfilling an allocation request, which involves finding a block of unused memory of a certain size in the heap, is a difficult problem, and a wide variety of solutions have been proposed, including free lists, paging, and buddy memory allocation.
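As a hedged illustration of the free-list approach mentioned above, the C sketch below implements a tiny first-fit allocator over a static arena. The names (rtos_malloc, rtos_free) and the omission of block splitting, coalescing, alignment, and locking are simplifications for illustration; a real RTOS allocator handles all of these.

#include <stddef.h>
#include <stdint.h>

#define HEAP_SIZE 4096

/* A tiny first-fit free-list allocator over a static arena. */
typedef struct block {
    size_t        size;   /* usable bytes in this block           */
    int           free;   /* nonzero if the block is available    */
    struct block *next;   /* next block in the singly linked list */
} block_t;

static uint8_t  heap[HEAP_SIZE];
static block_t *free_list = NULL;

void heap_init(void)
{
    free_list       = (block_t *)heap;
    free_list->size = HEAP_SIZE - sizeof(block_t);
    free_list->free = 1;
    free_list->next = NULL;
}

void *rtos_malloc(size_t size)
{
    for (block_t *b = free_list; b != NULL; b = b->next) {
        if (b->free && b->size >= size) {   /* first fit; no splitting in this sketch */
            b->free = 0;
            return (uint8_t *)b + sizeof(block_t);
        }
    }
    return NULL;                            /* the allocation request cannot be met */
}

void rtos_free(void *p)
{
    if (p != NULL) {
        block_t *b = (block_t *)((uint8_t *)p - sizeof(block_t));
        b->free = 1;                        /* no coalescing in this sketch */
    }
}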
5.2.2 Task Controls

5.2.2.1 Multitasking Concepts

Any task, or "process," running on a control system requires certain resources to be able to run. The simplest processes merely require some time in the CPU and a bit of system memory, while more complex processes may need extra resources (a serial port, perhaps, or a
certain file on a hard disk, or a tape drive). Different processes naturally have different requirements. The basic concept of multitasking, often called "multiprogramming," is to let the operating system split the time for which each system resource is available into small chunks and allow each of the running processes, in turn, to have access to the resources it needs. The essence of a multitasking system is therefore the cleverness of the algorithm that decides which processes get which resources at what time. As processors have become more powerful, it has become possible to implement increasingly powerful and complex resource allocation algorithms without compromising the speed of the computer itself. Most mainstream computers contain a single single-core processor, yet they can still do more than one task at the same time with little, if any, drop in speed: the processor seems to be processing two sets of code at the same time. In fact, a typical single-core, single processor cannot process more than one instruction stream at any given instant; it switches quickly from one task to the next, creating the illusion that tasks are processed simultaneously. There are two basic methods by which this illusion is created: cooperative multitasking and preemptive multitasking. (1) Cooperative multitasking. When one task is already occupying the processor, a wait line forms for the other tasks that also need the CPU. Each application is programmed so that, after a certain number of cycles, it steps down and allows other tasks their processor time. This cooperative scheme is now rather outdated and is hampered by its limitations: programmers are free to decide if and when their application surrenders CPU time. In a perfect world every program would respect the other programs running alongside it, but in practice, even when program A is consuming CPU cycles while programs B and C are waiting in line, there is no way to stop program A unless it voluntarily steps down. (2) Preemptive multitasking. The inefficiency of cooperative multitasking left the computer industry scrambling for different ideas, and a new standard called preemptive multitasking took form. In preemptive multitasking, the system has the power to halt, or "preempt," one piece of code to keep it from hogging CPU time. After forcing an interrupt, control is in the hands of the operating system, which can hand CPU time to another task as appropriate. Inconvenient interrupt timing is the greatest drawback of preemptive multitasking, but in the end it is better that all programs see some CPU time rather than having a single program work
negligibly faster. Preemptive and cooperative multitasking are, as mentioned, "illusion" multitasking. There are processors that can physically handle two streams of instructions simultaneously; the relevant technologies are dual or multiprocessor systems, dual- or multicore processors, and simultaneous multithreading. The main problem that multitasking introduces into a system is deadlock. Deadlock occurs when two or more processes are unable to continue running because each holds resources that the other needs. Imagine processes A and B and resources X and Y. Each process requires both X and Y to run, but process A is holding resource X and B is holding Y. Neither process can continue until it can obtain the missing resource, so the system is said to be deadlocked. Fortunately, there are a number of ways of avoiding this. One option is to force each process to request all of the resources it is likely to need before it actually begins processing; if all resources are not available, it has to wait and try again later. This works but is inefficient (a process may be allocated a resource but not use it for many seconds or minutes, during which time the resource is unavailable to others). Alternatively, we can define an ordering on resources and require processes to ask for resources in that order. In the example above, we could say that resources must be requested in alphabetical order: processes A and B would both have to ask for resource X before asking for Y, so we could never reach a situation where process B holds resource Y without first having been granted resource X.
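The resource-ordering rule can be sketched in a few lines of C using POSIX threads, with resources X and Y modeled as mutexes that both processes acquire in the same fixed order. The thread and resource names are illustrative only.

#include <pthread.h>
#include <stdio.h>

/* Resources X and Y from the text, modeled as mutexes.  Both threads
 * acquire them in the same fixed order (X before Y), so the circular
 * wait that causes deadlock can never form. */
static pthread_mutex_t resource_x = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t resource_y = PTHREAD_MUTEX_INITIALIZER;

static void *process(void *arg)
{
    const char *name = (const char *)arg;

    pthread_mutex_lock(&resource_x);   /* always X first ...       */
    pthread_mutex_lock(&resource_y);   /* ... then Y                */

    printf("%s holds X and Y\n", name);

    pthread_mutex_unlock(&resource_y); /* release in reverse order  */
    pthread_mutex_unlock(&resource_x);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process, "process A");
    pthread_create(&b, NULL, process, "process B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}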
5.2.2.2 Task Types

A task type is a limited type that mainly depends on the task properties given in Table 5.1. Hence, neither assignment nor the predefined comparisons for equality and inequality are defined for objects of task types; moreover, the mode out is not allowed for a formal parameter whose type is a task type. A task object is an object whose type is a task type. The value of a task object designates a task that has the entries of the corresponding task type and whose execution is specified by the corresponding task body. If a task object is the object, or a subcomponent of the object, declared by an object declaration, then the value of the task object is defined by the elaboration of the object declaration. If a task object is the object, or a subcomponent of the object, created by the evaluation of an allocator, then the value of the task object is defined by the evaluation of the allocator.
Table 5.1 Some Task Properties (Name: Description)

TaskID: Specifies a string that identifies the task. This property is always required and must be unique.

ParentTaskID: If this task is to be nested under a previously defined task in the Tasks panel, this value contains the TaskID of the parent task.

ResourceBundle: Specifies the dotted-notation name of the resource bundle that should be used to retrieve all translatable strings used for items such as menu labels.

Title: Specifies the key of the resource to retrieve from the resource bundle specified as ResourceBundle. This property is used as the label under the task's icon in the Tasks panel. If no resource bundle was specified, or if no resource exists in the bundle associated with this key value, this value itself is used as the icon's label text. If this property is not specified, the task's icon is not displayed in the Tasks panel.

LiteralTitle: Specifies whether the Title property value is a literal or a key in the ResourceBundle (the default). To make the task title a literal value, use LiteralTitle = true. This property should only be used for tasks that are created dynamically from end-user input.

TaskUnrestricted: If the task should be accessible by all users regardless of how the permissions are set, set this value to true.
For all parameter modes, if an actual parameter designates a task, the associated formal parameter designates the same task; the same holds for a subcomponent of an actual parameter and the corresponding subcomponent of the associated formal parameter; and the same holds for generic parameters.
5.2.2.3 Task Stack and Heap

The process stack, or task stack, is typically an area of prereserved main storage (system memory) that is used for return addresses, procedure arguments, temporarily saved registers, and locally allocated variables. The processor typically contains a register that points to the top of the stack. This register is called the stack pointer and is implicitly used by machine-code instructions that call a procedure, return from a procedure, store a data item on the stack, or fetch a data item from the stack.
The process heap, or task heap, is also an area of prereserved main storage (system memory) that a program process can use to store data in some variable amount that is not known until the program is running. For example, a program may accept different amounts of input from one or more users for processing and then process all the input data at once. Having a certain amount of heap storage already obtained from the operating system makes it easier for the process to manage storage and is generally faster than asking the operating system for storage every time it is needed. The process manages its allocated heap by requesting a "chunk" of the heap (called a heap block) when needed, returning blocks when they are no longer needed, and doing occasional "garbage collecting," which reclaims blocks that are no longer in use and reorganizes the available space in the heap so that it is not wasted in small unused pieces. A stack differs from a heap in that its blocks are taken out of storage in a fixed last-in, first-out order and returned in the same way. The following describes the working of the stack, heap, and frame-stack used in the execution of a process or task. (1) Process stack. The stack holds the values used and computed during evaluation of a program. When the machine calls a function, the parameters are pushed onto the top of the stack (in fact, the parameters are merged to form a single list datum). If the function requires any local variables, the machine allocates space for their values. When the function returns, the allocated local variables and the arguments stored on the stack are popped, and the return value is pushed onto the stack. Figure 5.3(a) shows the stack before a function is called, during the call (after the arguments have been pushed), and after the function has returned (the arguments and local variables popped and the answer pushed onto the stack). Since the stack is a fixed-size memory block, a deep chain of function calls may cause a "stack overflow" error. (2) Process heap. To reduce the frequency of copying the contents of strings and lists, another data structure, called the heap, is used to hold the contents of temporary strings or lists during string and list operations. The heap is a large memory block within which memory is allocated; a pointer keeps track of the top of the heap, similar to the stack pointer. The process heap is not affected by the return of a function call. There are normally "Free-heap" instructions inserted at the points where it is considered
Figure 5.3 Process stack, process heap, and process frame-stack: (a) the working procedure of the process stack; (b) a comparison of the working procedures of the process stack, process heap, and process frame-stack.
safe to free memory. However, if a computation involves long strings, long lists, or deeply nested function calls, the machine may not have the chance to free memory, causing a "heap overflow" error. The programmer may have to modify the program to reduce the load on the stack and heap and so avoid memory overflow errors; for instance, an iterative function places less demand on the stack and heap than its recursive equivalent. (3) Process frame-stack. To return from a function call, the machine has to restore the previous machine status, for example, the stack pointer, the (pseudo-machine) program counter, the number of local variables (since local variables were put on the stack), and so on. The frame-stack is a stack holding this information. When a function is called, the current machine status, a frame, is pushed onto the frame-stack. For example, if f1() calls f2(), the status of the stack, heap, and frame-stack is as shown in Fig. 5.3(b). The frame-stack keeps track of the stack and heap pointers. A "Free-heap" instruction causes the top-of-heap to return to the position recorded by the top-most frame (e.g., that of f2); the stack pointer, on the other hand, is lowered only when the function returns. A frame is popped when a function returns, and the machine status resumes its previous state. Since the frame-stack is implemented as a small stack of about 64 entries, the number of levels of nested function calls is limited to about 64. This limit is enough for general use, but if calls are nested too deeply, the machine will generate a "nested too deep" error.
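The last remark can be illustrated with a short C sketch: the recursive version consumes one stack frame per call, while the iterative version uses a single frame regardless of the input.

#include <stdint.h>

/* Recursive sum: each call pushes a new frame (return address, argument,
 * saved registers), so a large n risks the "stack overflow" described above. */
uint32_t sum_recursive(uint32_t n)
{
    if (n == 0)
        return 0;
    return n + sum_recursive(n - 1);
}

/* Iterative sum: one frame, constant stack usage regardless of n. */
uint32_t sum_iterative(uint32_t n)
{
    uint32_t total = 0;
    while (n > 0)
        total += n--;
    return total;
}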
5.2.2.4 Task States

At any moment of its life, a task is characterized by its state. Normally, the following five task states are defined for tasks in an RTOS: (1) Running. In the running state, the CPU is assigned to the task, so that its instructions can be executed. Only one task can be in this state at any point in time, while all the other states can be held simultaneously by several tasks. (2) Ready. All functional prerequisites for a transition into the running state exist, and the task is only waiting for allocation of the processor. The scheduler decides which ready task is executed next. (3) Waiting. The task cannot continue execution because it has to wait for at least one event. Only extended tasks can enter this state (because they are the only ones that can use events). (4) Suspended. In the suspended state the task is passive and can be activated. (5) Terminated. In this state, the task allocator deletes the corresponding task object and releases the resources held by the task. Figure 5.4 is a diagram of one possible design for the transitions between these five task states. Note that basic tasks (also called main or background tasks) have no waiting state: a basic task can only represent a synchronization point at the beginning and at the end of the task. Application parts with internal synchronization points have to be implemented by more than
Figure 5.4 A possible design for the task state transitions.
one basic task. An advantage of extended tasks is that they can handle a coherent job in a single task, no matter which synchronization requests are active. Whenever current information needed for further processing is missing, the extended task switches into the waiting state; it exits this state whenever the corresponding events signal the receipt or update of the desired data. Depending on the conformance class, a basic task can be activated once or multiple times; the latter means that an activation issued while the task is not in the suspended state is recorded and carried out when the task finishes its current instance. A task instance terminates only when the task terminates itself (to simplify the RTOS, no explicit task-kill primitives are provided).
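A minimal C sketch of these five states and one permissible transition relation, consistent with the descriptions above and with Fig. 5.4, is given below; the names and the exact set of allowed transitions are illustrative, not those of a specific RTOS.

#include <stdbool.h>

/* The five task states described above. */
typedef enum {
    TASK_SUSPENDED,
    TASK_READY,
    TASK_RUNNING,
    TASK_WAITING,
    TASK_TERMINATED
} task_state_t;

/* One possible transition check: activation moves a suspended task to
 * ready, the scheduler starts or preempts between ready and running, only
 * a running (extended) task may wait, and a running task terminates itself. */
bool transition_allowed(task_state_t from, task_state_t to)
{
    switch (from) {
    case TASK_SUSPENDED: return to == TASK_READY;                    /* activate  */
    case TASK_READY:     return to == TASK_RUNNING;                  /* start     */
    case TASK_RUNNING:   return to == TASK_READY                     /* preempt   */
                             || to == TASK_WAITING                   /* wait      */
                             || to == TASK_TERMINATED;               /* terminate */
    case TASK_WAITING:   return to == TASK_READY;                    /* release   */
    default:             return false;                               /* terminated is final */
    }
}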
5.2.2.5 Task Body

The execution of a task is defined by the corresponding task body. A task body contains all the executable content that carries out the functions and computations of the corresponding task. Task objects and types can be declared in any declarative part, including task bodies themselves. For any task type, the specification and body must be declared together in the same unit, with the body usually placed at the end of the declarative part. The simple name at the start of a task body must repeat the task unit identifier; similarly, if a simple name appears at the end of the task specification or body, it must repeat the task unit identifier. Within a task body, the name of the corresponding task unit can also be used to refer to the task object that designates the task currently executing the body; however, the use of this name as a type mark is not allowed within the task unit itself. The elaboration of a task body has no effect other than to establish that the body can from then on be used for the execution of tasks designated by objects of the corresponding task type. The execution of a task body is invoked by the activation of a task object of the corresponding type. The optional exception handlers at the end of a task body handle exceptions raised during the execution of the sequence of statements of the task body.
5.2.2.6 Task Creation and Termination

A task object can be created either as part of the elaboration of an object declaration occurring immediately within some declarative region, or as part of the evaluation of an allocator. All tasks created by the elaboration
of object declarations of a single declarative region (including subcomponents of the declared objects) are activated together. Similarly, all tasks created by the evaluation of a single allocator are activated together. The execution of a task object has three main phases: (1) Activation: the elaboration of the declarative part, if any, of the task body (local variables in the body of the task are created and initialized during activation). The activator is the task that created and activated the task. (2) Normal execution: the execution of the statements visible within the body of the task. (3) Finalization: the execution of any finalization code associated with objects in its declarative part. The parent is the task on which a task depends. The following rules apply: if the task has been declared by an object declaration, its parent is the task that declared the task object; if the task has been declared by an allocator, its parent is the task that has the corresponding access declaration. When a parent creates a new task, the parent's execution is suspended while it waits for the child to finish activating (either immediately, if the child is created by an allocator, or after the elaboration of the associated declarative part). Once the child has finished its activation, parent and child proceed concurrently. If a task creates another task during its own activation, it must also wait for its child to activate before it can begin execution. The master is the execution of a construct that includes finalization of local objects after it is complete (and after waiting for any local tasks), but before leaving. Each task depends on one or more masters, as follows: if the task is created by the evaluation of an allocator for a given access type, it depends on each master that includes the elaboration of the declaration of the ultimate ancestor of the given access type; if the task is created by the elaboration of an object declaration, it depends on each master that includes that elaboration. Furthermore, if a task depends on a given master, it also depends on the task that executes the master, and (recursively) on any master of that task. For the finalization of a master, dependent tasks are first awaited. Then each object whose accessibility level is the same as that of the master is finalized, if the object was successfully initialized and still exists. Note that any object whose accessibility level is deeper than that of the master would no longer exist; those objects would have been finalized by some inner master. Thus, after leaving a master, the only objects yet to be finalized are those whose accessibility level is not as deep as that of the master.
5.2.2.7 Task Queue

Task queues are a way of deferring work until later in the kernel of an RTOS. A task queue is a simple data structure (see Fig. 5.5) consisting of a singly linked list of "tq_object" data structures, each of which contains the address of a task body routine and a pointer to some data. The routine is called when the element on the task queue is processed, and it is passed a pointer to the data. Anything in the kernel, for example a device driver, can create and use task queues, but three task queues are created and managed by the kernel itself: (1) Timer. This queue is used to queue work that will be done as soon as possible after the next system clock tick. At each clock tick this queue is checked to see whether it contains any entries and, if it does, the timer queue handler is made active. The timer queue handler is processed, along with all the other handlers, when the scheduler next runs. This queue should not be confused with system timers, which are a much more sophisticated mechanism. (2) Immediate. This queue is also processed when the scheduler processes the active handlers according to their priorities. The immediate handler is not as high in priority as the timer queue handler, so these tasks are run later. (3) Scheduler. This task queue is processed directly by the scheduler. It is used to support other task queues in the system; in this case, the task to be run is a routine that processes a task queue, say, for a device driver or an interface monitor. When task queues are processed, the pointer to the first element in the queue is removed from the queue and replaced with a null pointer; this removal is an atomic operation that cannot be interrupted. Then each element in the queue has its handling routine called in turn. The elements in the queue are often statically allocated data; however, there is no inherent mechanism for discarding allocated memory. The task queue processing routine simply moves on to the next element in the list, and it is the job of the task itself to ensure that it properly cleans up any allocated kernel memory.
Figure 5.5 A task queue.
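A C rendering of the structure in Fig. 5.5 and of the processing step just described (atomically detaching the list head, then calling each element's routine with its data) might look like the following sketch; everything beyond the field names shown in the figure is an assumption.

#include <stddef.h>

/* One element of a task queue (cf. Fig. 5.5): the address of a task body
 * routine and a pointer to the data it will be handed. */
struct tq_object {
    struct tq_object *next;
    int               sync;                 /* nonzero once queued          */
    void            (*body)(void *data);    /* deferred routine to call     */
    void             *data;
};

/* Process a task queue: detach the whole list from the queue head (in a
 * real kernel this step is done atomically, e.g. with interrupts masked),
 * then call each element's routine with its data pointer. */
void run_task_queue(struct tq_object **queue)
{
    struct tq_object *list = *queue;        /* take the list ...            */
    *queue = NULL;                          /* ... and leave a null head    */

    while (list != NULL) {
        struct tq_object *tq = list;
        list = tq->next;
        tq->sync = 0;
        tq->body(tq->data);                 /* the task cleans up after itself */
    }
}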
5.2.2.8 Task Context Switch and Task Scheduler

(1) Task context switch. A context switch (also referred to as a process switch or a task switch) is the switching of the CPU from one process or task to another. A context switch is sometimes described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. Context switching is an essential feature of multitasking operating systems: the illusion of concurrency is achieved by context switches occurring in rapid succession (tens or hundreds of times per second). These context switches occur either because a process voluntarily relinquishes its time in the CPU or because the scheduler makes the switch when a process has used up its CPU time slice. A context is the contents of a CPU's registers and program counter at a point in time. A register is a small amount of very fast memory inside the CPU (as opposed to the slower RAM main memory outside it) that is used to speed the execution of programs by providing quick access to commonly used values, generally those in the midst of a calculation. The program counter is a specialized register that indicates the position of the CPU in its instruction sequence and holds either the address of the instruction being executed or the address of the next instruction to be executed, depending on the specific system. Context switching can be described in slightly more detail as the kernel performing the following activities with regard to processes (including tasks) on the CPU: (a) In a context switch, the state of the first process must be saved somehow, so that when the scheduler gets back to the execution of the first process, it can restore this state and continue normally. (b) The state of the process includes all the registers that the process may be using, especially the program counter, plus any other operating-system-specific data that may be necessary. Often, all the data needed to describe the state is stored in one data structure, called a switchframe or a process control block. (c) To switch processes, the switchframe for the first process must be created and saved. Switchframes are sometimes stored on a per-process stack in kernel memory (as opposed to the user-mode stack), or in some operating-system-defined data structure.
(d) Since the operating system has effectively suspended the execution of the first process, it can now load the switchframe and context of the second process. In doing so, the program counter from the switchframe is loaded, and execution continues in the new process. A context switch can also occur as a result of a hardware interrupt, which is a signal from a hardware device to the kernel that an event has occurred. Some processors, such as the Intel 80386 and later CPUs, have hardware support for context switches, making use of a special data segment called the Task State Segment (TSS). When a task switch occurs (via a task gate, or explicitly due to an interrupt or exception), the CPU can automatically load the new state from the TSS. As with other operations performed in hardware, one would expect this to be rather fast; however, mainstream operating systems, including Windows, do not use this feature, mainly for two reasons: hardware context switching does not save all the registers (only general-purpose registers, not floating-point registers), and it raises associated performance issues. Instead, most modern operating systems perform software context switching, which can be used on any CPU, in an attempt to obtain improved performance. Software context switching was first implemented in Linux for Intel-compatible processors with the 2.4 kernel. One major advantage claimed for software context switching is that, whereas the hardware mechanism saves almost all of the CPU state, software can be more selective and save only the portion that actually needs to be saved and reloaded; there is, however, some question as to how important this really is for the efficiency of context switching. Its advocates also claim that software context switching allows the switching code itself to be improved, further enhancing efficiency, and that it permits better control over the validity of the data being loaded. (2) Task scheduler. The process or task scheduler is the part of the operating system that responds to requests by programs and interrupts for processor attention and gives control of the processor to those processes or tasks. A scheduler can also stand alone as the centerpiece of a program that requires moderation between many different tasks; in this capacity, each task the program must accomplish is written so that it can be called by the scheduler as necessary. Most embedded programs can be described as having a specific, discrete response to a stimulus or
time interval. These tasks can be ranked in priority, allowing the scheduler to hand control of the processor to each process in turn. The scheduler itself is a loop that calls one of the other processes each time it executes. Each processor runs the scheduler and selects the next task to run from all runnable processes not already allocated to a processor; whether a process needs attention is recorded in an array of flags. The simplest way to handle the problem is to give each program a turn: when a process gets its turn, it decides when to return control. This method does not support any notion of importance among processes, so it is not as useful as it could be. The more complicated the scheduler or operating system, the more elaborate the scheduling information that can be maintained. The scheduling behavior of processes or tasks can be measured with these metrics: (1) CPU utilization, the percentage of time that the CPU is doing useful work (i.e., not idling); 100% is perfect. (2) Wait time, the average time a process spends in the run queue. (3) Throughput, the number of processes completed per unit time. (4) Response time, the average time elapsed from when a process is submitted until useful output is obtained. (5) Turnaround time, the average time elapsed from when a process is submitted to when it has completed. Typically, utilization and throughput are traded off for better response time. Response time is important for an operating system that aims to be user-friendly. In general, we would like to optimize the average measure; in some cases, minimum or maximum values are optimized instead, for example, it might be a good idea to minimize the maximum response time. The types of process or task schedulers depend on the adopted algorithm, which can be one of the following: (a) First-Come, First-Served (FCFS). The FCFS scheduler simply executes processes to completion in the order they are submitted. FCFS algorithms can use a queue data structure: given a group of processes to run, insert them all into the queue and execute them in that order. (b) Round-Robin (RR). RR is a preemptive scheduler, designed especially for time-sharing systems; in other words, it does not wait for a process to finish or give up control. In RR, each process is given a time slot to run. If the process does not finish in that slot, it "gets back in line" and receives another time slot until it has completed. RR can be implemented using a FIFO queue in which new jobs are inserted at the tail end.
(c) Shortest-Job-First (SJF). The SJF scheduler is exactly like FCFS except that, instead of choosing the job at the front of the queue, it always chooses the shortest job (i.e., the job that takes the least time) available. It uses a sorted list to order the processes from longest to shortest; when adding a new process or task, we need to determine where in the list to insert it. (d) Priority Scheduling (PS). As mentioned, a priority is associated with each process. This priority can be defined in any meaningful way; for example, we can treat the shortest job as the top priority, so SJF can be seen as a special case of priority scheduling. Processes with equal priorities may be scheduled in accordance with FCFS. The following example illustrates process scheduling. A microprocessor controlling the brake system on a car is running a simple operating system. Currently, there are three processes running, with priorities 8, 4, and 1 (higher-numbered priorities being more important). The highest-priority process is the antilock braking process, which pulses the pressure on the brakes. The second-highest-priority process monitors the pressure on the brake pedal from the driver. Finally, the brake pad maintenance process checks for degradation of the brake pads and has the lowest priority. At first, the brake pedal is not depressed, and the brake pedal monitoring process is executing. Meanwhile, the antilock brake process waits for the brakes to be pressed before meriting attention, and the maintenance process is waiting for the pedal monitoring process to yield to it. After some time, the pedal monitoring process surrenders control to the scheduler even though the brake has not been pressed. The scheduler then checks the list of processes waiting to be scheduled and sees that the pedal monitoring process is still the highest-priority process needing attention, so the pedal monitoring process executes again. When the brake is pressed, the pedal monitoring process notifies the antilock process that the brake is active. The scheduler updates the list of waiting processes to reflect the fact that the antilock brake process is waiting. When the pedal monitoring process returns control to the scheduler, the scheduler checks its data structure, sees that the antilock process is the highest-priority process waiting, and calls the antilock brake function. The antilock brake process controls the pressure on the brakes as needed, then surrenders control so the status of the pedal can be checked. Because it is finished, it notifies the scheduler, by calling a function, that it will wait for the brake pedal program to call it again. This function updates the scheduler's list of processes so that the antilock
process is no longer shown as waiting. The antilock process then returns control to the scheduler. The scheduler sees that the pedal monitoring process is again at the top of the list and allows that program to execute. The pedal process updates the status of the brake and notifies the scheduler that more antilock compression is needed. This cycle of antilock pressure and pedal status continues for some time. Eventually the car is put into park, and the pedal and antilock processes are put into the waiting status; the scheduler removes the antilock and pedal programs from the list of waiting processes. The scheduler then sees that only the maintenance program remains in the queue, and the brake check can execute. The maintenance process determines that it needs to warn the driver, so it notifies the scheduler that a new process, a warning process, needs to be created with a priority of 2. The scheduler adds the warning process to its list of processes, ahead of the maintenance process. The maintenance process then notifies the scheduler that the warning process needs to execute and cedes control. The scheduler checks the queue of processes and, because the car is still in park, sees that the warning process is the highest-priority process waiting for attention. The scheduler calls the warning function, which then blinks a dashboard warning.
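A minimal cooperative priority scheduler in the spirit of this example can be sketched in C as follows: an array of tasks with priorities and ready flags, and a loop that always runs the highest-priority ready task. The structure and function names are illustrative only, not a specific operating system's interface.

#include <stdbool.h>
#include <stddef.h>

#define MAX_TASKS 8

/* A runnable task: a priority and the function the scheduler calls when
 * the task gets its turn.  The function runs to a stopping point and then
 * returns (cooperative scheduling). */
typedef struct {
    int    priority;            /* higher number = more important          */
    bool   ready;               /* set when the task needs attention       */
    void (*run)(void);
} task_t;

static task_t task_table[MAX_TASKS];
static size_t task_count = 0;

/* One pass of the scheduler loop: pick the highest-priority ready task,
 * clear its flag, and hand it the processor. */
void schedule_once(void)
{
    task_t *best = NULL;

    for (size_t i = 0; i < task_count; i++) {
        if (task_table[i].ready &&
            (best == NULL || task_table[i].priority > best->priority))
            best = &task_table[i];
    }
    if (best != NULL) {
        best->ready = false;
        best->run();            /* e.g. antilock, pedal monitor, maintenance */
    }
}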
5.2.2.9 Task Threads

A thread is a single sequential flow of control within a program. Task threads are a way for a program to split a task into two or more subtasks that run concurrently. Multiple threads can be executed in parallel on many control systems. This multithreading generally occurs by time slicing (where a single processor switches between different threads) or by multiprocessing (where threads are executed on separate processors). A single thread has a beginning, a sequence, and an end, and at any given time during its runtime there is a single point of execution. However, a thread is not itself a program; it cannot run on its own but runs within a program. Figure 5.6 shows this relationship. The real excitement surrounding threads is not about a single sequential thread, but about the use of multiple threads running at the same time and performing different tasks in a single program; Fig. 5.7 shows this multithreaded case. Threads are distinguished from traditional multitasking operating system processes in that processes are typically independent, carry considerable state information, have separate address spaces, and interact only through system-provided interprocess communication mechanisms.
Figure 5.6 Single thread running in a single program.
Figure 5.7 Two threads running concurrently in a single program.
Multiple threads, on the other hand, typically share the state information of a single process and share memory and other resources directly. Context switching between threads in the same process is typically faster than context switching between processes. An advantage of a multithreaded program is that it can run faster on computer systems with multiple CPUs, CPUs with multiple cores, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other nonintuitive behaviors. For data to be manipulated correctly, threads often need to rendezvous in time so that the data is processed in the correct order. Threads may also require atomic operations (often implemented using semaphores) to prevent common data from being modified simultaneously, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
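As a small illustration of Fig. 5.7 and of the synchronization point just made, the POSIX-threads sketch below runs two threads in one program and protects a shared counter with a mutex so the two threads never modify it simultaneously.

#include <pthread.h>
#include <stdio.h>

/* Shared data and the mutex that keeps the two threads from modifying it
 * at the same time. */
static long            counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;                       /* protected read-modify-write */
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;                    /* two threads in one program */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000 when correctly synchronized */
    return 0;
}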
Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler; such implementations are called kernel threads or lightweight processes. Absent that, programs can still implement threading by using timers, signals, or other methods to interrupt their own execution and perform a sort of ad hoc time-slicing; these are sometimes called user-space threads. Operating systems generally implement threads in one of two ways: preemptive multithreading or cooperative multithreading. Preemptive multithreading is generally considered the superior implementation, as it allows the operating system to determine when a context switch should occur. Cooperative multithreading, on the other hand, relies on the threads themselves to relinquish control once they reach a stopping point; this can create problems if a thread is waiting for a resource to become available. The disadvantage of preemptive multithreading is that the system may make a context switch at an inappropriate time, causing priority inversion or other bad effects that may be avoided by cooperative multithreading. Traditional mainstream computing hardware did not have much support for multithreading, as switching between threads was generally already quicker than a full process context switch. Processors in embedded systems, which have stronger real-time requirements, may support multithreading by decreasing the thread switch time, perhaps by allocating a dedicated register file for each thread instead of saving and restoring a common register file. In the late 1990s, the idea of executing instructions from multiple threads simultaneously became known as simultaneous multithreading; this feature was introduced in Intel's Pentium 4 processor under the name Hyper-Threading.
5.2.3 Input/Output Device Drivers
The I/O subsystem, composed of I/O devices, device controllers, and the associated I/O software, is a main component of a computer control system. One of the important tasks of the operating system is to control all of the I/O devices: issuing commands for data transfer or status polling, catching and processing interrupts, and handling different kinds of errors. This section shows how the operating system manages I/O devices and I/O operations. Device drivers are specific programs that contain device-dependent code. Each device driver handles one device type or one class of closely related devices. For example, some kinds of "dumb terminals" can be controlled by a single terminal driver. On the other hand, a dumb
hardcopy terminal and an intelligent graphics terminal are so different that different drivers must be used. Each device controller has one or more registers used to receive its commands. The device drivers issue these commands and check that they are carried out properly. Thus, a communication driver is the only part of the operating system that knows how many registers the associated serial controller has and what they are used for. In general, a device driver has to accept requests from the device-independent software above it and check that they are carried out correctly. For example, a typical request is to read a block of data from the disk. If the device driver is idle, it starts carrying out the request immediately; if it is already busy with another request, it enters the new request into a queue of pending requests to be dealt with as soon as possible. To carry out an I/O request, the device driver must decide which controller operations are required, and in what sequence, and it issues the corresponding commands by writing them into the controller's device registers. In many cases, the device driver must wait until the controller does some work, so it blocks itself until an interrupt arrives to unblock it. Sometimes, however, the I/O operation finishes without delay, so the driver does not need to block. After the operation has completed, the driver must check for errors; status information is then returned to its caller. Buffering is also an issue, for both block and character devices. For block devices, the hardware generally insists on reading and writing entire blocks at once, but user processes are free to read and write in arbitrary units. If a user process writes half a block, the operating system will normally keep the data internally until the rest of the data is written, at which time the block can go out to the disk. For character devices, users can write data to the system faster than it can be output, necessitating buffering; keyboard input can also arrive before it is needed, again requiring buffering. Error handling is done by the drivers. Most errors are highly device-dependent, so only the driver knows what to do, such as retry, ignore, and so on. A typical error is caused by a disk block that has been damaged and can no longer be read. After the driver has tried to read the block a certain number of times, it gives up and informs the device-independent software of the error. How the error is treated from there on is the job of the device-independent software: if the error occurred while reading a user file, it may be sufficient to report the error back to the caller, but if it occurred while reading a critical system data structure, such as the block containing the bit map showing which blocks are free, the operating system may have no choice but to print an error message and terminate.
5.2.3.1 I/O Device Types
All I/O devices are classified as either block or character devices. A block special device causes I/O to be buffered in large pieces; a character device causes I/O to occur one character (byte) at a time. Some devices, such as disks and tapes, can be both block and character devices and must have entries for each mode.
5.2.3.2 Driver Content
Every device driver has two important data structures: the device information structure and the static structure. These are used to install the device driver and to share information among the entry-point routines. The device information structure is a static data structure that is passed to the install entry point; its purpose is to pass the information required to install a major device into the install entry point, where it is used to initialize the static structure. The static structure is used to pass information between the different entry points and is initialized with the information stored in the information structure. The operating system communicates with the driver through its entry-point routines.
5.2.3.3 Driver Status
The entry-point routines provide an interface between the operating system and user applications. For example, when a user makes an open system call, the operating system responds by calling the open entry-point routine, if it exists. There is a list of defined entry points, but not every driver needs to use all of them. (1) Install. This routine is called once for each major device when it is configured into the system. The install routine is responsible for allocating and initializing data structures and the device hardware, if present. It receives the address of a device information structure that holds the parameters for a major device; the install routine for a character driver typically follows a simple standard pattern. (2) Open. The open entry point performs the initialization for a minor device. Every open system call results in the invocation of the open entry point. The open entry point is not reentrant; therefore, only one user task can be executing this entry point's code at any time for a particular device. (3) Close. The close entry point is invoked when the last open file descriptor for a particular device file is closed.
(4) Read. The read entry point copies a certain number of bytes from the device into the user's buffer. (5) Write. The write entry point copies a certain number of bytes from the user's buffer to the device. (6) Select. The select entry point supports I/O polling or multiplexing. The code for this entry point is relatively involved, and its discussion is best left until it is needed; unless you have a slow device, it will most likely never be needed. (7) Uninstall. The uninstall entry point is called once when the major device is uninstalled from the system. Any dynamically allocated memory or interrupt vectors set in the install entry point should be freed in this entry point. (8) Strategy. The strategy entry point is valid only for block devices. Instead of having separate read and write entry points, block device drivers have a strategy entry-point routine that handles both reading and writing.
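One common way to organize such entry points is as a table of function pointers that the operating system calls through. The hedged C sketch below shows the idea; the structure layout and signatures are assumptions for illustration, not the interface of any particular kernel.

#include <stddef.h>
#include <stdint.h>

/* Illustrative driver state and entry points.  The exact names and
 * signatures are defined by each operating system; treat these as
 * placeholders rather than a real kernel interface. */
typedef struct {
    uint32_t base_address;   /* filled in from the device information structure */
    int      open_count;
} dev_static_t;

typedef struct {
    int  (*install)(dev_static_t *dev);
    int  (*open)(dev_static_t *dev);
    int  (*close)(dev_static_t *dev);
    long (*read)(dev_static_t *dev, void *buf, size_t n);
    long (*write)(dev_static_t *dev, const void *buf, size_t n);
    int  (*uninstall)(dev_static_t *dev);
} driver_entry_points_t;

/* Minimal character-driver entry points; the OS calls through the table
 * when an application issues open/read/write/close system calls. */
static int  my_install(dev_static_t *dev) { dev->open_count = 0; return 0; }
static int  my_open(dev_static_t *dev)    { dev->open_count++;   return 0; }
static int  my_close(dev_static_t *dev)   { dev->open_count--;   return 0; }

static long my_read(dev_static_t *dev, void *buf, size_t n)
{
    (void)dev; (void)buf;
    return (long)n;               /* pretend n bytes were copied to the user */
}

static long my_write(dev_static_t *dev, const void *buf, size_t n)
{
    (void)dev; (void)buf;
    return (long)n;               /* pretend n bytes were copied to the device */
}

static int my_uninstall(dev_static_t *dev) { (void)dev; return 0; }

const driver_entry_points_t my_driver = {
    my_install, my_open, my_close, my_read, my_write, my_uninstall
};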
5.2.3.4 Request Contention
Dealing with race conditions is one of the hard aspects of writing an I/O device driver. The most common way of protecting data from concurrent access is contention management on I/O requests, traditionally handled by means of a request queue. The most important function in a block driver is the request function, which performs the low-level operations related to reading and writing data. Each block driver works with at least one I/O request queue. This queue contains, at any given time, all of the I/O operations that the operating system would like to see done on the driver's devices. The management of this queue is complicated, and the performance of the system depends on how it is done. The I/O request queue is a complex data structure that is accessed in many places in the operating system. It is entirely possible that the operating system needs to add more requests to the queue at the same time that the device driver is taking requests off it; the queue is thus subject to the usual sort of race conditions and must be protected accordingly. A variant of this latter case can also occur if the request function returns while an I/O request is still active. Many drivers for real hardware start an I/O operation and then return; the work is completed in the driver's interrupt handler. We will look at interrupt-handling methodology in detail later in this chapter; for now it is worth noting that the request function can be called while these operations are still in progress.
Some drivers handle request-function reentrancy by maintaining an internal request queue. The request function simply removes any new requests from the I/O request queue and adds them to the internal queue, which is then processed through a combination of task schedulers and interrupt handlers. One other detail of the I/O request queue's behavior is relevant for block drivers that deal with clustering: the queue head, the first request on the queue. For historical compatibility reasons, the operating system (almost) always assumes that a block driver is processing the first entry in the request queue. To avoid corruption resulting from conflicting activity, the operating system will never modify a request once it reaches the head of the queue; no further clustering happens on that request, and the elevator code will not put other requests in front of it. The queue is designed with physical disk drives in mind. With disks, the amount of time required to transfer a block of data is typically quite small, but the amount of time required to position the head (seek) for that transfer can be very large. Thus, the operating system works to minimize the number and extent of the seeks performed by the device. Two things are done to achieve this goal. One is the clustering of requests to adjacent sectors on the disk: most modern file systems attempt to lay out files in consecutive sectors, so requests to adjoining parts of the disk are common. The operating system also applies an "elevator" algorithm to the requests. An elevator in a skyscraper is either going up or down; it continues to move in that direction until all of its "requests" (people wanting on or off) have been satisfied. In the same way, the operating system tries to keep the disk head moving in the same direction for as long as possible; this approach tends to minimize seek times while ensuring that all requests are eventually satisfied. Requests are not made up of random lists of buffers; instead, all of the buffer heads attached to a single request belong to a series of adjacent blocks on the disk. Thus a request is, in a sense, a single operation referring to a (perhaps long) group of blocks on the disk. This grouping of blocks is called clustering.
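The internal-queue pattern described at the start of this subsection can be sketched in C as follows; the request structure, the lock, and the LIFO internal queue are simplifications assumed for illustration rather than any real kernel's data structures.

#include <pthread.h>
#include <stddef.h>

/* A simplified I/O request and the two queues described in the text: the
 * shared request queue filled by the OS and the driver's internal queue.
 * The mutex stands in for whatever the kernel uses to guard the shared
 * queue against the race conditions noted above. */
struct io_request {
    unsigned long      sector;
    size_t             count;
    int                write;
    struct io_request *next;
};

static struct io_request *request_queue;    /* filled by the OS           */
static struct io_request *internal_queue;   /* owned by the driver        */
static pthread_mutex_t    queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* The driver's request function: detach everything from the shared queue
 * while holding the lock, then queue it internally for the interrupt
 * handler or task scheduler to process later. */
void request_fn(void)
{
    pthread_mutex_lock(&queue_lock);
    struct io_request *pending = request_queue;
    request_queue = NULL;
    pthread_mutex_unlock(&queue_lock);

    while (pending != NULL) {
        struct io_request *req = pending;
        pending = req->next;
        req->next = internal_queue;          /* simple LIFO insert         */
        internal_queue = req;
    }
}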
5.2.3.5 I/O Operations
(1) Interrupt-driven I/O. In interrupt-driven I/O, dedicated I/O processors or controllers can conduct the I/O operations and interrupt the main processor when attention is required. Whenever a data transfer to or from the managed hardware might be delayed for any reason, the driver writer should implement buffering. Data buffers
help to detach data transmission and reception from the write and read system calls, and overall system performance benefits. A good buffering mechanism leads to interrupt-driven I/O, in which an input buffer is filled at interrupt time and is emptied by processes that read the device; an output buffer is filled by processes that write to the device, and is emptied at interrupt time. For interrupt-driven data transfer to happen successfully, the hardware should be able to generate interrupts with the following semantics: (a) For input, the device interrupts the processor when new data has arrived and is ready to be retrieved by the system processor. The actual actions to perform depend on whether the device uses I/O ports, memory mapping, or DMA. (b) For output, the device delivers an interrupt either when it is ready to accept new data or to acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually generate interrupts to tell the system they are done with the buffer. However, interrupt-driven I/O introduces the problem of synchronizing concurrent access to shared data items and all the issues related to race conditions; this topic was discussed in the previous subsection.

(2) Memory-mapped read and write. Memory-mapped I/O and port I/O are two complementary methods of performing input and output between the CPU and I/O devices. Memory-mapped I/O uses the same bus to address both memory and I/O devices, and the CPU instructions used to read and write memory are also used to access I/O devices. To accommodate the I/O devices, areas of the CPU's addressable space must be reserved for I/O rather than memory. This does not have to be permanent; for example, the Commodore 64 could bank-switch between its I/O devices and regular memory. The I/O devices monitor the CPU's address bus and respond to any CPU access of their assigned address space, mapping the address to their hardware registers. Port-mapped I/O uses a special class of CPU instructions specifically for performing I/O. This is generally found on Intel microprocessors, specifically the IN and OUT instructions, which can read and write a single byte to or from an I/O device. I/O devices have a separate address space from general memory, provided either by an extra "I/O" pin on the CPU's physical interface or by an entire bus dedicated to I/O. (A code sketch combining buffered, interrupt-driven input with memory-mapped register access appears after item (3) below.)

(3) Bus-based read and write. In bus-based I/O, the microprocessor has a set of address, data, and control ports corresponding to bus
lines, and uses the bus to access memory as well as peripherals. The microprocessor has the bus protocol built into its hardware. Alternatively, the bus may be equipped with memory read and write plus input and output command lines. In this case, the command line specifies whether the address refers to a memory location or an I/O device, and the full range of addresses may be available for both. With 16 address lines, for example, the system may support both 64K memory locations and 64K I/O addresses. Because the address space for I/O is isolated from that for memory, this is referred to as isolated I/O. Isolated I/O is also known as I/O-mapped I/O or standard I/O.
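The sketch below ties items (1) and (2) together: a hypothetical memory-mapped receive register is drained into a ring buffer at interrupt time and emptied later by the driver's read routine. All of the register addresses, bit definitions, and function names are invented for the example, and a single interrupt-time producer with a single process-context consumer is assumed.

#include <stdint.h>

/* Hypothetical memory-mapped device registers (addresses are made up). */
#define DEV_BASE    0x40002000UL
#define DEV_STATUS  (*(volatile uint32_t *)(DEV_BASE + 0x0))
#define DEV_RXDATA  (*(volatile uint32_t *)(DEV_BASE + 0x4))
#define RX_READY    (1u << 0)               /* assumed "data available" bit     */

/* Simple ring buffer: filled at interrupt time, emptied by the read routine. */
#define BUF_SIZE 256                         /* power of two keeps index math cheap */
static volatile uint8_t  rx_buf[BUF_SIZE];
static volatile uint32_t rx_head, rx_tail;   /* head: ISR writes, tail: reader  */

/* Called from the interrupt handler when the device signals "data ready". */
void device_rx_isr(void)
{
    while (DEV_STATUS & RX_READY) {
        uint32_t next = (rx_head + 1) & (BUF_SIZE - 1);
        if (next == rx_tail)
            break;                           /* buffer full: drop or flow-control */
        rx_buf[rx_head] = (uint8_t)DEV_RXDATA;
        rx_head = next;
    }
}

/* Called from process context by the driver's read routine; returns the
 * number of bytes copied into dst (0 if the buffer is currently empty). */
int device_read(uint8_t *dst, int maxlen)
{
    int n = 0;
    while (n < maxlen && rx_tail != rx_head) {
        dst[n++] = rx_buf[rx_tail];
        rx_tail = (rx_tail + 1) & (BUF_SIZE - 1);
    }
    return n;
}

Because only the interrupt handler advances rx_head and only the reader advances rx_tail, the two sides never write the same index, which is what keeps this particular buffer safe without an explicit lock.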
5.2.4 Interrupts

5.2.4.1 Interrupt Handling

Handling interrupts is at the heart of a real-time and embedded control system. The actual process of determining a good handling method can be complicated. Numerous actions may be occurring simultaneously at a single point, and these actions have to be handled quickly and efficiently. This subsection provides a practical guide to designing an interrupt handler and discusses the various trade-offs between the different methods. The methods covered are (1) nonnested interrupt handlers, (2) nested interrupt handlers, (3) reentrant nested interrupt handlers, and (4) prioritized interrupt handlers.

(1) Nonnested interrupt handler. The simplest interrupt handler is one that is nonnested: the interrupts are disabled until control is returned to the interrupted task or process. A nonnested interrupt handler can service only a single interrupt at a time, so handlers of this form are not suitable for complex embedded systems that service multiple interrupts with differing priority levels. When the Interrupt Request pin is raised, the microprocessor disables further interrupts from occurring. The microprocessor then sets the controller to point to the correct entry in the vector table and executes that instruction, which alters the controller to point to the interrupt handler. Once in the interrupt code, the interrupt handler has to first save the context, so that the context can be restored on return. The handler can now identify the interrupt source and call the appropriate Interrupt Service Routine (ISR). After the interrupt has been serviced, the context can be restored and the controller manipulated
to point back to the next instruction before the interruption. Figure 5.8 shows the various stages that occur when an interrupt is raised in a system that has implemented a simple nonnested interrupt handler. Each stage is explained in more detail below:
(a) An external source (e.g., an interrupt controller) sets the interrupt flag. The processor masks further external interrupts and vectors to the interrupt handler through an entry in the vector table.
(b) On entry to the handler, the handler code saves the current context of the nonbanked registers.
(c) The handler then identifies the interrupt source and executes the appropriate ISR.
(d) The ISR services the interrupt.
(e) On return from the ISR, the handler restores the context.
(f) The handler reenables interrupts and returns.

(2) Nested interrupt handler. A nested interrupt handler allows another interrupt to occur within the currently called handler. This is achieved by reenabling the interrupts before the handler has fully serviced the current interrupt. For a real-time system this feature increases the complexity of the system, and that complexity introduces the possibility of subtle timing issues that can cause a system failure. These subtle problems can be extremely difficult to resolve, so the nested interrupt method has to be designed carefully so that problems of this type are avoided.
Figure 5.8 Simple nonnested interrupt handler.
This is achieved by protecting the context restoration from interruption, so that the next interrupt will not fill (overflow) the stack or corrupt any of the registers. Owing to the increase in complexity, there are several standard problems that can be observed when nested interrupts are supported. One of the main problems is a race condition in which a cascade of interrupts occurs; this causes a continuous interruption of the handler until either the interrupt stack is full (overflowing) or the registers are corrupted. A designer has to balance efficiency with safety. This involves using a defensive coding style that assumes problems will occur: the system should check the stack and protect against register corruption where possible. Figure 5.9 shows a nested interrupt handler. As can be seen from the diagram, the handler is quite a bit more complicated than the simple nonnested interrupt handler described above.

How the stacks are organized is one of the first decisions a designer has to make when designing a nested interrupt handler. There are two fundamental methods that can be adopted: the first uses a single stack and the second uses multiple stacks. The multiple-stack method uses one stack for each interrupt or service routine. Having multiple stacks increases the execution time and complexity of the handler; for a time-critical system, these tend to be undesirable characteristics.

The nested interrupt handler entry code is identical to that of the simple nonnested interrupt handler, except that on exit the handler tests a flag that is updated by the ISR. The flag indicates whether further processing is required. If further processing is not required, then the service routine is complete and the handler can exit. If further processing is required, the handler may take several actions: reenabling interrupts and/or performing a context switch. Reenabling interrupts involves switching out of interrupt request (IRQ) mode. We cannot simply reenable interrupts in IRQ mode, as this would lead to the link register being corrupted if an interrupt occurred after a branch-with-link instruction; this problem will be discussed in more detail in the next paragraph. As a side note, performing a context switch involves flattening (emptying) the IRQ stack, as the handler should not perform a context switch while there is data on the IRQ stack unless it can maintain a separate IRQ stack for each task, which, as mentioned previously, is undesirable. All registers saved on the IRQ stack must be transferred to the task's stack, and the remaining registers must then be saved on the task stack. This data is transferred to a reserved block on the stack called a stack frame.
Figure 5.9 Nested interrupt handler.
(3) Reentrant nested interrupt handler. A reentrant interrupt handler is a method of handling multiple interrupts where interrupts are filtered by priority (Fig. 5.10). This is important since there is a requirement that interrupts with higher priority have a lower latency. This type of filtering cannot be achieved using the conventional nested interrupt handler. The basic difference between a reentrant interrupt handler and a nested interrupt handler is that the interrupts are reenabled early on in the interrupt handler to achieve low interrupt latency. There are a number of issues relating to reenabling the interrupts early, which are described in more detail later in the following paragraphs.
Figure 5.10 Reentrant interrupt handler.
If interrupts are reenabled in an interrupt mode and the interrupt routine performs a subroutine call instruction (BL), the subroutine return address will be placed in a special register. This address would subsequently be destroyed by a new interrupt, which would overwrite the return address held in this special register. To avoid this, the interrupt routine should swap into SYSTEM mode. The BL instruction can then use another register to store the
subroutine address. The interrupts must be disabled at the source, by setting a bit in the interrupt controller, before reenabling interrupts through the Current Processor Status Register (CPSR). If interrupts are reenabled in the CPSR before processing is complete and the interrupt source is not disabled, an interrupt will be regenerated immediately, leading to an infinite interrupt sequence or race condition. Most interrupt controllers have an interrupt mask register that allows you to mask out one or more interrupts, leaving the remainder of the interrupts enabled. The interrupt stack is unused, since interrupts are serviced in SYSTEM mode (i.e., on the task's stack); instead, the IRQ stack pointer is used to point to a 12-byte structure that is used to store some registers temporarily on interrupt entry. For a reentrant interrupt handler to operate effectively, it is paramount that the interrupts be prioritized. If the interrupts are not prioritized, the system latency degrades to that of a nested interrupt handler, because lower-priority interrupts will be able to preempt the servicing of a higher-priority interrupt. This can lead to the locking out of higher-priority interrupts for the duration of the servicing of a lower-priority interrupt.

(4) Prioritized interrupt handler.
(a) Simple prioritized interrupt handler. The simple and nested interrupt handlers service interrupts on a first-come-first-served basis, whereas a prioritized interrupt handler associates a priority level with each interrupt source. The priority level is used to dictate the order in which the interrupts will be serviced, which means that a higher-priority interrupt will take precedence over a lower-priority interrupt; this is a desirable characteristic in an embedded system. Prioritization can be achieved either in hardware or in software. Hardware prioritization means that the handler is simpler to design, since the interrupt controller provides the current highest-priority interrupt that requires servicing, but these systems require more initialization code at start-up, since the interrupts and associated priority-level tables have to be constructed before the system can be switched on. Software prioritization requires an external interrupt controller. This controller has to provide a minimal set of functions, including the ability to set and unset masks and to read the interrupt status and source. For software prioritization, the rest of this subsection describes a priority interrupt handler, and a fictional interrupt controller is used to help with the description. The interrupt controller takes in multiple interrupt sources and will generate an IRQ and/or FIQ signal
depending on whether a particular interrupt source is enabled or disabled. The interrupt controller has a register that holds the raw interrupt status (IRQRawStatus); a raw interrupt is an interrupt that has not been masked by the controller. The IRQEnable register determines which interrupts are masked from the processor; this register can only be set or cleared using IRQEnableSet and IRQEnableClear. Table 5.2 summarizes the register names and the types of operation (read/write) that can be performed on these registers; a short code sketch showing how such registers might be driven appears after the list of prioritized-handler variants below.

Table 5.2 Interrupt Controller Registers
IRQRawStatus (read): represents interrupt sources that are actively HIGH.
IRQEnable (read): masks the interrupt sources that generate IRQ/FIQ to the CPU.
IRQStatus (read): represents interrupt sources after masking.
IRQEnableSet (write): sets the interrupt enable register.
IRQEnableClear (write): clears the interrupt enable register.

Most interrupt controllers also have a corresponding set of registers for FIQ, and some interrupt controllers can also be programmed to select which type of interrupt (IRQ or FIQ) is raised by a particular interrupt source.

(b) Standard prioritized interrupt handler. A simple prioritized interrupt handler tests all the interrupts to establish the highest priority; an alternative solution is to branch early once the highest-priority interrupt has been identified. The standard prioritized interrupt handler follows the same entry code as the simple prioritized interrupt handler but intercepts the higher-priority interrupts earlier. Figure 5.11 gives part of the algorithm applied by the standard prioritized interrupt handler.

(c) Direct prioritized interrupt handler. A direct prioritized interrupt handler branches directly to the interrupt service routine (ISR). Each ISR is responsible for disabling the lower-priority interrupts before modifying the CPSR so that interrupts are reenabled. This type of handler is relatively simple, since the masking is done by the service routine, but it does cause minimal duplication of code, since each service routine is effectively carrying out the same task.
Figure 5.11 Part of a prioritized interrupt handler.
(d) Grouped prioritized interrupt handler. Last, the grouped prioritized interrupt handler assigns a group priority level to a set of interrupt sources. This is sometimes important when there is a large number of interrupt sources, and it tends to reduce the complexity of the handler, since it is not necessary to scan through every interrupt to determine the priority level. This may improve the response times.
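As a rough sketch of how the fictional controller of Table 5.2 could support a software-prioritized or direct handler, the C fragment below scans a priority-ordered table, masks the lower-priority sources through IRQEnableClear, services the interrupt, and then restores the masks with IRQEnableSet. The register addresses, the table contents, and the stub ISRs are all assumptions made for the example.

#include <stdint.h>

/* Fictional controller registers from Table 5.2; the base address is made up. */
#define CTRL_BASE        0x40004000UL
#define IRQ_STATUS       (*(volatile uint32_t *)(CTRL_BASE + 0x08)) /* after masking    */
#define IRQ_ENABLE_SET   (*(volatile uint32_t *)(CTRL_BASE + 0x0C)) /* write 1 = enable */
#define IRQ_ENABLE_CLEAR (*(volatile uint32_t *)(CTRL_BASE + 0x10)) /* write 1 = mask   */

typedef void (*isr_t)(void);

/* Stand-in service routines. */
static void timer_isr(void) { /* service the timer */ }
static void comms_isr(void) { /* service the link  */ }
static void panel_isr(void) { /* service the panel */ }

/* Sources listed from highest to lowest priority (purely illustrative). */
static const struct { uint32_t bit; isr_t isr; } priority_table[] = {
    { 1u << 0, timer_isr },   /* priority 1 */
    { 1u << 1, comms_isr },   /* priority 2 */
    { 1u << 2, panel_isr },   /* priority 3 */
};
#define NUM_SOURCES (sizeof priority_table / sizeof priority_table[0])

/* Called from the IRQ entry code after the context has been saved. */
void prioritized_dispatch(void)
{
    uint32_t pending = IRQ_STATUS;                 /* masked interrupt status     */

    for (unsigned i = 0; i < NUM_SOURCES; i++) {
        if (pending & priority_table[i].bit) {
            uint32_t lower = 0;
            for (unsigned j = i + 1; j < NUM_SOURCES; j++)
                lower |= priority_table[j].bit;    /* everything below this level */

            IRQ_ENABLE_CLEAR = lower;              /* mask lower-priority sources */
            priority_table[i].isr();               /* interrupts may be reenabled
                                                      around this call            */
            IRQ_ENABLE_SET = lower;                /* simplification: re-enables
                                                      all lower sources           */
            return;                                /* one source per entry        */
        }
    }
}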
5.2.4.2 Enable and Disable Interrupts
Interrupt masking allows you to disable the detection of an interrupt-line assertion, which causes the operating system or kernel to ignore an
interrupt signal. The signal can be ignored at the microprocessor level or at other levels in the hardware architecture. In some cases, each interrupt source in the system can be masked individually. In other cases, masking a single bit in a microprocessor register masks a whole group of interrupts. When an interrupt occurs, the microprocessor must globally disable interrupts at the microprocessor level to avoid being interrupted while gathering and saving interrupt state information. Because disabling interrupts globally blocks all other interrupts, mask global interrupts for as short a time as possible in your code. Once you have determined which source specifically caused the interrupt, you can mask just that interrupt.

A maskable interrupt is essentially a hardware interrupt that may be ignored by setting a bit in an interrupt mask register's (IMR) bit-mask. Similarly, a nonmaskable interrupt is a hardware interrupt that typically does not have a bit-mask associated with it that would allow it to be ignored. All of the regular interrupts that we normally use and refer to by number are maskable interrupts: the processor is able to mask, or temporarily ignore, any of them if it needs to finish something else that it is doing. In addition, however, there is a nonmaskable interrupt (NMI) that can be used for serious conditions that demand the processor's immediate attention. The NMI cannot be ignored by the system unless it is shut off specifically. When an NMI signal is received, the processor immediately drops whatever it was doing and attends to it. As you can imagine, this could cause crashes if used improperly; in fact, the NMI signal is normally used only for critical problem situations, such as serious hardware errors. The most common use of NMI is to signal a parity error from the memory subsystem. This error must be dealt with immediately to prevent possible data corruption.
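A minimal sketch of the "disable globally for as short a time as possible" advice is shown below; the save/restore primitives are invented stubs standing in for whatever intrinsic or short assembly sequence the target processor actually provides.

#include <stdint.h>

/* Hypothetical primitives: on real hardware these would read-modify-write the
 * processor's interrupt-enable flag. They are stubs here so the sketch is
 * self-contained. */
static uint32_t irq_save_and_disable(void) { return 0; /* previous IRQ state */ }
static void     irq_restore(uint32_t saved_state)     { (void)saved_state;   }

static volatile uint32_t event_count;    /* data shared with an interrupt handler */

/* Keep the globally disabled window as short as possible: just the few
 * instructions that touch the shared data. */
void record_event(void)
{
    uint32_t state = irq_save_and_disable();
    event_count++;                        /* protected read-modify-write          */
    irq_restore(state);
}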
5.2.4.3 Interrupt Vector
As mentioned in Chapter 2, the vector table starts at memory address 0x00000000. A vector table consists of a set of assembler (or machine) instructions that cause the controller or computer to jump to a specific location that can handle a specific exception or interrupt. Figure 2.6 shows the vector table and the modes into which an Intel microprocessor is placed when a specific event occurs. A vector entry can use a special assembler load instruction to load the address of the handler, so that the handler is reached indirectly, whereas a branch instruction goes directly to the handler. When booting a system, quite often the read-only memory (ROM) is located at 0x00000000. This means that when
SRAM is remapped to location 0x00000000, the vector table has to be copied to SRAM at its default address prior to the remap. This is normally achieved by the system initialization code. SRAM is normally remapped because it is wider and faster than ROM; it also allows vectors to be updated dynamically as requirements change during program execution. Figure 5.12 shows a typical vector table of a real system. The undefined instruction handler is located close enough that a simple branch is adequate, whereas the other vectors require an indirect address using the special load instructions. Where the interrupt stack is placed depends on the real-time operating system (RTOS) requirements and the specific hardware being used. The example in Fig. 5.13 shows two possible stack layouts. The first, layout (A), shows the traditional stack layout with the interrupt stack stored underneath the code segment. The second, layout (B), shows the interrupt stack at the top of memory, above the user stack. One of the main advantages that layout (B) has over layout (A) is that the interrupt stack grows into the user stack and thus does not corrupt the vector table. For each mode, a stack has to be set up; this is carried out every time the processor is reset. If the interrupt stack expands into the interrupt vectors, the target system will crash unless some check is placed on the extension of the stack and some means exists to handle that error when it occurs. Before an interrupt can be enabled, the IRQ mode stack has to be set up, normally in the initialization code for the system. It is important that the maximum size of the stack is known, since that size can be reserved for the interrupt stack.
0x00000000: 0xe59ffa38   ldr pc, 0x00000a40
0x00000004: 0xea000502   b   0x1414
0x00000008: 0xe59ffa38   ldr pc, 0x00000a48
0x0000000c: 0xe59ffa38   ldr pc, 0x00000a4c
0x00000010: 0xe59ffa38   ldr pc, 0x00000a50
0x00000014: 0xe59ffa38   ldr pc, 0x00000a54
0x00000018: 0xe59ffa38   ldr pc, 0x00000a58
0x0000001c: 0xe59ffa38   ldr pc, 0x00000a5c

Figure 5.12 A typical vector table.

Figure 5.13 Typical stack design layouts. In layout (A) the interrupt stack sits between the interrupt vectors at 0x00000000 and the code segment, below the heap and user stack; in layout (B) the interrupt stack is placed at the top of memory, above the user stack.

5.2.4.4 Interrupt Service Routines

An interrupt service routine (ISR) is a software routine that hardware invokes in response to an interrupt. The ISR examines the interrupt and determines how to handle it. The ISR handles the interrupt and then returns a
logical interrupt value. If no further handling is required because the device is disabled or the data is buffered, the ISR notifies the kernel with a return value. An ISR must execute very quickly to avoid slowing down the operation of the device and the operation of all lower-priority ISRs. Although an ISR might move data from a CPU register or a hardware port into a memory buffer, in general it relies on a dedicated interrupt thread, called the interrupt service thread (IST), to do most of the required processing. If additional processing is required, the ISR maps the physical interrupt number to a logical interrupt value and returns that value to the kernel. For example, the keyboard might be associated with hardware interrupt 4 on one device and hardware interrupt 15 on another device; the ISR translates the hardware-specific value to the standard value corresponding to the specific device. When an ISR notifies the kernel of a specific logical interrupt value, the kernel examines an internal table to map the logical interrupt value to an event handle, and the kernel wakes the IST by signaling the event. An event is a standard synchronization object that serves as an alarm clock to wake up a thread when something interesting happens.

The interrupt service thread is a thread that does most of the interrupt processing. The operating system wakes the IST when it has an interrupt to process; otherwise, the IST is idle. For the
operating system to wake the IST, the IST must associate an event object with an interrupt identifier. After an interrupt is processed, the IST should wait for the next interrupt signal. This call is usually inside a loop. When the hardware interrupt occurs, the kernel signals the event on behalf of the ISR, and then the IST performs necessary I/O operations in the device to collect the data and process it. When the interrupt processing is completed, the IST should inform the kernel to reenable the hardware interrupt. An interrupt notification is a signal from an IST that notifies the operating system that an event must be processed. For devices that connect to a hardware platform through intermediate hardware, the device driver for that intermediate hardware should pass the interrupt notification to the toplevel device driver. Generally, the intermediate hardware’s device driver has some facility that allows another device driver to register a call-back function, which the intermediate device driver calls when an interrupt occurs. For example, PC Cards connect to hardware platforms through a PC Card slot, which is an intermediate piece of hardware with its own device driver. When a PC Card device sends an interrupt, it is actually the PC Card slot hardware that signals the physical interrupt on the system bus. The device driver for the PC Card slot has an ISR and IST that handle the physical interrupt. They use a function to pass the interrupt to the device driver for the PC Card device. Devices with similar connection methods behave similarly.
5.2.5 Memory Management
Memory management is the process by which a computer control system allocates a limited amount of physical memory among the various processes (or tasks) that need it in a way that optimizes performance. Actually, each process has its own private address space. The address space is initially divided into three logical segments: text, data, and stack. The text segment is read-only and contains the machine instructions of a program. The data and stack segments are both readable and writable. The data segment contains the initialized and noninitialized data portions of a program, whereas the stack segment holds the application’s run-time stack. On most machines, the stack segment is extended automatically by the kernel as the process executes. A process can expand or contract its data segment by making a system call, whereas a process can change the size of its text segment only when the contents of the segment are overlaid with data from the file system, or when debugging takes place. The initial
contents of the segments of a child process are duplicates of the segments of a parent process. The entire contents of a process address space do not need to be resident for a process to execute. If a process references a part of its address space that is not resident in main memory, the system pages the necessary information into memory. When system resources are scarce, the system uses a two-level approach to maintain available resources. If a modest amount of memory is available, the system will take memory resources away from processes if these resources have not been used recently. Should there be a severe resource shortage, the system will resort to swapping the entire context of a process to secondary storage. The demand paging and swapping done by the system are effectively transparent to processes. A process may, however, advise the system about expected future memory utilization as a performance aid. A common technique for doing the above is virtual memory, which simulates a much larger address space than is actually available, using a reserved disk area for objects that are not in physical memory. The kernel often does allocations of memory that are needed for only the duration of a single system call. In a user process, such short-term memory would be allocated on the run-time stack. Because the kernel has a limited run-time stack, it is not feasible to allocate even moderate-sized blocks of memory on it. Consequently, such memory must be allocated through a more dynamic mechanism. For example, when the system must translate a pathname, it must allocate a 1-kbyte buffer to hold the name. Other blocks of memory must be more persistent than a single system call, and thus could not be allocated on the stack even if there was space. An example is protocol-control blocks that remain throughout the duration of a network connection.
5.2.5.1 Virtual Memory

The operating system uses virtual memory to manage the memory requirements of its processes by combining physical memory with secondary memory (swap space) on disk. The swap area is usually located on a local disk drive. Diskless systems use a page server to maintain their swap areas on its local disk. The translation from virtual to physical addresses is implemented by a Memory Management Unit (MMU). This may be either a module of the CPU, or an auxiliary, closely coupled chip. The operating system is responsible for deciding which parts of the program's simulated main memory are kept in physical memory. The operating system also maintains the translation tables that provide the mappings
between virtual and physical addresses, for use by the MMU. When a virtual memory exception occurs, the operating system is responsible for allocating an area of physical memory to hold the missing information (and possibly in the process pushing something else out to disk), bringing the relevant information in from the disk, updating the translation tables, and finally resuming execution of the software that incurred the virtual memory exception. Virtual memory is usually (but not necessarily) implemented using paging. In paging, the low order bits of the binary representation of the virtual address are preserved, and used directly as the low order bits of the actual physical address; the high order bits are treated as a key to one or more address translation tables, which provide the high order bits of the actual physical address. For this reason, a range of consecutive addresses in the virtual address space whose size is a power of two will be translated in a corresponding range of consecutive physical addresses. The memory referenced by such a range is called a page. The page size is typically in the range of 512–8192 bytes (with 4k currently being very common), though page sizes of 4 MB or larger may be used for special purposes. (Using the same or a related mechanism, contiguous regions of virtual memory larger than a page are often mappable to contiguous physical memory for purposes other than virtualization, such as setting access and caching control bits.) The operating system stores the address translation tables, the mappings from virtual to physical page numbers, in a data structure known as a page table. For a page that is marked as unavailable (perhaps because it is not present in physical memory, but instead is in the swap area), when the CPU tries to reference a memory location in that page, the MMU responds by raising an exception (commonly called a page fault) with the CPU, which then jumps to a routine in the operating system. If the page is in the swap area, this routine invokes an operation called a page swap, to bring in the required page. The page swap operation involves a series of steps. First it selects a page in memory, for example, a page that has not been recently accessed and (preferably) has not been modified since it was last read from disk or the swap area. If the page has been modified, the process writes the modified page to the swap area. The next step in the process is to read in the information in the needed page (the page corresponding to the virtual address the original program was trying to reference when the exception occurred) from the swap file. When the page has been read in, the tables for translating virtual addresses to physical addresses are updated to reflect the revised contents of the physical memory. Once the page swap completes, it exits,
and the program is restarted and continues on as if nothing had happened, returning to the point in the program that caused the exception. It is also possible that a virtual page was marked as unavailable because the page was never previously allocated. In such cases, a page of physical memory is allocated and filled with zeros, the page table is modified to describe it, and the program is restarted as above. Figure 5.14 illustrates how the virtual memory of a process might correspond to what exists in physical memory, on swap, and in the file system. The U-area of a process consists of two 4 kB pages (displayed here as U1 and U2) of virtual memory that contain information about the process needed by the system when the process is running. In this example, these pages are shown in physical memory. The data pages, D3 and D4, are shown as having been paged out to the swap area on disk. The text page, T4, has also been paged out, but it is not written to the swap area because it already exists in the file system. Those pages that have not yet been accessed by the process (D5, T2, and T5) do not occupy any resources in physical memory or in the swap area.
Figure 5.14 How the virtual memory of a process relates to physical memory and disk.
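To make the address-splitting step concrete, the sketch below translates a virtual address using a single-level page table; the page size, table layout, and fault handler are simplified assumptions made for the example, not a description of any particular MMU, and addresses are assumed to lie within the toy table's range.

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12u                       /* 4-kB pages: low 12 bits = offset */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  1024u                     /* toy single-level table           */

struct pte {
    uint32_t frame;                          /* physical frame number            */
    bool     present;                        /* false => page fault              */
};

static struct pte page_table[NUM_PAGES];

/* Stand-in for the page-swap routine described above: choose a victim frame,
 * write it out if modified, read the needed page in, update the table.
 * Here we simply pretend the page now occupies frame == vpn. */
static uint32_t handle_page_fault(uint32_t vpn)
{
    page_table[vpn].present = true;
    return vpn;
}

uint32_t translate(uint32_t vaddr)           /* assumes vaddr < NUM_PAGES*PAGE_SIZE */
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;          /* high bits: page number    */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);      /* low bits: kept unchanged  */

    if (!page_table[vpn].present)
        page_table[vpn].frame = handle_page_fault(vpn);  /* bring the page in    */

    return (page_table[vpn].frame << PAGE_SHIFT) | offset;
}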
5.2.5.2 Dynamic Memory Pool
Memory pools allow dynamic memory allocation comparable to "malloc" or the operator "new" in C++. Because those implementations suffer from fragmentation caused by their variable block sizes, their performance can make them impossible to use in a real-time system. A more efficient solution is to preallocate a number of memory blocks of the same size, called a memory pool. The application can then allocate, access, and free blocks, represented by handles, at run time.
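A minimal fixed-block pool might look like the sketch below; the block size, pool depth, and handle convention are arbitrary choices made for illustration.

#include <stddef.h>

#define BLOCK_SIZE 64                /* every block has the same size            */
#define NUM_BLOCKS 32                /* preallocated at compile/startup time     */

static unsigned char pool_storage[NUM_BLOCKS][BLOCK_SIZE];
static int  free_stack[NUM_BLOCKS];  /* handles (indices) of free blocks         */
static int  free_top = -1;           /* top of the free-handle stack             */

void pool_init(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        free_stack[++free_top] = i;  /* initially every block is free            */
}

/* Returns a handle in [0, NUM_BLOCKS) or -1 if the pool is exhausted.
 * Allocation is O(1), so the worst-case time is bounded, which is the
 * property a real-time system needs. */
int pool_alloc(void)
{
    return (free_top >= 0) ? free_stack[free_top--] : -1;
}

void *pool_block(int handle)         /* translate a handle to its storage        */
{
    return (handle >= 0 && handle < NUM_BLOCKS) ? pool_storage[handle] : NULL;
}

void pool_free(int handle)
{
    if (handle >= 0 && handle < NUM_BLOCKS)
        free_stack[++free_top] = handle;
}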
5.2.5.3 Memory Allocation and Deallocation
The inefficient allocation or deallocation of memory can be detrimental to system performance. The presence of wasted memory in a block of allocated memory is called internal fragmentation. This occurs because the size that was requested was smaller than the size of the block of memory that was allocated; the result is a block of unusable memory, which is considered allocated when, in fact, it is not being used. The reverse situation is called external fragmentation: when blocks of memory are freed, they leave holes in memory that are not contiguous. If these holes are not large, they may not be usable, because further requests for memory may call for larger blocks. Both internal and external fragmentation result in unusable memory.

Memory allocation and deallocation is handled in several layers; if one layer cannot satisfy a request, another attempts to resolve it. This whole process is called dynamic memory management in the C or C++ programming languages. Memory is allocated to applications using an operating subsystem called "malloc." The "malloc" subsystem controls the heap, a region of memory in which allocation and deallocation take place. Another function, the reallocation of memory, is also under the control of "malloc." In "malloc," the allocation of memory is performed by two subroutines, "malloc" and "calloc"; deallocation is performed by the "free" subroutine, and reallocation is performed by the subroutine known as "realloc." In deallocation, those memory blocks that have been deallocated are returned to the binary tree at its base. Thus, the binary tree can be envisioned as a sort of river of information, with deallocated memory flowing in at the base and allocated memory flowing out from the tips.

Garbage collection is another term associated with the deallocation of memory. Garbage collection refers to an automated process that determines what memory a program is no longer using, and the subsequent
recycling of that memory. The automation of garbage collection relieves the user of time-consuming and error-prone tasks. There are a number of algorithms for the garbage collection process; these operate independently of "malloc." Memory allocation and deallocation can be divided into the following two categories.

(1) Static memory allocation. Static memory allocation refers to the process of allocating memory at compile time, before the associated program is executed. An application of this technique involves a program module (e.g., a function or subroutine) declaring static data locally, such that this data is inaccessible in other modules unless references to it are passed as parameters or returned. A single copy of static data is retained and accessible through many calls to the function in which it is declared. Static memory allocation therefore has the advantage of modularizing data within a program design in the situation where this data must be retained through the run time of the program. The use of static variables within a class in object-oriented programming enables a single copy of such data to be shared among all the objects of that class.

(2) Dynamic memory allocation. In computer science, dynamic memory allocation is the allocation of memory storage for use in a computer program during the run time of that program. It is a way of distributing ownership of limited memory resources among many pieces of data and code. A dynamically allocated object remains allocated until it is deallocated explicitly, either by the programmer or by a garbage collector; this is notably different from automatic and static memory allocation. Such an object is said to have dynamic lifetime. Fulfilling an allocation request, which involves finding a block of unused memory of a certain size in the heap, is a difficult problem. A wide variety of solutions have been proposed, mainly including the following:

(a) Free lists. A free list is a data structure used in a scheme for dynamic memory allocation. It operates by connecting unallocated regions of memory together in a linked list, using the first word of each unallocated region as a pointer to the next. It is most suitable for allocating from a memory pool, where all objects have the same size. Free lists make the allocation and deallocation operations very simple (a sketch follows after this list). To free a region, we just add it to the free list. To allocate a region, we simply remove a single region from the end of the free list and use it. If the regions are variable-sized,
we may have to search for a region of large enough size, which can be expensive. Free lists have the disadvantage, inherited from linked lists, of poor locality of reference and hence poor data cache utilization, and they provide no way of consolidating adjacent regions to fulfill allocation requests for large regions. Nevertheless, they are still useful in a variety of simple applications where a full-blown memory allocator is unnecessary or requires too much overhead.

(b) Paging. As mentioned earlier, the memory access part of paging is done at the hardware level through page tables and is handled by the MMU. Physical memory is divided into small blocks called pages (typically 4 kB or less in size), and each block is assigned a page number. The operating system may keep a list of free pages in its memory, or may choose to probe the memory each time a memory request is made (though most modern operating systems do the former). Whatever the case, when a program makes a request for memory, the operating system allocates a number of pages to the program, and keeps a list of allocated pages for that particular program in memory.
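As a rough sketch of the free-list idea in item (a), the code below threads free regions together by storing a "next" pointer in the first word of each free block; the fixed region size and static arena are simplifying assumptions.

#include <stddef.h>

#define REGION_SIZE 128                       /* all regions the same size here      */
#define NUM_REGIONS 16

/* Each free region's first word holds the pointer to the next free region. */
union region {
    union region *next;                       /* valid only while the region is free */
    unsigned char payload[REGION_SIZE];
};

static union region arena[NUM_REGIONS];       /* the memory being managed            */
static union region *free_list;               /* head of the list of free regions    */

void freelist_init(void)
{
    free_list = NULL;
    for (int i = 0; i < NUM_REGIONS; i++) {   /* chain every region together         */
        arena[i].next = free_list;
        free_list = &arena[i];
    }
}

void *freelist_alloc(void)                    /* O(1): pop the head of the list      */
{
    if (free_list == NULL)
        return NULL;                          /* out of regions                      */
    union region *r = free_list;
    free_list = r->next;
    return r->payload;
}

void freelist_free(void *p)                   /* O(1): push the region back          */
{
    union region *r = (union region *)p;      /* payload is the first member         */
    r->next = free_list;
    free_list = r;
}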
5.2.5.4 Memory Requests Management
Dealing with race conditions is also one of the hard aspects of managing memory requests. To manage the memory requests coming from the system, a scheduler is necessary in the application layer or in the kernel, in addition to the MMU as a hardware manager. The most common way of protecting data from concurrent access by the memory request scheduler is memory request contention. Section 5.2.3.4 of this book provides a detailed discussion of I/O request contention; the semantics and methodologies for memory request contention are the same, so we suggest referring to Section 5.2.3.4 for this topic.
5.2.6 Event Brokers
An event is a case or condition in a control system that requires the services of the microprocessor. Events may be caused by an executing program (an internal event) or by an outside agent (an external event). Internal events are triggered by a running program and may result from software errors, such as a protection violation, or from normal execution,
such as a page fault. External events are created by sources outside the execution of the current program and may include I/O events such as a disk request, timer-generated events, or even a message arrival in a message-passing multiprocessor system. The response of a microprocessor to an event may be precise or imprecise, and may be concurrent or sequential. The response to an event is precise if proper handler execution ensures correct program behavior. Precise event handling is guaranteed if the particular instruction that caused the event (the faulting instruction) is identified, and any instructions that need the result of the faulting instruction are not issued until the handler has generated this result. This definition of precise event handling enables us to use multithreading to implement concurrent event handling: the event handler and the faulting program run concurrently in different thread contexts. This is in contrast to sequential event handling, in which an event interrupts the faulting program, brings it to a sequentially consistent state, runs the handler to completion, and resumes the faulting program. Concurrent and sequential event handling achieve the same end (all instructions get correct input operands) but do so through different means. In this subsection, concurrent event handling with a multithreaded architecture is highlighted because it has several advantages over sequential event handling.
5.2.6.1 Event Notification Service
An event notification service (ENS) connects providers of information and interested clients. The ENS informs the clients about the occurrence of events from the providers. Events can be, for instance, changes of existing objects or the creation of new objects such as a new measurement coming from a meter in a control system. The service learns about events by means of event messages reporting the events. The event messages are compiled either by the providers (push) or by the service actively observing the providers (pull). Clients define their interest in certain events by means of personal profiles. The profile creation is based on the profile definition language of the service. The information about observed events is filtered according to these profiles, and notifications are sent to the interested clients. Figure 5.15 depicts the data-flow in a high-level architecture of an ENS. Keep in mind that the dataflow is independent of the delivery mode, such as push or pull. The tasks of an event notification service can be subdivided into four steps. First, the observable event (message) types are to
be determined and advertised to the clients. This advertisement can be implicit, for example, provided by the client interface or due to information the client has about the service. Second, the clients' profiles have to be defined through a client interface; they are stored within the service. Third, the occurring events have to be observed and filtered by the event notification service. Before notifications are created, the event information is combined to detect composite events, and the messages can be buffered to enable more efficient notification (e.g., by merging several messages into one notification). Fourth, the clients have to be notified, immediately or according to a given schedule.

Figure 5.15 Data flow of an event notification service.

In object-oriented programming designs, the following considerations regarding the ENS can be very helpful:

(1) In the programs, the Event and the Event-Data can be written as abstract objects in the operating system code. Each of the concrete events can be an instance of the abstract event object, and Event-Data should be linked to an event. Each of these event objects includes its own trigger list, and each node of this trigger list can be a client object for this event. Each client interested in the same event should be registered in the trigger list of the corresponding Event; this registration can be performed either dynamically at run time or initially when the system powers up. A data structure such as a hash table can be chosen to keep the event-to-client mapping in the program (a small C sketch of such a trigger list appears after this list).

(2) The client objects stored as nodes in the trigger list should include the handlers for the linked event, which can be routines, functions, or methods. Once an event occurs while a program is running, the program context should go to the trigger list of this event immediately; each node of this trigger list is visited and the corresponding event handlers are run. This is called broadcast semantics.
(3) In a multithreaded programming architecture, each of these routines can be implemented by one or more threads. Concurrent event handling requires synchronization between the handling routines. Although there are several ways to achieve this synchronization, the context switching of tasks or threads in the kernel of a microprocessor chip seems essential for this purpose. We can also use software interrupts to deal with the event handling routines: the occurrence of each event creates a software interrupt that leads the context to the handling routines, and when the routines are complete, the context is restored to the original process. However, for distributed control systems, message communication is the basic method for managing the synchronization between handling routines in different microprocessor chipsets.
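As a rough C sketch of the trigger-list idea in items (1) and (2) (the names, list depth, and static node pool are illustrative choices), each event keeps a list of registered client callbacks, and broadcasting the event simply walks that list:

#include <stddef.h>

#define MAX_CLIENTS 8

typedef void (*event_handler_t)(void *event_data);

struct client_node {                      /* one registered client per node      */
    event_handler_t handler;
    struct client_node *next;
};

struct event {                            /* the abstract "Event" object         */
    const char *name;
    struct client_node *trigger_list;     /* clients registered for this event   */
};

static struct client_node node_pool[MAX_CLIENTS];  /* static pool: no malloc     */
static int nodes_used;

/* Register a client handler on an event's trigger list, either at power-up
 * or dynamically at run time. */
int event_register(struct event *ev, event_handler_t handler)
{
    if (nodes_used >= MAX_CLIENTS)
        return -1;
    struct client_node *n = &node_pool[nodes_used++];
    n->handler = handler;
    n->next = ev->trigger_list;
    ev->trigger_list = n;
    return 0;
}

/* Broadcast semantics: when the event occurs, visit every node and run its handler. */
void event_broadcast(struct event *ev, void *event_data)
{
    for (struct client_node *n = ev->trigger_list; n != NULL; n = n->next)
        n->handler(event_data);
}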
5.2.6.2 Event Trigger
Triggers are operations linked to types and associated with events. When the specified event occurs on an object of the linked type, the trigger operation will be invoked automatically. A trigger specifies the name of the trigger along with the event that will trigger its execution. Four kinds of events may be specified: create, update, delete, and commit. A create event triggers the execution after creation. Update and delete events trigger execution before any changes are made to the actual stored object. A commit event triggers execution before the commit is performed. All triggers and events refer only to that part of the object associated to the type in question. Thus, dressing an object with an additional type will cause a new type part to be created and hence any create triggers associated with the acquired type to be invoked.
5.2.6.3 Event Broadcasts
The event broadcast is responsible for sending an event notification to all client applications that have previously registered for this event. The information passed to the client applications for an event notification includes the notification name and the identifier of the notifying session server. When broadcast semantics is used to signal the occurrence of events, a useful programming technique is to put the broadcasts in a rule used to wake up applications that need to pay attention to something. It is common for the broadcasting client to be one that also executes handling routines; in that case it will get back an event notification, just like all the other listening sessions. Depending on the application logic, this could result in useless work.
5.2.6.4 Event Handling Routine
An event handling routine is required to handle each type of event. In the event handling routine, the sources of the events are determined by reading the event flags (or event triggers). The event flags should be reset and one event should be handled at a time. The handling of events continues in this fashion until all event flags are found to be inactive, at which point the event routine is exited. It is recommended to use the trap mechanism of the microprocessor to execute the event handling routine, especially when handling an event from the hardware level. When an event is detected, the routine resets the event flag first and then handles the particular event. If the flag were instead reset after handling, a new event from the same source arriving while the previous event was being handled might be lost when the flag is finally reset; thus, the chance of losing events is decreased when the flag is reset immediately. After handling the event, the event flags should be read again, because the microprocessor trap should not exit from the event routine while there are pending events: while a previous event is being handled, a new event may have arrived (of the same type or a different type). By reading the event flags after each handled event, the user or programmer can easily implement an event priority scheme.
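A sketch of such a flag-polling loop is shown below; the flag register, its bit layout, and the handler table are illustrative assumptions rather than any real device's definition.

#include <stdint.h>

#define NUM_EVENT_SOURCES 8

/* Hypothetical memory-mapped event-flag register: one bit per source, and
 * writing a 1 to a bit clears (resets) that flag.  The address is made up. */
#define EVENT_FLAGS (*(volatile uint32_t *)0x40003000UL)

typedef void (*event_handler_t)(void);
static event_handler_t handlers[NUM_EVENT_SOURCES];  /* one handler per source  */

/* Event routine, e.g. entered through the processor's trap mechanism. */
void event_routine(void)
{
    uint32_t flags;

    /* Re-read the flags so that events arriving while we work are picked up
     * before the routine exits; scanning from bit 0 upward gives a simple
     * fixed priority scheme. */
    while ((flags = EVENT_FLAGS) != 0) {
        for (int src = 0; src < NUM_EVENT_SOURCES; src++) {
            if (flags & (1u << src)) {
                EVENT_FLAGS = (1u << src);   /* reset the flag first...          */
                if (handlers[src])
                    handlers[src]();         /* ...then handle that one event    */
                break;                       /* re-read flags after each event   */
            }
        }
    }
}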
5.2.7 Message Queue

5.2.7.1 Message Passing

An operating system provides inter-process communication (IPC) to allow processes to exchange information. IPC is useful for creating cooperating processes. The most popular form of IPC involves message passing, by which processes communicate with each other by exchanging messages. A process may send information to a channel (or port), from which another process may receive information. The sending and receiving processes can be on the same or different computers connected through a communication medium. One reason for the popularity of message passing is its ability to support client–server interaction. A server is a process that offers a set of services to client processes. These services are invoked in response to messages from the clients and results are returned in messages to the client. We shall be particularly interested in servers that offer operating system services. With such servers, part of the operating system functionality can be transferred from the kernel to utility processes. For instance, file management
can be handled by a file server, which offers services such as open, read, write, and seek. There are several issues involved in message passing. We discuss some of these below. (1) Reliability and order. Messages sent between computers can fail to arrive or can be garbled because of noise and contention for the communication line. There are techniques to increase the reliability of data transfer. However, these techniques cost both extra space (longer messages to increase redundancy, more code to check the messages) and time. Message passing techniques can be distinguished by the reliability by which they deliver messages. Another issue is whether messages sent to a channel are received in the order in which they are sent. Differential buffering delays and routings in a network environment can place messages out of order. It takes extra effort (in the form of sequence number, and more generally, time stamps) to ensure order. (2) Access. An important issue is how many readers and writers can exchange information at a channel. Different approaches impose various restrictions on the access to channels. A bound channel is the most restrictive: There may be only one reader and writer. At the other extreme, the free channel allows any number of readers and writers. These are suitable for programming the client and server interactions based on a family of servers providing a common service. A common service is associated with a single channel; clients send their service requests to this channel and servers providing the requested service receive service requests from the channel. Unfortunately, implementing free channels can be quite costly if in-order messages are to be supported. The message queue associated with the channel is kept at a site which, in general, is remote to both a sender and a receiver. Thus both sends and receives result in messages being sent to this site. The former put messages in this queue and the latter request messages from it. Between these two extremes are input channels and output channels. An input channel has only one reader but any number of writers. It models the fairly typical many client, one server situation. Input channels are easy to implement since all receivers that designate a channel occur in the same process. Thus, the message queue associated with a channel can be kept with the receiver. Output channels, in contrast, allow any number of readers but only one writer. They are easier to implement than free channels since the message queue can be kept with the sender. However, they are not popular since the one client and many server situations are very unusual.
Several applications can use more than one kind of channel. For instance, a client can enclose a bound channel in a request message to the input port of a file server. This bound channel can represent the opened file, and subsequent read and write requests can be directed to this channel.

(3) Synchronous and asynchronous. The send, receive, and reply operations may be synchronous or asynchronous. A synchronous operation blocks a process until the operation completes, whereas an asynchronous operation is nonblocking and only initiates the operation; the caller discovers completion by some other mechanism. Note that synchronous implies blocking and asynchronous implies nonblocking, but not vice versa: not every blocking operation is synchronous and not every nonblocking operation is asynchronous. For instance, a send that blocks until the receiver machine has received the message is blocking but not synchronous, since the receiver process may not have received it. These definitions of synchronous and asynchronous operations are similar but not identical to the ones given in many textbooks, which tend to equate synchronous with blocking.

Asynchronous message passing allows more parallelism. Since a process does not block, it can do some computation while the message is in transit. In the case of receive, this means a process can express its interest in receiving messages on multiple channels simultaneously. In a synchronous system, such parallelism can be achieved by forking a separate process for each concurrent operation, but this approach incurs the cost of extra process management. Asynchronous message passing also introduces several problems. What happens if a message cannot be delivered? The sender may never wait for delivery of the message, and thus never hear about the error. Similarly, a mechanism is needed to notify an asynchronous receiver that a message has arrived. The operation invoker could learn about completion or errors by polling, by receiving a software interrupt, or by waiting explicitly for completion later using a special synchronous wait call. An asynchronous operation needs to return a call ID if the application needs to be notified later about the operation; at notification time, this ID would be placed in some global location or passed as an argument to a handler or wait call. Another problem related to asynchronous message passing has to do with buffering: if messages sent asynchronously are buffered in a space managed by the operating system, then a process may fill this space by flooding the system with a large number of messages.
5.2.7.2 Message Queue Types
Message queues allow one or more processes to write messages, which will be read by one or more reading processes. Most operating systems or kernels maintain a list of message queues in a vector; each element of the vector points to a data structure that fully describes one message queue. When a message queue is created, a new data structure is allocated from system memory and inserted into the vector. Figure 5.16 displays a possible design of a system message queue. In Fig. 5.16, each "msqid_ds" data structure contains an "ipc_" data structure and pointers to the messages entered onto this queue. In addition, it keeps queue modification times, such as the last time that this queue was written to, and so on. The "msqid_ds" also contains two wait queues, one for the writers to the queue and one for the readers of the message queue. Each time a process attempts to write a message to the message queue, its effective user and group identifiers are compared with the mode in this queue's "ipc_" data structure. If the process can write to the queue, then the message may be copied from the process's address space into a "msg" data structure and put at the end of this message queue. Each message is tagged with an application-specific type, agreed between the cooperating processes. However, there may be no room for the message, as it is possible that the operating system restricts the number and length of messages that
[Figure 5.16 sketch: a msg_queue descriptor (msg_queue_id, an ipc permissions structure, *msg_first and *msg_last pointers, modification times, writer and reader wait queues, and a message count) heads a linked list of msg structures (*msg_next, msg_type, *msg_spot, msg_stime, msg_ts), each referencing a message body.]
Figure 5.16 A design for system message queues.
can be written. In this case, the process will be added to this message queue's write wait queue and the scheduler will be called to select a new process to run. It will be woken up when one or more messages have been read from this message queue. Reading from the queue is a similar process. Again, the process's access rights to the queue are checked. A reading process may choose to either get the first message in the queue regardless of its type or select messages with particular types. If no messages match these criteria, the reading process will be added to the message queue's read wait queue and the scheduler is run. When a new message is written to the queue, this process will be woken up and run again.
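The design just described resembles the System V message-queue facility found on many UNIX-like systems; as a hedged illustration, a minimal sketch of sending and receiving a typed message through such a queue follows, with an illustrative key and message type.

/* System V message queue: send and receive one typed message. */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>
#include <string.h>

struct demo_msg {
    long mtype;          /* application-specific type tag */
    char mtext[64];      /* message body                  */
};

int main(void)
{
    int qid = msgget((key_t)0x1234, IPC_CREAT | 0600);
    if (qid == -1) { perror("msgget"); return 1; }

    struct demo_msg out = { .mtype = 1 };
    strcpy(out.mtext, "sensor reading ready");
    msgsnd(qid, &out, sizeof(out.mtext), 0);     /* blocks if the queue is full */

    struct demo_msg in;
    msgrcv(qid, &in, sizeof(in.mtext), 1, 0);    /* wait for a type-1 message   */
    printf("got: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);                 /* remove the queue            */
    return 0;
}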
5.2.7.3 Pipes
In the implementations of the IPC, one solution to some of the buffering problems of asynchronous send is to provide an intermediate degree of synchrony between pure synchronous and asynchronous. We can treat the set of message buffers as a “traditional bounded buffer” that blocks the sending process when there are no more buffers available. That is exactly the kind of message passing supported by pipes. Pipes also allow the output of one process to become the input of another. A pipe is like a file opened for reading and writing. Pipes are constructed by the service call pipe, which opens a new pipe and returns two descriptors for it, one for reading and another for writing. Reading a pipe advances the read buffer, and writing it advances the write buffer. The operating system may only wish to buffer a limited amount of data for each pipe, so an attempt to write to a full pipe may block the writer. Similarly, an attempt to read from an empty buffer will block the reader. Though a pipe may have several readers and writers, it is really intended for one reader and writer. Pipes are used to unify both the input and output mechanisms and IPC. Processes expect that they will have two descriptors when they start, one called “standard input” and another called “standard output.” Typically, the first is a descriptor for the terminal open for input, and the second is a similar descriptor for output. However, the command interpreter, which starts most processes, can arrange for these descriptors to be different. If the standard output descriptor happens to be a file descriptor, the output of the process will go to the file, and not to the terminal. Similarly, the command interpreter can arrange for the standard output of one process to be one end of a pipe and for the other end of the pipe to be standard input for a second process. Thus, a listing program can be piped to a sorting program, which in turn directs its output to a file.
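A minimal sketch of this mechanism, assuming a POSIX environment: pipe() returns the two descriptors, the child reads what the parent writes, and either end blocks when the pipe is empty or full.

/* Parent writes into a pipe; child reads from it. */
#include <unistd.h>
#include <stdio.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                        /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                   /* child: the reader                 */
        close(fd[1]);
        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* blocks if empty */
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    }
    close(fd[0]);                     /* parent: the writer                */
    write(fd[1], "start cycle", 11);  /* blocks if the pipe is full        */
    close(fd[1]);
    wait(NULL);
    return 0;
}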
Conceptually, a pipe can be thought of much like a hose-pipe, in that it is a conduit where we pour data in at one end and it flows out at the other. A pipe looks a lot like a file in that it is treated as a sequential data stream. Unlike a file, a pipe has two ends, so when we create a pipe we get two end points back in return. We can write to one end point and read from the other. Also unlike a file, there is no physical storage of data when we close a pipe; anything that was written in one end but not read out from the other end will be lost. We can illustrate the use of pipes as conduits between processes in the diagram given by Fig. 5.17. Here we see two processes that are called parent and child, for reasons that will become apparent shortly. The parent can write to Pipe A and read from Pipe B. The child can read from Pipe A and write to Pipe B. A pipe can be implemented using two file data structures that both point at the same temporary inode which itself points at a physical page within memory. Figure 5.18 shows that each file data structure contains pointers to different file operation routine vectors: one for writing to the pipe, the other for reading from the pipe. This hides the underlying differences from the generic system calls which read and write to ordinary files. As the writing process writes to the pipe, bytes are copied into the shared data page and when the reading process reads from the pipe, bytes are copied from the shared data page. The operating system must synchronize access to the pipe. It must make sure that the reader and the writer of the pipe are in step and to do this it uses locks, wait queues, and signals. When the writer wants to write to the pipe it uses the standard write library functions. These all pass file descriptors that are indices into the process's set of file data structures, each one representing an open file, or, as in this case, an open pipe. The operating system call uses the write routine pointed at by the file data structure describing this pipe. That write routine uses information held in the inode representing the pipe to manage the write request. If there is enough room to write all of the bytes into the pipe and, so long as the pipe is not locked by its reader, the operating system locks it for the
Figure 5.17 Pipes in IPC.
[Figure 5.18 sketch: two processes each hold a file data structure (f_mode, f_pos, f_flags, f_count, f_owner, f_inode, f_op, f_version); both f_inode fields point at the same pipe inode, which references a shared data page, while one file structure's f_op points to the pipe write operations and the other's to the pipe read operations.]
Figure 5.18 Pipes.
writer and copies the bytes to be written from the process's address space into the shared data page. If the pipe is locked by the reader or if there is not enough room for the data, then the current process is made to sleep on the pipe inode's wait queue and the scheduler is called so that another process can run. The sleep is interruptible, so the process can receive signals, and it will be woken by the reader when there is enough room for the write data or when the pipe is unlocked. When the data has been written, the pipe's inode is unlocked and any waiting readers sleeping on the inode's wait queue will themselves be woken up. Reading data from the pipe is a very similar process to writing to it. Some operating systems also support named pipes, also known as FIFOs because pipes operate on a first in, first out principle. The first data written into the pipe is the first data read from the pipe. Unlike pipes, FIFOs are not temporary objects; they are entities in the file system and can be created using a system command. Processes are free to use a FIFO so long as they have appropriate access rights to it. The way that FIFOs are opened is
a little different from pipes. A pipe (its two file data structures, its inode, and the shared data page) is created in one go, whereas a FIFO already exists and is opened and closed by its users. It must handle readers opening the FIFO before writers open it, as well as readers reading before any writers have written to it. That aside, FIFOs are handled almost exactly the same way as pipes, and they use the same data structures and operations.
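A minimal sketch of the reader side of a named pipe, assuming a POSIX system where FIFOs are created with mkfifo(); the path is illustrative, and a writer in another process would open the same path for writing.

/* Create and read from a FIFO; blocks until a writer opens it and writes. */
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);               /* create the FIFO in the file system */

    int fd = open(path, O_RDONLY);    /* blocks until a writer opens it     */
    if (fd == -1) { perror("open"); return 1; }

    char buf[32];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; printf("reader got: %s\n", buf); }

    close(fd);
    unlink(path);                     /* the FIFO persists until removed    */
    return 0;
}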
5.2.8 Semaphores
Semaphores are used to control access to shared resources by processes. There are named and unnamed semaphores. Named semaphores provide access to a resource between multiple processes. Unnamed semaphores provide multiple accesses to a resource within a single process or between related processes. Some semaphore functions are specifically designed to perform operations on named or unnamed semaphores. A semaphore is an integer variable taking on the values 0 to a predefined maximum. Each semaphore is associated with a queue for process suspension. The order of process activation from the queue must be fair. Two indivisible or atomic operations are defined for a semaphore: (1) WAIT: decrease the counter by one; if it becomes negative, block the process and enter this process's ID in the waiting processes queue. (2) SIGNAL: increase the semaphore by one; if it is still not positive (less than or equal to zero), unblock the first process of the waiting processes queue, removing this process's ID from the queue itself. In its simplest form, a semaphore is a location in memory whose value can be tested and set by more than one process. The test and set operation is, so far as each process is concerned, uninterruptible or atomic; once started nothing can stop it. The result of the test and set operation is the addition of the current value of the semaphore and the set value, which can be positive or negative. Depending on the result of the test and set operation, one process may have to sleep until the semaphore's value is changed by another process. Say you had many cooperating processes reading records from and writing records to a single data file. You would want that file access to be strictly coordinated. You could use a semaphore with an initial value of 1 and, around the file operating code, put two semaphore operations, the first to test and decrement the semaphore's value and the second to test and increment it. The first process to access the file would try to decrement the
semaphore's value and it would succeed, the semaphore's value now being 0. This process can now go ahead and use the data file but if another process wishing to use it now tries to decrement the semaphore's value, it would fail as the result would be -1. That process will be suspended until the first process has finished with the data file. When the first process has finished with the data file it will increment the semaphore's value, making it 1 again. Now the waiting process can be woken and this time its attempt to decrement the semaphore will succeed. If all of the semaphore operations succeed and the current process does not need to be suspended, the operating system goes ahead and applies the operations to the appropriate members of the semaphore array. Now the operating system must check any waiting or suspended processes that may now apply for their semaphore operations. It looks at each member of the operations pending queue in turn, testing to see if the semaphore operations will succeed this time. If they will, then it removes the data structure representing this process from the operations pending list and applies the semaphore operations to the semaphore array. It wakes up the sleeping process, making it available to be restarted the next time the scheduler runs. The operating system keeps looking through the pending processes queue from the start until there is a pass where no semaphore operations can be applied and so no more processes can be woken.
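A minimal sketch of the data-file scenario above using a POSIX semaphore initialized to 1; threads are used here for brevity (across separate processes, a named or process-shared semaphore would play the same role), and the file access itself is only indicated by a comment.

/* Mutual exclusion around shared file access with a POSIX semaphore. */
#include <semaphore.h>
#include <pthread.h>

static sem_t file_sem;

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&file_sem);         /* "test and decrement": blocks while 0   */
    /* ... read or write records in the shared data file ...               */
    sem_post(&file_sem);         /* increment, waking one waiting thread   */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&file_sem, 0, 1);   /* initial value 1: the file is free      */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&file_sem);
    return 0;
}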
5.2.8.1 Semaphore Depth and Priority
Semaphores are global entities and are not associated with any particular process. In this sense, semaphores have no owners, making it impossible to track semaphore ownership for any purpose, for example, error recovery. Semaphore protection works only if all the processes using the shared resource cooperate by waiting for the semaphore when it is unavailable and incrementing the semaphore value when relinquishing the resource. Since semaphores lack owners, there is no way to determine whether one of the cooperating processes has become uncooperative. Applications using semaphores must carefully detail cooperative tasks. All of the processes that share a resource must agree on which semaphore controls the resource. There is a problem with semaphores, called "deadlock," which occurs when one process has altered the semaphore's value as it enters a critical region but then fails to leave the critical region because it crashed or was killed. Some protection can be provided against this by maintaining lists of adjustments to the semaphore arrays. The idea is that when these
adjustments are applied, the semaphores will be put back to the state that they were in before the process's set of semaphore operations was applied. Another problem, "priority inversion," is a form of indefinite postponement that is common in multitasking, preemptive executives with shared resources. Priority inversion occurs when a high priority task requests access to a shared resource which is currently allocated to a low priority task. The high priority task must block until the low priority task releases the resource. This problem is exacerbated when the low priority task is prevented from executing by one or more medium priority tasks. Because the low priority task is not executing, it cannot complete its interaction with the resource and release that resource. The high priority task is effectively prevented from executing by lower priority tasks. Priority inheritance is an algorithm that calls for the lower priority task holding a resource to have its priority increased to that of the highest priority task blocked waiting for that resource. Each time a task blocks attempting to obtain the resource, the task holding the resource may have its priority increased. Some kernels support priority inheritance for local, binary semaphores that use the priority task wait queue blocking discipline. When a task of higher priority than the task holding the semaphore blocks, the priority of the task holding the semaphore is increased to that of the blocking task. When the task holding the semaphore completely releases the binary semaphore (i.e., not for a nested release), the holder's priority is restored to the value it had before any higher priority was inherited. The implementation of the priority inheritance algorithm takes into account the scenario in which a task holds more than one binary semaphore. The holding task will execute at the higher of the highest ceiling priority or the priority of the highest priority task blocked waiting for any of the semaphores the task holds. Only when the task releases all of the binary semaphores it holds will its priority be restored to the normal value. Priority ceiling is an algorithm that calls for the lower priority task holding a resource to have its priority increased to that of the highest priority task which will ever block waiting for that resource. This algorithm addresses the problem of priority inversion while avoiding the possibility of changing the priority of the task holding the resource multiple times. The priority ceiling algorithm will only change the priority of the task holding the resource a maximum of one time. The ceiling priority is set at creation time and must be the priority of the highest priority task which will ever attempt to acquire that semaphore. Some kernels support priority ceiling for local, binary semaphores that use the priority task wait queue
blocking discipline. When a task of lower priority than the ceiling priority successfully obtains the semaphore, its priority is raised to the ceiling priority. When the task holding the semaphore completely releases the binary semaphore (i.e., not for a nested release), the holder's priority is restored to the value it had before any higher priority was put into effect. The need to identify the highest priority task that will attempt to obtain a particular semaphore can be a difficult task in a large, complicated system. Although the priority ceiling algorithm is more efficient than the priority inheritance algorithm with respect to the maximum number of task priority changes that may occur while a task holds a particular semaphore, the priority inheritance algorithm is more forgiving in that it does not require this information in advance. The implementation of the priority ceiling algorithm takes into account the scenario in which a task holds more than one binary semaphore. The holding task will execute at the higher of the highest ceiling priority or the priority of the highest priority task blocked waiting for any of the semaphores the task holds. Only when the task releases all of the binary semaphores it holds will its priority be restored to the normal value.
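The POSIX thread interface exposes the same two algorithms as optional mutex protocols; a hedged sketch follows (support for these protocols is platform dependent, and the ceiling value of 80 is purely illustrative).

/* Selecting priority inheritance or priority ceiling for a POSIX mutex. */
#include <pthread.h>

pthread_mutex_t resource_lock;

int init_resource_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);

    /* Priority inheritance: the holder is boosted to the priority of the
       highest priority task blocked on the mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);

    /* Priority ceiling would instead be:
       pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
       pthread_mutexattr_setprioceiling(&attr, 80);   // assumed ceiling   */

    int rc = pthread_mutex_init(&resource_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}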
5.2.8.2 Semaphore Acquire, Release and Shutdown
A semaphore can be viewed as a protected variable whose value can be modified only with the methods for creating semaphore, obtaining semaphore, and releasing semaphore. Many kernels support both binary and counting semaphores. A binary semaphore is restricted to values of zero or one, while a counting semaphore can assume any nonnegative integer value. A binary semaphore can be used to control access to a single resource. In particular, it can be used to enforce mutual exclusion for a critical section in user code. In this instance, the semaphore would be created with an initial count of one to indicate that no task is executing the critical section of code. On entry to the critical section, a task must issue the method for obtaining a semaphore to prevent other tasks from entering the critical section. On exit from the critical section, the task must issue the method for releasing the semaphore to allow another task to execute the critical section. A counting semaphore can be used to control access to a pool of two or more resources. For example, access to three printers could be administered by a semaphore created with an initial count of three. When a task requires access to one of the printers, it issues the method for obtaining semaphore to obtain access to a printer. If a printer is not currently available,
the task can wait for a printer to become available or return immediately. When the task has completed printing, it should issue the method for releasing the semaphore to allow other tasks access to the printer. Task synchronization may be achieved by creating a semaphore with an initial count of zero. One task waits for the arrival of another task by issuing a method that obtains the semaphore when it reaches a synchronization point. The other task performs a corresponding semaphore release operation when it reaches its synchronization point, thus unblocking the pending task. (1) Creating a semaphore. This method creates a binary or counting semaphore with a user-specified name as well as an initial count. If a binary semaphore is created with a count of zero (0) to indicate that it has been allocated, then the task creating the semaphore is considered the current holder of the semaphore. At create time, the method for ordering waiting tasks in the semaphore's task wait queue (by FIFO or task priority) is specified. In addition, the priority inheritance or priority ceiling algorithm may be selected for local, binary semaphores that use the priority task wait queue blocking discipline. If the priority ceiling algorithm is selected, then the highest priority of any task that will attempt to obtain this semaphore must be specified. (2) Obtaining semaphore IDs. When a semaphore is created, the kernel generates a unique semaphore ID and assigns it to the created semaphore until it is deleted. The semaphore ID may be obtained by either of two methods. First, when the semaphore is created by means of the semaphore creation method, the semaphore ID is stored in a user-provided location. Second, the semaphore's ID may be obtained later using the semaphore identify method. The semaphore's ID is used by other semaphore manager methods to access this semaphore. (3) Acquiring a semaphore. The method for acquiring a semaphore is used to acquire the specified semaphore. A simplified version of this method can be described as follows: if the semaphore's count is greater than zero then decrement the semaphore's count, or else wait for release of the semaphore, and return "successful." When the semaphore cannot be immediately acquired, one of the following situations applies: By default, the calling task will wait forever to acquire the semaphore. If the task waits to acquire the semaphore, then it is placed in the semaphore's task wait queue in either FIFO or task priority order. If the task blocks waiting for a binary semaphore that uses priority inheritance and the task's priority is greater than that of the task currently holding
the semaphore, then the holding task will inherit the priority of the blocking task. All tasks waiting on a semaphore are returned an error code when the semaphore is deleted. When a task successfully obtains a semaphore using priority ceiling and the priority ceiling for this semaphore is greater than that of the holder, then the holder's priority will be elevated. (4) Releasing a semaphore. The method for releasing semaphore is used to release the specified semaphore. A simplified version of the semaphore release method can be described as follows: if no tasks are waiting on this semaphore then increment the semaphore's count or else assign the semaphore to a waiting task and return "successful." If this is the outermost release of a binary semaphore that uses priority inheritance or priority ceiling and the task does not currently hold any other binary semaphores, then the task performing the semaphore release will have its priority restored to its normal value. (5) Deleting a semaphore. The semaphore delete method removes a semaphore from the system and frees its control block. A semaphore can be deleted by any local task that knows the semaphore's ID. As a result of this directive, all tasks blocked waiting to acquire the semaphore will be readied and returned a status code that indicates that the semaphore was deleted. Any subsequent references to the semaphore's name and ID are invalid.
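A minimal sketch of the printer-pool example using POSIX counting semaphores; the handbook's create, acquire, and release methods are generic, so this is only one possible realization, in which sem_wait gives the wait-forever behavior, sem_trywait the return-immediately option, and the printer I/O is only indicated by comments.

/* Counting semaphore controlling a pool of three printers. */
#include <semaphore.h>
#include <errno.h>
#include <stdio.h>

static sem_t printers;           /* counts the printers still available */

void print_job_blocking(void)
{
    sem_wait(&printers);         /* acquire: wait for a free printer    */
    /* ... send the job to a printer ... */
    sem_post(&printers);         /* release: printer available again    */
}

int print_job_nonblocking(void)
{
    if (sem_trywait(&printers) == -1 && errno == EAGAIN)
        return -1;               /* no printer free: return immediately */
    /* ... send the job to a printer ... */
    sem_post(&printers);
    return 0;
}

int main(void)
{
    sem_init(&printers, 0, 3);   /* three printers in the pool          */
    print_job_blocking();
    if (print_job_nonblocking() == -1)
        printf("all printers busy\n");
    sem_destroy(&printers);
    return 0;
}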
5.2.8.3 Condition and Locker
The paragraphs below present a series of basic synchronization problems including serialization and mutual exclusion and show some ways of using semaphores to solve them. (1) Signaling. Possibly the simplest use for a semaphore is signaling, which means that one thread sends a signal to another thread to indicate that something has happened. Signaling makes it possible to guarantee that a section of code in one thread will run before a section of code in another thread; in other words, it solves the serialization problem. Assume that we have a semaphore named “sem” with initial value 0, and that threads A and B have shared access to it as given in Fig. 5.19. The word “statement” represents an arbitrary program statement. To make the example concrete, imagine that “a1” reads a line from a file, and “b1” displays the line on the screen. The semaphore in this program guarantees that thread A has completed
Thread A:
    statement a1
    sem.signal()
Thread B:
    sem.wait()
    statement b1
Figure 5.19 Signaling by semaphore.
a1 before thread B begins "b1." Here is how it works: if thread B gets to the wait statement first, it will find the initial value, zero, and it will block. Then when thread A signals, thread B proceeds. Similarly, if thread A gets to the signal first then the value of the semaphore will be incremented, and when thread B gets to the wait, it will proceed immediately. Either way, the order of a1 and b1 is guaranteed. This use of semaphores is the basis of the names signal and wait, and in this case the names are conveniently mnemonic. Unfortunately, we will see other cases where the names are less helpful. Speaking of meaningful names, "sem" is not one. When possible, it is a good idea to give a semaphore a name that indicates what it represents. In this case, a name like a1_Done might be good, where a1_Done = 0 means that a1 has not executed and a1_Done = 1 means it has. (2) Mutex. A second common use for semaphores is to enforce mutual exclusion. We have already seen one use for mutual exclusion: controlling concurrent access to shared variables. The mutex guarantees that only one thread accesses the shared variable at a time. A mutex is like a token that passes from one thread to another, allowing one thread at a time to proceed. For example, in "The Lord of the Flies" a group of children use a conch as a mutex. To speak, you have to hold the conch. As long as only one child holds the conch, only one can speak. Similarly, in order for a thread to access a shared variable, it has to "get" the mutex; when it is done, it "releases" the mutex. Only one thread can hold the mutex at a time. Create a semaphore named mutex that is initialized to 1. A value of one means that a thread may proceed and access the shared variable. A value of zero means that it has to wait for another thread to release the mutex. A code segment performing mutual exclusion with a semaphore is given in Fig. 5.20. Since mutex is initially 1, whichever thread gets to the wait first will be able to proceed immediately. Of course, the act
Thread A:
    mutex.wait()
    # critical section
    count = count + 1
    mutex.signal()
Thread B:
    mutex.wait()
    # critical section
    count = count + 1
    mutex.signal()
Figure 5.20 Mutex by semaphore.
of waiting on the semaphore has the effect of decrementing it, so the second thread to arrive will have to wait until the first signals. In this example, both threads are running the same code. This is sometimes called a symmetric solution. If the threads have to run different code, the solution is asymmetric. Symmetric solutions are often easier to generalize. In this case, the mutex solution can handle any number of concurrent threads without modification. As long as every thread waits before performing an update and signals after, then no two threads will access count concurrently. Often the code that needs to be protected is called the critical section that is critically important to prevent from concurrent access. In the tradition of computer science and mixed metaphors, there are several other ways people sometimes talk about mutexes. In the metaphor we have been using so far, the mutex is a token that is passed from one thread to another. In an alternative metaphor, we think of the critical section as a room, and only one thread is allowed to be in the room at a time. In this metaphor, mutexes are called locks, and a thread is said to lock the mutex before entering and unlock it while exiting. Occasionally, though, people mix the metaphors and talk about “getting” or “releasing” a lock, which does not make much sense. Both metaphors are potentially useful and potentially misleading. As you work on the next problem, try to figure out both ways of thinking and see which one leads you to a solution. (3) Multiplex. Generalize the previous solution so that it allows multiple threads to run in the critical section at the same time, but it enforces an upper limit on the number of concurrent threads. In other words, no more than n threads can run in the critical section at the same time. This pattern is called a multiplex. In real life, the multiplex problem occurs at busy nightclubs where there are a maximum number of people allowed in the building at a time, either to
maintain fire safety or to create the illusion of exclusivity. At such places a bouncer usually enforces the synchronization constraint by keeping track of the number of people inside and barring arrivals when the room is at capacity. Then, whenever one person leaves another is allowed to enter. Enforcing this constraint with semaphores may sound difficult, but it is almost trivial (Fig. 5.21). To allow multiple threads to run in the critical section, just initialize the mutex to n, which is the maximum number of threads that should be allowed. At any time, the value of the semaphore represents the number of additional threads that may enter. If the value is zero, then the next thread will block until one of the threads inside exits and signals. When all threads have exited the value of the semaphore is restored to n. Since the solution is symmetric, it is conventional to show only one copy of the code, but you should imagine multiple copies of the code running concurrently in multiple threads. What happens if the critical section is occupied and more than one thread arrives? Of course, what we want is for all the arrivals to wait. This solution does exactly that. Each time an arrival joins the queue, the semaphore is decremented, so that the value of the semaphore (negated) represents the number of threads in queue. When a thread leaves, it signals the semaphore, incrementing its value and allowing one of the waiting threads to proceed. Thinking again of metaphors, in this case it is useful to think of the semaphore as a set of tokens (rather than a lock). As each thread invokes wait, it picks up one of the tokens; when it invokes a signal it releases one. Only a thread that holds a token can enter the room. If no tokens are available when a thread arrives, it waits until another thread releases one. In real life, ticket windows sometimes use a system like this. They hand out tokens (sometimes poker chips) to customers in line. Each token allows the holder to buy a ticket. (4) Barrier. Barrier is hinted by presenting the variables used in solution and explaining their roles in Fig. 5.22.
Multiplex solution:
    multiplex.wait()
    critical section
    multiplex.signal()
Figure 5.21 Multiplex by semaphore.
Barrier hint:
    int n                  # the number of threads
    int count = 0
    Semaphore mutex = 1
    Semaphore barrier = 0
Figure 5.22 Barrier by semaphore (1).
Finally, here in Fig. 5.23 is a working barrier. The only change from the signaling is another signal after waiting at the barrier. Now as each thread passes, it signals the semaphore so that the next thread can pass. This pattern, a wait and a signal in rapid succession, occurs often enough that it has a name; it is called a turnstile, because it allows one thread to pass at a time, and it can be locked to bar all threads. In its initial state (zero), the turnstile is locked. The nth thread unlocks it and then all n threads go through.
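As a concrete counterpart to the pseudocode in Figs. 5.22 and 5.23, the following is a minimal C sketch using POSIX semaphores and threads; N and the thread bodies are illustrative, and the count check is done while still holding the mutex, a slight variation on the figure that avoids a possible duplicate signal of the turnstile.

/* One-shot barrier: no thread passes until all N have arrived. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 4                      /* the number of threads                */

static int count = 0;
static sem_t mutex;              /* initialized to 1                     */
static sem_t barrier;            /* the turnstile, initialized to 0      */

static void *worker(void *arg)
{
    sem_wait(&mutex);
    count = count + 1;
    if (count == N)
        sem_post(&barrier);      /* the Nth thread unlocks the turnstile */
    sem_post(&mutex);

    sem_wait(&barrier);          /* turnstile: one thread passes ...     */
    sem_post(&barrier);          /* ... and lets the next one through    */

    printf("thread %ld past the barrier\n", (long)(size_t)arg);
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    sem_init(&mutex, 0, 1);
    sem_init(&barrier, 0, 0);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, worker, (void *)(size_t)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}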
Barrier solution:
    mutex.wait()
    count = count + 1
    mutex.signal()
    if count == n: barrier.signal()
    barrier.wait()
    barrier.signal()
    critical point

Figure 5.23 Barrier by semaphore (2).

5.2.9 Timers

Most control systems have some electronic timers. These are usually just digital counters that are set to a number by software, and then count down to zero. When they reach zero, they interrupt the controller. Another common form of timer is a number that is compared to a counter. This is somewhat harder to program, but can be used to measure events or control motors (using a digital electronic amplifier to perform pulse-width modulation). Embedded systems often use a hardware timer to implement a list of
software timers. Basically, the hardware timer is set to expire at the expiration time of the earliest software timer in the list. The hardware timer's interrupt handler does the housekeeping of notifying the rest of the software, finding the next software timer to expire, and resetting the hardware timer to that timer's expiration.
5.2.9.1 Kernel Timers
Whenever you need to schedule an action to happen later, without blocking the current process until that time arrives, kernel timers are the tool for you. These timers are used to schedule execution of a function at a particular time in the future, based on the clock tick, and can be used for a variety of tasks; for example, polling a device by checking its state at regular intervals when the hardware cannot fire interrupts. Other typical uses of kernel timers are turning off the floppy motor or finishing another lengthy shutdown operation. Finally, the kernel itself uses the timers in several situations, including the implementation of "schedule timeout." A kernel timer is a data structure that instructs the kernel to execute a user-defined function with a user-defined argument at a user-defined time. The functions scheduled to run almost certainly do not run while the process that registered them is executing. They are, instead, run asynchronously. When a timer runs, the process that scheduled it could be asleep, executing on a different processor, or quite possibly has exited altogether. In fact, kernel timers are run as the result of a "software interrupt." When running in this sort of atomic context, your code is subject to a number of constraints. Timer functions must be atomic in all the usual ways, and there are some additional issues brought about by the lack of a process context. This bears repeating because the rules for atomic contexts must be followed assiduously, or the system will find itself in deep trouble. One other important feature of kernel timers is that a task can reregister itself to run again at a later time. This is possible because each timer list structure is unlinked from the list of active timers before being run and can, therefore, be immediately relinked elsewhere. Although rescheduling the same task over and over might appear to be a pointless operation, it is sometimes useful. For example, it can be used to implement the polling of devices. Note also that a timer that reregisters itself always runs on the same CPU that registered it. An important feature of timers is that they are a potential source of race conditions, even on single-processor systems. This is a direct result of their being asynchronous with other code. Therefore, any data structures
accessed by the timer function should be protected from concurrent access, either by being atomic types or by using spin locks. The implementation of the timers has been designed to meet the following requirements and assumptions: (1) Timer management must be as lightweight as possible. (2) The design should scale well as the number of active timers increases. (3) Most timers expire within a few seconds or minutes at most, while timers with long delays are pretty rare. (4) A timer should run on the same CPU that registered it. The solution devised by kernel developers is based on a per-CPU data structure. The “Timer list” structure includes a pointer to that data structure in its base field. If base is null, the timer is not scheduled to run; otherwise, the pointer tells which data structure (and, therefore, which CPU) runs it. Whenever kernel code registers a timer, the operation is eventually performed which, in turn, adds the new timer to a double-linked list of timers within a “cascading table” associated with the current CPU. The cascading table works like this: if the timer expires in the next 0–255 jiffies, it is added to one of the 256 lists devoted to short-range timers using the least significant bits of the expired field. If it expires farther in the future (but before 16,384 jiffies), it is added to one of 64 lists based on bits 9–14 of the expired fields. For timers expiring even farther, the same trick is used for bits 15–20, 21–26, and 27–31. Timers with an expire field pointing still farther in the future (something that can happen only on 64-bit platforms) are hashed with a delay value of 0xffffffff, and timers with expires in the past are scheduled to run at the next timer tick. (A timer that is already expired may sometimes be registered in high-load situations, especially if you run a preemptible kernel.). Keep in mind, however, that a kernel timer is far from perfect, as it suffers from other artifacts induced by hardware interrupts, as well as other timers and other asynchronous tasks. While a timer associated with simple digital I/O can be enough for simple tasks like running a stepper motor or other amateur electronics, it is usually not suitable for production systems in industrial environments. For such tasks, you will most likely need to resort to a real-time kernel extension.
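As a hedged illustration of the polling use just described, the sketch below uses the older Linux kernel timer interface (init_timer/add_timer/mod_timer) that was current around the time of writing; field and function names vary between kernel versions, and the 100 ms interval and the polled device are assumptions for the example.

/* Self-rearming kernel timer that polls a device every 100 ms. */
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list poll_timer;

static void poll_device(unsigned long data)
{
    /* Runs in software-interrupt (atomic) context: no sleeping here.      */
    /* ... read the device state, protect shared data with a spinlock ...  */

    /* Reregister so the poll repeats; it will run on the same CPU.        */
    mod_timer(&poll_timer, jiffies + msecs_to_jiffies(100));
}

static void start_polling(void)
{
    init_timer(&poll_timer);
    poll_timer.function = poll_device;
    poll_timer.data     = 0;
    poll_timer.expires  = jiffies + msecs_to_jiffies(100);
    add_timer(&poll_timer);
}

static void stop_polling(void)
{
    del_timer_sync(&poll_timer);   /* wait for a running handler to finish */
}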
5.2.9.2 Watchdog Timers
(1) Working mechanism. A watchdog timer is a piece of hardware, often built into a microcontroller, that can cause a processor reset when it judges that the system has hung or is no longer executing the correct sequence of code. The hardware component of a watchdog is a counter that is set to a certain value and then counts
down towards zero. It is the responsibility of the software to set the count to its original value often enough to ensure that it never reaches zero. If it does reach zero, it is assumed that the software has failed in some manner and the CPU is reset. It is also possible to design the hardware so that a kick that occurs too soon will cause a bite, but to use such a system, very precise knowledge of the timing characteristics of the main loop of your program is required. A properly designed watchdog mechanism should, at the very least, catch events that hang the system. In electrically noisy environments, a power glitch may corrupt the program counter, stack pointer, or data in RAM. The software would crash almost immediately, even if the code is completely bug-free. This is exactly the sort of transient failure that watchdogs will catch. Bugs in software can also cause the system to hang, if they lead to an infinite loop, an accidental jump out of the code area of memory, or a deadlock condition (in multitasking situations). Obviously, it is preferable to fix the root cause, rather than getting the watchdog to pick up the pieces. In a complex embedded system it may not be possible to guarantee that there are no bugs, but by using a watchdog you can guarantee that none of those bugs will hang the system indefinitely. Once your watchdog has bitten, you have to decide what action to take. The hardware will usually assert the processor’s reset line, but other actions are also possible. For example, when the watchdog bites it may directly disable a motor, engage an interlock, or sound an alarm until the software recovers. Such actions are especially important to leave the system in a safe state if, for some reason, the system’s software is unable to run at all (perhaps due to chip death) after the failure. A microcontroller with an internal watchdog will almost always contain a status bit that gets set when a bite occurs. By examining this bit after emerging from a watchdog-induced reset, we can decide whether to continue running, switch to a fail-safe state, and/or display an error message. At the very least, you should count such events, so that a persistently errant application would not be restarted indefinitely. A reasonable approach might be to shut the system down if three watchdog bites occur in one day. If we want the system to recover quickly, the initialization after a watchdog reset should be much shorter than power-on initialization. On the other hand, in some systems it is better to do a full set of self-tests since the root cause of the watchdog timeout might be identified by such a test. In terms of the outside
world, the recovery may be instantaneous, and the user may not even know a reset occurred. The recovery time will be the length of the watchdog timeout plus the time it takes the system to reset and perform its initialization. How well the device recovers depends on how much persistent data the device requires, and whether that data is stored regularly and read after the system resets. (2) Sanity checks. Kicking the dog on a regular interval proves that the software is running. It is often a good idea to kick the watchdog only if the system passes some sanity check, as shown in Fig. 5.24. Stack depth, number of buffers allocated, or the status of some mechanical component may be checked before deciding to kick the watchdog. Good design of such checks will help the watchdog to detect more errors. One approach is to clear a number of flags before each loop is started, as shown in Fig. 5.25. Each flag is set at a certain point in the loop. At the bottom of the loop the watchdog is kicked, but first the flags are checked to see that all of the important points in the loop have been visited. The multitasking or multithreading approach is based on a similar set of sanity flags. For a specific failure, it is often a good idea to try to record the cause (possibly in nonvolatile RAM), since it may be difficult to establish the cause after the reset. If the watchdog bite is due to a bug, then any other information you can record about the state of the system or the currently active task will be valuable when trying to diagnose the problem.
Main loop of code
If sanity checks are OK {
    Kick the watchdog;
} Else {
    Record failure;
}
Figure 5.24 At the end of each execution of the main loop, the watchdog is kicked before starting over.
Main loop of code; Part One
Flag1 = true;
Main loop of code; Part Two
Flag2 = true;
Main loop of code; Part Three
Flag3 = true;
If all flags are TRUE {
    Kick the watchdog;
} Else {
    Record failure;
}
Clear all flags to FALSE
Figure 5.25 Use three flags to check that certain points within the main loop have been visited.
(3) Timeout interval. Any safety chain is only as good as its weakest link, and if the software policy used to decide when to kick the dog is not good, watchdog hardware can make the system less reliable. One approach is to pick an interval that is several seconds long. This is a robust approach. Some systems require fast recovery, but for others, the only requirement is that the system is not left in a hung state indefinitely. For these more sluggish systems, there is no need to do precise measurements of the worst-case time of the program's main loop to the nearest millisecond. When picking the timeout, you need to consider the greatest amount of damage the device can do between the original failure and the watchdog biting. With a slowly responding system, such
as a large thermal mass, it may be acceptable to wait 10 s before resetting. Such a long time can guarantee that there will be no false watchdog resets. While on the subject of timeouts, it is worth pointing out that some watchdog circuits allow the very first timeout to be considerably longer than the timeout used for the rest of the periodic checks. This allows the processor to initialize, without having to worry about the watchdog biting. Although the watchdog can often respond fast enough to halt mechanical systems, it offers little protection for damage that can be done by software alone. Consider an area of nonvolatile RAM that may be overwritten with rubbish data if some loop goes out of control. It is likely that overwrite would occur far faster than a watchdog could detect the fault. For those situations some other protection such as a checksum may be needed. The watchdog is really just one layer of protection, and should form part of a comprehensive safety net. On some microcontrollers, the built-in watchdog has a maximum timeout on the order of a few hundred milliseconds; a longer effective timeout can be obtained by multiplying the time interval in software. Say the hardware provides a 100 ms timeout, but you only want to check the system for sanity every 300 ms. You will have to kick the watchdog at an interval shorter than 100 ms, but it will only do the sanity check every third time the kick function is called. This approach may not be suitable for a single-loop design if the main loop could take longer than 100 ms to execute. One possibility is to move the sanity check out to an interrupt. The interrupt would be called every 100 ms, and would then kick the watchdog. On every third interrupt the interrupt function would check a flag that indicates that the main loop is still spinning. This flag is set at the end of the main loop, and cleared by the interrupt as soon as it has read it. If kicking the watchdog from an interrupt, it is vital to have a check on the main loop, such as the one described in the previous paragraph. Otherwise it is possible to get into a situation where the main loop has hung, but the interrupt continues to kick the dog, and the watchdog never gets a chance to reset the system. (4) Self-test. Assume that the watchdog hardware fails in such a way that it never bites. The fault would only be discovered when some failure that normally leads to a reset instead leads to a hung system. If such a failure was acceptable, you would never have bothered with the watchdog in the first place. Many systems contain a means to disable the watchdog, like a jumper that connects the watchdog output to the reset line. This is
necessary for some test modes, and for debugging with any tool that can halt the program. If the jumper falls out, or a service engineer who removed the jumper for a test forgets to replace it, the watchdog will be rendered toothless. The simplest way for a device to do a start-up self-test is to allow the watchdog to timeout, causing a processor reset. To avoid looping infinitely in this way, it is necessary to distinguish the power-on case from the watchdog reset case. If the reset was due to a power-on, then perform this test, but if the reset was due to a watchdog bite, then we may already be running the test. By writing a value in RAM that will be preserved through a reset, you can check if the reset was due to a watchdog test or to a real failure. A counter should be incremented while waiting for the reset. After the reset, check the counter to see how long before the timeout, so you are sure that the watchdog bit after the correct interval. If counting the number of watchdog resets to decide if the system should give up trying, then be sure that you do not inadvertently count the watchdog test reset as one of those.
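The timeout-multiplication scheme described under (3) above can be sketched as follows; hw_watchdog_kick() and record_failure() are hypothetical, hardware-specific routines, and the 100 ms interrupt period is an assumption for the example.

/* Kick a 100 ms hardware watchdog from a timer interrupt, but only check
 * the main-loop sanity flag every third kick (i.e., every 300 ms). */
#include <stdbool.h>

extern void hw_watchdog_kick(void);        /* assumed BSP routine          */
extern void record_failure(void);          /* assumed logging routine      */

static volatile bool main_loop_alive = false;

/* Called from the main loop at the end of every pass. */
void main_loop_checkpoint(void)
{
    main_loop_alive = true;
}

/* Called from a periodic 100 ms timer interrupt. */
void watchdog_isr_100ms(void)
{
    static unsigned ticks = 0;

    if (++ticks < 3) {                     /* kick early, check every 300 ms */
        hw_watchdog_kick();
        return;
    }
    ticks = 0;

    if (main_loop_alive) {
        main_loop_alive = false;           /* must be set again by the loop  */
        hw_watchdog_kick();
    } else {
        record_failure();                  /* stop kicking: let the dog bite */
    }
}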
5.2.9.3 Task Timers
A task-timer strategy has four objectives in a multitasking or multithreading system: (1) to detect an operating system failure, (2) to detect an infinite loop in any of the tasks, (3) to detect deadlock involving two or more tasks, and (4) to detect if some lower priority tasks are not getting to run because higher priority tasks are hogging the CPU. Typically, not enough timing information is available on the possible paths of any given task to check for a minimum execution time or to set the time limit on a task to be exactly the time taken for the longest path. Therefore, while all infinite loops are detected, an error that causes a loop to execute a number of extra iterations may go undetected by the designed task-timer mechanism. A number of other considerations have to be taken into account to make any scheme feasible: (1) the extra code added to the normal tasks (as distinct from a task created for monitoring tasks) must be small, to reduce the likelihood of it becoming prone to errors itself; (2) the amount of system resources used, especially CPU cycles, must be reasonable. Most tasks have some minimum period during which they are required to run. A task may run in reaction to a timer event that occurs at a regular interval.
These tasks have a start point through which they pass in each execution loop. These tasks are referred to as regular tasks. Other tasks respond to outside events, the frequency of which cannot be predicted. These tasks are referred to as waiting tasks. A watchdog can be used as the basis of a task timer. The watchdog timeout can be chosen to be the maximum time during which all regular tasks have had a chance to run from their start point through one full loop back to their start point again. Each task has a flag that can have two values, ALIVE and UNKNOWN. The flag is later read and written by the monitor. The monitor's job is to wake up before the watchdog timeout expires and check the status of each flag. If all flags contain the value ALIVE, every task has had its turn to execute and the watchdog may be kicked. Some tasks may have executed several loops and set their flag several times to ALIVE, which is acceptable. After kicking the watchdog, the monitor sets all of the flags to UNKNOWN. By the time the monitor task executes again, all of the UNKNOWN flags should have been overwritten with ALIVE.
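A minimal sketch of this monitor scheme follows; the number of regular tasks and the hw_watchdog_kick() routine are hypothetical.

/* ALIVE/UNKNOWN flags checked by a monitor task once per watchdog period. */
#include <stdbool.h>

extern void hw_watchdog_kick(void);        /* assumed BSP routine */

enum task_state { UNKNOWN = 0, ALIVE = 1 };

#define NUM_REGULAR_TASKS 3
static volatile enum task_state task_flag[NUM_REGULAR_TASKS];

/* Called by regular task 'id' each time it passes its start point. */
void task_checkpoint(int id)
{
    task_flag[id] = ALIVE;
}

/* Monitor task body: wakes shortly before the watchdog timeout expires. */
void monitor_check(void)
{
    bool all_alive = true;
    for (int i = 0; i < NUM_REGULAR_TASKS; i++)
        if (task_flag[i] != ALIVE)
            all_alive = false;

    if (all_alive) {
        hw_watchdog_kick();
        for (int i = 0; i < NUM_REGULAR_TASKS; i++)
            task_flag[i] = UNKNOWN;        /* must be set again before the next check */
    }
    /* If any flag is still UNKNOWN, the watchdog is not kicked and will bite. */
}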
5.2.9.4 Timer Creation and Expiration
An active timer will perform its synchronization action when its expiration time is reached. A timer can be created using the "Timer-Create" call. It is bound to a task and to a clock device that will measure the passage of time for it. The task binding is useful for ownership and proper tear-down of timer resources. If a task is terminated, then the timer resources owned by it are terminated as well. It is also possible to explicitly terminate a timer using the "Timer-Terminate" call. Timers allow synchronization primarily through two interfaces. "Timer-Sleep" is a synchronous call. The caller specifies a time to wake up and indicates whether that time is relative or absolute. Relative times are less useful for real-time software since program-wide accuracy may be skewed by preemption. For example, a thread which samples data and then sleeps for 5 s may be preempted between sampling and sleeping, causing a steadily increasing skew from the correct time base. Timers that are terminated or cancelled while sleeping return an error to the user. Another interface, which can be named "Timer-Arm," provides an asynchronous interface to timers. Through it the user specifies an expiration time, an optional period, and a port which will receive expiration notification messages. When the expiration time is reached, the kernel sends an asynchronous message containing the current time to this port. This port can then be used to signal the related task that owns this timer. If the timer is specified as
periodic, then it will rearm itself with the specified period relative to the last expiration time. Lastly, the "Timer-Cancel" call allows the user to cancel the expiration of a pending timer. For periodic timers, the user has the option of canceling only the current expiration, or all forthcoming expirations. These last facilities, periodic timers and partial cancellation, are important because they allow an efficient and correct implementation of periodic computation. This permits a user-level implementation of real-time periodic threads.
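The POSIX timer interface offers roughly analogous calls, with timer_create() playing the role of Timer-Create and timer_settime() that of Timer-Arm; a minimal sketch follows, in which the 5 ms period is purely illustrative.

/* Periodic POSIX timer with asynchronous (thread) notification. */
#include <signal.h>
#include <time.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_expire(union sigval sv)
{
    (void)sv;
    /* Runs in a separate thread each time the timer expires. */
    printf("periodic expiration\n");
}

int main(void)
{
    timer_t tid;
    struct sigevent sev;
    struct itimerspec its;

    memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_THREAD;            /* asynchronous notification */
    sev.sigev_notify_function = on_expire;

    if (timer_create(CLOCK_MONOTONIC, &sev, &tid) == -1) {
        perror("timer_create");
        return 1;
    }

    memset(&its, 0, sizeof(its));
    its.it_value.tv_nsec    = 5 * 1000 * 1000;  /* first expiration: 5 ms */
    its.it_interval.tv_nsec = 5 * 1000 * 1000;  /* then rearm every 5 ms  */
    timer_settime(tid, 0, &its, NULL);          /* 0: relative to now     */

    sleep(1);                                   /* let it fire a few times */
    timer_delete(tid);                          /* cancel all expirations  */
    return 0;
}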
5.3 Real-Time Application System

5.3.1 Architecture
Software architecture is like building architecture in that it encompasses the purpose, themes, materials, and concept of a structure. A software architect employs extensive knowledge of software theory and appropriate experience to conduct and manage the high-level design of a software product. The software architect develops concepts and plans for software modularity, module interaction methods, user interface dialog style, interface methods with external systems, innovative design features, and high-level business object operations, logic, and flow. The software architect consults with clients on conceptual issues, managers on broad design issues, software engineers on innovative structural features, and computer programmers on implementation techniques, appearance, and style. Software architecture is a rough map of the system. It describes the coarse-grained components of the system (usually the units of computation). The connectors between these components describe the communication, which is explicit and pictured in a relatively detailed way. In the implementation phase, the coarse components are refined into "actual components", for example, classes and objects. In the object-oriented field, the connectors are usually implemented as interfaces. The following gives an example of a proposed software system architecture for an Elevator Control System. An elevator at its basic level must only be able to move between floors in the hoist way, open and close the doors at each floor, and ensure passenger safety. The elevator can move slowly in the hoist way, stopping at each floor to admit and release passengers, opening and closing the doors at each floor, ignoring all passenger requests and not providing any passenger feedback, and still be an elevator, albeit a very inefficient one (this is in fact an operating mode used in real elevator systems when passenger request information is unavailable). All other
functionality in the system; processing passenger requests, providing passenger feedback, and only stopping at desired floors, are enhancements to increase efficiency. The components that provide the minimum functionality are the Door Control, Drive Control, and Safety components (Fig. 5.26(a)). Note that they all have direct connections to the sensors they require for correct operation and do not communicate with each other. The Door Control must know when the Drive is moving and when the car is stopped at a floor to determine when to open the door, so it has access to the relevant sensors. The Drive Control must know when the doors are completely closed and when to stop at a floor. It should also have access to the hoist way limit sensors for internal safety checks. The safety object must be able to detect when and if the drive and door perform any unsafe actions, such as moving when the door is open, opening the door between floors, or moving the car past the hoist way limits. These represent “logical” sensors in the software architecture as they can be either multiple I/O channels to the same physical sensor in the system, or multiple different physical sensors for each software component. This choice represents a cost/reliability trade-off that does not affect the software architecture. The software components will have the same interfaces to the sensors regardless of the physical configuration. The replication of sensors for these critical components would prevent single points of failure that might lead to catastrophic system failures. If any of these components fail in the system, the system must fail safe and no longer be operational. The standard assumption is that these components will “fail silent,” that is, stop sending messages. One of the major challenges to implementing this scheme is how individual components determine other components’ or their own failure. If components fail silent, that can be detected via a timeout, but it is more difficult to determine when a component is broken and is sending incorrect messages and providing misinformation. Another challenge is how to verify that the noncritical components do not violate constraints of the critical core components. This approach is to specify a tight interface in the form of a well-defined data dictionary of all the messages that can be sent between components, and ensure that as long as components adhere to this interface, they will preserve the constraints. Above this base configuration, it can add a real-time network bus for component communication and coordination. Next it can add hall button and car button controllers to manage user input from passengers; the car lantern controllers and car position indicator controller for user feedback; and a dispatcher component to schedule efficiently the elevator car’s
destinations (Fig. 5.26(b)). With the addition of the real-time network, each additional component need not have a direct connection to each sensor, and control state may be passed between components as "advice" to the critical controllers to increase functionality.
Figure 5.26 (a) Critical components in the proposed elevator software architecture. (b) Complete elevator software architecture.
An example advisory command would be the Dispatcher component notifying the Drive controller that the next floor where passengers are waiting for elevator service is floor 6, and that it should stop at that floor. If the car is at floor 2, the Drive controller can decide to skip the floors in between and go directly to floor 6. The Drive controller still decides when it is safe to leave floor 2, and it cannot be commanded to move by the Dispatcher; it only receives advice about where to stop. If the Dispatcher stops sending out advisory commands, the Drive controller can declare the Dispatcher failed and default to its base functionality of stopping at each floor. If any noncritical component fails, it should not interfere with the operation of any critical component or, ideally, of any other noncritical component.
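As an illustration of this advisory scheme, the fragment below sketches how a Drive controller might treat Dispatcher messages as advice and detect a silent Dispatcher by timeout. The message layout, timeout value, and function names are assumptions for the sketch, not the reference design of the proposed architecture.

    #include <stdbool.h>
    #include <time.h>

    #define DISPATCH_TIMEOUT_S 5        /* assumed advisory timeout */

    typedef struct {
        int    target_floor;            /* advisory stop requested by the Dispatcher */
        time_t received_at;             /* when the advice last arrived */
        bool   valid;
    } advice_t;

    static advice_t last_advice;

    /* Called when an advisory message arrives on the network bus. */
    void on_dispatcher_advice(int floor)
    {
        last_advice.target_floor = floor;
        last_advice.received_at  = time(NULL);
        last_advice.valid        = true;
    }

    /* Called by the Drive controller when the car is ready to leave a floor.
     * Returns the next floor to stop at; falls back to "stop at every floor"
     * when the Dispatcher is presumed failed (no advice within the timeout). */
    int next_stop(int current_floor, int direction)
    {
        if (last_advice.valid &&
            time(NULL) - last_advice.received_at <= DISPATCH_TIMEOUT_S) {
            return last_advice.target_floor;     /* follow the advice */
        }
        return current_floor + direction;        /* base functionality */
    }

Note that the advice only selects a stop; the decision of when it is safe to move remains entirely inside the Drive controller, as the text requires.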
5.3.2 Input/Output Protocol Controllers

5.3.2.1 Server or Manager

In an application software system, the development of various interfaces is the conventional solution for handling input (read) from and output (write) to resources such as memories, disks, keyboards, and tapes. The interfaces performing these I/O functions consist of service processes that are named client, server, or manager, depending upon how the I/O actions are defined in the programs. These clients, servers, and managers work according to the designed I/O protocols and are therefore also called I/O protocol controllers. The semantics of the I/O protocols can be either synchronous or asynchronous. The server keeps listening for client requests. When a client calls a protocol method, the server creates a new process and a new servant object, and passes the object to the process to handle the request. The server also keeps a record in a hash table for the client's request–reply set, as well as a reference to the data processed by the servant. All control activities for the client are controlled by the process. The process and object exist on a per-client, per-file basis; that is, if the same client requests a different file, it is treated as a separate process/servant pair and therefore occupies a separate place in the hash table maintained by the server. The client uses the information and data in the hash table by providing the client name and file name, whose combination is the key used to access the hash table. When an I/O request is received by the server, the server first checks the hash table for an existing record. If one exists, the server simply puts the request in a queue and may then wake the process if it is sleeping. Another queue may also be needed to hold the results of operations, so that when the client polls the status or requests data, the server can fetch the result and reply to the client right away.
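A rough sketch of this per-client, per-file bookkeeping is given below: the hash key is the combination of client name and file name, and each record would hold the request and result queues for that pair. The structure fields, table size, and hash function are assumptions for the sketch.

    #include <stdio.h>
    #include <string.h>

    #define TABLE_SIZE 256

    typedef struct request_record {
        char client[64];
        char file[128];
        /* request queue, result queue, servant reference, ... would go here */
        struct request_record *next;        /* chaining for hash collisions */
    } request_record_t;

    static request_record_t *table[TABLE_SIZE];

    /* Hash the "client + file" combination that identifies one servant/process pair. */
    static unsigned hash_key(const char *client, const char *file)
    {
        char key[192];
        unsigned h = 5381;
        snprintf(key, sizeof key, "%s:%s", client, file);
        for (const char *p = key; *p; ++p)
            h = h * 33 + (unsigned char)*p;
        return h % TABLE_SIZE;
    }

    /* Return the record for this client/file pair, if the server already has one. */
    request_record_t *lookup(const char *client, const char *file)
    {
        for (request_record_t *r = table[hash_key(client, file)]; r; r = r->next)
            if (strcmp(r->client, client) == 0 && strcmp(r->file, file) == 0)
                return r;
        return NULL;    /* the server would create a new servant/process here */
    }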
For each process that deals with the I/O, the process receives a request from the server by dequeuing the request queue that the server passes to it (the same queue also occupies a place in the hash table maintained by the server). In a high-level design, an I/O protocol controller can have the typical methods listed below:
(1) Open(); Opens a file for I/O read, write, or both. The return value is a state indicating whether the open succeeded.
(2) Close(); Cancels all ongoing I/O, releases all memory used by this file, and closes the file. All data not yet written to disk are lost. The file length and data are not guaranteed to be what the user expects, so other methods are usually used to check the status of the file. The return value indicates the length of the file after closing.
(3) Read(); Reads the data of the whole file. The return value contains an array of control block numbers and an indication of whether the read operation succeeded.
(4) Read-Some(); Reads the specified data blocks. The return value contains an array of control block numbers and an indication of whether the read operation succeeded.
(5) Write(); Writes data to the file with the given name. The data are stored in memory that was read in by earlier read operations. The return value indicates whether the write operation succeeded.
(6) Write-Some(); Writes the specified data blocks. The return value indicates whether the write operation succeeded.
(7) State(); Checks the status of the last read or write operations. The return values include all block numbers and the read/write status (done or pending).
(8) Cancel(); Cancels the I/O operation for the given file and control block number. The data for this chunk are unpredictable if a read or write is pending or ongoing; if the read or write is already done, no action is taken. The return value indicates whether the block had finished its read/write and whether the cancel succeeded.
(9) Delete-Block(); Deletes the data block holding the chunk of data and marks the control block as free, so that the control block can be reused for other read/write operations on the same file.
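In C, the high-level methods above might be declared in a header along the following lines. The argument and return types are assumptions, since the text specifies only each method's purpose.

    /* io_protocol.h -- hypothetical interface for an I/O protocol controller */
    #ifndef IO_PROTOCOL_H
    #define IO_PROTOCOL_H

    #include <stddef.h>

    typedef int io_status_t;               /* success / failure / pending codes */

    io_status_t io_open(const char *file_name, int mode);
    long        io_close(const char *file_name);        /* returns file length */
    io_status_t io_read(const char *file_name);         /* whole file          */
    io_status_t io_read_some(const char *file_name,
                             const int *blocks, size_t nblocks);
    io_status_t io_write(const char *file_name);
    io_status_t io_write_some(const char *file_name,
                              const int *blocks, size_t nblocks);
    io_status_t io_state(const char *file_name);        /* done or pending     */
    io_status_t io_cancel(const char *file_name, int block);
    io_status_t io_delete_block(const char *file_name, int block);

    #endif /* IO_PROTOCOL_H */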
5.3.2.2 I/O Device Module
The I/O device modules can be classified into two kinds: system call I/O and stream I/O. The stream I/O method has one very good advantage: it is portable. Another feature that can improve the performance of applications using stream I/O is its built-in buffering. One has to focus on error detection and error handling while dealing with input and output in any program. A program might appear to work, yet the moment something goes wrong an unchecked I/O error can corrupt its data. The very first step in preventing data loss from I/O errors is to make sure that such error conditions are identified as they occur. There are several methods that help in that direction. Two such methods are: (1) properly checking the return value of the open() system call to make sure that the files are really open as requested (see the manual page for open for more details on this system call); and (2) checking the return value of the write() system call, which makes it possible to detect that a disk filled up while data were being written. But checking for error conditions after every system call can become very tedious. To make error checking easier, you can write functions that wrap around the actual calls. These wrapper functions perform the check for your programs, and they can be written so that, if an error is detected, an appropriate error message is printed automatically. The usual practice is to write such error messages to standard error, but they can also be redirected to any file handle, which allows the error messages to be written to a specific log file. Usually, when you perform I/O, the function you call waits until the requested action is performed. Sometimes, however, the function needs to return immediately. This method of I/O is known as nonblocking I/O: the function call returns immediately, irrespective of whether the requested action has completed. It is important to remember that nonblocking I/O is available only with system call I/O. One of the common problems is contention between two or more processes. Access needs to be synchronized when several processes are trying to access a file; the programs call a function provided by the operating system to coordinate their accesses.
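Returning to the error-checking advice above, a wrapper around the write() system call that checks its return value and reports failures (for example, a full disk) to standard error might look like the following sketch; the function name is illustrative.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Write the whole buffer, retrying on short writes; report errors to stderr. */
    ssize_t checked_write(int fd, const void *buf, size_t count)
    {
        size_t done = 0;
        while (done < count) {
            ssize_t n = write(fd, (const char *)buf + done, count - done);
            if (n < 0) {
                if (errno == EINTR)
                    continue;                           /* interrupted, try again */
                fprintf(stderr, "write failed: %s\n", strerror(errno));
                return -1;                              /* e.g. disk full (ENOSPC) */
            }
            done += (size_t)n;
        }
        return (ssize_t)done;
    }

The same error message could instead be written to any open log file handle, as noted above.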
There are various ways by which processes can communicate with each other: pipes, FIFOs, message queues, semaphores, and shared memory segments are the different Inter-Process Communication (IPC) mechanisms. A pipe can be used as a communication mechanism between two related processes, such as a parent and its child process. In a typical client-server scenario, the sequence of events is the following:
(1) The server process gets access to a shared segment using a semaphore.
(2) The server then transfers the data to the memory segment.
(3) Once the server is done with its work, the semaphore is released.
(4) The client then gets access to the shared segment.
(5) The client reads from the shared segment.
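The five-step exchange above can be sketched with POSIX shared memory and a named semaphore as follows. Error handling is omitted, and the names, sizes, and the placement of server and client code in a single program are simplifying assumptions; normally server and client would be separate processes opening the same objects.

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* hypothetical object names; both processes would use the same ones */
        int fd = shm_open("/ctrl_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);
        char *seg = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        sem_t *sem = sem_open("/ctrl_sem", O_CREAT, 0600, 1);  /* initially free */

        /* --- server side --- */
        sem_wait(sem);                          /* (1) get access to the segment */
        strcpy(seg, "sensor data block");       /* (2) transfer the data         */
        sem_post(sem);                          /* (3) release the semaphore     */

        /* --- client side (normally a separate process) --- */
        sem_wait(sem);                          /* (4) get access to the segment */
        printf("client read: %s\n", seg);       /* (5) read from the segment     */
        sem_post(sem);

        sem_close(sem);
        munmap(seg, 4096);
        close(fd);
        return 0;
    }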
5.3.3 Process
A process is a running instance of a program, including all the variables and other states. A multitasking operating system may just switch between processes to give the appearance of many processes executing concurrently or simultaneously, though in fact only one process can be executing at any one time per CPU. A single processor may be shared among several processes with some scheduling algorithm being used to determine when to stop work on one process and service a different one. In general, a process will need certain resources such as CPU time, memory, files, I/O devices, and so on, to accomplish its tasks. These resources are allocated to the process when it is created. When a program is loaded as a process it is allocated a section of virtual memory that forms its useable address space. Each process has its own virtual memory space. References to real memory are provided through a process-specific set of address translation maps by the Memory Management Unit (MMU). Once the current process changes, the MMU must load the translation maps for the new process. This is called a context switch. Executable files are stored in a defined format on the memory or disk as process image. Within this process image there are typically at least four elements: (1) Program code. Program code comprises the program instructions to be executed. Program code on a multitasking operating system must be reentrant. This means it can be shared by multiple processes. To be reentrant the code must not modify itself at any time and the data must be stored separately from the instruction text (such that each independent process can maintain its own data space).
(2) Program data. Program data may be distinguished as initialized variables, including external global and static variables, and as uninitialized variables. Program data blocks are not shared between processes by default. (3) Stack. A process will commonly have at least two last-in, first-out (LIFO) stacks: a user stack for user mode and a kernel stack for system or kernel mode. (4) Process control block. This block stores the information needed by the operating system to control the process.
5.3.3.1 Process Types
On the basis of their creation mechanism, processes in industrial control systems are categorized into two types: event-interactive processes and automatic processes. Event-interactive processes are initialized and controlled through events triggered inside or outside the microprocessor unit that runs them. In other words, something must have occurred in the system to start these processes; they are not started automatically as part of the system functions. These events can be hardware or software interrupts generated either by devices or by software processes; alternatively, they can be user sessions through, for example, screen terminals or graphical user interfaces (GUIs). Automatic processes are not connected to an event; they are initialized and controlled by means of system functions and routines. Rather than being triggered, these are tasks that can be queued into a spooler area, where they wait to be executed until the system loads them. Such processes can be executed at a certain date and time.
5.3.3.2 Process Attributes
A process has a series of characteristics such as (1) the process ID, which is a unique identification number used to refer to the process; (2) the parent process ID, which is the identification number of the process that started this process, (3) the priority, which represents the degree of friendliness of this process toward other processes and its recent CPU usage; (4) the stack and heap allocation, which gives the volume of the system memory allocated by kernel or operating system as the stack and heap of this process. Each process is identified with its own Process Control Block that is a data block containing all information associated with this process. These
are the following: (1) process state, which may be new, ready, running, waiting, or halted; (2) program counter, which indicates the address of the next instruction to be executed for this process; (3) CPU registers, which vary in number and type depending on the concrete microprocessor architecture; (4) memory management information, which includes the base and bounds registers or the page table; (5) I/O status information, comprising I/O requests, I/O devices allocated to this process, a list of open files, and so on; (6) CPU scheduling information, which includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
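The exact contents of a Process Control Block are operating-system specific, but the information just listed could be grouped into a C structure along the following lines; all field names, types, and sizes are illustrative assumptions.

    typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_HALTED } proc_state_t;

    typedef struct pcb {
        proc_state_t  state;            /* new, ready, running, waiting, halted  */
        void         *program_counter;  /* next instruction to execute           */
        unsigned long registers[32];    /* saved CPU registers (count varies)    */
        void         *page_table;       /* memory-management information         */
        int           open_files[16];   /* I/O status: open file descriptors     */
        int           priority;         /* CPU scheduling information            */
        struct pcb   *next_in_queue;    /* link for the scheduler's ready queue  */
    } pcb_t;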
5.3.3.3 Process Status
One of the most important parts of a process is the executing program code. This code is read in from an executable file and executed within the program's address space. At this point, the kernel is said to be "executing on behalf of the process" and to be "in process context." On exiting the kernel from process context, the process resumes execution in user space, unless a higher-priority process has become runnable in the interim, in which case the scheduler is invoked to select the higher-priority process. Figure 5.27 shows the state machine used by the Windows NT scheduler, as drawn by H. Custer. Each process context contains information about the process, including the following:
(1) Hardware context, which includes: (a) the program counter, which is the address of the next instruction; (b) the stack pointer, which is the address of the last element on the stack; (c) the processor status word, which contains information about system state, with bits devoted to things like execution modes, interrupt priority levels, overflow bits, carry bits, and so forth; (d) the memory management registers, which provide the mapping of the address translation tables of the process; and (e) the floating point unit registers. During a context switch, the hardware context registers are stored in the Process Control Block in the user area.
(2) User address space, which includes program text, data, the user stack, shared memory regions, and so forth.
(3) Control information, which comprises the user area, the process structure, the kernel stack, and the address translation maps.
(4) Credentials, which include user and group IDs (real and effective).
(5) Environment variables, which are strings of the form variable = value.
Figure 5.27 The state machine used by the Windows NT scheduler, as drawn by H. Custer (Inside Windows NT).
5.3.3.4 Process and Task
Processes are often called tasks in embedded operating systems. The sense of “process” (or task) is “something that takes up time.” Historically, the terms “task” and “process” were used interchangeably, but the term “task” seems to be dropping from the computer lexicon.
5.3.3.5 Process Creation, Evolution, and Termination
A new process is created when an existing process makes an exact copy of itself. This child process has the same environment as its parent; only the process ID number is different. This procedure is called "forking." After the fork, the address space of the child process can be overwritten with the data of the new program it is to run. In an exceptional case, a process might finish while its parent has not waited for its completion. Such an unburied process is called a "zombie" process. When a process ends normally (it is not killed or otherwise unexpectedly interrupted), the program returns its exit status to the parent. This exit
status is a number returned by the program providing the results of the program’s execution.
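The fork/exit-status mechanism can be demonstrated with a few lines of POSIX C. The parent waits for the child, which "buries" it; if the parent never calls wait(), the terminated child remains a zombie until the parent itself exits.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* child is an exact copy of the parent */
        if (pid == 0) {
            /* child: same environment, different process ID */
            printf("child pid %d, parent %d\n", (int)getpid(), (int)getppid());
            exit(42);                       /* exit status returned to the parent */
        }

        int status = 0;
        waitpid(pid, &status, 0);           /* reap the child: no zombie remains */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }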
5.3.3.6 Synchronization
Concurrent processes in multitasking and/or multiprocessing operating systems must deal with a number of potential problems:
(1) Process starvation or indefinite postponement. A low-priority process never gets access to the processor because other processes have higher effective access to it. The solution is to cause processes to "age," or decline in priority, as they use up CPU quanta.
(2) Process deadlock. Two or more processes compete for resources, each blocking the other.
(3) Race conditions. The processing result depends on when and how fast two or more processes complete their tasks.
The data consistency and race condition problems may be addressed by implementing mutual exclusion and synchronization rules between processes, whereas solving starvation is a responsibility of the scheduler. There are a number of synchronization primitives:
(1) Events. A thread may wait for an event such as the setting of a flag, integer, or signal, or the presence of an object. Until that event occurs, the thread is blocked and removed from the run queue.
(2) Critical sections. These are areas of code that can only be executed by a single thread at any one time.
(3) Mutual exclusions (mutexes). A mutex ensures that only a single thread has access to a protected variable or piece of code at any one time (a short example is given at the end of this subsection).
(4) Semaphores. These are similar to mutexes but may include counters, allowing only a specified number of threads access to a protected variable or piece of code at any one time.
(5) Atomic operations. This mechanism ensures that a nondecomposable transaction is completed by a thread before access to the same atomic operation is granted to another thread. The thread may have noninterruptible access to the CPU until the operation is completed.
A scheduler (dispatcher) is responsible for coordinating the running of processes and managing their access to system resources, such that each candidate process gets a fair share of the available processor time and the utilization of the CPU is maximized. The scheduler must ensure
that processes gain access to the CPU for a time relative to their designated priority and process class, and that no process is starved of access to the CPU, even if it is the lowest-priority task available. A process may choose to voluntarily give up its use of the microprocessor when it must wait, usually for some system resource or for synchronization with another process. Alternatively, the scheduler may preemptively remove the thread or process from the CPU at the expiry of its allocated time quantum. The scheduler chooses the most appropriate process to run next.
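As an illustration of the mutual-exclusion primitive listed above (item (3)), a minimal POSIX-threads sketch follows; the printer example and names are illustrative assumptions.

    #include <pthread.h>

    static pthread_mutex_t printer_lock = PTHREAD_MUTEX_INITIALIZER;
    static int jobs_printed;                 /* shared state protected by the lock */

    void *print_job(void *arg)
    {
        pthread_mutex_lock(&printer_lock);   /* only one thread may hold the printer */
        /* ... send the job to the (nonsharable) printer ... */
        jobs_printed++;
        pthread_mutex_unlock(&printer_lock); /* release so other threads can proceed */
        return arg;
    }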
5.3.3.7 Mutual Exclusion
Allowing multiple processes access to the same resources in a time-sliced multiprocessing system can cause many problems. This is due to the need to maintain data consistency, maintain true temporal dependencies, and ensure that each thread properly releases a resource when it has completed its action. Nonsharable resources require mutually exclusive access; for example, a printer cannot be shared simultaneously by several processes. Sharable resources, by contrast, do not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good example of sharable resources: it is quite normal that if several processes try to open a read-only file at the same time, they will all get access to that file. In general, we cannot prevent the occurrence of deadlock by denying the mutual exclusion condition. In order that the hold-and-wait condition never holds in the system, we must guarantee that whenever a process requests a resource, it does not hold any other resources. We can also stipulate that each process has to acquire all of its resources before it begins execution. This has a serious disadvantage: resource utilization may be very low, because many of the resources may be allocated but unused for a long period of time. Another mechanism allows a process to request resources only if it has none; that is, a process can request some resources and use them, but before it can request any additional resources, it must release all the resources it is currently allocated. If a system does not employ deadlock prevention, then a deadlock situation may occur. In this case, the system must provide the following: (1) an algorithm that examines the state of the system to determine whether a deadlock has occurred; (2) an algorithm to recover from the deadlock situation. There are several alternatives that help the system to recover from a deadlock
automatically. One of them is process termination. Two possibilities concerning process termination are considered: (a) kill all deadlocked processes: this will break the deadlock cycle, but at great expense, because these processes may have been computing for a long period of time and the results of these partial computations must be discarded; (b) kill one process at a time until the deadlock cycle is eliminated: the cost of this possibility is also relatively high, since after each process is killed a deadlock detection algorithm must be invoked to determine whether any processes are still deadlocked. If a partial termination is possible, that is, if killing one already deadlocked process (or a few processes) may break the deadlock state of a set of processes, then we must determine which process is the suitable one. There are many factors to be considered, for example, (1) the priority of the process, (2) how long the process has been computing, (3) the number and type of resources used by the process, (4) how many processes will need to be terminated, and so on. It is clear that none of the alternatives mentioned above is, on its own, appropriate for handling deadlocks across the wide spectrum of resource allocation problems, so they should be used in an optimal combination. One possibility is to classify resources into classes, which are hierarchically ordered, and to apply a suitable approach for handling deadlocks to each class.
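One common way to realize this hierarchical ordering is to assign every lock a rank and always acquire locks in ascending rank order, so that no two processes can each hold the resource the other needs next. A minimal sketch with POSIX mutexes follows; the resource names and ranks are illustrative assumptions.

    #include <pthread.h>

    /* Rank 1 must always be taken before rank 2, regardless of the caller. */
    static pthread_mutex_t rank1_lock = PTHREAD_MUTEX_INITIALIZER;  /* e.g. spooler */
    static pthread_mutex_t rank2_lock = PTHREAD_MUTEX_INITIALIZER;  /* e.g. printer */

    void transfer_job(void)
    {
        pthread_mutex_lock(&rank1_lock);    /* lower-ranked class first       */
        pthread_mutex_lock(&rank2_lock);    /* then the higher-ranked class   */
        /* ... use both resources ... */
        pthread_mutex_unlock(&rank2_lock);  /* release in reverse order       */
        pthread_mutex_unlock(&rank1_lock);
    }

Because every code path that needs both resources follows the same rank order, the circular-wait condition for deadlock can never arise between these two locks.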
5.3.4 Finite State Automata
A Finite State Automaton, often called a Finite State Machine (FSM), responds to events by jumping from state to state according to a formal set of rules for the problem at hand. An FSM is often used in real-time systems to control devices such as toaster ovens, elevators, and even nuclear power plants. An FSM is also widely used to parse text according to the rules of a formal grammar. There are many variants of the FSM approach. This section describes only the basic principles suitable for hand-coding small FSMs in software developments. Larger FSMs may require a more formal and disciplined approach, and perhaps specialized tools. An FSM consists of three components: (1) A set of states. At any given time the machine is in a single state. (2) A set of events that the machine recognizes. Typically an event represents some kind of external input, but the FSM may also
generate events internally. For example, in lexical analysis, most events correspond to an input character. (3) A mapping from each state–event pair to a corresponding action. An action may be empty; it may include a transition to another state; it may include anything else that the application needs. One of the states is the initial state. If the FSM is not to fall into an infinite loop, there must also be at least one terminating state. For lexical analysis, there will typically be two kinds of terminating states: one kind represents the recognition of a token, and the other represents a syntax error in the input text.
5.3.4.1 Models
An FSM is a model of behavior composed of states, transitions, and actions. Figure 5.28 illustrates an FSM model for a simple door control in which the door has two states, opened and closed; these two states can change into each other depending upon transition conditions resulting from the corresponding events.

Figure 5.28 Finite state machine.

(1) States. One of the key concepts in computer programming is the idea of state, which is essentially a snapshot of the measure of various conditions in the system. A state stores information about the past; that is, a state reflects the input changes from the system
start to the present moment. In computer science, imperative programming is opposed to declarative programming. Imperative programming is a programming paradigm that describes computation in terms of a program state and statements that change the program state. Most programming languages require a considerable amount of state information to operate properly, information that is generally hidden from the programmer. In fact, the state is often hidden from the computer itself as well, which normally has no idea that this piece of information encodes state while that one is temporary and will soon be discarded. This is a serious problem, as the state information needs to be shared across multiple processors in parallel processing machines. Without knowing which state is important and which is not, most languages force the programmer to add a considerable amount of extra code to indicate which data and parts of the code are important in this respect.
(2) Action. An action is a description of an activity in a control system that is to be performed at a given moment and has an influence on something. In some cases, one action can include several subactivities. There are several action types: (a) entry actions, executed when entering a state; (b) exit actions, executed when exiting a state; (c) input actions, executed depending on the present state and input condition; and (d) transition actions, executed when performing a certain transition.
(3) Transition. A transition indicates a change between states and is described by a condition that must be fulfilled to enable the transition. An FSM can be represented using a state diagram (or state transition diagram), as in the example of Fig. 5.28. Besides this, several types of state transition table are used. A state transition table is a table describing the transition function of an FSM. This function governs what state (or states, in the case of a nondeterministic FSM) the FSM will move to, given an input to the machine. Given a state diagram of an FSM, a state transition table can be derived from it, and vice versa. State transition tables are typically two-dimensional tables. There are three common forms for arranging them.
(a) The most common representation is shown in Table 5.3: the combination of current state (B) and condition (Y) shows the next state (C). The complete action information can be added only using footnotes. An FSM definition including the full action information is possible using state tables.
(b) With the form given in Table 5.4, the vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates events, and the cells (row/column intersections) in the table contain the next state "S" if an event happens (and possibly the action "A" linked to this state transition).
(c) With Table 5.5, the vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates next states, and the row/column intersections contain the event "E" which will lead to a particular next state.
Table 5.3 A State Transition Table

                    Current State
  Condition       State A    State B    State C
  Condition X        -          -          -
  Condition Y        -       State C       -
  Condition Z        -          -          -
Table 5.4 A State Transition Table

                       Events
  State        E1        E2        ...       En
  S1            -       Ay/Sj      ...        -
  S2            -         -        ...      Ax/Si
  ...          ...       ...       ...       ...
  Sm          Az/Sk       -        ...        -

Note: S: state; E: event; A: action; -: illegal transition.
Table 5.5 State Transition Table

                     Next state
  Current      S1        S2        ...       Sm
  S1          Ay/Ej       -        ...        -
  S2            -         -        ...      Ax/Ei
  ...          ...       ...       ...       ...
  Sm            -       Az/Ek      ...        -

Note: S: state; E: event; A: action; -: impossible transition.
An example of a state transition table for a machine M, together with the corresponding state diagram, is given in Table 5.6. All the possible inputs to the machine are enumerated across the columns of the table, and all the possible states are enumerated across the rows. From Table 5.6 it is easy to see that if the machine is in "S1" (the first row) and the next input is the character "1," the machine will stay in "S1"; if a character "0" arrives, the machine will transition to "S2," as can be seen from the second column. In the diagram this is denoted by the arrow from "S1" to "S2" labeled with a "0." For a nondeterministic finite automaton (NFA), a new input may cause the machine to be in more than one state, hence its nondeterminism. This is denoted in a state transition table by a pair of curly braces { } with the set of all target states between them. An example is given in Table 5.7. Here, a nondeterministic machine in the state "S1" reading an input of "0" will cause it to be in two states at the same time, the states "S2" and "S3." The last column defines the legal transitions on the special character e, which allows the NFA to move to a different state when given no input: in state "S3," the NFA may move to "S1" without consuming an input character. The two cases above make the finite automaton described nondeterministic.
(4) Mealy FSM model. A Mealy FSM is a finite state machine where the outputs are determined by the current state and the input.
Table 5.6 State Transition Table (the accompanying state diagram shows states S1 and S2 connected by arrows labeled 1 and 0)

                 Input
  State        1        0
  S1           S1       S2
  S2           S2       S1
Table 5.7 State Transition Table for an NFA

                 Input
  State        1           0           e
  S1           S1       {S2, S3}       F
  S2           S2          S1          F
  S3           S2          S1          S1
This means that the state diagram will include an output signal for each transition edge. For a Mealy FSM machine, input and output are signified on each edge, and each vertex is a state. Figure 5.29 illustrates its state transition diagram. For example, the machine in Fig. 5.29 is a one-time-delay machine and would generate the output string 0 x0 x1 ... xn-1 for the input string x0 x1 ... xn; S0 is the start state. In Fig. 5.29, in going from a state "1" to a state "2" on input "0," the output might be "1" (its edge would be labeled 0/1).
(5) Moore FSM model. A Moore FSM is a finite state automaton where the outputs are determined by the current state alone (and not by the input). The state diagram for a Moore machine will include an output signal for each state. Compare this with a Mealy machine, which maps transitions in the machine to outputs. Most electronics are designed as clocked sequential systems. Clocked sequential systems are a restricted form of Moore machine where the state changes only when the global clock signal changes. Typically, the current state is stored in flip-flops, and the global clock signal is connected to the "clock" input of the flip-flops. Clocked sequential systems are one way to solve metastability problems. For a Moore FSM machine, the output is signified on each state. In practice, vertices are normally represented by circles and, if needed, double circles are used for accept states. Figure 5.30 is an example of a Moore FSM controlling an elevator door (a code sketch of this door controller is given at the end of this subsection).
Figure 5.29 Mealy model FSM.
Figure 5.30 Moore model: Control of an elevator door.
(6) UML state diagram. The Unified Modeling Language (UML) state diagram is essentially a state diagram with standardized notation that can describe many things, from computer programs to business processes. The following tools can be used to make up a diagram, which are partially illustrated in Fig. 5.31: (a) a filled circle, denoting START (not absolutely necessary to use); (b) a hollow circle, denoting STOP (not absolutely necessary to use); (c) a rectangle, denoting a state; (d) the top of the rectangle contains the name of the state, and the rectangle can contain a horizontal line in the middle, below which is written the activity that is done in that state; (e) an arrow, denoting a transition, where an expression can be written on top of the line, enclosed in brackets [ ], denoting that this expression must be true for the transition to take place; (f) a thick horizontal line with either more than one line entering and one line leaving, or one line entering and more than one line leaving; these denote join and fork, respectively.

Figure 5.31 UML state diagram.
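Returning to the Moore model of Fig. 5.30, a minimal C sketch of the elevator door controller is given below. The event names and the assumption that a command received while the door is moving reverses its motion are one plausible reading of the diagram, not a definitive implementation.

    typedef enum { OPENED, CLOSING, CLOSED, OPENING } door_state_t;
    typedef enum { CMD_OPEN, CMD_CLOSE, SENSOR_OPENED, SENSOR_CLOSED } door_event_t;

    /* Moore model: the output (door motor command) depends on the state alone. */
    const char *door_output(door_state_t s)
    {
        switch (s) {
        case OPENING: return "motor forward";
        case CLOSING: return "motor reverse";
        default:      return "motor off";
        }
    }

    /* Transition function derived from one reading of Fig. 5.30. */
    door_state_t door_next(door_state_t s, door_event_t e)
    {
        switch (s) {
        case OPENED:  return (e == CMD_CLOSE)     ? CLOSING : OPENED;
        case CLOSING: return (e == SENSOR_CLOSED) ? CLOSED
                           : (e == CMD_OPEN)      ? OPENING : CLOSING;
        case CLOSED:  return (e == CMD_OPEN)      ? OPENING : CLOSED;
        case OPENING: return (e == SENSOR_OPENED) ? OPENED
                           : (e == CMD_CLOSE)     ? CLOSING : OPENING;
        }
        return s;
    }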
5.3.4.2 Designs
The simplest way to design an FSM is to draw a picture of it in the form of a transition diagram. Represent each state with a numbered circle. Draw
arrows from circle to circle to represent the possible state transitions, and label each arrow with the events that cause the associated state transition. The diagram may also reflect actions other than state transitions. One approach is to label the arrows not only with events but also with actions, at least in an abbreviated form. It is also possible to draw a state diagram from a state transition table. A sequence of easy-to-follow steps is the following: (1) Draw a circle for each of the given states. (2) For each of the states, scan across the corresponding row and draw an arrow to the destination state(s). There can be multiple arrows for an input character if the automaton is an NFA. (3) Designate a state as the start state. The start state is given in the formal definition of the automaton. (4) Designate one or more states as accept states. This is also given in the formal definition. When drawing the diagram, it is worth making sure that: (1) There is an initial state, with no arrows leading into it. (2) There is at least one terminal state, with no arrows leading out of it. (3) You have considered all possible events required in your system, even those for unexpected cases.
(4) You have accounted for all the possible error conditions. You may omit them from the diagram to avoid clutter, but do not forget them. (5) Each state has exactly one transition for each possible event, even if the transition is back to the same state. (6) There are no infinite loops. It seems correct for the arrows to form loops, but make sure that all loops will eventually terminate. As long as a loop continues to read characters, it will eventually reach the end, so most diagrams satisfy this condition without much effort. (7) Each transition should make sense.
5.3.4.3 Implementation and Programming
The FSM diagram in Fig. 5.32 is an example of a seemingly trivial FSM: this FSM has two states, two inputs, and four different actions and outputs defined, for a total of 16 different items that need to be considered by the software. The simple state machine in Fig. 5.32 can be implemented in code as a switch statement, as given in Fig. 5.33. A switch statement, however, is not necessarily the best way to implement an FSM. In particular, the inputs must be constant, and a lot of code might be required to do similar things. We can instead write the code as in Fig. 5.34. Of course, the output might be dependent on the action as well, but the above shows how jump tables (i.e., arrays of functions) and lookup tables can be used to efficiently implement code that is structured as a finite state machine. As the software designer, you have total control of how states are numbered. Generally, you number them beginning at 0, so that lookup tables are concise. However, if any other kind of state numbering works, then use it. In cases where software is defined with dozens of states and hundreds of possible inputs, such abbreviated code becomes very easy to work with.
Figure 5.32 A trivial FSM example.
    switch (state) {
    case A:
        switch (input) {
        case 0:
            do action a1
            produce output 1
            break;
        case 1:
            do action b2
            produce output 2
            break;
        }
        break;    /* prevents falling through into case B */
    case B:
        switch (input) {
        case 0:
            do action b1
            produce output 3
            break;
        case 1:
            do action a2
            produce output 4
            break;
        }
        break;
    }
Figure 5.33 A code segment.
    call function:      action[state, input]();
    generate output:    lookup[state, input];
Figure 5.34 A code segment.
Each action is then defined as a standalone subroutine, and the task of integrating everything is very simple. On the other hand, with a large number of states or inputs, there might be only a few different states and actions, and rather more complex conditions are used to determine the action. For example, “input 0” might refer to “all alphanumeric characters” while “input 2” is all special characters. Maybe there is a third input, “all nonprintable characters.” In such cases, either large if-then-else statements can replace the use of a switch statement, or the conditions for each input can be encoded as functions. The use of large if-then-else statements does make it much more difficult to modify and debug. However, sometimes it is the best way. But at the very least, always consider the alternate lookup table methods. Suppose we define each box in the state table by a structure (Fig. 5.35): The state table is then defined as follows (Fig. 5.36):
The MAXINPUTS constant refers to the number of input columns in the table, not the number of raw inputs; thus, if one of the columns is "all alphanumeric characters," that counts as a single input. The state machine can then be implemented as a single loop within the infinite outer loop (Fig. 5.37). Variations to this loop can be made as necessary, but the closer the design of the system corresponds to the above basic structure, the easier it will be to implement. New states can then easily be added just by building a module with, for example, xyzCondition() and xyzAction() functions defined, and then adding them to the state table.
    typedef struct {
        fsmCond   condition;   /* returns a boolean value */
        fsmAction action;      /* return value is the output */
        int       nextstate;
    } fsm_t;
Figure 5.35 A code segment.
    fsm_t statetable[MAXSTATES][MAXINPUTS] = {
        { abcCondition, abcAction, 1 },
        { defCondition, defAction, 3 },
        etc.
    };
Figure 5.36 A code segment.
    fsm_t *fstate;

    while (1) {                      /* infinite loop waits for events */
        state = nextstate;
        wait for event;
        for (i = 0; i < MAXINPUTS; ++i) {
            fstate = &statetable[state][i];   /* examine each input column in turn */
            if (fstate->condition(args-if-any)) {
                output = fstate->action(args-if-any);
                send output to wherever;
                nextstate = fstate->nextstate;
            }
        }
    }
Figure 5.37 A code segment.
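Pulling Figs. 5.35-5.37 together, a complete toy module might look like the following. The xyzCondition()/xyzAction()-style handlers and the two-state table are hypothetical; they merely mirror the trivial FSM of Figs. 5.32 and 5.33, and the next-state values are assumptions made for the sketch.

    #include <stdio.h>

    #define MAXSTATES 2
    #define MAXINPUTS 2

    typedef int (*fsmCond)(int input);   /* returns nonzero if this column applies */
    typedef int (*fsmAction)(void);      /* return value is the output             */

    typedef struct {
        fsmCond   condition;
        fsmAction action;
        int       nextstate;
    } fsm_t;

    /* Hypothetical condition/action functions for the two inputs of Fig. 5.32. */
    static int isZero(int input) { return input == 0; }
    static int isOne (int input) { return input == 1; }
    static int a1(void) { return 1; }   /* action a1, output 1 */
    static int b2(void) { return 2; }   /* action b2, output 2 */
    static int b1(void) { return 3; }   /* action b1, output 3 */
    static int a2(void) { return 4; }   /* action a2, output 4 */

    static const fsm_t statetable[MAXSTATES][MAXINPUTS] = {
        /* state A (0) */ { { isZero, a1, 0 }, { isOne, b2, 1 } },
        /* state B (1) */ { { isZero, b1, 1 }, { isOne, a2, 0 } },
    };

    int main(void)
    {
        int inputs[] = { 0, 1, 0, 1, 1 };   /* stand-in for "wait for event" */
        int state = 0, nextstate = 0;

        for (size_t k = 0; k < sizeof inputs / sizeof inputs[0]; ++k) {
            state = nextstate;
            for (int i = 0; i < MAXINPUTS; ++i) {
                const fsm_t *fstate = &statetable[state][i];
                if (fstate->condition(inputs[k])) {
                    printf("state %d, input %d -> output %d\n",
                           state, inputs[k], fstate->action());
                    nextstate = fstate->nextstate;
                }
            }
        }
        return 0;
    }

Adding a new state then amounts to writing its condition and action functions and appending one row to the table, as the text describes.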
6 Data Communications in Distributed Control System

6.1 Distributed Industrial Control System

A key property of modern control systems is the intensive cross-communication and interaction between components and their dynamically changing environment, of which the main features are the following: (1) the systems comprise physically and/or logically distributed components; (2) the components are essentially heterogeneous (different architecture, hardware, networking, operating systems, and software); (3) cross-communication and cooperation between the components and their environment are key features; (4) the components act in unison to achieve a common goal. A control system that has these features is the distributed control system (DCS). In the DCS, the resources are shared and the logic of the system is distributed among its components. Development of distributed systems is a challenging task; the challenges are heterogeneity, openness, scalability, concurrency, transparency, and mobility.
6.1.1 Introduction
Distributed control is based on layered control architectures, and the overall control architecture is hierarchical. At the high (decision) layer, finite state machines (learning automata), adaptive algorithms, or reinforcement learning algorithms are used to generate setpoints. At the low (execution) layer, an embedded continuous-time controller operates; this low-level controller assures convergence to the prescribed setpoints. Communication issues, such as network protocols and transmission delays, should also be taken into account. Distributed systems can be open, closed, or in between. Very open systems like the Internet have no one in control: broad standards of communication are set, which are often extended for particular purposes by different groups of users. In a closed system, the designers have control over the entire system, though they usually want it to be readily extensible; the different parts of an aircraft control system and a robot control system are examples.
6.1.1.1 Open Architectures for Distributed Control
The Common Object Request Broker Architecture (CORBA) is an emerging open distributed object computing infrastructure being standardized by the Object Management Group (OMG). CORBA automates many common network programming tasks such as object registration, location, and activation; request demultiplexing; framing and error handling; parameter marshalling and demarshalling; and operation dispatching. See the OMG web site for more overview material on CORBA. Figure 6.1 illustrates the primary components in the CORBA ORB architecture; descriptions of these components are given below.
(1) Object. This is a CORBA programming entity that consists of an identity, an interface, and an implementation, which is known as a servant.
(2) Servant. This is an implementation programming language entity that defines the operations that support a CORBA IDL interface. Servants can be written in a variety of languages, including C, C++, Java, Smalltalk, and Ada.
(3) Client. This is the program entity that invokes an operation on an object implementation.
Figure 6.1 CORBA ORB architecture (courtesy of OMG).
Accessing the services of a remote object should be transparent to the caller; ideally, it should be as simple as calling a method on an object.
(4) Object Request Broker (ORB). The ORB provides a mechanism for transparently communicating client requests to target object implementations. The ORB simplifies distributed programming by decoupling the client from the details of the method invocations; this makes client requests appear to be local procedure calls. When a client invokes an operation, the ORB is responsible for finding the object implementation, transparently activating it if necessary, delivering the request to the object, and returning any response to the caller.
(5) ORB interface. An ORB is a logical entity that may be implemented in various ways (such as one or more processes or a set of libraries). To decouple applications from implementation details, the CORBA specification defines an abstract interface for an ORB. This interface provides various helper functions such as converting object references to strings and vice versa, and creating argument lists for requests made through the dynamic invocation interface described below.
(6) CORBA IDL stubs and skeletons. CORBA IDL stubs and skeletons serve as the "glue" between the client and server applications, respectively, and the ORB. The transformation between CORBA IDL definitions and the target programming language is automated by a CORBA IDL compiler. The use of a compiler reduces the potential for inconsistencies between client stubs and server skeletons and increases opportunities for automated compiler optimizations.
(7) Dynamic invocation interface (DII). This interface allows a client to directly access the underlying request mechanisms provided by an ORB. Applications use the DII to dynamically issue requests to objects without requiring IDL interface-specific stubs to be linked in. Unlike IDL stubs (which only allow RPC-style requests), the DII also allows clients to make nonblocking deferred synchronous (separate send and receive operations) and one-way (send-only) calls.
(8) Dynamic skeleton interface (DSI). This is the server side's analog to the client side's DII. The DSI allows an ORB to deliver requests to an object implementation that does not have compile-time knowledge of the type of the object it is implementing. The client making the request has no idea whether the implementation is using the type-specific IDL skeletons or the dynamic skeletons.
(9) Object adapter. This assists the ORB with delivering requests to the object and with activating the object. More importantly,
5/13/2008 6:18:20 PM
678
INDUSTRIAL CONTROL TECHNOLOGY an object adapter associates object implementations with the ORB. Object adapters can be specialized to provide support for certain object implementation styles (such as OODB object adapters for persistence and library object adapters for nonremote objects).
6.1.1.2 Closed Architectures for Distributed Control
DCS is a very broad term that describes solutions across a large variety of industries. The broad architecture of a solution involves either a direct connection to physical equipment such as switches, pumps, and valves, or connection through a secondary system such as a Supervisory Control and Data Acquisition (SCADA) system. A DCS solution does not require operator intervention for its normal operation, but with the line between SCADA and DCS blurring, systems claiming to offer DCS may actually permit operator interaction through a SCADA system.

An example of distributed control is that of autonomous robots. Autonomous robots are used in transportation (automated vehicles), in industry, and in the assistance of the elderly and the disabled. The behavior of autonomous robots should be adaptive, reconfigurable, and reflexive. Autonomous robots should operate in unknown environments and should be able to learn from monitoring human control policies. Intelligent (behavior-based) control of autonomous robots implies algorithms that emulate the reaction of humans to several operation scenarios and which improve their performance through task repetition.

Distributed industrial control systems categorized in closed architectures have three popular network types:
(1) Ethernet network. In 1985, the International Standards Organization (ISO) made Ethernet an international standard. It is very common for workstations and PCs shipping today to include an Ethernet interface card for compatibility with both 10 Mbit/s and 100 Mbit/s Ethernet networks. The most popular uses for Ethernet networks today are to conduct program maintenance, send plant-floor data to and from MIS (manufacturing information system) and MES (manufacturing execution system) systems, perform supervisory control, provide connectivity for operator interfaces, and log events and alarms. An Ethernet network is ideal for applications requiring (1) large data transfers, (2) wide access (site to site), and (3) no time-critical data exchange.
(2) SCADA network. The SCADA network is a state-of-the-art control network that meets the demands of real-time, high-throughput
applications. The SCADA network combines the functionality of an I/O network and supervisory equipment while providing high-speed performance for both functions. Figure 6.2 is an example of the SCADA network. The SCADA network gives you deterministic, repeatable transfers of all mission-critical control data in addition to supporting transfers of nontime-critical data. The patented media-access method used on the SCADA network results in deterministic delivery of time-critical (scheduled) data by assigning it a higher priority than nontime-critical (unscheduled) data. As a result, I/O updates and controller-to-controller interlocking always take precedence over program uploads and downloads and messaging. (3) CAN network. Controller Area Network (CAN) is a low-level network that provides connections between simple industrial devices (such as sensors, actuators, and valves) and higher-level devices (such as PLC controllers and computers). The CAN
Figure 6.2 An example of the SCADA network: engineering and operator workstations and a process historical archiver communicate over Ethernet TCP/IP with a SCADA data server and field control units (micro FCUs), which in turn connect to PLCs, RTUs, and other third-party devices and I/O (e.g., AB, Modicon, GE) serving the field devices.
network is a flexible, open network that works with devices from multiple vendors. The CAN network can (1) provide a cost-effective networking solution for simple devices, (2) let you access data in intelligent sensors and actuators from multiple vendors, (3) provide master/slave and peer-to-peer capabilities, (4) offer the producer–consumer services that let you configure devices, and (5) control devices and collect information over a single network.
6.1.1.3 Similarity to Computer Networks
Modern computer systems, using Windows technology and running on distributed networks such as the Internet, have all but taken over every application in the business environment. This evolution is also apparent in other sectors. Traditionally, many aspects of industrial automation were based on systems that needed specialized programming and were built using nonstandard, proprietary hardware. Not only are these traditional programmable logic controllers (PLC) being replaced by computer-based systems, but the additional possibilities that distributed computer networks offer are now being discovered.
6.1.2 Data Communication Model for Distributed Control System

Data communications concerns the transmission of digital messages to devices external to the message source. "External" devices are generally thought of as being independently powered circuitry that exists beyond the chassis of a computer, a controller, or other digital message source. To perform the required data communication, industrial control networks use multilayered models handling the message exchanges between components or entities in the networks. In these models, each layer defines a specific type of communication. Bottom layers define how to transmit bits across physical links. Upper layers define how to build network control applications and manage the data communication protocol. Intermediate layers define connectionless and connection-oriented network services. Within each layer, specific types of protocols are available to accomplish a specific task. For example, the lower layers define different link protocols such as Ethernet, token ring, ATM, frame relay, and so on. There are perhaps hundreds of protocols that occupy the different layers of the various network architectures.
6.1.2.1 Data Communication Models for Open Control Systems

(1) OSI reference model. The ISO developed the OSI reference model, which is accepted as a "reference" model. The OSI reference model outlines the levels of networking protocols and the relationship they have with one another. The layered OSI model is pictured in Fig. 6.3. The stacks on the left and right represent two systems that are engaged in an end-to-end communication session. The middle devices are routers that provide internetwork connections between the devices. Each layer in the protocol stack provides a particular function. These functions provide services to the layer just above. In addition, each layer communicates with its peer layer in the system to which it is connected. For example, the transport layer on a server communicates with its peer transport layer on a client. This takes place through the underlying layers and across the network. Peer-layer communication is handled through message exchange between peers. For example, assume the receiver is getting data faster than it can process it. To "slow down" the sender, it needs to send it a message. The transport layer handles
Figure 6.3 The architecture of the OSI reference model: two end systems, each with application (7), presentation (6), session (5), transport (4), network (3), data link (2), and physical (1) layers, exchange protocol data units with their peer layers, while intermediate routers below the communication subnet boundary implement only the network (packet), data link (frame), and physical (bit) layers.
this sort of "throttling." So the transport layer creates a message for its peer transport layer in the sending system. The message is passed down the protocol stack, where it is packaged and sent across the network. The message is then passed up the protocol stack at the receiver, where it is read by the transport protocol, which then initiates a procedure to throttle back. The main point is that lower layers provide services to upper layers. Applications are the usual source of messages and data that are passed down through the protocol stack, but each protocol layer may also generate its own messages to manage the communication session. One other thing to note is that the lower-layer physical and data link protocols operate across physical point-to-point links, while the transport layer protocols operate on end-to-end virtual circuits that are created across the underlying network. Each layer of the OSI model is described here for what it defines. The lowest layer is discussed first, because it represents the physical network components.
(a) The physical layer. The physical layer defines the physical characteristics of the interface, such as mechanical components and connectors, electrical aspects such as voltage levels representing binary values, and functional aspects such as setting up, maintaining, and taking down the physical link. Well-known physical layer interfaces for data communication include serial interfaces, parallel interfaces, and the physical specifications for LAN systems such as Ethernet and token ring. Wireless systems have "air" interfaces that define how data is transmitted using radio, microwave, or infrared signals.
(b) The data link layer. The data link layer defines the rules for sending and receiving information across a physical connection between two systems. Data links are typically network segments (not internetworks) and point-to-point links. Data is packaged into frames for transport across the underlying physical network. Some reliability functions may be used, such as acknowledgment of received data. In broadcast networks such as Ethernet, a MAC (medium access control) sublayer was added to allow multiple devices to share the same medium.
(c) The network layer. This layer provides internetworking services that deliver data across multiple networks. An internetworking addressing scheme assigns each network and each node a unique address. The network layer supports multiple data link connections. In the Internet Protocol (IP) suite, IP is the network layer internetworking protocol.
(d) The transport layer. The transport layer provides end-to-end communication services and ensures that data is reliably
delivered between those end systems. Both end systems establish a connection and engage in a dialog to track the delivery of packets across the internetwork. The protocol also regulates the flow of packets to accommodate slow receivers and ensures that the transmission is not completely halted if a disruption in the link occurs.
(e) The session layer. The session layer coordinates the exchange of information between systems by using conversational techniques, or dialogs. Dialogs are not always required, but some applications may require a way of knowing where to restart the transmission of data if a connection is temporarily lost, or may require a periodic dialog to indicate the end of one data set and the start of a new one.
(f) The presentation layer. Protocols at this layer are part of the operating system and application the user runs on a workstation. Information is formatted for display or printing in this layer. Codes within the data, such as tabs or special graphics sequences, are interpreted. Data encryption and the translation of other character sets are also handled in this layer.
(g) The application layer. Applications access the underlying network services using defined procedures in this layer. The application layer is used to define a range of applications that handle file transfers, terminal sessions, and message exchange (e.g., electronic mail).
(2) TCP/IP model. Transmission Control Protocol/Internet Protocol (TCP/IP) is a connection-oriented protocol that provides flow control and reliable data delivery services. It is particularly important that these services run in the host computers at either end of a connection, not in the network itself. Therefore, TCP/IP is a protocol for managing end-to-end connections. Since end-to-end connections may exist across a series of point-to-point connections, they are often called "virtual circuits." An illustration of the TCP/IP model is shown in Fig. 6.4. The functions of the layers are described below.
(a) Network access layer. This layer is responsible for managing the physical medium, such as an Ethernet LAN or a point-to-point line. When transmitting, it accepts IP datagrams and transmits frames that are specific to the particular type of network. When receiving, this layer accepts network frames and passes the enclosed datagrams to the layer above. Some relevant examples of the protocols in this layer are the following:
(i) Address Resolution Protocol (ARP). For machines on a given network to communicate, they must know each
Figure 6.4 TCP/IP layers: application, transport, Internet, network access (data link), and physical.
other's physical network address, which is encoded in the hardware devices on the particular network. To hide the physical details of the network and make the internetworking uniform, however, the hosts are assigned IP addresses. The Address Resolution Protocol maps IP addresses to the addresses used by the particular network, for example, to Ethernet addresses.
(ii) High-level data link control (HDLC). This is an international standard for the transfer of data over both point-to-point and multipoint serial data links. It uses predefined bit patterns to signal the start and end of frames. The receiver searches the incoming bit stream on a bit-by-bit basis for the predefined start and end of frame sequence (frame delimiting), which the sender would have inserted appropriately.
(iii) Ethernet. This is a popular local area network technology that was standardized by Xerox Corporation, Intel Corporation, and Digital Equipment Corporation. Ethernet is a carrier sense, multiple access/collision detect (CSMA/CD) network. The Ethernet standard also specifies the physical media, signaling technique (Manchester), maximum repeater spacing, maximum network span, frame format, and other parameters. The frame format includes fields for source and destination network addresses, protocol type, data, and cyclic redundancy check. An important characteristic of an Ethernet frame is that it is self-identifying. Not only does the frame identify the sender and receiver of the frame, but also which higher level protocol should
process the frame. This facilitates the use of multiple protocol systems on the same network. The standard is termed "multiple access" because all hosts have access to the network (the other hosts). All hosts share a single communication channel, making Ethernet a bus technology. In addition, it is a broadcast system because all hosts receive every transmission. Before transmitting, hosts determine whether or not the network is idle. They do this by attempting to sense a carrier wave (listen for frames being transmitted) on the network, hence the term "carrier sense." If a host waiting to transmit senses that no frames are on the network, it then transmits its frame. It is possible that two or more hosts sense the network being idle and attempt to send their frames; in such a case, a collision takes place. Such collisions are detected by the transmitting hosts (hence the term "collision detect") and they will cease transmission. The colliding hosts then wait for a random period of time before attempting to retransmit their frames. Each transmission is limited in duration by the maximum packet size, and there is a required minimum idle time between transmissions. These two factors act to ensure that no pair of communicating hosts can monopolize the network.
(iv) Point-to-Point Protocol (PPP). This is a protocol that allows TCP/IP and some other protocols to connect over a serial line (e.g., a direct cable connection or between a pair of modems, as in a telephone network). Computers utilizing this protocol are thus able to dial up and become part of the Internet. Effectively, PPP turns a serial port into a logical network port. PPP supports link-level error detection to compensate for noisy, error-prone connections. The protocol also allows for header compression as well as other features.
(b) Internet layer. The Internet layer is responsible for machine-to-machine communication, where a machine may be an ordinary host or a gateway. The packet received from the transport layer for transmission is encapsulated in an Internet Protocol (IP) datagram and the datagram header inserted. The Internet layer then determines whether to deliver the datagram directly or through a gateway, then passes the datagram to the network access layer for transmission.
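The direct-versus-gateway decision mentioned above is usually made by comparing the network portions of the local and destination IP addresses. The following C sketch illustrates the idea under the simplifying assumption of a single interface with one IPv4 address and netmask; the function name and the hard-coded addresses are hypothetical example values, not part of any particular implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Decide whether a destination is on the local network (deliver directly)
 * or must be forwarded to a gateway. Addresses are IPv4, host byte order. */
static int is_local(uint32_t local_ip, uint32_t netmask, uint32_t dest_ip)
{
    /* Two hosts share a network when the masked (network) parts match. */
    return (local_ip & netmask) == (dest_ip & netmask);
}

int main(void)
{
    uint32_t local  = (192u << 24) | (168u << 16) | (1u << 8) | 10u; /* 192.168.1.10 */
    uint32_t mask   = 0xFFFFFF00u;                                   /* 255.255.255.0 */
    uint32_t dest_a = (192u << 24) | (168u << 16) | (1u << 8) | 20u; /* 192.168.1.20 */
    uint32_t dest_b = (10u  << 24) | (0u   << 16) | (0u  << 8) | 5u; /* 10.0.0.5 */

    printf("192.168.1.20: %s\n", is_local(local, mask, dest_a) ? "deliver directly" : "send to gateway");
    printf("10.0.0.5:     %s\n", is_local(local, mask, dest_b) ? "deliver directly" : "send to gateway");
    return 0;
}
```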
Incoming datagrams are also handled by the Internet layer. In this case, each datagram is checked for validity, its header information deleted, and it is determined whether the datagram should be processed locally or forwarded. For a locally directed datagram, the Internet layer determines which transport layer protocol should handle the packet. The Internet layer also sends all Internet Control Message Protocol (ICMP) messages as needed and handles all ICMP messages received.
(i) Internet Protocol (IP). This protocol defines the basic unit of data transfer and the exact format of all data as it traverses the Internet. It provides an unreliable, connectionless delivery mechanism to the layers above. Not only does the protocol specify data formats, but it also specifies how packets should be processed and how errors should be handled. The basic transfer unit in TCP/IP is the IP datagram. The datagram is a self-contained packet, independent of other packets, and as such, it contains sufficient information to enable it to be routed from the source to the destination host. This is necessary because of the connectionless approach adopted. The IP datagram format, as shown in Fig. 6.5, includes, among others, fields for the source and destination IP addresses, and a protocol field. The IP address is a unique 32-bit integer (in IPv4) that each host on a TCP/IP network, such as the Internet, is assigned, and that is used in all communication with that host. This address
Figure 6.5 The IP datagram format with a header followed by data. The header carries the version, header length, type of service, total datagram length, datagram identification, flags, fragment offset, time to live, protocol, header checksum, source and destination IP addresses, options, and padding.
assignment is defined in software at the Internet layer and is independent of the physical network address of the host. The protocol field is analogous to the protocol type field in an Ethernet frame in that it specifies which higher level protocol the datagram is meant for.
(ii) Internet Control Message Protocol (ICMP). As stated before, the Internet Protocol provides an unreliable, connectionless delivery service. Under normal circumstances, each gateway or host routes and delivers datagrams without coordinating with the sender. It is possible, however, that any of a number of unusual conditions, failures, or error conditions may occur, resulting in a datagram being discarded or lost. In the absence of an error-reporting mechanism, the transmitting host would have no way of determining the cause of the failure of a transmitted datagram to reach the destination host. ICMP is the mechanism that allows machines on the Internet, and TCP/IP internetworking in general, to report errors and unexpected circumstances to a transmitting host. Some of the functions of the protocol are the following: datagram error reporting, congestion control (source quenching), route-change notification, host reachability and status testing, destination unreachability reporting, performance measurement (transit-time estimation), subnet addressing, and excessively long (or circular) route detection. ICMP messages are encapsulated in the data portion of the IP datagram in a similar manner to the way higher level (above the Internet layer) protocol messages are encapsulated. ICMP is an Internet layer protocol but, nonetheless, is a required part of IP and must be included in every IP implementation. ICMP messages are not directed to application processes, but rather, are handled by the IP software on the hosts.
(c) Transport layer. This layer has the responsibility of providing communication between application programs on the source and destination hosts. The transport layer may regulate the flow of information between the source and destination hosts. It may also provide a reliable transport mechanism to the application layer, ensuring that data is passed to the application layer in sequence, unduplicated, and without errors. To achieve this mechanism, the receiving host sends
acknowledgments to the sender for packets received. If packets are lost, the sender is required to retransmit them. When transmitting, the transport layer accepts data from the application programs in the layer above, breaks the data into smaller pieces if needed, places identifying information in the packets, and sends them to the Internet layer. The transport layer is the first end-to-end (or source-to-destination) layer. As such, the program on the source system communicates with the program at the destination system. This is in contrast to the lower layers, in which communication is between neighboring systems, and not directly between the source and destination systems.
(i) Transmission Control Protocol (TCP). This protocol provides a reliable, connection-oriented stream delivery service to the layers above. On the sending end, TCP accepts arbitrarily long messages from the layers above and breaks them into pieces not exceeding 64 KB. Each piece is then sent separately to the lower layers for transmission. Given that the lower layers do not guarantee delivery or proper packet sequencing, TCP must do the following at the receiving end: guarantee data delivery, provide properly sequenced data, and provide unduplicated data. In addition to this, TCP at both ends manages a session established between the source and destination systems. This includes such issues as flow control. TCP allows multiple application programs (or higher-level protocols) on a given machine to communicate concurrently and directs incoming TCP traffic to the appropriate application programs. To achieve this, TCP incorporates abstract objects called ports that identify the destination application program. Each port is assigned a small integer used to identify it uniquely. The TCP segment format is shown in Fig. 6.6.
(ii) User Datagram Protocol (UDP). The User Datagram Protocol (UDP) provides a datagram form of communication at the transport layer. Unlike TCP, this protocol provides an unreliable, connectionless, datagram-oriented service to the higher layers. In addition, session-management functions, such as flow control, are not provided by this protocol. Higher level protocols that use UDP must, therefore, address issues such as flow control and reliability. Application protocols that do not need a connection-oriented transport protocol generally use UDP.
Figure 6.6 The TCP segment format with a header followed by data. The header carries the source and destination ports, sequence number, acknowledgement number, header length, reserved bits, flags, window, checksum, urgent pointer, options, and padding.
UDP also allows multiple higher layer protocols to communicate concurrently and, as such, incorporates the port concept. The UDP datagram format is shown in Fig. 6.7.
(d) Application layer. At this level, users invoke application processes that access the network or internetwork. The application then interacts with transport layer protocol(s) to send or to receive the data transmitted by the process. Some common application layer protocols are described below.
(i) TELNET. This is a remote terminal protocol. TELNET allows a user at one site to establish a connection to a machine at another site. It then passes keystrokes from the local machine directly to the remote machine as if the user were physically at the remote machine.
Figure 6.7 The UDP datagram format with a header followed by data. The header carries only the source and destination ports, a length field, and a checksum.
(ii) File Transfer Protocol (FTP). FTP allows authorized users to log into a remote system, identify themselves, and copy files to or from the remote machine. Users can also list remote directories and execute a few simple commands remotely. FTP understands a few basic, popular file formats and can convert between them.
(iii) Simple Mail Transfer Protocol (SMTP). SMTP is the standard specified by the Internet for the exchange of mail. SMTP specifies how the mail-delivery system passes messages across a link from one host to another. It does not specify the interface between the mail system and the user, or how the system presents the user with incoming mail. SMTP also does not specify the details of mail storage or how frequently the system attempts to send messages.
(iv) Finger. Finger is a program that returns information about a registered user or users on a computer. It may give information on all the users logged into the system or on a specific user, depending on the arguments supplied by the requesting user and the specific implementation of the protocol.
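The application protocols above ride on the TCP or UDP port mechanism described earlier. As a concrete, minimal illustration of how an application hands a message to the transport layer, the following C sketch uses the POSIX sockets API to send a single UDP datagram to a destination identified by an IP address and port number; the loopback address 127.0.0.1 and port 5000 are arbitrary example values, not part of any standard assignment.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Create an unreliable, connectionless (UDP) transport endpoint. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* The destination is identified by an IP address plus a port number;
     * the port selects the application program on the remote host. */
    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5000);                  /* example port          */
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    const char msg[] = "sensor reading: 42";
    if (sendto(fd, msg, sizeof(msg), 0,
               (struct sockaddr *)&dest, sizeof(dest)) < 0)
        perror("sendto");                         /* UDP gives no delivery guarantee */

    close(fd);
    return 0;
}
```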
6.1.2.2 Data Communication Models for Closed-Control Systems

Data communication protocols for open systems can be mapped into closed-control systems. In many applications, closed-control systems require simplified layer models for data communication in their networks. One example is given in Table 4.7, which shows an enhanced layer model for SCADA networks. The model given in Table 4.7 consists of three layers that correspond to OSI layer 1, OSI layer 2, and OSI layer 7. A commonly accepted data communication model for closed-control systems has a similar three-layer architecture: the first layer at the bottom is the data transmission layer, the second layer in the middle is the data link layer, and the third layer at the top is the data communication layer. The following sections in this chapter describe these three layers: Section 6.4 lists all the protocols used in the data transmission layer, Section 6.5 gives the protocols used for the data link layer, and Section 6.6 gives the semantics for the data communication layer: the application layer.
6.2 Data Communication Basics

6.2.1 Introduction
Data communication basically means that a system A sends data messages to a system B over electrical or electronic channels, and system B receives and processes the data messages. In data communication, the sender is the message source, the channel is the transmission medium, and the receiver is the destination. The distance between systems A and B may range from within a single integrated circuit (IC) chip to well beyond the local circuitry that constitutes a control system. In distributed industrial control networks, the distances involved in data communication may be large, exceeding 1 km. The technology for data communication depends on the transfer distance; long-distance transmission must deal with substantial problems, such as electrical noise and distortion, that are not significant within IC chips.
6.2.1.1 Data Transfers within an IC Chipset
Data is typically grouped into packets that are 8, 16, 32, 64, or 128 bits long, and passed between temporary holding units called registers that can be parts of a memory or an I/O port of an IC chip. Data within a register is available in parallel because each bit exits the register on a separate conductor. To transfer data from one register to another, the output conductors of one register are switched onto a channel of parallel wires referred to as a bus. The input conductors of another register, which is also connected to the bus, capture the information. Following a data transaction, the content of the source register is reproduced in the destination register. It is important to note that after any digital data transfer, the source and destination registers are equal; the source register is not erased when the data is sent. Bus signals that exit CPU chips and other circuitry are electrically capable of traversing about one foot of a conductor on a printed circuit board, or less if many devices are connected to it. Special buffer circuits may be added to boost the bus signals sufficiently for transmission over several additional centimeters of conductor length, or for distribution to many other chips (such as memory chips). The transmit switches and receive switches shown in Fig. 6.8 are electronic and operate in response to commands from a CPU. It is possible that two or more destination registers will be switched on to receive data from
Figure 6.8 The mechanism for the data transfer within an IC board (courtesy of IBM): a source register drives data onto the bus through a transmit switch and amplifier, and a destination register (on the same or another IC chip, up to roughly 12 inches away) captures it through a receive switch; the bus is terminated at its far end.
a single source. However, only one source may transmit data onto the bus at any time. If multiple sources were to attempt transmission simultaneously, an electrical conflict would occur when bits of opposite value are driven onto a single bus conductor. Such a condition is referred to as a bus contention. Not only will a bus contention result in the loss of information, but it also may damage the electronic circuitry. As long as all registers in a system are linked to one central control unit, bus contentions should never occur if the circuit has been designed properly. Note that the data buses within a typical microprocessor are fundamentally half-duplex channels. When the source and destination registers are part of an integrated circuit within a microprocessor chip, for example, they are extremely close (thousandths of an inch). Consequently, the bus signals are at very low power levels, may traverse a distance in very little time, and are not very susceptible to external noise and distortion. This is the ideal environment for digital communications. However, it is not yet possible to integrate all the necessary circuitry on a single chip. When data is sent off-chip to another integrated circuit, the bus signals must be amplified and conductors extended out of the chip through external pins.
6.2.1.2 Data Transfers over Medium Distances
Data transfer between an IC chip and its peripherals generally needs mechanisms that cannot be situated within the chip itself. A simple technique to tackle this might be to extend the internal buses with a cable to reach the peripheral. However, this would expose all bus transactions to external noise and distortion. To locate a peripheral within about 6 m of the chip, a bus interface circuit is installed, as shown in Fig. 6.9. This bus interface consists of a holding register for peripheral data, timing and formatting circuitry for external data transmission, and signal amplifiers to boost the signal sufficiently for transmission through a cable. When communication with the peripheral is necessary, data is first deposited in the holding register by the CPU. This data will then be reformatted, sent with error-detecting codes, and transmitted at a relatively slow rate by digital hardware in the bus interface circuit. Data sent in this manner may be transmitted in byte-serial format if the cable has eight parallel channels, or in bit-serial format if only a single channel is available. In addition, the signal power is greatly boosted before transmission through the cable. These steps ensure that the data will not be corrupted by noise or distortion during its passage through the cable. In addition, because only data destined for the peripheral is sent, the party-line transactions taking place on the buses are not unnecessarily exposed to noise.
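The reformatting step typically appends a simple error-detecting code to each block before it leaves the board. The exact code is device-specific; the sketch below uses a plain longitudinal (XOR) check purely as an illustration of the idea, with made-up frame contents.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Compute a one-byte longitudinal (XOR) check over a data block.
 * The transmitter appends it to the frame; the receiver recomputes it
 * and compares, detecting any single-bit error in the block. */
static uint8_t xor_check(const uint8_t *data, size_t len)
{
    uint8_t check = 0;
    for (size_t i = 0; i < len; ++i)
        check ^= data[i];
    return check;
}

int main(void)
{
    uint8_t frame[5] = { 0xB1, 0x00, 0x27, 0x4C, 0x00 }; /* last byte reserved for the check */
    frame[4] = xor_check(frame, 4);

    /* Receiver side: the XOR over data plus check byte is zero if the frame is intact. */
    printf("frame %s\n", xor_check(frame, 5) == 0 ? "intact" : "corrupted");
    return 0;
}
```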
6.2.1.3 Data Transfer over Long Distances
When relatively long distances are involved in reaching a peripheral device, driver circuits are required after the bus interface unit to compensate for the electrical effects of long cables. As illustrated in Fig. 6.10, this is the only change needed if a single peripheral is used.
Figure 6.9 The mechanism for medium distance data transfer (courtesy of IBM): the microprocessor, memory, and system buses inside the computer connect through a bus interface circuit and a cable to the peripheral device.
Figure 6.10 The mechanism for long distance data transfer (courtesy of IBM): as in Fig. 6.9, but with a balanced line driver circuit inserted between the bus interface circuit and the cable to the peripheral device.
However, if many peripherals are connected, or if other IC chips are to be linked, a local area network (LAN) is required, and it becomes necessary to drastically change both the electrical drivers and the protocol to send messages through the cable. Because multiple cables are expensive, bit-serial transmission is almost always used when the distance exceeds roughly 6 m. In either a simple extension cable or a LAN, a balanced electrical system is used for transmitting digital data through the channel. This type of system involves at least two wires per channel, neither of which is a ground. Note that a common ground return cannot be shared by multiple channels in the same cable as would be possible in an unbalanced system. The basic idea behind a balanced circuit is that a digital signal is sent on two wires simultaneously, one wire expressing a positive voltage image of the signal and the other a negative voltage image. Figure 6.11 illustrates the working principle. When both wires reach the destination, the signals are subtracted by a summing amplifier, producing a signal swing of twice the
Figure 6.11 A balanced circuit: the transmitter's differential driver sends a positive and a negative image of the input signal over a shielded cable, noise voltage is added to both wires, and the receiver's summing amplifier subtracts the two, cancelling the noise and recovering the output signal.
value found on either incoming line. If the cable is exposed to radiated electrical noise, a small voltage of the same polarity is added to both wires in the cable. When the signals are subtracted by the summing amplifier, the noise cancels and the signal emerges from the cable without noise. A great deal of technology has been developed for LAN systems to minimize the amount of cable required and maximize the throughput. The costs of a LAN are concentrated in the electrical-interface cards installed in PCs or peripherals to drive the cable, and in the communications software, not in the cable itself. Thus, the cost and complexity of a LAN are not particularly affected by the distance between stations.
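The noise cancellation can be shown with a few lines of arithmetic. In the sketch below, the same noise term is added to the positive and negative images of the signal, and the receiver's subtraction removes it while doubling the signal swing; the numeric values are illustrative only.

```c
#include <stdio.h>

int main(void)
{
    double signal = 1.0;   /* logic level driven by the transmitter (volts) */
    double noise  = 0.3;   /* common-mode noise coupled onto both wires     */

    double wire_pos = +signal + noise;   /* positive image plus noise */
    double wire_neg = -signal + noise;   /* negative image plus noise */

    /* The receiving summing amplifier subtracts the two wires:
     * (+s + n) - (-s + n) = 2s, so the common-mode noise cancels. */
    double received = wire_pos - wire_neg;

    printf("received swing = %.2f V (noise removed)\n", received); /* prints 2.00 */
    return 0;
}
```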
6.2.2 Data Formats
Virtually all electronic ICs, transistors, capacitors, processors, and so on are designed to operate internally with all information encoded in binary numbers. This is because it is relatively simple to construct electronic circuits that generate two distinct voltage levels (i.e., off and on or low and high) to represent zero and one. The reason is that transistors and capacitors, which are the fundamental components of processors (the logic units of computers) and memory, generally have only two distinct states: off and on. Binary refers to any system that uses two alternative states, components, conditions, or conclusions. The binary, or base 2, numbering system uses combinations of just two unique numbers, that is, zero and one, to represent all values, in contrast with the decimal system (base 10), which uses combinations of ten unique numbers, that is, zero through nine. Binary is therefore the basic digital format of the data transmission in telecommunications.
6.2.2.1 Bit
A “bit” is a digit in a binary numbering system and is the most basic unit of information in digital communications systems. The rate of data transfer in computer networks and distributed control systems is referred to as the bit rate or bandwidth, and it is usually measured in terms of some multiple of bits per second, abbreviated bps, such as kilobits, megabits, or gigabits (e.g., billions of bits) per second. This “bit” is also used as a unit to measure the capability of processors such as CPUs that treat data in 32-bit chunks (e.g., processors with 32-bit registers and 32-bit memory addresses), and 64-bit chunks. A bitmap is a method of storing graphics (e.g., images) in which each pixel (e.g., dot that is used to form an image on a display screen) is stored as one or several bits. Graphics are also often described in terms of bit
depth, which is the number of bits used to represent each pixel. A single-bit pixel is monochrome (e.g., either black or white), a two-bit pixel can represent any of four colors (or black and white and two shades of grey), an eight-bit pixel can represent 256 colors, and 24-bit and 32-bit pixels support highly realistic color that is referred to as true color.
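The bit-depth and bit-rate definitions above lend themselves to simple back-of-the-envelope calculations. The sketch below computes the raw size of an uncompressed bitmap and the time to move it over a serial link; the image dimensions and link speed are arbitrary example values.

```c
#include <stdio.h>

int main(void)
{
    /* Example values only: a 640 x 480 image at 24 bits per pixel,
     * sent over a 10 Mbit/s link. */
    long width = 640, height = 480, bit_depth = 24;
    double link_bps = 10e6;

    long   bits    = width * height * bit_depth;   /* raw bitmap size in bits */
    double bytes   = bits / 8.0;                   /* ... and in bytes        */
    double seconds = bits / link_bps;              /* ideal transfer time     */

    printf("bitmap: %ld bits (%.0f bytes)\n", bits, bytes);
    printf("transfer at 10 Mbit/s: %.2f s (ignoring framing overhead)\n", seconds);
    return 0;
}
```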
6.2.2.2 Byte
A "byte" is a contiguous sequence of a fixed number of bits that is used as a unit of memory, storage, and instruction execution in computers. Although computers usually provide ways to test and manipulate single bits, they are almost always designed to store data and execute instructions in bytes. The number of bits in a byte varied according to the model of computer and its operating system in the early days of computing. Today, however, a byte virtually always consists of eight bits. Whereas a bit can have only one of two values, an eight-bit byte (also referred to as an octet) can have any of 256 possible values, because there are 256 possible permutations (i.e., combinations of zero and one) for eight successive bits (i.e., 2^8). Thus, an eight-bit byte can represent any unsigned integer from zero through 255 or any signed integer from -128 to 127. Because bytes represent a very small amount of data, for convenience they are commonly referred to in multiples, particularly kilobytes (represented by the uppercase letters KB or just K), megabytes (represented by the uppercase letters MB or just M), and gigabytes (represented by the uppercase letters GB or just G). A kilobyte is 1024 bytes, although it is often used loosely as a synonym for 1000 bytes. A megabyte is 1,048,576 bytes, but it is frequently used as a synonym for one million bytes. For example, a computer that has a 256 MB main memory can store approximately 256 million bytes (or characters) in memory at one time. A gigabyte is equal to 1024 MB.
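The 0-255 and -128 to 127 ranges quoted above follow directly from the eight-bit width; the short C sketch below demonstrates them, treating the same eight bits once as an unsigned quantity and once as a two's-complement signed quantity.

```c
#include <stdio.h>

int main(void)
{
    /* An 8-bit byte has 2^8 = 256 distinct bit patterns. */
    unsigned int patterns = 1u << 8;

    unsigned int u_min = 0,    u_max = 255;   /* unsigned interpretation         */
    int          s_min = -128, s_max = 127;   /* two's-complement interpretation */

    printf("patterns: %u\n", patterns);
    printf("unsigned byte range: %u..%u\n", u_min, u_max);
    printf("signed   byte range: %d..%d\n", s_min, s_max);
    return 0;
}
```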
6.2.2.3 Character
Characters are used in computers for (1) input, such as through the keyboard or through optical scanning, and output, such as on the screen or on printed pages; (2) writing programs in programming languages; (3) as the basis of some operating systems such as Linux that are largely collections of human-readable character files; and (4) for the storage and transmission of noncharacter data, for example, the transmission of images by FTP in UNIX or email in Windows.
Issues regarding characters and their use with computers are relatively simple if dealing with a single language, such as English, which has a small number of characters. However, they become quite complex when dealing with internationalization and localization, because the diverse array of writing systems and the vast number of characters in use throughout the world present major challenges for the development of software. This has become an increasingly important issue as a result of the rapid growth in the use of computers in countries that do not use European languages.
6.2.2.4 Word

A word is simply a fixed-sized group of bits that are handled together by the machine. The number of bits in a word (the word size or word length) is an important characteristic of a computer architecture. The majority of the registers in the computer are usually word-sized. The typical numeric value manipulated by the computer is probably word-sized. The amount of data transferred between the processing part of the computer and the memory system is most often a word. An address used to designate a location in memory often fits in a word. Modern computers usually have a word size of 16, 32, or 64 bits. Many other sizes have been used in the past, including 12, 18, 24, 36, 39, 40, 48, and 60 bits. Depending on how a computer is organized, units of the word size may be used for the following:
(1) Integer numbers. Holders for integer numerical values may be available in one or in several different sizes, but one of the sizes available will almost always be the word. The other sizes, if any, are likely to be multiples or fractions of the word size. The smaller sizes are normally used only for efficient use of memory; when loaded into the processor, their values usually go into a larger, word-sized holder.
(2) Floating point numbers. Holders for floating point numerical values are typically either a word or a multiple of a word.
(3) Addresses. Holders for memory addresses must be of a size capable of expressing the needed range of values, but not be excessively large. Often the size used is that of the word, but it can also be a multiple or fraction of the word size.
(4) Registers. Processor registers are designed with a size appropriate for the type of data they hold, such as integers, floating point numbers, or addresses. Many computer architectures use
"general purpose" registers that can hold any of several types of data; those registers are sized to allow the largest of any of those types, and typically that size is the word size of the architecture.
(5) Memory-processor transfer. When the processor reads from the memory subsystem into a register, or writes a register's value to memory, the amount of data transferred is often a word. In simple memory subsystems, the word is transferred over the memory data bus, which typically has a width of a word or half word. In memory subsystems that use caches, the word-sized transfer is the one between the processor and the first level of cache; at lower levels of the memory hierarchy larger transfers (which are a multiple of the word size) are normally used.
(6) Unit of address resolution. In a given architecture, successive address values designate successive units of memory; this unit is the unit of address resolution. In most computers, the unit is either a character (e.g., a byte) or a word. (A few computers have used bit resolution.) If the unit is a word, then addresses can be smaller. On the other hand, if the unit is a byte, then individual characters can be addressed (i.e., selected during the memory operation).
(7) Instructions. Machine instructions are normally fractions or multiples of the architecture's word size. This is a natural choice, since instructions and data usually share the same memory subsystem.
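On a concrete machine, the sizes discussed above can be inspected directly. The C sketch below prints the widths of a few common types and of a pointer, which together hint at the word size of whatever platform it is compiled for; the figures in the comment are typical desktop values, stated only as an assumption.

```c
#include <stdio.h>

int main(void)
{
    /* Sizes are in bytes and depend on the architecture and compiler.
     * On a typical 64-bit desktop: int = 4, long = 8, pointer = 8. */
    printf("int    : %zu bytes\n", sizeof(int));
    printf("long   : %zu bytes\n", sizeof(long));
    printf("void * : %zu bytes\n", sizeof(void *));
    printf("double : %zu bytes\n", sizeof(double));
    return 0;
}
```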
6.2.2.5 Basic Codeword Standards
(1) EBCDIC Standard. Extended Binary Coded Decimal Interchange Code (EBCDIC) is a binary coding scheme developed by IBM Corporation for the operating systems within its larger computers. EBCDIC is a method of assigning binary number values to characters (alphabetic, numeric, and special characters such as punctuation and control characters). EBCDIC is functionally similar to the ASCII (American Standard Code for Information Interchange) coding scheme that is widely used with smaller computers. However, IBM’s PC and workstation operating systems do not use EBCDIC but the industry-standard ASCII code. Conversion programs permit different operating systems to change a file back and forth between EBCDIC and ASCII. EBCDIC is eight bits, or one byte, wide. Each byte consists of two nibbles, each four bits wide. The first four bits define the class of character, while the second nibble defines the specific character inside that class. For example, setting the first nibble to
all-ones, 1111, defines the character as a number, and the second nibble defines which number is encoded. In recent years, EBCDIC has been expanded to 16- and 32-bit variants to allow for representation of large, non-Latin character sets. Each EBCDIC variant is known as a codepage, identified by its Coded Character Set Identifier, or CCSID. EBCDIC codepages have been created for a number of major writing scripts, including such complex ones as Chinese, Korean, and Japanese.
(2) ASCII Standard and Unicode. ASCII is a single-byte encoding system (i.e., it uses one byte to represent each character), and the use of the first seven bits allows it to represent a maximum of 128 characters. ASCII is based on the characters used to write the English language (including both upper and lower case letters). Extended versions (which utilize the eighth bit to provide a maximum of 256 characters) have been developed for use with other character sets. This standard relates binary codes to printable characters and control codes. In total, 25% of the ASCII character set represents nonprintable control codes, such as carriage return (CR) and line feed (LF). Most modern character-oriented peripheral equipment abides by the ASCII standard, and thus may be used interchangeably with different computers. Characters sent through a serial interface generally follow the character standard given in Fig. 6.12.
Figure 6.12 ASCII standard: hexadecimal codes 00-1F are nonprintable control characters (NUL, SOH, STX, ..., ESC, and so on), 20-3F cover the space, special symbols, and digits, 40-5F the upper-case alphabet and further symbols, and 60-7F the lower-case alphabet, remaining symbols, and DEL.
Unicode was developed as a means of allowing computers to deal with the full range of characters used by human languages. It has a goal of providing a unique encoding for every character that currently exists or that has ever existed (but not for their variant glyphs). This is accomplished by representing each character with two or more bytes, thus vastly increasing the total number of possible unique character encodings. Unicode version 2.0 (released in 1996) listed 38,885 characters, version 3.0 (released in 2000) listed 49,194, and version 4.0 (released in 2003) lists 96,382. Although Unicode has achieved considerable success, it remains a work in progress.
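The relationship between characters and their numeric codes can be made tangible with a few lines of C. The sketch below prints the ASCII code of a character and shows that 'A' (0x41) plus one yields 'B', consistent with the tabulation in Fig. 6.12; it deliberately stays within the 7-bit ASCII range.

```c
#include <stdio.h>

int main(void)
{
    char c = 'A';

    /* In ASCII, 'A' is 0x41 (65 decimal); letters have consecutive codes. */
    printf("'%c' has code 0x%02X (%d)\n", c, (unsigned char)c, c);
    printf("'%c' + 1 is '%c'\n", c, c + 1);

    /* Control characters occupy codes 0x00-0x1F, e.g. line feed (LF) is 0x0A. */
    printf("LF has code 0x%02X\n", '\n');
    return 0;
}
```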
6.2.3 Electrical Signal Transmission Modes
The modes in this subsection show up in most data communication devices, from ICs through modems and other networking equipment. Full-duplex, for example, might not give the best performance when most data is moving in one direction, which is often the case for computer data, so some full-duplex equipment has the option of being operated in a half-duplex mode for optimal communication.
6.2.3.1 Bit-Serial and Bit-Parallel Modes
Hardware-based signal processing uses both bit-serial and bit-parallel information exchange and processing; the bit-parallel operators are costlier, requiring a larger logic effort. Figure 6.13 illustrates an example with both a bit-serial (left) and a bit-parallel adder (right). In a serial cable, the bits follow one another on one wire (or frequency) and the receiver assembles the "frames" or "packets" as they arrive. In a parallel cable (fast becoming obsolete for the desktop computer environment), there is a separate wire for each bit: the sending unit puts one bit (ON/OFF) on each of the wires and fires a "strobe" signal to let the receiving unit know it is the instant to read the wires. In the serial pattern, the bits of a data item travel one after another along a single communications path; the bits flow in a continuous stream along the communications channel. This pattern is analogous to the flow of traffic down a one-lane residential street. In parallel transmission, each bit is sent through its own line, whose position implicitly defines its significance. A simple ripple-carry adder computes the sum si and carry ci for each pair of equally significant bits. In the case of a carry in stage i, it is transmitted to the next stage, where si+1 and ci+1 have to be recomputed. Therefore the carry ripples through the following adders,
Figure 6.13 Bit-serial (left) and bit-parallel (right) adders: the bit-serial adder processes one pair of input bits (a, b) per clock and feeds its carry back internally, while the bit-parallel adder uses one adder stage per bit position (a0, b0 through an−1, bn−1), producing outputs s0 through sn−1 with the carry passed from stage to stage.
which can change the value of their outputs; this leads to a long computation time. There are faster but more complicated algorithms to speed up the operation. Bit-serial transmission is typically slower than bit-parallel transmission, because data are sent sequentially in a bit-by-bit fashion.
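The ripple-carry behavior described above can be modeled directly in software. The C sketch below adds two words one bit position at a time, propagating the carry from stage to stage exactly as the bit-parallel adder chain in Fig. 6.13 does; the 8-bit width is an arbitrary choice for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Ripple-carry addition: each stage i computes the sum bit s_i and the
 * carry c_i from a_i, b_i, and the carry arriving from stage i-1. */
static uint8_t ripple_add(uint8_t a, uint8_t b)
{
    uint8_t sum = 0, carry = 0;
    for (int i = 0; i < 8; ++i) {
        uint8_t ai = (a >> i) & 1u;
        uint8_t bi = (b >> i) & 1u;
        uint8_t si = ai ^ bi ^ carry;                           /* sum bit of stage i */
        carry      = (ai & bi) | (ai & carry) | (bi & carry);   /* carry into stage i+1 */
        sum |= (uint8_t)(si << i);
    }
    return sum;                                                 /* final carry discarded */
}

int main(void)
{
    printf("0x4C + 0x27 = 0x%02X\n", ripple_add(0x4C, 0x27));   /* prints 0x73 */
    return 0;
}
```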
6.2.3.2 Word-Parallel Mode

Parallel data transmission involves the concurrent flow of the bits of a data item through separate communications lines. This pattern resembles the flow of automobile traffic on a multilane highway. Internal transfer of binary data in a computer uses a parallel mode. If the computer uses a 32-bit internal structure, all 32 bits of data are transferred simultaneously over 32 line connections. Parallel data transmission is commonly used for interactions between a computer and its printing unit. The printer is usually located close to the computer, because parallel cables need many wires and may not work stably over long distances. Figure 6.14 is a diagram of word-parallel transmission for a word of 11 bits.
6.2.3.3 Simplex Mode
Simplex communication is a mode in which data only flows in one direction as illustrated at the top in Fig. 6.15. Because most modern
Figure 6.14 Word-parallel mode: a data packet of 11 total bits carries 8 information bits (LSB first) together with start (S) and parity (P) framing bits, giving a channel efficiency of 8/11 = 0.73; each bit occupies T = 1/(baud rate), about 104 µs at 9600 baud.
Figure 6.15 Types of data transmission directional modes: a simplex channel (one transmitter to one receiver), a half-duplex channel (a transmitter/receiver pair at each end, one direction at a time), and a full-duplex channel (both directions simultaneously).
communications require a two-way interchange of data and information, this mode of transmission is not as popular as it once was. However, one current usage of simplex communications in business involves certain point-of-sale terminals in which sales data is entered without a corresponding reply. Radio or TV broadcasting is an example of simplex communication.
6.2.3.4 Half-Duplex Mode
Half-duplex communication means a two-way flow of data between computer terminals. In this directional mode, data travels in two directions, but not simultaneously: data can move in one direction only while no data is being received from the other direction. This mode is commonly used for linking computers together over telephone lines. This case is illustrated in the middle of Fig. 6.15.
6.2.3.5 Full-Duplex Mode
The fastest directional mode of communication is full-duplex communication. Here, data is transmitted in both directions simultaneously on the same channel. Thus, this type of communication can be thought of as similar to automobile traffic on a two-lane road. Full-duplex communication is made possible by devices called multiplexers. Full-duplex communication is primarily limited to mainframe computers because of the expensive hardware required to support this directional mode. This case is illustrated on the bottom of Figure 6.15.
6.2.3.6 Multiplexing Mode
Multiplexing is sending multiple signals or streams of information on a carrier at the same time in the form of a single, complex signal and then recovering the separate signals at the receiving end. In analog transmission, signals are commonly multiplexed using frequency-division multiplexing (FDM), in which the carrier bandwidth is divided into subchannels of different frequency widths, each carrying a signal at the same time in parallel. In digital transmission, signals are commonly multiplexed using time-division multiplexing (TDM), in which the multiple signals are carried over the same channel in alternating time slots. In some optical fiber networks, multiple signals are carried together as separate wavelengths of light in a multiplexed signal using dense wavelength division multiplexing (DWDM). (1) Frequency-division multiplexing (FDM). FDM is a scheme in which numerous signals are combined for transmission on a single communications line or channel. Each signal is assigned a different frequency (subchannel) within the main channel. A typical analog Internet connection through a twisted pair telephone line requires approximately three kilohertz (3 kHz) of bandwidth for accurate and reliable data transfer. Twisted-pair lines are common in households and small businesses. But major telephone cables, operating between large businesses, government agencies, and municipalities, are capable of much larger bandwidths. Suppose a long-distance cable is available with a bandwidth allotment of three megahertz (3 MHz). This is 3000 kHz, so in theory, it is possible to place 1000 signals, each 3 kHz wide, into the long-distance channel. The circuit that does this is known as a multiplexer. It accepts the input from each individual end user, and generates a signal on a different frequency for each of the
inputs. This results in a high-bandwidth, complex signal containing data from all the end users. At the other end of the long-distance cable, the individual signals are separated out by means of a circuit called a demultiplexer, and routed to the proper end users. A two-way communications circuit requires a multiplexer/demultiplexer at each end of the long-distance, high-bandwidth cable. When FDM is used in a communications network, each input signal is sent and received at maximum speed at all times. This is its chief asset. However, if many signals must be sent along a single long-distance line, the necessary bandwidth is large, and careful engineering is required to ensure that the system will perform properly. In some systems, a different scheme, known as TDM, is used instead.
(2) Time-division multiplexing (TDM). Time-division multiplexing (TDM) is a method of putting multiple data streams in a single signal by separating the signal into many segments, each having a very short duration. Each individual data stream is reassembled at the receiving end based on the timing. The circuit that combines signals at the source (transmitting) end of a communications link is known as a multiplexer. It accepts the input from each individual end user, breaks each signal into segments, and assigns the segments to the composite signal in a rotating, repeating sequence (see the sketch after this list). The composite signal thus contains data from multiple senders. At the other end of the long-distance cable, the individual signals are separated out by means of a circuit called a demultiplexer, and routed to the proper end users. A two-way communications circuit requires a multiplexer/demultiplexer at each end of the long-distance, high-bandwidth cable. If many signals must be sent along a single long-distance line, careful engineering is required to ensure that the system will perform properly. An asset of TDM is its flexibility. The scheme allows for variation in the number of signals being sent along the line, and constantly adjusts the time intervals to make optimum use of the available bandwidth. The Internet is a classic example of a communications network in which the volume of traffic can change drastically from hour to hour. In some systems, a different scheme, known as FDM, is preferred.
(3) Dense wavelength division multiplexing (DWDM). Dense wavelength division multiplexing (DWDM) is a technology that puts data from different sources together on an optical fiber, with each signal carried at the same time on its own separate light
Using DWDM, up to 80 (and theoretically more) separate wavelengths or channels of data can be multiplexed into a light stream transmitted on a single optical fiber. Each channel carries a time division multiplexed (TDM) signal. In a system with each channel carrying 2.5 Gbps (billion bits per second), up to 200 billion bits per second can be delivered by the optical fiber. DWDM is also sometimes called wave division multiplexing (WDM). Since each channel is demultiplexed at the end of the transmission back into the original source signal, different data formats transmitted at different data rates can travel together. Specifically, Internet (IP) data, Synchronous Optical Network (SONET) data, and asynchronous transfer mode (ATM) data can all be traveling at the same time within the optical fiber. DWDM promises to solve the "fiber exhaust" problem and is expected to be the central technology in the all-optical networks of the future.
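The rotating time-slot assignment that TDM performs can be sketched in a few lines of code. The functions below are only an illustration of byte-interleaved multiplexing and demultiplexing; the stream count, buffer layout, and absence of framing are simplifying assumptions for the example, not part of any particular product.

#include <stddef.h>

/* Byte-interleaved TDM sketch: slot i of every output frame carries one
 * byte from source stream i, in a rotating, repeating sequence. A real
 * multiplexer also adds framing so the demultiplexer can locate slot 0;
 * that detail is omitted here. */
void tdm_multiplex(const unsigned char *src[], size_t nstreams,
                   size_t bytes_per_stream, unsigned char *line)
{
    for (size_t frame = 0; frame < bytes_per_stream; frame++)
        for (size_t slot = 0; slot < nstreams; slot++)
            line[frame * nstreams + slot] = src[slot][frame];
}

void tdm_demultiplex(const unsigned char *line, size_t nstreams,
                     size_t bytes_per_stream, unsigned char *dst[])
{
    for (size_t frame = 0; frame < bytes_per_stream; frame++)
        for (size_t slot = 0; slot < nstreams; slot++)
            dst[slot][frame] = line[frame * nstreams + slot];
}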
6.3 Data Transmission Control Circuits and Devices
6.3.1 Introduction
Within a digital computer or digital microcontroller, data are transferred internally in a parallel format: all the bits of a byte or a memory word are exchanged simultaneously among registers, buses, ASICs, and components. However, for the data to be communicated over a serial channel, the computer or controller must convert the data from parallel form to a serial bit stream, and back again. Some special hardware units, including circuits or devices, are therefore required to control the mutual translation between serial and parallel formats. This section lists a group of important hardware units for data transmission control. One important family of hardware units for performing this function is the universal receiver-transmitters, which come in three types: the universal asynchronous receiver-transmitter (UART), the universal synchronous receiver-transmitter (USRT), and the universal synchronous/asynchronous receiver-transmitter (USART). Other hardware units for data transmission control are multiplexers and modems. All these units may be built into the computer or controller, added as part of an I/O interface board, or consist of a single IC chip. Figure 6.16 is an example of their application.
(Figure 6.16 elements: CPU; bus carrying parallel data formats; USART; serial data formats; universal serial bus.)
Figure 6.16 An illustration for the application of the data transmission control circuits.
6.3.2 Universal Asynchronous Receiver Transmitter (UART)
6.3.2.1 Applications and Types
A UART is the microchip, or integrated circuit with coded program, that controls a computer's (or controller's, hereafter) interface to its attached serial devices. Specifically, it provides the computer or controller with the RS232-specified data terminal equipment (DTE) interface so that it can "talk" to and exchange data with modems and other serial devices. As part of this interface, the UART also performs the following:
(1) Converts the bytes it receives from the computer along parallel circuits into a single serial bit stream for outbound transmission;
(2) On inbound transmission, converts the serial bit stream into the bytes that the computer handles;
(3) Adds a parity bit (if it has been selected) on outbound transmissions, and checks the parity of incoming bytes (if selected) and discards the parity bit;
(4) Adds start and stop delineators on outbound transmissions and strips them from inbound transmissions;
(5) Handles interrupts from serial devices with special ports, such as the keyboard and mouse;
(6) May handle other kinds of interrupt and device management that require coordinating the computer's speed of operation with device speeds.
A more advanced UART provides some amount of data buffering so that the computer and the serial device data streams remain coordinated. The most recent UART, the 16550, has a 16-byte buffer that can get filled before the
microprocessor needs to handle the data. The original UART was the 8250. An internal modem today (as of 2006) probably includes a 16550 UART. According to modem manufacturer US Robotics, external modems do not include a UART. If you have an older computer, you may want to add an internal 16550 to get the most out of your external modem. Below is a listing of various UART chips; the 16550 chip series is the most commonly used UART.
(1) 8250 UART: the original UART, capable of speeds up to 9600 bps with a 1-byte FIFO.
(2) 8250A UART: a revised version of the 8250 with an additional register that allowed software to verify it was an 8250 UART.
(3) 16450 UART: slightly faster than the earlier UARTs.
(4) 16540 UART: capable of speeds up to 9600 bps.
(5) 16550 UART: has a 16-byte FIFO.
(6) 16550A UART: same features as the previous 16550 UART with new fixes.
(7) 16550AF UART: same features as the previous 16550 UART with faster capabilities.
(8) 16550AFN UART: same features as the previous 16550 UART except packaged as a ceramic chip.
(9) 16650 UART: has a 32-byte FIFO.
(10) 16750 UART: has a 64-byte FIFO.
(11) 16950 UART: has a 128-byte FIFO.
6.3.2.2 Mechanism and Components
As mentioned before, the UART is commonly used with RS232 for embedded systems communications. It is useful for communicating between microcontrollers and also with PCs. Many chips provide UART functionality in silicon, and low-cost chips exist to convert UART signals to RS232 levels. Each UART contains a shift register that is the fundamental means of conversion between serial and parallel forms. By convention, teletype-style UARTs send a "start" bit, 5-8 data bits (least-significant bit first), an optional "parity" bit, and then a "stop" bit. The start bit is the opposite polarity of the data line's normal state. The stop bit is the data line's normal state, and provides a space before the next character can start. In mechanical teletypes, the "stop" bit was often stretched to two bit times to give the mechanism more time to finish printing a character. A stretched "stop" bit also helps resynchronization. The parity bit can either make the number of 1 bits odd, or even, or it can be omitted. Odd parity is more
reliable because it ensures that there will always be at least one data transition, and this permits many UARTs to resynchronize. The word "asynchronous" indicates that UARTs recover character timing information from the data stream, using designated "start" and "stop" bits to indicate the framing of each character. An asynchronous transmission sends nothing over the interconnection when the transmitting device has nothing to send, but a synchronous interface must send "PAD" characters to maintain synchronism between the receiver and transmitter. The usual filler is the ASCII "SYN" character. This may be done automatically by the transmitting device. The UART usually does not directly generate or receive the voltage levels that are put onto the wires interconnecting different equipment. An interface standard is used, which defines voltage levels and other characteristics of the interconnection. Examples of such interface standards are EIA RS232, RS422, and RS485. Depending on the limits of the communication channel to which the UART is ultimately connected, communication may be "full duplex" (both send and receive at the same time) or "half duplex" (devices take turns transmitting and receiving). Besides traditional wires, the UART is used for communication over other serial channels such as optical fiber, infrared, wireless Bluetooth in its Serial Port Profile (SPP), and the DC-LIN for power line communication. Speeds for UARTs are given in bits per second (bit/s or bps), although this figure is often incorrectly called the baud rate. Standard mechanical teletype rates are 45.5, 110, and 150 bit/s. Computers have used from 110 to 230,400 bit/s. Standard speeds are 110, 300, 1200, 2400, 4800, 9600, 19,200, 28,800, 38,400, 57,600, and 115,200 bit/s. A UART chip usually contains the following components: (1) Transmit and Receive Buffer, (2) Transmit and Receive Control, (3) Data Bus Buffer, (4) Read and Write Control Logic, and (5) Modem Control.
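The start/data/parity/stop framing just described can be mimicked in software. The sketch below is only illustrative: set_tx_line() and delay_one_bit() are hypothetical board-support routines, and a real design would normally let UART hardware do this work.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical board-support hooks: drive the TX line and wait one bit time. */
extern void set_tx_line(bool level);
extern void delay_one_bit(void);

/* Transmit one character as a UART frame: a start bit (low), eight data
 * bits least-significant first, an optional even parity bit, and a stop
 * bit (high, the line's idle state). */
void uart_soft_putc(uint8_t ch, bool use_parity)
{
    bool parity = false;               /* running XOR of the data bits */

    set_tx_line(false);                /* start bit */
    delay_one_bit();

    for (int i = 0; i < 8; i++) {      /* data bits, LSB first */
        bool bit = (ch >> i) & 1u;
        parity ^= bit;
        set_tx_line(bit);
        delay_one_bit();
    }

    if (use_parity) {                  /* even parity: make the 1-count even */
        set_tx_line(parity);
        delay_one_bit();
    }

    set_tx_line(true);                 /* stop bit returns the line to idle */
    delay_one_bit();
}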
6.3.3 Universal Synchronous Receiver Transmitter (USRT)
A USRT is a circuit capable of receiving and sending data without requiring "start" and "stop" bit codes, unlike the asynchronous procedure described in the last subsection for the UART. In synchronous transmission, the clock is recovered separately from the data stream and no start/stop bits are used. This improves the efficiency of transmission on suitable channels, since more of the bits sent are data. Both synchronous and asynchronous transmission of binary data will be discussed in detail in Section 6.4.
6.3.4 Universal Synchronous/Asynchronous Receiver Transmitter (USART)
A USART chip is composed of logic circuits, which are connected by an internal data bus to a microprocessor or CPU. These logic circuits are (1) read and write control logic circuits, (2) modem control circuits, (3) a baud rate generator, (4) a transmit buffer and transmit control circuits, and (5) a receive buffer and receive control circuits. The CPU communicates with the USART over an 8-bit, 16-bit, 32-bit, or wider bidirectional data bus. The USART is programmable, meaning the CPU can control its mode of operation using control and command words sent over the data bus. The read/write control logic circuit then controls the operation of the USART as it performs the specified asynchronous interfacing. Figure 6.17 is a diagram showing the pins of a USART.
6.3.4.1 Architecture and Components
(1) Read/write control. The read/write control logic circuit accepts control signals from the control bus and command or control words from the data bus. The USART is set to an idle state by the RESET signal or control word. When the USART is IDLE, a new set of control words is required to program it for the applicable interface. The read/write control logic circuit receives a clock signal (CLK) that is used to generate internal device timing.
(Figure 6.17 elements, 28-pin package: data bus D0-D7 (8 bits); control inputs RD (read), WR (write), CS (chip select), and C/D (control/data); CLK clock pulse (TTL) and RESET; transmitter signals TxD (transmitter data), TxC (transmitter clock), TxRDY (transmitter ready), and TxEMPTY (transmitter empty); receiver signals RxD (receiver data), RxC (receiver clock), and RxRDY (receiver ready); modem-control signals DTR (data terminal ready), DSR (data set ready), RTS (request to send data), and CTS (clear to send data); SYNDET (sync detect); VCC supply and GND ground.)
Figure 6.17 The pins of a USART chip.
Four control signals are used to govern the read/write operations of the data bus buffer. They are as follows (Fig. 6.17):
(a) The CHIP SELECT (CS) signal, when true, enables the USART for reading and writing operations.
(b) The WRITE DATA (WR) signal, when true, indicates the microprocessor is placing data or control words on the data bus to the USART.
(c) The READ DATA (RD) signal, when true, indicates the microprocessor is ready to receive data or status words from the USART.
(d) The CONTROL/DATA (C/D) signal identifies the write-operation transfer as data or control words, or the read-operation transfer as data or status words.
(2) Modem control. The modem control logic circuit generates or receives four control or status signals used to simplify modem interfaces. They are as follows:
(a) Data set ready (DSR). A data set ready is sent from the external device to the computer to notify the computer that the external device is ready when it is HIGH.
(b) Data terminal ready (DTR). A data terminal ready is sent from the computer to the external device to indicate that the computer is ready to exchange data when it is HIGH.
(c) Request to send (RTS). A request to send is sent from the computer to the external device to indicate that the computer is ready to transmit (HIGH) or not (LOW).
(d) Clear to send (CTS). A clear to send is sent from the external device to the computer as a reply to the RTS signal.
(3) Baud rate generator (BRG). The BRG supports both the asynchronous and synchronous modes of the USART. It is a dedicated baud rate generator of 8 or more bits. The SPBRG register controls the period of a free-running 8-bit timer. In asynchronous mode, bit BRGH (TXSTA) also controls the baud rate. In synchronous mode, bit BRGH is ignored.
Table 6.1 Baud Rate Formula

SYNC   BRGH = 0 (Low Speed)                           BRGH = 1 (High Speed)
0      (Asynchronous) baud rate = FOSC/(64(X + 1))    baud rate = FOSC/(16(X + 1))
1      (Synchronous) baud rate = FOSC/(4(X + 1))      NA
Table 6.1 shows the formula for computation of the baud rate for the different USART modes; these formulas apply only in master mode (internal clock). Given the desired baud rate and the oscillator frequency FOSC, the nearest integer value for the SPBRG register can be calculated using the formula in Table 6.1, where X equals the value in the SPBRG register (0-255). From this, the error in baud rate can be determined.
(4) Transmit buffer/transmit control. The transmit control logic converts the data bytes stored in the transmit buffer into an asynchronous bit stream. The transmit control logic inserts the applicable start/stop and parity bits into the stream to provide the programmed protocol. A start bit is used to alert the output device, a printer for instance, to get ready for the actual character; the signal is sent just before the beginning of the actual character coming down the line. A stop bit is sent to indicate the end of transmission. The parity bit is used as a means to detect errors; odd or even parity may be used. Figure 6.18 is the transmit block diagram for a USART chip.
(5) Receive buffer/receive control. The receive control logic accepts the input bit stream and strips the protocol signals from the data bits. The data bits are converted into parallel bytes and stored in the receive buffer until transmitted to the microprocessor. Figure 6.19 is the receive block diagram for a USART chip.
(Figure 6.18 elements: 8-bit data bus into the TXREG register; TXIF interrupt flag; transmit shift register TSR (MSb ... LSb) feeding the pin buffer and control on the TX/CK pin; control and status bits SPEN, TXEN, TX9, TX9D, and TRMT; baud rate clock supplied by the SPBRG baud rate generator.)
Figure 6.18 USART transmit block diagram.
(Figure 6.19 elements: RX/DT pin buffer and control feeding the data recovery block; baud rate clock from the SPBRG baud rate generator; receive shift register RSR (start, LSb ... MSb, stop); control and status bits SPEN, CREN, RX9, RX9D, FERR, and OERR; two-deep RCREG register FIFO onto the 8-bit data bus; RCIF flag and RCIE interrupt enable.)
Figure 6.19 USART receive block diagram.
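The asynchronous formulas in Table 6.1 are easy to apply in software. The following sketch, which assumes a 4 MHz oscillator purely for the sake of the example, computes the nearest SPBRG value and the resulting baud-rate error:

#include <stdio.h>

/* Apply the Table 6.1 asynchronous formulas:
 *   BRGH = 0: baud = FOSC / (64 (X + 1))
 *   BRGH = 1: baud = FOSC / (16 (X + 1))
 * to find the nearest SPBRG value X and the resulting baud-rate error. */
int main(void)
{
    const double fosc = 4000000.0;    /* 4 MHz oscillator, assumed for the example */
    const double desired = 9600.0;    /* desired baud rate */
    const int divider = 16;           /* BRGH = 1 (high speed); use 64 for BRGH = 0 */

    int x = (int)(fosc / (divider * desired) - 1.0 + 0.5);  /* round to nearest */
    if (x < 0) x = 0;
    if (x > 255) x = 255;             /* SPBRG is an 8-bit register */

    double actual = fosc / (divider * (x + 1.0));
    double error  = 100.0 * (actual - desired) / desired;

    printf("SPBRG = %d, actual baud = %.1f, error = %.2f%%\n", x, actual, error);
    return 0;
}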
6.3.4.2 Mechanism and Modes
The USART can be configured into the following modes: (1) Asynchronous (full duplex), (2) Synchronous—Master (half duplex), and (3) Synchronous—Slave (half duplex). (1) USART asynchronous mode. In this mode, the USART uses standard nonreturn-to-zero (NRZ) format (one start bit, eight or nine data bits, and one stop bit). The most common data format is 8 bits (of course there can be more bits than 8). An on-chip dedicated 8-bit baud rate generator can be used to derive standard baud rate frequencies from the oscillator. The USART transmits and receives the LSb first. The USART transmitter and receiver are functionally independent but use the same data format and baud rate. The baud rate generator produces a clock either x16 or x64 of the bit shift rate, depending on the BRGH bit (TXSTA). Parity is not supported by the hardware, but can be implemented in software (stored as the ninth data bit). Asynchronous mode is stopped during SLEEP. Asynchronous mode is selected by clearing the SYNC bit (TXSTA). The USART asynchronous module consists of the following important elements: (1) Baud Rate Generator, (2) Sampling
Circuit, (3) Asynchronous Transmitter, and (4) Asynchronous Receiver.
(a) USART asynchronous transmitter. The USART transmitter block diagram is shown in Fig. 6.18. The heart of the transmitter is the (serial) transmit shift register (TSR). The shift register obtains its data from the read/write transmit buffer, TXREG. The TXREG register is loaded with data in software. The TSR register is not loaded until the STOP bit has been transmitted from the previous load. As soon as the STOP bit is transmitted, the TSR is loaded with new data from the TXREG register (if available). Once the TXREG register transfers the data to the TSR register (this occurs in one TCY), the TXREG register is empty and the TXIF flag bit is set. This interrupt can be enabled/disabled by setting/clearing the TXIE enable bit. The TXIF flag bit will be set regardless of the state of the TXIE enable bit and cannot be cleared in software. It will reset only when new data is loaded into the TXREG register. While the TXIF flag bit indicates the status of the TXREG register, the TRMT bit (TXSTA) shows the status of the TSR register. The TRMT status bit is a read-only bit that is set when the TSR register is empty. No interrupt logic is tied to this bit, so the user has to poll this bit to determine if the TSR register is empty. Transmission is enabled by setting the TXEN enable bit (TXSTA). The actual transmission will not occur until the TXREG register has been loaded with data and the baud rate generator (BRG) has produced a shift clock (Fig. 6.18). The transmission can also be started by first loading the TXREG register and then setting the TXEN enable bit. Normally, when transmission is first started, the TSR register is empty, so a transfer to the TXREG register will result in an immediate transfer to the TSR, leaving the TXREG empty. A back-to-back transfer is thus possible. Steps to follow when setting up an asynchronous transmission:
(i) Initialize the SPBRG register for the appropriate baud rate. If a high-speed baud rate is desired, set the BRGH bit.
(ii) Enable the asynchronous serial port by clearing the SYNC bit and setting the SPEN bit.
(iii) If interrupts are desired, then set the TXIE, GIE, and PEIE bits.
(iv) If 9-bit transmission is desired, then set the TX9 bit.
(v) Enable the transmission by setting the TXEN bit, which will also set the TXIF bit.
(vi) If 9-bit transmission is selected, the ninth bit should be loaded in the TX9D bit.
(vii) Load data to the TXREG register (starts transmission).
(b) USART asynchronous receiver. The receiver block diagram is shown in Fig. 6.19. The data is received on the RX/DT pin and drives the data recovery block. The data recovery block is actually a high-speed shifter operating at 16 times the baud rate, whereas the main receive serial shifter operates at the bit rate or at FOSC. Once asynchronous mode is selected, reception is enabled by setting the CREN bit (RCSTA). The heart of the receiver is the (serial) receive shift register (RSR). After sampling the RX/DT pin for the STOP bit, the received data in the RSR is transferred to the RCREG register (if it is empty). If the transfer is complete, the RCIF flag bit is set. The actual interrupt can be enabled/disabled by setting/clearing the RCIE enable bit. The RCIF flag bit is a read-only bit that is cleared by the hardware. It is cleared when the RCREG register has been read and is empty. The RCREG is a double-buffered register, that is, it is a two-deep FIFO. It is possible for two bytes of data to be received and transferred to the RCREG FIFO and a third byte to begin shifting into the RSR register. On the detection of the STOP bit of the third byte, if the RCREG register is still full, then the overrun error bit, OERR (RCSTA), will be set. The word in the RSR will be lost. The RCREG register can be read twice to retrieve the two bytes in the FIFO. The OERR bit has to be cleared in software. This is done by resetting the receive logic (the CREN bit is cleared and then set). If the OERR bit is set, transfers from the RSR register to the RCREG register are inhibited, so it is essential to clear the OERR bit if it is set. The framing error bit, FERR (RCSTA), is set if a stop bit is detected as a low level. The FERR bit and the ninth receive bit are buffered the same way as the receive data. Reading the RCREG will load the RX9D and FERR bits with new values; therefore, it is essential for the user to read the RCSTA register before reading the next RCREG register, so as not to lose the old (previous) information in the FERR and RX9D bits. Steps to follow when setting up an asynchronous reception:
(i) Initialize the SPBRG register for the appropriate baud rate. If a high-speed baud rate is desired, set bit BRGH.
(ii) Enable the asynchronous serial port by clearing the SYNC bit and setting the SPEN bit.
(iii) If interrupts are desired, then set the RCIE, GIE, and PEIE bits.
(iv) If 9-bit reception is desired, then set the RX9 bit.
(v) Enable the reception by setting the CREN bit.
(vi) The RCIF flag bit will be set when reception is complete, and an interrupt will be generated if the RCIE bit was set.
(vii) Read the RCSTA register to get the ninth bit (if enabled) and determine if any error occurred during reception.
(viii) Read the 8-bit received data by reading the RCREG register.
(ix) If any error occurred, clear the error by clearing the CREN bit.
(2) USART synchronous master mode. In synchronous master mode, the data is transmitted in a half-duplex manner, in which transmission and reception do not occur at the same time. When transmitting data, reception is inhibited, and vice versa. Synchronous mode is entered by setting the SYNC bit (TXSTA). In addition, the SPEN enable bit (RCSTA) is set to configure the TX/CK and RX/DT I/O pins to CK (clock) and DT (data) lines, respectively. Master mode indicates that the processor transmits the master clock on the CK line. Master mode is entered by setting the CSRC bit (TXSTA).
(a) USART synchronous master transmission. The USART transmitter block diagram is shown in Fig. 6.18. The heart of the transmitter is the (serial) transmit shift register (TSR). The shift register obtains its data from the read/write transmit buffer register TXREG. The TXREG register is loaded with data in software. The TSR register is not loaded until the last bit has been transmitted from the previous load. As soon as the last bit is transmitted, the TSR is loaded with new data from the TXREG (if available). Once the TXREG register transfers the data to the TSR register, the TXREG is empty and the TXIF interrupt flag bit is set. The interrupt can be enabled/disabled by setting/clearing the TXIE enable bit. The TXIF flag bit will be set regardless of the state of the TXIE enable bit and cannot be cleared in software. It will reset only when new data is loaded into the TXREG register. While the TXIF flag bit indicates the status of the TXREG register, the TRMT bit (TXSTA) shows the status of the TSR register.
The TRMT bit is a read-only bit that is set when the TSR is empty. No interrupt logic is tied to this bit, so the user has to poll this bit to determine if the TSR register is empty. The TSR is not mapped in data memory, so it is not available to the user. Steps to follow when setting up a synchronous master transmission:
(i) Initialize the SPBRG register for the appropriate baud rate.
(ii) Enable the synchronous master serial port by setting the SYNC, SPEN, and CSRC bits.
(iii) If interrupts are desired, then set the TXIE bit.
(iv) If 9-bit transmission is desired, then set the TX9 bit.
(v) Enable the transmission by setting the TXEN bit.
(vi) If 9-bit transmission is selected, the ninth bit should be loaded in the TX9D bit.
(vii) Start transmission by loading data to the TXREG register.
(b) USART synchronous master reception. Once synchronous mode is selected, reception is enabled by setting either the SREN (RCSTA) or CREN (RCSTA) bit. Data is sampled on the RX/DT pin on the falling edge of the clock. If the SREN bit is set, then only a single word is received. If the CREN bit is set, the reception is continuous until the CREN bit is cleared. If both bits are set, then the CREN bit takes precedence. After clocking the last serial data bit, the received data in the receive shift register (RSR) is transferred to the RCREG register (if it is empty). When the transfer is complete, the RCIF interrupt flag bit is set. The actual interrupt can be enabled/disabled by setting/clearing the RCIE enable bit. The RCIF flag bit is a read-only bit that is cleared by the hardware. In this case, it is cleared when the RCREG register has been read and is empty. The RCREG is a double-buffered register, which means that it is a two-deep FIFO. It is possible for two bytes of data to be received and transferred to the RCREG FIFO and a third byte to begin shifting into the RSR register. On the clocking of the last bit of the third byte, if the RCREG register is still full, then the overrun error bit, OERR, will be set. Steps to follow when setting up a synchronous master reception:
(i) Initialize the SPBRG register for the appropriate baud rate.
(ii) Enable the synchronous master serial port by setting the SYNC, SPEN, and CSRC bits.
(iii) Ensure that the CREN and SREN bits are clear.
(iv) If interrupts are desired, then set the RCIE bit.
(v) If 9-bit reception is desired, then set the RX9 bit.
(vi) If a single reception is required, set the SREN bit. For continuous reception, set the CREN bit.
(vii) The RCIF bit will be set when reception is complete, and an interrupt will be generated if the RCIE bit was set.
(viii) Read the RCSTA register to get the ninth bit (if enabled) and determine if any error occurred during reception.
(ix) Read the 8-bit received data by reading the RCREG register.
(x) If any error occurred, clear the error by clearing the CREN bit.
(3) USART synchronous slave mode. Synchronous slave mode differs from synchronous master mode in that the shift clock is supplied externally at the TX/CK pin (instead of being supplied internally, as in master mode). This allows the device to transfer or receive data while in SLEEP mode. Slave mode is entered by clearing the CSRC bit (TXSTA).
(a) USART synchronous slave transmit. The transmit operation of the synchronous master and slave modes is identical except in the case of the SLEEP mode. If two words are written to the TXREG and then the SLEEP instruction is executed, the following will occur: (1) the first word will immediately transfer to the TSR register and be transmitted; (2) the second word will remain in the TXREG register; (3) the TXIF flag bit will not be set; (4) when the first word has been shifted out of the TSR, the TXREG register will transfer the second word to the TSR and the TXIF flag bit will now be set; (5) if the TXIE enable bit is set, the interrupt will wake the chip from SLEEP and, if the global interrupt is enabled, the program will branch to the interrupt vector (0004h). Steps to follow when setting up a synchronous slave transmission:
(i) Enable the synchronous slave serial port by setting the SYNC and SPEN bits and clearing the CSRC bit.
(ii) Clear the CREN and SREN bits.
(iii) If interrupts are desired, then set the TXIE enable bit.
(iv) If 9-bit transmission is desired, then set the TX9 bit.
(v) Enable the transmission by setting the TXEN enable bit.
(vi) If 9-bit transmission is selected, the ninth bit should be loaded into the TX9D bit.
(vii) Start transmission by loading data to the TXREG register.
(b) USART synchronous slave reception. The receive operation of the synchronous master and slave modes is identical except in the case of the SLEEP mode. Also, the SREN bit is a "don't care" in slave mode. If receive is enabled by setting the CREN bit before the SLEEP instruction, then a word may be received during SLEEP. On completely receiving the word, the RSR register will transfer the data to the RCREG register and, if the RCIE enable bit is set, the interrupt generated will wake the chip from SLEEP. If the global interrupt is enabled, the program will branch to the interrupt vector (0004h). Steps to follow when setting up a synchronous slave reception:
(i) Enable the synchronous slave serial port by setting the SYNC and SPEN bits and clearing the CSRC bit.
(ii) If interrupts are desired, then set the RCIE enable bit.
(iii) If 9-bit reception is desired, then set the RX9 bit.
(iv) To enable reception, set the CREN enable bit.
(v) The RCIF bit will be set when reception is complete, and an interrupt will be generated if the RCIE bit was set.
(vi) Read the RCSTA register to get the ninth bit (if enabled) and determine if any error occurred during reception.
(vii) Read the 8-bit received data by reading the RCREG register.
(viii) If any error occurred, clear the error by clearing the CREN bit.
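For the asynchronous mode described above, the set-up steps translate into a short initialization and polling routine. The sketch below only illustrates the idea: the register names follow the text (SPBRG, TXSTA, RCSTA, TXREG, RCREG, and the TXIF/RCIF flags), but the addresses and bit positions used here are assumptions made for the example, not those of any particular device.

#include <stdint.h>

/* Illustrative register model only: the names follow the text, but the
 * addresses and bit positions are assumptions made for this sketch. */
#define SPBRG   (*(volatile uint8_t *)0x99)
#define TXSTA   (*(volatile uint8_t *)0x98)
#define RCSTA   (*(volatile uint8_t *)0x18)
#define TXREG   (*(volatile uint8_t *)0x19)
#define RCREG   (*(volatile uint8_t *)0x1A)
#define PIR1    (*(volatile uint8_t *)0x0C)

#define TXSTA_TXEN  (1u << 5)   /* transmit enable */
#define TXSTA_BRGH  (1u << 2)   /* high-speed baud rate select */
#define RCSTA_SPEN  (1u << 7)   /* serial port enable */
#define RCSTA_CREN  (1u << 4)   /* continuous receive enable */
#define PIR1_TXIF   (1u << 4)   /* TXREG empty flag */
#define PIR1_RCIF   (1u << 5)   /* RCREG full flag */

/* Asynchronous set-up following the steps in Section 6.3.4.2 (SYNC left 0). */
void usart_init(uint8_t spbrg_value)
{
    SPBRG  = spbrg_value;                /* (i) baud rate */
    TXSTA  = TXSTA_BRGH;                 /* (ii) asynchronous, high-speed BRG */
    RCSTA  = RCSTA_SPEN | RCSTA_CREN;    /* enable the port and reception */
    TXSTA |= TXSTA_TXEN;                 /* enable the transmitter */
}

void usart_putc(uint8_t ch)
{
    while (!(PIR1 & PIR1_TXIF))          /* wait until TXREG is empty */
        ;
    TXREG = ch;                          /* loading TXREG starts transmission */
}

uint8_t usart_getc(void)
{
    while (!(PIR1 & PIR1_RCIF))          /* wait for a received byte */
        ;
    return RCREG;                        /* reading RCREG clears the flag */
}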
6.3.5 Bit-Oriented Protocol Circuits
Bit-oriented protocols are a class of data link layer communication protocols that can transmit frames regardless of frame content. Unlike byte-oriented protocols, bit-oriented protocols provide full-duplex operation and are more efficient and reliable. Byte-oriented protocols, by contrast, use a specific character from the user character set to delimit frames in data-link communications. Today, byte-oriented protocol circuits have largely been replaced by bit-oriented protocols. In today's applications, there are two types of controllers that can perform the bit-oriented protocols: the SDLC controller and the HDLC controller. Figure 6.20 is a block diagram for both SDLC and HDLC controllers.
(Figure 6.20, receive path: rxd input, data decoder, clock recovery (rxc), flag detection, bit stripping, shift register, CRC checker, address detection, receive frame sequencer, and receive data FIFO connected to the internal SFR data/address bus; transmit path: transmit frame sequencer, transmit data FIFO, shift register, CRC generator, bit stuffing, flag insertion, and data encoder driving the txd output, clocked by txc.)
Figure 6.20 A block diagram for SDLC and HDLC controllers.
6.3.5.1 SDLC Controller
Synchronous data link control (SDLC) is an IBM protocol for use in the systems network architecture (SNA) environment. SDLC is a bit-oriented protocol. The SDLC controller supports a variety of link types and topologies. It can be used with point-to-point and multipoint links, bounded and unbounded media, half-duplex and full-duplex transmission facilities, and circuit-switched and packet-switched networks. The SDLC frame appears in Fig. 6.21. As the figure shows, SDLC frames are bounded by a unique flag pattern. The address field always contains the address of the secondary involved in the current communication. Because the primary is either the communication source or destination, there is no need to include the address of the primary: it is already known by all secondaries. The control field uses three different formats, depending on the type of SDLC frame used. The three SDLC frames are described as follows:
(1) Information (I) frames. These frames carry upper-layer information and some control information. Send and receive sequence numbers and the poll final (P/F) bit perform flow and error control.
(Figure 6.21: an SDLC frame consists of a Flag (1 byte), Address (1 or 2 bytes), Control (1 or 2 bytes), Data (variable length), FCS (2 bytes), and a closing Flag (1 byte). The control field takes one of three formats: the information frame format carries the receive sequence number, the poll/final bit, the send sequence number, and a final 0 bit; the supervisory frame format carries the receive sequence number, the poll/final bit, a function code, and the final bits 01; the unnumbered frame format carries a function code, the poll/final bit, a function code, and the final bits 11.)
Figure 6.21 SDLC frame format.
The send sequence number refers to the number of the frame to be sent next. The receive sequence number provides the number of the frame to be received next. Both the sender and the receiver maintain send and receive sequence numbers. The primary uses the P/F bit to tell the secondary whether it requires an immediate response. The secondary uses this bit to tell the primary whether the current frame is the last in its current response.
(2) Supervisory (S) frames. These frames provide control information. They request and suspend transmission, report on status, and acknowledge the receipt of I frames. They do not have an information field.
(3) Unnumbered (U) frames. These frames, as the name suggests, are not sequenced. They are used for control purposes. For example, they are used to initialize secondaries. Depending on the function of the unnumbered frame, its control field is 1 or 2 bytes. Some unnumbered frames have an information field.
The frame check sequence (FCS) precedes the ending flag delimiter. The FCS is usually a cyclic redundancy check (CRC) calculation remainder. The CRC calculation is redone in the receiver. If the result differs from the value in the sender's frame, an error is assumed.
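In SDLC and HDLC the FCS is normally the 16-bit CCITT CRC (generator polynomial x^16 + x^12 + x^5 + 1). The routine below is a bit-serial software sketch of that calculation in its common reflected form; real controllers compute it in hardware, and bit-ordering conventions should be checked against the target equipment.

#include <stddef.h>
#include <stdint.h>

/* Bit-serial CRC-CCITT (x^16 + x^12 + x^5 + 1) in reflected form:
 * initial value 0xFFFF, reflected polynomial 0x8408, final ones' complement.
 * The 16-bit result is the FCS appended after the data field. */
uint16_t sdlc_fcs(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;

    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 1u)
                crc = (uint16_t)((crc >> 1) ^ 0x8408);
            else
                crc >>= 1;
        }
    }
    return (uint16_t)~crc;
}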
6.3.5.2 HDLC Controller
High-level data link control (HDLC) is an ISO communications protocol used in X.25 packet-switching networks. It is a bit-oriented data link control procedure under which all data transfer takes place in frames. Each frame ends with a frame check sequence for error detection. HDLC shares the frame format of SDLC, and HDLC fields provide the same functionality as those in SDLC. Also, like SDLC, HDLC supports synchronous, full-duplex operation. HDLC differs from SDLC in several minor ways. First, HDLC has an option for a checksum of 32 or more bits. And, unlike SDLC, HDLC does not support the loop or hub go-ahead configurations. The major difference between HDLC and SDLC is that SDLC supports only one transfer mode, while HDLC supports three. The three HDLC transfer modes are as follows:
(1) Normal response mode (NRM). This transfer mode is also used by SDLC. In this mode, secondaries cannot communicate with a primary until the primary has given permission.
(2) Asynchronous response mode (ARM). This transfer mode allows secondaries to initiate communication with a primary without receiving permission.
(3) Asynchronous balanced mode (ABM). ABM introduces the combined node. A combined node can act as a primary or a secondary, depending on the situation. All ABM communication is between multiple combined nodes. In ABM environments, any combined station may initiate data transmission without permission from any other station.
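Because the frame content in a bit-oriented protocol is arbitrary, the transmitter performs zero-bit insertion (bit stuffing): after five consecutive 1 bits inside a frame it inserts a 0, so the data can never imitate the 01111110 flag. A minimal sketch follows; bits are held one per array element purely for clarity, whereas real controllers stuff the serial stream on the fly.

#include <stddef.h>

/* Zero-bit insertion between HDLC/SDLC flags: after five consecutive 1s the
 * transmitter inserts a 0. Bits are stored one per array element (0 or 1);
 * returns the number of output bits, or 0 if the output buffer is too small. */
size_t hdlc_bit_stuff(const unsigned char *in, size_t nbits,
                      unsigned char *out, size_t out_max)
{
    size_t n = 0, ones = 0;

    for (size_t i = 0; i < nbits; i++) {
        if (n >= out_max)
            return 0;
        out[n++] = in[i];
        ones = in[i] ? ones + 1 : 0;
        if (ones == 5) {               /* five 1s in a row: stuff a 0 */
            if (n >= out_max)
                return 0;
            out[n++] = 0;
            ones = 0;
        }
    }
    return n;
}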
6.3.6 Multiplexers
A multiplexer, sometimes simply referred to as “mux,” is a device that selects between a number of input signals. In its simplest form, a multiplexer will have two signal inputs, one control input, and one output. An everyday example of a multiplexer is the source selection control on a home stereo unit. Multiplexers are used in building digital semiconductors such as CPUs and graphics controllers. In these applications, the number of inputs is generally a multiple of 2 (2, 4, 8, 16, etc.), the number of outputs is either 1 or a relatively smaller multiple of 2, and the number of control signals is related to the combined number of inputs and outputs. For example, a 2-input, 1-output multiplexer requires only one control signal to select the
input, while a 16-input, 4-output multiplexer requires four control signals to select the input and two to select the output. Multiplexers are mainly categorized into the following types, based on the working mechanisms described in Section 6.2.3. Of the types below, the time-division multiplexer is the more complex, and the digital multiplexer is the more important in applications.
(1) Analog multiplexer
(2) Digital multiplexer
(3) Time-division multiplexer
(4) Fiber optic multiplexer.

6.3.6.1 Digital Multiplexer
In digital IC design, the multiplexer is a device that has multiple input streams and only one output stream. It forwards one of the input streams to the output stream based on the values of one or more "selection inputs" or control inputs. For example, a digital multiplexer with two inputs is a simple connection of logic gates whose output is either input I0 or input I1, depending on the value of a third input, Sel, which selects the input. Its Boolean equation is "Out = (I0 and Sel) or (I1 and not Sel)." The logic of this multiplexer is shown in Fig. 6.22, and its truth table is given in Table 6.2. Larger digital multiplexers are also common; Figure 6.23 is a single-bit 4-to-1 line digital multiplexer, whose logic is expressed by the circuit diagram and the truth table in the figure. On the other hand, demultiplexers take one data input and a number of selection inputs, and they have several outputs. They forward the data input to one of the outputs, depending on the values of the selection inputs. Demultiplexers are sometimes convenient for designing general-purpose logic, because if the demultiplexer's input is always true, the demultiplexer acts as a decoder. This means that any function of the selection bits can be constructed by logically OR-ing the correct set of outputs.
Figure 6.22 A two-input digital multiplexer.
Table 6.2 A Digital Multiplexer Truth Table

I0   I1   Sel   Out
0    0    0     0 (pick B)
0    0    1     0 (pick A)
0    1    0     1 (pick B)
0    1    1     0 (pick A)
1    0    0     0 (pick B)
1    0    1     1 (pick A)
1    1    0     1 (pick B)
1    1    1     1 (pick A)
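The Boolean equation and Table 6.2 translate directly into code. The short sketch below works on single-bit values (0 or 1); for the 4-to-1 version the conventional mapping S1S0 = 00 selecting I0 through S1S0 = 11 selecting I3 is assumed.

/* Out = (I0 AND Sel) OR (I1 AND NOT Sel): Sel = 1 picks A (I0) and Sel = 0
 * picks B (I1), matching Table 6.2. All arguments are single bits (0 or 1). */
static int mux2(int i0, int i1, int sel)
{
    return (i0 & sel) | (i1 & !sel);
}

/* Single-bit 4-to-1 line multiplexer: the select lines S1 S0 play the role
 * of the 2-to-4 line decoder in Fig. 6.23 and route one of I0..I3 to F. */
static int mux4(int i0, int i1, int i2, int i3, int s1, int s0)
{
    int in[4] = { i0, i1, i2, i3 };
    return in[(s1 << 1) | s0];
}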
6.3.6.2 Time Division Multiplexer (TDM)
Time division multiplexers (TDM) share transmission time on an information channel among many data sources. Performance specifications for a TDM include the number of channels, maximum data rate, wavelength range, operating voltage, optical output, electrical output, data transmission type, and data interface. Additional features may also be available. One of the key choices in selecting a TDM is the transfer mode that will be used. Synchronous transfer mode is a communications mode in which data signals are sent at precise intervals that are regulated by a system clock; additional "start" and "stop" pulses are not required. Asynchronous transfer mode (ATM) is a connection-oriented protocol that uses very short, fixed-length (53-byte) packets called cells to carry voice, data, and video signals. By using a standard cell size, ATM can use software for data switching. Consequently, ATM can route and switch traffic at higher speeds. An asynchronous/synchronous time division multiplexer is capable of both asynchronous and synchronous transfer modes. There are three cable choices for a TDM. Single-mode optical fiber cable allows only one mode to propagate. The fiber has a very small core diameter of approximately 8 µm. It permits signal transmission at extremely high bandwidths and allows very long transmission distances. Multimode fiber optic cable supports the propagation of multiple modes. Multimode fiber may have a typical core diameter of 50-100 µm with a refractive index that is graded or stepped. It allows the use of inexpensive LED light sources, and connector alignment and coupling are less critical than with single-mode fiber. Transmission distances and bandwidth are also less than with single-mode fiber because of dispersion.
(Figure 6.23: a single-bit 4-to-1 line multiplexer built from a 2-to-4 line decoder whose outputs gate the data inputs I0-I3 onto the output F according to the select lines S1 and S0, shown together with its complete truth table.)
Figure 6.23 A single bit 4-to-1 line digital multiplexer.
Single-mode/multimode time division multiplexers can be used with both single-mode and multimode cable types.
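The 53-byte ATM cell mentioned above consists of a 5-byte header and a 48-byte payload. The structure below simply names the usual header fields for illustration; it is not a wire-format encoder, since the real header packs these fields across bit boundaries.

#include <stdint.h>

/* An ATM cell is always 53 bytes: a 5-byte header followed by a 48-byte
 * payload. This structure only names the usual UNI header fields. */
#define ATM_PAYLOAD_BYTES 48

struct atm_cell {
    uint8_t  gfc;      /* generic flow control, 4 bits */
    uint16_t vpi;      /* virtual path identifier, 8 bits at the UNI */
    uint16_t vci;      /* virtual channel identifier, 16 bits */
    uint8_t  pt;       /* payload type, 3 bits */
    uint8_t  clp;      /* cell loss priority, 1 bit */
    uint8_t  hec;      /* header error control, a CRC over the header */
    uint8_t  payload[ATM_PAYLOAD_BYTES];
};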
6.4 Data Transmission Protocols
6.4.1 Introduction
As has been mentioned, data are normally transmitted between two devices in multiples of a fixed-length unit, typically of 8 bits. For example, when a terminal is communicating with a computer, each typed (keyed) character is normally encoded into an 8-bit binary value, and the complete message is then made up of a string (block) of similarly encoded characters. Since each character is transmitted bit serially, the receiving device receives one of two signal levels that vary according to the bit pattern (and hence character string) making up the message. For the receiving device to decode and interpret this bit pattern correctly, it must know (1) the bit rate being used, that is, the time duration of each bit cell, (2) the start and end of each element (character or byte), and (3) the start and end of each complete message block or frame. These three factors are known as bit or clock synchronism, byte or character synchronism, and block or frame synchronism, respectively. Synchronization is the traditional approach to data communication. In general, synchronization is accomplished in one of two ways, depending on whether the transmitter and receiver clocks are synchronized. If the data to be transmitted are made up of a string of characters with random (possibly long) time intervals between each character, then each character is normally transmitted independently and the receiver resynchronizes at the start of each new character received. For this type of communication, asynchronous transmission is normally used. If, however, the data to be transmitted are made up of complete blocks of data, each containing, say, multiple bytes or characters, the transmitter and receiver clocks must be in synchronism over long intervals, and hence synchronous transmission is normally used. These two types of transmission will now be considered separately. A more advanced communication protocol is the asynchronous transfer mode (ATM), which is an open, international standard for the transmission of voice, video, and data signals. One advantage of ATM is a format that consists of short, fixed cells (53 bytes), which reduces the overhead of maintaining variable-sized data traffic. The versatility of this mode also allows it to simulate and integrate well with legacy technologies, as well
as offering the ability to guarantee certain service levels, generally referred to as quality of service (QoS) parameters.
6.4.2 Asynchronous Transmission
Asynchronous transmission is primarily used when the data to be transmitted are generated at random intervals. With this type of communication, each character is transmitted at an indeterminate rate, with possibly long random time intervals between successive typed characters. This means that the signal on the transmission line will be in the idle (OFF) state for long time intervals. Therefore, it is necessary in this type of communication for the receiver to be able to resynchronize at the start of each new character received. To accomplish this, each transmitted character or, more generally, item of user data, is encapsulated or framed between an additional start bit and one or more stop bits, as shown in Fig. 6.24. As can be seen from Fig. 6.24, the polarity of the start and stop bits is different. This ensures that there is always a minimum of one transition (1 to 0 to 1) between each successive character, irrespective of the bit sequences in the characters being transmitted. The first 1-to-0 transition after an idle period is then used by the receiving device to determine the start of each new character. In addition, by utilizing a clock whose frequency is N times higher than the transmitted bit rate frequency (N = 16 is typical), the receiving device can reliably determine the state of each transmitted bit in the character by sampling the received signal approximately at the center of each bit cell period. This is shown diagrammatically in Fig. 6.24 and will be discussed further in the following.
6.4.2.1 Bit Synchronization
Asynchronous transmission does not use a clocking mechanism to keep the sending and receiving devices synchronized. Instead, this type of transmission uses bit synchronization to synchronize the devices for each frame that is transmitted.
(Figure 6.24: an 8-bit character framed between start bit(s) and stop bit(s), with the line idle before and after; the receiver detects the start of a new character from the line transition, each bit is sampled in its centre, the data bits run lsb to msb, and the stop bit(s) ensure a transition before the next character.)
Figure 6.24 Asynchronous transmission.
In bit synchronization, each frame begins with a start bit that enables the receiving device to adjust to the timing of the transmitted signal. Messages are kept short so that the sending and receiving devices do not drift out of synchronization for the duration of the message. Asynchronous transmission is most frequently used to transmit character data and is ideally suited to environments in which characters are transmitted at irregular intervals, such as when users enter character data. At the bit level (OSI layer one), the physical layer uses synchronous bit transmission. This enhances the transmitting capacity but also means that a sophisticated method of bit synchronization is required. While bit synchronization in an asynchronous, character-oriented transmission is performed on the reception of the start bit available with each character, in a synchronous transmission protocol there is just one start bit available at the beginning of a frame. To enable the receiver to read the messages correctly, continuous resynchronization is required. Phase buffer segments are therefore inserted before and after the nominal sample point within a bit interval. Two types of synchronization are distinguished: hard synchronization at the start of a frame and resynchronization within a frame. After a hard synchronization, the bit time is restarted at the end of the sync segment. Therefore, the edge that caused the hard synchronization lies within the sync segment of the restarted bit time. Resynchronization shortens or lengthens the bit time so that the sample point is shifted according to the detected edge.
6.4.2.2 Character Synchronization
The asynchronous format for data transmission is a procedure or protocol in which each character is individually synchronized or framed. The structure of an asynchronous frame consists of four key bit components. (1) A start bit or bits. This component signals that a frame is starting and enables the receiving device to synchronize itself with the message. (2) Data bits. This component consists of a group of seven or eight bits when character data is being transmitted. (3) A parity bit. This component is optionally used as a crude method of detecting transmission errors. (4) A stop bit or bits. This component signals the end of the data frame.
At the receiver, a clock of the same nominal frequency is constructed and used to clock-in the data to the receive shift register. Only data that are bounded by the correct start and stop bits are accepted. This operation is normally performed by using a UART. The receiver is started by detecting the edge of the first start bit as shown in Fig. 6.25. The reconstructed receive clock is normally generated using a local stable high rate clock, frequently operating at 16 or 32 times the intended data rate. Clock generation proceeds by detecting the edge of the start bit and counting sufficient clock cycles from the high frequency clock to identify the mid position of the start bit. From there the center of the successive bits is located by counting cycles corresponding to the original data speed. Figure 6.25 shows this reconstruction of the clock. (1) Character format. Most communications equipment will require a specific number of bits to be in each data character or byte, depending on the equipment, the protocol, and the type of information that is to be transmitted. Each bit may be set to a binary value of either 1 or 0. A group of 4 bits is referred to as a digit. This group of 4 bits provides 16 different patterns that are referred to as hexadecimal notation. The basic hexadecimal notation allows a single 4-bit digit or symbol to represent 16 different values: 0–15. The relative position of each bit will determine the value that is assigned to the specific bit which, in turn, will determine the value of the digit. The combination of two 4-bit digits will form an 8-bit “character” or “byte” that may be processed and displayed as a symbol. Most of the existing equipment now uses a character or byte that contains 8 bits, consisting of two 4-bit digits that represent a specific symbol, letter, number, or function depending upon the
(Figure 6.25: the received data waveform, the reconstructed receive clock, and the clock at 16× the data rate; each bit is sampled half a bit period after the reconstructed clock edge.)
Figure 6.25 The transition from the idle state triggers the UART at the receiver to start reception. The clock is reconstructed by matching the phase of the local stable high-rate clock to the transmitted data.
type of translation (Code-Set) used. The digits are referred to as belonging to a column and a row, as presented on many code translation charts. The number of data bits per character may be 5, 6, 7, or 8. The most commonly used 8-bit format uses 7 data bits, with the 8th bit, referred to as the "parity bit," reserved for error-checking purposes. This type of error checking is called "parity checking." One of the most commonly used methods is to transmit the least-significant bit (LSB) first and the parity bit last, following the most-significant bit (MSB) of data. Figure 6.24 also gives the format of an asynchronous character. One of the most commonly used code-sets with this format is 7-data-bit ASCII plus 1 parity bit. Other code sets that may use character parity are BAUDOT and EBCD.
(2) Character parity. The parity bit is used to establish the number of bits that are set to the value of 1. Some common character parity algorithms are identified as follows:
(a) EVEN. The EVEN parity algorithm specifies that the character must have an EVEN number of 1 bits. Referred to as "7 EVEN" (7-E).
(b) ODD. The ODD parity algorithm specifies that the character must have an ODD number of 1 bits. Referred to as "7 ODD" (7-O).
(c) SPACE. The SPACE parity algorithm specifies that the parity bit of the character must have a value of 0, the "space" condition. This is referred to as "7 SPACE" (7-S).
(d) MARK. The MARK parity algorithm specifies that the parity bit of the character must have a value of 1, the "mark" condition. This is referred to as "7 MARK" (7-M).
(e) NONE. Some procedures will use all 8 bits for data or may not provide error checking. Therefore, NONE of the bits is used for parity and all 8 bits are considered to be data. This technique is referred to as "EIGHT NONE" (8-N).
Character parity is also referred to as "vertical parity." The vertical parity error-checking algorithms will report an error if the character does not contain the correct number of 1 bits in the correct positions. This is displayed by a bar through the parity-flawed character.
(3) Transmission speeds and timing. Transmission speeds are expressed as the number of bits that are transmitted per unit of time, usually in bits per second (bps). The number of characters that flow per second depends on the number of bits required to form one character.
The accuracy of the bit times is very critical: all bit times must remain within a narrow range to ensure accurate and error-free communications. The allowable bit-time variation is normally less than 3%. The sending data terminal equipment (DTE) must generate the bit timing using a very precise internal clock, usually crystal controlled, so that all bit times are of equal duration and operate at a constant repetition rate. The receiving DTE must use the same defined normal bit-timing clock speed as the sending DTE; these two clocks must be operating at the same or matched speed. The receiving DTE will sense the beginning of the start bit and then sample each succeeding bit near the optimum center of the bit time. The receiving DTE sample timing, or STROBE, is generated by using another internal clock, usually operating at speeds 16 or 32 times as fast as the normal bit-timing clock. Some common speeds with the corresponding bit times and character rates are listed in Table 6.3.
Table 6.3 Transmit Speeds and Their Timing

Speed (bps)   Bit Time     Character Rate (cps)
                           10 bit     11 bit     8 bit (SYNC)
110           9.09 ms      11         10         14
150           6.666 ms     15         13.6       19
300           3.333 ms     30         27.3       37.5
600           1.666 ms     60         54.5       75
1200          833 µs       120        109.1      150
2400          416 µs       240        218.2      300
3600          278 µs       360        327.3      450
4800          208 µs       480        436.4      600
9600          104 µs       960        872.7      1200
19200         52 µs        1920       1745.5     2400
48000         21 µs        4800       4363.6     6000
56000         18 µs        5600       5090.1     7000
64000         16 µs        6400       5818.2     8000

6.4.2.3 Frame Synchronization
An asynchronous link communicates data as a series of characters of fixed size and format, as shown in Fig. 6.26. In fact, there need be
no timing relationship between successive characters (or bytes of data). Individual characters may be separated by any arbitrary idle period. When asynchronous transmission is used to support packet data links (e.g., the Internet), special characters have to be used ("framing") to indicate the start and end of each frame transmitted. One character (known as an escape character) is reserved to mark any occurrence of the special characters within the frame. In this way the receiver is able to identify which characters are part of the frame and which are part of the "framing." Packet communication over asynchronous links is used by some users to get access to a network using a modem.
(1) Block mode transmission. Characters may be linked together or stored in a memory buffer and then transmitted in one contiguous string in which the stop bit of one character is immediately followed by the start bit of the next character. This contiguous string of characters is referred to as a "transmission block." The transmission block may use special characters to provide control functions and to act as delimiters to assist in the flow-control and error-recovery procedures. These special characters are referred to as control characters and normally provide a standard set of controls and functions. Some equipment and protocols may modify the use and functions of the control characters for unique circumstances.
(2) Control characters and functions. There are many characters that are used for specific functions, the control of the flow of data, the control of the associated devices, and error reporting. Table 6.4 is a list of the more commonly used control characters and their standard functions.
(3) Transmission block (message). The normal message or transmission block consists of a beginning, the data or text, and an ending. The beginning of a message is indicated by the SOH or the STX character. The header or text follows the respective character. The end of the data or text is indicated by the ETX or the ETB character. The block mode transmission protocol may provide error detection on each character with the use of character parity, also referred to as VRC or vertical parity.
Char
Char
Char Time
5-8 bits of data
Any idle period
Figure 6.26 Asynchronous transmission of a series of characters.
Zhang_Ch06.indd 731
5/24/2008 10:33:19 AM
732
INDUSTRIAL CONTROL TECHNOLOGY
Table 6.4 Control Characters Control Character NULL = SOH = STX = ETX = EOT = ENQ = ACK = BEL = BS = HT = LF = VT = FF = CR = SO = SI = DLE = DC1 = DC2 = DC3 = DC4 = NAK = SYN = ETB = CAN = EM = SUB = ESC = FS = GS = RS = US = DEL =
Hex 7—E
Hex 7—O
Hex 8—N
0-0 8-0 8-2 0-3 8-4 0-5 0-6 8-7 8-8 0-9 0-A 8-B 0-C 8-D 8-E 0-F 9-0 1-1 1-2 9-3 1-4 9-5 9-6 1-7 1-8 9-9 9-A 1-B 9-C 1-D 1-E 9-F F-f
1-0 0-1 0-2 8-3 0-4 8-5 8-6 0-7 0-8 8-9 8-A 0-B 8-C 0-D 0-E 9-F 1-0 9-1 9-2 1-3 9-4 1-5 1-6 9-7 9-8 1-9 1-A 9-B 1-C 9-D 9-E 1-F 7-F
0-0 0-1 0-2 0-3 0-4 0-5 0-6 0-7 0-8 0-9 0-A 0-B 0-C 0-D 0-E 0-F 1-0 1-1 1-2 1-3 1-4 1-5 1-6 1-7 1-8 1-9 1-a 1-B 1-C 1-D 1-E 1-F F-F
Description or Function Null or pad character Start of header Start of text End of text Transmission Enquiry Acknowledgment BELl or alarm character Back space character Horizontal tabulation Line feed Vertical tabulation Form feed or top of form Carriage return Shift out Shift in Data link escape Device control 1—Reader on Device control 2—Punch on Device control 3—Reader off Device control 4—Punch off Negative acknowledgment Synchronizing character (SYNC) End of transmission block Cancel End of media Substitute character Escape character File separator Group separator Record separator Unit separator Delete or trailing PAD
referred to as VRC or vertical parity. The block mode may also include the entire message in an error detection procedure that is referred to as a block check or longitudinal redundancy check (LRC) also referred to as the horizontal parity.
The character parity and block check procedure is designed to ensure that all of the bits sent by the transmitting device are correctly received by the receiving device. Several algorithms may be used, providing different levels of accuracy or validity. Figure 6.27 gives the format of a message in asynchronous transmission.

Figure 6.27 A format of an asynchronous message: the SOH character, a header, the STX character, the text or data message, and a BCC; a portion of the transmission is protected by the BCC.
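To make the character parity (VRC) and longitudinal block check described above concrete, the following is a minimal Python sketch. The choice of even parity, the 7-bit character width, and the function names are illustrative assumptions rather than part of any particular protocol.

    # Sketch of character parity (VRC) and a longitudinal block check (LRC/BCC)
    # for 7-bit characters. Even parity is assumed for illustration only.
    def vrc_bit(ch):
        """Even-parity bit for a 7-bit character code."""
        return bin(ch & 0x7F).count("1") % 2

    def encode_block(text):
        """Attach a VRC bit (bit 7) to each character and compute the LRC."""
        coded, lrc = [], 0
        for c in text:
            ch = ord(c) & 0x7F
            coded.append(ch | (vrc_bit(ch) << 7))   # row (vertical) parity
            lrc ^= ch                               # column (longitudinal) parity
        return coded, lrc

    if __name__ == "__main__":
        block, lrc = encode_block("DATA")
        print([hex(b) for b in block], hex(lrc))

The receiver would recompute both the per-character parity and the LRC and compare them with the received values; any mismatch indicates a transmission error.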
6.4.3 Synchronous Transmission
With synchronous transmission, the complete block or frame of data is transmitted as a single bit stream with no delay between each element of 8, 16, 32, or more bits. To enable the receiving device to achieve the various levels of synchronization, the transmitted bit stream is suitably encoded so that the receiver can be kept in bit synchronism; all frames are preceded by one or more reserved bytes or characters to ensure that the receiver reliably interprets the received bit stream on the correct byte or character boundaries (byte or character synchronization); and the contents of each frame are encapsulated between a pair of reserved bytes or characters. Synchronous transmission ensures that the receiver, on receipt of the opening byte or character after an idle period, can determine that a new frame is being transmitted and, on receipt of the closing byte or character, that this signals the end of the frame. During the period between the transmissions of successive frames, either idle (sync) bytes or characters are continuously transmitted to allow the receiver to retain bit and byte synchronism, or each frame is preceded by one or more special synchronizing bytes or characters to allow the receiver to regain synchronism. This is shown diagrammatically in Fig. 6.28.

Figure 6.28 Synchronous transmission: the frame contents are delimited by a start-of-frame and an end-of-frame byte or character, with sync bytes or characters transmitted on the line between successive frames.

Although the type of framing (character or block) is often used to discriminate between asynchronous and synchronous transmission, the fundamental difference between the two methods is that with asynchronous transmission the transmitter and receiver clocks are unsynchronized, whereas with synchronous transmission both clocks are synchronized. There are two alternative ways of organizing a synchronous data link: character (or byte) oriented and bit oriented. The essential difference between these two methods is in the way the start and end of a frame is
determined. With a bit-oriented system, it is possible for the receiver to detect the end of a frame at any bit instant and not just on an 8-bit (byte) boundary. This implies that a frame may be N bits in length, where N is an arbitrary number. Both character-oriented and bit-oriented schemes are described here.
6.4.3.1 Bit Synchronization
With synchronous transmission, start and stop bits are not used. Instead, each frame is transmitted as a contiguous stream of binary digits. It is necessary, therefore, to utilize a different clock (bit) synchronization method. One approach, of course, is to have two pairs of lines between the transmitter and receiver, one to carry the transmitted bit stream and the other to carry the associated clock (timing) signal. The receiver could then utilize the latter to clock the incoming bit stream into, say, the receiver register within the USRT. In practice, however, this is very rarely possible, since if a switched telephone network is used, for example, only a single pair of lines is normally available. Two alternative methods are used to overcome this dilemma: (1) either the clocking information (signal) is embedded into the transmitted bit stream and subsequently extracted by the receiver, or (2) the information to be transmitted is encoded in such a way that there are sufficient guaranteed transitions in the transmitted bit stream to synchronize a separate clock held at the receiver. Both of these approaches will now be considered. An alternative approach to encoding the clock in the transmitted bit stream is to utilize a stable clock source at the receiver, which is kept in time synchronism with the incoming bit stream. However, as there are no
start and stop bits with a synchronous transmission scheme, it is necessary to encode the information in such a way that there are always sufficient bit transitions (1 to 0 or 0 to 1) in the transmitted waveform to enable the receiver clock to be resynchronized at frequent intervals. One approach is to pass the data to be transmitted through a scrambler, which has the effect of randomizing the transmitted bit stream and hence removing contiguous strings of 1s or 0s. Alternatively, the data may be encoded in such a way that suitable transitions are always naturally present. The bit pattern to be transmitted is first encoded as shown in Fig. 6.29, the resulting encoded signal being referred to as a nonreturn-to-zero inverted (NRZI) waveform. With NRZI encoding (also known as differential encoding), the transmission of a binary 1 does not change the signal level (1 or 0), whereas a binary 0 does cause a change.

Figure 6.29 NRZI (differential) encoding of the bit stream 1 0 0 1 1 1 0 1, showing the corresponding NRZ and NRZI waveforms.

This means that there will always be bit transitions in the incoming signal of an NRZI waveform, providing there are no contiguous streams of binary 1s. On the surface, this may seem no different from the normal NRZ waveform but, as was described previously, if a bit-oriented scheme with zero bit insertion is adopted, an active line will always have a binary 0 in the transmitted bit stream at least every five bit cells. Consequently, the resulting waveform will contain a guaranteed number of transitions, since long strings of 0s cause a transition in every bit cell, and this enables the receiver to adjust its clock so that it is in synchronism with the incoming bit stream. The circuit used to maintain bit synchronism is known as a digital phase-locked loop (DPLL). To utilize a DPLL, a crystal-controlled oscillator (clock source), which can hold its frequency sufficiently constant to require only very small adjustments at irregular intervals, is connected to the DPLL. Typically, the frequency of the clock is 32 times the bit rate used on the data link, and this in turn is used by the DPLL to derive the timing interval between successive samples of the received bit stream. Hence, assuming that the incoming bit stream and the local clock are in synchronism, the state (1 or 0) of the incoming signal on the line will be sampled
(clocked) at the center of each bit cell with exactly 32 clock periods between samples. This is shown in Fig. 6.30(a). Now assume that the incoming bit stream and the local clock drift out of synchronism. The adjustment of the sampling instant is carried out in discrete increments, as shown in Fig. 6.30(b).

Figure 6.30 DPLL operation: (a) in phase, with sampling (clock) pulses generated exactly 32 clock periods apart; (b) clock adjustment rules, with each bit period divided into four quadrants A, B, C, and D and the interval to the next sampling pulse adjusted to 30, 31, 33, or 34 clock periods (adjustments of -2, -1, +1, and +2).

If there are no transitions on the line, the DPLL simply generates a sampling pulse every 32 clock periods after the previous one. Whenever a transition (1 to 0 or 0 to 1) is detected, however, the time interval between the previously generated sampling pulse and the next is determined according to the position of the transition relative to where the DPLL thought it should occur. To achieve this, each bit period is divided into four quadrants, shown as A, B, C, and D in the figure. Each quadrant is equal to eight clock periods, and if, for example, a transition occurs during quadrant A, this indicates that the last sampling pulse
was in fact too close and hence late. The time period to the next pulse is therefore shortened to 30 clock periods. Similarly, if a transition occurs in quadrant D, this indicates that the previous sampling pulse was too early; the time period to the next pulse is therefore lengthened to 34 clock periods. Transitions in quadrants B and C are clearly nearer to the assumed transition and, hence, the relative adjustments are smaller (-1 and +1, respectively). Clearly, a transition at the assumed transition point results in no adjustment. In this way, successive adjustments keep the generated sampling pulses close to the center of each bit cell. It can readily be deduced that in the worst case the DPLL will require 12 bit transitions to converge to the nominal bit center of a waveform: four bit periods of coarse adjustments (±2) and eight bit periods of fine adjustments (±1). Hence, when using a DPLL, it is usual, before transmitting the first frame on a line or following an idle period between frames, to transmit a number of characters to provide a minimum of 12 bit transitions. Two characters each composed of all 0s, for example, will provide 16 transitions with NRZI encoding. This ensures that the DPLL will generate sampling pulses at the nominal center of each bit cell by the time the opening flag of a frame is received. It should be stressed, however, that once in synchronism (lock), only minor adjustments will normally take place during the reception of a frame.
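A rough sketch of the two mechanisms just described follows: the first two functions perform NRZI encoding and decoding of a bit stream, and the third applies the quadrant-based clock adjustment rule of Fig. 6.30(b). The adjustment values follow the text above; the function names and interfaces are illustrative assumptions only.

    # NRZI (differential) encoding: a binary 1 leaves the line level unchanged,
    # while a binary 0 toggles it, so zero-bit insertion (which limits runs of 1s)
    # guarantees transitions for the receiver clock.
    def nrzi_encode(bits, start_level=1):
        level, levels = start_level, []
        for b in bits:
            if b == 0:
                level ^= 1        # a 0 causes a transition
            levels.append(level)  # a 1 leaves the level unchanged
        return levels

    def nrzi_decode(levels, start_level=1):
        prev, bits = start_level, []
        for lv in levels:
            bits.append(1 if lv == prev else 0)
            prev = lv
        return bits

    # Clock adjustment rule of Fig. 6.30(b): the nominal sampling interval is
    # 32 clock periods; a transition observed in quadrant A, B, C, or D changes
    # the interval to the next sampling pulse by -2, -1, +1, or +2 periods.
    def dpll_next_interval(quadrant=None):
        adjustment = {"A": -2, "B": -1, "C": +1, "D": +2}
        return 32 if quadrant is None else 32 + adjustment[quadrant]

    if __name__ == "__main__":
        data = [1, 0, 0, 1, 1, 1, 0, 1]            # bit stream of Fig. 6.29
        line = nrzi_encode(data)
        assert nrzi_decode(line) == data
        print(line, dpll_next_interval("A"), dpll_next_interval())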
6.4.3.2 Character-Oriented Synchronous Transmission
With a character-oriented scheme, each frame to be transmitted is made up of a variable number of 7- or 8-bit characters, which are transmitted as a contiguous string of binary bits with no delay between them. The receiving device, having achieved clock (bit) synchronism, must therefore be able to (1) detect the start and end of each character (character synchronism), and (2) detect the start and end of each complete frame (frame synchronism). A number of schemes have been devised to achieve this, the main aim of which is to make the synchronization process independent of the actual contents of a frame. This type of synchronization scheme is said to be transparent to the frame contents, or simply data transparent. The most common character-oriented scheme is that used in the binary synchronous control protocol known as Basic Mode. This is used primarily for the transfer of alphanumeric characters between communities of intelligent terminals and a computer. A number of alternative forms of this protocol are in use, and an example of the frame format used in one of these is shown in Fig. 6.31(a). The format selected is the one normally used to transmit a block of data, that is, an information frame.
Figure 6.31 Character-oriented link: (a) basic frame format (SYN SYN STX, frame contents, ETX); (b) character synchronization (after an idle period the receiver hunts through the received bit stream one bit at a time until it detects the SYN pattern, at which point it is in character synchronism); (c) data transparency (start-of-frame sequence DLE STX, end-of-frame sequence DLE ETX, with an additional DLE inserted before any DLE occurring in the frame contents).
When using Basic Mode, character synchronism is achieved by the transmitting device sending two or more special synchronizing characters (known as SYN) immediately before each transmitted frame. The receiver, at start-up or after an idle period, then scans (hunts) the received bit stream one bit at a time until it detects the known pattern of the SYN character, at which point the receiver has achieved character synchronism; the subsequent string of binary bits is then treated as a contiguous sequence of 7- or 8-bit characters as defined at set-up time. This is illustrated in Fig. 6.31(b). With the Basic Mode protocol, the SYN character (00010110) is one of the reserved characters from the ISO-defined set of character codes. Similarly, the characters used to signal the start and end of each frame are from this set. In the example, the start-of-text (STX) character is used to signal the start of a frame and the end-of-text (ETX) character is used to signal the end of a frame. Thus, as each character in the frame is received following the STX character, it is compared with the ETX character. If the
character is not an ETX character, it is simply stored. If it is an ETX character, however, the frame contents are processed. This scheme is satisfactory provided the data (information) transmitted is made up of strings of printable characters entered at a keyboard, for example, since then there is no possibility of an ETX control character being present within the frame contents. Clearly, if the latter did occur, it would cause the receiver to terminate the reception process abnormally. In some applications, however, the contents of frames may not be character strings but rather the binary contents of a file, for example. For this type of application, it is necessary to take additional steps to ensure that the end-of-frame termination character is not present within the frame contents; that is, the scheme must be data transparent. To achieve this with a character-oriented transmission control scheme, a pair of characters is used both to signal the start of a frame and to signal the end of a frame. This is shown in Fig. 6.31(c). A pair of characters is necessary to achieve data transparency: to avoid the abnormal termination of a frame due to the frame contents containing the end-of-frame character sequence, the transmitter inserts a second data link escape (DLE) character into the transmitted data stream whenever it detects a DLE character in the contents of the frame. This is often referred to as character (or byte) stuffing. The receiver can thus detect the end of a frame by the unique DLE-ETX sequence and, whenever it receives a DLE character followed by a second DLE, it discards the second character. As has been mentioned, with a frame-oriented scheme, transmission errors are normally detected by the use of additional error-detection digits computed from the contents of the frame and transmitted at the end of the frame. To maintain transparency, therefore, the error check characters are transmitted after the closing frame sequence. The different error-detection methods will be expanded on in a later section.
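The character (byte) stuffing just described can be sketched in a few lines of Python; the specific character codes and function names below are assumptions made for illustration rather than a reference implementation of Basic Mode.

    # DLE-STX opens a frame, DLE-ETX closes it, and any DLE occurring inside
    # the frame contents is doubled by the transmitter.
    DLE, STX, ETX = 0x10, 0x02, 0x03

    def stuff_frame(payload: bytes) -> bytes:
        body = payload.replace(bytes([DLE]), bytes([DLE, DLE]))  # double DLEs
        return bytes([DLE, STX]) + body + bytes([DLE, ETX])

    def unstuff_frame(frame: bytes) -> bytes:
        assert frame[:2] == bytes([DLE, STX]) and frame[-2:] == bytes([DLE, ETX])
        body = frame[2:-2]
        return body.replace(bytes([DLE, DLE]), bytes([DLE]))

    if __name__ == "__main__":
        data = bytes([0x41, DLE, ETX, 0x42])       # contents containing DLE
        assert unstuff_frame(stuff_frame(data)) == data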
6.4.3.3 Bit-Oriented Synchronous Transmission
With a bit-oriented scheme, each transmitted frame may contain an arbitrary number of bits, which is not necessarily a multiple of 8. A typical frame format used with a bit-oriented scheme is shown in Fig. 6.32.

Figure 6.32 Bit-oriented link: (a) frame format (opening flag 01111110, frame contents, closing flag 01111110, with idle or flag patterns on the line between frames); (b) zero-bit insertion (an additional 0 is inserted after any five contiguous 1s in the frame contents).

As can be seen, the opening and closing flag fields indicating the start and end of the frame are the same (01111110). Thus, to achieve data transparency with this scheme it is necessary to ensure that the flag sequence is not present in the frame contents. This is accomplished by the use of a technique known as zero bit insertion or bit stuffing. As the frame contents are transmitted to line, the transmitter detects whenever there is a sequence of five contiguous binary 1 digits and automatically inserts an
additional binary 0. In this way, the flag sequence 01111110 can never be transmitted between the opening and closing flags. Similarly, the receiver, after detecting the opening flag of a frame, monitors the incoming bit stream and, whenever it detects a binary 0 after five contiguous binary 1s, removes (deletes) it from the frame contents. As with a byte-oriented scheme, each frame will normally contain additional error-detection digits at the end of the frame, but the inserted and deleted 0s are not included in the error-detection processing.
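Zero-bit insertion is simple enough to show directly; the following is a minimal sketch of the transmitter- and receiver-side operations, with function names chosen purely for illustration.

    # Bit stuffing: insert a 0 after any five consecutive 1s in the frame
    # contents, so the flag pattern 01111110 can never appear inside a frame.
    def stuff_bits(bits):
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                out.append(0)        # inserted zero, not part of the data
                run = 0
        return out

    def unstuff_bits(bits):
        out, run, i = [], 0, 0
        while i < len(bits):
            b = bits[i]
            out.append(b)
            run = run + 1 if b == 1 else 0
            if run == 5:
                i += 1               # skip the inserted zero that must follow
                run = 0
            i += 1
        return out

    if __name__ == "__main__":
        data = [0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0]
        assert unstuff_bits(stuff_bits(data)) == data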
6.4.4 Data Compression and Decompression
Data compression (including data decompression) is often referred to as a branch of information theory in which the primary objective is to minimize the amount of data to be transmitted. A simple characterization of data compression is that it involves transforming a string of characters in some representation (such as ASCII) into a new string (e.g., of bits) that contains the same information but whose length is as small as possible. Data compression has important applications in the areas of data transmission and data storage. Many data processing applications require storage of large volumes of data, and, at the same time, the proliferation of computer communication networks is resulting in massive transfers of data over communication links. Compressing data to be stored or transmitted reduces storage and communication costs. When the amount of data to be transmitted is reduced, the effect is that of increasing the capacity of the communication channel. Similarly, compressing a file to half of its original size is equivalent to doubling the capacity of the storage medium. It may then become feasible
to store the data at a higher, and thus faster, level of the storage hierarchy and reduce the load on the input/output channels of the computer system. If the specific type of content is already known before starting data compression, a suitable procedure can be chosen. Any type of content has an inherent structure offering several opportunities for compression algorithms. Examples of content types encountered in practical use include (1) text data, (2) pictures, (3) audio data, and (4) video data consisting of consecutive pictures.
6.4.4.1 Lossy and Lossless Compression and Decompression
Lossless compression is a compression technique that does not lose any data in the compression process. Lossless compression "packs" data into a smaller file by using a kind of internal shorthand to signify redundant data. If an original file is 1.5 MB, for example, lossless compression can reduce it to about half that size, depending on the type of file being compressed. This makes lossless compression convenient for transferring files across a network, as smaller files transfer faster. Lossless compression is also handy for storing files, as they take up less space. A lossy data compression method, by contrast, takes a different approach: compressing data and then decompressing it may yield a result that is different from the original but "close enough" to be useful. Lossy methods (typically referred to as codecs in this context) are most often used for compressing sound, images, or video, in contrast with lossless data compression. Depending on the design of the format, lossy data compression often suffers from generation loss; that is, compressing and decompressing multiple times does more damage to the data than doing it once. The advantage of lossy methods over lossless methods is that in some cases a lossy method can produce a much smaller compressed file than any known lossless method, while still meeting the requirements of the application.
6.4.4.2 Data Encoding and Decoding
Data encoding is the process of putting a sequence of characters (letters, numbers, punctuation, and certain symbols) into a specialized format for efficient transmission or storage. Data decoding is the opposite process, converting an encoded format back into the original sequence of characters. Both data
encoding and decoding are used in data communications, networking, and storage. The term is especially applicable to radio (wireless) communications systems. In data communications, Manchester encoding is a special form of encoding in which the binary digits (bits) are represented by transitions between high and low logic states; it contrasts with the nonreturn-to-zero (NRZ) signaling mentioned in previous paragraphs, in which a bit is represented by a steady level. In radio communications, numerous encoding and decoding methods exist, some of which are used only by specialized groups of people (amateur radio operators, for example). The terms encoding and decoding are often used in reference to the processes of analog-to-digital conversion and digital-to-analog conversion. In this sense, these terms can apply to any form of data, including text, images, audio, video, multimedia, computer programs, or signals in sensors, telemetry, and control systems. Encoding should not be confused with encryption, a process in which data is deliberately altered so as to conceal its content. Encryption can be done without changing the particular code that the content is in, and encoding can be done without deliberately concealing the content.
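As a simple illustration of transition-based encoding, the sketch below produces a Manchester-encoded waveform. The IEEE 802.3 convention (a 0 sent as a high-to-low half-bit pair, a 1 as low-to-high) is assumed here; the opposite convention also exists, and the function names are illustrative only.

    # Manchester encoding: every data bit is represented by a mid-bit transition,
    # which guarantees transitions for clock recovery regardless of the data.
    def manchester_encode(bits):
        out = []
        for b in bits:
            out += [0, 1] if b else [1, 0]   # two half-bit levels per data bit
        return out

    def manchester_decode(levels):
        bits = []
        for first, second in zip(levels[0::2], levels[1::2]):
            bits.append(1 if (first, second) == (0, 1) else 0)
        return bits

    if __name__ == "__main__":
        data = [1, 0, 1, 1, 0, 0]
        assert manchester_decode(manchester_encode(data)) == data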
6.4.4.3 Basic Data Compression Algorithms
(1) Shannon–Fano algorithm. Around 1949, Claude E. Shannon (Bell Laboratories) and Robert M. Fano (MIT) developed a coding procedure that generates a binary code tree. The procedure evaluates the probability of each symbol and assigns code words of a corresponding code length. To create a code tree according to Shannon and Fano, an ordered table is required giving the frequency of every symbol. The table is divided into two segments such that the upper and the lower segments have nearly the same sum of frequencies, and this division is repeated on each segment until only single symbols are left. Table 6.5 and Fig. 6.33 give an example of how this coding algorithm is performed. In this example, the original data can be coded with an average length of 2.26 bits per symbol, whereas linear coding of five symbols would require 3 bits per symbol. Before a Shannon–Fano code tree can be generated, however, the frequency table must be known or must be derived from preceding data.

Table 6.5 Shannon–Fano Algorithm

Symbol   Frequency   Code Length   Code   Total Length
A        24          2             00     48
B        12          2             01     24
C        10          2             10     20
D        8           3             110    24
E        8           3             111    24

Notes: Total: 62 symbols; Shannon–Fano coded: 140 bits; linear (3 bits/symbol): 186 bits.

Figure 6.33 Shannon–Fano algorithm: the symbol set ABCDE is split into AB and CDE, AB is split into A and B, CDE is split into C and DE, and DE is split into D and E.
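A compact recursive sketch of the Shannon–Fano construction is given below. With the frequencies of Table 6.5 it reproduces the codes listed there; the way the split point is chosen when two splits are equally balanced is an implementation choice, not part of the original method.

    # Recursive Shannon-Fano code construction. The split point is chosen so
    # that the two halves have totals as close to equal as possible.
    def shannon_fano(symbols):
        """symbols: list of (symbol, frequency) sorted by decreasing frequency."""
        if len(symbols) == 1:
            return {symbols[0][0]: ""}
        total = sum(f for _, f in symbols)
        running, split, best = 0, 1, None
        for i in range(1, len(symbols)):
            running += symbols[i - 1][1]
            diff = abs(2 * running - total)
            if best is None or diff < best:
                best, split = diff, i
        codes = {}
        for sym, code in shannon_fano(symbols[:split]).items():
            codes[sym] = "0" + code          # upper segment gets prefix 0
        for sym, code in shannon_fano(symbols[split:]).items():
            codes[sym] = "1" + code          # lower segment gets prefix 1
        return codes

    if __name__ == "__main__":
        freqs = [("A", 24), ("B", 12), ("C", 10), ("D", 8), ("E", 8)]
        print(shannon_fano(freqs))
        # {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}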
(2) Lempel–Ziv algorithm. A variety of compression methods are based on the fundamental work of Abraham Lempel and Jacob Ziv. Their original algorithms are generally denoted LZ77 and LZ78, and a variety of derivatives were introduced after them; one of these is LZW.
(a) LZ77. LZ77 is a dictionary-based algorithm that addresses byte sequences in the previously coded contents instead of the original data. In general, only one coding scheme exists; all data are coded in the same form: (1) an address referring to already coded contents, (2) the sequence length, and (3) the first deviating symbol. If no identical byte sequence is available in the former contents, the address 0, the sequence length 0, and the new symbol are coded. Table 6.6 is an example of LZ77 coding.
(b) LZ78. LZ78 is based on a dictionary that is created dynamically at runtime. Both the encoding and the decoding processes use the same rules to ensure that an identical dictionary is available. This dictionary contains every sequence already used in building the former contents. The compressed data have the general form: (1) an index addressing an entry of the dictionary, and (2) the first deviating symbol. Table 6.7 is an example of LZ78 coding.
(c) LZW. The LZW compression method is derived from LZ78. It was introduced by Terry A. Welch in 1984. LZW is an important part of a variety of data formats; graphic formats such as GIF and TIFF, as well as PostScript, use LZW for entropy coding.
Table 6.6 An Example of LZ77 Coding

Target String (coded | remaining)   Address   Length   Deviating Symbol
abracadabra                         0         0        'a'
a | bracadabra                      0         0        'b'
ab | racadabra                      0         0        'r'
abr | acadabra                      3         1        'c'
abrac | adabra                      2         1        'd'
abracad | abra                      7         4        ''

Table 6.7 An Example of LZ78 Coding

Target String (coded | remaining)   Index   New Dictionary Entry   Deviating Symbol
abracadabra                         0       1. "a"                 'a'
a | bracadabra                      0       2. "b"                 'b'
ab | racadabra                      0       3. "r"                 'r'
abr | acadabra                      1       4. "ac"                'c'
abrac | adabra                      1       5. "ad"                'd'
abracad | abra                      1       6. "ab"                'b'
abracadab | ra                      3       7. "ra"                'a'
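The LZ78 scheme illustrated in Table 6.7 can be sketched in a few lines of Python; the token representation and function names below are assumptions for illustration, and the example reproduces the table's output for "abracadabra".

    # Minimal LZ78 encoder/decoder: each output token is
    # (dictionary index, deviating symbol), and the decoder rebuilds the
    # identical dictionary from the token stream.
    def lz78_encode(text):
        dictionary, tokens, phrase = {}, [], ""
        for ch in text:
            if phrase + ch in dictionary:
                phrase += ch
            else:
                tokens.append((dictionary.get(phrase, 0), ch))
                dictionary[phrase + ch] = len(dictionary) + 1
                phrase = ""
        if phrase:                            # flush a trailing phrase, if any
            tokens.append((dictionary[phrase], ""))
        return tokens

    def lz78_decode(tokens):
        entries, out = [""], []
        for index, ch in tokens:
            phrase = entries[index] + ch
            out.append(phrase)
            entries.append(phrase)
        return "".join(out)

    if __name__ == "__main__":
        tokens = lz78_encode("abracadabra")
        print(tokens)   # [(0,'a'), (0,'b'), (0,'r'), (1,'c'), (1,'d'), (1,'b'), (3,'a')]
        assert lz78_decode(tokens) == "abracadabra"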
LZW builds a dictionary that contains every byte sequence already coded, and the compressed data consist exclusively of indices into this dictionary. Before starting, the dictionary is preset with entries for the 256 single-byte symbols; every following entry represents a sequence longer than one byte. The algorithm defines mechanisms to create the dictionary and to ensure that it is identical for both the encoding and the decoding process.
(3) Arithmetic coding. The aim of arithmetic coding is to provide code words with an ideal length. As with every other entropy coder, the probability of appearance of the individual symbols must be known. Arithmetic coding assigns an interval to each symbol whose size reflects the probability of appearance of that symbol. The code word for a symbol is an arbitrary rational number belonging to the corresponding interval. The entire set of data is represented by a single rational number, which lies within the successively nested subintervals of the encoded symbols.
With data being added, the number of significant digits rises continuously. In arithmetic coding, a message is encoded as a real number in the interval from zero to one. Arithmetic coding typically gives a better compression ratio than Huffman coding, as it produces a single codeword for the entire message rather than a separate codeword for each symbol. Arithmetic coding is a lossless coding technique. There are a few disadvantages of arithmetic coding. One is that the whole codeword must be received before decoding of the symbols can start, and if there is a corrupt bit in the codeword, the entire message could become corrupt. Another is that there is a limit to the precision of the number that can be encoded, which limits the number of symbols that can be encoded within a codeword. There are also many patents on arithmetic coding, so the use of some of the algorithms may require royalty fees. Table 6.8 illustrates the arithmetic coding algorithm with an example to aid understanding.

Table 6.8 An Example of the Arithmetic Coding Algorithm

(1) Initial intervals
Symbol   Probability   Interval
a        0.2           [0.0, 0.2)
b        0.3           [0.2, 0.5)
c        0.1           [0.5, 0.6)
d        0.4           [0.6, 1.0)

(2) New "a" interval (subintervals after zooming into [0.0, 0.2))
Symbol   Interval
a        [0.0, 0.04)
b        [0.04, 0.1)
c        [0.1, 0.12)
d        [0.12, 0.2)

(3) New "b" interval (subintervals after zooming into [0.04, 0.1))
Symbol   Interval
a        [0.04, 0.052)
b        [0.052, 0.07)
c        [0.07, 0.076)
d        [0.076, 0.1)

(4) New "d" interval (subintervals after zooming into [0.076, 0.1))
Symbol   Interval
a        [0.076, 0.0808)
b        [0.0808, 0.088)
c        [0.088, 0.0904)
d        [0.0904, 0.1)

Start with the interval [0, 1), divided into subintervals for all possible symbols appearing in the message, and make the size of each subinterval proportional to the frequency with which the corresponding symbol appears; Table 6.8(1) shows this initial assignment. When encoding a symbol, "zoom" into that symbol's current interval and divide it into subintervals as in step one over the new range, as Table 6.8(2) shows. For example, suppose we want to
encode "abd." We "zoom" into the interval corresponding to "a" and divide that interval into smaller subintervals as before; this new interval is then used as the basis of the next symbol's encoding step. The process is repeated until the maximum precision of the machine is reached or all symbols are encoded. To encode the next character "b," we use the "a" interval created before, zoom into its subinterval "b," and use that for the next step; this produces the result given in Table 6.8(3). Lastly, the final result, after zooming into the subinterval "d," is given in Table 6.8(4). To send the codeword, some number within the latest interval is transmitted. The number of symbols encoded will be stated in the protocol of the data format, so any number within [0.076, 0.1) will be acceptable. To decode the message, a similar algorithm is followed, except that the final number is given and the symbols are decoded sequentially from it.
(4) Run length encoding (RLE). RLE is one of the oldest compression methods. It is characterized by the following properties: (1) each RLE algorithm is simple to implement, (2) compression efficiency is restricted to particular types of content, and (3) it is mainly utilized for encoding monochrome graphic data. The general algorithm behind RLE is very simple: any sequence of identical symbols is replaced by a counter identifying the number of repetitions, followed by the particular symbol. For instance, the original contents "aaaa" would be coded as "4a." The most important format using RLE is the Microsoft bitmap format (RLE8 and RLE4).
(5) Relative encoding. Relative encoding is a transmission technique that attempts to improve efficiency by transmitting the difference between each value and its predecessor, in place of the value itself. Thus the values "1 5 1 0 6 4 3 3 0 0 3" would be transmitted as "1 +4 -4 -1 +6 -2 -1 +0 -3 +0 +3." In effect, the transmitter is predicting that each value is the same as its predecessor, and the data transmitted is the difference between the predicted and actual values. Differential pulse code modulation (DPCM) is an example of relative encoding. The signal above can take one of 7 possible values (-3 to +3) and so would require 3 bits per sample, but each sample can also be described by the difference between it and the previous sample. If each sample is the same as, one more than, or one less than the previous sample, only two bits are required to express the relationship between samples, so coding the signal in this way obtains a reduction of one third in the number of bits.
(6) Burrows–Wheeler transformation. With the Burrows–Wheeler transformation (BWT), a block of original data will be converted
into a certain form and sorted afterwards. The result is a sequence of ordered data that reflects frequently appearing symbol combinations by repetitions. Table 6.9 is an example displaying the BWT coding. In Table 6.9, both data blocks consist of identical symbols, varying only in their order. The transformed data may be encoded substantially better, for example by adaptive Huffman or arithmetic coding. The total number of symbols increases slightly, because additional information is required to allow the reconstruction of the original data; in real applications the number of symbols in a block is very large and the additional expenditure very small. A typical block size amounts to 900 kB.

Table 6.9 An Example of BWT Coding

Convert from: Peter·Piper·picked·a·peck·of·pickled·peppers.A·peck·of·pickled·peppers·Peter·Piper·picked.
Convert into: dkkAaddsrrffrrsd··eeiiiieeeeppkllkppppttppPPooppppPPcccccckk······iipp.·······eeeeeeeerree

(7) Huffman coding. The algorithm as described by David Huffman assigns every symbol to a leaf node of a binary code tree. These nodes are weighted by the number of occurrences of the corresponding symbol, called its frequency or cost. The tree structure results from combining the nodes step by step until all of them are embedded in a root tree. The algorithm always combines the two nodes with the lowest frequency in a bottom-up procedure, and each new interior node gets the sum of the frequencies of its two child nodes. A code tree according to Huffman is illustrated in Fig. 6.34. The branches of the tree represent the binary values 0 and 1 according to the rules for common prefix-free code trees, and the path from the root to the corresponding leaf node defines the particular code word.

Figure 6.34 A Huffman code tree for symbols A to E; each branch carries a 0 or 1 label, and the path from the root to a leaf defines that symbol's code word.

(8) Adaptive (dynamic) Huffman coding. Adaptive Huffman coding is an adaptive technique based on Huffman coding: it builds the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which allows one-pass encoding and adaptation to changing conditions in the data. The benefit of the one-pass procedure is that the source can be encoded in real time, though the method becomes more sensitive to transmission errors, since just a single loss can ruin the whole code.
There are a number of implementations of this method; the most notable are the FGK (Faller–Gallager–Knuth) and Vitter algorithms. In the Vitter algorithm, the code is represented as a tree structure in which every node has a corresponding weight and a unique number. The node numbers decrease from the root downward and from right to left on each level, and the weights must satisfy the sibling property; that is, the nodes can be listed in order of nonincreasing weight with each node adjacent to its sibling. Thus if A is the parent node of B and node C is a child of B, then W(A) > W(B) > W(C). The weight is merely the count of transmitted symbols whose codes are associated with the children of that node, and a set of nodes with the same weight makes a block. To get the code for a node in a binary tree, we simply traverse the path from the root to the node, writing down, for example, "1" each time we go to the right and "0" each time we go to the left. We also need some general and straightforward method to transmit symbols that are not yet transmitted (NYT); one possibility, for example, is the transmission of a fixed binary number for every symbol in the alphabet. Encoder and decoder start with only the root node, which has the maximum number and is initially the NYT node. When a symbol is transmitted for the first time, the code for the NYT node is sent, followed by the symbol's generic code; for every symbol that is already in the tree, only the code for its leaf node has to be transmitted. For every symbol transmitted, on both the encoder and decoder sides, the following update procedure must be executed:
(Step 1) If the current symbol is NYT (not yet transmitted), add two child nodes to the NYT node: one is a new NYT node and the other is a leaf node for the symbol. Increase the weight of the new leaf node and of the old NYT node, and go to Step 4. Otherwise, go to the symbol's leaf node.
(Step 2) If this node does not have the highest number in its block, swap it with the node that has the highest number.
(Step 3) Increase the weight of the current node.
(Step 4) If this is not the root node, go to the parent node and return to Step 2; otherwise, end.
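The static Huffman construction of item (7) is short enough to sketch directly; the following builds the code tree bottom-up by repeatedly merging the two lowest-frequency nodes. The data structures and function names are illustrative assumptions, not a particular reference implementation.

    # Bottom-up construction of a static Huffman code: the two lowest-frequency
    # nodes are repeatedly merged until a single root remains.
    import heapq
    from itertools import count

    def huffman_codes(freqs):
        """freqs: dict mapping symbol -> frequency; returns symbol -> code string."""
        tiebreak = count()                       # avoids comparing unlike node types
        heap = [(f, next(tiebreak), sym) for sym, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)    # two nodes with the lowest frequency
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):          # interior node
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                                # leaf node
                codes[node] = prefix or "0"
        walk(heap[0][2], "")
        return codes

    if __name__ == "__main__":
        print(huffman_codes({"A": 24, "B": 12, "C": 10, "D": 8, "E": 8}))

Note that several valid Huffman codes exist for the same frequencies (depending on how ties are broken), but all of them have the same average code length.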
6.5 Data-Link Protocols

A distributed control network, in particular a controller area network (CAN), requires three layers to perform data communication: the physical layer for data transmission, the data-link layer for the data links, and the application layer for the communication protocol and presentation. In computer networks (WAN, Internet, etc.), by contrast, the data-link layer is the second layer of the seven-layer OSI model. Despite this difference, the data-link layer in distributed control plays a similar role to its counterpart in a computer network. In distributed control, the data-link layer has three primary duties in data communication: (1) framing control, (2) flow control, and (3) error control. To carry out these duties, the data-link layer can be structurally divided into two sublayers, the logical link control (LLC) and media access control (MAC) sublayers. In this section, some important details are provided for all these topics.
6.5.1 Framing Controls
The data-link layer must use the service provided by the physical layer, encoding bits into frames before transmission and then decoding the frames back into bits at the destination; the physical layer itself deals only with the transmission of raw bit streams from the source machine to the destination machine. The usual approach is to break the bit stream into discrete frames and compute a checksum for each frame. When a frame arrives at the destination machine, the checksum is computed again; if the newly computed checksum differs from the one carried in the frame, an error has occurred, and the data-link layer takes the necessary steps to deal with it. Framing control in the data-link layer is in fact the sequence control of the frame transfer between two physically connected entities. There are a number of similar standards for the sequence control protocols, of which two are particularly important: (1) the high-level data-link control (HDLC) protocol, issued by the ISO, and (2) the synchronous data-link control (SDLC) protocol, issued by IBM.
6.5.1.1 High-Level Data Link Control (HDLC)
HDLC is a bit-oriented synchronous data-link layer protocol developed by the ISO. The current standard for HDLC is ISO 13239, which replaces all of the previous standards. HDLC provides both connection-oriented and connectionless service. HDLC can be used for point-to-multipoint connections, but it is now used almost exclusively to connect one device to another, using what is known as asynchronous balanced mode; the other modes are normal response mode and asynchronous response mode.
(1) HDLC framing. HDLC frames can be transmitted over synchronous or asynchronous links. Those links have no mechanism to mark the beginning or end of a frame, so the beginning and end of each frame have to be identified. This is done by using a frame delimiter, or "flag," which is a unique sequence of bits that is guaranteed not to be seen inside a frame. This sequence is 01111110, or, in hexadecimal notation, 7E. Each frame begins and ends with a frame delimiter. When no frames are being transmitted on a synchronous link, a frame delimiter is continuously transmitted on the link, generating the continuous bit pattern 01111110 01111110 01111110 . . . , which is used by modems to train and synchronize their clocks through phase-locked loops. Actual binary data could easily contain a sequence of bits that is the same as the flag sequence, so the data's bit sequence must be transmitted in a way that does not appear to be a frame delimiter. On synchronous links, this is done with bit stuffing: the sending device ensures that any sequence of five contiguous 1-bits is automatically followed by a 0-bit; a simple digital circuit inserts a 0-bit after five 1-bits. The receiving device knows this is being done and automatically strips out the extra 0-bits. If a flag is received, it will contain six contiguous 1-bits; the receiving device sees the six 1-bits and knows it is a flag, since otherwise the sixth bit would have been a 0-bit. Asynchronous links using serial ports or UARTs simply send bits in groups of eight and lack the special bit-stuffing circuits. Instead they use "control-octet transparency," also called "byte stuffing" or "octet stuffing." The frame boundary octet is 01111110 (7E in hexadecimal notation), and the "control escape octet" has the bit sequence 01111101 (7D hexadecimal). The escape octet is sent before any data byte whose value matches either the escape or the frame octet, and the following data byte is then sent with bit 5 inverted. For example, the data sequence 01111110 (7E hexadecimal)
would be transmitted as 0111110101011110 (7D 5E hexadecimal). Any octet value can be escaped in the same fashion.
(2) HDLC structure. The contents of an HDLC frame, including the flag, are given in Table 6.10. It is worth noting that the end flag of one HDLC frame can be (but does not have to be) the beginning (start) flag of the next HDLC frame. The data come in groups of eight bits: the telephone and teletype systems arranged most long-haul digital transmission media to send bits eight at a time, and HDLC simply adapts that standard to send bulk binary data (teletypes send 8-bit codes to represent each character). The FCS is the frame check sequence, a more sophisticated version of the parity bit. The field contains the result of a binary calculation over the bit sequences that make up the Address, Control, and Information fields. The calculation is designed to detect errors in the transmission of the frame, including lost bits, flipped bits, and extraneous bits, so that the frame can be dropped by the receiver if an error is detected. It is this method of detecting errors that can set an upper bound on the size of the data portion of the frame: essentially, the longer the data portion of the frame becomes, the harder it is to guarantee that certain types of transmission errors will be found. There are multiple types of frame check sequence; the most commonly used in this context are CRC-16 and CRC-CCITT (cyclic redundancy checks). The FCS is needed to detect transmission errors. When HDLC was designed, long-haul digital media were designed for telephone systems, which only need a bit error rate of 1 × 10⁻⁵ errors per bit, whereas digital data for computers normally requires a bit error rate better than 1 × 10⁻¹² errors per bit. By checking the FCS, the receiver can discover bad data. If the data are intact, it sends an "acknowledge" packet back to the sender, and the sender can then send the next frame. If the receiver sends a "negative acknowledge" or simply drops the bad frame, the sender either receives the negative acknowledgment or runs into its time limit while waiting for the acknowledgment, and it then retransmits the failed frame.

Table 6.10 The Structure of an HDLC Frame

Field          Length
Flag           8 bits
Address        8 bits
Control        8 or 16 bits
Information    Variable length, 0 or more bits, in multiples of 8
FCS            16 bits
Closing flag   8 bits (optional)
Modern optical networks have bit error rates substantially better than 1 × 10⁻⁵, which simply makes HDLC even more reliable.
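The control-octet transparency described under item (1) can be sketched as follows; the helper names are illustrative assumptions, and the example reproduces the 7E-to-7D-5E substitution mentioned above.

    # Byte stuffing on an asynchronous HDLC-style link: 0x7E delimits frames,
    # 0x7D escapes, and the escaped octet has bit 5 inverted (XOR 0x20).
    FLAG, ESC = 0x7E, 0x7D

    def escape_payload(payload: bytes) -> bytes:
        out = bytearray([FLAG])
        for octet in payload:
            if octet in (FLAG, ESC):
                out += bytes([ESC, octet ^ 0x20])   # e.g. 7E is sent as 7D 5E
            else:
                out.append(octet)
        out.append(FLAG)
        return bytes(out)

    def unescape_payload(frame: bytes) -> bytes:
        assert frame[0] == FLAG and frame[-1] == FLAG
        out, escape = bytearray(), False
        for octet in frame[1:-1]:
            if escape:
                out.append(octet ^ 0x20)
                escape = False
            elif octet == ESC:
                escape = True
            else:
                out.append(octet)
        return bytes(out)

    if __name__ == "__main__":
        data = bytes([0x01, 0x7E, 0x7D, 0x02])
        assert unescape_payload(escape_payload(data)) == data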
6.5.1.2 Synchronous Data Link Control (SDLC)
IBM developed the SDLC protocol in the mid-1970s for use in systems network architecture (SNA) environments. SDLC was the first data-link layer protocol based on synchronous, bit-oriented operation.
(1) SDLC types and topologies. SDLC supports a variety of link types and topologies. It can be used with point-to-point and multipoint links, bounded and unbounded media, half-duplex and full-duplex transmission facilities, and circuit-switched and packet-switched networks. SDLC identifies two types of network nodes: primary and secondary. Primary nodes control the operation of the other stations, called secondaries. The primary polls the secondaries in a predetermined order, and a secondary can then transmit if it has outgoing data. The primary also sets up and tears down links and manages the link while it is operational. Secondary nodes are controlled by a primary, which means that a secondary can send information to the primary only if the primary grants permission. SDLC primaries and secondaries can be connected in four basic configurations:
(a) Point-to-point: involves only two nodes, one primary and one secondary.
(b) Multipoint: involves one primary and multiple secondaries.
(c) Loop: involves a loop topology, with the primary connected to the first and last secondaries. Intermediate secondaries pass messages through one another as they respond to the requests of the primary.
(d) Hub go-ahead: involves an inbound and an outbound channel (medium). The primary uses the outbound channel to communicate with the secondaries, and the secondaries use the inbound channel to communicate with the primary. The inbound channel is daisy-chained back to the primary through each secondary.
(2) SDLC frame format. The SDLC frame is the same as that shown in Fig. 6.20. The following descriptions summarize the fields illustrated in Fig. 6.20 for an SDLC frame.
(a) Flag. Initiates and terminates error checking.
(b) Address. Contains the SDLC address of the secondary station, which indicates whether the frame comes from the
primary or the secondary. This address can contain a specific address, a group address, or a broadcast address. A primary is either a communication source or a destination, which eliminates the need to include the address of the primary.
(c) Control. Employs three different formats, depending on the type of SDLC frame used: (i) the information (I) frame, (ii) the supervisory (S) frame, and (iii) the unnumbered (U) frame. (Their descriptions have been given in Section 6.3.5.)
(d) Data (information). Contains a path information unit (PIU) or exchange identification (XID) information.
(e) Frame check sequence (FCS). Precedes the ending flag delimiter and is usually a cyclic redundancy check (CRC) calculation remainder. The CRC calculation is redone in the receiver; if the result differs from the value in the original frame, an error is assumed.
6.5.2 Error Controls
No errors can occur in an ideal transmission medium; however, no transmission medium is ideal. The signal representing the data is always subject to various error sources. Data communication systems use a variety of techniques to detect and correct errors, which occur usually for any of the following reasons: (1) electrostatic interference from nearby machines or circuits, (2) attenuation of the signal caused by the resistance to current in a cable, (3) distortion due to inductance and capacitance, (4) loss in transmission due to leakage, and (5) impulses from static in the atmosphere. In data communication, these causes produce two kinds of errors: (1) bit errors, which corrupt single bits of a transmission, turning a 1 into a 0 and vice versa, and which are caused by power surges and other interference; and (2) packet errors, which occur when packets are lost or corrupted. Packet loss can occur during times of network congestion, when buffers become full and network devices start discarding packets; errors and packet loss also occur during network link failures. It has been estimated that, on average, an error occurs once in every 200,000 bits transmitted. In practice, data communications systems are designed so that the transmission errors remain within an acceptable rate. Under normal circumstances there are only a few errors. However, it is possible that the
signal conditions are sometimes so weak that the signal cannot be received at all, or that the interference is sometimes stronger than the signal being transmitted; the data sent during such a break is then lost. Therefore, the error detection and correction should be able to handle as many errors as possible. However, the application limits which error detection and correction schemes are suitable, and all applications benefit from efficient error detection and correction solutions.
6.5.2.1 Error Detection
Error detection is a method that allows some communication errors to be detected. The data is encoded so that it contains additional redundant information; when the data is decoded, the additional redundant information must match the original information, which allows some errors to be detected.
(1) Parity checking. Parity checking is a primitive character-based error detection method. The characters are encoded so that an additional bit is added to each character. The additional bit is set to 0 or 1 according to the number of bits set in the character, which is either even or odd, and according to which parity setting, even or odd, is being used. If even parity is used, the extra bit is set so that the codeword always contains an even number of 1 bits; with odd parity, the number of 1 bits in the codeword is always odd. The decoding is done simply by checking the codeword and removing the extra bit. Parity checking will only detect an odd number of bit errors in each codeword. Parity checking has been used in character-based terminals, but it is not adequate for today's reliable communications; however, it is still used in memory chips to ensure correct operation.
(2) Block check. Block check is a block-based error detection method. The data is divided into blocks in the encoding process, and an additional check is added to each block of data, calculated from the contents of that block. The receiver performs the same calculation on the block and compares the calculated result with the received result; if the checks are equal, the block is likely to be valid. Unfortunately, the problem with all block checks is that the check is shorter than the block itself. Therefore, there are several different blocks that all have the same checksum, and it is possible that the data is corrupted by a random error burst that modifies the block contents so that the
block check in the corrupted frame also matches the corrupted data; in this case the error is not detected. Even the best block checks cannot detect all error bursts, but good block checks minimize this probability, and the reliability increases as the length of the block check increases.
(a) Block checksum. The block checksum is a primitive check that is the sum of all characters in the block. The result is a character as long as the characters in the block, and it is therefore sometimes referred to as the block check character (BCC). Unfortunately, even a long BCC may let relatively simple errors pass; in other words, it is easy to find different blocks that generate the same block checksum. Calculating a checksum is certainly fast and easy, but its reliability is not adequate for today's reliable communications. However, due to its speed it is used in some applications that require the calculation to be done in software.
(b) Cyclic redundancy check. The cyclic redundancy check (CRC) is a more intelligent alternative to the block checksum. It is calculated by dividing the bit string of the block by a generator polynomial; the value of the CRC is the remainder of this division, which is one bit shorter than the generator polynomial. This value is also sometimes referred to as the frame check sequence (FCS). The generator polynomial must be chosen carefully. The CRC is a stronger check than the block checksum, and it is used in today's reliable communication. Calculating the CRC requires slightly more processing than the checksum, but it can easily be implemented in hardware using shift registers as well as in software. The CRC is able to detect all single error bursts up to the number of bits in the CRC, as well as most random errors.
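A bit-by-bit software CRC can be sketched as follows, here using the CRC-CCITT generator polynomial x^16 + x^12 + x^5 + 1 (0x1021), one of the polynomials mentioned above. The initial register value of 0xFFFF is a common but not universal choice, so the exact parameters should be treated as an assumption for illustration.

    # CRC-16/CCITT computed bit by bit over a byte string.
    def crc_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
        crc = init
        for octet in data:
            crc ^= octet << 8
            for _ in range(8):
                if crc & 0x8000:                 # MSB set: shift and subtract the polynomial
                    crc = ((crc << 1) ^ poly) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    if __name__ == "__main__":
        frame = b"123456789"
        print(hex(crc_ccitt(frame)))             # 0x29b1 with these parameters

In practice the same computation is usually performed with a table-driven routine or a hardware shift register, but the result is identical.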
6.5.2.2 Error Correction
Error correction is a method that can be used to recover the corrupted data whenever possible. There are two basic types of error correction, which are forward error correction and backward error correction. (1) Forward error correction. Using forward error correction requires a one-directional channel only. The data is encoded to contain enough additional redundant information to recover from some communication errors. Unfortunately, the forward error correction can recover from errors only when enough information has been successfully received. There is no way to recover from errors
when this is not the case. However, forward error correction operates continuously without any interruption, and it ensures a constant delay on the data transfer, which is useful for real-time applications.
(a) Hamming single-bit code. The Hamming single-bit code is a block code in which each block is separate from the others. The input block size can be made as small as necessary, and the number of bit errors to be corrected can be specified by adding enough redundant information in the encoding process. The minimum number of bit positions in which any two valid codewords differ is called the Hamming distance. Only error bursts involving fewer bit errors than the Hamming distance are guaranteed to be detected; therefore, to detect communications errors, the Hamming distance of the line code must be larger than the length of the error bursts. Handling an N-bit error requires an encoding with a Hamming distance of N + 1 for detection and 2N + 1 for recovery.
(b) Convolutional forward error correction. In block codes, each block is independent of the other blocks. In convolutional forward error correction, on the contrary, the encoded data depend on both the current data and the previous data. The convolutional encoder contains a shift register that is shifted each time a new bit is added; the length of the shift register is called the constraint length, and it constitutes the memory of the encoder. Each new input bit is encoded with the bits in the shift register by using modulo-2 adders. The decoding is more difficult than the encoding: the data is decoded by using the Viterbi algorithm, which tries to find the most likely transmitted sequence. Of course, not all errors can be corrected, but the error rate can be decreased. Convolutional error correction has the advantage of using all previously received bits for error correction.
(c) Golay forward error correction. Golay codes are block codes that allow short codewords. The perfect Golay code is an encoding that encodes 12 bits into 23 bits, denoted by (23, 12); it allows the correction of three or fewer single-bit errors. The extended Golay code contains an additional parity bit, which allows up to four errors to be detected; the resulting code is (24, 12), also known as the half-rate Golay code. The decoding can be performed by using either soft or hard decisions; soft decisions provide better error correction but require more processing. Golay codes are useful in applications that require low latency and short codeword length, and they are therefore used in real-time applications and radio communications.
(d) Reed–Solomon forward error correction with interleaving. Reed–Solomon forward error correction with interleaving is a forward error correction scheme intended for high-quality video communications. The encoding fills a two-dimensional array of 128 × 47 octets: each of the 47 rows holds 128 octets, of which 124 are data and 4 are redundant check octets. The buffer is filled one column of 47 octets at a time; after this has been repeated 124 times the buffer is full, and it is encoded row by row. This encoding allows two cells to be corrected or four cells to be reconstructed. Two interleave buffers are required, because a single buffer can only be either read or written at a time; the decoder needs two buffers for the same reason. The decoder writes a row at a time, performs any possible recovery and reconstruction of defective cells, and then reads the array column-wise. Unfortunately, encoding and decoding each add a delay on the data transfer equal to the transmission time of a single buffer. This type of encoding does not repair all errors, but it ensures high-quality throughput in real time.

(2) Backward error correction. Backward error correction requires a two-way communication channel. The sender divides the data into blocks and encodes each block with redundant additional information that is used to detect communications errors. The receiver applies error detection, and if it detects errors in an incoming block it requests the sender to resend that block. This mechanism is also called automatic repeat request (ARQ). ARQ can always repair any error it can detect, but it causes a variable delay on the data transfer. The basic types of ARQ are idle RQ and continuous RQ. Backward error correction is used in many data transfer protocols. In addition, when data compression is used, communications errors must always be corrected.

(a) Idle RQ. Idle RQ is a fundamental backward correction scheme used in many protocols. The data is transferred in packets protected by error detection. The receiver checks each incoming packet and sends an acknowledgment (ACK) to the sender if the packet was valid. If the sender receives the acknowledgment within the specified time, it sends the next packet; otherwise, it must resend the packet. Idle RQ is very simple but often too inefficient: it can only send data in one direction at a time, and the delay on the data transfer may mean that only a small fraction of the capacity of the communications link is used.
(b) Continuous RQ. Continuous RQ is an improvement over idle RQ when there is a delay on the data transfer. It allows several packets to be sent continuously, so the sender must number the packets. The receiver also receives packets continuously and, after receiving a valid packet, sends an acknowledgment containing its packet number. If the sender does not get an acknowledgment, it starts resending packets in either of two ways. In selective repeat, only the corrupted blocks are resent. Selective repeat is complex, but it is useful when errors are common. In go-back-n, once a corrupted block is detected, transmission continues from the corrupted block and all blocks sent after it are discarded. Go-back-n is less efficient than selective repeat, but it is much simpler and almost equally effective when errors are infrequent.
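The Hamming single-bit code mentioned under forward error correction can be made concrete with the classic (7,4) layout, sketched below. This is a generic textbook construction, not something defined in this handbook: 4 data bits are encoded into 7 bits with Hamming distance 3, and the recomputed parity bits (the syndrome) point directly at any single-bit error.

def hamming74_encode(d):
    # Encode 4 data bits into a 7-bit codeword; parity bits sit at positions 1, 2 and 4.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                       # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                       # checks positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4                       # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]     # codeword positions 1..7

def hamming74_correct(c):
    # Recompute the parities; the syndrome is the position of a single-bit error (0 = none).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        c[syndrome - 1] ^= 1                # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]         # corrected data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[5] ^= 1                            # inject a single-bit error during "transmission"
assert hamming74_correct(codeword) == [1, 0, 1, 1]

The Hamming distance of this code is 3, so in line with the N + 1 / 2N + 1 rule it can detect double-bit errors or, as the assertion shows, correct any single-bit error.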
6.5.3
Flow Controls
Congestion in data transmission occurs on busy networks because senders and receivers are often unmatched in capacity and processing power. A receiver might not be able to process packets at the same speed as the sender, and once its buffers are full, packets are dropped. To prevent dropped packets that must then be retransmitted, flow controls are necessary for data transmission. With flow control, the end systems and the network work together to minimize congestion. The receiver tells the sender how much data to send, making the sender wait for some sort of acknowledgment (ACK) before continuing to send more data. There are two primary methods of flow control: stop-and-wait and sliding window.
6.5.3.1
Stop-and-Wait
Stop-and-wait is a simple protocol in which the sender has to wait for an acknowledgment of every frame it sends: it sends a frame, waits for the acknowledgment, sends another frame, and again waits for the acknowledgment. In stop-and-wait, data frames are transmitted in one direction (a simplex protocol), and each frame is individually acknowledged by the receiver with a separate acknowledgment frame. The protocol proceeds as follows: (1) The sender transmits one frame, starts a timer, and waits for an acknowledgment frame from the receiver before sending further frames.
(2) A time-out period is used: frames not acknowledged by the receiver within it are retransmitted automatically by the sender. (3) Frames that arrive damaged are not acknowledged by the receiver and are retransmitted by the sender when the expected acknowledgment fails to arrive and the timer expires. (4) A one-bit sequence number (0 or 1) is used to distinguish original data frames from duplicate retransmitted frames, so that duplicates can be discarded. The disadvantage of this scheme is that it is very slow: for every frame sent there must be an acknowledgment, which takes a similar propagation time to get back to the sender. The advantage is simplicity.
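The sequence above can be sketched in a few lines of Python. The sketch below is only an illustration of the alternating sequence number and the retransmit-on-timeout rule; the lossy channel is an invented stand-in, and a real link layer would use an actual timer rather than a retry counter.

import random

def unreliable_channel(frame, loss_rate):
    # Deliver the frame unless it is "lost"; the returned ACK carries the sequence number.
    if random.random() < loss_rate:
        return None                          # frame or ACK lost: the sender will time out
    seq, payload = frame
    print("receiver got", payload)
    return seq

def stop_and_wait_send(frames, max_retries=20, loss_rate=0.3):
    seq = 0                                  # one-bit sequence number
    for payload in frames:
        for _ in range(max_retries):
            ack = unreliable_channel((seq, payload), loss_rate)
            if ack == seq:                   # expected acknowledgment arrived in time
                break
            # otherwise: the time-out expired, so retransmit the same frame
        else:
            raise RuntimeError("link failure: retry limit exceeded")
        seq ^= 1                             # alternate 0 and 1 for the next frame

stop_and_wait_send([b"frame-a", b"frame-b", b"frame-c"])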
6.5.3.2
Sliding Window
In flow control for data transmission, sliding window is a technique that allows a sender to transmit a specified number of frames (or packets) before an acknowledgment is received or before a specified event occurs. The idea behind sliding window is not to wait for an acknowledgment for every frame, but to send a few frames and then receive an acknowledgment that covers several frames at once. The sender and receiver each maintain a "window" of frames. The sender can send as many frames as fit into its window. The receiver, on receiving enough frames, responds with an acknowledgment of all frames up to a certain point in the window; the window then "slides" forward and the process repeats. The "window" is implemented by a buffer. Data received from the network is stored in the buffer, from which the application can read at its own pace; as the application reads data, buffer space is freed to accept more input from the network. The window is the amount of data that can be "read ahead": the size of the buffer less the amount of valid data stored in it. Window announcements inform the remote host of the current window size. If the local application cannot process data fast enough, the window size drops to zero and the remote host stops sending data; after the local application has processed some of the queued data, the window size rises and the remote host starts transmitting again. On the other hand, if the local application can process data at the rate it is being transferred, and if the window size is larger than the packet size, then multiple packets can be outstanding in the network, since the sender
knows that buffer space is available on the receiver to hold all of them. Ideally, a steady-state condition can be reached where a series of packets (in the forward direction) and window announcements (in the reverse direction) are constantly in transit. As each new window announcement is received by the sender, more data packets are transmitted. As the application reads data from the buffer, more window announcements are generated. Keeping a series of data packets in transit ensures the efficient use of network resources.
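The window bookkeeping on the sending side can be sketched as follows. This is a deliberately minimal illustration of the idea (cumulative acknowledgments sliding the window forward); the class and field names are invented, and a real protocol would add sequence-number wrap-around, timers, and retransmission.

from collections import deque

class SlidingWindowSender:
    def __init__(self, window_size):
        self.window_size = window_size
        self.next_seq = 0              # sequence number of the next new frame
        self.base = 0                  # oldest frame not yet acknowledged
        self.outstanding = deque()     # frames sent but not yet acknowledged

    def can_send(self):
        return self.next_seq - self.base < self.window_size

    def send(self, payload):
        assert self.can_send(), "window closed: wait for an acknowledgment"
        self.outstanding.append((self.next_seq, payload))
        self.next_seq += 1

    def ack(self, ack_seq):
        # A cumulative ACK acknowledges every frame up to ack_seq and slides the window.
        while self.outstanding and self.outstanding[0][0] <= ack_seq:
            self.outstanding.popleft()
        self.base = max(self.base, ack_seq + 1)

sender = SlidingWindowSender(window_size=4)
while sender.can_send():
    sender.send(b"data")               # four frames go out back to back
sender.ack(1)                          # frames 0 and 1 acknowledged; the window slides
assert sender.can_send()               # two more frames may now be sent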
6.5.3.3
Bus Arbitration
In some distributed control systems, the "bus" of the microprocessor chipset is used as a special cable to connect several controllers. Such distributed control systems may require "bus arbitration" for flow control in data transmission. Bus arbitration is covered in Section 2.2.4 of this book, where its mechanism is discussed in detail.
6.5.4
Sublayers
The data-link layer generally consists of two sublayers: the upper is the logical link control (LLC) sublayer and the lower is the medium access control (MAC) sublayer. Figure 6.35 shows the architecture of the data-link layer.
6.5.4.1
Logical Link Control (LLC)
Logical link control (LLC) is the IEEE 802.2 LAN protocol that specifies an implementation of the LLC sublayer of the data-link layer. IEEE 802.2 LLC is used in IEEE 802.3 (Ethernet) and IEEE 802.5 (Token Ring) LANs. The LLC sublayer is responsible for the reliable transfer of frames between two directly connected entities. The functions needed to support this reliable transfer include framing (sequence) control, error control, and flow control.
Figure 6.35 The architecture of the data link layer: the logical link control (LLC) sublayer (IEEE 802.2 standardized protocols) sits above the medium access control (MAC) sublayer (IEEE 802.3 standardized protocols).
LLC originated from HDLC and uses a subset of the HDLC specification. LLC defines three types of operation for data communication: (1) Connection oriented. The connection-oriented operation of the LLC layer provides four services: (1) connection establishment, (2) confirmation and acknowledgment after data has been received, (3) error recovery by requesting that bad received data be resent, and (4) "sliding window," a method of increasing the rate of data transfer. (2) Connectionless, which is basically sending with no guarantee of receipt. (3) Acknowledged connectionless service. The degree to which sequence control, error control, and flow control are provided by the LLC sublayer is determined by whether the link protocol is connection-oriented or connectionless. A connectionless link protocol provides little if any support for these functions. A connection-oriented link might use a sliding-window technique for them, in which frames are individually numbered and acknowledged by their sequence number, with only a few such frames outstanding at any time. The connection-oriented functions of sequence, error, and flow control provide a foundation for services provided by higher layers. As mentioned earlier, not all layer or sublayer functions are explicitly designed or implemented in any given system; provision of these functions depends on the services required by higher layers. If the connection-oriented functions of the LLC sublayer are not implemented, they must be performed by higher layers, individually or jointly, for reliable end-to-end communication. Connection-oriented LLC protocols are best suited to low-quality transmission media, where it is more efficient and cost-effective to discover and recover from errors as they occur on each hop than to rely on the communicating hosts to perform error recovery. An example of a connectionless LLC protocol is frame relay, which defines point-to-point links with switches connecting individual links in a mesh topology. In a frame relay network, end points are connected by a series of links and switches. Because frame relay is defined in terms of the links between frame relay access devices and switches, and between switches themselves, it is an LLC protocol. Connectionless LLC protocols are best suited to high-quality transmission media. With high-quality transmission media, errors are rarely introduced in transmission and recovery from errors is most efficiently handled by the communicating
hosts. In this case, it is better to move the packets quickly from source to destination rather than checking for errors at the data-link layer. End-to-end communications may use shared or dedicated facilities or circuits. Shared facilities involve the use of packet-switching technology to carry frames from end to end; frames are subdivided as necessary into packets, which share physical and logical channels with packets from various sources to various destinations. Packet switching is almost universally used in data communications because it is more efficient for the bursty nature of data traffic. On the other hand, some applications require dedicated facilities from end to end because they are isochronous (e.g., voice) or bandwidth-intensive (e.g., large file transfers). This mode of end-to-end circuit dedication is called circuit-switched communication. Because the facilities are dedicated to a single user, it tends to be much more expensive than packet-switched communication, but some applications need it; it is an economic trade-off. Dedicated circuits are a rather extreme form of connection-oriented protocol, requiring setup and tear-down phases before and after communication. If the circuit setup and tear-down are statically arranged (i.e., out-of-band), the circuit is referred to as a permanent virtual circuit; if the circuit is dynamically set up and torn down in-band, it is referred to as a switched virtual circuit.
6.5.4.2
Media Access Control (MAC)
The medium access control (MAC) sublayer is closely associated with the physical layer and defines the means by which the physical channel (medium) may be accessed. It coordinates the attempts by multiple MAC entities to seize a shared channel, to avoid or reduce collisions on it. The MAC sublayer commonly provides a limited form of error control, especially for any header information that defines the MAC-level destination and the higher-layer access mechanism. Ethernet (IEEE 802.3) is a prime example of a shared medium with defined MAC sublayer functionality. The shared medium in Ethernet traditionally consisted of a coaxial cable into which multiple entities were "tapped." Although this topology still applies conceptually, a hub-and-spoke medium is now typically used, in which the earlier coaxial cable has been physically collapsed into a hub device. As a contention medium, Ethernet defines how devices sense the channel for availability, wait while it is busy, seize it when it becomes available, and back off for a random length of time following a collision with another simultaneously transmitting device. On a shared channel,
such as Ethernet, only a single entity can transmit at a time or the frames will be garbled. Not all shared channels involve contention. A prime example of a shared medium without contention is token ring (IEEE 802.5), in which control of the channel is rotated among the devices sharing the channel in a deterministic round-robin manner. Conceptually, control of the channel is given to the entity currently possessing a "token"; if that device has nothing to transmit, it passes the token to the next device attached to the topological "ring." IEEE-defined MAC sublayer addresses are six bytes long and permanently assigned to each device, typically a network interface card. The IEEE administers the assignment of these addresses in blocks to manufacturers to ensure the global uniqueness that MAC sublayer protocols rely on for "plug and play" network setup. Each manufacturer must ensure that individual device identifiers are unique within its assigned block.
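The carrier-sense and back-off behaviour of the Ethernet MAC described above can be sketched as follows. The sketch uses the classic truncated binary exponential back-off with the usual 10 Mbit/s slot time and 16-attempt limit, but the channel object is an invented stand-in rather than a real transceiver driver.

import random

SLOT_TIME_US = 51.2                     # classic 10 Mbit/s Ethernet slot time
MAX_ATTEMPTS = 16

class DummyChannel:
    # Stand-in medium: sometimes busy, sometimes reports a collision on transmit.
    def busy(self):
        return random.random() < 0.2
    def wait(self, microseconds=SLOT_TIME_US):
        pass                            # a real driver would sleep or poll the transceiver
    def send(self, frame):
        return random.random() > 0.3    # True means the frame went out without collision

def csma_cd_transmit(channel, frame):
    # Sense the channel, transmit when idle, and back off randomly after each collision.
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while channel.busy():           # carrier sense: defer while the medium is in use
            channel.wait()
        if channel.send(frame):
            return True
        k = min(attempt, 10)            # truncated binary exponential back-off
        channel.wait(random.randint(0, 2 ** k - 1) * SLOT_TIME_US)
    return False                        # excessive collisions: give up and report an error

print(csma_cd_transmit(DummyChannel(), b"frame"))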
6.6 Data Communication Protocols

The higher layers in the control network are primarily designed for the management of the data communication protocols. The application layer in CAN networks is a typical example in this respect. The CAN application layer provides objects, protocols, and services for the event-driven or requested transmission of CAN messages and for the transmission of larger data blocks between CAN devices. Furthermore, the CAN application layer offers mechanisms for the automatic distribution of CAN identifiers and for the initialization and monitoring of nodes. Models for data communication protocols defined in the higher layers of distributed control systems include these popular ones: the Client–Server model, the Master–Slave model, the Producer–Consumer model, and Remote Procedure Call (RPC).
6.6.1
Client–Server Model
Client–Server describes the relationship between two controller programs in which one program, the client, makes a service request of another program, the server, which fulfills the request. A server process normally listens at a well-known address for service requests; that is, the server process remains dormant until a connection is requested by a client to the server address. At such a time the server process "wakes up" and
services the client, performing whatever appropriate actions the client requests of it, and then replies to the client. Figure 6.36 depicts the client–server model. Although the client–server idea can be used by programs within a single controller, it is a more important idea in a network, where the client–server model provides a convenient way to interconnect programs that are distributed across different locations. The client–server model has become one of the central ideas of control networking. The client–server software architecture is a versatile, message-based, and modular infrastructure intended to improve usability, flexibility, interoperability, and scalability compared with centralized, mainframe, time-sharing control. The following paragraphs first describe two-tier and three-tier client–server architectures, and then describe two other client–server architectures that are linked to the three-tier design.
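The request-reply exchange of Figure 6.36 can be sketched with Python's standard socketserver module, as below. The port choice, message format, and handler are all assumptions made for the example; they are not part of any particular control network standard.

import socket
import socketserver
import threading

class RequestHandler(socketserver.StreamRequestHandler):
    # Server side: remain dormant until a client connects, then service its request.
    def handle(self):
        request = self.rfile.readline().strip()
        self.wfile.write(b"reply to " + request + b"\n")

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), RequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: connect to the well-known server address, send a request, read the reply.
host, port = server.server_address
with socket.create_connection((host, port)) as sock:
    sock.sendall(b"read sensor 7\n")
    print(sock.makefile().readline().strip())

server.shutdown()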
6.6.1.1 Two- and Three-Tier Client–Server

Two-tier architectures consist of three components distributed in two layers: the client (requester of services) and the server (provider of services). The three components are (1) the user system interface (such as session, text input, dialog, and display management services), (2) processing management (such as process development, process enactment, process monitoring, and process resource services), and (3) service management (such as data and file services). The two-tier design allocates the user system interface exclusively to the client. It places service management on the server and splits processing management between client and server, creating two layers.
Figure 6.36 A depiction of the client–server model: clients 1 through N send requests to the server, which replies to each.
A three-tier distributed client–server architecture (shown in Fig. 6.37) includes a top tier, the user system interface, where user services (such as session, text input, dialog, and display management) reside. The third tier provides service management functionality and is dedicated to data and file services that can be optimized without using any proprietary service management system languages. It should be noted that connectivity between tiers can be changed dynamically depending on the user's request for data and services. The middle tier provides process management services (such as process development, process enactment, process monitoring, and process resource) that are shared by multiple applications. The middle-tier server (also referred to as the application server) improves performance, flexibility, maintainability, reusability, and scalability by centralizing process logic. Centralized process logic makes administration and change management easier by localizing system functionality, so that a change need only be written once and placed on the middle-tier server to be available throughout the system. With other architectural designs, a change to a function (service) would need to be written into every application. In addition, the middle process-management tier controls transactions and asynchronous queuing to ensure reliable completion of transactions. The middle tier manages distributed service integrity by the two-phase commit process. It provides access to resources based on names instead of locations, thereby improving scalability and flexibility as system components are added or moved.
6.6.1.2
Message Server
Messaging is another way to implement three-tier architectures. Messages are prioritized and processed asynchronously. Messages consist of headers that contain priority information, the address, and an identification number.
Figure 6.37 Three-tier distributed client–server architecture: a user system interface tier, a process management tier, and a service management tier.
In a database implementation, the message server connects to the relational DBMS and other data sources. The message server architecture focuses on intelligent messages. Messaging systems are good solutions for wireless infrastructures.
6.6.1.3 Application Server

The application server architecture allocates the main body of an application to run on a shared host rather than in the user system interface client environment. The application server shares logic, computations, and a data retrieval engine. The advantages are that with less software on the client there is less security to worry about, applications are more scalable, and support and installation costs are lower on a single server than for maintaining each application on a desktop client. The application server design should be used when security, scalability, and cost are major considerations.
6.6.2
Master–Slave Model
In control networking, and especially in CAN networking, master–slave is a powerful design for a communication protocol in which one device or process, defined as the "master," controls one or more other devices or processes, defined as "slaves." Once the master–slave relationship is established, the direction of control is always from the master to the slave(s). Figure 6.38 depicts the master–slave model. In the master–slave protocol model, the master is, or runs, the controlling process; the slaves run the processes doing the actual work. Usually,
Figure 6.38 A depiction of the master–slave model: one master controlling several slaves.
slaves are generated as necessary to perform the control functions and to solve the problem.
6.6.2.1
Master
The master is responsible for dividing the work among the available slaves, keeping each of them busy without using too much communication. If straightforward collision detection among N objects in a scene is used, the master's responsibility is fairly easy: as in Fig. 6.39, all the object-to-object combinations that have to be checked are simply put in a queue to wait for slaves to become available. If a slave is available and there is work to be done, the master assigns a job to that slave; otherwise, the slave is put into a waiting queue until a job arrives. In some implementations, the master does more than just assign work. The master is in an excellent position to look globally at the work that has to be done, so it seems natural to have the master perform the high-level collision detection. After this high-level check, only the necessary jobs are created and put into the queue. These jobs consist of the face-level checks that have to be performed, which will be executed by the slaves.
6.6.2.2
Slave
For slaves, the most important aspect to deal with is job handling. If a job is submitted to a slave, it will contain the two starting nodes of the trees. Before starting the collision detection, the slave has to locate these nodes in its memory space. One possibility is to specify the nodes by the path to follow starting at the root of the tree.
Figure 6.39 Message handling in the master–slave model: the listening master adds ready slaves to a slave queue and new jobs to a job queue, assigns each available job to a slave, and receives the slaves' reports.
This would require specifying two values: a bit string containing the "instructions" for the path, and an integer giving the number of valid bits in that string so that the traversal does not descend too deep in the tree. A faster and perhaps slightly simpler way is to index the trees. Next to the actual tree that approximates the object, the slave also constructs an index. This index contains two arrays: the first contains pointers to the actual nodes, and the second contains offsets indicating how far from the current position in the array the right child of the corresponding node can be found. This approach obviously requires more memory than the first, but it is certainly faster and seems a bit easier to implement. The extra memory needed can also be reduced by eliminating the information about the tree structure contained in the nodes themselves, since this information is now available (also in constant time) in the index. This reduction has not been performed in the current implementation, so it should be added to the "to do" list. Then, of course, there is also the part that does the actual collision detection. Only the procedure for the tree traversal differs from the sequential version; the rest is identical. The tree traversal has a separate version for parallel execution, since functionality has to be added to maintain information about the index. Also, the current search depth has to be checked against the maximum, and a "new job" command issued to the master if necessary.
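The job handling just described amounts to the queue discipline of Figure 6.39, which can be sketched with threads standing in for the master and slave controllers. Everything here is illustrative: the object names and the "check" performed by each slave are invented, and a real CAN-based master–slave system would exchange messages on the bus rather than through Python queues.

import queue
import threading

job_queue = queue.Queue()        # object-to-object combinations still to be checked
report_queue = queue.Queue()     # results reported back to the master

def slave(slave_id):
    # Each slave repeatedly takes a job from the master's queue and reports a result.
    while True:
        job = job_queue.get()
        if job is None:          # sentinel from the master: no more work
            break
        a, b = job
        report_queue.put((slave_id, a, b, a == b))   # stand-in for a face-level check

objects = ["valve", "arm", "conveyor", "gripper"]
pairs = [(objects[i], objects[j])
         for i in range(len(objects)) for j in range(i + 1, len(objects))]

workers = [threading.Thread(target=slave, args=(n,)) for n in range(3)]
for w in workers:
    w.start()
for pair in pairs:               # master: enqueue every combination to be checked
    job_queue.put(pair)
for _ in workers:
    job_queue.put(None)          # one sentinel per slave
for w in workers:
    w.join()
while not report_queue.empty():
    print(report_queue.get())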
6.6.3
Producer–Consumer Model
The producer–consumer design pattern is based on the master–slave model. The producer–consumer design breaks the parallel processes of a program down into two categories: those that produce data and those that consume the data produced.
6.6.3.1
Designs
The producer–consumer pattern is commonly used when acquiring multiple sets of data to be processed in order. Suppose you want to write an application that accepts data while processing them in the order they were received. Because queuing up (producing) this data is much faster than the actual processing (consuming), the producer–consumer design pattern is best suited for this application. The producer–consumer pattern approach to this application would be to queue the data in the producer loop, and have the actual processing done in the consumer loop. This in
effect allows the consumer loop to process the data at its own pace, while the producer loop queues additional data at the same time. The producer–consumer pattern offers the ability to handle multiple processes easily at the same time while iterating at individual rates. What makes this pattern unique is its added benefit of buffered communication between application processes. When there are multiple processes running at different speeds, buffered communication between processes is extremely effective. For example, suppose an application has two processes: the first performs data acquisition, and the second takes that data and places it on a network, with the first process operating at three times the speed of the second. If the producer–consumer design pattern is used to implement this application, the data acquisition process acts as the producer and the network process as the consumer. With a large enough communication queue (buffer), the network process has access to a large amount of the data that the data acquisition loop acquires. This ability to buffer data minimizes data loss. This design pattern can also be used effectively when analyzing network communication. Such an application requires two processes to operate at the same time and at different speeds: the first constantly polls the network line and retrieves packets, and the second takes these packets and analyzes them. Here the first process acts as the producer because it supplies data to the second, which acts as the consumer. This application benefits from the producer–consumer design pattern: the parallel producer and consumer loops handle the retrieval and analysis of data off the network, and the queued communication between the two allows buffering of the retrieved network packets. This buffering becomes very important when network traffic gets heavy, since with buffering, packets can be retrieved and communicated faster than they can be analyzed.
6.6.3.2
Implementations
As with the standard master–slave pattern, the producer–consumer design consists of parallel loops that are broken down into two categories: producers and consumers. Communication between producer and consumer loops is done using data queues, which have first-in, first-out (FIFO) semantics. In the producer–consumer design pattern, queues can be initialized outside both the producer and consumer loops. Because the producer loop produces data for the consumer
loop, it adds data to the queue (adding data to a queue is called "enqueuing"), and the consumer loop removes data from that queue (removing data from a queue is called "dequeuing"). Because queues are FIFO, the data is always analyzed by the consumer in the same order as it was placed into the queue by the producer. Queues are bound to one particular data type, so every different data item produced in a producer loop needs to be placed into a different queue. This could be a problem because of the complication it adds to the block diagram. Queues can, however, accept data types such as arrays and clusters, and each data item can be placed inside a cluster; this masks a variety of data types behind the single cluster data type. Since the producer–consumer design pattern is not based on synchronization, there is no fixed order of initial execution between the producer and consumer loops, so initializing one loop before the other begins execution can be a problem. Occurrences can be used to solve these kinds of synchronization problems.
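A minimal sketch of the pattern follows, with a shared FIFO queue providing the buffered communication between the two loops. The acquisition and processing steps are placeholders chosen for the example; in a LabVIEW-style implementation the same roles are played by the enqueue and dequeue functions and the two parallel loops.

import queue
import threading
import time

buffer = queue.Queue(maxsize=100)    # communication queue between the two loops

def producer():
    # Acquisition loop: enqueue data faster than it can be processed.
    for sample in range(20):
        buffer.put(sample)           # "enqueue"
    buffer.put(None)                 # sentinel: acquisition finished

def consumer():
    # Processing loop: dequeue and process the data at its own pace, in FIFO order.
    while True:
        sample = buffer.get()        # "dequeue"
        if sample is None:
            break
        time.sleep(0.01)             # the slower processing step
        print("processed", sample)

loops = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in loops:
    t.start()
for t in loops:
    t.join()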
6.6.4
Remote Procedure Call (RPC)
Remote procedure call (RPC) is a protocol that one program can use to request a service from a program located in another controller or computer on a network, without having to understand the network details. RPC uses the client–server model: the requesting program is the client and the service-providing program is the server (Figure 6.40). Like a regular or local procedure call, an RPC is a synchronous operation, requiring the requesting program to be suspended until the results of the remote procedure are returned. However, the use of lightweight
Figure 6.40 Remote procedure calls (RPC): the client application calls an RPC stub program, which passes application-specific procedure invocations and returns through the transport and network layers to the RPC stub program of the server application.
processes or threads that share the same address space allows multiple RPCs to be performed concurrently. RPC spans the transport layer and the application layer in the open systems interconnection (OSI) model of network communication. RPC makes it easier to develop an application that includes multiple programs distributed in a network. When program statements that use RPC are compiled into an executable program, a stub is included in the compiled code that acts as the representative of the remote procedure code. When the program is run and the procedure call is issued, the stub receives the request and forwards it to a client runtime program in the local computer. The client runtime program knows how to address the remote computer and server application, and sends the message across the network that requests the remote procedure. Similarly, the server includes a runtime program and a stub that interface with the remote procedure itself. Results are returned the same way. There are several RPC models and implementations. A popular model and implementation is the Open Software Foundation's Distributed Computing Environment (DCE). RPC is also defined in the ISO Remote Procedure Call Specification, ISO/IEC CD 11578 N6561, ISO/IEC, November 1991.
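For a quick illustration of the stub idea, Python's standard xmlrpc modules already provide the client-stub and server-stub roles of Figure 6.40. The procedure name, return value, and port below are assumptions made for the example; industrial systems more commonly rely on DCE RPC, ONC RPC, or vendor-specific mechanisms.

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a remote procedure and wait for requests.
server = SimpleXMLRPCServer(("127.0.0.1", 8900), logRequests=False, allow_none=True)
server.register_function(lambda channel: 21.5, "read_temperature")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy acts as the local stub; the call suspends the caller
# until the result of the remote procedure is returned.
proxy = ServerProxy("http://127.0.0.1:8900", allow_none=True)
print(proxy.read_temperature(3))

server.shutdown()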
7 System Routines in Industrial Control

7.1 Overview

To support the parts required to accomplish the desired control functionality, an industrial control system requires some auxiliary software and hardware. Without electrical power, all industrial control systems are nothing more than plastics, steel, and semiconductors. Moreover, the plastics, steel, and semiconductors with which an industrial control system is equipped are not ready to work immediately when the power switch is turned on: once the switch is turned on, the system needs to carry out a preparation that builds it up before it can perform controls, which is defined as the Power-on process. On the other hand, if a running control system stops immediately when the power is switched off, its hardware and in particular its software can be damaged, much as a car suffers if the brake is applied violently while it is running at high speed. A transition process from the running state to a still state is therefore necessary for the protection of an industrial control system, and is defined as the Power-down process. The hardware and, in particular, the software of an industrial control system required to accomplish the Power-on and Power-down processes are called Power-on routines and Power-down routines, respectively. Any industrial control system hosts a number of devices and microprocessors that are connected to each other in a given topology, and the control functions of the system are realized through the cooperation of all these interior devices and microprocessors. Each microprocessor controller in an industrial control system must dynamically detect the existence of all the connected devices, and check the compatibility between the detected and the designed devices to decide whether they can work. For a distributed industrial control system, the main microprocessor unit must establish the system's integrity at power-on to ensure that the whole system can be coordinated. The system routines responsible for installing all the devices and configuring the system are called install and configure routines. An industrial control system can also have other kinds of system routines to assist the system in (1) detecting component faults,
(2) analyzing the errors of the software programs, (3) managing the system's power costs, and (4) adjusting the physical attributes of some instruments. All of these system routines are covered in Section 7.4: Diagnostic Routines. Simulation is becoming a very popular method of designing a control system, which can be tested on a simulated, rather than a real, plant. There are also products that enable rapid prototyping, that is, developing the control system using a simulation suite and then downloading it to a target processor of a real-time control system. The simulation routines are given in Section 7.5.
7.2 Power-On and Power-Down Routines

Power is the rudimentary basis of industrial control, and the principal type of power for industrial control is electricity. In an industrial control system, electricity is needed everywhere to support the control operations. Almost every industrial control system has an assembly power supply switch: the control system starts once this switch is turned on and stops, either gradually or suddenly, once it is turned off. After the system is switched on, power reaches every part of it, but a period of time is required to establish the system before control functionality can be performed, because: (1) The system needs to recognize all the components and devices, including their respective attributes and parameters; (2) The different parts of the system need to learn each other's functions and statuses; (3) The system needs to ensure that it has no mechanical or electronic fault and no software error. For a control system or control device with at least one microprocessor-unit board or chipset, the following procedure starts immediately after its power switch is turned on: (1) Each microprocessor-unit board or chipset boots itself independently; (2) Each microprocessor-unit board or chipset, after being successfully booted, communicates with the other microprocessor-unit boards or chipsets connected to it so that they can synchronize with each other;
(3) The main, or mother, microprocessor-unit board (if the system has only one microprocessor-unit board or chipset, this is the motherboard of the system) carries out the installation and configuration of the system; (4) When the installation and configuration are complete, the control system enters a suitable mode or state in which it is ready to undertake control functionality. The boot sequence of a microprocessor-unit board or chipset at Power-on is given in Section 5.1.3; this section covers only those issues in the Power-on process that arise after all the microprocessor-unit boards of the system have been successfully booted. In contrast to the Power-on procedure, the Power-down procedure restores the control system from the working state to an idle state before the power is cut off, so as to avoid possible damage to mechanical parts, electronic hardware, and software. The Power-down procedure needs a period of time for the following reasons: (1) Before the power is cut off, all the microprocessor-unit boards or chipsets in the system need to receive a message such as "please prepare for power down" so that they can withdraw from their current processes by saving the program data and changing the program mode; (2) All the physical hardware devices in the system should terminate their respective movements and activities; (3) The mother microprocessor-unit board or chipset must know when all the devices have completed the preparation for power-down before it sends a command to turn off the power switch.

When an industrial control system is running and the "power will be down" command is applied to it, either by pressing its power switch or by sending a message from its human-machine interface, this command first arrives at one of its microprocessor-unit boards. The control system then starts a Power-down process, and the power is not cut off until that process completes. The Power-down process follows the steps below: (1) On receiving the "power will be down" command, the microprocessor-unit board or chipset that receives it broadcasts the command message to all the microprocessor-unit boards connected to it; (2) Each of these microprocessor-unit boards or chipsets, after receiving the command message, in turn informs the microprocessor-unit boards connected to it; (3) All the microprocessor-unit boards or chipsets of the system terminate their running processes, stop the activities of the subsidiary physical hardware, save all the system and program data necessary for the next run, change their respective system modes, and so on; (4) When the system completes the preparation for Power-down, it sends a command to its power supply control device to turn off the power to the control system.
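This power-down sequence is essentially a broadcast-and-acknowledge protocol, which the following sketch models with ordinary Python objects standing in for microprocessor-unit boards and the power supply command path (the board names, topology, and callback are all invented for the illustration):

class Board:
    # Stand-in for a microprocessor-unit board or chipset.
    def __init__(self, name, neighbours=()):
        self.name = name
        self.neighbours = list(neighbours)
        self.ready = False

    def prepare_power_down(self):
        # Withdraw from the current process: save program data, change the program
        # mode, stop subsidiary hardware, then report readiness.
        self.ready = True

def power_down(entry_board, all_boards, power_supply_off):
    # Broadcast "power will be down", wait until every board is ready, then cut power.
    pending, visited = [entry_board], set()
    while pending:                               # flood the command through the topology
        board = pending.pop()
        if board.name in visited:
            continue
        visited.add(board.name)
        board.prepare_power_down()
        pending.extend(board.neighbours)
    if all(b.ready for b in all_boards):         # preparation complete on every board
        power_supply_off()

mpu1, mpu2, mpu3 = Board("MPU-1"), Board("MPU-2"), Board("MPU-3")
mother = Board("mother", neighbours=[mpu1, mpu2])
mpu2.neighbours.append(mpu3)
power_down(mother, [mother, mpu1, mpu2, mpu3], lambda: print("LVPS: power off"))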
7.2.1
System Hardware Requirements
Both the Power-on routine and the Power-down routine in an industrial control system rely not only on software programs but also on special electronic hardware and electrical devices. A low voltage power supply circuit (LVPSC) is necessary for a control system to perform this functionality. In addition to the low voltage power supply circuits, the Basic Input/Output System (BIOS) of the microprocessor-unit boards or chipsets in an industrial control system is also crucial for undertaking the Power-on Self Tests (POST).
7.2.1.1
Low Voltage Power Supply Circuit (LVPSC)
The LVPSC is required by an industrial control system to provide electricity at different voltages to different chipsets, boards, and devices, and it allows the control system to undertake the Power-on and Power-down routines. Figure 7.1 illustrates a typical LVPSC resident in an assumed distributed control system. From Fig. 7.1, the low voltage power supply circuits of an industrial control system consist of the following components: (1) System ON/OFF button. This button provides the means for the user to turn the power for the control system on or off. As shown in Fig. 7.1, this button is attached to the power state indicator LED. If the button is pressed while the system is powered off, electricity immediately flows to every part of the control system; if it is pressed while the system is working, it merely starts the Power-down process, which runs for a short time before the control system is powered off. (2) LVPS regulator. This is the core of the LVPSC and is responsible for all of the following: (a) converting alternating current (AC) into direct current (DC);
Figure 7.1 Block diagram of the low voltage power supply circuits (LVPSC) in an assumed distributed industrial control system: the mains input feeds a UPS and the LVPS regulator, which supplies 3.3 V, 5 V, 12 V, and 24 V to the printed wires assembly board; the system ON/OFF button, the watchdog, and the mother and subsidiary MPU boards (connected by RS422 and I2C) exchange the power supply on/off command.
(b) switching the power on or off; and (c) outputting several different voltage values. In an LVPSC, this regulator connects to the power state indicator LED through the watchdog, as shown in Fig. 7.1, and its outputs go to the system's printed wires assembly board (PWAB), which distributes power throughout the control system. Figure 7.2 shows an assumed simplex linear regulator power circuit. (3) Watchdog timer. This is used for the surveillance of certain physical variables on the microprocessor-unit board that acts as the power controller for the system; the supply voltage is one of these variables. The watchdog is always on duty while the microprocessor is running. Its counter or timer is reset to zero whenever the voltage on this board rises above or falls below a given threshold, so that the microprocessor resets or requests power-off for the system. The watchdog thus guarantees that the system works at the desired voltages.
Figure 7.2 The circuit of a simplex low voltage power supply (LVPS) device.
(4) ON-power state indicator LED. This works as an interface between the ON/OFF button and the microprocessor, generating an interrupt signal to the microprocessor whenever the state of the button changes. (5) UPS device. The UPS is a conventional device that conditions the exterior power source so that the control system can work under safe conditions; its main function is to maintain the current and voltage of the exterior electricity supply. (6) Power supply ON/OFF command path. An interface ASIC passes the microprocessor's message to the LVPS in response to the user action applied to the ON/OFF button.
7.2.1.2
Basic Input and Output System (BIOS)
In general, the BIOS is a concept comprising both hardware and software aspects. Its hardware aspect is the firmware of a microprocessor-unit chipset, including the microprocessor unit, the bus arrays, and the register sets. Its software aspect is the set of instructions used to boot this microprocessor-unit chipset, termed the boot program or boot code. When a control system is first powered on, the BIOS is the first thing executed by the system. The BIOS performs all the tasks that need to be done at startup, which include the self-tests and the initialization of the hardware in an industrial control system. Together these tasks are known as the Power-on Self-test (POST) process.
On most current motherboards of industrial control systems, peripheral component interconnect (PCI) bus arrays are widely used as the main trunks linking the CPU's internal buses with other types of buses, such as ISA, SCSI, the RS series, and AGP, to carry communications between the CPU and the I/O interface devices. The bus systems on motherboards have several bridges, which are ASIC schemes that handle these transactions. It is these complex bus arrays that the motherboards and other microprocessor chipsets rely on to perform the POST. The boot programs are typically stored in the EEPROM of a microprocessor chipset, sometimes called Flash memory. When a control system is first powered on, the motherboard begins the POST. During this process, the boot programs run the CMOS setup, which performs basic diagnoses on the chipset hardware by means of the bus arrays and bridges, and stores the results of the POST in memory.
7.2.2
System Power-On Process
In the Power-on process, after the booting of each microprocessor-unit board or chipset is completed, all the microprocessor units and devices in the industrial control system are synchronized. Figure 7.3 is the flow chart by which a microprocessor-unit board or chipset synchronizes all the microprocessor units and devices during the Power-on process. Figure 7.3 shows two stages: (1) start up the operating system (OS) and then carry out the synchronization between this microprocessor-unit's device and all the devices connected to it; and (2) start up the application program to initialize the services, monitors, handlers, interface managers, and so forth. The watchdog timer in a microprocessor-unit board or chipset can be used to bound the time spent on each step. At the start of a step, for example synchronizing with one of the connected devices, the microprocessor, through its operating system program, loads a value into the watchdog timer. When this timer expires, the microprocessor checks whether the step has been completed. If the step completes before or as the timer expires, the microprocessor carries on to the next step; otherwise, it reports an error and takes measures to handle it. An important index for industrial control systems is the time spent in the Power-on process. It is generally acceptable if an industrial control system completes its Power-on process in less than one minute. If it takes longer, engineers need to consider upgrading the microprocessors to faster ones, modifying the boot program or the booting process, and fixing bugs in the operating system or the application programs.
Figure 7.3 Flow chart of the Power-on process of a microprocessor-unit board or chipset in an industrial control system: after the power switch is turned on, the board runs the boot process; starts the operating system program (initializing tasks, events, and memory); starts the synchronization process by reading the POST results of the boot program for the existence of all the connecting devices; synchronizes with each connecting device in turn (incrementing i, and invoking an error handling process or service on failure); then starts the application program (initializing services, monitors, and so on) and sets up an idle state ready to load the clients' control process.
7.2.3 System Power-On Self Tests
The self-test of the system hardware and software in the Power-on stage is a necessary operation for an industrial control system. Without this operation, the system cannot be properly built up to carry out the control activities loaded by users. In computer and industrial control technology, POST, meaning Power-on Self Test, is the standard term for this operation.
7.2.3.1 When Does the POST Apply?
As mentioned in the earlier chapters, the boot process comes first once an industrial control system is powered on. The POST takes up a large share of the operations of the boot process, so the POST is the main task during the boot process. However, engineers working in industrial control often use this term for hardware and software tests applied after the boot process has completed in Power-on. For example, some industrial control systems issue the self-test routines in the diagnostic routines (see the next section for the Diagnostics routines), or issue some hardware tests in the synchronization stage after the boot process completes. Although these tests can perform the same operations as the POST does in the boot process, they are not POST in the strict technical sense.
7.2.3.2 What Does the POST Do?
In most industrial control systems, POST does two kinds of jobs: hardware initialization and self-diagnostics. The following is the work performed by POST:
(1) Initializing and testing the internal buses of a microprocessor-unit. As described in Section 2.1.4, the internal bus system of a microprocessor-unit includes three buses: the Address bus, the Data bus, and the Control bus. Some registers resident in the microprocessor-unit actually control these buses. The initialization of these buses requires writing into these registers to set up their states and to configure their functionality and physical parameters. Self-tests can be performed during the execution of these writes and will stop if these buses have hardware faults.
(2) Initializing and Testing the Internal I/O Ports of a Microprocessor-Unit. As described in Section 2.1.3, there are the input ports and
the output ports in a microprocessor-unit. These ports are actually ASICs containing some registers. The initialization of these ports needs two steps: (1) the first step is to read from these ports to detect whether or not they exist, which also performs one of the self-tests; and (2) the second step is to write into these registers to initialize these ASICs. If these ASICs have hardware faults, these writes are stopped after the fault is recorded in some microprocessor-unit registers.
(3) Initializing and Testing the Buses of a Microprocessor-Unit Board. As described in Section 2.1, a microprocessor-unit board or chipset should have other bus systems to connect with the peripheral devices. The bus system of a microprocessor-unit board or chipset has a complicated topology, consisting of bridges and arbiters that are themselves ASICs containing registers. The initialization of this bus system requires writing into these registers to set up these ASICs. Self-tests are performed during the execution of these writes, which will be stopped if these ASICs have hardware faults.
(4) Initializing and Testing the Peripheral Devices. A microprocessor-unit board or chipset may connect with some of the peripheral devices listed in Section 2.2. These peripheral devices are able to communicate with the microprocessor by means of the bus system. The initialization of these peripheral devices needs three steps: (1) the first step is to assign the memory-mapped I/O registers to each of these devices; (2) the second step is to read their controllers to verify that they exist; and (3) the third step is to write the working and configuration parameters into the registers inside these devices. During these steps, stopping the initialization if a device does not exist or has hardware faults also serves as a self-test.
(5) Initializing and Testing the Memory of the Microprocessor-Unit. The memory residing inside a microprocessor-unit is the SDRAM. This work includes two aspects. The first works out the geometry of the SDRAM banks. The second tests the SDRAM controller. The geometry is probed by writing some values into various locations in the SDRAM, then reading back these values and working out where the holes in the memory map are. This work requires setting the microprocessor into protected mode and executing some loops to ensure that every place in the SDRAM has been probed. The SDRAM controller is initialized through the relevant registers inside the microprocessor-unit; writing the memory bank map into these registers accomplishes this. The memory bank map includes the number of banks, the starting and ending addresses of each bank, and so on.
(6) Initializing the Interrupt Controller of a Microprocessor-Unit. This is for the microprocessor to generate the Interrupt Vectors Table of all kinds of possible interrupts and their handlers. The following are necessary to build up the Interrupt Vectors Table for a microprocessor: (1) For each possible interrupt and its handler, setting up the stack structure of its descriptor; (2) For each possible interrupt and its handler, setting up the stack structure of its Opcode; and (3) For each possible interrupt, setting up the stack structure that can be used to store the seven last stack frames.
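As a concrete illustration of the SDRAM geometry probing described in item (5) above, the following C sketch writes a test pattern at fixed intervals across an assumed memory window and treats locations that fail to read back as holes in the memory map. The base address, window size, step size, and pattern are illustrative assumptions, not values from any particular chipset.

#include <stdint.h>
#include <stdbool.h>

#define PROBE_PATTERN  0xA5A5A5A5u
#define PROBE_STEP     (1u << 20)        /* assumed probe granularity: 1 MB    */
#define SDRAM_BASE     0x00000000u       /* assumed start of the SDRAM window  */
#define SDRAM_LIMIT    0x04000000u       /* assumed 64 MB address window       */

/* Write a pattern to one location and check that it reads back. */
static bool sdram_location_present(uintptr_t addr)
{
    volatile uint32_t *p = (volatile uint32_t *)addr;
    uint32_t saved = *p;                 /* preserve the original contents     */
    *p = PROBE_PATTERN;
    bool present = (*p == PROBE_PATTERN);
    *p = saved;
    return present;
}

/* Walk the whole window; regions that fail the probe are holes in the map. */
static uint32_t probe_sdram_geometry(void)
{
    uint32_t usable_regions = 0;
    for (uintptr_t addr = SDRAM_BASE; addr < SDRAM_LIMIT; addr += PROBE_STEP) {
        if (sdram_location_present(addr))
            usable_regions++;            /* region is backed by real SDRAM     */
        /* a failed probe marks a hole; bank boundaries fall where holes begin */
    }
    return usable_regions;
}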
7.2.3.3 Who Does the POST?
The boot programs, or the BIOS, of a microprocessor-unit carry out this POST. In most industrial control systems, the boot programs or BIOS are stored in an EEPROM (also called Flash) so that they are editable and adjustable. Engineers are therefore able to update their code to develop the microprocessor-unit and the control system. When a microprocessor-unit is first powered up, the program counter of the microprocessor's engine points to the beginning of the boot program code in the EEPROM; hence the boot code runs first and the POST is executed first. When the booting process completes, control is immediately transferred from the boot program to the Operating System code and the Application programs.
7.2.4
System Power-Down Process
An industrial control system can be powered down by removing power from the entire system. There are two ways to implement the power-off of an industrial control system:
(1) Hard Power-down. It is called a "Hard Power-off" if the power is immediately removed from everywhere in the system, without allowing any devices to make preparations, once a Power-down request is made. Powering off this way can damage the system; both the hardware and the software in the control system could be broken.
(2) Soft Power-down. It is called a "Soft Power-down" if every device of the system completes a preparation before the power supply to the system is cut off. This protects an industrial control system from damage to both its hardware and its software.
The Soft Power-down is recommended owing to its obvious safety benefits. The Soft Power-down asks for the system to run a preparation process between the time a Power-down request is loaded and when the power supply is cut off to all of the system. The typical Power-down process follows the procedure listed: (1) The users’ request for a Power-down is first signaled to the system either by pressing its System ON/OFF Button (Fig. 7.1) or by clicking such a shut down button in its human-machine interface (which is like the Shut Down pop-up window in the monitor screen of a Personal Computer). This request, as a signal or as an interrupt, first arrives at the microprocessor-unit located at the board or chipset that connects with this System ON/OFF button. This microprocessor-unit board can be called the Power Control MPU Board. As an example, for a control system given in Fig. 7.1, once the System ON/OFF button is pressed while the system is running, an interrupt is applied through the ‘ON’ LED to the microprocessor on the Power Control MPU Board. (2) After the microprocessor of the Power Control MPU Board receives this signal, it normally sends a message like “Powerdown request” to the motherboard of the system to start the Power-down process. The motherboard will coordinate this Power-down process thereafter. For some control systems, when receiving this Power-down request message from Power Control MPU Board, the microprocessor of the motherboard issues such a “Power is shutting down” display as a Pop-up window in the human-machine interfaces to ask users for confirmation. This confirmation needs to be sent to the microprocessor of the motherboard. (3) After the Power-down request is confirmed to the motherboard, all the microprocessor-unit boards and devices of the control system start the preparation for powering down with the different procedures on different boards and devices: (a) Motherboard. The following operations will be implemented by the microprocessor of the motherboard of an industrial control system: (i) Its Operating System, after it receives this signal or interrupt of the Power-down request, initializes the System Power-down manager to coordinate the Powerdown process of the control system; (ii) The System Power-down manager, after it is initialized, sends such a Power-down message to all the connected microprocessor-unit boards and devices, respectively;
(iii) The Operating System of the motherboard then changes its own system mode from such a mode as Normal Mode into such a mode as Idle Mode. Once its system mode has been changed into the Idle Mode, it reports to the System Power-down manager; (iv) After the System Power-down manager of the motherboard receives the reports from all the microprocessorunit boards and devices including itself for the completion of changing their system modes to the Idle Mode, the System Power-down manager sends such a message as Turn Off Power to the microprocessor of the Power Control MPU Board. (b) Power control MPU board. The following operations will be implemented by the microprocessor of this microprocessorunit board: (i) Its Operating System sends a Power-down request message to all the connected microprocessor-unit boards and devices with it, respectively; (ii) It then changes its own system modes from such a mode as Normal Mode into such a mode as Idle Mode. Once it and all the connected microprocessor-unit boards and devices have done the mode change, it sends to the System Power-down manager to report the completion of its system mode change. (iii) Once receiving such a message as Turn off Power from the motherboard, it sends such as the Power Supply Off command to the Power Supply Control Device that is the LVPS in the system given in Fig. 7.1. (c) The microprocessor-unit boards that are neither the motherboard nor the power control MPU board. On each of these microprocessor-unit boards, the following operations will be implemented by its microprocessor: (i) It sends a Power-down request message to all the connected microprocessor-unit boards and devices with it, respectively; (ii) It then changes its own system modes from such a mode as Normal Mode into such a mode as Idle Mode. Once it and all the connected microprocessor-unit boards and devices having done this mode change, it sends to its own motherboard (may not be the motherboard of the control system) to report the completion of its system mode change. (d) Power supply control device. This device is the LVPS in the system given in Fig. 7.1 and implements only one operation:
Once it receives the Power Supply Off command from the Power Control MPU Board, it cuts the power source off. Power is totally removed from the control system after the power supply control device cuts the power supply off. In the Power-down process of an industrial control system, Change System Mode is the key operation required on each microprocessor-unit board or chipset. The sequence for a microprocessor-unit board to change its system mode from a Normal Mode to an Idle Mode includes these steps: (1) stopping the currently running application processes; (2) saving all the necessary working data of the currently running application processes; (3) saving all the necessary data of all the application processes waiting in the queue; (4) saving all the memory (RAM, DRAM, etc.) contents; (5) saving the reason for this Power-down if required; and (6) entering the Idle Mode and running the idle task in the Operating System programs.
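A minimal C sketch of the Normal-to-Idle mode change sequence just listed is shown below. The function names (stop_application_processes, save_running_process_data, and so on) are hypothetical placeholders for the operating-system services a real control system would provide.

/* Hypothetical operating-system services assumed by this sketch. */
extern void stop_application_processes(void);
extern void save_running_process_data(void);
extern void save_queued_process_data(void);
extern void save_memory_contents(void);          /* RAM, DRAM, ... */
extern void save_power_down_reason(int reason);
extern void enter_idle_task(void);

/* Change a board's system mode from Normal to Idle before power is cut. */
void change_mode_normal_to_idle(int reason, int reason_required)
{
    stop_application_processes();        /* (1) stop the running application processes  */
    save_running_process_data();         /* (2) working data of the running processes   */
    save_queued_process_data();          /* (3) data of the processes waiting in queue  */
    save_memory_contents();              /* (4) RAM/DRAM contents                        */
    if (reason_required)
        save_power_down_reason(reason);  /* (5) record why the system is powering down  */
    enter_idle_task();                   /* (6) enter Idle Mode and run the OS idle task */
}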
7.3 Install and Configure Routines
An industrial control system, in particular a distributed control system, consists of one or more controller devices or components that govern the end equipment. Both the install and configure routines in an industrial control system are software designed to understand the hardware of the system. In this section, we specify the Device Install Routine and the Device Configure Routine for the installation and the configuration of the controller devices or components of an industrial control system. The Device Install Routine is used to work out what devices are hosted and where the devices are located in an industrial control system. The Device Configure Routine, by contrast, is used to work out the details of each installed device, including the attributes and parameters needed to implement control functionality. Without these two routines, the software of an industrial control system cannot understand and cannot record the controller devices or components, and is therefore unable to govern its hardware and the end equipment. Both the Device Install Routine and the Device Configure Routine are executed in the Power-on process of an industrial control system. Although some aspects of these two routines have been mentioned in Section 7.2, this section is specially arranged to emphasize their importance in industrial control.
7.3.1
System Hardware Requirements
In industrial control and in computer technology, the controller devices are called "Peripheral Components" (or "Peripheral Devices"). These Peripheral Components are electronically connected in accordance with a designed topology to form an industrial control system. The equipment used to make this electronic connection between the peripheral components comprises various types of Peripheral Component Interconnect (PCI) systems. Both the Install Routine and the Configure Routine must work in these PCI systems. This subsection intends to answer the question of how installation and configuration are supported by these PCI systems. Peripheral Component Interconnect (PCI), as its name implies, is a standard that describes how to connect the peripheral components of an industrial control system together in a structured and controlled way. The standard describes the way that the system components are electrically connected and the way that they should behave. Figure 7.4 is the logical diagram of an assumed PCI-based control system. The PCI buses and PCI-PCI bridges are the medium connecting the system components together; the CPU is connected to PCI bus 0, as is the GUI device. A special PCI device, a PCI-PCI bridge, connects the primary bus to the secondary PCI bus, PCI bus 1. In the jargon of the PCI specification, PCI bus 1 is described as being downstream of the PCI-PCI bridge and PCI bus 0 is upstream of the bridge. Connected to the secondary PCI bus are the SCSI and Ethernet devices for the system. Physically this bridge, secondary PCI bus, and two devices would all be contained on the
same combination PCI card. The PCI-ISA bridge in the system supports older, legacy ISA devices and the diagram shows a super I/O controller chip, which could be controlling the floppy disk or the keyboard.
Figure 7.4 An assumed PCI-based system: the CPU, the Graphic User Interface (GUI) device, a PCI-ISA bridge (with a super I/O controller), and a PCI-PCI bridge sit on PCI Bus 0; the SCSI and Ethernet devices sit on the downstream PCI Bus 1.
7.3.1.1
PCI Address Spaces
The CPU and all the PCI devices need to access memory that is shared between them. Device drivers use this shared memory to control the PCI devices and to pass information between them. Typically, the shared memory contains control and status registers for the device. These registers are used to control the device and to read its status. For example, the PCI SCSI device driver would read its status register to find out if the SCSI device was ready to write a block of information to the SCSI disk. Or it might write to the control register to start the device running after it has been turned on. The CPU's system memory could be used for this shared memory, but if it were, then every time a PCI device accessed memory the CPU would have to stall, waiting for the PCI device to finish, because access to memory is generally limited to one system component at a time. This would slow the system down. It is also undesirable to allow the system's peripheral devices to access main memory in an uncontrolled way; this would be very dangerous, since a rogue device could make the system very unstable. Instead, peripheral devices have their own memory spaces. The CPU can access these spaces, but access by the devices into the system's memory is very strictly controlled using DMA (Direct Memory Access) channels. ISA devices have access to two address spaces, ISA I/O (Input/Output) and ISA memory. With most advanced microprocessors, PCI has three: PCI I/O, PCI Memory, and PCI Configuration space. However, some microprocessors, for example the Alpha AXP processor, do not have natural access to address spaces other than the system address space. Such a processor uses support chipsets to access other address spaces such as PCI Configuration space, typically using a sparse address-mapping scheme that steals part of the large virtual address space and maps it to the PCI address spaces.
7.3.1.2
PCI Configuration Headers
Every PCI device in an industrial control system, including the PCI-PCI bridges, has a configuration data structure that is somewhere in the PCI configuration address space. The PCI configuration header allows the system to identify and to control the device. Exactly where the header is in the PCI configuration address space depends on where in the PCI topology
that device is. For example, a GUI card plugged into one PCI slot on the PC motherboard will have its configuration header at one location, and if it is plugged into another PCI slot then its header will appear in another location in PCI configuration memory. This does not matter, for wherever the PCI devices and bridges are, the system will find and configure them, using the status and configuration registers in their configuration headers. Typically, systems are designed so that every PCI slot has its own PCI configuration header at an offset that is related to its slot on the board. So, for example, the first slot on the board might have its PCI configuration at offset 0 and the second slot at offset 256 (all headers are the same length, 256 bytes), and so on. A system-specific hardware mechanism is defined so that the PCI configuration code can examine all possible PCI configuration headers for a given PCI bus and know which devices are present and which devices are absent, simply by trying to read one of the fields in the header (usually the Vendor Identification field) and checking for an error. One possible error indication is that 0xFFFFFFFF is returned when attempting to read the Vendor Identification and Device Identification fields of an empty PCI slot. Figure 7.5 shows the layout of the 256 bytes of the PCI configuration header. It contains the fields listed in Table 7.1.
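A minimal C sketch of the presence test just described follows: read the Vendor Identification field of each possible slot's configuration header and treat an all-ones result as an empty slot. The pci_config_read32 access routine and the slot limit are assumptions made for illustration; the field offsets follow the header layout discussed here.

#include <stdint.h>

#define PCI_VENDOR_ID_OFFSET  0x00         /* Vendor Id and Device Id share offset 00h */
#define PCI_EMPTY_SLOT        0xFFFFFFFFu  /* value read back from an empty slot       */
#define PCI_MAX_SLOT          32           /* assumed number of slots per bus          */

/* Hypothetical low-level configuration-space access routine supplied by the platform. */
extern uint32_t pci_config_read32(int bus, int slot, int func, int offset);

/* Scan one PCI bus and record which slots hold a device. */
static void scan_pci_bus(int bus)
{
    for (int slot = 0; slot < PCI_MAX_SLOT; slot++) {
        uint32_t id = pci_config_read32(bus, slot, 0, PCI_VENDOR_ID_OFFSET);
        if (id == PCI_EMPTY_SLOT)
            continue;                                 /* nothing plugged into this slot */
        uint16_t vendor = (uint16_t)(id & 0xFFFFu);   /* low 16 bits: Vendor Id         */
        uint16_t device = (uint16_t)(id >> 16);       /* high 16 bits: Device Id        */
        (void)vendor;                                 /* a real scan would build a      */
        (void)device;                                 /* PCI_DEV-style record here      */
    }
}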
7.3.1.3
PCI I/O and PCI Memory Addresses
The devices use these two address spaces to communicate with their device drivers running in the kernel on the CPU. For example, the DEC chipset 21141 fast Ethernet device maps its internal registers into PCI I/O space. Its device driver then reads and writes those registers to control the device. GUI drivers typically use large amounts of PCI memory space to contain GUI information. Until the PCI system has been set up and the device's access to these address spaces has been turned on using the Command field in the PCI configuration header, nothing can access them. It should be noted that only the PCI configuration code reads and writes PCI configuration addresses; the device drivers only read and write PCI I/O and PCI memory addresses (this is again left to the device drivers' policies).
7.3.1.4
PCI-ISA Bridges
These bridges support legacy ISA devices by translating PCI I/O and PCI memory space accesses into ISA I/O and ISA memory accesses. A lot of systems now sold contain several ISA bus slots and several PCI bus slots. Over time the need for this backwards compatibility will dwindle and PCI-only systems will be sold. Where in the ISA address spaces (I/O and memory) the ISA devices of the system have their registers was fixed in the dim mists of time by the early Intel 8080-based PCs. The PCI specification copes with this by reserving the lower regions of the PCI I/O and PCI memory address spaces for use by the ISA peripherals in the system and using a single PCI-ISA bridge to translate any PCI memory accesses to those regions into ISA accesses.
Figure 7.5 The PCI configuration header (Vendor Id and Device Id at offset 00h, Command and Status at 04h, Class code at 08h, Base address registers at offsets 10h through 24h, and the interrupt Pin and Line fields at 3Ch).
7.3.1.5
PCI-PCI Bridges
PCI-PCI bridges are special PCI devices that glue the PCI buses of the system together. Simple systems have a single PCI bus but there is an
electrical limit on the number of PCI devices that a single PCI bus can support. Using PCI-PCI bridges to add more PCI buses allows the system to support many more PCI devices. This is particularly important for a high performance server.
Table 7.1 The Fields of the 256-Byte PCI Configuration Header
Vendor Identification: A unique number describing the originator of the PCI device. Digital's PCI Vendor Identification is 0x1011 and Intel's is 0x8086.
Device Identification: A unique number describing the device itself. For example, Digital's 21141 fast Ethernet device has a device identification of 0x0009.
Status: This field gives the status of the device, with the meaning of the bits of this field set by the standard.
Command: By writing to this field the system controls the device, for example, allowing the device to access PCI I/O memory.
Class Code: This identifies the type of device that this is. There are standard classes for every sort of device: GUI, SCSI, and so on. The class code for SCSI is 0x0100.
Base Address Registers: These registers are used to determine and allocate the type, amount, and location of PCI I/O and PCI memory space that the device can use.
Interrupt Pin: Four of the physical pins on the PCI card carry interrupts from the card to the PCI bus. The standard labels these as A, B, C, and D. The Interrupt Pin field describes which of these pins this PCI device uses. Generally it is hardwired for a particular device; that is, every time the system boots, the device uses the same interrupt pin. This information allows the interrupt handling subsystem to manage interrupts from this device.
Interrupt Line: The Interrupt Line field of the device's PCI configuration header is used to pass an interrupt handle between the PCI initialization code, the device's driver, and the operating system's interrupt handling subsystem. The number written there is meaningless to the device driver, but it allows the interrupt handler to correctly route an interrupt from the PCI device to the correct device driver's interrupt handling code within the operating system.
(1) PCI-PCI bridges: PCI I/O and PCI memory windows. PCI-PCI bridges only pass a subset of PCI I/O and PCI memory read and write requests downstream. For example, in Fig. 7.4 the PCI-PCI
bridge will only pass read and write addresses from PCI bus 0 to PCI bus 1 if they are for PCI I/O or PCI memory addresses owned by either the SCSI or Ethernet device; all other PCI I/O and memory addresses are ignored. This stops addresses propagating needlessly throughout the system. To do this, the PCI-PCI bridges must be programmed with a base and limit for the PCI I/O and PCI memory space accesses that they have to pass from their primary bus onto their secondary bus. Once the PCI-PCI bridges in a system have been configured, then so long as the device drivers only access PCI I/O and PCI memory space through these windows, the PCI-PCI bridges are invisible. This is an important feature that makes life easier for Linux PCI device driver writers.
(2) PCI-PCI bridges: PCI configuration cycles and PCI bus numbering. So that the CPU's PCI initialization code can address devices that are not on the main PCI bus, there has to be a mechanism that allows bridges to decide whether or not to pass configuration cycles from their primary interface to their secondary interface. A cycle is just an address as it appears on the PCI bus. The PCI specification defines two formats for the PCI configuration addresses, Type 0 and Type 1; these are shown in Figs. 7.6 and 7.7, respectively. Type 0 PCI configuration cycles do not contain a bus number and are interpreted by all devices as being for PCI configuration addresses on this PCI bus. Bits 31:11 of the Type 0 configuration cycles are treated as the device select field. One way to design a system is to have each bit select a different device. In this case, bit 11 would select the PCI device in slot 0, bit 12 would select the PCI device in slot 1, and so on. Another way is to write the device's slot number directly into bits 31:11. Which mechanism is used in a system depends on the system's PCI memory controller. Type 1 PCI configuration cycles contain a PCI bus number and all PCI devices except the PCI-PCI bridges ignore this type of configuration cycle.
Figure 7.6 Type 0 PCI configuration cycle (bits 31:11 device select, bits 10:8 function, bits 7:2 register, bits 1:0 = 00).
Figure 7.7 Type 1 PCI configuration cycle (bits 31:24 reserved, bits 23:16 bus, bits 15:11 device, bits 10:8 function, bits 7:2 register, bits 1:0 = 01).
All of the PCI-PCI bridges seeing Type 1 configuration cycles may choose to pass them to the PCI buses downstream of themselves. Whether the PCI-PCI bridge ignores the Type 1 configuration cycle or passes it onto the downstream PCI bus depends on how the PCI-PCI bridge has been configured. Every PCI-PCI bridge has a primary bus interface number and a secondary bus interface number. The primary bus interface is the one nearest the CPU, and the secondary bus interface is the one furthest away. Each PCI-PCI bridge also has a subordinate bus number, which is the maximum bus number of all the PCI buses that are bridged beyond the secondary bus interface. To put it another way, the subordinate bus number is the highest numbered PCI bus downstream of the PCI-PCI bridge. When the PCI-PCI bridge sees a Type 1 PCI configuration cycle it does one of the following things:
(a) Ignores it if the bus number specified is not between the bridge's secondary bus number and subordinate bus number (inclusive),
(b) Converts it to a Type 0 configuration command if the bus number specified matches the secondary bus number of the bridge,
(c) Passes it on to the secondary bus interface unchanged if the bus number specified is greater than the secondary bus number and less than or equal to the subordinate bus number.
So, if we want to address Device 1 on bus 3 of the topology we must generate a Type 1 Configuration command from the CPU. Bridge 1 passes this unchanged onto Bus 1. Bridge 2 ignores it, but Bridge 3 converts it into a Type 0 Configuration command and sends it out on Bus 3, where Device 1 responds to it.
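The three forwarding rules just listed can be expressed compactly in C, as in the sketch below. The type and helper names are hypothetical; the logic simply restates rules (a), (b), and (c).

/* Decision a PCI-PCI bridge makes when it sees a Type 1 configuration cycle. */
typedef struct {
    int secondary_bus;    /* bus immediately downstream of this bridge          */
    int subordinate_bus;  /* highest-numbered bus reachable through this bridge */
} pci_bridge;

typedef enum { CYCLE_IGNORE, CYCLE_CONVERT_TO_TYPE0, CYCLE_PASS_UNCHANGED } cycle_action;

static cycle_action handle_type1_cycle(const pci_bridge *br, int target_bus)
{
    if (target_bus < br->secondary_bus || target_bus > br->subordinate_bus)
        return CYCLE_IGNORE;              /* (a) not for any bus behind this bridge */
    if (target_bus == br->secondary_bus)
        return CYCLE_CONVERT_TO_TYPE0;    /* (b) for the bus directly downstream    */
    return CYCLE_PASS_UNCHANGED;          /* (c) for a bus further downstream       */
}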
7.3.1.6
PCI Initialization
In an industrial control system, the PCI initialization code can be broken into three logical parts:
(1) PCI device driver. This pseudo-device driver searches the PCI system starting at Bus 0 and locates all PCI devices and bridges in the system. It builds a linked list of data structures describing the topology of the system. In addition, it numbers all of the bridges that it finds.
(2) PCI BIOS. This is the software layer that provides the various services required for PCI. How it is implemented is up to the operating system and differs from one operating system to another.
(3) PCI Firmware. System-specific firmware code tidies up the system-specific loose ends of PCI initialization.
7.3.1.7 The PCI Device Driver
The PCI device driver is not really a device driver at all but a function of the operating system called at system initialization time. The PCI initialization code must scan all of the PCI buses in the system looking for all PCI devices in the system (including PCI-PCI bridge devices). It uses the PCI BIOS code to find out if every possible slot in the current PCI bus that it is scanning is occupied. If a PCI slot is occupied, it builds a PCI_DEV data structure describing the device and links it into the list of known PCI devices. The PCI initialization code starts by scanning PCI Bus 0. It tries to read the Vendor Identification and Device Identification fields for every possible PCI device in every possible PCI slot. When it finds an occupied slot it builds a PCI_DEV data structure describing the device. All of the PCI_DEV data structures built by the PCI initialization code (including all of the PCI-PCI bridges) are linked into a singly linked list, PCI_DEV.
(1) Configuring PCI-PCI bridges: assigning PCI bus numbers. For PCI-PCI bridges to pass PCI I/O, PCI memory, or PCI configuration address space reads and writes across them, they need to know the following:
(a) Primary bus number. The bus number immediately upstream of the PCI-PCI bridge.
(b) Secondary bus number. The bus number immediately downstream of the PCI-PCI bridge.
(c) Subordinate bus number. The highest bus number of all of the buses that can be reached downstream of the bridge.
(d) PCI I/O and PCI memory windows. The window base and size for PCI I/O address space and PCI memory address space for all addresses downstream of the PCI-PCI bridge.
The problem is that at the time you wish to configure any given PCI-PCI bridge you do not know the subordinate bus number for that bridge. You do not know if there are further PCI-PCI bridges downstream, and if there are, you do not know what numbers will be assigned to them. The answer is to use a depth-first recursive algorithm: scan each bus for PCI-PCI bridges, assigning them numbers as they are found. As each PCI-PCI bridge is found and its secondary bus numbered, assign it a temporary subordinate number of 0xFF, then scan and assign numbers to all PCI-PCI bridges downstream of it. This all seems complicated, but the example below makes the process clearer.
(2) PCI-PCI bridge numbering: Step 1 (The Linux approach). Taking the topology in Fig. 7.8, the first bridge the scan would find is
Bridge 1. The PCI bus downstream of Bridge 1 would be numbered as 1, and Bridge 1 assigned a secondary bus number of 1 and a temporary subordinate bus number of 0xFF. This means that all Type 1 PCI Configuration addresses specifying a PCI bus number of 1 or higher would be passed across Bridge 1 and onto PCI Bus 1. They would be translated into Type 0 Configuration cycles if they have a bus number of 1, but left untranslated for all other bus numbers. This is exactly what the PCI initialization code needs to do in order to go and scan PCI Bus 1.
Figure 7.8 Configuring a PCI system: Step 1.
(3) PCI-PCI bridge numbering: Step 2 (The Linux approach). Linux uses a depth-first algorithm, so the initialization code goes on to scan PCI Bus 1. Here it finds PCI-PCI Bridge 2. There are no further PCI-PCI bridges beyond PCI-PCI Bridge 2, so it is assigned a subordinate bus number of 2, which matches the number assigned to its secondary interface. Figure 7.9 shows how the buses and PCI-PCI bridges are numbered at this point.
Figure 7.9 Configuring a PCI system: Step 2.
(4) PCI-PCI bridge numbering: Step 3 (The Linux approach). The PCI initialization code returns to scanning PCI Bus 1 and finds another PCI-PCI bridge, Bridge 3. It is assigned 1 as its primary bus interface number, 3 as its secondary bus interface number, and 0xFF as its subordinate bus number. Figure 7.10 shows how the system is configured now. Type 1 PCI configuration cycles with a bus number of 1, 2, or 3 will be correctly delivered to the appropriate PCI buses.
Figure 7.10 Configuring a PCI system: Step 3.
(5) PCI-PCI bridge numbering: Step 4 (The Linux approach). Linux starts scanning PCI Bus 3, downstream of PCI-PCI Bridge 3. PCI Bus 3 has another PCI-PCI bridge (Bridge 4) on it. It is assigned 3 as its primary bus number and 4 as its secondary bus number. It is the last bridge on this branch, so it is assigned a subordinate bus interface number of 4. The initialization code returns to PCI-PCI Bridge 3 and assigns it a subordinate bus number of 4. Finally, the PCI initialization code can assign 4 as the subordinate bus number for PCI-PCI Bridge 1. Figure 7.11 shows the final bus numbers.
Figure 7.11 Configuring a PCI system: Step 4.
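The depth-first numbering walked through in Steps 1 to 4 can be written as a short recursive routine. The sketch below is an illustration of the idea rather than the actual Linux implementation; find_bridges_on_bus and configure_bridge are assumed helpers.

#define TEMP_SUBORDINATE 0xFF   /* placeholder until the subtree has been scanned */

/* Hypothetical helpers: enumerate bridges found on a bus and program one bridge. */
extern int  find_bridges_on_bus(int bus, int *bridge_ids, int max);
extern void configure_bridge(int bridge_id, int primary, int secondary, int subordinate);

/* Scan 'bus', numbering every bridge found; returns the highest bus number used. */
static int scan_and_number_bus(int bus, int next_bus)
{
    int bridges[8];
    int count = find_bridges_on_bus(bus, bridges, 8);
    int highest = bus;

    for (int i = 0; i < count; i++) {
        int secondary = next_bus++;
        /* temporarily claim everything downstream so deeper buses stay reachable */
        configure_bridge(bridges[i], bus, secondary, TEMP_SUBORDINATE);

        int sub = scan_and_number_bus(secondary, next_bus);   /* depth-first descent */
        configure_bridge(bridges[i], bus, secondary, sub);    /* fix up subordinate  */

        next_bus = sub + 1;
        if (sub > highest)
            highest = sub;
    }
    return highest;
}

Calling scan_and_number_bus(0, 1) on the topology of Fig. 7.8 would reproduce the numbering of Fig. 7.11: Bridge 2 ends with subordinate bus 2, Bridges 3 and 4 with subordinate bus 4, and Bridge 1 with subordinate bus 4.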
7.3.1.8
PCI BIOS Functions
The PCI BIOS functions are a series of standard routines that are common across all platforms. For example, they are the same for both Intel and Alpha AXP based systems. They allow the CPU controlled access to all of
the PCI address spaces. Therefore, only Linux kernel code and device drivers may use them.
7.3.1.9
PCI Firmware
The PCI firmware code for Alpha AXP does rather more than that for Intel (which basically does nothing). For Intel based systems the system BIOS, which ran at boot time, has already fully configured the PCI system. For non-Intel based systems further configuration needs to happen: (1) Allocate PCI I/O and PCI memory space to each device, (2) Configure the PCI I/O and PCI memory address windows for each PCI-PCI bridge in the system, (3) Generate Interrupt Line values for the devices; these control the interrupt handling for the device. The following describes how that the code works. (1) Finding out how much PCI I/O and PCI memory space a device needs. Each PCI device found is queried to find out how much PCI I/O and PCI memory address space it requires. To do this, each base address register has all 1’s written to it and then read. The device will return 0’s in the don’t-care address bits, effectively specifying the address space required. There are two basic types of base address register; the first indicates within which address space the device registers must reside, either PCI I/O or PCI memory space. This is indicated by Bit 0 of the register. Figure 7.12 shows the two forms of the base address register for PCI memory and for PCI I/O. To find out just how much of each address space a given base address register is requesting, you write all 1s into the register and then read it back. The device will specify zeros in those don’t care address bits, effectively specifying the address space required. This design implies that all address spaces used are a power of 2 and are naturally aligned. For example when you initialize the DEC Chipset 21142 PCI Fast Ethernet device, it tells you that it needs 0x100 bytes of space of either PCI I/O or PCI memory. The initialization code allocates it space. The moment that it allocates space, the 21142’s control and status registers can be seen at those addresses. (2) Allocating PCI I/O and PCI memory to PCI-PCI bridges and devices. Similar to all memory, the PCI I/O and PCI memory spaces are finite, and to some extent scarce. The PCI firmware
Figure 7.12 PCI configuration header: Base address registers (in the PCI memory form, bit 0 is 0, bits 2:1 give the type, bit 3 the prefetchable flag, and bits 31:4 the base address; in the PCI I/O form, bit 0 is 1, bit 1 is reserved, and bits 31:2 hold the base address).
code for non-Intel systems (and the BIOS code for Intel systems) has to allocate each device the amount of memory that it is requesting in an efficient manner. Both PCI I/O and PCI memory must be allocated to a device in a naturally aligned way. For example, if a device asks for 0xB0 of PCI I/O space, then it must be aligned on an address that is a multiple of 0xB0. In addition to this, the PCI I/O and PCI memory bases for any given bridge must be aligned on 4 k and on 1 MB boundaries, respectively. Given that the address spaces for downstream devices must lie within all of the upstream PCI-PCI bridge’s memory ranges for any given device, it is a somewhat difficult problem to allocate space efficiently. A recursive algorithm can be used to walk through the data structures built by the PCI initialization code. Starting at the root PCI bus the BIOS firmware code: (a) Aligns the current global PCI I/O and memory bases on 4 k and 1 MB boundaries, respectively. (b) For every device on the current bus (in ascending PCI I/O memory needs), it allocates its space in PCI I/O and/or PCI memory. (c) Moves on the global PCI I/O and memory bases by the appropriate amounts. (d) Enables the device’s use of PCI I/O and PCI memory. (e) Allocates space recursively to all of the buses downstream of the current bus. Note that this will change the global PCI I/O and memory bases.
(f) Aligns the current global PCI I/O and memory bases on 4 k and 1 MB boundaries, respectively, and in doing so figures out the size and base of the PCI I/O and PCI memory windows required by the current PCI-PCI bridge. (g) Programs the PCI-PCI bridge that links to this bus with its PCI I/O and PCI memory bases and limits. (h) Turns on bridging of PCI I/O and PCI memory accesses in the PCI-PCI bridge. This means that any PCI I/O or PCI memory addresses seen on the bridge's primary PCI bus that are within its PCI I/O and PCI memory address windows will be bridged onto its secondary PCI bus.
Taking the PCI system in Fig. 7.4 as our example, the PCI firmware code would set up the system in the following way: (1) Align the PCI bases. PCI I/O is 0x4000 and PCI memory is 0x100000. This allows the PCI-ISA bridges to translate all addresses below these into ISA address cycles, (2) The GUI device. This is asking for 0x200000 of PCI memory and so we allocate it that amount starting at the current PCI memory base of 0x200000 as it has to be naturally aligned to the size requested. The PCI memory base is moved to 0x400000 and the PCI I/O base remains at 0x4000. (3) The PCI-PCI bridge. We now cross the PCI-PCI bridge and allocate PCI memory there; note that we do not need to align the bases as they are already correctly aligned: (4) The Ethernet device. This is asking for 0xB0 bytes of both PCI I/O and PCI memory space. It gets allocated PCI I/O at 0x4000 and PCI memory at 0x400000. The PCI memory base is moved to 0x4000B0 and the PCI I/O base to 0x40B0. (5) The SCSI device. This is asking for 0x1000 PCI memory and so it is allocated it at 0x401000 after it has been naturally aligned. The PCI I/O base is still 0x40B0 and the PCI memory base has been moved to 0x402000. (6) The PCI-PCI bridge’s PCI I/O and memory windows. We now return to the bridge and set its PCI I/O window at between 0x4000 and 0x40B0 and its PCI memory window at between 0x400000 and 0x402000. This means that the PCI-PCI bridge will ignore the PCI memory accesses for the GUI device and pass them on if they are for the Ethernet or SCSI devices.
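The "write all ones, read it back" probe described in item (1) of Section 7.3.1.9 can be sketched in C as follows. The configuration-space accessors are assumed names, and the decoding of the low flag bits follows the base address register layout in Figure 7.12.

#include <stdint.h>

/* Hypothetical configuration-space accessors supplied by the platform. */
extern uint32_t pci_config_read32(int bus, int slot, int func, int offset);
extern void     pci_config_write32(int bus, int slot, int func, int offset, uint32_t val);

/* Probe one base address register and return the size of the space it requests. */
static uint32_t probe_bar_size(int bus, int slot, int func, int bar_offset)
{
    uint32_t original = pci_config_read32(bus, slot, func, bar_offset);

    pci_config_write32(bus, slot, func, bar_offset, 0xFFFFFFFFu);  /* write all 1s */
    uint32_t probe = pci_config_read32(bus, slot, func, bar_offset);
    pci_config_write32(bus, slot, func, bar_offset, original);     /* restore      */

    if (probe == 0)
        return 0;                               /* BAR not implemented              */

    uint32_t mask = (probe & 0x1u)              /* bit 0 = 1: PCI I/O space         */
                  ? (probe & ~0x3u)             /* I/O: drop the two low flag bits  */
                  : (probe & ~0xFu);            /* memory: drop the four flag bits  */

    return ~mask + 1u;                          /* requested size, a power of two   */
}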
7.3.2
System Devices Install and Configure Routine
For an industrial control system, the Device Install and Configure Routine resides on the respective microprocessor-unit. This means that
each microprocessor-unit board or chipset must have its own Device Install and Configure Routine to handle the devices directly controlled by this microprocessor. However, even in the same control system, the Device Install and Configure Routine for one microprocessor-unit board or chipset may be different from the Device Install and Configure Routines for other microprocessor-units. The Device Install and Configure Routine for a microprocessor-unit board or chipset has the following responsibilities: (1) detecting the existence of each device and keeping the detection results in some memory in the control system; (2) allocating PCI I/O and PCI Memory Address space to each existing device; (3) initializing three items for each existing device: the PCI I/O registers, the PCI Memory Address registers, and the Configuration Header; and (4) initializing the Device Driver in the Operating System code for each existing device. The Device Install and Configure Routine for a microprocessor-unit board or chipset is largely realized with its electronic hardware. For an industrial control system working with the PCI mechanism, the PCI initialization code of this board governs the installation and configuration of the devices resident on a microprocessor-unit board or chipset. The PCI initialization code, as elucidated in detail in Section 7.3.1, includes these three parts: (1) the PCI Device Driver code, (2) the PCI BIOS, and (3) the PCI Firmware. Section 7.3.1 has already introduced the procedure and the methodology for the PCI initialization code to install and configure the PCI devices, so this subsection does not repeat these topics. The Device Install and Configure Routine should be executed as a part of the Power-on process. When the Power-on process proceeds to the PCI initialization, the Device Install and Configure Routine is executed as part of it. However, as mentioned in Section 7.3.1, whether or not the PCI initialization can be completed during the booting depends on whether or not the microprocessor controlling the devices is an Intel processor. For an Intel based system, the system BIOS fully configures the PCI system during the booting. For non-Intel based systems, further configuration needs to be performed after the booting.
7.3.3
System Configure Routine
Modern industrial control systems are usually distributed systems, consisting of more than one microprocessor-unit board or chipset. Each
microprocessor-unit in a distributed control system has some devices directly connected. Figure 7.1 gives an illustration of this kind of control system that comprises several microprocessor-units, each of them directly connecting with some devices. For a distributed control system, a motherboard is necessary to coordinate the whole system. It is obvious that the Device Install and Configure Routine given in Section 7.3.2 does not gather all the devices in an industrial control system, because each microprocessor-unit board or chipset has its own Device Install and Configure Routine, which is just for installing and configuring the devices connecting with this microprocessor-unit. We therefore need a system routine to understand all the devices in a distributed control system. The System Configure Routine given in this subsection is the routine responsible for the configuration of all devices in a control system, which sets up and stores a record of the configuration data for all the system’s devices. The System Configure Routine resides in the operating system of the motherboard of an industrial control system. It should be emphasized that the System Configure Routine, unlike the Device Install and Configure Routine, is a pure software program and does not use the system’s BIOS and firmware. In a distributed industrial control system, the operating system of a motherboard microprocessor is in charge of this System Configure Routine. This routine can be part of the motherboard’s operating system programs, or part of the motherboard’s application programs. System Configure Routine, although normally being executed during the Power-on process, can be flexibly run at any time. However, this SCR must be run after all the microprocessor-units complete their respective Device Install and Configure Routines during the Power-on process. As given in Fig. 7.3, this routine roughly follows these steps: (1) The motherboard starts to run this routine after it ensures all the microprocessorunits of the system have done their respective Device Install and Configure Routines. (2) Once starting this routine, the motherboard sends a message to the first connected microprocessor-unit to ask for the configure data that contain all the devices connecting with this microprocessor-unit and its children microprocessor-units. (3) The motherboard repeats this step until it exhausts all the connected microprocessor-units and all the system’s devices.
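The gathering loop described above might look like the following C sketch. The message-passing helpers and the unit_config type are hypothetical; a real distributed system would use its own inter-board communication services.

#define MAX_UNITS   16   /* assumed number of connected microprocessor-units */

typedef struct {
    int unit_id;
    int device_count;
    /* per-device attributes and parameters would follow in a real system */
} unit_config;

/* Hypothetical inter-board communication helpers. */
extern int  request_config_data(int unit_id, unit_config *out);  /* 0 on success */
extern void store_system_configuration(const unit_config *cfg, int count);

/* System Configure Routine: collect configuration data from every unit. */
void system_configure_routine(int connected_units)
{
    unit_config table[MAX_UNITS];
    int collected = 0;

    for (int id = 0; id < connected_units && collected < MAX_UNITS; id++) {
        if (request_config_data(id, &table[collected]) == 0)   /* ask this unit   */
            collected++;                                       /* record its data */
        /* a failed request would be reported to the error handling service */
    }
    store_system_configuration(table, collected);   /* keep the system-wide record */
}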
7.4 Diagnostic Routines
The need for ever-increasing industrial productivity makes it essential to increase the degree of industrial control. To achieve productive
and economical usage of an industrial control system, operators must be able to rapidly recognize and eliminate the causes of faults to minimize operation stoppages. This is only possible with powerful diagnostic routines in the control system. A differentiation is made between two types of diagnostics applied in industrial control:
(1) Process diagnostics. The detection and elimination of faults in the production process, that is, outside the control system. These faults could result from deficiencies in the production process, such as a wrong value of a control parameter or bugs in the processing programs.
(2) System diagnostics. The localization and elimination of faults in the industrial control system. The control system faults are the hardware component faults generated at its control level (also called the master level), its interface level, and its slave level. System diagnostics is of particular significance because it is always required.
In this section, Process diagnostics is discussed in Section 7.4.3. System NVM Read and Write Routines and System diagnostics are discussed in Sections 7.4.2 and 7.4.6, with emphasis on the detection and calibration of the end devices at the slave level, because the diagnostics at the Control level (also called the Master level) have been discussed in Sections 7.2.3 and 7.3. The other three subsections of this section, 7.4.1, 7.4.4, and 7.4.5, cover the adjunct routines to the diagnostics operations.
7.4.1
System Hardware Requirements
Industrial control systems are increasingly based on distributed I/O in conjunction with the interfaces between the controllers and the end devices. In an industrial control system, the gap between the controllers and the end devices can be bridged either by a FIELDBUS interface or by an Actuator-Sensor Interface (AS-I). Figure 3.1 shows that there are two ways for the controllers to perform the control of the end devices in industrial control.
(1) In the first way, shown on the left side of Fig. 3.1, the actuator-sensor level is required above the slave level to drive the end devices or to measure the statuses of the end devices. Owing to the existence of the actuator-sensor level, the actuator-sensor interface (AS-I) is required as the direct connection between the controller and the actuator-sensor level, to load control commands onto the actuator-sensor level.
(2) In the second way, shown in the right column of Fig. 3.1, the actuator-sensor level is not required, so the AS-I is not needed. Control is applied to the end devices simply through the FIELDBUS systems that have been discussed in Section 3.2.1.
7.4.2
Device Component Test Routines
Industrial control systems are not always able to perform normal operations, so diagnostic routines are required. Most failure cases of an industrial control system are attributed to some of its components being out of order. Industrial control systems use the Device Component Test Routines, also called Components Control Routines, to investigate the validity of device components. The validity of device components indicates whether the conditions and statuses of the device components are adequate to accomplish the required functionality in the control system where they are located. For testing the validity of a device component, Component Test Routines investigate these questions: (1) Can this component correctly communicate with its master controllers in both directions? (2) Can this component work according to the commands from its master controllers with exact operations? (3) Can this component properly monitor its slave devices in response to its master controllers?
It is crucial that before starting a diagnostic routine, an industrial control system should change its system mode into a diagnostic mode, in which only a single session dedicated to the diagnostic routine is allowed in the control system, so that the microprocessors are able to execute this routine without interrupts and the diagnostic results are not interfered with. This topic of changing system modes is the subject of Section 7.4.5. To issue the Device Component Test Routines, the motherboard's application programs establish a table containing all the device components to be tested in the control system. Each component to be tested with these routines should be identified by an ID code that uniquely identifies the component. The component ID should be specific enough to discriminate any one component from the others. In general, Device Component Test Routines, once started, follow the procedure given below: (1) Change the system mode into a diagnostic mode, (2) Start one Device Component Test Routine, (3) Select one component from the system's Component Table that is able to be tested with this Device Component Test Routine,
(4) Send the command to the master controller of the selected component to start this component test, (5) Communicate with the master controller of the selected component for the test result and display the results if allowable, (6) Send the command to the master controller of the selected component to stop this component test, (7) Select another component from the system’s Component Table that is able to be tested with the corresponding Device Component Test Routine, and repeat the above steps to test…, (8) Stop this Device Component Test Routine, (9) Change the system mode from the current diagnostic mode back to a normal mode.
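The nine-step procedure maps naturally onto a small driver loop, as in the C sketch below. The helper functions standing in for the mode change and the master-controller commands are assumptions made for illustration.

#include <stdbool.h>

/* Hypothetical services provided by the control system's software. */
extern void enter_diagnostic_mode(void);
extern void return_to_normal_mode(void);
extern int  component_table_size(void);
extern int  component_id_at(int index);
extern void send_start_test(int component_id);
extern bool fetch_test_result(int component_id);   /* true if the component passed */
extern void send_stop_test(int component_id);
extern void display_result(int component_id, bool passed);

/* Run the Device Component Test Routine over every entry in the component table. */
void device_component_test_routine(void)
{
    enter_diagnostic_mode();                        /* (1) switch to diagnostic mode */

    for (int i = 0; i < component_table_size(); i++) {
        int id = component_id_at(i);                /* (3)/(7) pick next component   */
        send_start_test(id);                        /* (4) ask its master controller */
        bool passed = fetch_test_result(id);        /* (5) collect the test result   */
        display_result(id, passed);
        send_stop_test(id);                         /* (6) stop this component test  */
    }

    return_to_normal_mode();                        /* (9) back to a normal mode     */
}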
7.4.3
System NVM Read and Write Routines
Nonvolatile memory (NVM) is a special physical medium that retains its stored contents without electric power. NVM exists in almost all industrial control systems to keep important parameters and working data such as devices' physical and mechanical attributes, system installation and configuration parameters, programmed timer values, and so on. Industrial control systems take advantage of the NVM to be established quickly and precisely at Power-on as well as to execute software processes correctly. Checking and modifying the NVM attributes is one of the important measures for finding the root cause when a software process crashes. The NVM Read and Write Routines are designed to help with Process diagnostics. An industrial control system can have more than one NVM, each of them being connected through either a PCI bus or an internal bus to one microprocessor. Through the bus system, a microprocessor can communicate with the corresponding hardware controller of the respective NVM to accomplish the read or write operation. To issue the NVM Read and Write Routines, the application programs of the microprocessor-unit board linked with the NVM hardware controller establish a list of the NVM elements (called the NVM attributes list) to be tested. Each of the NVM elements in this list should be identified by a unique ID that distinguishes it from the others. In general, the NVM Read and Write Routines, once started, follow the procedure given below: (1) Change the system mode into a diagnostic mode, (2) Start either the NVM Read Routines or the NVM Write Routines,
(3) Select one NVM element by its ID from the system's NVM attributes list, (4) Send the command to the hardware controller of the NVM to start this read or write operation, (5) Communicate with the hardware controller of the NVM for the result of reading or writing and display the results if allowable, (6) Send the command to the hardware controller of the NVM to stop this read or write operation, (7) Select another NVM attribute and repeat the above steps to test…, (8) Stop either NVM Read Routines or NVM Write Routines, (9) Change the system mode from the current diagnostic mode back to a normal mode.
7.4.4
Faults/Errors Log Routines
It is inevitable that an industrial control system will occasionally and partially malfunction due to (1) the physical constraints of the electronics hardware, (2) the material deficiencies of machinery systems, (3) the code bugs in software programs, and (4) incorrect operations by a user or an administrator. All the root causes resulting in the system's malfunctions are categorized as faults or errors. When a fault or an error occurs, the control system may lose some services or go down, depending on the scale and degree of the fault's impact. Some industrial control systems create a special system mode, called the degraded mode, to represent the system state and to deal with the malfunction statuses after faults or errors are generated. The degraded mode is covered in Section 7.4.5, which gives more about this system mode. While running, a control system may generate faults at any time, which requires special treatment to restore the system. Within a fixed term, the frequencies and the locations of the fault occurrences are important indications of the system performance. The Fault/Error Log Routine given in this subsection is used to record the frequencies and the locations of the generated faults or errors within a fixed term. The fault records from the Fault/Error Log Routines are useful references for the system administrator to analyze the system's performance. Normally, the application programs of an industrial control system keep the record of faults or errors in an NVM or on a disk for preservation. Once a fault or an error occurs while a control system is running, a system application program calls this routine to log this error, by a defined fault ID structure, into an NVM or a disk as a system fault counter. Each of the logged faults and errors is identified with the defined fault ID structure that
may contain the serial number, platform ID, device ID, time and position, and so on. The fault counter or fault record can be checked by the System NVM Read Routine and modified by the System NVM Write Routine if it is stored in a system NVM; both routines were discussed in Section 7.4.3. A typical scenario in which a Fault/Error Log Routine is used is as follows:
(1) While a control system is running, an error occurs;
(2) The system software programs, based on the impact of this error on the system services, decide whether or not to let the system enter the degrade mode, in which some application services are closed and only the remaining services are maintained;
(3) This routine is called to log the generated error into an NVM or a dedicated disk area and to add it to the system fault counter;
(4) If the control system is in degrade mode, the system administrator should be informed so that measures can be taken to handle this fault or error;
(5) After the malfunction resulting from this error is fixed, the system application programs may recover the system and restore the lost services; sometimes this requires restarting the control system.
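The sketch below illustrates one way such a log routine could be written, assuming the fault ID structure holds the fields named above (serial number, platform ID, device ID, time, and position). The JSON-lines file stands in for the NVM or reserved disk area of a real system, and all names are illustrative rather than taken from any particular implementation.

```python
# Illustrative sketch only: the fault ID structure and the log path are
# assumptions; a real system would write to its NVM or a reserved disk area.
import json, time
from dataclasses import dataclass, asdict
from pathlib import Path

FAULT_LOG = Path("fault_log.jsonl")   # hypothetical stand-in for NVM/disk storage

@dataclass
class FaultRecord:
    serial_number: int    # running number of the fault
    platform_id: str
    device_id: str
    position: str         # e.g., module or routine where the fault was detected
    timestamp: float

def log_fault(platform_id: str, device_id: str, position: str) -> FaultRecord:
    """Append one fault record; the record count doubles as the fault counter."""
    count = sum(1 for _ in FAULT_LOG.open()) if FAULT_LOG.exists() else 0
    record = FaultRecord(count + 1, platform_id, device_id, position, time.time())
    with FAULT_LOG.open("a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

def fault_counter() -> int:
    """The value a System NVM Read Routine would report for the fault counter."""
    return sum(1 for _ in FAULT_LOG.open()) if FAULT_LOG.exists() else 0

if __name__ == "__main__":
    log_fault("platform-A", "motor-drive-3", "speed-loop")
    print("faults logged so far:", fault_counter())
```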
7.4.5
Change System Mode Routines
In industry, "System" stands for an assemblage of the machinery, hardware, and software that reside in a group of equipment or devices, and "Mode" represents a state or status of the system in some circumstance. These lead us to the term "System Modes" for the set of all modes of a system. The system modes of a microprocessor-unit board or chipset have been clearly defined in computer technology. Therefore, the system modes of an industrial control system having one microprocessor-unit are simply the system modes of that microprocessor-unit board or chipset. However, for a distributed control system having more than one microprocessor-unit, the system mode is harder to define because, at any instant, the different microprocessor-units may be in different modes. The accepted convention in industrial control is that the system modes of a distributed control system are defined by the system modes of the main microprocessor-unit board (the motherboard). In any industrial control system there exists an enumerable set of modes that is defined as its System Modes. At any instant, an industrial control
system must be in one of its system modes. In response to certain events, an industrial control system changes from one mode to another.
7.4.5.1
System Modes List
For an industrial control system, its system modes exist in three different levels: (1) the Kernel level, (2) the Operating System level, and (3) the Application System level. Each of the three levels hosts one or more modes, which are associated with each other closely. (1) Kernel level modes. Kernel Level modes are inherent in a microprocessor-unit and are dominated by the firmware of a microprocessor-unit. Every microprocessor-unit has a mode bit in one of the CPU registers for keeping its current Kernel Level mode. There are two modes at this level: (a) Super mode. This mode is the representation that a microprocessor-unit is booting itself (initializing the buses, memories, I/O ports, etc.). This mode is terminated once the microprocessor completes the booting and transfers the control from its firmware into its operating system software. (b) User mode. This mode is set once the microprocessor-unit completes its booting and is kept thereafter unless the control system is powered off. This mode is the representation that control has been submitted to the operating system software. (2) Operating system level modes. On a microprocessor-unit board or chipset, the operating system level modes coexist with the user mode of the kernel level and are kept when the operating system software is taking over the system control. This level has three modes: (a) Normal mode. This mode represents that the current system status is normal, and the main microprocessor-unit of a control system is fully loaded with the application code managed by the multitasking mechanism of the operating system. (b) Idle mode. This is the mode when a control system is idling. In this mode, the main microprocessor-unit is running an idle task cycling a nap loop. However, all the clocks and watchdogs on the main microprocessor-unit board or chipset are still running. The control system can return to normal mode within a few nanoseconds. Cache coherency is maintained in this level of idle. (c) Sleep mode. This mode comes up when the control system is completely idle or nap mode, with only the DRAM state preserved for quick recovery. All the microprocessors in the
distributed control system are powered off, with their state preserved in DRAM. All clocks in the control system are suspended except for its wall clock. (3) Application system level modes. All the application system level modes must coexist with the user mode of the kernel level and with the operating system level modes. These modes are used in the application system software only for the management of the application service processes. This level has four modes: (a) Running mode. In this mode, all microprocessors in the control system are working in the normal mode of their respective operating systems and are fully loaded with the application processes. This mode is kept as long as nothing in the control system is malfunctioning, so that the control functionality executes normally. (b) Degrade mode. If some microprocessors or devices go down, or errors occur in some application system software, the control system enters a partial malfunction. In these cases, the application system software of the main microprocessor-unit (or another working microprocessor) sets itself (and the other microprocessor-units' application system software) into the degrade mode so as to maintain the execution of the available application service processes. However, even though the application system level is in degrade mode, the operating systems of the main and other microprocessor-units may still be in their normal mode. (c) Diagnostics mode. When diagnostics routines are being executed, a control system normally stops its control functions. This causes the microprocessors to enter the idle mode of their respective operating systems. To match the operating system's idle mode, the relevant application systems enter the diagnostics mode accordingly. (d) Power-save mode. When the main and other microprocessor-units are in sleep mode, their application system programs are placed in the corresponding power-save mode. The system wakes once the main microprocessor-unit's operating system changes from sleep mode back to normal mode. Table 7.2 gives the correspondence between the modes of these three levels.

Table 7.2 Correspondences between Three Levels of System Modes

Kernel level    Operating system level    Application system level
Super mode      (none; booting)           (none)
User mode       Normal mode               Running mode, Degrade mode
User mode       Idle mode                 Diagnostics mode
User mode       Sleep mode                Power-save mode
7.4.5.2
System Modes Transition
System mode transitions are only permitted between the modes on the same level of the same microprocessor-unit. This means that one of the
kernel level modes can only be changed into another kernel level mode of the same microprocessor-unit; one of the operating system level modes can only be changed into another operating system level mode of the same microprocessor-unit; and one of the application system level modes can only be changed into another application system level mode of the same microprocessor-unit. (1) Between the super mode and the user mode at the kernel level. Once powered on, any microprocessor-unit in a control system immediately starts booting in the super mode. Once booting completes, the microprocessor immediately switches context to the operating system and changes the kernel mode from super mode to user mode. The user mode is then kept until an interrupt occurs, at which point it switches back to the super mode. When the service routine of an interrupt terminates, the microprocessor-unit moves the kernel mode from the super mode back to the user mode to resume the previously interrupted application processes. (2) Between the idle mode and the normal mode at the operating system level. If all the tasks or threads are terminated, a microprocessor-unit goes back to an idle task cycling a nap loop. In this case, after several seconds of inactivity, the microprocessor sets its operating system mode to idle mode. An industrial control system is in the idle mode if and only if its main microprocessor-unit is in the idle mode. If an interrupt reaches the microprocessor-unit while it is in idle mode at its operating system level, it immediately returns to normal mode. (3) Between the sleep mode and the normal mode at the operating system level. Some industrial control systems require that, if the main microprocessor-unit has been in the idle mode beyond a time limit, the system changes into the sleep mode at its operating system
level and into the power-save mode in its application programs. This is coordinated by the main microprocessor of the control system. The sleep mode is kept until a client call or an interrupt arrives, at which point the main microprocessor wakes up and the system returns to normal mode in due course. (4) Between the modes at the application system level. The transition mechanism between the application system level modes differs from one control system to another, and each control system should have its own transition design between these modes at the application system level.
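The following sketch shows how the three mode levels, the Table 7.2 correspondence, and the same-level transition rule could be encoded. The enumeration names simply mirror the modes listed above; the per-microprocessor-unit bookkeeping and the system-specific application-level transition design are omitted.

```python
# Illustrative sketch: encodes the same-level transition rule and the
# level correspondences of Table 7.2; mode names follow the text above.
from enum import Enum

class KernelMode(Enum):
    SUPER = "super"
    USER = "user"

class OsMode(Enum):
    NORMAL = "normal"
    IDLE = "idle"
    SLEEP = "sleep"

class AppMode(Enum):
    RUNNING = "running"
    DEGRADE = "degrade"
    DIAGNOSTICS = "diagnostics"
    POWER_SAVE = "power_save"

# Application modes that may coexist with each operating system mode (Table 7.2).
APP_MODES_FOR_OS_MODE = {
    OsMode.NORMAL: {AppMode.RUNNING, AppMode.DEGRADE},
    OsMode.IDLE:   {AppMode.DIAGNOSTICS},
    OsMode.SLEEP:  {AppMode.POWER_SAVE},
}

def transition_allowed(current, target) -> bool:
    """Transitions are only permitted between modes on the same level
    (the same-microprocessor-unit condition is not modeled here)."""
    return type(current) is type(target)

if __name__ == "__main__":
    print(transition_allowed(OsMode.NORMAL, OsMode.IDLE))              # True: same level
    print(transition_allowed(OsMode.NORMAL, AppMode.RUNNING))          # False: cross level
    print(AppMode.DIAGNOSTICS in APP_MODES_FOR_OS_MODE[OsMode.IDLE])   # True
```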
7.4.6
Calibration Routines
Calibration is a vital operation demanded in all industries. This subsection focuses on the calibrations specifically used in industrial control.
7.4.6.1
Calibration Fundamentals
The accuracy of all the electronic components used in industrial control systems drifts over time. Environmental conditions increase this drift, which causes growing errors in the control functionality carried out within control systems. At some point the drift causes an industrial control system to malfunction partially or totally, or even to crash. To correct for this drift, all the components and devices in an industrial control system must be calibrated at regular intervals as defined by the manufacturer. Industrial control systems require the three types of calibration given below. (1) System calibration. System calibration is designed to quantify and compensate for the total measurement error in an industrial control system. Cable losses, condition changes, and sensor errors may all induce measurement error. By applying known inputs to an industrial control system and reviewing the resulting measurements, an error model is developed to represent the error in the system. This error model can be as simple as a lookup table of input versus output values, or as detailed as a polynomial (a sketch of such an error model follows this list). Once this error model is developed, it can be applied to all measurements made by the same industrial control system. Computer-based data acquisition and instrumentation hardware is ideal for this type of compensation because, unlike traditional box instruments, with computer-based hardware the
software application that defines the measurement functionality is developed by the user. In this way, the error compensation and control system calibration can easily be built into this application software. (2) External calibration. When an instrument's time in service reaches its specified calibration interval, it should be returned to the manufacturer, or to a suitable metrology laboratory or agency, for a calibration service. The instrument's measurements are compared against external standards of known accuracy. If the results of the measurements do not fall within certain specifications, adjustments are made to the measurement circuitry. In general, external calibration includes the following: (1) evaluation of the instrument's capabilities to determine whether it operates within specifications, (2) adjustment of the measurement circuitry and onboard signal references if the instrument does not operate within specifications, (3) verification that the instrument operates within specifications, and (4) issuance of a calibration certificate stating that the instrument measures within specifications when compared with a traceable standard. Routine performance of external calibration ensures the accuracy of the measurements made. (3) Self-calibration. Self-calibration is a method whereby an instrument uses onboard signal references instead of external references to adjust its measurement accuracy. During a self-calibration, the instrument measures the onboard references and adjusts its measurement capabilities to account for changes in accuracy owing to environmental effects such as temperature, humidity, light, color, and so on. Self-calibration does not replace external calibration: external calibration must still be performed to quantify the references used during self-calibration. Used together, self-calibration and external calibration ensure the measurement accuracy of the instruments. Measurement products from some suppliers contain highly stable signal references to maintain traceability and facilitate self-calibration; through simple software function calls, such an instrument can maintain its top measurement performance through self-calibration.
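Referring back to the error model described under system calibration, the sketch below fits a simple polynomial error model from a set of known inputs and the values the system actually measured, then applies the correction to a new reading. The numerical values are invented for illustration only.

```python
# Illustrative sketch: builds a simple polynomial error model from known
# reference inputs and the values the system actually measured, then applies
# the correction to new readings. The numbers are made up for the example.
import numpy as np

known_inputs     = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # applied standards
measured_outputs = np.array([0.3, 10.1, 19.6, 29.4, 39.0, 48.7])   # what the system reported

# Fit measured -> true as a low-order polynomial (a lookup table would also do).
coeffs = np.polyfit(measured_outputs, known_inputs, deg=2)
error_model = np.poly1d(coeffs)

def corrected(reading: float) -> float:
    """Apply the error model to a raw measurement."""
    return float(error_model(reading))

if __name__ == "__main__":
    raw = 24.5
    print(f"raw reading {raw:.2f} -> corrected {corrected(raw):.2f}")
```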
7.4.6.2
Calibration Principles
The goal of calibration is to quantify and to improve the measurement accuracy of the component or instrument used in industrial control. The
principles for maintaining appropriate calibration of industrial control systems are the following: (1) Frequency. To ensure the functionality of an industrial control system, some of its components must be adjusted periodically while the system is running. Errors found should be reported to the master controllers of the affected components, or to the system's motherboard, so that calibration routines can be executed. For example, the scanner of a modern copy machine is calibrated every 1–5 min while the machine is running, depending on whether the scanner is color or black/white. The calibration frequency, or calibration interval, is a key factor in ensuring the performance of an industrial control system. If an industrial control system has components requiring periodic calibration while the system is running, its application programs must be designed to schedule the calibration processes accordingly. In addition to the calibrations carried out while the system is running, the time interval for recalibrating the other components while the system is at rest must be determined. The initial considerations for this kind of interval include (1) the manufacturers' recommendations, (2) the accuracy sought, and (3) environmental influences. To set this interval appropriately, the documentation of the components should be checked for the recommended calibration intervals. (2) Accuracy. The result of any measurement is only an approximate estimate of the "real" value being measured. In truth, the real value can never be measured perfectly, because there is always some physical limit to how well any quantity can be measured. For example, a heat sterilization temperature may be determined in the laboratory and then monitored in the manufacturing area. Instruments used in either or both locations may indicate temperature to a small fraction of a degree. If either or both, however, are not correctly calibrated or are subject to drift, the process may fail because of inaccuracy. Precision often brings a false sense of accuracy. "Standards" of physical properties such as temperature, pressure, speed, torque, light intensity, and color scales are the key to assurance of accuracy. Two types are important: primary standards and secondary (reference) standards. "Reference" is a term used in two ways in the measurement of physical values. First, it describes the process of comparing the reading of one instrument with another, most commonly comparing the indication of an instrument being calibrated with the "known" physical value of
a primary standard material or measurement meter. Second, it is used to describe a measurement meter itself, to decide whether it is a "master reference" or a "secondary reference." Either way, the term "reference" refers to the comparison process by which correct calibration is assured. Many systems average the returned data and report the average as the measurement. To determine the statistical uncertainty, the standard deviation of all the measurements must be taken and included as part of the overall accuracy of the measurement (a short sketch follows this list). In metrology, these uncertainties are referred to as Type A and Type B. Type A uncertainties are those evaluated by statistical methods; Type B uncertainties are systematic (gain, offset, etc.). (3) Traceability. Traceability is the unbroken chain of comparisons between a measurement device and national or international standards. A different legal metrology authority exists for each country. These bodies follow the guidelines defined by the international metrology body and its associated committees to provide quality measurement standards for their country. The National Metrology Institute (NMI) of each member country of the Convention of the Meter also participates in the Mutual Recognition Agreement (MRA). These international and national metrology organizations serve industry with (1) calibration services, including standards and codes; (2) calibration tools, including hardware and software; and (3) validation of calibration certificates.
These topics regarding the traceability are beyond the scope of this book, and are not mentioned further here.
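As a small illustration of the Type A computation mentioned under accuracy above, the sketch below reports the mean of a set of repeated readings together with the standard uncertainty of that mean; the readings themselves are invented.

```python
# Illustrative sketch: Type A (statistical) uncertainty from repeated readings.
# The readings are invented; a real routine would pull them from the instrument.
from statistics import mean, stdev
from math import sqrt

readings = [121.02, 120.98, 121.05, 120.99, 121.01, 121.03, 120.97, 121.00]

avg = mean(readings)
s = stdev(readings)                 # sample standard deviation
u_type_a = s / sqrt(len(readings))  # standard uncertainty of the reported mean

print(f"reported value: {avg:.3f}")
print(f"Type A uncertainty of the mean: {u_type_a:.4f}")
# A Type B (systematic) contribution, e.g. from a calibration certificate,
# would be combined with u_type_a, typically in quadrature.
```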
7.4.6.3
Calibration Methodologies
In an industrial control system, calibration is the comparison of a physical measurement of a component against a standard of known accuracy. When a dynamic calibration is performed while the system is running, the standards, which in many cases are simply numbers, are stored in the system's memories such as the NVM. The application programs of the control system may use the result of the comparison between the measurement and the stored standard values to detect whether errors have occurred. If a given calibration finds no error, the application program continues to run the control process. For each component that requires periodic calibration, the application program sets up a timer to govern this periodic
calibration. When this timer expires, the application programs notify the master controller's microprocessor to execute the calibration again, and so on. Therefore, assuring the accuracy of a calibration basically involves two steps: (1) comparison and (2) periodic checking, as sketched below.
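A minimal sketch of this comparison-and-periodic-check pattern is shown here. The stored standard, the tolerance, the measurement source, and the timer period are all invented placeholders; a real routine would read the standard from NVM and notify the master controller over the system bus.

```python
# Illustrative sketch: compare a measurement with a stored standard each time
# the calibration timer expires. All values and sources are invented.
import random, time

STORED_STANDARD = 75.00     # reference value kept in NVM
TOLERANCE = 0.50            # acceptable deviation before an error is reported
CAL_INTERVAL_S = 1.0        # calibration timer period (seconds, for the demo)

def measure() -> float:
    """Stand-in for reading the component being calibrated."""
    return STORED_STANDARD + random.uniform(-0.8, 0.8)

def calibration_check() -> bool:
    """Compare a measurement with the stored standard; True means within tolerance."""
    value = measure()
    ok = abs(value - STORED_STANDARD) <= TOLERANCE
    print(f"measured {value:.2f}, standard {STORED_STANDARD:.2f}, {'OK' if ok else 'ERROR'}")
    return ok

if __name__ == "__main__":
    for _ in range(3):              # three timer expirations
        if not calibration_check():
            print("notify master controller: recalibration required")
        time.sleep(CAL_INTERVAL_S)  # wait for the calibration timer to expire
```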
7.5 Simulation Routines
Most control system designs rely on modeling of the system to be controlled. This allows simulation studies to be carried out to determine the best control strategies to implement and also the best system parameters for good industrial control. Simulation studies also allow complex what-if scenarios to be explored, which may be difficult to do on the real system. However, it must always be borne in mind that any simulation results obtained are only as good as the model of the process. This does not mean that every effort should be made to get the model as realistic as possible, just that it should be sufficiently representative. Creating a model of a dynamic industrial process, whether it is a production, business, or software process, requires the salient features to be described in a form that can be analyzed. Computer simulation is a fast and flexible tool for generating models, either for current systems where modifications are planned or for completely new systems. Running the simulation predicts the effect of changing system parameters, provides information on the sensitivity of the system, and helps to identify an optimum solution for specific operating conditions. The most important topics addressed in modeling and simulation for industrial control include the following: (1) Discrete event simulation is applicable to systems whose state changes abruptly in response to some event in the environment, examples being service facilities such as queues at bank counters and factory production lines (see the sketch after this list). (2) Modeling for real-time systems software is used to define the requirements and high-level software design before the implementation stage is attempted. (3) Continuous time simulation is used to compute the evolution in time of physical variables such as speed, temperature, voltage, and so on, in systems such as robots, chemical reactors, electric motors, and aircraft. (4) Control systems' rapid prototyping involves moving from the design of a controller to its implementation as a prototype.
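As a small illustration of the first topic, the sketch below runs a discrete event simulation of a single-server queue, such as one bank counter or one machine on a production line, and reports the average waiting time. Arrival and service statistics are invented for the example.

```python
# Illustrative sketch of discrete event simulation for a single-server queue.
# Arrival and service rates are invented for the example.
import random

random.seed(1)
ARRIVAL_MEAN = 4.0    # mean time between arrivals
SERVICE_MEAN = 3.0    # mean service time
N_CUSTOMERS = 10_000

clock = 0.0
server_free_at = 0.0
total_wait = 0.0

for _ in range(N_CUSTOMERS):
    clock += random.expovariate(1.0 / ARRIVAL_MEAN)      # next arrival event
    start = max(clock, server_free_at)                   # wait if the server is busy
    wait = start - clock
    service = random.expovariate(1.0 / SERVICE_MEAN)
    server_free_at = start + service                     # departure event
    total_wait += wait

print(f"average wait over {N_CUSTOMERS} arrivals: {total_wait / N_CUSTOMERS:.2f}")
```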
7.5.1 Modeling and Simulation
7.5.1.1 Process Models
Any system or process can be described with a model of that system. In terms of control requirements, the model must contain information that enables the prediction of the consequences of changing the process operating conditions. Within this context, a model is a physical, mathematical, or other logical representation of a system, process, or phenomenon. Models allow the effects of time and space to be scaled and properties to be extracted, and hence allow simplification, retaining only those details relevant to the problem. The use of models therefore reduces the need for real experimentation and facilitates the achievement of many different purposes at reduced cost, risk, and time. Depending on the task, different model types will be employed. Process models are categorized as shown in Fig. 7.13.
Figure 7.13 Classification of the process models: process models divide into mathematical models (mechanistic models, with lumped or distributed parameters, each either linear or nonlinear, and black box models, comprising linear forms such as transfer functions and time series and nonlinear forms such as time series and neural network models), qualitative models (fuzzy logic models and qualitative transfer functions), and statistical models (probabilistic and correlation models).
(1) Mathematical models. As specified in Fig. 7.13, mathematical models include all of the following: (a) Mechanistic models. If a process and its characteristics are well defined, a set of differential equations can be used to describe its dynamic behavior. This is known as the development of mechanistic models. The mechanistic model is usually derived from the physics and chemistry governing the
process. Depending on the system, the structure of the final model may be either a lumped parameter or a distributed parameter representation. Lumped parameter models are described by ordinary differential equations, whereas distributed parameter representations require the use of partial differential equations. Nevertheless, a distributed model can be approximated by a series of ordinary differential equations, given simplifying assumptions. Both lumped and distributed parameter models can be further classified into linear or nonlinear descriptions. Usually nonlinear, the differential equations are often linearized to enable tractable analysis. In many cases, typically because of financial and time constraints, mechanistic model development may not be practically feasible. This is particularly true when knowledge about the process is initially vague or when the process is so complex that the resulting equations cannot be solved. Under such circumstances, empirical or black box models may be built using data collected from the plant. (b) Black box models. Black box models are simply functional relationships between system inputs and system outputs. By implication, black box models are lumped parameter models. The parameters of these functions do not have any physical significance in terms of equivalence to process parameters such as heat or mass transfer coefficients, reaction kinetics, and so on. This is the disadvantage of black box models compared with mechanistic models. However, if the aim is merely to represent faithfully some trends in process behavior, then the black box modeling approach is just as effective. As shown in Fig. 7.13, black box models can be further classified into linear and nonlinear forms. In the linear category, transfer function and time series models predominate. Given the relevant data, a variety of techniques may be used to identify the parameters of linear black box models; the most common are least-squares based algorithms (a least-squares identification sketch follows this classification). Within the nonlinear category, time-series structures are found together with neural network based models. Such functions are often still linear in their parameters and thus facilitate identification using least-squares based techniques. The use of neural networks in model building has grown with the availability of cheap computing power and powerful theoretical results.
(2) Qualitative models. There are some cases in which the nature of the process precludes mathematical description, for example, when the process is operated in distinct operating regions or when physical limits exist. This results in discontinuities that are not amenable to mathematical description. In such cases, qualitative models can be formulated. The simplest form of qualitative model is the rule-based model, which makes use of IF–THEN–ELSE constructs to describe process behavior. These rules are elicited from human experts. Alternatively, genetic algorithms and rule induction techniques can be applied to process data to generate these descriptive rules. More sophisticated approaches make use of qualitative physics theory and its variants. These latter methods aim to rectify the disadvantages of purely rule-based models by invoking some form of algebra so that the precision of mathematical modeling approaches can be approached. Of these, qualitative transfer functions appear to be the most suitable for process monitoring and control applications; they retain many of the qualities of quantitative transfer functions that describe the relationship between an input and an output variable, particularly the ability to embody temporal aspects of process behavior. The technique was conceived for applications in the process control domain. Cast within an object framework, a model is built up of smaller subsystems connected together as in a directed graph. Each node in the graph represents a variable, while the arcs that connect the nodes describe the influence or relationship between the nodes. Overall system behavior is derived by traversing the graph from input sources to output sinks. Fuzzy logic can also be used to build qualitative models. Fuzzy logic theory contains a set of linguistic constructs that facilitates descriptions of complex and ill-defined systems. Magnitudes of change are quantized as, for example, negative medium or positive large. Fuzzy models are used in everyday life without our being aware of their presence, for example, in washing machines, autofocus cameras, and so on. (3) Statistical models. Describing processes in statistical terms is another modeling technique. Time-series analysis, which has a heavy statistical bias, may be considered to fall into this category. Statistical models do not capture system dynamics. However, in modern control practice they play an important role, particularly in assisting higher-level decision making, process monitoring, data analysis, and, obviously, statistical process control.
Owing to its widespread and interchangeable use in the development of deterministic as well as stochastic digital control algorithms, the statistical approach is made necessary by the uncertainties surrounding some process systems. This technique has roots in statistical data analysis, information theory, games theory, and the theory of decision systems. Probabilistic models are characterized by the probability density functions of the variables. The most common is the normal distribution that provides information about the likelihood of a variable taking on certain values. Multivariate probability density functions can also be formulated, but interpretation becomes difficult when more than two variables are considered. Correlation models arise by quantifying the degree of similarity between two variables by monitoring their variations. This is again quite a commonly used technique, and is implicit when associations between variables are analyzed using regression techniques.
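To make the least-squares identification of a linear black box model concrete, the sketch below generates input/output data from a simple first-order relation y(k+1) = a*y(k) + b*u(k) and recovers the two parameters with an ordinary least-squares fit. The data are synthetic; in practice they would come from plant records.

```python
# Illustrative sketch: identifying a linear black box model y(k+1) = a*y(k) + b*u(k)
# by least squares. The "plant" data are generated here for the example.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.85, 0.40
N = 200
u = rng.uniform(-1.0, 1.0, N)                 # input sequence
y = np.zeros(N)
for k in range(N - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.02 * rng.standard_normal()

# Regression: each row [y(k), u(k)] should predict y(k+1).
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"estimated a = {a_hat:.3f} (true {a_true}), b = {b_hat:.3f} (true {b_true})")
```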
7.5.1.2
Process Modeling
Modeling has been an essential part of process control since the 1970s. However, industrial control requires adequately accurate models, and these are not easy to achieve. As a compromise, data-driven modeling and computational intelligence provide additional modeling alternatives for advanced industrial control applications. These alternatives are illustrated in Fig. 7.14. (1) The phenomenological modeling approach aids the construction and analysis of models whose ultimate purpose is to provide insight into process performance and an understanding of how cooperative phenomena can be manipulated to accomplish design strategies that promote the intensification of processes. Within the fundamentals of the phenomenological modeling approach that guide the model structuring, the process tasks are decomposed into the relevant phases and physicochemical phenomena involved, identifying the connections and influences among them and evaluating the individual rates. The intensification of rate-limiting phenomena is finally achieved by diverse strategies dealing with the manipulation of driving forces. Empirical models are mostly used for phenomenological modeling to develop a control system. The structure and parameters of empirical models do not necessarily have any physical significance and, therefore, these models cannot be directly adapted to different production conditions.
Figure 7.14 Alternatives of modeling and simulation for advanced control applications. (The figure relates analyses of industrial systems (demands, quality, operating conditions) and control designs (understanding, structure, tuning) to phenomenological modeling (physical, chemical, mathematics), data-driven modeling (identification, parameter estimation, data), computational intelligence (expertise, data, adaptation), and performance evaluation (oscillation, production conditions).)
(2) Data-driven modeling approaches are based on general function estimators of black-box structure, which should capture correctly the dynamics and nonlinearity of the system. The identification procedure, which consists of estimating the parameters of the model, is quite straightforward and easy if appropriate data is available. Essentially, system identification means adjusting parameters within a given model until its output coincides as well as possible with the measured output. Validation is needed to evaluate the performance of the model. The generic data-driven modeling procedure consists of the following three steps. The objective of the first step is to define an optimal plant operation mode by performing a model and control analysis. Basic system information, including the operating window and characteristic disturbances, as well as fundamental knowledge of the control structure, such as degrees of freedom of and interactions between basic control loops, will be obtained. The second step identifies a predictive model using data-driven approaches. A suitable model structure will be proposed based on the extracted process dynamics from operating data, followed by parameter estimation using multivariate statistical techniques. The dynamic partial least squares approach solves the issue of autocorrelation. However, a large number of
lagged variables are often required that might lead to poorly conditioned data matrices. Subspace model identification approaches are suitable to derive a parsimonious model by projecting original process data onto a lower dimension space that is statistically significant. In case a linear model is not sufficient due to strong nonlinearities, neural networks provide a possible solution. The third step is model validation. Independent operating data sets are used to verify the prediction ability of the derived model. (3) Intelligent methods are based on techniques motivated by biological systems and human intelligence, for instance, natural language, rules, semantic networks, and qualitative models. Most of these techniques were already introduced by conventional expert systems. Computational intelligence can provide additional tools since humans can handle complex tasks including significant uncertainty on the basis of imprecise and qualitative knowledge. Computational intelligence is the study of the design of intelligent agents. An agent is something that acts in an environment—it does something. Agents include worms, dogs, thermostats, airplanes, humans, organizations, and society, and so on. An intelligent agent is a system that acts intelligently: What it does is appropriate for its circumstances and its goal, it is flexible to changing environments and changing goals, it learns from experience, and it makes appropriate choices given perceptual limitations and finite computation. The central goal of computational intelligence is to understand the principles that make intelligent behavior possible, in natural or artificial systems. The main hypothesis is that reasoning is computation. The central engineering goal is to specify methods for the design of useful, intelligent artifacts. Modeling is used also on other levels of advanced control: high-level control is in many cases based on modeling operator actions, and helps to develop the intelligent analyzers and software sensors. The products of modeling are models that are used in adaptive control, and in direct modelbased control. Smart adaptive control systems integrate all these features as given in Fig. 7.15. Adaptive controllers generally contain two extra components compared to the standard controllers. The first is a process monitor, which detects the changes in the process characteristics either by performance measure or by parameter estimator. The second is the adaptation mechanism, which updates the controller parameters. In normal operation, efficient reuse of
controllers developed for different operating conditions is good operating practice, as the adaptation always takes time.
Figure 7.15 Features of a smart adaptive control system (intelligent analyzers, intelligent control, and intelligent actuators connected by measurements; dynamic simulation for controller design and prediction; and analyses on production conditions covering system adaptation, fault diagnostics and fixes, and performance and product quality).
An adaptive controller is a controller with adjustable parameters. Traditionally, online adaptation has been considered a main feature of adaptive controllers. However, controllers should also be adapted to changing operating conditions in processes where the changes are too fast or too complicated for online adaptation. Therefore, the scope of adaptation must be expanded; the adaptation mechanism can be either online or predefined. (1) Online adaptation includes self-tuning, autotuning, and self-organization. For online adaptation, changes in process characteristics can be detected through online identification of the process model, or by assessment of the control response, that is, by performance analysis. The choice of performance measures depends on the type of response the control system designer wishes to achieve. Alternative measures include overshoot, rise time, settling time, decay ratio, frequency of oscillations, gain and phase margins, and various error signals. The identification block typically contains some kind of recursive estimation algorithm that tracks the process characteristics at the current instant (a recursive least squares sketch follows this list). Figure 7.16 gives the online adaptation with model identification.
Figure 7.16 Online adaptation with model identification (blocks: design, controller, process, and identification).
The model can be a transfer function, a discrete-time linear model, a fuzzy model, or a linguistic equation model. Adaptation mechanisms rely on parameter estimates of the process model: gain, dead time, and time constant.
Classical adaptation schemes do not cope easily with strong and fast changes unless the adaptation rate is made very high. This is not always possible: some a priori knowledge about the dynamic behavior of a plant or factory should be used. One alternative for these cases is a switching control scheme that selects a controller from a finite set of predefined fixed controllers. Multiple model adaptive control is hence classified as model-based control. Intelligent methods provide additional techniques for online adaptation alongside traditional techniques. Fuzzy self-organizing controllers are one example of intelligent modeling methodologies. Another example is the metarule approach, in which the parameters of a low-level controller are changed by a metarule supervisory system whose decisions are based on the performance of the low-level controller. Metarule modules typically consist of a fuzzy rule base that describes the actions needed to improve the low-level fuzzy logic controller. (2) Predefined adaptation is becoming more popular as the use of modeling and simulation provides flexible methods. Gain scheduling, including fuzzy gain scheduling and linguistic equation based gain scheduling, provides a gradual adaptation technique for a fixed control structure. Predefined adaptation allows the use of very detailed models; for example, distributed parameter models can be used in tuning the adaptation models of a solar powered plant. The resulting adaptation models or mechanisms should be able to handle all these special situations in real-time operation without using the detailed simulation models. Predefined actions give adaptation that is fast enough to remove the need for online identification, or for classical mechanisms based on performance analysis. In these cases, the controller can be called a linguistic equation based gain scheduling controller. The adaptation model is generated from local tuning results, but the directions of the interactions are usually consistent with process knowledge.
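The recursive least squares sketch promised above is given here for the same first-order model structure; it is the kind of recursive estimator an online adaptation scheme could use to track changes in gain and time constant, with a forgetting factor to discount old data. All numerical values are illustrative.

```python
# Illustrative sketch: recursive least squares (RLS) estimation of the model
# y(k+1) = a*y(k) + b*u(k), with a forgetting factor for slowly varying plants.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 0.9, 0.5
lam = 0.98                         # forgetting factor
theta = np.zeros(2)                # current estimate of [a, b]
P = np.eye(2) * 1000.0             # covariance of the estimate

y = 0.0
for k in range(300):
    u = rng.uniform(-1.0, 1.0)
    y_next = a_true * y + b_true * u + 0.01 * rng.standard_normal()

    phi = np.array([y, u])                         # regressor
    K = P @ phi / (lam + phi @ P @ phi)            # gain vector
    theta = theta + K * (y_next - phi @ theta)     # parameter update
    P = (P - np.outer(K, phi @ P)) / lam           # covariance update
    y = y_next

print(f"RLS estimate: a = {theta[0]:.3f}, b = {theta[1]:.3f}")
```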
7.5.1.3
Control Simulation
A control simulation is defined as the reproduction of a situation with the use of process models. For complex control projects, simulation of the process is often a necessary measure for validating the process models and developing the most effective control scheme. Many different kinds of process simulations are used in product development today; in principle they fall into two kinds, comparison and inverse model, as illustrated in Fig. 7.17.
Figure 7.17 Classification of process simulations: (a) comparison, in which the same inputs are applied to the real system process and to the model simulation, and the outputs from the model are compared with the outputs from the real process; (b) inverse model, in which desired outputs are passed through the inverse of the process model to obtain the desired inputs that drive the real system process.
The purpose of modeling and simulating industrial processes can be to create controllers for those processes. A controller is either a software toolkit or an electronic device. The controllers developed are mostly used for performing so-called model-based control. Model-based control is widely applied to industrial applications. The following paragraphs briefly discuss the various algorithms arising from controller designs in model-based control. (1) Feed-forward control. Feed-forward control can be based on process models. A feed-forward controller has been combined with different feedback controllers; even the ubiquitous three-term proportional-integral-derivative (PID) controllers operate for this purpose. A proportional-integral controller is optimal for a first-order linear process without time delays. Similarly, the PID
controller is optimal for a second-order linear process without time delays. The modern approach is to determine the settings of the PID controller based on a model of the process. The settings are chosen so that the controlled responses adhere to user specifications. A typical criterion is that the controlled response should have a quarter decay ratio. Alternatively, it may be desired that the controlled response follows a defined trajectory or that the closed loop has certain stability properties. A more elegant technique is to implement the controller within an adaptive framework. Here the parameters of a linear model are updated regularly to reflect current process characteristics. These parameters are in turn used to calculate the settings of the controller as shown schematically in Fig. 7.18. Theoretically, all model-based controllers can be operated in an adaptive mode. Nevertheless, there are instances when the adaptive mechanism may not be fast enough to capture changes in process characteristics due to system nonlinearities. Under such circumstances, the use of a nonlinear model may be more appropriate for PID controller design. Nonlinear time-series, and neural networks, have been used in this context. A nonlinear PID controller may also be automatically tuned, using an appropriate strategy, by posing the problem as an optimization problem. This may be necessary when the nonlinear dynamics of the plant are time varying. Again, the strategy is to make use of controller settings most appropriate to the current characteristics of the controlled process. (2) Model predictive control. Model predictive control (MPC) is widely adopted in industry as an effective means to deal with large multivariable constrained control problems. The main idea of MPC is to choose the control action by repeatedly solving on line an optimal control problem. This aims at minimizing a performance criterion over a future horizon, possibly subject to
constraints on the manipulated inputs and outputs, where the future behavior is computed according to a model of the plant.
Figure 7.18 Schematic of adaptive controllers (the desired output and the process output are compared by the controller acting on the process; a model builder identifies the process, and the controller parameters are calculated from that model).
Issues arise in guaranteeing closed-loop stability, handling model uncertainty, and reducing online computation. PID-type controllers do not perform well when applied to systems with significant time delays. Model predictive control overcomes the debilitating problems of delayed feedback by using predicted future states of the output for control. Figure 7.19 gives the basic principle of model-based predictive control.
Figure 7.19 The basic working principle of model predictive control (at time t, the manipulated inputs u(t+k) are chosen so that the predicted outputs follow the reference r(t) over a future horizon from t+1 to t+p).
Currently, some commercial controllers offer Smith predictors as programmable blocks. There are, however, many other model-based control strategies with dead-time compensation properties. If there is no time delay, these algorithms usually collapse to the PID form. Predictive controllers can also be embedded within an adaptive framework, and a typical adaptive predictive control structure is shown in Fig. 7.20.
Figure 7.20 Schematic of adaptive predictive controllers (a model builder and a process output predictor supply predicted outputs, from which the controller parameters are calculated).
(3) Physical-model-based control. Control has always been concerned with generic techniques that can be applied across a range of physical domains. The design of adaptive or nonadaptive controllers for linear systems requires a representation of the system to be controlled. For example, observer and state-feedback designs require a state-space representation; polynomial designs require a transfer-function representation. These representations are generic in the sense that they can represent linear systems drawn from a range of physical domains, including mechanical, electrical, hydraulic, and thermodynamic. At the same time, however, these representations suffer from being abstractions of physical systems: the very process of abstracting the generic
features of physical systems means that system-specific physical details are lost. Both the parameters and states of such representations may not be easily related back to the original system parameters. This loss is, perhaps, acceptable at two extremes of knowledge about the system: the system parameters are completely known, or the system parameters are entirely unknown. In the first case, the system can be translated into the representations mentioned above, and the physical system knowledge is translated into, for example, transfer function parameters. In the second case, the system can be deemed to have one of the representations mentioned above and there is no physical system knowledge to be translated. Thus, much of the current body of control achieves a generic coverage of application areas by having a generic representation of the systems to be controlled which are, however, not well suited to partially known systems. This suggests an alternative approach that whilst achieving a generic coverage of application areas, allows the use of particular representations for particular (possibly partially known, possibly nonlinear) systems. Instead of having a generic representation of systems, a generic method, called Meta Modeling, for automatically deriving system-specific representations has been proposed; it provides a clear conceptual division between structure and parameters, as a basis for this. (4) Internal model and robust controls. Internal model control systems are characterized by a control device consisting of the controller and of a simulation model of the process, the internal model. The internal model loop computes the difference between the outputs of the process and of the internal model, as shown in
Fig. 7.21. This difference represents the effect of disturbances and of a mismatch of the model.
Figure 7.21 Strategies of internal model control (the controller acts on the process, a process model runs in parallel, and the difference between the process output and the model output is passed through a feedback filter and subtracted from the desired output).
Internal model control devices have been shown to have good robustness properties against disturbances and model mismatch in the case of a linear model of the process. Internal model control characteristics are the consequence of the following properties: (a) If the process and the controller are (input–output) stable, and if the internal model is perfect, then the control system is stable. (b) If the process and the controller are stable, if the internal model is perfect, if the controller is the inverse of the internal model, and if there is no disturbance, then perfect control is achieved. (c) If the controller steady-state gain is equal to the inverse of the internal model steady-state gain, and if the control system is stable with this controller, then offset-free control is obtained for constant set points and output disturbances. As a consequence of (c) above, if the controller is made of the inverse of the internal model cascaded with a low-pass filter, and if the control system is stable, then offset-free control is obtained for constant inputs, that is, set-point and output disturbances. Moreover, the filter introduces robustness against a possible mismatch of the internal model and, although the gain of the control device without the filter is not infinite as in the continuous-time case, it serves to smooth out rapidly changing inputs. Robust control involves, first, quantifying the uncertainties or errors in a "nominal" process model, due to nonlinear or time-varying process behavior, for example. If this can be accomplished, we essentially have a description of the process under all possible operating conditions. The next stage involves the design of a
controller that will maintain stability as well as achieve specified performance over this range of operating conditions. A controller with this property is said to be “robust.” A sensitive controller is required to achieve performance objectives. Unfortunately, such a controller will also be sensitive to process uncertainties and hence suffer from stability problems. On the other hand, a controller that is insensitive to process uncertainties will have poorer performance characteristics in that controlled responses will be sluggish. The robust control problem is therefore formulated as a compromise between achieving performance and ensuring stability under assumed process uncertainties. Uncertainty descriptions are at best very conservative, whereon performance objectives will have to be sacrificed. Moreover, the resulting optimization problem is frequently not well posed. Thus, although robustness is a desirable property, and the theoretical developments and analysis tools are quite mature, application is hindered by the use of daunting mathematics and the lack of a suitable solution procedure. Nevertheless, underpinning the design of robust controllers is the so-called “Internal Model” principle. It states that unless the control strategy contains, either explicitly or implicitly, a description of the controlled process, then either the performance or stability criterion, or both, will not be achieved. The corresponding internal model control design procedure encapsulates this philosophy and provides for both perfect control and a mechanism to impart robust properties (see Fig. 7.21).
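A minimal sketch of the internal model control idea is given below for a first-order plant, assuming a discrete-time formulation: the controller is the inverse of the internal model cascaded with a first-order low-pass filter, and the plant and model parameters are deliberately mismatched. Despite the mismatch and a step output disturbance, the output settles at the set point with no offset, consistent with property (c) above. All numbers are invented for illustration.

```python
# Illustrative sketch: internal model control for a first-order plant
# y(k+1) = a*y(k) + b*u(k) + d, with a deliberately imperfect internal model.
a_p, b_p = 0.80, 0.50      # "real" process parameters
a_m, b_m = 0.75, 0.60      # internal model (mismatched on purpose)
alpha = 0.7                # filter pole: larger -> smoother, more robust
setpoint = 1.0

y_p = y_m = f = 0.0
d = 0.0
for k in range(120):
    if k == 60:
        d = 0.1                                   # step output disturbance
    e = y_p - y_m                                 # internal model feedback signal
    r_mod = setpoint - e                          # corrected reference
    f = alpha * f + (1.0 - alpha) * r_mod         # low-pass filter
    u = (f - a_m * y_m) / b_m                     # inverse of the internal model
    y_m = a_m * y_m + b_m * u                     # model prediction
    y_p = a_p * y_p + b_p * u + d                 # process response
    if k % 20 == 19:
        print(f"k={k:3d}  u={u:6.3f}  y={y_p:6.3f}")
# Despite model mismatch and the disturbance, y approaches the set point
# with no steady-state offset, as property (c) above suggests.
```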
7.5.2
Methodologies and Technologies
Industrial control can be classified as industrial process control and industrial system control, which divides the modeling and simulation in industrial control into Process Modeling and System Modeling accordingly. Process Modeling is the representation of a process mathematically by application of material properties and physical laws governing geometry, dynamics, heat and fluid flow, and so on. to predict the behavior of the process. For example, finite element analyses are used to represent the application of forces (mechanics, strength of materials) to a defined part (geometry and material properties), and to model a metal forging operation. The result of the analyses is a time-based series of pictures, showing the distribution of stresses and strains, which depict the configuration and state of the part during and after forging. The behavior predicted by process models is compared with the results of actual processes to ensure the models are correct. As differences between theoretical and actual behavior
are resolved, the basic understanding of the process improves and future process decisions are more informed. The analysis can be used to iterate tooling designs and make processing decisions without incurring the high costs of physical prototyping. System Modeling is for a system typically composed of a number of networks that connect the different nodes of the system, and where the networks are interconnected through gateways. One example of such a system is given by current automotive systems that include a high-speed network (based on the controller area network) for connecting engine, transmission, and brake related nodes; and one network for connecting other “body electronics functions,” from instrument panels to alarms. Often, a separate network is available for diagnostics. A node typically includes sensor and actuator interfaces, a microcontroller, and a communication interface to the broadcast bus. From a control function perspective, the vehicle can be controlled by a hierarchical system, where subfunctions interact with each other and through the vehicle dynamics to provide the desired control performance. The subfunctions are implemented in various nodes of the vehicle, but not always in a top-down fashion, because the development is strongly governed by aspects such as the organizational structure (internal organization, system integrators, and subcontractors). The models and simulation features should form part of a larger toolset that supports the design of control systems to meet the main identified challenges: complexity, multidisciplinary, and dependability. Some of the requirements of the models are as follows: (1) The developed system models should encompass both time- and event-triggered algorithms, as typified by discrete-time control and finite state machines (hybrid systems). (2) The models must represent the basic mechanisms and algorithms that affect the overall system timing behavior. (3) The models should allow cosimulation of functionality, as implemented in a computer system, together with the controlled continuous time processes and the behavior of the computer system. (4) The models should support interdisciplinary design, thus taking into account different supporting methods, modeling views, abstractions and accuracy, as required by control, system, and computer engineers. (5) Preferably, the models should be useful also for a descriptive framework, visualizing different aspects of the system, as well as being useful for other types of analysis such as scheduling analysis.
In this subsection, the manufacturing process is selected as the representative of process control and the computer control system as the representative of system control to discuss the methodologies and technologies used for the modeling and simulation in industrial control, respectively.
7.5.2.1
Manufacturing Process Modeling and Simulation
In manufacturing processes, modeling and simulation will be the way of doing business to ensure the best balance of all constraints in designing, developing, producing, and supporting products. Cost, time compression, customer demands, and life-cycle responsibility will all be part of an equation balanced by captured knowledge, online analyses, and human decision making. Modeling and simulation tools will support best practice from concept creation through product retirement and disposal, and operations of the tools will be transparent to the user. The modeling and simulation activities in manufacturing process of a manufacturing enterprise can be divided into five functional elements for assessment and planning purposes: (1) Material processing. Material Processing involves all activities associated with the conversion of raw materials and stocks to either finished form or readiness for assembly. The enterprise process modeling and simulation environments provide the integrated functionality to ensure the best material or material product is produced at the lowest cost. The models treat new, reused, and recycled materials to eliminate redundancy. An open, shared industrial knowledge base and model library are required to provide: Ready access to material properties data using standard forms of information representation (scaleable plug, and play models); Means for validating material property models before use in specific product and process applications; a fundamental science-based understanding including validated mathematical models of the response of material properties under the stimuli of a wide range of processes; standard, validated time and cost models and supporting estimating tools for the full range of material processing processes. Material Processing includes four general categories of processes, each of which facilitates the respective modeling and simulation: (a) Material preparation and creation processes such as material synthesis, crystal growing, mixing, alloying, distilling, casting, pressing, blending, reacting, and molding.
(b) Material treatment processes such as coating, plating, painting, thermal conditioning (such as heating, melting, or chilling), and chemical conditioning. (c) Material forming processes for metals, plastics, composites, and other materials, including bending, extruding, folding, rolling, shearing, stamping, and similar processes. (d) Material removal and addition processes such as milling, drilling, routing, turning, cutting, sanding, trimming, etching, sputtering, vapor deposition, solid freeform fabrication, ion implantation, and similar processes. (2) Assembly, disassembly, and reassembly. This functional element includes all assembly processes associated with joining, fastening, soldering, and integration of higher-level packages as required to complete a deliverable product (e.g., electronic packages); it also includes assembly sequencing, error correction and exception handling, and disassembly and reassembly, which are maintenance and support issues. Assembly modeling is well developed for rigid bodies and tolerance stack-up in limited applications. Rapidly maturing computer-aided design and manufacturing technologies, coupled with advanced modeling and simulation techniques, offer the potential to fully optimize assembly processes for speed, efficiency, and ease of human interaction. Assembly models will integrate seamlessly with master product models and factory operations models to provide all relevant data to drive and control each step of the design and manufacturing process, including tolerance stack-ups, assembly sequences, ergonomics issues, quality, and production rate, to support art-to-part assembly, disassembly, and reassembly. The product assembly model will be a dynamic, living model, adapting in response to changes in requirements and propagating those changes to all affected elements of the assembly operation, including process control, equipment configuration, and product measurement requirements. (3) Quality, test, and evaluation. This element includes design for quality, in-process quality, and all inspection and certification processes, such as dimensional, environmental, and chemical and physical property evaluation based on requirements and standards, as well as diagnostics and troubleshooting. Modeling, simulation, and statistical methods are widely used to establish control models to which processes should conform. Characterization of processes leads to models that define the impact of different process parameters and their variations on product quality. These models are used as a baseline for establishing
and maintaining in-control processes. In general, the physics behind these techniques (e.g., radiological testing, ultrasonic evaluation, tomography, and tensile testing) should be well understood. However, many of the interactions are treated probabilistically and, even though models of the fundamental interactions exist, in most cases empirical methods are used instead. Although needing to be improved, today the best way to find out whether a part has a flaw is to test samples and analyze the test data. “Models” are used in setting up these experiments, but many times the models reside only in the brains of the experts who support the evaluations. (4) Packaging. This element includes all final packaging processes, such as wrapping, stamping and marking, palletizing, and packing. Modeling and simulation are critical in designing packaging to ensure product protection. Logistic models and part tracking systems help ensure the proper packaging and labeling for correct product disposition. These applications range from the proper wrapping for chemical, food, and paper products to shipping containers that protect military hardware and munitions from accidental detonation. Future process and product modeling and simulation systems will enable packaging designs and processes to be fully integrated in all aspects of the design-to-manufacturing process and will provide needed functionality with minimum cost, and minimum environmental impact, with no nonvalue-added operations. Advanced packaging Modeling and Simulation systems will enable product and process designers to optimize packaging designs and supporting processes for enhanced product value and performance, as well as for protection, preservation, and handling attributes. (5) Remanufacture. It includes all design, manufacture, and support processes that support return and reprocessing of products on completion of original intended use. Manufacturers would reuse, recycle, and remanufacture products and materials to minimize material and energy consumption, and to maximize the total performance of manufacturing operations. Advanced modeling and simulation capabilities will enable manufacturers to explore and to analyze remanufacturing options to optimize the total product realization process and product and process life cycles for efficiency, cost-effectiveness, profitability, and environmental sensitivity. The products are going to be designed from inception for remanufacture and reuse either at the whole product or the
component or constituent material level. In some cases, ownership of a product may remain with the vendor (not unlike a lease), and the products may be repeatedly upgraded, maintained, and refurbished to extend their lives and add new capabilities.
7.5.2.2 Computer Control System Modeling and Simulation
Modeling and simulation activities in computer control systems could benefit if the following technical issues are taken into consideration. (1) Modeling purpose and simulation accuracy. One well-known challenge in modeling is to identify the accuracy required for the given purpose. Consider, for example, the implementation of a data flow over two processors and a serial network. A huge span of modeling detail is possible, ranging from a simple delay, over discrete-event resource management models (e.g., processor and communication scheduling), to low-level behavioral models. The mapping of these details between models and real computer networks will change the timing behavior of the functions due to effects such as delays and jitter. Some reflections related to this are as follows: (a) The introduction of "application-level effects," such as delays, jitter, and data loss, into a control design could be an appropriate abstraction for control engineering purposes. The "mapping" to the actual computer system may be nontrivial; for example, delays and jitter can be caused by various combinations of execution, communication, interference, and blocking. More accurate computer system models will be required to compare alternative designs (architectures) and to provide estimations of the system behavior. In addition, modeling is a form of prototyping, and as such is important in the design process. (b) The underlying model of the computer system can be more or less detailed, given the right abstraction. For example, if a fairly detailed Controller Area Network (CAN) model has been developed, it can still be used in the context of control system simulation, provided that it is sufficiently efficient to simulate and that its complexity is masked away. (c) The models of the computer system obviously need to reflect the real system. In the early stages, the architecture exists only on the "drawing board." As the design proceeds, more and more details become available; consequently, the models used for analysis must be updated accordingly.
(d) To achieve accuracy, close cooperation between the software and hardware developers is required at every stage of system development. (2) Global synchronization and node tasking. Both synchronous and asynchronous systems exist; industrially, asynchronous systems predominate, but this may change with the introduction of newer safety-critical applications such as steer-by-wire in cars, because of the advantages inherent in distributed systems based on a global clock. At the microscopic level, the communication circuits are typically hard-synchronized, which is required to be able to receive bits and arbitrate properly. In a system with low-level synchronization and/or synchronized clocks, the synchronization could fail in different ways. For asynchronous systems, the clock drift could be of interest to incorporate: given different clocks running at different speeds, drift will affect all durations within each node. A conventional way of expressing a duration is simply by a time value; in this case the values could be scaled during the simulation setup. All the above-mentioned behaviors could be of interest to model and simulate. The most essential characteristic of a distributed computer system is undoubtedly its communication. In early design stages, the distributed and communication aspects are often targeted first; but node scheduling then also becomes interesting and is, therefore, of high relevance. It is very common for many activities to coexist on a node. These activities typically have different timing requirements and may also be safety critical to different degrees. The scheduling on the nodes affects the distributed system by causing local delays that can influence the behavior of the overall system. When developing a distributed control system, the functions and their elements somehow need to be allocated to the nodes. This principally means that an implementation-independent functional design needs to be enhanced with new "system" functions, which, for example, (1) perform communication between parts of the control system that now reside on different nodes; (2) perform scheduling of the computer system processors and networks; and (3) perform additional error detection and handling to cater for new failure modes (e.g., broken network, temporary node failure, etc.). A node is composed of application activities, system software including a real-time kernel, low-level I/O drivers, and hardware functions including the communication interface. (a) The node task model needs to include the following: (i) A definition of tasks, their triggers, and execution times for "execution units."
(ii) A definition of the interactions between tasks in terms of scheduling, intertask communication, and resource sharing. (iii) A definition of the real-time kernel and other system software with respect to execution time, blocking, and so on. Issues in the further development include what types of intertask communication and synchronization should be supported (e.g., signals, mailboxes, semaphores, …) and whether, and to what extent, there is a need to consider hierarchical and hybrid scheduling (e.g., including both the processor's interrupt level and the real-time kernel scheduling level). (b) The functional model used in conventional control design should be reusable within the combined function and computer models. This implies that it should be possible to adapt or refine the functional models to incorporate a node-level tasking model. (c) Communication models. The types of communication protocols are confined by the application area under consideration. Nevertheless, a number of different communication protocols are currently being developed in view of future embedded control systems. The Controller Area Network (CAN) is currently a de facto standard, but there is also an interest in including the following: (i) Time-triggered CAN, referring to CAN systems designed to incorporate clock synchronization suitable for distributed control applications. This rests on the recent ISO revision of CAN, which makes it easier to implement clock synchronization and allows retransmissions to be turned off. (ii) Properties reflecting state-of-the-art fault-tolerant protocols such as the Time-Triggered Protocol. Fault-tolerance mechanisms of these protocols, such as membership management and atomic broadcast, then need to be appropriately modeled. It is often the case that parts of these protocols are realized in software, for example, dealing with message fragmentation, certain error detection, and potential retransmissions. Both the execution of the protocol and its scheduling need to be modeled. The semantics of the communication, and in particular of buffers, is another important aspect; compare, for example, overwriting and nonconsuming semantics
versus different types of buffering. Whether the communication is blocking or not, from the point of view of the sender and the receiver, is also related to the communication semantics. Another issue is which "low-level" features of communication controllers need to be taken into account; compare, for example, the associative filtering capability of Controller Area Network (CAN) controllers and their internal sorting of message buffers scheduled for transmission (which relates back to hierarchical scheduling). (d) Fault models. The use of fault models is essential for the design of dependable systems. The specific models of interest are highly application dependent. For inclusion in models and simulation, it is of interest to investigate generic fault models and their implementation. There are a number of available studies on fault models dealing with transient and permanent hardware faults, and to some extent also categorizing design faults. As always, there is also the issue of whether the insertion takes the form of a fault, an error, or a failure. Consequently, this is a prioritized topic for further work. (3) System development and tool implementation. When developing distributed control systems, it would be advantageous to have a simulation toolbox or library in which the user can build the system from prebuilt modules that define things such as the network protocols and the scheduling algorithms. With such a tool, the user can focus on the application details instead. Such a tool is possible because components such as the different types of schedulers and CAN controllers are well defined and standardized across applications. In the same way that a programmer works at a certain level of abstraction, at which the hardware and operating system details are hidden by the compiler, the simulation tool should give the user a high level of abstraction for developing the application. Such a tool will enforce a boundary between the application and the rest of the system. This will speed up the development process and give the developers extra flexibility in developing the application. It is clear that the implementation of the types of hybrid systems we are aiming to model requires a thorough knowledge of the simulation tool. Cosimulation of hybrid systems requires the simulation engine to handle both time-driven and event-triggered parts. The former includes sampled subsystems as well as continuous-time subsystems, handled by a numerical integration algorithm that can be based on a fixed or varying step size. The latter may involve state machines and other forms of event-triggered
logic. Some aspects that need consideration for tool implementation are as follows: (a) If events are used in the computer system model, these must be detected by the simulation engine: how is the event detection mechanism implemented in the simulation tool? (b) At which simulation steps are the actions of a state-flow system carried out? (c) How can actions be defined to be atomic (carried out during one simulation step)? (d) How can preemption be implemented in the simulation, including temporary blocking, to model the effects of computer system scheduling?
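The sketch below is one way to picture such an engine; it is an illustrative assumption written for this discussion, not an existing tool. A fixed-step integrator advances the time-driven part, an event queue drives the event-triggered part, and a short blocking interval after each controller invocation stands in for preemption by other activities on the node. All periods, delays, and the plant model are made-up values.

```python
# Toy cosimulation engine (assumed structure): fixed-step time-driven part plus an
# event queue for the event-triggered part; a blocking delay after each controller
# event crudely models preemption/scheduling on the node.
import heapq

DT = 0.001

class Engine:
    def __init__(self):
        self.t = 0.0
        self.events = []            # (time, name) min-heap
        self.x = 0.0                # continuous plant state
        self.u = 0.0                # control signal actually applied
        self.pending_u = None       # control value waiting out a blocking delay
        self.release_t = 0.0

    def schedule(self, t, name):
        heapq.heappush(self.events, (t, name))

    def step(self):
        # 1. Event detection: process all events that fall inside this step.
        while self.events and self.events[0][0] <= self.t + DT:
            t_ev, name = heapq.heappop(self.events)
            if name == "sample":                       # controller invocation
                u_new = 2.0 * (1.0 - self.x)           # P control toward setpoint 1.0
                self.pending_u = u_new                 # output delayed by blocking
                self.release_t = t_ev + 0.004          # 4 ms preemption/blocking
                self.schedule(t_ev + 0.010, "sample")  # next period
        # 2. Release the delayed output once the blocking interval has elapsed.
        if self.pending_u is not None and self.t >= self.release_t:
            self.u, self.pending_u = self.pending_u, None
        # 3. Advance the time-driven part by one fixed step (forward Euler).
        self.x += DT * (-self.x + self.u)
        self.t += DT

if __name__ == "__main__":
    eng = Engine()
    eng.schedule(0.0, "sample")
    while eng.t < 1.0:
        eng.step()
    print(round(eng.x, 3))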
7.5.3 Simulation Program Organization

7.5.3.1 Simulation Routines for Single-Microprocessor Control Systems

For a target industrial control system with one microprocessor-unit, only one simulation routine is needed. Although not strictly necessary, it is strongly suggested that the simulation routine programs consist of two branches: the operating system programs and the application system programs. The target application software normally has these same two branches, so this organization allows engineers to develop the simulation routine programs directly from the target application software, and to work in step between developing and testing the application software. If both the target programs and the simulation programs use the same programming-language compiler, engineers can modify the target programs into the simulation programs for the same industrial control system. However, if the programming language of the simulation programs differs from the language in which the target programs are written, engineers need to write the whole simulation routine from scratch, which is definitely not recommended.
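As a purely illustrative sketch of this two-branch organization (the class and function names are hypothetical, not from any target system), the simulation routine below keeps a stand-in for the operating system programs (a trivial task scheduler) separate from the application system programs that would be reused from the target software.

```python
# Illustrative two-branch organization of a single-microprocessor simulation routine.
# All names are hypothetical; the "OS branch" is a cooperative scheduler stand-in and
# the "application branch" holds stubbed application programs.

class OperatingSystemBranch:
    """Simulated counterparts of the target's operating system programs."""
    def __init__(self):
        self.ready = []            # very small stand-in for the task scheduler

    def create_task(self, func):
        self.ready.append(func)

    def run_one_cycle(self, context):
        for task in self.ready:    # cooperative, round-robin stand-in
            task(context)

class ApplicationBranch:
    """Application system programs, reused from the target where possible."""
    @staticmethod
    def read_inputs(ctx):
        ctx["measurement"] = ctx.get("measurement", 0.0) + 0.1   # stubbed sensor

    @staticmethod
    def control_law(ctx):
        ctx["output"] = 1.0 - ctx.get("measurement", 0.0)        # stubbed control

def simulation_routine(cycles=5):
    os_branch, ctx = OperatingSystemBranch(), {}
    os_branch.create_task(ApplicationBranch.read_inputs)
    os_branch.create_task(ApplicationBranch.control_law)
    for _ in range(cycles):
        os_branch.run_one_cycle(ctx)
    return ctx

if __name__ == "__main__":
    print(simulation_routine())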
7.5.3.2 Simulation Routines for Distributed Control Systems
Any distributed industrial control system normally consists of more than one microprocessor-unit board or chipset. In principle, each microprocessor-unit requires at least one simulation routine in a distributed control system.
In a distributed control system, the simulation routines for the different microprocessor-units are independent of each other. The simulated communications between the microprocessor-units should be the same as those in the target control system. Therefore, the simulation routine for any microprocessor-unit should keep the same interface programs as those of its target control system. In principle, all the simulation routines for a distributed control system should run in parallel. Alternatively, however, one routine or a subset of the routines can be run at a given time.
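A minimal sketch of this arrangement is shown below, assuming two hypothetical nodes ("engine" and "brake") and a broadcast-bus stand-in; it is not taken from any real system. Each per-node routine runs in its own thread and keeps the same send/receive interface it would use toward the target network.

```python
# Per-node simulation routines running in parallel over a simulated broadcast bus.
# Node names, message contents, and timing are illustrative assumptions only.
import queue
import threading

class SimulatedBus:
    """Broadcast bus stand-in: every message is delivered to every other node."""
    def __init__(self):
        self.inboxes = {}

    def attach(self, node_id):
        self.inboxes[node_id] = queue.Queue()
        return self.inboxes[node_id]

    def send(self, sender, payload):
        for node_id, inbox in self.inboxes.items():
            if node_id != sender:
                inbox.put((sender, payload))

def node_routine(node_id, bus, inbox, steps=3):
    for step in range(steps):
        bus.send(node_id, f"{node_id} value {step}")   # same interface as the target send()
        try:
            sender, payload = inbox.get(timeout=0.1)   # same interface as the target receive()
            print(f"{node_id} received from {sender}: {payload}")
        except queue.Empty:
            pass

if __name__ == "__main__":
    bus = SimulatedBus()
    nodes = ["engine", "brake"]
    inboxes = {n: bus.attach(n) for n in nodes}
    threads = [threading.Thread(target=node_routine, args=(n, bus, inboxes[n])) for n in nodes]
    for t in threads: t.start()
    for t in threads: t.join()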
7.5.3.3 Simulation Routine Coding Principles
When coding the simulation programs for an industrial control system, the principles below should be followed to avoid damaging the computers used to run the simulation routines:
(1) The simulation programs must be separated from the boot code and firmware of a microprocessor-unit, in particular from the bus system and the memory controllers;
(2) The simulation programs must be separated from the interrupt vectors of a microprocessor-unit;
(3) The simulation programs must be separated from the task context switch and the task scheduler of a microprocessor-unit's operating system;
(4) The simulation programs must be separated from the I/O device drivers and low-level interface code;
(5) The simulation programs must be separated from the data transmission control circuits or devices.
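One simple way to respect this separation, sketched below under assumed names, is to let the application logic talk only to a hardware-access interface: when the code runs as a simulation routine, the interface is backed by plain data structures, so boot code, interrupt vectors, drivers, and transmission hardware are never touched.

```python
# Hedged sketch of hardware isolation for simulation (hypothetical names and register
# addresses): the application logic only sees an abstract interface, and the simulated
# implementation never accesses real boot code, interrupts, drivers, or buses.

class HardwareAccess:
    """Interface the application code is written against."""
    def read_register(self, addr): raise NotImplementedError
    def write_register(self, addr, value): raise NotImplementedError

class SimulatedHardware(HardwareAccess):
    """Used when running the simulation routine on an ordinary computer."""
    def __init__(self):
        self.registers = {}

    def read_register(self, addr):
        return self.registers.get(addr, 0)

    def write_register(self, addr, value):
        self.registers[addr] = value          # plain dictionary, no real I/O touched

def application_logic(hw: HardwareAccess):
    raw = hw.read_register(0x40)              # hypothetical sensor register
    hw.write_register(0x44, raw + 1)          # hypothetical actuator register

if __name__ == "__main__":
    hw = SimulatedHardware()
    application_logic(hw)
    print(hw.registers)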
7.5.4 Simulators, Toolkits, and Toolboxes
To facilitate modeling and simulation for industrial control, tens of modeling and simulation tools and simulators have been developed. A brief introduction to five popular tools and simulators is given in this subsection: MATLAB, SIMULINK, SIMULINK Real-Time Workshop, ModelSim, and Link for ModelSim. For details, refer to the respective manuals provided by the vendors.
7.5.4.1 MATLAB
MATLAB is an integrated, technical computing environment that combines numeric computation, advanced graphics and visualization, and
high-level programming language. MATLAB gives an interactive system whose basic data element is an array that does not require dimensioning. This allows solving many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar noninteractive language such as C++, C, or FORTRAN. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB is used in a variety of application areas including signal and image processing, control system design, financial engineering, and medical research. Typical uses provided by MATLAB include Mathematical operations; Numerical computation; Algorithm development; Data acquisition; Modeling, simulation, and prototyping; Data analysis, exploration, and visualization; Scientific and engineering graphics; and Application development including graphical user interface (GUI) building. (1) The MATLAB system. The MATLAB system consists of five main parts: (a) Development environment. This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path. (b) The MATLAB mathematical function library. This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms. (c) The MATLAB language. This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both “programming in the small” to rapidly create quick and dirty throwaway programs, and “programming in the large” to create large and complex application programs. (d) Graphics. MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions
for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications. (e) The MATLAB external interfaces (API). This is a library that allows you to write C++, C, and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files. (2) MATLAB-related toolboxes. MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow learning and applying specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others. Typical MATLAB related toolboxes include the following: (a) MATLAB (b) SIMULINK (c) Extended Symbolic Mathematical Toolbox (d) Fuzzy Logic Toolbox (e) MATLAB Compiler (f) MATLAB C++ Mathematical Library (g) Multiple-Analyses and Synthesis Toolbox (h) Neural Network Toolbox (i) Optimization Toolbox (j) Partial Differential Equation Toolbox (k) Signal Processing Toolbox (l) Spline Toolbox (m) Statistics Toolbox (n) Wavelet Toolbox (o) Communications Blockset (p) Communications Toolbox (q) Control Systems Toolbox (r) DSP Blockset (s) Data Acquisition Toolbox (t) Excel Link (u) Financial Toolbox (v) Fixed-Point Blockset
(w) Image Processing Toolbox (x) MATLAB C/C++ Graphics Library (y) Model Predictive Control System Toolbox (z) Nonlinear Control Design Blockset (aa) Robust Control Toolbox (ab) System Identification Toolbox.

7.5.4.2 SIMULINK
SIMULINK is an interactive tool for modeling, simulating, and analyzing dynamic systems. It integrates seamlessly with MATLAB, providing immediate access to an extensive range of analysis and design tools. It supports linear and nonlinear systems, continuous time and sampled time systems, or hybrid systems. With SIMULINK, simulations are interactive so that parameters can be changed immediately to see what happens in the simulations. SIMULINK allows moving beyond idealized linear models to explore more realistic nonlinear models, and has instant access to all of the analysis tools in MATLAB to take the results and analyze and visualize them. (1) Using SIMULINK for modeling. Model analysis tools include linearization and trimming tools, which can be accessed from the MATLAB command line, plus the many tools in MATLAB and its application toolboxes. And because MATLAB and SIMULINK are integrated, models can be simulated, analyzed, and revised in either environment at any point. For modeling, SIMULINK provides a graphical user interface for building models as block diagrams, using click-and-drag mouse operations. With this interface, you can draw the models just as you would with pencil and paper (or as most textbooks depict them). This is much better than those simulation packages that require you to formulate differential equations and difference equations in a language or program. SIMULINK includes a comprehensive block library of sinks, sources, linear and nonlinear components, and connectors. You can also customize and create your own blocks. Blocks represent elementary dynamical systems that SIMULINK knows how to simulate. A block comprises one or more of the following: a set of inputs; a set of states; and a set of outputs. To introduce blocks in your model, choose the block from the library, click on it, and drag it in your model. Double clicking on the block will allow you to change the block parameters. Models are hierarchical, so you can build models using both top-down
and bottom-up approaches. You can view the system at a high level, and then double-click on blocks to go down through the levels to see increasing levels of model detail. This approach provides insight into how a model is organized and how its parts interact. (2) Using SIMULINK for simulating. After you define a model, you can simulate it, choosing integration methods, either from the SIMULINK menus or by entering commands in MATLAB command window. The menus are particularly convenient for interactive work, while the command-line approach is very useful for running a batch of simulations (e.g., if you want to sweep a parameter across a range of values). Using scopes and other display blocks, you can see the simulation results while the simulation is running. In addition, you can change parameters and immediately see what happens, for “what if” exploration. The simulation results can be put in the MATLAB workspace for post-processing and visualization. Simulating a dynamic system refers to the process of computing a system’s states and outputs over a span of time, using information provided by the system’s model. SIMULINK simulates a system when you choose Start from the model editor’s Simulation menu, with the system’s model open. Simulation of the system occurs in two phases: model initialization and model execution. (3) Model initialization phase. During the initialization phase, SIMULINK: (a) Evaluates the model’s block parameter expressions to determine their values. (b) Flattens the model hierarchy by replacing virtual subsystems with the blocks that they contain. (c) Sorts the blocks into the order in which they need to be executed during the execution phase. (d) Determines signal attributes, for example, name, data type, numeric type, and dimensionality, not explicitly specified by the model and checks that each block can accept the signals connected to its inputs. (e) Determines the sample times of all blocks in the model whose sample times you did not explicitly specify. (f) Allocates and initializes memory used to store the current values of each block’s states and outputs. (4) Model execution phase. In this model execution phase coming up after initialization phase, SIMULINK successively computes the states and outputs of the system at intervals from the simulation start time to the finish time, using information provided by
the model. The successive time points at which the states and outputs are computed are called time steps. The length of time between steps is called the step size. The step size depends on the type of solver used to compute the system's continuous states. SIMULINK computes the current value of a block's continuous states by numerically integrating the derivatives of the states. The numerical integration task is performed by a SIMULINK component called the solver. SIMULINK allows you to choose the solver that it uses to simulate a model. The solvers that SIMULINK provides fall into two classes: fixed-step solvers and variable-step solvers. Fixed-step solvers divide the simulation time span into an integral number of fixed-size intervals called time steps. Then, starting from initial estimates, at each time step a fixed-step solver computes the value of each of the system's state variables at the next time step from the variable's current value and the current value of its derivatives. The accuracy of the estimation depends on the step size, that is, the time between successive time steps. Generally, a smaller step size produces a more accurate simulation but results in a longer execution time because more steps are required to compute the system's states. A variable-step solver dynamically varies the step size to meet a specified level of precision. Such a solver expands the step size when the state variables are changing slowly (as indicated by the magnitude of the state derivatives) and decreases the step size when the state variables are changing rapidly. A variable-step solver can, depending on the application, produce more accurate results without sacrificing execution speed. By selecting Parameters from the Simulation menu, you can set up the simulation parameters: start time, stop time, and type of solver.
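The difference between the two solver classes can be pictured with the toy sketch below; it is not SIMULINK code, and the test system dx/dt = -x, the step sizes, and the tolerance are arbitrary assumptions. The fixed-step loop always takes the same step, while the variable-step loop estimates the local error by step doubling and shrinks or grows its step accordingly.

```python
# Illustrative fixed-step vs variable-step integration of dx/dt = -x (assumptions only).
import math

def fixed_step(x0=1.0, h=0.1, t_end=1.0):
    x, t = x0, 0.0
    while t < t_end:
        x += h * (-x)          # one Euler step of size h
        t += h
    return x

def variable_step(x0=1.0, t_end=1.0, tol=1e-4):
    x, t, h = x0, 0.0, 0.1
    while t < t_end:
        h = min(h, t_end - t)
        full = x + h * (-x)                        # one step of size h
        half = x + (h / 2) * (-x)
        two_half = half + (h / 2) * (-half)        # two steps of size h/2
        err = abs(full - two_half)                 # local error estimate
        if err > tol:
            h /= 2                                 # state changing fast: shrink step
            continue
        x, t = two_half, t + h
        if err < tol / 4:
            h *= 2                                 # state changing slowly: grow step
    return x

if __name__ == "__main__":
    exact = math.exp(-1.0)
    print("fixed-step error:   ", round(abs(fixed_step() - exact), 5))
    print("variable-step error:", round(abs(variable_step() - exact), 5))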
7.5.4.3 SIMULINK Real-Time Workshop
The SIMULINK Real-Time Workshop automatically generates C++ and C code directly from SIMULINK block diagrams. This allows continuous-time, discrete-time, and hybrid system models to be executed on a wide range of computer platforms. The SIMULINK Real-Time Workshop can be used for: (1) Rapid prototyping. As a rapid prototyping tool, the Real-Time Workshop enables you to implement your designs quickly without lengthy hand coding and debugging. Control, signal processing, and dynamic system algorithms can be implemented by developing graphical SIMULINK block diagrams and automatically generating C++ and C code from them.
(2) Embedded real-time control. Once a system has been designed with SIMULINK, code for real-time controllers or digital signal processors can be generated, cross-compiled, linked, and downloaded onto your selected target processor. The Real-Time Workshop supports microprocessor boards, embedded controllers, and a wide variety of custom and commercially available hardware. (3) Real-time simulation. Code can be created and executed for an entire system or specified subsystems for hardware-in-the-loop simulations. Typical applications include training simulators (pilot-in-the-loop), real-time model validation, and testing. (4) Stand-alone simulation. Stand-alone simulations can be run directly on a host machine or transferred to other systems for remote execution. Because time histories are saved in MATLAB as binary or ASCII files, they can be easily loaded into MATLAB for additional analysis or graphic display. In conclusion, Real-Time Workshop provides a comprehensive set of features and capabilities that provides the flexibility to address a broad range of applications: (a) Automatic code generation handles continuous-time, discrete-time, and hybrid systems. (b) Optimized code guarantees fast execution. (c) Control framework Application Program Interface (API) uses customizable Make files to build and download object files to target hardware automatically. (d) Portable code facilitates usage in a wide variety of environments. (e) Concise, readable, and well-commented code provides ease of maintenance. (f) Interactive parameter downloading from SIMULINK to external hardware allows system tuning on the fly. (g) A menu-driven, graphical user interface makes the software easy to use.
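To make the idea of automatic code generation concrete, the toy sketch below turns a two-block "diagram" described as data into a C update function. This is only an illustration of the concept; it does not reflect the structure, naming, or quality of the code that Real-Time Workshop actually emits.

```python
# Toy code-generation sketch (NOT Real-Time Workshop output): a tiny block-diagram
# description is translated into a C update function suitable for cross-compilation.

def generate_c(blocks):
    lines = ["double model_step(double u, double state[], double dt) {",
             "    double y = u;"]
    for i, blk in enumerate(blocks):
        if blk["type"] == "gain":
            lines.append(f"    y = y * {blk['k']};            /* gain block */")
        elif blk["type"] == "integrator":
            lines.append(f"    state[{i}] += y * dt;          /* integrator block */")
            lines.append(f"    y = state[{i}];")
    lines += ["    return y;", "}"]
    return "\n".join(lines)

if __name__ == "__main__":
    diagram = [{"type": "gain", "k": 2.5}, {"type": "integrator"}]
    print(generate_c(diagram))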
7.5.4.4 ModelSim
ModelSim is an industry-leading, Windows-based simulator for VHDL, Verilog, and mixed-language simulation environments. Coupled with the most popular HDL debugging capabilities in the industry, ModelSim is known for delivering high performance, ease of use, and outstanding product support. ModelSim delivers a unique combination of native compiled-code architecture and outstanding simulation performance. An easy-to-use
graphical user interface enables the user to quickly identify and debug problems, aided by dynamically updated windows. For example, selecting a design region in the Structure window automatically updates the Source, Signals, Process, and Variables windows. Once a problem is found, you can edit, recompile, and resimulate without leaving the simulator. ModelSim fully supports the VHDL and Verilog language standards. You can simulate behavioral and gate-level code separately or simultaneously. ModelSim also supports all application-specific integrated circuit (ASIC) and field programmable gate array (FPGA) libraries, ensuring accurate timing simulations. Major product features are: (1) Source window templates and wizards. Templates and wizards allow you to quickly develop HDL code without having to remember the exact language syntax. All the language constructs are available with a click of a mouse. Easy-to-use wizards step you through creation of more complex HDL blocks. The wizards show you how to create parameterized logic blocks, test bench stimuli, and design objects. The source window templates and wizards benefit both novice and advanced HDL developers with timesaving shortcuts. (2) Project Manager. The Project Manager greatly reduces the time it takes to organize files and libraries. As you compile and simulate, the Project Manager stores the unique settings of each individual project, allowing you to restart the simulator right where you left off. The Project Manager automatically compiles any design and offers Windows-like project-file sorting. Simulation properties allow you to easily resimulate with preconfigured parameters. (3) TCL interface. ModelSim redefined openness in simulation by incorporating the TCL user interface into its HDL simulator. TCL is a simple but powerful scripting language for controlling and extending applications. (4) Signal Spy. From any point in the design, the Signal Spy feature allows you to locate, drive, force, and release signals and signal nets buried deep in a VHDL or mixed-language design hierarchy. This can be done without having to modify any of your design’s existing code. This feature is very useful in test bench design. (5) Platform and standards support. ModelSim PE supports both VHDL and Verilog as well as accelerated, Level-1 compliant VITAL 2000 cell libraries and VITAL memory. ModelSim PE runs on the Windows 98, 2000, NT, and XP platforms.
7.5.4.5 Link for ModelSim
Link for ModelSim is a cosimulation interface that integrates MATLAB and SIMULINK into the hardware design flow for field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) development. It provides a fast bidirectional link between MATLAB and SIMULINK on one side and ModelSim on the other. Link for ModelSim enables direct cosimulation and efficient verification of register-transfer-level (RTL) models in ModelSim from within MATLAB and SIMULINK. The traditional SIMULINK system-level design and simulation environment supports mixed-language simulation of MATLAB, C, C++, and SIMULINK blocks. Link for ModelSim uses a client/server architecture to provide the interface between MATLAB and SIMULINK and ModelSim. It interfaces MATLAB with ModelSim and SIMULINK with ModelSim separately and independently, which means that you can use just one of these interfaces or both simultaneously. Using Link for ModelSim, you can set up an efficient environment for cosimulation, component modeling, and analysis and visualization, for various applications, such as (1) developing software test benches in MATLAB or SIMULINK; (2) including larger-scale system models developed and simulated in SIMULINK; (3) generating test vectors to test, debug, and verify your FPGA or ASIC model code against its original MATLAB or SIMULINK specification; (4) providing behavioral modeling capabilities for your FPGA or ASIC simulation in MATLAB and SIMULINK; (5) verifying, analyzing, and visualizing the implementations in MATLAB and SIMULINK.
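The client/server idea can be illustrated with the small, self-contained sketch below; the socket protocol, the adder stand-in for the design under test, and all names are assumptions made for this illustration and have nothing to do with the actual Link for ModelSim interface. A test bench (the client) drives a stimulus to a simulator stand-in (the server) and reads back the response.

```python
# Toy client/server cosimulation handshake (hypothetical protocol, loopback socket).
import socket
import threading

def hdl_simulator_server(ready, port_holder):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))              # let the OS pick a free port
        port_holder.append(srv.getsockname()[1])
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            stimulus = conn.recv(1024).decode() # e.g., "3 5": the driven input vector
            a, b = (int(v) for v in stimulus.split())
            conn.sendall(str(a + b).encode())   # the "DUT" here is just an adder

def test_bench_client(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(b"3 5")                     # drive the DUT inputs
        print("DUT response:", cli.recv(1024).decode())

if __name__ == "__main__":
    ready, port_holder = threading.Event(), []
    server = threading.Thread(target=hdl_simulator_server, args=(ready, port_holder))
    server.start()
    ready.wait()
    test_bench_client(port_holder[0])
    server.join()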
Index Page numbers in italics refer to tables and illustrations function, 275 physical characteristics, 275–276 in real-time environment, 277–279 system limits, 276–277 working principle, mechanism, 266–274 data transfer, 269–274 master-slave principle, 267–269 Accelerator accelerator pedal, 362 ISA accelerator, 334–335 Accumulator, pulse accumulator, 576 Actuator A-S (Actuator-sensor) interface, 259–279, See also AS interface diaphragm actuator, 146, 148 disk actuator, 133, 136 electric actuator, 88–100 application guides, 96–98 calibrations, 98–100 operating principle, 88–90 technical specifications, 94–95 types, 90–93 electrohydraulic actuator, 146–147 FLUSH actuator, 191, 206 gear actuator, 141 hydraulic actuator, 111–125 application guides, 119–122 calibrations, 123–125 operating principle, 111–115 types, specifications, 115–119 linear actuator, 91–93, 101, 116–117 manual actuator, 141–142 multilayer actuator, 133 multi-turn actuator, 93 piezoelectric actuator, 125–141 calibrations, 137–141 operating principle, 126–132 technical specifications, 136–137 types, 132–135 piston actuator, 146, 148, 152, 154
A Abstract abstract event object, 620 abstract interface, 677 abstract object, 620, 688 abstract syntax tree, 244 AC (alternative current), 24–25, 88, 148, 778 ACK (acknowledge) frame, 337, 757–758 AGP (accelerated graphic port), 344–345, 781 parallel port, 344–345 API (application process/program interface), 493, 843, 847 ASCII code, 349, 452–454, 699–700, 708, 729, 847 ASCII standard, 698–700 ASDU (application service data units), 501–502, 502 ASIC (Application specify integrated circuit), 13, 240–255, 784, 848–849 designs, 242 functional simulation, 243–244 integrity analyses, 247–248 specifications, 242–243 synthesis, 244–246 verifications, 246–247 field-programmable gate array, 250–255 architecture, 252–254 important data, 251–252 programming, 255 types, 251–252 programmable logic devices, 248–250 AS interface, 259–279 architecture, components, 260 type 1, 261–263 type 2, 263–266 system characteristics, important data, 275–279 function range, master modules, 277
854 Actuator (contd.) pneumatic actuator, 100–111 application guide, 106–11 assemble on valve, 106–111 calibrations, 137–141 operating principle, 100–104 technical specifications, types, 104–106 ring actuator, 133, 136 roller actuator, 40 rotary actuator, 90, 92, 101, 103, 117–119 shaft actuator, 41–42 stack actuator, 127, 136 style actuator, 139 tube actuator, 133, 136 turn actuator, 93 ultrasonic actuator, 133, 136 valve actuator, 95, 101, 105, 146–147 Adapter, 383 add-in-adapter, 571 graphical adapter, 328 host adapter, 336 object adapter, 677–678 PCI adapter, 329 plug-in-adapter, 571 types of, 333 Adaptor, 108, 218, 263 game I/O adapter, 215 Address address cycle, 802 address bit, 206 don’t-care address bits, 800 SA19 (system address bit 19), 330 LA23 (unlatched address bit 23), 330 address bus, 206, 213, 215, 234, 600, 783 14-bit address bus, 187 16-bit address bus, 187 20-bit address bus, 187 24-bit address bus, 188, 328 26-bit address bus, 333 32-bit address bus, 188, 205 address data, 330, 332, 348 address driver, 191 address line, 231, 239, 328–329, 601 LA17 address line, 332 SA0 address line, 332
INDEX address phase, 219, 220–221, 224 single address phase, 219 address register 2-byte address register, 238 base address register, 225, 237, 793, 800, 801 current address register, 237 temporary address register, 237 address signal, 206 address space, 612–613 I/O address space, 214, 225, 333 PCI address space, 790 process’s address space, 625, 628 program’s address space, 655 system’s address space, 572 user address space, 655 Agent, 823 idependent agent, 14 outside agent, 618 PCI-compliant agent, 219 Alarm, 10, 82, 413 alarm handling, 495 automated alarm tracking, 517 failure alarm, 454 severe failure alarm, 454 Algorithm adaptive algorithm, 675 algorithm types, 526–527 asynchronous simulation algorithms, 244 basic data compression algorithm, 742–749 Lempel–Ziv algorithm, 742–744 Shannon–Fano algorithm, 742 derivative algorithm, 520 error-checking algorithm, 729 EVEN parity algorithm, 729 fairness algorithm, 223 fuzzy logical algorithm, 564 integral algorithm, 520 MARK parity algorithm, 729 ODD parity algorithm, 729 proportional algorithm, 520 reinforcement learning algorithm, 675 SPACE parity algorithm, 729 synchronous simulation algorithms, 244 viterbi algorithm, 756 Allocator, 586–587 full-blown memory allocator, 618
INDEX Alpha AXP processor, 790 Ambiguity, 356–357 model ambiguity, 357, 359 Amplifier, FSK isolating amplifier, 396 Analog, 6, 59 24-bit analog input, 135 analog and digital, 513 analog current, 81 analog frequency, 81–82 analog I/P, 147 analog multiplexer, 722 analog sensors, 68 analog slaves, 265 analog tachometers, 554–556 analog voltage, 81 Angle angle valves, 91 ‘coupling’ angle, 552 sonic cone angle, 12, 59 Animation, 481, 843 Antiparallel polarized, 132 Arbitration, 338 arbitration field, 296 bus arbitration, 760 bus system arbitration, 222–223 message wise arbitration, 295 nondestructive bitwise arbitration, 293–294, 294 Array, 504 FPGA (field-programmable gate arrays), 241, 250–255 frame transfer area array, 67 full-frame area array, 67 interline transfer area array, 67 linear array, 67 multiple sonic nozzle array, 180–181 PGA (pin-grid array), 205 PIN-photo-diode arrays, 9 Aspect, 4, 840 communication aspect, 837 aspect ratio, 71 electrical aspect, 682 functional aspect, 682 hard aspect, 598, 618 hardware aspect, 780 security aspect, 514 software aspect, 780
Assemble, assemble on valve, 106–111 Assembly, 834 AS interface slave assembly system, 265 assembly modeling, 834 automation assembly equipment, 22 control valve assembly, 143, 152 electrode assembly, 53 Asynchronous asynchronous algorithm, 244 asynchronous character, 727, 729 asynchronous coupling, 279 asynchronous frame, 727 asynchronous message passing, 624 asynchronous receiver, 624 UARTs (Universal Asynchronous Receiver/Transmitters), 500, 706–708 USART asynchronous receiver, 714–715 asynchronous transmission, 343, 708, 713, 726–733 asynchronous transmitter, 713 Autopilot, 359 Autotuning, 824 Axis bent-axis, 115 inline-axis, 115 linear axis, 469 multiaxis, 482, 548, 550 rotary axis, 472 single axis, 72, 80, 471 stack axis, 136 B BIOS basic input/output, 778 interfaces, 216–218 mode 0, 227 techniques, 213–215 PCI BIOS, 795–796, 803 functions, 798–799 system BIOS, 571, 577, 800, 803 Background suppression diffuse sensor with, 49–50 by triangulation, 47 Backlash, 91, 131, 147, 151–152 Bandwidth, 320, 327, 339, 342–344, 514, 703–704
Zhang_Index.indd 856
INDEX bridge to H1-HSE coupling, 285 HOST bridge, 344 PCI-ISA bridge, 791–792 PCI-PCI bridge, 789, 792–798, 801–802 Brokers, event brokers, 618–622 Bucket, 64, 70 Buffer, 330, 596, 600, 759 buffered offset nulling, 77 cascade buffer, 231 controller’s sector buffer, 335 data bus buffer, 230, 233, 708, 710 deeper write buffer, 200 read sector buffer, 335 receive buffer, 711 three-state buffer, 216 traditional bounded buffer, 626 write sector buffer, 335 Bus address bus, 206, 213, 215, 234, 600, 783, See also Address, address bus AGP bus, 344–345 APIC bus, 193, 194, 201, 232 bus-arbitration, 760 bus-operation, 219–221 bus-termination, 301 CAN bus, 291–309 data bus, 221, 230, 233, 331, 332, 336, 338, 783 fieldbus, 148, 278, 280–327, 414, 417, 420, 423 FSK-bus, 395–396 interbus, 309–319 ISA bus, 328–332, 792 PCI bus, 794–798, 802 PROFIBUS, 289–291 SCSI bus, 336–338 serial bus, 291–292, 339–342, 485 X-bus, 335 Byte, 696 C C++, 440, 493, 521, 616, 676, 842, 843, 844, 846, 849 CCD, 64, 66–67, 68 CAN (control area network) system CAN bus, 291–309 CAN open, 304–305, 307, 455 CLK, 235, 330, 332, 709
5/29/2008 6:34:37 PM
INDEX CNC (computer numerical control), 462–487 components,architecture, 463–466 control mechanism, 466–474 controller specification, 483–487 part programming, 474–482 CMOS, 196, 392, 781 chipset, 235 comparison of CCD and, 68 image sensors, 64–65, 67–68 CiA, 305, 307 CMR, 30, 38 C-o-M, 562–563 CORBA (common object request broker architecture), 676–677 CPLD, 249–250 CPU (central process unit), 195–196, 227, 229–231, 234, 279, 355, 431, 464, 589–590, 600, 657–658, 709, 790–791 boot CPU program, 576 trableshooting the CPU, 462 CW (clockwise), 99–100, 230–231 CCW (counter clockwise), 99–100 Cache, 201, 810 CLS (cache line size), 225 code cache, 194, 198 data cache, 193, 198 more cache, 199–200 processor cache, 198 unified L2 cache, 193 Calibrator, 9, 138, 404 cash flow calibrator, 180 Camera, 15–17, 19–21, 71–72, 134, 514, 820 Capacitor, 57, 61, 73, 84, 695 Cascade, 230, 526, 545, 603, 830 cascade buffer, 231–233 cascade mode, 240 Ceiling Priority, 631–632 Cell biaxial load cell, 78, 83 canister cell, 81 donut cell, 80 lithium cell, 204 load cell package, 80–81 logic cell, 250–251 macro cell, 250 pancake cell, 80
Zhang_Index.indd 857
857 photoelectric cell, 84 process cell, 535 shear cell, 79 tension cell, 72, 80 triaxial load cell, 78, 83 Chain, 70, 119, 138, 141, 174, 240, 546, 816 daisy-chain, 333, 752 safety chain, 643 Channel, 75–76, 134, 237–240, 276, 318, 343, 394, 623–624, 762–763 I/O channel, 331, 454, 741 routing channel, 253 ‘U’ channel, 57 Character, 696–697 character-oriented synchronous transmission, 737–738 physical, 275 synchronization, 727–729 system, 275–277 Checksum, 196, 386, 411, 644, 721, 749 block, 755 Chip chipset boot code for microprocessor unit chipset, 570–578 CMOS chipset, 235 data transfers within an IC chipset, 691–692 direct memory access controller chipset, 235–239 microprocessor unit chipset, 187–224 programmable interrupt controller chipset, 229–232 programmable timer controller chipset, 233–234 Chrominance, 9 Chunk, 418, 580, 583, 651–652, 695 Client, client-server model, 763–766 Cluster, 194, 201, 594, 599, 770 Codeword, basic codeword standards, 698–700 Coding arithmetic coding, 744–746 bit coding, 297, 303 BWT coding, 747, 747 color coding, 458
5/29/2008 6:34:37 PM
858 Coding (contd.) Huffman coding, 745, 747 linear coding, 742 LZ77 coding, 743, 744 LZ78 coding, 743, 744 manchester coding, 283, 317 routine coding, 9 Collection data collection, 327, 367, 375, 507, 539, 570 Garbage collection, 616–617 Composite, 562, 620, 704, 834 Condition, 634–638 stem conditioning, 145–146 Conductor, 28–29, 36, 60, 73, 172, 248, 311, 550, 691–692 Configuration HART system, 400–402 PCI, 790 registers, 224–225 Connection-oriented transmission, 680, 761–762 Connectionless transmission, 680, 688, 761 Constraint, 245, 365–366, 637, 648 Contention, 618, 692, 762–763 request, 598–599 Controller batch controllers, 532–536 CNC controller, 483–487 computer numerical controllers, 462–483 controller area network, 291–308 data acquisition controllers, 488–511 digital controllers, 429–526 direct memory access controller, 235–237 fuzzy logic controllers, 558–563 HDLC controller, 721 industrial intelligent controllers, 429–455 input/output protocol controllers, 650 programmable interrupt controller, 229–232 programmable logic controllers, 429–455 programmable timer controller, 233–234
Zhang_Index.indd 858
INDEX proportional-integration-derivative controllers, 519–526 SDLC controller, 719–720 servo controllers, 539–550 Core, 22–24, 166–167, 194, 198, 723 Creep, 2, 4–5, 87, 122 Crisp, 560, 562–563 Crosstalk, 247, 248 Crystal, 10, 30, 38, 67, 78, 84, 126, 132, 372, 431, 833 Cylinder, 80, 109, 111–113, 116–117, 119–122 D DAV (data-valid line), 349–350 DC (direct current), 25–26, 41, 42, 49, 56, 77, 91, 141, 435, 455, 531, 552–554, 778 DCS, 374–375, 537, 539, 675, 678 DHCP (dynamic host configuration protocol), 325 DMA (direct memory access), 226, 235–240, 329–331, 577, 600, 790 DNP3, 502–512 DRAM (dynamic random access memory), 203, 334, 810–811 DVD player, 249 DWDM, 704–705 Data data acquisition, 374–376, 457, 488–519 data communication, 675–771 data compression, 740–742 data flow, 620 data-link, 749–762 data logging, 56, 78, 82, 181, 377 data transmission, 705–723, 725–742 Deadlock cycle, 659 Debug, 208, 212, 247, 456, 612, 645, 668, 842, 846, 847 Deck multiple-deck, 42 single-deck, 41 Decoding, 741–742 Decoupling, 67, 266, 677 Default, 99, 223, 316, 342, 362, 610 Degrade mode, 808, 811, 812 Delineator, 706
5/29/2008 6:34:37 PM
859
INDEX Demultiplexer, 704, 722 Desktop, 401, 437, 462, 516, 700, 766, 842 DeviceNet, 97, 306, 377, 416, 419, 484 Diagnose / Diagnostic, 316, 423, 460, 519, 642, 781, 804–816 Dialog, 647, 683, 764, 765 Dielectric, 8, 53, 57 Diffuse, 12, 17–18, 45–46, 48–50, 58, 241 Dimension, 15, 20–21, 60, 91, 372, 473 Diminish, 18, 248, 289, 572 Diode, 9, 44, 51, 203, 218 Dipole, 126, 127 Discrete, 65, 148, 216, 259, 277, 281, 415, 437, 539, 547, 590, 749, 817 Disable, 208, 406, 601, 608–609, 678, 714, 716 Disc / Disk, 4, 34, 36, 89, 107, 125, 133, 158, 160, 333, 556, 599, 614 Distributed control, 675–690, 840 Disturb / disturbance, 18, 28, 143, 151, 279, 436, 822, 830 Dome, 18 Drift, 4, 25, 28, 77, 406, 727, 736, 813, 815, 837 Driver, 595–599, 796–797 Duplex full-duplex, 703 half-duplex, 702 E EDO (extended data out), 203 EPROM (erasable PROM), 203–204, 251, 316, 391, 456 EE-PROM (electrically erasable PROM), 251, 269, 271, 431, 542, 575, 781, 785 ETB, 731, 732 ETX, 731, 738–739 Edge, 18, 63, 66, 69, 140, 304, 328, 330, 447, 664, 727–728 Eigenvalue, 10 Electromagnetic device, 62, 133, 167, 248, 279, 373 Embedded embedded control, 292, 455, 558, 847 embedded system, 572, 595, 601, 606, 638, 641, 707 Encoder, 91, 237, 263, 414, 556–557, 748, 756–757
Zhang_Index.indd 859
Enable, 44, 64, 164, 213, 221, 330, 608–609 Entity, 495, 676–677, 763 Ergonomics, 375–376, 834 Ethernet, 319–327 Event event broker, 618–622 event handling routine, 622 event notification, 619–621 event trigger, 621 Exposure, 60, 64–65, 68, 70, 72, 85 F FCS, 720, 751, 753, 755 FIFO (first input first output) FIFO queue, 591 FIFO semantics, 769 FIQ, 606–607 FPGA (field-programmable gate array), 250–258 FSK bus, 388, 395 FSM (finite state machine), 245, 496, 659–661, 663–664, 667 Ferroelectric material, 78, 84 Ferromagnetic, 10, 22, 24, 30, 32, 36 antiferromagnetic, 37–38 Ferrous, 43, 60, 61 nonferrous, 53, 60, 61 Fiber-optic, 50–51, 62 Field fieldbus, 148, 278, 280–327, 414, 417, 420, 423 field communication, 377–420 field level, 260, 261 field network, 415–420 Finite state automata, 659–669 Firewire, 339–343 Firmware, PCI firmware, 800–802 Flash memory, 204, 431 Float valves, 172–175 Flow valves, 177–181 controls, 758–760 Fluxgate, 33 Force sensor, 79, 80, 87 Foreground receiver, 46 Fork/Forking, 93, 175, 624, 656, 665 Foundation Fieldbus, 280–289
Frame/Framing control, 749–752
Friction, antifriction, 92, 140
Fuzzy logic controllers, 558–564
G
G, 445, 475, 575
G-code, 467, 476, 477, 478, 480–481
GB, 187, 340, 344, 696
Gbps, 342, 705
GMR, 34, 36–38
GUI (graphical user interface), 572, 654, 789, 791, 802, 842
Gateway, 261, 382
Gauge
  foil gauge, 73
  strain gauge, 72–73, 74, 75, 78–79, 83, 84–86
  tension gauge, 56, 77, 82
Gear, gearwheel, 32
Geophysical, 33
Giant magnetoresistance, 30, 36
Glare screen, 376
Gray
  gray scale, 65
  gray value, 63
Grid, 28, 51, 73, 78–79, 252, 503
Guideline, 51, 140, 164, 289, 376, 458, 816
H
HART
  HART communication, 378–386
  HART-compatible device, 387, 389, 394–395, 397–400, 402, 403, 411–412
  HART interface, 16, 389, 394, 401
  HART multiplexer, 392–394, 400
  HART protocol, 406–414
  HART system, 387–405
HDL controller, 718
HDLC (high-level data-link control)
  HDLC controller, 721
  HDLC frame, 750, 751
  HDLC protocol, 749
  HDLC transfer mode, 721
Highway, 377–423
Half
  half-bridge, 74, 75
  half-duplex, 702
  half-circle, 157, 160
Hall
  Hall effect, 28, 29, 35, 60
  Hall sensor, 29, 30, 35, 36
  Hall voltage, 28–30, 36
Handshaking, 216, 228, 345, 347, 349, 537
Hardwired, 262, 513–515
Hazardous, 6, 27, 52, 59, 105, 177, 280, 284, 394, 539
Hex (hexadecimal), 228, 409
Human-Machine
  human-machine interaction, 353–370
  human-machine interface, 351–376
Humidity, 26, 71, 137, 372, 564, 814
Hybrid system, 832, 839, 844, 846, 847
I
IBM, 187, 328, 516, 698, 719, 752
IC (integrated circuit), 240–241, 252, 691–692, 693, 695
IDE (integrated drive electronics), 333–334
IEC 60870 (standard), 498–503
IEC 61512 (standard), 534–537
IEEE
  IEEE-488, 347–351
  IEEE-1394, 343
  IEEE 802.2, 760
  IEEE-802.3, 760
  IEEE-802.5, 760
I/O (input/output)
  I/O interface, 215, 226, 227, 251, 306, 372, 397, 464, 705, 781
  I/O port, 226–228
  mapped I/O, 213, 215, 600
  PCI I/O, 791
  peripheral I/O, 226–228
  programmable I/O, 216, 226, 250
IGMP (internet group management protocol), 325–327
ISO (International Organization for Standardization), 70–71, 282, 291, 300–301, 307–308, 500, 681, 750
ISP (internet service provider), 323
ISR (interrupt service routine), 210–213
Intel, 187–188, 191–195, 199–200, 210, 328, 590, 800, 803
Inductive sensor, 434
Initialization, 795–796
Inlet port, 113–115, 167, 171
Inline-axis, 115
Insertion, 178, 735, 839
Instruction, 199, 448–454
Integrator, 375, 832
Interaction, 353–370
Interbus, 309–318
Interconnection, 137, 236, 241, 250, 348, 459, 498, 708, 771
Intermittent network, 368
Interrupt
  interrupt bit, 453
  interrupt controller, 194, 229–232
  interrupt handler, 210, 569, 572, 601–608
  interrupt line, 207, 218, 226, 230, 793
  interrupt operation, 207–212
  interrupt routing, 223–224
  interrupt vector table, 209–210, 211, 224, 578
J
JPEG, 71
Jacket material, 172
Joystick, 351, 374, 485
Jumper, 98–99, 227, 329, 644–645
Junction box, 259, 262, 283, 380
K
kg (kilogram), 36, 38
kHz (kilohertz), 133, 135, 136, 703
kV (kilovolt), 134
kbps (kilobits per second), 290, 319, 345
km (kilometre), 309, 311, 691
Kernel, 571–572, 639–640
Keyboard, 176, 215, 351, 372, 376, 429, 571, 577, 596, 611, 696, 739, 790
Keypad, 372
L
LAN (local area network), 319, 372, 418, 484, 489–490, 682, 694–695
LED (light-emitting diode), 6, 8, 18, 99–100, 459–460
LLC (logical link control), 760–762
LVDT (linear variable differential transformer), 22–23, 25–26, 117
LVPSC (low voltage power supply circuit), 778–780
LZW compression, 743–744
Ladder logic
  ladder logic diagram, 445, 445
  ladder logic form, 441
Latch, 218–220, 251, 329–330, 444
Layout, 225, 242, 247, 539, 574, 610
Leakage, 114, 120–122, 143–144, 394
Limit switch, 38–43
Link for ModelSim, 849
Load sensor, 79, 80
Locker, 634–638
Lockout cylinder, 113
Lookup table, 251–252, 668, 813
Loop
  closed loop, 103, 117, 135, 146, 465, 505, 519, 524, 531, 542, 544, 827–828
  control loop, 281, 310, 519, 520
  interbus loop, 313, 314, 316, 317
  loop control, 180, 505
  loop device, 313
  loop-back, 312, 313
  open loop, 465, 531–532, 542, 544, 552, 554
  process loop, 154
Loose mounting, 122
Lossy compression, 741
Lossless compression, 741
M
MAC (medium access control), 762–763
MATLAB, 841–844
MB (megabytes), 187–188, 220, 328, 330–336, 614, 696, 741, 801–802
MBR (master boot record), 573–575
MCU (microprocessor control unit), 464–465, 542, 543
MMU (memory management unit), 613–614, 618, 653
MMX (matrix mathematical extension)
  MMX instruction, 200–201
  MMX technology, 199–201
MPU (microprocessor unit), 465, 786–788
MTU (master terminal unit), 496–498
MHz, 8, 188, 196, 220, 328, 330, 336, 515, 703
MR (magnetoresistive)
  AMR (anisotropic MR), 34, 36
  CMR (colossal MR), 30, 38
  GMR (giant MR), 30, 36–38
Magnetic
  magnetic control system, 27–38
  magnetic field, 54
  magnetic level switch, 28, 32
  magnetic switch, 27–28, 32
Magnetization, 30–31, 61, 553
  demagnetization, 553
Malfunction, 17, 111, 172, 222, 412, 440, 459–460, 808, 811, 813
Manual actuator, 141–142
Master-slave
  master-slave model, 766–768
  master-slave principle, 267–269
  master-slave protocol, 408, 766
Menu-driven, 135, 481, 847
Message
  message passing, 622–625
  message queue, 622–629
Metrology, 5, 180, 181, 814, 816
Microprocessor unit
  bus system operations, 218–226
  chipset, 187–207, 570–579
  input/output rationale, 213–218
  interrupt operations, 207–213
Microprocessor chipset, 13–14, 16, 223, 621, 760, 781
Miniature, 79–80, 81, 137, 554
Modem
  DSL modem, 249
  FSK-modem, 390–392, 396
  HART modem, 398, 400
ModelSim, 847–849
Monolithic kernel, 573
Monolithic SCADA system, 488–489, 489, 490
Motherboard, 16, 227, 333, 571, 577–578, 777, 781, 786–787, 791, 804, 815
Multicast, 293, 321, 325–327
Multichannel, 22, 541
Multiplexing mode, 703–705
Multiplexer
  digital, 722–723
  time division, 723–725
Multitasking, 579–581
Multithread, 619
Mutual exclusion, 658–659
N
NDAC (not-data-accepted line), 349–350
NRFD (not-ready-for-data line), 349–350
NRZ (non-return to zero), 297, 303, 712, 735, 742
NRZI (non-return to zero inverted), 735, 737
Numerical control, computer, 462–487
NVM (non-volatile memory), 807–808, 816
O
OCR (optical character recognition), 69
OPC (OLE for process control), 291, 377, 493
ORB (object request broker), 676, 677–678
Object-oriented, 419, 617, 620, 842
Open architecture, 367
Operating system, real-time, 579–618
Optical
  optical beam, 8
  optical sensor, 12
Optimal control, 526, 827
P
PCI (peripheral component interconnect)
  PCI BIOS, 795, 798–800
  PCI bridge, 792–795
  PCI bus, 218–219, 220, 222, 224, 328, 789, 793–797
  PCI card, 247, 790, 793
  PCI firmware, 795, 800–802
  PCI I/O, 791
  PCI-ISA bridge, 790, 792
  PCI-PCI bridge, 792–795
PCMCIA card, 333
PGA (pin-grid array), 205
PID (proportional-integration-derivative) controllers, 519–532
PLC (programmable logic control) controllers, 429–462
POST (power-on self-test), 783, 785
PROFIBUS, 289–291
PROM (programmable read only memory), 203–205
Package, 71
Parallel
  parallel bender, 131–132, 132
  parallel interface, 178, 485, 682
  parallel mode
    bit-parallel mode, 700–701
    word-parallel mode, 701
  parallel port, 344–345
Parallelism, 189, 244, 624
Parity bit, 385, 411, 706–707, 727, 729
Photoelectric
  photoelectric device, 44–52
  photoelectric sensor, 44–51
  photoelectric switch, 44, 52
Phototransistor, 44
Pilot, 144, 162, 167, 169, 361, 847
Pipeline, 109, 142, 189–190, 197, 200, 504
Pneumatic actuator, 97, 101–102, 103, 105
Point-to-point, 259, 289, 379, 399, 685, 752
Polarization, 50, 132
Pole, 466, 486
Prefetcher, 194, 198–199
Process control, 180, 280–281, 406, 455, 493, 539, 654
Processor, 188–189, 194, 482, 569, 571, 577, 602, 655, 657
Protocol
  CAN protocol, 291, 296–299, 303, 484
  data communication protocol, 763–771
  data link protocol, 749–763
  data transmission protocol, 725–749
  HART protocol, 406–414
Proximity
  proximity sensor, 52, 54, 55, 60–62
  proximity switch, 35, 52, 53, 61
Q
QoS (quality of service), 321, 327, 726
Quad word, 200, 201
Quantum, 28, 67, 658
Quarter-turn actuator, 93, 95
R
RAM (random access memory), 203–205, 251, 300, 372, 570, 578
RDRAM (Rambus DRAM), 203, 340
REQ (request), 222–223, 337, 339
RGB (red green blue), 63, 65
ROM (read only memory), 202, 203–205, 225, 251, 431, 571, 577
RPC (remote procedure call), 770–771
RTC (real-time clock), 235
RTDB (real-time database), 493, 495, 496
RTU (remote terminal unit), 375, 488, 497
RTL (register transfer level), 243, 245, 255
RVDT (rotary variable differential transformer), 22, 24–26, 24
R/W (read/write), 202, 205, 214, 607
Real-time control, 14, 484, 569
Reed switch, 34–35, 174
Reset pin, 207
Resistance measurement, 79, 125
Resonance frequency, 129, 136
RS
  RS-232, 345–347
  RS-422, 345–347
  RS-485, 345–347
  RS-530, 345–347
S
SCADA (supervisory control and data acquisition)
  SCADA controller, 488–519
  SCADA network, 488, 519, 678–679, 690
  SCADA system, 354, 375, 488–490, 491, 496–498, 512–517, 678
SCSI (small computer system interface), 335–339
SCXI-1122, 75
SCXI-1321, 76, 76, 77
SDLC (synchronous data link control)
  SDLC controller, 719–720
  SDLC frame, 719, 720, 752
  SDLC protocol, 749, 752
SIMULINK, 844–846
SQL, 492, 495
Sandwich material, 37
Scheduler, 589–593
Self-test, 409–410, 577, 644–645, 783
Self-actuated, 155–165
Segment, 208–209, 277, 304, 454, 612, 653, 668–669
Semaphore, 629–638
Sensor
  color sensor, 6, 8–9, 63, 67–68
  current sensor, 54, 55
  diffuse sensor, 45, 48–49, 58
  direction sensor, 27
  distance sensor, 10, 11, 12
  Hall sensor, 29, 30, 35–36
  image sensor, 63, 65–67, 69, 71
  LVDT sensor, 22, 26
  magnetoresistive sensor, 30–32
  mechanical sensor, 27, 61
  monochrome sensor, 63, 67, 68
  photoelectric sensor, 38, 44–45, 47–48, 49–51, 58, 434
  position sensor, 52, 134
  proximity sensor, 52, 54, 55, 61–62
  range sensor, 21
  RGB sensor, 63
  scan sensor, 63–72
  section sensor, light, 15–21
  ultrasonic sensor, 12, 13
Server, 650–652
Servo controllers, 539–558
Shift register, 251, 312, 450–451, 715–716, 728, 755–756
Simplex, 390, 779, 780
  mode, 701–702
Slave, 767–768
  master-slave model, 766–768
  master-slave principle, 267–269
Sliding window, 759–760
Solenoid valves, 165–172
Spin, 37, 640
Stack, task stack, 582–585
Stepper motor, 91, 128, 373, 544, 554, 640
Stop-and-Wait, 758–759
Stroke, 90, 91, 95, 98–99, 110, 113, 117, 122–124, 156–157
Switch
  bimetallic switch, 1–6
  context switch, 589–593
  electric switch, 44, 52
  electromechanical switch, 1
  float switch, 173–175
  level switch, 28, 32, 173, 175
  limit switch, 38–44
  magnetic switch, 27–28, 32
  physical switch, 41–42
  position switch, 41–42
  power switch, 205, 775, 776, 777
  proximity switch, 35, 52–53, 61
  reed switch, 28, 32–33, 34–35, 43
  rotary switch, 41–43, 393
  torque switch, 95
Supervisory control, 488–519
Synchronization, 657–658
  bit synchronization, 726–727, 734–737
  character synchronization, 727–730
  frame synchronization, 730–733
Synthesis, ASIC, 244–246
T
TDM (time division multiplexer), 723–725
TDMA (time division multiple access), 408, 418
TCP/IP protocol, 491, 492
Task
  task allocator, 585
  task body, 586
  task context switch, 589–593
  task control, 579–595
  task object, 581, 585–587
  task queue, 588–589
  task scheduler, 589–593
  task stack, 582–585
  task state, 585–586
  task thread, 593–595
  task timer, 645–646
Terminal, 41, 76, 332, 371, 439, 458, 461, 496–497, 595, 710
Thread, 611, 634–637, 657–658
  task thread, 593–595
Timer
  programmable timer controller chipset, 233–235
  timer creation, 646–647
Tool
  toolbox, 7, 841–849
  toolkit, 841–849
  toolset, 832
Transformer, 22–27
Transmitter
  UART (universal asynchronous receiver transmitter), 706–708
  USRT (universal synchronous receiver transmitter), 708–709
  USART (universal synchronous/asynchronous receiver transmitter), 709–718
Trigger, event trigger, 621
U
UART (universal asynchronous receiver transmitter), 706–708
UDP (user datagram protocol), 322, 327, 419, 688–689, 689
UHCI (universal host controller interface), 341–342
UML (unified modeling language), 664–665, 666
UNIX (network operating system), 490, 516, 696
UPS device, 780
USB (universal serial bus), 339–344
USART (universal synchronous/asynchronous receiver transmitter), 709–718
USRT (universal synchronous receiver transmitter), 708–709
Ultrasonic
  ultrasonic actuator, 133, 136
  ultrasonic frequency, 136
  ultrasonic motor, 128–129
  ultrasonic sensor, 12, 13
Upstream, 142, 145–146
V
VHDL (VHSIC hardware description language), 242, 246, 255, 847–848
VITAL memory, 849
VLAN (virtual LAN), 322–324, 326
VLSI circuit, 248
VNAV (vertical navigation model), 360–361, 363
VPN (virtual private network), 322
Valve
  ball valve, 100, 144, 159, 165, 166
  check valve, 155–161
  control valve, 142–155
  directional valve, 112, 155
  float valve, 172–177
  flow valve, 177–181
  pneumatic valve, 101, 124
  relief valve, 161–165
  rotary valve, 104, 146, 151, 154–155
  solenoid valve, 165–172
Vector
  vector of state, 355
  vector table, 209–210, 211, 578, 609–610, 610
Virtual memory, 613–616
Visual Basic, 493
W
Watchdog
  watchdog mechanism, 641
  watchdog timeout, 641, 642, 646
  watchdog timer, 640–645
Waveform, 138, 233, 385, 391, 504, 556, 735, 737
Wheatstone
  Wheatstone bridge circuit, 80, 83
  Wheatstone resistor bridge, 34
Window
  command window, 10, 13
  sliding window, 759–760
Wired, 96, 262, 294, 306, 378–381, 571
Wireless, 135, 374, 381–383, 514–515, 682, 708, 742, 766
Word
  code word, 742, 744, 747
  command word, 230, 709
  control word, 230–231, 234, 237, 709–710
Word-parallel mode, 701, 702
X
XYZ, 8, 506
  XYZ client, 506
  XYZ server, 506