CONTROL SYSTEMS FUNCTIONS AND PROGRAMMING APPROACHES

Dimitris N. Chorafas
CORPORATE CONSULTANT IN ENGINEERING AND MANAGEMENT, PARIS
VOLUME B Applications
1966
ACADEMIC PRESS
New York and London
COPYRIGHT © 1966, BY ACADEMIC PRESS INC. ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.
ACADEMIC PRESS INC. 111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS INC. (LONDON) LTD. Berkeley Square House, London W.1
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 65-26392
PRINTED IN THE UNITED STATES OF AMERICA
To H. Brainard Fancher
FOREWORD

A striking feature of the scientific and technological development of the past 25 years is an increasing concern with the study of complex systems. Such systems may be biological, social, or physical and indeed it is easy to give examples of systems which combine elements from more than one of these areas. For instance, an unmanned satellite such as "Telstar" or "Nimbus" can be considered in purely physical terms. However, when an "astronaut" is to be involved in the system, a whole new realm of biological problems must be considered and, even more, the interaction between the biological and the physical subsystems must be taken into account. As we advance to large space stations involving crews of several men, we must add the complication of social problems to the systems analyses.

A characteristic feature of most complex systems is the fact that individual components cannot be adequately studied and understood apart from their role in the system. Biologists have long appreciated this property of biological systems and in recent years have attached considerable importance to the study of ecology or the biology of organisms in relation to their environment. Engineers and social scientists have profited from adopting this point of view of the biologists, and biological and social scientists are coming to an increased appreciation of the utility of mathematical models which have long been a principal tool of the physical scientist and engineer.

In recent years there has emerged the beginning of a general theory of systems and a recognition of the fact that, whatever their differences, all goal-directed systems depend for their control upon information. Its encoding, storage, transmission, and transformation provide the basis for the essential decisions and operations by which a system functions. As the volume of information necessary to control a system has increased and as the transformations that are required to be performed on this information
have become more intricate and time-consuming, systems designers have turned more and more to that information processor "par excellence"-the digital computer. In fact, the problems of control have become so complex that it is now necessary to consider in some detail the subject of Information and Control Systems.

The designer of an information and control system must be concerned with such questions as, "What is the nature of the information that can be obtained about the system I hope to control?", "Where and how can it be obtained and how can it be transmitted to a digital computer?", "What transformations of the input information are required in order to provide the output information necessary to control the system?", "What are the timing requirements on the output information?", "How do the answers to the above questions affect the design of hardware and programs for the digital computer?"

It is to problems such as these that Professor Chorafas, drawing on his wide background as an industrial consultant, directs his attention in this book.

OTTIS W. RECHARD
Director, Computing Center and
Professor of Information Science and Mathematics
Washington State University
CONTENTS

FOREWORD
CONTENTS OF VOLUME A
INTRODUCTION

PART VI. Process-Type Cases and Data Control

Chapter XXI. Computer Usage in the Process Industry
    Transitional Path in Computer Applications
    Evolution toward Process-Type Studies
    Integrated Applications in the Refinery
    Systems Concept in Data Control

Chapter XXII. Applications with Technical Problems
    Computer Usage in Chemical and Petroleum Engineering
    Studying Pipeline Problems
    Simulation Problems
    "Feedforward" Concepts

Chapter XXIII. The Rationalization of Management Data
    Developing an Integrated Information System
    Computational Requirements in Dispatching
    Using Applied Mathematics
    Example with Gas Dispatching

Chapter XXIV. Applications in the Field of Accounting
    Computerizing Oil and Gas Data
    General Accounting-Type Applications
    Approaching the Credit Card Problem
    Accounting Control through Sampling

Chapter XXV. Controlling a Power Production Plant
    Input and Throughput Action for Power Plants
    A Process Control System for Boiler Operations
    Operational Characteristics and Logging
    Case Study on Data Collection in Boiler Testing

PART VII. Applications in the Metals Industry

Chapter XXVI. Computer Usage in the Steel Industry
    Applications Review in Steel Works
    Systems Study in a Steel Process

Chapter XXVII. Mathematical Simulation in Steel Works
    Evaluating the Data Load
    Advantages to Be Gained

Chapter XXVIII. Production and Inventory Control at the Forge Metal Works
    East, Central, and West Divisions
    Tube and Specialty Divisions

Chapter XXIX. Quality Assurance as a Real-Time Application
    Quality Effects of Mass Production
    Using Product Assurance Indicators
    Case Study in a Tin Plate Plant

PART VIII. Guidance for Discrete Particles

Chapter XXX. Airline Reservations Systems
    Evaluating Systems Requirements
    The Basic System's Service
    Some Operating Characteristics
    Associated Systems Service

Chapter XXXI. Guidance Approaches to Air Traffic
    Simulation for Air-Traffic Control
    Real-Time Air-Traffic Control
    Weather Data Collection
    Using Weather Data for Predictive Purposes

Chapter XXXII. Aircraft Detection and In-Flight Tracking
    Aircraft Detection Systems
    An Early Experiment in Data Automation Techniques
    Computer Role in Satellite and Missile Tracking
    A Satellite Tracking Network

Chapter XXXIII. Railroad, Subway, and Car Traffic Problems
    Toward Railroad Automation
    Planning and Controlling the Rolling Stock
    Reservation Systems for Railroads
    Developments in Subways
    Motor Traffic Control

Chapter XXXIV. Digital Automation in Banking
    Looking Ten Years Ahead
    Problem Definition at the Bloomington Savings Bank
    Automating Basic Bank Operations
    Questions of Obsolescence

INDEX

CONTENTS OF VOLUME A

Part I. The Dynamics of Digital Automation
    Chapter I. Evaluating Data Control Opportunities
    Chapter II. The Function of a Data System
    Chapter III. Studying the General Structure
    Chapter IV. Principles of Systems Analysis

Part II. Data Collection and Teletransmission
    Chapter V. Data Collection Problems
    Chapter VI. Conversion Methods
    Chapter VII. Data Carriers
    Chapter VIII. Structural Aspects of Teletransmission

Part III. Numerical, Logical, and Stochastic Processes
    Chapter IX. The Use of Numerical Systems
    Chapter X. Fundamentals of Boolean Algebra
    Chapter XI. Classifications in Information Retrieval
    Chapter XII. Stochastic Searching

Part IV. Mathematics for Systems Control
    Chapter XIII. Analysis, Speculation, and Evolution
    Chapter XIV. The Mathematical Simulator
    Chapter XV. Evaluating Mathematical Programming
    Chapter XVI. Establishing Systems Reliability

Part V. Programming for Real-Time Duty
    Chapter XVII. Considering Programming Approaches
    Chapter XVIII. In-Memory Operations
    Chapter XIX. Real-Time Executive Routines
    Chapter XX. Programming Aids for Process Control
Introduction
In Volume B we shall be concerned with the key aspects of applications analysis. We shall consider mainly generic developments and, whenever necessary, historical views will be dominated by structural considerations. To read such a history of the computer applications effort is to discover the mainstream of the future usage of powerful man-machine ensembles.

A number of developments in computer usage have left a deep imprint on the art. In the years of its existence, the art of analysis for information systems has presented some outstanding achievements and, also, certain drawbacks. Most drawbacks were due to limited human imagination; to human reluctance to seek advice; to expediency in the hope that one could save time by doing away with preparation, background, and skill. Understandably, whenever and wherever this was the case, the results were disappointing. How often do we forget or fail to appreciate breakthroughs in applications?

For information systems to grow and evolve, their background technology should gain broad approval within the industry and among computer users as the first large-scale time-sharing systems are put in use. With this, better design of information systems, new uses for computers, efforts to develop more efficient programming aids, and "better overall utilization" might come to characterize the computer industry.

With respect to hardware, new equipment appearing on the scene in 1966-1967 will predictably fall in these categories:

• Time-sharing computers, involving polymodular systems with communications and real-time capabilities;
• A variety of industry-oriented terminals, for remote data transmission and response;
• Advanced random access storage units and information retrieval equipment;
• Microcomputers, able to be used both as slave machines and as independently operating devices.

But hardware, while important in itself, is only one of the subjects on which our interest must focus in the years to come. Since 1950, experience has taught the user that with computers and information science at large, he depends not upon one but upon three pillars: hardware, software, and applications analysis. These are considered in order of increasing difficulty, uncertainty, and challenge:

• Hardware has reached a level of development beyond which advances,
though significant, are not likely to leave us breathless. What is more, computer manufacturers have brought hardware design to a plateau-not only the leading companies but practically any company can make the score. And while some machines are better than others in "this" or "that" characteristic, they all present strong and weak points and-as of today-we don't see differences of great cost/efficiency significance among them.

• Software, including applied programming, libraries, languages, and
processors, has been, so far, less developed than hardware. Yet, in its current form it holds no particular mystery for computer manufacturers and users. The "uncertainty" to which we make reference is in respect to what the future may hold-not to the present facts. Rather than being played in the arena of professional secrecy or of technological breakthroughs, the battle for software is basically economic and financial. Less cryptically, this means that some manufacturers can afford to build a solid software support around their machines. Others just do not have the will or the money.

• Applications analysis is the least developed of the three-and the one in which the greater problems and opportunities lie ahead. Ironically, it is the users, not the manufacturers, who have put the most work into applications analysis so far-and who have gotten the most results out of it. Design automation, message switching, process control, and unified management information systems are but a few examples.

To be successful in applications analysis, the management of a company must be very keen, and knowledgeable of its task and of the need for thorough systems work. Without such work, it will not be able to implement efficient digital control schemes. In turn, this may call for organizational studies which will reach deep into the roots. To optimize production, a company will find it necessary to simulate the whole process of its plant through mathematical equations. To improve quality standards, management can no longer be based on a sampling analysis which arrives too late for corrective action.
It must speed up the reaction cycle so that quality can be followed on a
timely basis, or, otherwise, digital automation will be of no avail. A study which we did in the German industry, from November 1965 to February 1966, thoroughly documented this point. The study involved seventy-seven leading industrial complexes, governmental organizations, and universities. One hundred and fifty-nine senior executives were interviewed. The results are reasonably conclusive and help underline the outcome of research in American industry which we did in early 1965. These results can be phrased in a nutshell: "In the information systems field, the customer awakes after twelve long years of autohypnosis."

With this in mind, many of the practical examples included in this book have been selected for their ability to clarify ideas for the reader. These examples result, for the most part, from the author's own experience. Similarly, the principles and theoretical considerations, which are outlined in Volume A, are based on systems research projects personally accomplished during fourteen years of computer systems practice and an equal number of years of teaching in applied mathematics, systems analysis, engineering design, and electronic data processing.*

* Every chapter of the present book has been read by members of systems manufacturers and user organizations with which we work in a consulting capacity, and has received extensive commentaries. Every chapter has also been tested in formal university courses and executive development seminars, in order to assure that this will make both a professional reference and a basic academic training book.

Significantly enough, the users have become conscious of the complexity of the problem and are trying to do something about it. An excellent reference is what we have been told by many executives in leading German computer installations: they would gladly accept "basic English" in programming statements, in order to help simplify and promote the exchange of programs and skills in information technology-for they feel that communications in information technology can make or break the applications effort in the years ahead.

An example could be helpful. The "number one" subject for a major automobile manufacturer has been an integrated production planning and control system. The projected programming scheme will start with sales analysis and proceed through production dispatching and materials control. Over the past few years, this application was mostly financial and bookkeeping in nature. The new emphasis is based on forecasting techniques and on optimization through mathematical models.

The "number two" subject in management interest is design automation. The company plans to start a DAC model with particular accent placed on a well-rounded approach to design automation. The impact of such projects on computer requirements is self-evident; they add to management desire
for a change to third-generation computers, provided the applications make it justifiable. Similarly, several computer users have stressed the need for "data assurance." This concerns not only input, computing, storage, and output operations but also automatic checking on mistakes which can be introduced by the man component. At long last, the user has come down from the "cloud nine" of complacency. The trouble with conventional thinking in information technology is that it has been "always late and always wrong."

Another subject which has so far attracted less than its proper share of attention is "systems compatibility." Without it, the user does not realize the full impact of the equipment in which he has invested his money. The executive of a chemical combine whom we met dramatized this point by stating that "Compatibility, not cost, has been the critical touchstone in the recent choice of new computing gear." This company has long been a "one-manufacturer" bastion. Now, management considers two other computer makes and is impressed by a third-because of this very reason. For computer companies, the future share of the market will greatly depend on compatibility in "applications orientation"-which in turn imposes special limits on the hardware-software packages.

These thoughts are based on extensive research. The multinational research which we undertook during 1966 took us to twenty-four countries on four continents, and involved personal meetings with two Prime Ministers, six ministers of industry and commerce, five ministers of planning, eighty-five university professors, and a golden horde of leading industrialists-in all, two hundred and seventy cognizant executives representing one hundred and eighty-eight organizations.

The message is clear. Its phrasing is based on fifteen years of constant pitfalls. Technology, nowadays, is as important as ideology. Large-scale information machines, at present and in the years to come, bring about the need for a complete overhaul of many basic technologies. This will have an impact on the whole range of the evolution which we can forecast for man-made systems. The reference is significant, for while in the 19th and the "20A" century man was mainly concerned with energy production, in the "20B" century in which we are currently living, the focal point has shifted to information science.

November 1966
Dimitris N. Chorafas
PART VI

Chapter XXI

COMPUTER USAGE IN THE PROCESS INDUSTRY

During the last ten to fifteen years, the process industry has been a leader in automatic controls and thus has achieved a substantially low ratio of workers per dollar value of product. In view of this existing favorable position, a significant number of people in the industry were fairly convinced that it would be difficult to financially justify the introduction of expensive, master computers for real-time control applications. This, nevertheless, proved not to be the case.

One of the basic reasons why, though automated at the ground floor level, the petroleum and chemical industries look toward digital master guidance is that minor loop control through independent automatic control devices has limitations. These limitations are more striking in the process efficiency termed "yield" and in the quality control end of the line. Furthermore, some industries, by using electronic computers to optimize plant design and plant operation, placed the competition, which used "conventional" methods, in an unfavorable position.

In order to bring the operational advantages of digital automation into proper perspective, in the present and in the following parts of this work we will attempt:

• To define the problems associated with the use of master digital computer control
• To outline certain solutions to these problems
• To review case studies

More precisely, as far as the process-type problems are concerned, we will consider cases associated with the use of computing equipment, ranging from refinery operations to pipeline design and petroleum marketing. In our discussion we will focus attention on the advantages to be gained from computers, stressing the principal characteristics of planning for data
control, compared with the information handling practices of the past. For the designer of a real-time, integrated data system it will be most important to keep these differences in mind; the computer, like any other machine, must have the proper "fuel" for the control system it will command.
TRANSITIONAL PATH IN COMPUTER APPLICATIONS

One fundamental reason why we look toward the performance of real-time computer operations in the process industries, in an integrated manner, is that "guidance information" is not compartmentalized by function. Guidance information seeks to transcend the divisions that exist in a company and to provide the basis on which integrated plans can be made. This information must be flexible enough in its structure so that it can be used to measure performance and help in holding specific operations more accountable. It needs to cover fairly long periods of time, a requirement which is not apparent at first sight.

To illustrate the principles involved one might consider data needs in a small section of a refinery. In a crude-oil distillation column, for example, one of the lowest control levels would have the task of calculating, from measurements of the temperature and the pressure drop across an orifice in the feed pipe, the instantaneous rate of flow of crude oil. This value will need to be corrected to NTP as the oil passes into the pipe-still. The next control level would use this value to calculate the mass of oil passing into the pipe-still during a given period of time. For this purpose, the elapsed time and the density of the crude oil must be known. At present, in most cases, quality-density data are obtained from laboratory experiments. With a digital control system, it would be possible to monitor the density continuously and to use this as a further input. The raw material input so calculated is required by the next "higher control level" for the calculation of the material balance, which in its turn is necessary for the determination of the over-all economy of the plant.

Evolutionary applications of this nature must, of course, be based on "past experience." We have acquired by now considerable experience in off-line processing with electronic computers, but our on-line skill is still thin. Is the knowledge gained from off-line processing transferable to on-line processing? To help answer this question, we will review the transitional path taken by computers in this field.

When this book was written (1965), the Little Gypsy experience was one of the few known closed-loop application-research projects in the world. Maurin* has stated that the Louisiana Power and Light Company does not
* See reference in Chapter XIX.
believe that there is a present economic justification for computer automation of a steam electric generating unit. The popular justifications are based on predicted reductions in major mishaps, increase in fuel economy, reduction in manpower requirements, reduction in equipment maintenance, and reduction in major unit outages. According to the opinion of the Louisiana Power and Light Co., these remain largely future benefits to be fully realized with the refinement, standardization, and full utilization of computer control programs. For this company, the purpose of automating the first unit was to demonstrate the feasibility of computer automation, expecting that the automation of the second unit will, as well, demonstrate its practicability.

As Maurin was to say, the main reason for pursuing computer automation is the realization that automatic control of a generating unit must be achieved in order for the industry to satisfy its future generating requirements. With power producing units becoming too complex and qualified manpower too scarce, the human operator will not be able to continue to do his job unassisted. The technology, methods, approaches, and solutions being developed as a result of efforts along the digital automation line are a crucial link in future developments.

Evolution has been a step-by-step approach, often proceeding by mutations and involving a substantial variety of subjects, as a brief discussion of petroleum applications over the last ten years helps identify. Computer applications in the process industry have followed a long transition, with batch processing being the most prominent stage. "Necessity" and mere economics were mainly responsible for the steps that were taken and for the evolution in data processing which followed those steps.

An important German chemical firm, for instance, had been faced with the high, and perhaps unnecessary, cost of the shipment of its products from several plants to no fewer than sixty distribution points. Management was particularly concerned about transportation costs, which seemed to be unreasonably high. An intermediate computing system, to which this chemical company submitted its distribution puzzle, tried out all possible routes for individual shipments and came up with answers that provided the lowest transportation cost in each case. The solution of the problem effected major monthly savings.

Another leading German chemical concern has had a large-scale electronic data processing machine installed in its main office, the first computer of this size to be placed in a private European company. Accounting operations included handling of a 43,000-employee payroll, personnel statistics, and inventory control. Some linear programming problems had been programmed and run. The machine was also used to solve diverse technical and scientific research problems to further the company's production program.
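Distribution puzzles of this kind are, in modern terms, small transportation-type linear programs. The sketch below is purely illustrative: the plants, depots, supplies, demands, and unit costs are invented, and the formulation is a generic one rather than the routing method the German firm actually used.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: two plants shipping to three distribution points.
supply = np.array([400.0, 300.0])            # tons available at each plant
demand = np.array([200.0, 250.0, 250.0])     # tons required at each depot
cost = np.array([[4.0, 6.0, 9.0],            # shipping cost per ton, plant -> depot
                 [5.0, 3.0, 7.0]])

m, n = cost.shape
c = cost.ravel()                             # objective: minimize total shipping cost

# Each plant ships no more than it has.
A_supply = np.zeros((m, m * n))
for i in range(m):
    A_supply[i, i * n:(i + 1) * n] = 1.0

# Each depot receives at least what it needs (written as <= by negating both sides).
A_demand = np.zeros((n, m * n))
for j in range(n):
    A_demand[j, j::n] = -1.0

res = linprog(c,
              A_ub=np.vstack([A_supply, A_demand]),
              b_ub=np.concatenate([supply, -demand]),
              bounds=(0, None))
print(res.x.reshape(m, n))                   # tons shipped on each route
print("lowest transportation cost:", res.fun)
```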
Ambitious as this range of applications might have been for its time, the computer was still not on-line. Nonetheless, the cost of these off-line operations was substantial. The size of the computing systems involved gives an idea of the scale. In a major British paint company, for instance, a computer was used mainly for commercial and financial work. The first data to be processed related to factory costs, the records for control of raw material stocks, sales statistics, accounting, and production planning.

Correspondingly, an American chemical company stated that it put "the equivalent of 25,000 trained mathematicians" to work at its headquarters, when it installed a large-scale data processor. The first task of the machine was to prepare and turn out the company's big payroll. Proudly, the user announced: "With programming of the payroll application completed, the computer raced through all the necessary calculations in just twenty minutes, as compared with the seventeen hours required on previous equipment, and started turning out pay checks at the rate of more than 4,000 an hour...."

Some chemical and petroleum concerns that acquired a data processor worked out applications in quality control evaluation and analysis of manufacturing and distribution costs for particular products and product groups. Linear programming was by far the most favored model at this time. Eventually, the area of application was widened to cover some assignments that might be interpreted as the first, timid approaches toward real time. One of these "openings" was data integration with a view to subsequent guidance action. Similarly, the use of simulation methods opened new horizons in computer usage in the petroleum and chemical industries.

Some ten years ago, a petroleum company in Texas used a large-scale computer to simulate completely the operation of its oil refinery, running it in countless different ways. Many different types of crude oil had to be handled, and the mixture could be varied. The still can produce usable products directly, or it can produce feedstock for a cracking unit. This company's problem was to find the best combination of methods and processes, given a particular day's batch of crude oil. The company's scientists achieved this by building a mathematical model of the refinery. The crude batch on hand was fed to the computer and the machine produced a complete material balance for the given set of conditions.

This company also made use of a similar type of system to answer a variety of technical questions and enable the company's management to obtain immediate mathematical solutions not heretofore obtainable promptly enough to be of value. Questions the company specialists "asked the computer" included:
• How can we pinpoint the geological structures most favorable for productive drilling? • What is the day-to-day status of the company's crude oil reserves? • Which mix of available crude oils and rate of operation of various refining units will yield optimum type and volume of products? At about the same time, in Philadelphia, in one of the large petroleum refineries in the United States, processing 183,000 barrels of crude oil a day, an intermediate computing system was used to expedite technical tasks such as mass spectrometer analysis, manufacturing estimate of yield quantities of products obtainable from crude oil, and hydrocarbon type analysis. Here also the computer helped to develop a complete refinery mathematical model to simulate refinery operations and assist engineers in investigating optimum operating characteristics. In this same refinery, the daily tank gauges and inventory records provided by the computer comprised a complete inventory by grades of the contents of approximately 850 oil tanks, and included gross barrels, unavailable oil, available oil, room left in tank, and net barrels corrected for water and/or temperature as desired. The rapid computing speed of the data processor helped in oil and gas meter calculations concerning the volume of oil or gas flowing through a pipeline from data supplied by a flow meter. This machine handled twenty-five such meter calculations in a few minutes, a task that required more than ten hours with earlier methods. With data from the same integrated file, the refinery's computer did billing price computations, daily tank gauges and inventory, and propane-propylene calculations-used to set up and control an operating budget at the refinery.
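The meter calculations just mentioned, and the orifice measurement described earlier for the crude-oil feed, amount to a few lines of arithmetic per reading. The sketch below uses the standard orifice equation with an assumed discharge coefficient and invented readings; it illustrates the kind of computation involved, not the refinery's own program.

```python
import math

def orifice_volume_flow(dp_pa, density, orifice_d_m, pipe_d_m, cd=0.61):
    """Volumetric flow (m3/s) through a sharp-edged orifice from the measured
    pressure drop; cd is an assumed discharge coefficient, not a value taken
    from a metering standard."""
    beta = orifice_d_m / pipe_d_m
    area = math.pi * orifice_d_m ** 2 / 4.0
    return cd * area / math.sqrt(1.0 - beta ** 4) * math.sqrt(2.0 * dp_pa / density)

def mass_over_period(dp_samples, interval_s, density):
    """Next control level up: integrate successive flow readings over a period
    to get the mass of oil charged (density from the laboratory or from a
    continuous density meter, as discussed earlier)."""
    return sum(orifice_volume_flow(dp, density, 0.10, 0.20) * interval_s * density
               for dp in dp_samples)

# Hypothetical pressure drops (Pa) sampled every 10 seconds for one minute.
print(mass_over_period([2400, 2500, 2450, 2550, 2500, 2480], 10.0, 860.0), "kg")
```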
EVOLUTION TOWARD PROCESS-TYPE STUDIES

If this was the state of the art some ten years ago when computer applications were just starting, how far have we advanced? Some examples may help answer this question. A recently built refinery in Delaware has developed the groundwork necessary for continuous, automatic blending, automatic data logging, reporting of yields, and inventory control. Another petroleum company developed automatic evaluation procedures for the handling of crude oil.

Crude oil evaluation does not necessarily follow any set pattern in computerization. Each oil company has tended to use its own approach. Calculation along this line is normally made on the assumption that the crude oil to be evaluated is processed through some hypothetical refinery at incremental yield and at incremental product values. Here exactly lies one of the great challenges in the use of digital data control
systems. The output data from the computer offer a basis for a rational comparison of several crude oil formulas. Hence, as with most data generation practices, the critical operation is the formulation of an accurate and yet not very complex mathematical model of the operations or processes in question. This point helps bring into proper perspective the fact that process control can be implemented efficiently only if we consider an approach to data processing that is integrated both mathematically and electronically.

One of the basic uses of mathematical simulation in refinery operations is the determination of an accurate over-all refinery material balance consistent with optimal product rates and available plant capacities. Although a highly detailed optimizer may result in a mathematical megasystem, which we are not yet in a position to handle, it is also true that a simulator built to the objective should include certain provisions for determining under what conditions pre-established processing options are to be exercised. Allowing for flexibility is a challenging problem for a computer programmer. A possible solution to problems of this type is to permit a library of data to be built up in the program, which covers all of the manufacturing options exercised to date on a particular unit. These data include charge, rate, yield, and disposition of the products as well as the type of "change" that can be expected. Calculations of this nature are quite tedious, which accounts for the interest in electronic computational media.

With digital process control applications, as with any other type of major systems analysis project, it all starts in the brains of the researcher. He is the one who identifies the variables, studies the perimeter of the system, and defines the limits. Depending on the breadth and scope of the simulator, entire refinery operations might be completely scheduled by the data processor. The computer could also summarize all the product dispositions by name, pricing each component at its sales value and extending the amount. A similar recapitulation can be performed on the crude charges; the difference in totals may provide a measure of the gross profit for the refinery before operating costs are deducted. But should one be content when he has reached this point? The technical feasibility is not the real problem. Applications of this nature have been tested, and they have proven to be quite successful for the oil industry-though not always the most economic.

One manufacturer uses a similar approach for a detailed crude oil evaluation, where consideration is given to the entire refinery scheduling problem rather than simple shortcut approximations. A typical run consists of processing, for instance, fifteen batches of crude oil with the entire refinery balanced. The data handling time is approximately three quarters of an hour to one hour. Computers are also used for the evaluation of new processes for the refinery, particularly in cases where they augment or replace existing processes.
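The recapitulation just described is, arithmetically, nothing more than pricing the product dispositions at sales value, pricing the crude charges at cost, and taking the difference. A toy version, with invented volumes and prices, follows.

```python
# All figures are invented for illustration.
dispositions = {"gasoline": 42_000, "kerosene": 11_000, "fuel_oil": 23_000}   # barrels
sales_price  = {"gasoline": 4.10,   "kerosene": 3.20,   "fuel_oil": 2.05}     # $ per barrel
crude_charges = {"crude_A": (50_000, 2.60), "crude_B": (30_000, 2.35)}        # (bbl, $/bbl)

product_value = sum(bbl * sales_price[p] for p, bbl in dispositions.items())
crude_cost = sum(bbl * price for bbl, price in crude_charges.values())

# Gross profit for the run before operating costs are deducted.
print("gross profit: $", round(product_value - crude_cost, 2))
```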
It is essential to stress that every case must be examined on its own merits. Analytic studies for the petroleum industry, for one, indicate that, to be of significant value, research on gasoline blending must go far beyond the simple combining of components in some best fashion. Gasoline economics invariably involves establishing a basis for decision making on such matters as replacement costs, pricing of by-products, and optimal distribution of processing costs among joint products.

The principal reason for taking a favorable attitude toward potential digital experimentation is the financial results obtained thus far along this line. Mathematical simulators have been used to advantage by the process industry in optimizing raw stock purchases, product sales, multiple plant processing, etc. In most cases, the crucial variables are plant capacity in terms of barrels per day, sales potential for all products that can be produced in the vicinity, and the availability and price of all those raw material stocks that can hypothetically be run to make the products in question. The computer may in turn select the combination that utilizes available plant capacity to the maximum extent possible and produces products that realize the greatest profit. The answers to such manufacturing problems lie in the evaluation of quantitative data, which is not feasible by manual evaluation methods. They also include such economic information as the incremental profitability for each product that was made and the size increment to which this profitability applies.

Not only can mathematical experimentation be used efficiently in the optimization of operating conditions, but the simulator, once built, can be used to evaluate future facilities. Among the most time-consuming problems facing any processing industry is that of determining the justification for new facilities. Electronically processed models may effectively remove much of the guesswork that enters any management decision on plant expansion. The results are reasonably encouraging. In a number of applications accomplished to date, mathematical models have succeeded in comparing alternative manufacturing policies and in providing management with quantitative data. For instance, a simulator, by experimenting on operating and raw stock costs and amortizing the investment, can accurately determine how many commodities can be produced before the operation of existing or projected facilities becomes unprofitable. Studies of this type have led to drastically altered concepts concerning the profitability of certain operations, and there are cases where million-dollar projects have been canceled after management received financially unfavorable answers based on experimental data.
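In its simplest form, the facility-justification question reduces to a break-even calculation of the kind sketched below. The figures and the straight-line amortization are assumptions made for illustration; an actual study would rest on the simulated operating and raw stock costs the text refers to.

```python
def break_even_volume(investment, life_years, annual_fixed_cost,
                      unit_price, unit_variable_cost):
    """Smallest yearly output at which a projected facility stops losing money,
    with the investment amortized straight-line over its life."""
    yearly_charge = investment / life_years + annual_fixed_cost
    margin = unit_price - unit_variable_cost
    if margin <= 0:
        return float("inf")        # the product can never pay for the plant
    return yearly_charge / margin

# Hypothetical: $6M plant, 12-year life, $250,000/yr fixed charges,
# $31/ton selling price, $22/ton variable cost.
print(break_even_volume(6_000_000, 12, 250_000, 31.0, 22.0), "tons per year")
```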
INTEGRATED APPLICATIONS IN THE REFINERY
The major interest of computerization in the oil industry is the gradual
integration of the whole production system into one automatic network. Nevertheless, such integration can be accomplished only by building up a substantial body of experience in applied mathematics. One example is the automatic handling of crude oil evaluation procedures. By means of an electronically processed simulator, management may have available data on a larger number of alternatives. Normally, the multiplicity of refinery scheduling problems prevents the engineer from having a free hand in varying the operation of a distillation tower to determine its performance. There exist today calculation techniques that are both refined and reliable, but also time consuming. These represent ideal applications for electronic computers in that the mathematical techniques are known and proven, and the engineers who use the answers have confidence in the methods. Data processors can then be used advantageously to eliminate the constraint of the time requirements. Figure 1 presents a schematic diagram of such an installation.

The mathematical application in the foregoing case uses statistical methods, such as multiple regression, with criteria being imposed to determine which variables make a significant contribution to the correlation being derived. The objective here is to develop an analytic function that will describe the performance of an operating unit, and for this analytic function to contain the minimum number of variables that will define the unit's behavior within the accuracy of the basic data from which the function was derived.

A distillation tower design program uses correlation methods to predict the tower requirements needed to accomplish a specified multicomponent separation. This program is intended to supplement the more rigorous and time-consuming techniques, used in conjunction with a plate-to-plate model, to establish a first approximation to tower design. Results of the calculation include the minimum stage and minimum reflux requirements for the specified separation, a distribution of components other than keys at total reflux, and a tabulation of theoretical stage vs. reflux requirements.

A regression analysis program has been written for the study of several variables to see how they are interrelated; for example, to experiment on the change of the dependent variable when two or more independent variables change. A multiple regression equation does this by combining all the evidence of a large number of observations into a single statement, which expresses in condensed form the extent to which differences in the dependent variable tend to be associated with differences in each of the other variables as shown by the sample. The program in question was designed to calculate automatically the constants and coefficients for a wide range of polynomial regression equations. It will also indicate to the program user the accuracy, in a statistical sense, of the selected equation.
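As a minimal illustration of what such a regression program computes, the sketch below fits a low-order polynomial to invented operating data and reports its coefficients together with R-squared as the measure of statistical accuracy; the variable-selection criteria mentioned above are omitted.

```python
import numpy as np

# Hypothetical operating data: a primary variable x and an observed performance y.
x = np.array([1.2, 1.5, 1.9, 2.3, 2.8, 3.1, 3.6, 4.0])
y = np.array([61.0, 66.5, 72.1, 76.0, 78.8, 79.5, 79.9, 79.2])

coeffs = np.polyfit(x, y, deg=2)          # constants and coefficients
fitted = np.polyval(coeffs, x)
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot         # the accuracy, in a statistical sense

print("coefficients:", coeffs)
print("R^2:", round(r_squared, 4))
```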
FIGURE 1. Schematic diagram of such an installation: the process, an interface computer with a direct line to the master computer, and the input-output media (printer, punched tape, card reader-punch unit).
Heat exchanger calculations can be performed by means of another routine. Its objective is to enable operating personnel to maintain an up-to-date record of heat exchanger performance. The program may also be used to evaluate the effect of process changes on heat exchanger performance and to compare alternate designs. Thus better maintenance planning and more frequent evaluation of heat economy can be realized. The program uses the same mathematical methods as are normally used in hand calculations. Physical properties of the material are obtained from laboratory analyses and from standard references. A heat and material balance is calculated, depending on the information available. The actual or assumed configuration
of the exchanger is used to determine correction factors for the heat transfer equations.

A major oil producer has cut the cost of testing potential new catalysts and testing catalyst life by more than fifty per cent using a pair of specially designed pilot plants, along with a computer analysis of the data. This setup permits rapid preliminary evaluation of catalyst performance, and a thorough study of catalyst preparation techniques. It also makes possible a study of the gross effects of process variables as related to a variety of refining processes, such as hydrogenation, isomerization, and catalytic reforming. Results from the analysis made on the computer give the catalyst activity index, the catalyst activity decline index, and the statistical dependability of the data.

Applications along this line are in sharp contrast to our experience in the early fifties, when, as far as mathematics was concerned, computers were used mainly for the solution of sets of simultaneous equations that express the process relationships. The computer would first calculate the secondary characteristics or other criteria from the measured primary variables. It would then determine the corrections for the controller settings by relating these factors in accordance with fixed mathematical expressions. In this sense, the process itself was controlled dynamically by the instruments. There was no heavy time burden on the over-all computer operation, nor were there any on-liners in existence.

As we have emphasized throughout this work, a more sophisticated approach to optimization places the computer in the role of an automatic and efficient on-line experimenter. Consider for instance the process depicted in Fig. 2. Primary measurements are fed into the machine, which then calculates secondary characteristics and changes one or more primary variables to determine the effect of the changes on the secondary characteristics. From a microscopic point of view, "this procedure" is continued until the optimum conditions are obtained. But what about the gray matter that needs to be invested in order to make this approach possible?

This last reference includes many critical questions concerning systems performance. Within the framework of the basic architectural design, the analyst must establish his horizons at a broad enough level to permit the optimization of total performance. Yet, these horizons must be sufficiently limited to allow a deep and serious look into each function included within their boundaries. As Maurin correctly stated, the design of a computer control system also should recognize the need for standardization of programs and for improvement of man-machine communications. This particular reference supports the thesis that control systems development should recognize that the computer program must be completed in blocks, with each block corresponding to a definable plant subsystem.
FIGURE 2. Control diagram of a fractionating tower, showing the reference input to the loop and the measured output.
This is the case with subsystems such as feedwater, boiler, turbine latch, turbine acceleration, and stator cooling. Accordingly, the control program on Gypsy I is composed of twenty subsystems, and on Gypsy II, of twenty-three. Each of the plant subsystems may be further subdivided into major equipment components or segments. To achieve standardization in the programming work, each block or segment is composed of "standard" elements, containing four fundamental operators: the "Actual Status Determiner," the "Desired Status Determiner," the "Plan Picker," and the "Worker." Quoting from Maurin:

• The Actual Status Determiner analyzes associated and/or related inputs and stored information and determines and stores the status of the associated system and/or equipment. This stored status is now available, from memory, to all other interested programs.
• The Desired Status Determiner analyzes plant and system conditions and determines and stores the process and system requirements of associated equipment.
• The Plan Picker analyzes the results of the actual status and desired status determiners and other pertinent stored information to determine the desired course of action, if any, to be taken.
• The Worker is a very brief and specialized program designed to execute the prescribed action.
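A minimal sketch of such a "standard" element follows. The function names, the shared memory dictionary, and the input fields are all invented for illustration; they mirror the four operators quoted above, not the actual Little Gypsy code.

```python
memory = {}   # shared store that all programs read from and write to

def actual_status_determiner(inputs):
    """Analyze raw inputs and store the current status of the equipment."""
    memory["actual"] = "running" if inputs["valve_open"] and inputs["flow"] > 0 else "stopped"

def desired_status_determiner(plant):
    """Analyze plant conditions and store what the equipment should be doing."""
    memory["desired"] = "running" if plant["load_demand"] > 0 else "stopped"

def plan_picker():
    """Compare actual and desired status and pick a course of action, if any."""
    if memory["actual"] != memory["desired"]:
        memory["plan"] = "start" if memory["desired"] == "running" else "shut_down"
    else:
        memory["plan"] = None

def worker():
    """Execute the prescribed action (here only reported, not sent to hardware)."""
    if memory.get("plan"):
        print("executing:", memory["plan"])

def executive(inputs, plant):
    """Run the four operators in the stated sequence."""
    actual_status_determiner(inputs)
    desired_status_determiner(plant)
    plan_picker()
    worker()

executive({"valve_open": True, "flow": 12.5}, {"load_demand": 0})
```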
These programs are brought into operation by an executive routine, as they become necessary. The necessity, of course, is dictated by particular structural situations or by control system restrictions. Executed in the stated sequence, if combined, or on change of state of determinations, if separated, the subject functional programs are used to start up, run, and shut down the generating unit.*

* Startup and shutdown are accomplished by the proper sequencing and direction of these programs.

The preservation of the running state of each piece of equipment, component, or subsystem, and the corresponding execution of corrective actions, is accomplished by triggering the programs in question on input scan detection of a change in operational conditions. With respect to control functions, according to the same reference, the input scan checks all contact closure inputs each second and analog inputs at a frequency determined by the program. These analog scan frequencies are chosen at 1, 2, 4, 8, 16, 32, and 64 seconds.

The actual status determiner is executed if any one of three conditions is realized: the contact closure scan detects a change in state of an associated contact closure input; the analog scan detects a deviation from limits of an associated analog input; or there is a change in state of a program-determined status which might affect the established operational systems procedures. Similarly, at Little Gypsy the desired status determiner is executed if there is a change in state of a program-determined status, or a request that might affect the determination of this program.

Within this complex, it is also necessary to underline the importance of efficient man-machine communications. The same is true about the need for adequate systems provisions for experimentation purposes. If experimentation is to take place, simulation of the process would have to be included in the supervisory control loop. The computer can then evaluate alternatives to determine the adjustments that should be made to the process. These optimum set points could be automatically transferred to process controllers at appropriate intervals. Correspondence to actual process conditions can be achieved by correcting the steady-state conditions of the simulator before the cycle is repeated. This implies the development of a substantial number of mathematical programs.

The goal is the dynamic control of the process under consideration. Apart from computers and mathematics, this will require control instrumentation
which will monitor each channel. Processes that operate with fluctuations in secondary characteristics caused by small upsets, and those with time-variant characteristics caused by such factors as deterioration of catalysts and drift of heat exchange controls, could greatly benefit by using a digital computer to maintain consistent optimum conditions. Furthermore, the advantages offered by the usage of digital computers in process industries make continuous plant operation at peak efficiency a distinct possibility. By incorporating the computer in its control system, a process industry obtains high-speed computation on complex problems, versatility and adaptability to the various types of applications, and the ability to time-share operations among sources of input data and output responses. Thus, almost all phases of a process can be analyzed, and corrective measures can be computed and applied almost simultaneously. But applications of this type, although realistic, should be approached with care.
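The "automatic experimenter" role described earlier in this chapter can be pictured as the following loop: perturb a set point, let the process settle, keep the change if the secondary characteristic improves, and refine the step when it does not. The stand-in process response and all numbers below are invented; a real installation would read the measurement back from the plant after each change.

```python
def measured_yield(set_point):
    # Stand-in for the real process response, unknown to the control computer.
    return 87.0 - (set_point - 342.0) ** 2 / 500.0       # peak yield near 342 degrees

def optimize_set_point(initial, step=2.0, min_step=0.1):
    """Perturb the set point in either direction, keep improvements,
    and halve the step when neither direction helps."""
    current = initial
    best = measured_yield(current)
    while step >= min_step:
        for candidate in (current + step, current - step):
            value = measured_yield(candidate)
            if value > best:
                current, best = candidate, value
                break
        else:
            step /= 2.0          # no improvement either way: refine the search
    return current, best

print(optimize_set_point(320.0))
```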
SYSTEMS CONCEPT IN DATA CONTROL In principle and in practice, the techniques of process control are never any more advanced than the men who conceive them, and the people and machines that execute them. If new ideas have led to the development of new machines, it is equally true that machines themselves have stimulated methodological changes. But the implementation of these changes has never been better than the quality of the men called to put them into effect. More often than not, "man" has been the delay element in industrial evolution. The search for new systems and procedures, having satisfactory data characteristics, began with an appraisal of plant operations, and the first form under which it became known was that of study for an "improved method." Today we no longer speak of method changes; we are concerned with system evolution. The study of the industrial problems within this enlarged framework produced the "systems concept." The systems concept became, in turn, the guide for formulating the approaches to be employed in industrial management. The on-line integrated data system designed to be operative on a plant-wide basis, using the latest devices and methods of processing data, is but part of the whole problem of data control. To make data control systems work requires an intelligent systematic effort, an effort which goes well beyond superficial thin-skinned research. The systems analyst should start with a snapshot, with a sound review of the current activities. This review must encompass all functional areas. It must not be oriented in the light of what is now being done, but with a view toward achieving results with what would be the production actuality five to ten years from now. It is essential then that, before a control systems study is
initiated, someone establish the operational objectives of the resulting ensemble.

It is only natural to consider current practices. The systems analyst should, however, concentrate on handling these facts in a manner that would make the best use of the equipment he contemplates and of the procedures he projects. Then, the analyst would need to have a thorough background of the current state of the art. Computers, for instance, able to meet the data deadlines imposed by the physical system that they control, have achieved thus far the following functions:

• Preparation of process operating reports, including the assembling, screening, evaluating, and editing of operating data.
• Mathematical analysis of process operations, using operating data as the ground material, and applying predetermined criteria to interpret process operations.
• Scanning for low or high values, calculation of yields, and integration of flow rates are but a few of the many possible applications.
• Calculation of controller set points.
• A logical extension to analysis of process operation is calculation of the corrective action to be taken when an upset occurs. In addition to correcting upsets, the computer can be used to optimize the operation, using past performance as one of the criteria.
• Control of systems dynamics, in a guidance-oriented sense, with various degrees of sophistication. It is to this type of control action in particular that we made reference when we spoke about the three interrupt modes and the desired multistorage capabilities of the information machine.

Computer control function can be adjusted to conform to the changes in plant dynamic responses brought about by system nonlinearities and time-varying factors. In this sense, information processing machinery represents an extension of communications engineering, of servomechanisms, and of automatic control. The deadline requirements make the time sensitivity of a computer a crucial characteristic of its on-lineness, which is judged by the ability of the machine to accept information input at many terminals, to enter, process, and extract information directly, and to offer responses to individual information requests. Another word used to characterize a data-control system is "integrated,"* meaning that all, or nearly all, data processing functions can be handled by the system, or through a network of machines communicating with each other in an automatic manner.

* Not to be confused with "integrated data processing," where our main reference is to the integration of the files and their unique location.
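A minimal sketch of the "scanning for low or high values" function listed above: each reading is compared against limits and the out-of-limit points are flagged. The measurement points and limits are invented; a real system would take them from the plant data base and trigger alarms or corrective programs.

```python
limits = {                      # point: (low, high) -- assumed values
    "tower_top_temp_C": (95.0, 120.0),
    "feed_rate_m3_h": (40.0, 65.0),
    "drum_level_pct": (25.0, 80.0),
}

def scan(readings):
    """Compare one scan of analog readings against limits; return the exceptions."""
    alarms = []
    for point, value in readings.items():
        low, high = limits[point]
        if value < low:
            alarms.append((point, value, "below low limit"))
        elif value > high:
            alarms.append((point, value, "above high limit"))
    return alarms

print(scan({"tower_top_temp_C": 123.4, "feed_rate_m3_h": 52.0, "drum_level_pct": 21.0}))
```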
An example comes from a bakery. To help understand the usage of automatic control in a biscuit bakery it is advantageous to briefly review the process involved. This process, up to and including the mixing of the dough, is carried out on a batch production basis; all the remaining manufacturing processes are of a continuous nature. In many bakeries, dry ingredients are mixed in predetermined proportions. Nondry materials are added to the mixture in appropriate quantities. Besides industries making biscuits, cakes, chocolates, and sweets, other industries whose manufacturing process could be described in a similar manner include cement, dry batteries, and fertilizers. All these processes are virtually dependent on the correct proportioning of a variety of ingredients together with adequate mixing. Both proportioning and mixing are normally carried out in batches since it is easier to weigh, or measure, the volume of the ingredients and to mix in batches, than to use continuous production methods.

Technically speaking, the automatic determination of the proportioning of ingredients is a feasible operation. Recently designed equipment, for instance, is arranged to:

• Indicate when the level in the bins or silos reaches maximum and minimum allowable values
• Regulate feeders, blowers, and control valves
• Measure the weight and volume of both dry and liquid ingredients
• Deliver weighed ingredients to a selected mixer and control the dry mixing time
• Deliver liquids into a mixer and control the mixing time
• Deliver mixed ingredients to a belt or other suitable receptacle
• Record the time of delivery, source, weight, and quantity of each ingredient

The data regarding the quantities of each ingredient and the source from which each is to be taken are punched on paper tape in coded form. When the tape is fed into the reader, a control panel, associated with the mixers, is operated to select a particular mixer and to initiate the selection of ingredients and the delivery of the weighed dry ingredients to the mixer. The dry mixing time is set by adjusting a timing circuit. The liquids are simultaneously measured and released into the mixer at the completion of the dry mixing period. The wet mixing period is set by adjusting another timing circuit. When all the ingredients have been delivered to the mixer, an operator can signal another mixer to take a second batch and again initiate the process. Different types of products can be handled simultaneously, the number of stores and mixer control units provided being different in different installations to suit the production requirements.

The level indicators in the bins or silos may be arranged to switch off the
feeders when the low level is reached so as to prevent faulty measurement or the delivery of a blast of air to the scale pans. In addition, they may be arranged to cause the bin, or silo, to be automatically refilled or to switch the feed to a spare one containing the same ingredient while the original is being refilled. Weight scales for a variety of maximum weights can be provided, according to the range of measurements required. To enable the operation of an integrated information circuit, each weight scale can be connected to a proportioning control panel on the main control desk by media including a motor-driven coded disk. This forms a feedback system which compares the measured weight with the desired weight and holds the hopper valve open until the two weights coincide. The nominal resolution of the feedback system is one part in a thousand, its actual value depending on the full-scale reading of the particular weight scale and on the accuracy required. A number of variations on the mode of operation are possible, and, if the feed arrangements allow, several weight scales may be used simultaneously.

Naturally, the electronic control need not be confined to the preparation of the mix. Data-carrying punched tape can be used to select or guide many of the functions that occur beyond the point at which the dough is delivered from the mixer. In this way, a substantial extension of data control is feasible to the remainder of the plant. This is the case, for instance, with the control of speed for the rolling process, the thickness of the rolled dough, the selection of dough-cutting dies, and the control of oven temperature and baking and cooling times. Such operations could be carried out automatically. In this same sense, when the bakery products have been packed, they can be automatically sorted into types, counted, and transported to the appropriate section of the stock room. Hence, the greater part of the production process and the various mechanical handling stages may be integrated into a single electronically controlled production system, the control being exercised from one (or several) stations. Information regarding the state of affairs at a number of monitoring points may be fed back to any suitable location.
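The batch-mixing sequence described above can be pictured as a small recipe record plus a sequencing routine, as in the sketch below. The recipe fields, quantities, and the stand-in delivery and mixing interfaces are all invented; the punched-tape codes of an actual installation would of course look quite different.

```python
# Hypothetical recipe record of the kind carried on the punched paper tape.
recipe = {
    "mixer": 2,
    "dry": [("flour", 120.0, "silo_3"), ("sugar", 30.0, "bin_7")],    # kg, source
    "liquid": [("water", 45.0, "tank_1"), ("syrup", 8.0, "tank_4")],  # litres, source
    "dry_mix_seconds": 90,
    "wet_mix_seconds": 180,
}

def run_batch(recipe, deliver, mix):
    """Sequence one batch: weigh and deliver the dry ingredients, dry-mix,
    add the liquids, wet-mix, then the mixer is free for the next batch."""
    for name, qty, source in recipe["dry"]:
        deliver(recipe["mixer"], name, qty, source)
    mix(recipe["mixer"], recipe["dry_mix_seconds"])
    for name, qty, source in recipe["liquid"]:
        deliver(recipe["mixer"], name, qty, source)
    mix(recipe["mixer"], recipe["wet_mix_seconds"])

# Stand-ins for the plant interfaces; a real system would drive feeders and valves.
run_batch(recipe,
          deliver=lambda m, n, q, s: print(f"mixer {m}: {q} of {n} from {s}"),
          mix=lambda m, secs: print(f"mixer {m}: mixing for {secs} s"))
```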
Chapter XXII
APPLICATIONS WITH TECHNICAL PROBLEMS
In Chapter XXI, we advanced the thesis that, through integrated real-time operations, computers can be used in process-type industries in a variety of ways involving the efficiency of the operations and the quality of the product. The field where this holds true ranges in fact from the "simple" preparation of traditional statistics and payrolls to the control of entire processes. Multiprogrammed machinery can simultaneously determine optimum gasoline blends, control requests for maintenance, order materials to replenish depleted warehouse stocks, simulate actual refinery operations, select crude oils for refinery runs, help design equipment, check operating conditions, speed up refinery oil accounting, or schedule startup and shutdown procedures. Speed, efficiency, accuracy, and hitherto "impossible" calculations become possible through the sophisticated use of the data processor. Nevertheless, we also made reference to the fact that on-line applications, aside from closed-loop control, are, as of today, new and timid. On the contrary, a variety of technical problems faced by the chemical, petrochemical, and petroleum industries have been successfully approached by means of computers on an off-line basis. Distinguished among them are:
• Pipe stresses, such as stresses due to thermal expansion or cold springing.
• Pipe network distribution systems. Computers have been used successfully in connection with balances of flow and pressure drops within loops and systems of loops.
• Scheduling and dispatching systems for gas and oil, including the simulation of ensembles involving oil pipelines, delivery depots, and dispatching centers.
• Problems of scientific analysis, such as isothermal flash equations, heat and mass balances, pressure drop in lines, compressor horsepower requirements.
• Problems involving molecular weight, boiling point, critical temperature, critical pressure, critical density, gross heating value per cubic foot of gas or per pound of liquid, free energy functions, heat content functions, the number of atoms of H, C, S, and N per molecule, and the like.
• Design of self-supporting tower foundations, including the calculation of the minimum-cost foundation for the allowable soil bearing pressure, wind load, etc., where the obtained results indicate the steel and concrete required, with a size, quantity, and price breakdown.
• Other studies, such as the design of cooling towers, with particular emphasis on functions relating air and water temperatures and tower dimensions, and the calculation of compressor foundation vibrations.
Some of these programs have in fact been quite successful, with respect to both their scientific and their marketing values. A pipe stress analysis scheme developed by a major computer manufacturer has been used by over 140 refineries, chemical plants, and petroleum construction firms. This program permits the checking of design piping by the most rigorous methods possible, and helps to check installed piping. It also solves pipe stress problems due to temperature expansion for both two-anchor and three-anchor systems, and can handle members of different size and material in the same system. Since we established that off-line applications in research, engineering, management, and general business are the only "wealthy" references in computer usage we now have, in this and in the following chapter we will present examples of this type of application.* These examples will include technical studies as well as accounting and marketing problems. Throughout our discussion, we will often make reference to the corresponding real-time system.
COMPUTER USE IN CHEMICAL AND PETROLEUM ENGINEERING
With field operations, for one, the mathematical calculations required in preparing geophysical data for the most effective interpretation are laborious and in some cases unfeasible without the use of electronic computing methods. The analysis of seismic data, to obtain the best results in interpretation and precision, requires the use of complicated mathematical procedures. By enlisting the aid of the computer, new reservoirs are being discovered and their extent and depth more accurately evaluated. Other
* For an on-line design application, see the discussion on DAC-1 in Chapter XV.
applications in this field include the reduction of seismic data, seismic migration, conversion of seismic travel time to depth, velocity profiling, and calculations to determine the reflection response from a series of multiple transition layers. In addition to land studies, computers have been effectively used for the over-water electronic surveying method. Positions are determined in terms of hyperbolic coordinates. The conversion of these coordinates to those commonly used in surveying and mapping is the task of data reduction. To point out the problems that arise in water flooding, it is sufficient to mention the need of considering the variations in pressure all over the reservoir. To arrive at an injection well pattern that will result in the most economic recovery of oil, one third of a million points may have to be considered. A change in pressure at one point in the field affects the pressure at every other point. In addition, to calculate the behavior of an oil reservoir during water flood, multiple physical factors have to be considered, such as porosity of the oil-bearing rock, permeability, hydrostatic pressure, and gravity and capillary forces. For another case, when an oil reserve is discovered, the drilling and production of this source present new problems for study. Proper recovery involves many changing factors. In the development of modern oil fields, geologists use the data obtained from drilling the discovery and exploratory wells to evaluate underground conditions and the extent of the field. Other problems include: water drive, analysis of reservoir performance, design of water flood injection patterns, flash calculations, reduction of field data, reduction of laboratory data, harmonic analysis of ocean waves, calculations of the depletion history and future performance of gas cap drive reservoirs, the setting of interstage pressures in surface separation equipment, reservoir mechanics, pressure buildups and reservoir flow, X-ray tables, analysis of displacement data, and relative permeability. At the refinery level, among the most important applications are catalytic cracking and the necessary experimentation to obtain the maximum catalytic activity and reduce the consumption of catalyst; gasoline blending and testing for the selection of type and quantity of stocks to be blended; the definition of crude oil processing values; and throughput and equipment design problems, such as multicomponent distillation calculations, heat exchangers, and process furnaces. An electronic data processing system, designed specifically for scientific computation, is the heart of a large-scale technical computing center of a petroleum manufacturer. With this machine, the engineering and laboratory divisions of the company carry out the major part of their calculation load. Primarily, the computer is used to carry out calculations on the design of oil processing equipment.
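As an indication of why such reservoir calculations are inherently iterative, the sketch below relaxes a steady-state pressure field on a small grid with one injection point and one producing point held at fixed pressure; each Gauss-Seidel sweep lets a change at any cell propagate to every other cell. Grid size, pressures, and tolerance are arbitrary assumptions for illustration and do not reproduce any of the studies mentioned above.

```python
# Gauss-Seidel relaxation of a steady-state pressure field on an n-by-n grid.
# Two cells are held at fixed pressure to stand in for an injection well and
# a producing well; every sweep lets each change propagate across the field.
n = 20
p = [[0.0] * n for _ in range(n)]
fixed = {(3, 3): 3000.0, (16, 16): 500.0}   # (row, col): pressure, psi (assumed)

for (i, j), value in fixed.items():
    p[i][j] = value

for sweep in range(2000):
    largest_change = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if (i, j) in fixed:
                continue
            new = 0.25 * (p[i - 1][j] + p[i + 1][j] + p[i][j - 1] + p[i][j + 1])
            largest_change = max(largest_change, abs(new - p[i][j]))
            p[i][j] = new
    if largest_change < 0.01:               # converged to the chosen tolerance
        break

print(f"converged after {sweep + 1} sweeps; mid-field pressure {p[n//2][n//2]:.1f} psi")
```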
Outstanding scientific analysis with blending and design work has been done with various types and sizes of computing equipment. For instance, an intermediate system is used to process 100 analyses of samples of hydrocarbon gas from refineries in eighteen minutes. This task required three days with conventional punched card equipment. In other technical uses the machine helps engineers to investigate the optimum operating characteristics for refineries, and to study crude oil allocation, transportation, and product mix. The data processing system in question handles refinery operating data to help increase the quantity and quality of product yield, and prepares profitability reports for product planning purposes. A number of machine programs have been developed by computer manufacturers to assist the process industry in its usage of the computer. An example is a routine developed for plate-to-plate distillation calculations. It can be used profitably by the refinery for easy, rapid distillation tower evaluation in conjunction with tower design, tower efficiency calculations for maintenance purposes, and for evaluation of alternative modes of plant operation. Heat balances can be made on every plate, and a vapor rate profile calculated and stored for use in the next trial. Tray temperatures are recalculated and stored during each trial, so that enthalpy and relative volatility data will be evaluated at the proper temperatures. Provision is made for including the effect of a concentration variable on relative volatilities. Feed plate matching is automatic, as is the handling of the nondistributed components. Heat exchanger calculations have frequently been performed using a whole group of computer subroutines. The objective of these programs is to enable operating personnel to maintain an up-to-date record of heat exchanger performance. The program may also be used to evaluate the effect of process changes on heat exchanger performance and to compare designs. Thus better maintenance planning and a more frequent evaluation of heat economy can be realized. The program uses the same mathematical method as is normally used in hand calculations. Physical properties of the material are obtained from laboratory analyses and from standard references. A heat and material balance is calculated, depending on the information available. The actual or assumed configuration of the exchanger is used to determine correction factors for the heat transfer equations. Other process-type, off-line applications of the computer make use of statistical methods, such as multiple regression, with the criterion that the variables considered will be restricted to those that make a significant contribution to the correlation being derived. The objective is to derive an analytic function that will describe the performance of an operating unit. In many cases, it is advantageous that this analytic function contain the
minimum number of variables able to define the unit performance within the accuracy of the basic data from which the function was derived. A major oil producer has cut the cost of testing potentially new catalysts, and of testing catalyst lifetime, by more than fifty per cent, using a pair of specially designed pilot plants along with a computer analysis of the data. This setup permits rapid preliminary evaluation of catalyst performance. High-speed experimentation permits a thorough study of catalyst preparation techniques, reproducibility of preparations, etc. It also permits a study of the effects of process variables as related to a variety of refining processes, such as hydrogenation, hydrofining, isomerization, and catalytic reforming. Results from the analysis made on the computer give the catalyst activity index, the catalyst activity decline index, and the statistical dependability of the data. An area where computers have been successfully used is that of studying more accurately the operation of a given piece of processing equipment. Distillation is probably the single most important process where relatively large profits can be obtained by means of efficient operation. The refinery engineer, however, is generally unable to study the operation of distillation towers in detail because he must sometimes use the analog approach, with the only analog computer available being the distillation tower as it exists in the plant. Normally, the multiplicity of refinery scheduling problems prevents the engineer from having a free hand in varying the operation of a distillation tower to determine its performance. There exist today calculation techniques that are refined, reliable, but also time-consuming. These represent ideal applications for electronic computers, in that the mathematical techniques are known and proven, and the engineers who use the answers have confidence in the method. A distillation tower design program uses correlation methods to predict the tower requirements needed to accomplish a specified multicomponent separation. This program can be used in conjunction with a plate-to-plate program to establish a first approximation to tower design. Results of the calculation include the minimum stage and minimum reflux requirements for the specified separation, a distribution of components other than keys at total reflux, and a tabulation of theoretical stage vs. reflux requirements. The interrelation of several variables can be determined by the use of regression analysis programs in distillation studies, for example, experimenting on the change of a dependent variable when two or more independent variables are changed. A multiple regression equation does this by incorporating the evidence of a large number of observations in a single statement, which presents, in condensed form, the extent to which differences in the dependent variable tend to be associated with differences in each
of the other variables, as shown by the sample. The program in question was designed to calculate automatically the constants and coefficients for a wide range of polynomial regression equations and to indicate to the user, in a statistical sense, the validity of the equation selected.
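A fit of this general kind can be illustrated with an ordinary least-squares routine. In the sketch below the observations are invented, and the single residual standard error printed for each degree is a far cruder indication of validity than the statistics a production program of the sort described would report.

```python
import numpy as np

# Invented observations: an operating variable x and a measured response y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3, 13.9, 16.2])

for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)          # constants and coefficients
    residuals = y - np.polyval(coeffs, x)
    dof = len(x) - (degree + 1)                # degrees of freedom
    std_error = np.sqrt(np.sum(residuals**2) / dof)
    print(f"degree {degree}: coefficients {np.round(coeffs, 3)}, "
          f"residual std error {std_error:.3f}")
```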
STUDYING PIPELINE PROBLEMS
An important aid to engineering calculations is a pipe stress analysis scheme. Programs of this type have been developed by several computer manufacturers and are being used by a score of refineries, chemical plants, and petroleum construction firms. They permit the checking of piping design by rigorous methods. They also help to check installed piping, to solve pipe stress problems due to temperature expansion for both two-anchor and three-anchor systems with up to ninety-nine members, and to handle members of different size and material belonging to the same system. Engineers working on pipeline studies can readily recognize the need for accurate data in the design of new systems and in the expansion of existing systems. When a design engineer is faced with the problem of strengthening an existing system to accommodate additional loads, he must necessarily make use of every source of reliable data at his command. The more accurately these data reflect actual operating conditions, the more secure he can feel that he is working with a dependable tool. Probably the best basis for load data collection is the customer's actual past usage. Three major steps are involved:
• Development of design factors
• Accumulation of load data
• Application of load data
The contribution of historical data to pipeline design can be substantial. One of the problems present in pipeline design is the determination of the most economical number of pump stations. Another problem is the calculation of the most economical diameter and thickness of a pipeline. These are laborious questions even when all major design conditions have been established; this is particularly true if some design factors, such as throughput, viscosity, length of line, or cost of steel, have not been established and it is desired to calculate the investment and annual operating costs for a number of combinations of these factors. Thus far, computer studies have dealt with the determination of pipe diameter and thickness, length of line, pipe specifications, and number of pumping stations. A recent computation involved the following information for each set of pipeline conditions:
• The internal pressure of the pipe at 1000-ft intervals
• The number of feet of pipe required for each pipe wall thickness
• The tonnage of pipe required for each pipe wall thickness
• The discharge pressure at each pump station
• The horsepower required at each pump station
• Proper spacing of pump stations
• The cost of pipe
• The cost of pump stations
• The cost of the pipeline
• The total annual charges, including operating cost, depreciation, and return on investment
The competitive advantage of a computer calculation is that, for any set of design conditions, the program may be run a number of times for different combinations of line diameter and pump stations to select the most economical solution; for instance, the line having the lowest total annual charges. The program may then be run to select the most economical line for other sets of design conditions, to show the effect on line size, total investment, and total annual charges resulting from variations made in design specifications. In a pipeline application developed by a major petroleum concern, the data processing system first calculates the Reynolds number. Then the Re value is used to obtain a value of the friction factor f. This factor, used in the flow equation for turbulent flow, is interpolated by the computer from five sets of correlations of log Reynolds number vs. log f placed in the computer memory, each set corresponding to a different range of pipe sizes. In determining the friction factor for a given Re and pipe size, the computer selects the five values of Re closest to the given Re from the log f correlation for the pipe size in question. With these five values of log Re, and their corresponding values of log f, the machine generates a fourth-order polynomial equation in log Re and log f to fit these points. The log f, which is converted to f and subsequently used in the flow equation, is calculated from this equation. After obtaining the pressure drop per 1000 ft from the flow equation, the pressure required at each pump station is computed. The program then calculates the maximum allowable working pressure for the initial section of the pipe. The line hydraulic calculations are begun at the end of the pipeline, instead of the beginning, as would be done if a computer were not used, and the pressure is calculated at each 1000-ft interval. At each of these intervals, the internal pressure is compared with the station pressure and the maximum allowable line pressure, which were previously calculated, to determine whether or not a new pump station or a new, heavier pipe wall thickness is required at this point.
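The friction-factor step can be made concrete as follows. The tabulated log Re vs. log f values stand in for one of the five stored correlations and are invented for the illustration, and the Darcy-Weisbach form is assumed for the flow equation; only the procedure, selecting the five nearest points and fitting a fourth-order polynomial in the logarithms, follows the description above.

```python
import math
import numpy as np

# Invented correlation table for one pipe-size range: (log10 Re, log10 f).
log_re_tab = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
log_f_tab  = np.array([-1.60, -1.70, -1.78, -1.84, -1.89, -1.93, -1.96])

def friction_factor(reynolds):
    """Fit a 4th-order polynomial through the five tabulated points nearest
    the given Re (in log-log space) and evaluate it at log10(Re)."""
    log_re = math.log10(reynolds)
    nearest = np.argsort(np.abs(log_re_tab - log_re))[:5]
    poly = np.polyfit(log_re_tab[nearest], log_f_tab[nearest], 4)
    return 10.0 ** np.polyval(poly, log_re)

def pressure_drop_per_1000ft(reynolds, velocity_fps, diameter_ft, density_lb_ft3):
    """Darcy-Weisbach pressure drop over 1000 ft of line, returned in psi."""
    f = friction_factor(reynolds)
    dp_lb_ft2 = f * (1000.0 / diameter_ft) * density_lb_ft3 * velocity_fps**2 / (2.0 * 32.174)
    return dp_lb_ft2 / 144.0

print(f"f = {friction_factor(2.0e5):.4f}, "
      f"dP = {pressure_drop_per_1000ft(2.0e5, 5.0, 1.0, 53.0):.2f} psi per 1000 ft")
```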
If, for example, the wall thickness of the pipe had been 5/16 in. for the previous 100,000 ft and the internal pressure had risen with each 1000-ft interval until it was equal to the maximum allowable working pressure of 5/16-in. wall pipe, the wall thickness of the pipe would be changed from 5/16 to 3/8 in. The point at which the wall thickness changes is stored by the machine and appears on the final answer sheet. Since this increase in wall thickness decreases the inside diameter of the pipe, the computer calculates the pressure drop per 1000 ft for the new thickness. In addition, at this point a new maximum allowable working pressure corresponding to the new wall thickness is computed. The machine then makes the hydraulic calculations for the section of pipeline leading to the next pump station. This process continues until the end of the pipeline is reached, which for this program will be the initial pump station, since calculations are started at the discharge end of the line. With this design procedure a pipeline may have pipe with up to five different wall thicknesses, the heaviest sections naturally being next to the discharge side of the pump stations, where the pressure is greatest. By reducing the thickness of the pipe as the pressure decreases, a considerable saving in steel tonnage is realized. In the design of pipelines having injection points, at which various amounts of liquid are continuously added to the main pipeline at various points along its length, the fluid characteristics of each injected stream are taken into account. In the example considered, since the pressure drop per 1000 ft is recalculated after each change in wall thickness, change in pipe diameter, or change in the main stream due to an injection, the pressure profile for lines having turbulent flow simultaneously in different parts of the line can be computed. In such a program, pump stations can be placed in the line at points where the pressure has risen to a level specified in advance. The number of pump stations can be determined by the value set for this discharge pressure. Changes in pipe diameter, which are made at the pump stations, can also be predetermined by placing the sequence of sizes, starting from the discharge end of the line, in the input data. For hydraulic calculations for a heated pipeline it is necessary to determine the temperature, viscosity, and pressure drop per unit distance at a large number of points. It may also prove desirable to carry out these calculations varying the pipe diameter, heater spacing, temperature to which the oil is heated, and coefficient of heat transfer between the pipe and its surroundings. As with the general case of pipeline studies, in the computations relating to heated pipelines the input data must provide a quantitative description of the system. Data must include information on the geometry of the pipeline, the characteristics and locations of pump stations, the characteristics and
locations of heaters, the thermal properties of the surrounding medium, and the order, quantity, and physical properties of the pumping cycle. Based on these data the computer can report the conditions along the line, pressure, temperature, Reynolds number, etc., for different line fills corresponding to various positions in the pumping schedule, including the average throughput for the complete pumping cycle. As an example, consider a uniform pipeline pumping one type of crude oil only, with a single pump station and a single heater, both at the input end. The computer program developed for this calculation carries out the following routines:
• Divides the line into a number of equal intervals.
• Makes a first estimate of the throughput of the line.
• Reads the pressure at the input end of the line from the pump curve.
• Calculates the temperature of the crude oil leaving the heater. Here, if the pressure or temperature of the crude oil, or both, exceed the permissible maximum, it substitutes the maximum values for the calculated figures.
• Determines the Reynolds number and the viscosity of the crude oil on leaving the heater.
• Determines the cooling rate and pressure gradient at the exit of the heater.
• Moves one interval down the line and calculates the temperature at the end of this interval.
• Computes successively the viscosity, Reynolds number, and pressure gradient at the same point. If the flow type has not changed over the interval, the mean of the two gradients is used to determine the pressure drop over the interval. If the flow type has changed, an appropriately weighted average gradient is used for this calculation.
The program then repeats the procedure for successive intervals, until each section of the line has been calculated. If the calculated output pressure does not agree with the value previously laid down for it, the computer makes a better guess and starts again with the reading of the pressure at the input end of the line. Through successive iterations, the calculated and specified pressures are brought into agreement within a certain specified accuracy, and the problem is solved. The aforementioned program is flexible enough to allow for a substantial degree of experimentation. Heaters, for one, may be situated at various points along the line. The computer takes note of each in turn and calculates the temperature rise and pressure drop. The maximum temperature limitations are always checked. The number of cases to be examined is at the discretion of the engineer.
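A skeleton of this iterative routine might read as follows. The pump curve, viscosity correlation, cooling law, and pressure-gradient expression are crude stand-ins invented for the illustration; what is retained from the description above is the structure of the calculation, marching down the line interval by interval and then correcting the throughput estimate until the calculated and specified outlet pressures agree.

```python
import math

LINE_LENGTH_FT   = 200_000.0
INTERVALS        = 100
OUTLET_PRESS_PSI = 50.0          # specified delivery pressure (assumed)
GROUND_TEMP_F    = 40.0
HEATER_TEMP_F    = 140.0

def pump_pressure(throughput):                 # invented pump curve
    return 1400.0 - 0.004 * throughput

def viscosity(temp_f):                         # invented viscosity correlation, cSt
    return 500.0 * math.exp(-0.03 * (temp_f - 40.0)) + 5.0

def pressure_gradient(throughput, visc):       # invented gradient, psi per ft
    return 1.0e-6 * visc * throughput / 100.0

def outlet_pressure(throughput):
    """March down the line: cool the oil, update viscosity and gradient,
    and drop the pressure over each interval using the mean gradient."""
    dx = LINE_LENGTH_FT / INTERVALS
    pressure, temp = pump_pressure(throughput), HEATER_TEMP_F
    for _ in range(INTERVALS):
        grad_in = pressure_gradient(throughput, viscosity(temp))
        temp = GROUND_TEMP_F + (temp - GROUND_TEMP_F) * math.exp(-dx / 150_000.0)
        grad_out = pressure_gradient(throughput, viscosity(temp))
        pressure -= 0.5 * (grad_in + grad_out) * dx
    return pressure

# Successive corrections of the throughput guess (simple bisection here) until
# the calculated and specified outlet pressures agree within a set accuracy.
low, high = 1_000.0, 200_000.0                 # barrels per day, assumed bounds
for _ in range(60):
    guess = 0.5 * (low + high)
    if outlet_pressure(guess) > OUTLET_PRESS_PSI:
        low = guess                            # line can carry more
    else:
        high = guess
print(f"balanced throughput about {guess:,.0f} bbl/day, "
      f"outlet pressure {outlet_pressure(guess):.1f} psi")
```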
Computers other than digital have also been used with considerable success.* Recently a study was made on an analog machine for a gas-gathering system, run with three specific objectives: locating bottlenecks, evaluating pressure variations, and checking the operation of the transmitting links. The initial data included: pipe diameters, length of pipe sections, the loss coefficient of the pipe, a constant to account for flow units, the pressure-drop function, the pressure-voltage conversion factor, and a factor for converting pipe coefficients to fluistor coefficients. The pipe diameters and lengths were known from the physical properties of the system. The remaining information was obtained through the use of an appropriate gas formula. The study focused on the checking of the operation of the system. More specifically:
• Can the required output be delivered at the required pressure?
• How much flow from each well is required for the established pressures?
To answer the first question, the system was energized, and the required well pressures set. The method used to set these pressures was to measure the voltage between the wells and a reference point. The voltages (pressures) were balanced by setting the load rheostats for the proper consumption rates. These pressures were maintained, and the source current was measured. The results indicated that the system was delivering the desired output as far as pressures were concerned. To further check the operation of the system, it was necessary to read each load meter to see if the required flow in each load was correct. This would answer the second question. These readings did check, and their sum was equal to the input flow. If, however, they did not all meet their individual requirements, the system would have to be re-evaluated. The question then asked was:
• Can pressure requirements be relaxed so that the flows could be maintained?
In answering this question, it would be necessary to determine the pressure variations that would result from fulfilling the load flow requirements. If they were slight, it might be possible to use the existing system.
SIMULATION PROBLEMS
The simultaneous flow of gas and liquid in a pipeline as separated phases has rarely been subjected to an analysis leading to a practical and reliable
* See also "Systems and Simulation," Chapters XXIV and XXV.
design procedure. The basic objective is the prediction of line pressure drop when gas and liquid rates are specified. The problem is of importance in the case of wet-gas transmission lines, in crude oil lines carrying free gas as an extra service, and in some process-transfer lines in which flashing vapor produces a substantial gas phase. The complexity of the problem may be appreciated when it is realized that a two-phase gas and liquid flow may occur in any of several different types. Problems involved in the computation include:
• The line-pressure drop, which determines the amount of gas dissolved in the liquid phase, hence the gas-liquid ratio.
• The amount of dissolved gas, which determines the physical properties of the liquid phase: viscosity, density, and surface tension.
All of these pressure-dependent factors are used for the determination of the internal line pressure, thus forcing a trial-and-error solution. The problem is further complicated by the known possibility of several types of flow, most of which have been correlated by different empirical equations. The computation sequence involves the following:
• Take input data describing the line, gas, and liquid rates, and physical properties.
• Calculate the physical properties of the gas and liquid phases using an appropriate assumption for the line pressure, as these properties are dependent upon pressure.
• Find the values for the correlation parameters that are determined by gas and liquid rates and fluid physical properties. The values of these parameters can then be used as indices to determine the flow type most likely present in the line.
• Compute the two-phase pressure drop over the line length.
• Compute the average line pressure and compare it with the previously assumed average line pressure used for the evaluation of physical properties.
• Compare the computed two-phase line-pressure drop with 10% of the computed terminal line pressure. If the pressure drop is greater than the 10% value, break the line length into as many segments as necessary to produce an estimated pressure drop over each not greater than 10% of the estimated terminal line pressure.
• If segmentation has been necessary, repeat the foregoing procedure for each segment and add the segment pressure drops to find the two-phase drop over the entire line.
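Reduced to a skeleton, the sequence might be coded as below. The single pressure-drop expression is a placeholder for the family of flow-type correlations referred to in the text, and all rates and pressures are invented; the sketch preserves only the structure of assuming an average pressure, computing the drop, comparing it with 10% of the terminal pressure, and segmenting the line when the drop is too large.

```python
def two_phase_drop(length_ft, gas_rate, liquid_rate, avg_pressure):
    """Placeholder two-phase pressure-drop correlation (psi).  A real program
    would first classify the flow type and pick the matching empirical equation."""
    gas_liquid_ratio = gas_rate / (liquid_rate * avg_pressure)   # crude pressure effect
    return 1.0e-4 * length_ft * liquid_rate * (1.0 + 5.0 * gas_liquid_ratio)

def line_drop(length_ft, gas_rate, liquid_rate, terminal_pressure, segments=1):
    """Compute the drop; if any segment exceeds 10% of the terminal pressure,
    split the line into more segments and redo the whole calculation."""
    seg_len = length_ft / segments
    pressure, total = terminal_pressure, 0.0
    for _ in range(segments):
        avg = pressure
        for _ in range(20):                   # iterate on the assumed average pressure
            drop = two_phase_drop(seg_len, gas_rate, liquid_rate, avg)
            avg = pressure + 0.5 * drop
        if drop > 0.10 * terminal_pressure:
            return line_drop(length_ft, gas_rate, liquid_rate,
                             terminal_pressure, segments + 1)
        total += drop
        pressure += drop                      # work upstream, segment by segment
    return total, segments

drop, segments = line_drop(length_ft=30_000.0, gas_rate=2000.0, liquid_rate=300.0,
                           terminal_pressure=400.0)
print(f"two-phase drop about {drop:.1f} psi over {segments} segment(s)")
```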
As boundaries separating different types of flow are encountered in the systematic increase of either the gas or the oil rate, flow-type oscillation may occur. The occurrence of oscillation simply signals a boundary condition between flow types, warning that extrapolation into the flow range involved is subject to the hazard of indeterminate flow type. In a different study, the specific controller duty for computer simulation included line startup and recovery from line upset due to forced pump unit or station shutdown. For both situations researchers wanted to know whether the local automatic control action would lead directly, without hunting, to a stable operating condition for all stations. Initial data included the maximum power level to be permitted at each station, and the initial operating pressure for the first station on the line. The pressure boost at the first station initiates line flow at a rate determined by the first station pressure boost and its attenuation by line friction, station loss, and static head effects. The determination of this rate requires a trial-and-error solution based on the relation between the required pressure boost and the pressure developed by active pumping units. The resultant suction and discharge pressures at each station are then determined by direct computation. For the stations at which computed pressures are not within specified limits, throttling is incremented by an amount proportional to the amount of discharge pressure lying beyond operating limits. The line is then rebalanced and each station is examined for a possible reduction or increase in power level. The computation continues until all stations have reached preset maximum power levels. The determination of factors affecting pipeline design is a very important matter, and it is a sound policy to complete this task before any data collection on consumption and usage is begun. If not thoroughly understood, historical data may sometimes be misleading when used for design purposes. One customer in a small group may use more gas during the maximum hour than another in the same group. For typical network distribution systems, distribution engineers are not primarily interested in this effect; rather, they are interested in the over-all effect of a particular system's maximum hour consumption on the main feeder lines within the system. Gas supply companies often work on the assumption that all classifications of customers use gas annually for two primary purposes. The first is base load usage, which occurs at a somewhat constant rate each month of the year. The second is heat load usage, which is assumed to vary according to the total annual degree-day deficiencies. A certain gas company considers the following basic information concerning each customer as being necessary:
• Annual consumption
• Average number of customers for the past year
• Average monthly consumption during the months of July and August for the last year
• Average monthly consumption during the period of maximum consumption
• Annual degree-day deficiencies for the past year
The method used by this company for determining the annual base load consumption is to multiply the sum of the average July and August consumption values by the factor 6.40. The numerical value was developed by the statistical department and is used to obtain an annual base load consumption that is free from any heat consumption. Since a two-month average is being taken, the multiplier for the yearly calculation would nominally be equal to 6. The value of this multiplier is increased from 6 to 6.4 to allow for reduced consumption in the summer because of vacations and warm-weather cooking habits. This value reflects the needs of the particular population from whose consumption data it was developed. It should, obviously, be carefully re-examined for suitability before any further application is attempted. With manual computational practices, the ratio of the peak day to the average day for base load consumption was taken as an empirical figure of 7%. Analytic studies were not possible because, with manual means, no daily customer consumption records were kept by this company by which this ratio could be obtained. With computer usage, new methods were developed, including the correlation of load data by locating groups of customers along various streets in the distribution system. Several groups of customer account numbers are formed by the pattern of the meter reading route. The customer account numbers obtained are then located on the computer summary. The load, in megacycles per hour, for the pipe section is calculated by totaling the average hourly load for each customer account number on the pipe section. In this calculation, the areas are given a proper increase in saturation based upon experience and knowledge of the distribution system. New and proposed commercial, municipal, and residential developments are also taken into consideration. An investigation was made in an area of 60,000 meters to determine the feasibility of developing design factors for commercial and municipal customers. A wide variation in use pattern was found and was attributed to conditions such as oversizing of heating equipment for rapid recovery, method of operating equipment, and sizing equipment to accommodate future expansion. Essentially, electronic data processing media accomplished the following:
• Reduced the time spent in analyzing distribution systems.
• Relieved the distribution engineer of the tedious task of compiling the load data by manual methods.
• Helped develop more efficient technico-economic design features.
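The base-load arithmetic just described is simple enough to state directly. The consumption figures below refer to one hypothetical customer; only the 6.4 multiplier is taken from the text, and, as noted there, it would have to be re-derived for any other population.

```python
# Invented figures for one customer, in whatever units the consumption records use.
july_avg, august_avg = 9.0, 8.5          # average monthly consumption, last year
annual_consumption   = 210.0             # total consumption for the past year
degree_days          = 5200.0            # annual degree-day deficiency

base_load = 6.4 * (july_avg + august_avg)        # company's statistical multiplier
heat_load = annual_consumption - base_load       # remainder attributed to heating
heat_per_degree_day = heat_load / degree_days

print(f"annual base load        : {base_load:6.1f}")
print(f"annual heat load        : {heat_load:6.1f}")
print(f"heat use per degree-day : {heat_per_degree_day:.4f}")
```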
"FEEDFORWARD" CONCEPTS Having established some of the main domains of the technical effort in digital automation, we can now proceed with a synthesis. The task of the engineer in an automated process factory, be it chemical, petroleum, steel, power production, or other, starts with the design of efficient and reliable "instrumentation." This includes delicate sensors to gather precise data on temperature, pressure, and all key variables of the production process. It is his task, too, to establish and install the transmission devices to assure information transfer to the central computer, to choose the proper actuators and their associated gear. But, although the foregoing paragraph is good enough for a general direction, it says practically nothing about the special conditions that will need to be met, a major one among them being the need of a forward look for control purposes. In Chapter IV, we placed particular emphasis on the basic requirement for analyzing beforehand just how a production process will work, determining the information needed to control this process. Working closely with applied mathematicians, the engineer should help develop nearly exact mathematical relationships between the process variables and the optimum plant control points, leaving the way open for improvements and corrections on the model. Design approaches for feed forward should definitely be based on the total systems concept. As Tetley was to say, a system possesses at least the following properties: • It is an ensemble of specific functions. • It is a complete entity and definable within a boundary.
• Coupling exists between these functions.
• It has a definable input.
• It produces a definable output or product.
• Satisfactory operation of the individual functions does not necessarily insure satisfactory operation as an ensemble.
• It is susceptible to a generalized form of the "Second Law."
• In addition, it quite often possesses the property of being a servomechanism and of involving stratagems.
The enumeration of these characteristics, and the fact that each should be observed in its own way, properly identifies the physical task of updating the feedforward programs. This is particularly true for cases involving a bewildering system of computers. The task would be insurmountable without some form of machine-operated updating monitor. Such a function could conceivably cycle the system through all statistically possible intrusion patterns in an effort to find flaws in the logic, the mathematics, or the programs.
The simulator should be built around the "feedforward" concept, a radical departure from the fundamental control loop. As presently established, a "control loop" system works by making direct responses to "errors." If one occurs, the computer takes the proper control action to correct it. But the approach has limitations, the important one being that it keeps the production system operating just as it was. Contrary to this, in "feedforward," when the first sign of a "change," or deviation, from an established process equilibrium is flashed to the computer, corrections for all parts of the plant are calculated, and the proper control setting is made before the "change" reaches each process stage. Though there is still much to be done before we obtain sophisticated feedforward systems for whole processes, the approach constitutes, nevertheless, an evolutionary departure from the feedback concept and the use of historical data. From a technological point of view, many of the studies we have covered will need to be re-evaluated in feedforward terms. The impact of mathematical simulation in this connection is evident. We will return to this point in Chapters XXV, XXVI, and XXVII.
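The contrast between the two philosophies can be put in a few lines. In the sketch below the plant is reduced to a single downstream stage fed through a transport delay; the feedback controller acts only after an error has appeared at the output, whereas the feedforward controller computes its correction from the measured inlet change and has it in place when the change arrives. The delay, gain, and controller constants are invented, standing in for the "nearly exact mathematical relationships" called for above.

```python
DELAY = 5        # steps for a change at the inlet to reach the process stage
GAIN  = 1.0      # assumed effect of the inlet variable on the stage output

def run(controller, steps=30):
    in_transit = [0.0] * DELAY      # material travelling toward the stage
    scheduled  = [0.0] * DELAY      # corrections timed to arrive with it
    fb_corr, total_error = 0.0, 0.0
    for t in range(steps):
        disturbance = 1.0 if t >= 3 else 0.0            # step change at the inlet
        arriving = in_transit.pop(0); in_transit.append(disturbance)
        active   = scheduled.pop(0);  scheduled.append(0.0)
        output = GAIN * arriving + fb_corr + active     # deviation from set point
        total_error += abs(output)
        fb_corr, ff_corr = controller(disturbance, output, fb_corr)
        scheduled[-1] += ff_corr                        # acts when the change arrives
    return total_error

def feedback(_disturbance, output, corr):
    return corr - 0.4 * output, 0.0     # responds only after an error appears

def feedforward(disturbance, _output, _corr):
    return 0.0, -GAIN * disturbance     # correction computed from the inlet measurement

print(f"accumulated error, feedback   : {run(feedback):.2f}")
print(f"accumulated error, feedforward: {run(feedforward):.2f}")
```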
Chapter XXIII
THE RATIONALIZATION OF MANAGEMENT DATA
In an industrial concern, the accounting system exists primarily to meet the company's internal data needs. Yet accounting reports rarely, if ever, focus on "success factors" or help pinpoint trouble spots in an efficient manner. This is particularly true in petroleum distribution: allocation of expenses, transfer prices, and the like often tend to obscure rather than clarify the underlying strengths and weaknesses of a company. The point here is that integrated data processing might cure these ills if established in a rational, well-planned manner. This means that: Every management level should receive the data that concern it, nothing more, nothing less. And the information should be timely. The president should not be burdened with the mass of data a computer can produce; he should be given only critical ratios. Inversely, the information contained in these critical ratios may not be sufficient at all for the personnel working at a lower level. This information must be received in a timely and accurate manner; it must project future events and calculate the risks of these projections. For simplicity, most of the information should be presented to company executives in graphic form. The exhibit must highlight, for instance, only the reports used for retail gasoline marketing, if this is the function of the particular office to which it is submitted. Fuel oil marketing, commercial and industrial marketing, and other topics, though interesting, may have nothing to do with the function of the man who gets the report. Such items should be carefully omitted. For efficiency of handling, the whole accounting system should be reworked on a mathematical basis. This has been the approach taken by a leading chemicals and metals manufacturer in France. A group of experts spent two years at a factory site to re-establish the basis of data collection for accounting purposes. Through matrix analysis the original accounting data have been processed, at every stage, with an accuracy and precision unheard of before.
For economy of storage, files should be integrated. This integration must be most carefully planned. It is not always easy to check the proliferation of data, and lack of planning may cost the loss of valuable information. The storing of data and their subsequent retrieval will be considered here. The problem of logical decisions and comparisons throughout the whole filtering process will also be considered.
How deeply this approach to integrated, timely management information can take root in most business and industrial enterprises can be attested by observing the reaction of companies to matters concerning their conversion to a third-generation computer. In a recent study in Belgium, for example, the author found corporate management highly concerned about information handling and the interaction that should exist between the computers at the headquarters and those installed at the factories for process control. One major steel company, after study, ordered four compatible systems: two were to be installed at the main office for accounting and sales jobs, and the other two for process guidance at the plant. Another corporation had three interconnected computers installed: one, the largest, at the headquarters, receiving the management-relevant data transmitted by the two factory machines. A third Belgian corporation started the study of real-time applications and process control at the general management level. Throughout the industrial world, many companies have found that the most effective approach to determining requirements for planning information, whether it be for one executive or an entire company, is to first set objectives, then develop the procedures and decide among alternative reporting practices. The following discussion is oriented toward this "objective-seeking" duty.
DEVELOPING AN INTEGRATED INFORMATION SYSTEM
In redesigning a management information system, certain objectives must be formulated at the very beginning. The environment within which this system is to operate must be described and its constraints specified. As an example, we will consider a system designed to meet changes in methods and policies. Throughout our discussion, use is made of the concepts advanced in the introductory paragraphs. First, an integrated information system should provide management with critical data only. For integration of related information processing functions to become efficient, maximum use should be made of information, after it is introduced into the system. Files and intermediary information storage devices must be designed for use in several applications. This may require
substantial changes in the mechanics, as, for instance, designing a new machine-oriented numbering system. A few years ago, the author studied such a conversion, which took place in a major company. The changes involved three systems:
• Customer numbering
• Order numbering
• Article numbering
The customer numbering system that was developed represented a new and different approach to the establishment of a unique number for customer identity. To that end, substantial time was spent in experimental analysis. Also, the final design of a customer numbering system depends, to a considerable extent, upon the characteristics of the equipment selected to implement the system. Similarly, for experimentation purposes the new information system must include a number of mathematical simulators. We will consider one example. The petroleum industry has made profitable use of simulation for the determination of oil storage capacity. Say that a 5000-ton/day pipe still is fed from a storage tank that is supplied with crude oil by a number of 10,000-, 20,000-, and 30,000-ton tankers. The composition of the fleet is shown in Fig. 1. A ship is scheduled to arrive every four days, but 30% of them are one day late and 20% two days late. It is desirable to find the optimal size of the tank.* This is an excellent case where the interrelationships between managerial needs and mathematical treatment can be exemplified. Similar models could be done for, say, unloading imported iron ore from various types of ships or
[Figure 1: a fleet of 10,000-, 20,000-, and 30,000-ton tankers (25%, 50%, and 25% of arrivals, respectively) supplying a storage tank of "x" tons, which in turn feeds the pipe still.]
FIG. 1. One tanker is scheduled to arrive every four days: on time, 50% of occasions; one day late, 30% of occasions; two days late, 20% of occasions.
* Reference is made to an example presented by a British petroleum concern in a European seminar for the petroleum industry, which the writer organized in Holland in October 1959. See also the discussion in a later part of the present chapter.
any other application of a usage nature.* Here, through the use of mathematical theory, we are attempting an accurate forecast of what would happen at a particular port. In forecasting work, port authorities (or, for that matter, fleet management in a particular petroleum combine) may be able to use data on the distribution of the times of arrival and on the variation of unloading rates of ships of different sizes. Generally, for each ship handled at a port there is a cycle of operations:
• The ship arrives
• It waits for a berth or a tide
• It is berthed, unloaded, reprovisioned
• It waits to leave the berth
• It leaves
The operation of the port can be considered basically as a combination of many ship cycles. The duration of each part of the cycle may vary from time to time, depending on the ship arrivals, the type of merchandise to be unloaded, the hours of work operated by the dock labor, and generally a number of factors which can be analytically determined. From the analyst's point of view, the assumptions he must make are of capital importance. Also critical is the value assigned to each element of the cycle, to be decided in each case by a separate random selection from an applicable mass of times or events derived from past experience. Some of the times may be generated during the simulation itself; the time a ship waits for a berth may depend on the experiences of the ship or ships that arrived before it. If this process of simulation is carried on long enough, the adverse circumstances will occur in their proper proportions and therefore are not allowed to bias the ultimate decision more than their relative weight warrants. In a limited sense, the interest of such studies is to establish how long the ships were queued up outside the port, how long they occupied a berth, or how long the berthing facilities were used. But mathematical experimentation for management purposes can lead much further than that. Port authorities, for example, may want to know the effect of altering the rules of operation of the port. For its part, company management may obtain significant experimental data for shaping future policies about the nature, composition, and usage of its tanker fleet. Let us assume that small, medium, and large tankers arrive "at random," their established frequencies being 25%, 50%, and 25%, respectively. The delays are also of a random nature. Let us suppose further that the particular tank size we wish to test is 40,000 tons. Through computer processing we can operate the mathematical model for a simulated period of, say, five years, recording the number of times the tank becomes empty or overflows. The longer the simulation, the more closely do these counts approach the long-term averages we are seeking. This operation can be repeated for a range of tank sizes in order to draw graphs showing the expected frequency of a full tank and of an empty tank against the size of the tank. The subsequent choice of the best size is of course a managerial decision, that is, a decision that is based on numerical estimates of the risks involved. The need, nevertheless, for a definite management policy in this direction is apparent.
* See also "Systems and Simulation," Chapter XVIII on Cargo Handling.
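Under the stated assumptions, tankers of 10,000, 20,000, and 30,000 tons arriving with frequencies of 25%, 50%, and 25%, one ship scheduled every four days with the delays quoted above, a 5000-ton/day pipe still, and a trial tank size of 40,000 tons among others, the simulation can be sketched as below. The starting level of the tank and the treatment of a cargo that cannot be fully accommodated are additional assumptions made only to keep the illustration short.

```python
import random

def simulate(tank_capacity, years=5, seed=1):
    """Count how often a tank of the given size runs empty or cannot take a cargo."""
    rng = random.Random(seed)
    still_draw = 5000                    # tons per day drawn by the pipe still
    level = tank_capacity // 2           # assumed starting level
    empty_days = overflow_arrivals = 0

    # One ship is scheduled every four days; 30% arrive a day late, 20% two days late.
    horizon = years * 365
    arrivals = {}
    for scheduled in range(0, horizon, 4):
        delay = rng.choices([0, 1, 2], weights=[50, 30, 20])[0]
        cargo = rng.choices([10_000, 20_000, 30_000], weights=[25, 50, 25])[0]
        arrivals[scheduled + delay] = cargo

    for day in range(horizon):
        if day in arrivals:
            cargo = arrivals[day]
            if level + cargo > tank_capacity:
                overflow_arrivals += 1          # part of the cargo cannot be taken
            level = min(level + cargo, tank_capacity)
        level -= still_draw                     # feed the still for one day
        if level < 0:
            empty_days += 1                     # the still is starved on this day
            level = 0
    return empty_days, overflow_arrivals

# Repeat the run for a range of tank sizes, 40,000 tons among them.
for size in (30_000, 40_000, 50_000, 60_000):
    empty, overflow = simulate(size)
    print(f"tank {size:>6,} tons: {empty:4d} starved days, {overflow:4d} overflow arrivals")
```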
Second, the over-all systems exhibit should not be lost in detail; only the major data processing functions should be considered. A systems exhibit must be prepared as a result of the study, outlining the major data processing work load and the established limits and interactions. The total management information system could be divided, for instance, into the following subsystems:*
• Initial order handling
• Sales analysis and statistics
• Dispatching
• In-process orders
• Inventory control
• Cost control
• Evaluation and financial
• Sales commissions
• Payroll
• Accounts payable
• Accounts receivable
• General statistics and day-to-day reports
• Budgeting and management accounting.
In the petroleum industry, for instance, the sales analysis and statistics subsystem may involve several routines:
Daily Sales Invoice Pricing and Price Checking. In petroleum sales accounting, bulk stations submit a daily or periodic report of sales coverage, current cash, and charge sales invoices. These copies of invoices, which are received in random sequence as to customer, product price, tax status, etc., are the original documentation for inventory and stock control reports, freight charges, marketing sales expenses, wholesale accounts receivable, service station rentals, use of loaned and leased equipment, and a broad variety of marketing reports. The subject invoices must be carefully edited. In one random sequence pass
*See also the discussion on management information subsystems at the end of Chapter XXVII.
of the detail cards, containing only the product, package, customer, bulk station, and tax codes, the computer can verify the price charged, check the extension of quantity times price, check the invoice addition, check that the proper freight charges were made, furnish full customer classification codes, and the like. The machine can summarize major sales by bulk station, automatically produce the accounts receivable debit entry and zero balance to bulk station sales reports, present the totals of cash sales, charge sales and quantity, etc. Inactive accounts may be analyzed, and after certain periods of inactivity a "sales follow-up" record can be made. Summary information for each invoice can be recorded for use in future processing. Several levels of summary information or types of indicative information can be carried forward in this way.
Stock accounting. This routine can become a dynamic proposition, with delivery notes and goods received notes sent from the branch offices to the head office and handled in a timely manner by the data processor. Other items of this category are accounts for outgoing goods, quantity and value specifications, summary cards for later calculations of gross earnings, incoming goods accounts, and summary of individual results towards branch offices and headquarters.
Sales research. Research can be effectively accomplished by means of computer operations. Among the reports management can obtain in a timely and accurate manner, some are of considerable importance in decision-making: sales per product and salesman, sales per district and product, sales per customer group and product, analysis of transport, analysis of gasoline type, analysis of turnover per depot, and monthly and annual market analysis.
Other applications. Applications might include sales commission accounting, reports on loaned and leased equipment, financial planning, centralization of retail accounts receivable, and system simulation in inventory control and distribution. The usage of the computer in forecasting distribution loads and inventory requirements, in relation to a mathematical model which can be used to simulate future data on customer demand, is becoming almost mandatory.
Here is how a chemical company handles its sales analysis problems. The information referring to a given customer is identified by means of a code reference that consists of a single letter for the country, a number for the trading area, and the number of the particular house in the area. During input each house reference is converted to a number. This was found to be a reasonable compromise in order to preserve certain features of the numerical system for marketing distribution the company had used over a substantial period of time.
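The conversion of such a code reference into a purely numerical key might look like the fragment below. The field separator, field widths, and letter-to-number mapping are invented for the illustration; the text does not specify the format the company actually adopted.

```python
def house_reference_to_number(reference):
    """Convert a code such as 'B/12/0437' (country letter, trading area number,
    house number) into a single numeric key.  Format and widths are assumed."""
    country, area, house = reference.split("/")
    country_no = ord(country.upper()) - ord("A") + 1       # A=1, B=2, ...
    return country_no * 10_000_000 + int(area) * 100_000 + int(house)

print(house_reference_to_number("B/12/0437"))   # prints 21200437
```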
At the data processing center, the sales information is transcribed to magnetic tape and these files are sorted into numerical order. Customers are arranged in ascending order within each country. Also within each country like products are totaled, and an analysis is made of the current trading period. A statistical evaluation takes place, to allow for a mathematical comparison of the current figures with those for the corresponding period in the previous year, and with those of the total market potential. The magnetic tape files referring to the corresponding period of the previous year are used to carry forward the relevant figures. Variable information is also considered as figures on what the competition has achieved become available. The company in question is vitally concerned with the precise structure of sales at each of its trading areas. The marketing analysis problems that are handled may be summarized as follows. The computer performs a monthly analysis including all areas for every type of product, which results in some 6000 separate headings. Monthly and running totals since the beginning of the company's financial year for each heading are compiled, and a profitability evaluation is made, taking into account the outcome of a tight cost control evaluation performed on computers at the factories. Third, the system presented in this exhibit must be designed to be flexible and adaptable to future equipment and systems changes. For a management information system to be flexible in accepting modifications in processing techniques as new requirements and systems techniques develop, it must be based on a building-block concept. The several functions necessary to complete the data processing requirements should be segmented to show logical computer runs. These computer "unit runs" should be identical black boxes, elemental pieces of the subsystem structure.* The unit runs will then be combined into configurations that best fit equipment characteristics and capacities. Each unit run diagram must show the input, processing, and output. The input must be identified both as to origin and content. The objective of the unit run is to present answers to questions such as:
• What is the input?
• What is the processing range?
• What operations are to be performed on the data?
• What special considerations should be given to these operations?
• What is the output?
To allow for the optimal combination of the unit runs, it would be
* As hereby defined, a unit run is a unitary, homogeneous operation basically requiring one pass through the computer. The subject will be elaborated upon in substantial detail in another work, which we now have in preparation.
advantageous to simulate the time the unit runs will require on a daily, weekly, and monthly basis (Fig. 2). Both detailed and summary data processing workload charts should be developed and made to respond to both peak and nonpeak periods. Following the evaluation of sample applications, the analyst would then be able to obtain summary monthly charts that are objective enough for the job. Comparison charts for alternative solutions should be included, if applicable.
[Figure 2: bar charts of simulated unit-run times, with daily, weekly, monthly, quarterly, and annual runs distinguished.]
FIG. 2. (a) Summary monthly chart. (b) Monthly chart by function.
The specific approach for handling these problems will vary from company to company. In some cases, the systems analyst might decide to leave aside certain applications even if "theoretically" they seem to be good "opportunities" for further integrated data processing. In fact, depending on the occasion, a number of activities either do not represent significant work loads or do not have a direct relationship to the main framework of information. It may be better to have an electronic accounting machine at hand than to load the large-scale system with trivial work.
COMPUTATIONAL REQUIREMENTS IN DISPATCHING

As an example of the integration of management information for a process-type industry, we will consider the automating of dispatching operations. The total work of scheduling and controlling the movement of multiple tenders through a system of pipelines can be divided into the following functions:

• Batching
• Sequencing
• Estimating pump rate and flow rates
• Recalculating
• Reporting
Batching refers to the "dividing" into portions of particular types of product, each portion being pumped into a line as one continuous unit. Where numerous grades of products are handled, proper sequencing is necessary to minimize the losses that result from degrading higher-valued materials to lower-valued materials. Optimal sequencing is also necessary to facilitate coordination of the movement of batches through limited tankage at the company's source station and intermediate tanks. Deliveries must be sequenced in a firm manner so that flow rates in the various line sections can be computed.

Where numerous terminal points exist on a line, it is usually desirable to limit the number of simultaneous deliveries to two or three. More deliveries occurring simultaneously would result in a continuous change in the line flow pattern, requiring almost endless starting and stopping of pumping units. In addition to excessive wear and tear on motor-starter equipment, the operating personnel at the various stations would be occupied in observing the operation of the pumping equipment and would be unable to perform other duties.

Also, like delivery sequencing, delivery rates must be scheduled to permit the computation of line section flow rates. Optimum delivery rates are those which permit the steadiest flow of products through a majority of pipeline sections downstream from the various delivery points. Where lines are of a telescoping nature, caused by either the reduction of line size or pumping power, delivery rates must be set to facilitate the pumping of the desired quantities into the lines at the source points. Quite often, delivery rates must also be varied to satisfy unusual conditions existing at various terminal points.

Line pumping-rate computations focus on an accurate estimate of the average rate required to accomplish a desired movement over a scheduled period. Generally, where lines are powered by multiple pumping units, possible pumping rates vary from optimum rates.
Therefore, the desired rates must be adjusted both upward and downward over given periods of time. Further adjustment to the desired rates is often necessary to facilitate the coordination of movements through feeder lines, carrier company tankage, and system lateral lines.

The computation of line-section flow rates is also necessary. A line section can be defined as that section of line immediately downstream from each terminal point and extending to the next downstream terminal point. The flow rate in each section is the difference between the flow rate in the upstream line section and the delivery rate at the terminal. Estimates of this type explicitly point to the need for recalculations as conditions change. For instance, based on the inventory in a line at any given time, the position of the various batches with respect to the various stations and terminals must be recomputed. Then, by the application of line section flow rates, the time that changes should occur can be re-estimated.

In the sense of the foregoing discussion, one of the contributions of the data processing equipment is to help develop operational forecasts. Frequent revisions are normally required to account for variations between quantities scheduled to be pumped and delivered and the quantities actually pumped and delivered. When necessary, these forecasts should be teletransmitted to the various field locations, where they provide a basis for estimating the operations of station pumping and delivery equipment.

To date, electronic data processing equipment has been used to advantage in several dispatching systems. Computers provide continuous checks on deliveries, as is the case, for instance, in crude oil delivery to power stations that use the fuel for steel production. By means of an automated dispatching setup an oil company was able to match supply and demand, keeping its attention focused on demand variations in a timely manner. This pipeline network supplies twelve other crude oil consumption points. In total, fifteen telemetering units are being monitored continuously, whereas in the past the instruments providing the necessary data were read hourly and the flow value at each point was computed manually, with a resulting substantial delay. By making frequent telemeter checks and flow calculations for each purchase or delivery point, the dispatching department of the oil company maintains control over system demands. The output from the computer is in the form of data charts presenting both 1-hour flow and 24-hour total flow for each of the telemeter stations, plus certain combinations of the data.
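The line-section computation just described amounts to subtracting each terminal's delivery rate from the flow arriving in the section immediately upstream of it. A minimal sketch follows; the source pumping rate and the delivery rates are illustrative assumptions.

```python
def section_flow_rates(source_rate, delivery_rates):
    """Flow in each line section downstream of successive terminals:
    the upstream section's flow less the delivery taken at the terminal."""
    flows = []
    upstream = source_rate
    for delivered in delivery_rates:
        downstream = upstream - delivered
        flows.append(downstream)
        upstream = downstream
    return flows

# Hypothetical figures: barrels per hour pumped at the source and
# delivered at three terminals along the line.
source_rate = 4000.0
deliveries = [1200.0, 900.0, 1100.0]
print(section_flow_rates(source_rate, deliveries))   # [2800.0, 1900.0, 800.0]
```

Rerunning this calculation whenever a delivery rate or the source rate changes is the recalculation function named in the list of dispatching tasks.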
USING APPLIED MATHEMATICS

Other examples can be taken from simulation. A mathematical study was recently conducted to coordinate pipeline-arrival depot operations.
Two elements were represented stochastically: the occurrence of ship arrivals and turbine breakdowns. All other parts of the model were based upon engineering calculations or upon established decision rules used in the pipeline operations. Briefly, the model consists of a master routine, referred to as the "monitor program," and several subroutines which represent various phases of the operation. These subroutines are:

• Generate ship arrivals and the berthing and loading of such ships.
• Calculate flow rates in the pipeline.
• Accumulate throughput and update inventories.

The monitor program controls the entire sequence in the computer model. It calls in the subroutines for data, as required, processes this information in accordance with pipeline and terminal operating logic, and prints out resulting information on flow rates, ship delays, inventories, cutbacks in throughput, accumulated throughput, and changes in turbine status.

Demand is placed on the system by the "ship berthing and arrival generator section," which produces a ship-arrival pattern that approximates previous experience and moves the ships into berths in accordance with operating rules. Provision is made for variations in the size of ships loaded, changes in demand for oil, storms, number of berths, restrictions on loading of very large ships, availability of bunkers, and variations between the loading rates at the various berths. Since the results were sensitive to the pattern of ship arrivals, the generation of ship arrival times and the corresponding lifts was incorporated in a separate computer program, thus permitting the use of the same arrival pattern for several case studies. Ship arrivals did not differ significantly, in a statistical sense, from a negative exponential distribution having the same average time between arrivals. Individual arrival times were generated by random sampling from the negative exponential distribution. Statistical methods were used to insure that the cumulative numbers of generated arrivals over specific time periods were within control limits calculated from actual arrival data. Random sampling of a distribution relating expected frequency of occurrence to barrels lifted per ship was used to generate the size of ship cargos. The distribution used was derived from actual data by grouping all liftings into seven classes. The values for these classes, as well as the expected number of arrivals, control limits on arrivals, and the like, could be varied from case to case.*

* A similar discussion is presented at the beginning of this chapter.

The ship berthing section uses the arrival and cargo-size information from the arrival generator in determining when each cargo would have been removed from central dock inventory and what delays would have been incurred by ships. The input to the model provides for assigning a "berth holding interval" for each ship's "size class" at each berth. The berth holding interval is the time that a berth is not available for other assignments while a tanker is being loaded. "Very large" tankers are given a priority and are assigned to berths capable of accommodating them. Otherwise, tankers are preferentially berthed in order of ship size to allow the earliest completion of loading. The largest tankers are placed in the most efficient berths, but only until delays are encountered. When a ship cannot be berthed upon arrival, because of conflicts with larger ships in all available berths, all ships other than the very large tankers are rescheduled to a berth in order of arrival until the congestion is relieved, thereby preserving the first come, first served policy required by the pipeline's contractual arrangements. The period between the arrival and the time a berth becomes available is recorded as a delay due to lack of berths. If sufficient oil is not available in tankage by the time the ship would normally have completed loading, the ship departure is delayed until a full cargo is available. The delay is recorded as being due to "inventory." Weather data are used to determine port closures. Ships arriving during a closure are delayed and the delays are recorded as being due to storms.

To satisfy the demand produced by ship arrivals, oil is made available at the issue point in the quantities determined in the flow calculation subroutine. Flow rates in each of the four sections of line are calculated every six hours and whenever a turbine-powered unit is shut off or started. These flow rates are used in determining oil availability at the issue point and at main pump-station tankage. A stochastic element in the model exercises its effect in this subroutine. In developing the mathematical simulator, it has been assumed that the main pump stations will, because of adequate horsepower and multiple pumping units, be able to hold the maximum allowable discharge pressure. However, a turbine unit that goes off the line causes a major variation in flow. Shutdowns of turbine units are the result of mechanical failure, scheduled maintenance, and excessive inventories in downstream tankage. Mechanical failure of turbines, because of its unpredictable timing, has been represented stochastically. The time of occurrence of a mechanical failure, its duration, and the turbine affected are determined by random sampling from probability distributions supplied as input data. Profiles of turbine-horsepower degeneration, due to wear and to random events, are included as input data for each turbine, and are used to determine the horsepower for the calculation of flow rates. Provision is made for periodic maintenance shutdowns of different durations.
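A minimal sketch of the arrival-generator idea is given below: inter-arrival times are sampled from a negative exponential distribution, cargo sizes from a seven-class empirical distribution, and a crude control-limit check is made on the cumulative count. The mean inter-arrival time, the cargo classes, and their weights are assumed figures, not values from the study.

```python
import random

random.seed(1966)

MEAN_HOURS_BETWEEN_ARRIVALS = 18.0                       # assumed mean inter-arrival time
CARGO_CLASSES = [80, 120, 160, 200, 250, 320, 400]       # thousand barrels (assumed classes)
CLASS_WEIGHTS = [5, 12, 20, 25, 18, 12, 8]               # assumed relative frequencies

def generate_arrivals(horizon_hours):
    """Sample arrival times from a negative exponential distribution and
    cargo sizes from the seven-class empirical distribution."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(1.0 / MEAN_HOURS_BETWEEN_ARRIVALS)
        if t > horizon_hours:
            return arrivals
        cargo = random.choices(CARGO_CLASSES, weights=CLASS_WEIGHTS)[0]
        arrivals.append((t, cargo))

def within_control_limits(arrivals, horizon_hours, tolerance=0.2):
    """Crude check that the generated count stays near its expected value."""
    expected = horizon_hours / MEAN_HOURS_BETWEEN_ARRIVALS
    return abs(len(arrivals) - expected) <= tolerance * expected

ships = generate_arrivals(horizon_hours=24 * 30)         # one month of arrivals
print(len(ships), "arrivals generated;",
      "in control" if within_control_limits(ships, 24 * 30) else "regenerate")
```

Keeping the generator in a separate program, as the study did, lets the same arrival pattern be replayed against several operating-policy cases.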
Results thus far indicate that the simulation model realistically represents the actual system. The model was sensitive to the number, size, and pattern of ship arrivals, to the distribution of turbine downtimes, and to the frequency and duration of storms at the issue point. Provided the necessary accuracy is maintained, the model provides information upon which to base efficient decisions about changes in facilities or operating policies.
EXAMPLE WITH GAS DISPATCHING
Satisfying the needs of the clientele, as weather permits, and within the limits of a pre-established 24-hour peak, is the important responsibility of the gas-dispatching department of any gas company. The gas-dispatching department, taking into account weather, customer demand, and available gas supply, must match supply and demand. To do so, it has to monitor gas deliveries into its system from different gas-producing stations.

In one application of electronic information machines to gas-dispatching problems, a total of thirty telemetering units recording fifty-five separate values at sixteen discrete points are monitored. Prior to the use of a data processor, the instruments providing the necessary telemetered data were read hourly and the flow value at each point was computed manually. Under the computer system, the points are monitored every six minutes, thus eliminating these dispatching problems. By making frequent telemeter checks and flow calculations for each purchase or delivery point, the dispatching department maintains control over system demands and the potential cost that could occur unless demands are limited to certain values related to the pre-established peak. The dispatching department accomplishes this limitation by exercising precontracted service interruptions with large industrial and commercial users.

The usage of a computer in gas-dispatching operations has been well oriented toward the future. This was a structural need, for although acceptable flow calculation accuracy had been realized with manual operation, past growth had already overtaxed the gas-dispatching department's capacity for manual calculation. Future growth trends indicated that a computer was the only alternative that would avoid expanding dispatcher personnel, and labor cost was one of the prime reasons for considering computer usage.

In another case, the usage of a data processor enabled a major gas company to control peak system demands without incurring high demand charges. This company buys its gas on a two-part rate that includes a straight commodity charge plus a demand charge based on the peak demand established during any one day in a year. The demand charge is then applied over the other eleven months.
Thus, a severely high peak demand during just one hour of one day in a year can directly affect operating expenses for the entire year. As a result, it was necessary to insure control of peak gas usage in the gas distribution system at all times. To do so, the gas company in question monitors its demand on a 24-hour basis throughout the year. Adjustments are made by interrupting service to industrial customers who buy gas at preferential rates on an interruptible service basis, agreeing to curtail use whenever demand in the utility's area approaches the condition of exceeding a pre-established peak usage point.

The gas load dispatcher must monitor the hour-by-hour demand, anticipate unusual demands due to weather conditions, and evaluate the hourly load increase in terms of necessary industrial curtailment. Data from various purchase and delivery points on the system, in the form of static pressure, differential pressure, temperature, etc., are telemetered to the dispatching center, where the flow must be computed for each point to determine the total system demand. Some 75 telemeters are monitored every six minutes.

In this, as in all other applications of computers to optimizing process control, the key to success is matching data processing with the real-time requirements. Substantial amounts of data, reflecting variations in the process, must be collected, analyzed, and displayed to permit control decisions to be made in time to effect corrective and optimizing action. When large numbers of variables with rapidly changing values are involved, the factor of time is especially important. Time lost in the preparation of data suitable for making decisions results in possible losses in quality, reliability, efficiency, and safety. It cannot be repeated too often that the primary advantage of computer process control is that it permits control decisions to be made at rates that match the time constants of the process and system involved. These time factors vary from process to process, and each process control situation requires control elements custom-tailored to particular specifications.

In a certain specific case, in order to apply the computer to the process, it was first necessary to define the exact specifications of the process to which the computer was to be attached, and the desired functions the machine had to perform. The initial estimates showed that only a minor fraction of the computer time would be necessary for the dispatching calculations, although for reasons of on-lineness the machine had to work on an around-the-clock basis. The foregoing conclusion, which followed the first analysis, is typical enough of real-time applications.
To take full advantage of the discrepancy between computational time and machine availability, the utility company programmed the computer to carry out the prime objective of demand calculations and, in addition, to perform engineering calculations for other divisions. A monitor program was established to set up a priority sequence of routines for the computer to follow.* This executive program makes it possible for the computer to perform the monitoring and calculation every ten minutes and again at the end of the hour. It then takes up additional computational work in the vacant five minutes before the next sampling period.

* See also the discussion on executive routines in Chapter XIX.
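A minimal sketch of such an executive arrangement is shown below: the dispatching (demand) calculation is run at every sampling period, and lower-priority engineering work is taken up only in the idle time that remains. The cycle length, task names, and durations are assumptions for illustration, not details of the installation described.

```python
import heapq

SAMPLING_PERIOD_MIN = 10.0       # assumed length of one monitoring cycle, in minutes

# Priority queue of background engineering jobs: (priority, name, minutes still needed).
background = [(2, "pipeline network study", 14.0),
              (3, "load forecast refit", 9.0),
              (1, "demand-charge projection", 6.0)]
heapq.heapify(background)

def run_cycle(demand_calc_minutes):
    """One executive cycle: do the demand calculation first, then fill
    whatever time is left with the highest-priority background job(s)."""
    remaining = SAMPLING_PERIOD_MIN - demand_calc_minutes
    while background and remaining > 0:
        prio, name, need = heapq.heappop(background)
        used = min(need, remaining)
        remaining -= used
        if need - used > 1e-9:                   # job not finished: requeue the remainder
            heapq.heappush(background, (prio, name, need - used))
        print(f"  ran {name} for {used:.1f} min")

for cycle in range(3):
    print(f"cycle {cycle}: monitoring and demand calculation")
    run_cycle(demand_calc_minutes=4.0)           # assumed time for the prime task
```

The essential design choice is that the prime task is never displaced: background work is resumable and simply absorbs whatever vacant minutes each cycle happens to leave.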
Chapter XXIV

APPLICATIONS IN THE FIELD OF ACCOUNTING

The well-known advantage a computing system offers for accounting report preparation and file updating is the preparation of all required data with one handling of the information. By eliminating individual, diverse, and overlapping steps, time and cost savings can be realized along with an increase in efficiency and accuracy. But the "computerized" methods thus far used in petroleum general accounting applications have left much to be desired.

In Chapter XXIII, we made explicit reference to what we consider to be a rational information system for management use. Accounting should act as the feedforward element of this system, and this in itself means that accounting should work in close connection with mathematical simulation; it should use the most recent concepts and devices in advance index evaluation, optimization, cost analysis, and experimentation. But how often is this the case? How many companies or organizations have the guts to "derust" their accounting systems?

Examples of the effects of patching, and of the outcome of the partial measures applied to rusty systems, are numerous. A good example comes from eastern Europe. The Russians, for one, have been considering computer control of fuel-power supplies. Their objective was to establish the most economic methods of distributing coal throughout the entire country from existing coal basins, but no evaluation was carried out to determine whether coal is indeed the most economic fuel to distribute. Piped gas, oil, or high-voltage electricity have proven to be less costly commodities for distribution.

A study of this nature obviously should start with fundamentals, establishing a standard cost system and implementing rational cost accounting procedures. It is useless to use simulators and computers to determine the most economic distribution of existing fuel supplies while being in no position to evaluate the cost effectiveness of the different alternatives because of old-fashioned bookkeeping.
The cost of a thorough analytic study to cover the foundations of the systems and procedures work is small compared with the magnitude of fuel power problems in a major industrial country which has its fuel power sources and its industries spread over such a large slice of the earth's surface. About one-third of all Russian freight turnover, it seems, is taken up with the transport of fuel. Fuel power production and distribution absorb around a quarter of all industrial investment and one out of every ten industrial workers. The size and scope of the problem is rapidly changing as industry expands and the proportion of gas and oil to other fuels rises.

Some five years ago, the Russian government demanded an optimal control plan for fuel power production and distribution for the entire country, and for separate economic regions. A plan was produced, but apparently it did not yield the desired results, if one judges from the commentaries this plan received within the country: "Control must be optimal in the strictest sense of the word because a deviation of even a few per cent causes losses measured in billions of rubles ...." Or, "... can one get an optimal balance for fuel power as a whole merely by adding together the optimal balances for coal, oil, gas, and electric energy?"

Exact and analytic cost accounting is the first of two major conditions that must be met before further progress can be made. The second is a precise appreciation of the relative merits of basic fuel and power supplies. As in many other cases, applied mathematics and computers should have been considered in the next step; instead they were treated first. The Russian analysts established:

• The quantities of fuel power resources in all economic councils of the Soviet Union.
• The "firm" requirements in coal, oil, gas, and electric energy.
• The "conditional" requirements in caloric values, which can be supplied by any of the four fuel power sources mentioned, and other factors relating to distance from sources, transport costs, and the like.

"But to establish balances for the future, this is still insufficient," they commented. "What about fuel power resources for factories now being built or reconstructed?" For this, one needs economically valid prices for timber, metal, machines, and general material resources used in the production of fuel and power; in equipment for fuel power installations; and in power transmission facilities. One also needs valid transport tariffs based on real production costs. It is like the English recipe on how to cook a rabbit: "First catch the rabbit ...."
COMPUTERIZING OIL AND GAS DATA

Gasoline accounting, for one, presents a good potential for an integrated data processing approach. This involves three main phases. The first is gas measurement. The second consists of the allocation of volumes and values in connected field systems of gas facilities. The third includes royalty accounting and disbursements, preparation of earnings and expense vouchers, and preparation of reports required for company operations and governmental agencies. Input, processing, and output of data throughout the range of petroleum operations is shown schematically in Fig. 1.
FIGURE 1
Many of the foregoing problems are of a general nature, common to commercial concerns. In recent years, for example, the government has required each business to compile records of the number of employees on the payroll, hours worked, wages paid, and the various contributions that are deducted from the employees' wages. Apart from the obligations imposed by the Federal Government, the company must keep records for the State Government, and also give each employee a detailed statement of wages earned, deductions for federal and state income taxes, social security, and take-home pay. In addition, petroleum companies have problems of a more specific nature.
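A minimal sketch of the per-employee statement implied by these requirements might look as follows. The wage rate, hours, and withholding percentages are purely illustrative assumptions, not statutory figures of any period.

```python
def pay_statement(hours_worked, hourly_rate,
                  federal_rate=0.14, state_rate=0.03, social_security_rate=0.042):
    """Compute gross wages, itemized deductions, and take-home pay for one
    pay period (all rates are illustrative assumptions)."""
    gross = hours_worked * hourly_rate
    deductions = {
        "federal income tax": gross * federal_rate,
        "state income tax":   gross * state_rate,
        "social security":    gross * social_security_rate,
    }
    take_home = gross - sum(deductions.values())
    return gross, deductions, take_home

gross, deductions, net = pay_statement(hours_worked=40, hourly_rate=3.25)
print(f"gross {gross:.2f}, deductions {sum(deductions.values()):.2f}, take-home {net:.2f}")
```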
The areas of application of integrated data processing in the field of general ledger accounting for an oil company are:

• Capital and surplus
• Creditors, account charges, deferred liabilities
• Movable assets in existence at the effective date
• Depreciation reserve on movable assets in existence at the effective date
• Debtors, prepayments, and deferred charges
• Cash at bank, on hand, and in transit

The way integrated files would be used can be exhibited by advancing the foregoing classification one more step in detail, as shown in the accompanying tabulation:

1. Capital and surplus: Share surplus; Earned surplus; Dividend paid.
2. Creditors, account charges, deferred liabilities: Accounts payable; Deposits of cash; Retention fees withheld; Liability estimates; Mobilization advances; Unclaimed payments; Salary and wage control; Liability for goods and services not billed; Liabilities in general; Accrued staff movement expense.
3. Movable assets in existence at the effective date: Opening balance; Additions; Retirements; Sales/transfer.
4. Depreciation reserve on movable assets in existence at the effective date: Opening balance; Current provision; Retirements; Sales/transfer.
5. Debtors, prepayments, and deferred charges: Accounts receivable pending; Unbilled integrated charges to the company; Provision for bad and doubtful accounts; Amounts due by employees; Claims and deposits; Deferred payroll transactions.
6. Cash at bank, on hand, and in transit: Mounting credits; Cash at bank, current; Bank interest receivable.
Under former procedures the information necessary to accomplish these accounting objectives was almost always scattered throughout many departments. One department is responsible for billing information to all accounts, another department is responsible for credit information, another for accounts receivable, while the cash balance may be handled by a different division altogether. The burdens imposed on a company by this method of operation are significant. Inquiries about an account from either inside or outside the company frequently result in a maze of intercommunication to obtain the desired information. This is a costly operation, and one should not underestimate the possibility of errors caused by scattered handling of the files. Worse yet, this maze of disorganized data can mask the facts.

An accurate and timely accounting system begins with the proper handling of the initial source information. As far as customer billing is concerned, this means measurements. Measurements, in the sense used here, encompass the work heretofore performed in the major producing division offices of a petroleum concern. This, in turn, includes several stages. It is necessary to compute the flow of gas through orifice meters. Meter charts containing continuously recorded pressure data are converted to numerical quantities by means of a chart integrator, which is a special-purpose mechanical analog computer. The integrator result must be converted to quantities expressed in standard units of volume by multiplication by a series of factors which give effect to the kinetic and thermodynamic laws governing gas measurement. Where meters are not installed to measure gas, such as in the case of small volumes used as fuel in field operations, a system of estimating is employed. In either case, it is necessary to accumulate figures that will enter into the following phase, namely, the allocation and assembling of volumes and values in a connected field system of gas facilities.

The latter phase is the heart of the entire oil and gas application. It is the area where utmost accuracy is demanded. In allocation and assembling, the objective is to determine the amount of gas each lease contributes to the various types of dispositions. These dispositions include sales to transmission companies, gasoline plants, and carbon-black manufacturers, as well as gas used for fuel, repressuring, and gas lifting, and gas that is flared. The salient problem here is to maintain the files with factual information and bring the proper figures into perspective. Frequently, notification of changes must be routed to a considerable number of locations. The physical difficulties involved in maintaining files in this manner cause delays in the posting of charges. These delays result in less satisfactory service to the customer, while errors, which must later be corrected, are introduced. An efficient data handling system should enable the company to record all of the information pertaining to an account in an integrated file:
• Customer description
• Use history
• Accounts receivable
• Name and address
• Credit history
• Buying information

All this information should be included, along with the proper statistics and past performance evaluations. The central computer should be made "responsible" for all inquiries on the customer accounts, communicating through the interface machines with the branch locations. Thus, a high degree of timeliness can be obtained while, for all practical purposes, discrepancies will be nonexistent.

Figure 2 presents a data organization scheme for customer accounting purposes. Mathematical statistics have been used in data reduction, to help identify change situations and use "data tendencies" in an efficient manner. A parallel system keeps the customer quality history, including both the use of statistics (on a comparative basis) and credit information (Fig. 3).

FIGURE 2 Data organization for customer accounting (heading and meter identification; demand for the previous year; 12-month statistics; ledger and open items).

FIGURE 3 Customer quality history (credit statistics; quality statistics; performance and use statistics; ledger and last entry).

In the background of this data handling activity is the performance of five major functions:
• Proper allocation of facts and figures concerning the exploitation of the system.
• Computation and preparation of the bill.
• Accounting for the payment of the bill and credit evaluation.
• Cost control and related evaluations.
• Profitability analysis within the framework of the operations.

This last function has as a prerequisite the establishment of operating statistics, in order to determine that billing schedules are equitable both to the customer and to the company. This, in turn, imposes a number of other requirements. To compute the amount of gas each lease contributes to any or all types of dispositions, it is necessary to use a combination of several sources. Obviously, metered volumes are a prerequisite, but a precise meaning for "volumes" must be established. There may be several volumes for one lease and one volume for many leases. These include well meters, lease meters, system meters, plant meters, and sales meters. Where the purchaser of the gas measures the amount purchased, it is necessary that he report these volumes, as they will take their place as a source for allocating. Another source includes estimates for fuel consumption, which normally disclose the small volumes of gas used to operate pumps, engines, and other field equipment. To complete this picture, we will add another input, namely, theoretical gas plant production figures based on gas/oil ratios.

After the foregoing items have been considered, report-making for management can get started. The financial, accounting, and general company reports are dependent upon allocation as the primary form of input. This variable-type input, coupled with a selection of indicative data in the form of a table, produces accounting entries for earnings, receivables, expenses, and deferments, in a format required by the chart of account codes. A report indicating all of the various dispositions can also be prepared, in edited form. This information might be furnished to producing division offices to be used in establishing an audit trail for reconstruction purposes.

A good example of a data handling job within this framework is the computation of royalties. An examination of royalty calculations discloses that there is no fixed formula establishing the applicable dispositions. The accepted generalities are filled with exceptions because of special contracts. The total amount of royalty to be paid for any given lease is determined in the allocation phase. The calculated amount is then either paid or placed in suspense. In addition to writing checks and preparing a check register, this phase includes the maintenance of all current division-of-interest records, the preparation of a categorized listing of all payments due, the preparation of suspense records, and state and federal income reports.
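As a minimal illustration of the measurement and allocation phases described in this section, the sketch below converts chart-integrator readings to standard volumes by a chain of correction factors and then prorates each disposition back to the contributing leases in proportion to metered volumes. The readings, factor values, lease names, and disposition categories are hypothetical; the assumption that the integrator reading is proportional to the square root of the differential-pressure and static-pressure product, with the correction factors supplying the remaining constant, is one common statement of the orifice-meter computation, not necessarily the exact procedure of any particular company.

```python
def standard_volume(integrator_reading, correction_factors):
    """Convert a chart-integrator reading to standard cubic feet by
    multiplying the chain of correction factors (illustrative values)."""
    c_prime = 1.0
    for f in correction_factors:
        c_prime *= f
    return c_prime * integrator_reading

# Hypothetical lease meters: integrator readings and correction factors.
leases = {
    "Lease A": standard_volume(1250.0, [338.2, 0.9985, 1.0021]),
    "Lease B": standard_volume( 980.0, [338.2, 1.0010, 0.9978]),
    "Lease C": standard_volume( 410.0, [338.2, 0.9992, 1.0005]),
}

# Hypothetical system dispositions measured at sales, plant, and fuel meters.
dispositions = {"sales": 720_000.0, "plant": 180_000.0, "fuel": 12_000.0}

# Allocation: prorate each disposition to leases by their share of total input.
total_input = sum(leases.values())
allocation = {
    lease: {d: qty * vol / total_input for d, qty in dispositions.items()}
    for lease, vol in leases.items()
}

for lease, split in allocation.items():
    print(lease, {d: round(q, 1) for d, q in split.items()})
```

Royalty computation then follows directly, since each lease's share of each disposition is the quantity to which the applicable royalty terms are applied.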
GENERAL ACCOUNTING-TYPE APPLICATIONS
The advantages to be derived from data integration and the usage of teletransmission media can be better exemplified by making reference to a system that eliminates individual, dispersed treatments, thus realizing time and cost savings along with an increase in efficiency and accuracy. Within this organizational approach, accounting applications can be projected as shown in the accompanying tabulation:

Daily journal
Payroll calculation
Voucher consolidation
Accounts payable
Gas accounting reports
Follow-up overdue accounts
Goods in process ledgers
Drilling cost reports
Refining cost reports
Overhead allocations
Long-range planning
Economic analyses
Calculation of company earnings
Profitability analysis
Preparation of operating budgets
Budgeted vs. actual expense calculations
General ledgers
Pension accounting
Accounts receivable
Oil accounting summary reports
Agent, dealer, and customer billing
Raw material ledgers
Detailed cost computations
Production cost reports
Administrative and overhead expenses control
Return on investment calculations
Business research
Evaluation of alternative depreciation plans
Earnings-to-profits evaluation
Capital budgeting
Detailed expense distributions
Budgetary re-evaluations
Financial profitability studies, investment decisions, and all matters having to do with allocation among alternative ends should be treated using mathematical simulators. But here again, as in the case we discussed in the preceding section, the greatest attention should be paid to the data which will be used as raw material in the computation. Once more, the design of any automatic data system should duly reflect the fact that much of the accounting input data is prepared by operating line people "on the job." One of the basic difficulties of this kind of system is inaccuracy of account coding, which stems from a lack of understanding of the system requirements. Accounting source records are always open to misinterpretation.

Controlling the input data, both when dispatched between offices and through a computer network, must be one of the main concerns of the systems analyst. The analyst should establish appropriate procedures for carefully recording, counting, and checking this information before it is dispatched. Within the framework of the total management information system, the analyst should provide for data consolidation from the lowest cost center level up through the various levels of supervision within the organization. Direct, controllable costs should include unit measures of efficiency; service department expenditures should be properly described so that queries can be traced back to the originating party and to the original source document, if necessary. Cost accounts should be clearly and simply defined, and understandable to the users.
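One simple way of "recording, counting, and checking" input before it is dispatched is to attach a record count and control totals to each batch and to verify them again on receipt, while screening account codes against the agreed format. The sketch below is a minimal version of this idea; the voucher records, the cost-center hash, and the account-code format are illustrative assumptions.

```python
import re

# Hypothetical batch of cost vouchers: (account code, cost center, amount).
batch = [
    ("611-04", "C12", 118.40),
    ("611-07", "C12",  62.15),
    ("623-01", "C30", 410.00),
]

ACCOUNT_CODE = re.compile(r"^\d{3}-\d{2}$")    # assumed coding format

def batch_controls(records):
    """Record count, money total, and a hash total over cost-center digits."""
    count = len(records)
    amount_total = round(sum(amount for _, _, amount in records), 2)
    hash_total = sum(int(center[1:]) for _, center, _ in records)
    return count, amount_total, hash_total

def verify(records, controls):
    """Re-derive the controls at the receiving end and flag badly coded accounts."""
    ok = batch_controls(records) == controls
    bad_codes = [acct for acct, _, _ in records if not ACCOUNT_CODE.match(acct)]
    return ok, bad_codes

controls = batch_controls(batch)     # computed before dispatch
print(verify(batch, controls))       # (True, []) if nothing was lost or garbled in transit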
The tendency to allow accounts to be established that have limited usage or are not really justified or understood should be carefully avoided. This tendency usually proves to be one of the basic causes of misallocation and misuse of the cost system, and it tends to render the entire scheme less effective and more cumbersome. Proposals for refinements should be approached with caution. The general outline of a cost system with built-in coding capabilities, isolating individual cost centers in all areas of company operations, will be advantageous; but any suggestion about developing, shifting, or changing accounting systems must be kept as direct and simple as possible. Care must be taken to avoid schemes that lack redundancy and are vulnerable to error.

A number of other operations can make efficient use of data automation: warehouse processing, clearance of burden and budgetary control accounts, payroll distribution, and the calculation of depreciation and depletion expense. A series of programs can be used in the preparation of warehouse vouchers and in editing statements of current warehouse activities, more precisely designed to fill the following functions (a computational sketch of the pricing steps is given at the end of this section):

• Process inventory balances and current receipts to calculate new average prices.
• Examine new average prices to see that they are in line with prior month average prices.
• Apply standard cost prices to issues and calculate deviations.
• Compute price holdover issues from prior months.
• Calculate handling charges where applicable.
• Prepare vouchers for current issues, handling charges, and holdover items.
• Accumulate statistical information for each warehouse, such as the number of warehouse items, number of active items, and number of issues.
• Accumulate processing controls for each voucher with a record count, gross debit, and credit money evaluations.
• Prepare a catalogue of warehouse material codes and descriptions, including a monthly warehouse activity report.
• Prepare a balance forward of warehouse stock for the next month's processing.

Integrated processing has proved to be advantageous in handling data for certain operating facilities such as automobiles, airplanes, helicopters, boats, field service units, and district and division offices, which are recorded initially in accounting controls for the purpose of identifying total expense with specific units and later are cleared to other accounts. Clearing account programs have been designed to handle processing on the basis of (a) wells, connections, or volumes; or (b) percentages based on service, time, floor area, minutes of flying time, miles driven, ownership interests, and the like.
The clearing operation is performed by bringing into the computer a master file carrying the base for clearance for the particular unit, together with the current detail for that unit. When all vouchers have been closed for the business of the current month and the last documentary control and verification program has been processed, the computer can provide voucher registers, general ledgers, subsidiary ledgers, voucher listings, and the various work papers used for the preparation of financial, statistical, and operating cost reports. A preparatory program can be designed to separate the basic detail file into pre-established major classifications, accumulate totals for comparison with the master data control, summarize for the voucher register and general ledger, and extract all voucher entries that were compiled in the computer for the preparation of voucher listings. In addition to the voucher register, general ledger, subsidiary ledgers, and vouchers, many completed statements, work papers, analyses, and classifications can be prepared during the processing at the close of the accounting month. Some of the reports which might be necessary on a monthly basis are the following:

• Statements of earnings and expenses for each lease.
• Statements of indirect or burden costs.
• Statements of expenses for each plant or distribution system.
• Statements of minor capital projects for each location showing comparison with the current budget.
• Analysis of compensation insurance costs for the respective governmental agencies.
• Comparison of expenditures for plant and equipment with budget estimates, giving a breakdown analysis for each division.
• Statements of detail for completed work-in-process jobs, which are used in auditing completed jobs before clearance to plant and equipment.

Even if it is a piece-by-piece electronic data processing job, careful planning is absolutely necessary throughout the preparatory and conversion phases. First, before anything else, comes the question of problem definition and system development. A general study should be made to determine the flow of work from field offices to district offices to division offices. This must be followed by a detailed study, including revision of present procedures and the establishment of procedure flow diagrams for integrated operations. Such studies should also include analyses of sales contracts, consideration of the requirements of the state and federal commissions, and a review of state and federal statutes and the regulations of state regulatory bodies pertaining to oil. A variety of communication problems can be expected, which have to be studied and resolved.
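Two of the warehouse voucher functions listed earlier in this section reduce to simple arithmetic: the new weighted-average price after posting receipts, and the deviation booked when issues are priced at standard cost. A minimal sketch follows; the item quantities, prices, and the tolerance used in the reasonableness test are hypothetical figures.

```python
def new_average_price(on_hand_qty, on_hand_price, receipt_qty, receipt_price):
    """Weighted-average price after posting the month's receipts."""
    total_qty = on_hand_qty + receipt_qty
    return (on_hand_qty * on_hand_price + receipt_qty * receipt_price) / total_qty

def issue_deviation(issue_qty, standard_price, average_price):
    """Deviation booked when issues are priced at standard cost."""
    return issue_qty * (average_price - standard_price)

# Hypothetical warehouse item, priced per foot.
avg = new_average_price(on_hand_qty=2400, on_hand_price=0.52,
                        receipt_qty=1600, receipt_price=0.58)
dev = issue_deviation(issue_qty=900, standard_price=0.55, average_price=avg)
print(f"new average price {avg:.4f}; deviation on issues {dev:+.2f}")

# Reasonableness test against the prior month's average, as in the second
# function of the list (the 10% tolerance is an assumed figure):
prior_average = 0.51
if abs(avg - prior_average) / prior_average > 0.10:
    print("average price moved more than 10%; refer for review")
```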
APPROACHING THE CREDIT CARD PROBLEM*
The usage of credit cards involves, among other things, two major subjects: the willingness of the company to enter into a large-scale investment in accounts receivable, and the availability of a system able to face the necessary record keeping and data processing. The second requirement is a function both of the data volumes inherent in a credit card system and of the continuing increases in sales volume which this approach is expected to bring. In turn, these two subjects of financial investment and efficiency in data handling are interrelated. Behind the investment are thousands of dollars of capital expenditure necessary to provide the tools for a large staff of employees devoted to credit administration and accounting procedures. Behind the interest in providing integrated means for credit card applications lies also the critical financial aspect of an efficient follow-up on the behavior of the card-holder population. Out of a total of some seventy million credit cards circulated in the United States, for instance, no fewer than one and a half million are lost each year. Of these, sixty thousand have been stolen. Illicit charges on a stolen card run up to an estimated average of five hundred dollars; to the companies that issue credit cards, dollar losses from their misuse increased eightfold in four years.

Within the domain of oil operations, many firms have studied the credit card subject extensively. In most cases, from a long-range point of view, it is the desire of management to bring the large volume of delivery tickets to one location, where consideration might be given to machine processing. Obvious savings are apparent in dealing with the considerable bulk of customer invoices. Hand filing alone involves three separate sorts to bring the ticket down to customer level. With this type of operation maximum production cannot exceed the level of 150 tickets per hour per employee, whereas, comparatively, machine sorting of the keypunched delivery tickets points to incredible potentials. But, while a single computer processing run can increase the processing pace considerably and update management reports more accurately, integrated processing is still a step beyond the simple use of a computer.

As with most automated operations, an important incentive for the introduction of a computer was to obtain complete accuracy in filing. Human error involved in allocating a delivery ticket to a particular account grouping, either by number or by customer name, can be a constant problem.
*The importance and the impact of this problem in accounting operations can be better appreciated if the seventy million credit cards now in use in the U.S. are brought into correct perspective. See also the discussion on credit cards in banking in Chapter XXXIV.
Due to the delivery ticket volume, it is usually impossible to provide any diversification of work for the personnel assigned to these jobs. Furthermore, the use of the credit card as the customer credit authority has always presented certain problems at the service station source. Problems of this nature are usually due to the possibility of error in recording the customer number on the delivery ticket. Illegibility can seriously handicap the operations of the accounting department. Among other things, the development of the digit-check principle can give the accounting services of the company the opportunity to set up a method of checking the accuracy of customer numbers.

In the following we will consider, as an example, the case of a major petroleum manufacturer in the western United States. After the use of a computer for handling the credit card accounts was decided upon, consideration was given to the possible scanning of the imprinter type of delivery ticket, produced from the embossed plastic credit card. As a result of these studies, it was found necessary to redesign the credit card delivery ticket to provide a tissue original for the customer and a 51-column card copy for processing in the accounting office.

At the time of conversion the total number of credit card holders was a little in excess of half a million, and the total credit card requirement, including multiples, approached the one million level. This total included quarterly renewals and the complete replacement of cards in the hands of annual credit card holders. It was necessary that production be carefully scheduled to meet the expiration date of existing quarterlies and follow with the orderly replacement of the annual credit cards. The production and handling of the credit cards has always been tied in very closely with statement production, in view of the fact that the release of cards to inactive and active accounts in releasable condition is the responsibility of the accounting office. Cards are held on accounts in delinquent condition and delivered to the respective credit office.

One of the primary objectives of this whole application was to introduce a system able to utilize some type of automatic scanning equipment. Furthermore, within the credit department of the company, impetus has been given to the conversion of quarterly credit cards to annuals, made available through improved information processing. Prior to the plastic card program, conversion of quarterlies to annuals was based on a three-year "good pay" experience, with credit department policy providing for the review of the condition of an account after one year of association. Systematically, as quarterly renewals are run, the accounting unit produces listings for the indicated credit offices. These include the credit card numbers, names, and balances owed by all current accounts attaining the first anniversary. From these lists, credit history files are reviewed and notifications delivered to the credit card accounting office to proceed with conversion to annual, if in order.
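The digit-check principle mentioned above can be realized in several ways; the sketch below uses a weighted modulus-10 scheme of the kind commonly applied to embossed account numbers. The doubling of alternate digits is one common choice of weighting, not necessarily the scheme this company adopted, and the customer number shown is hypothetical.

```python
def check_digit(base_number: str) -> int:
    """Weighted modulus-10 check digit: double every second digit from the
    right, subtract 9 from any product above 9, and take the complement of
    the sum modulo 10."""
    total = 0
    for position, ch in enumerate(reversed(base_number)):
        d = int(ch)
        if position % 2 == 0:          # weight the alternating positions
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def is_valid(customer_number: str) -> bool:
    """A customer number is accepted when its last digit equals the check
    digit computed over the preceding digits."""
    return check_digit(customer_number[:-1]) == int(customer_number[-1])

base = "4081572"                       # hypothetical customer number
full = base + str(check_digit(base))
print(full, is_valid(full), is_valid(full[:-1] + "0"))
```

Most single-digit recording errors and adjacent transpositions fail this test, so a mispunched customer number can be rejected at input rather than surfacing later as a misposted charge.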
Coincident with the identification of accounts for conversion from quarterlies to annuals, name and address details are provided in card form for the sales people, covering accounts inactive four to six months inclusive, to prompt personal contact and sales solicitation. These groups of cards ultimately reach the hands of the service station dealer within whose area the customers reside, so that he can proceed with the contact. Furthermore, the index and address files are periodically cleared of accounts twelve months inactive, after three quarterly or one annual renewal and a final notice of discontinuance of credit card mailing have been issued. This action prompts a customer reply if he wishes to have the mailing of credit cards perpetuated. Customer activity is identified each month as the company rounds out statement production, prior to the time the active and inactive customer records are reassembled.

A phase of the credit card accounting operation which may be of popular interest involves the processing of customer payments. Under an arrangement with the bank, all customer remittances are picked up by the bank; contents are extracted and reconciliation is completed, balancing the check totals with the remittance stubs. The bank processing involves features that are peculiar to the type of material handled, including appropriate identification of balances paid in full, partial payments, and the accumulation of unidentified remittances for referral to credit card accounting for research. On partial payments, the bank punches the alternate field in the customer remittance stub. Customer payments supported by the wrong end of the statement are handled in a separate group. The bank is also permitted to handle, as fully paid, those remittances within the tolerance of "xx" cents short, writing the net difference into an accumulative shortage account for the day or banking period. Initially, the same tolerances on overpayments were accepted; however, the frequency of customer requests for adjustments, representing overpayments of nominal amount, prompted the discontinuance of the practice. The details of the remittance stubs (customer numbers and amounts) are machine listed in batch groupings and delivered to the accounting office each day, with certification of the total deposit. To facilitate the research of unsigned checks and other items rejected for charge-back to customer accounts, the depository bank maintains microfilm records of checks at the proof machine source, to correspond with each batch of remittance stubs.

Though the discussion focuses on batch processing applications for the credit card problem, the cases mentioned help exemplify the potential an integrated data processing system for petroleum marketing may have.
Starting with the optimization of the distribution network for all petroleum products, such a system can extend into planning the supplies and controlling the inventories for oil, oil products, and other items, in coordination between the central warehousing and the dealership organization; invoice making, product pricing, and price checking; credit card accounting and banking follow-up; cost accounting, evaluations, and profitability analysis from the refinery to the warehouses and the dealerships; and commission calculation and detailed sales analyses, with mathematical tests to evaluate company performance against competition.

ACCOUNTING CONTROL THROUGH SAMPLING

This discussion of integrated data handling for accounting purposes would be incomplete if consideration were not given to sampling procedures and data evaluation, using mathematical statistics. Statistical methods have a great contribution to make in accounting. The notion of statistical sampling enables the modern accountant to consciously select valid parts of a population and, through mathematical manipulation, obtain dependable information for control purposes. The idea here is basically the same as in the sampling schemes for industrial process control, to which we made reference in the first four chapters of the present work. Its application in accounting matters, as in every other case, has as a prerequisite the need for a correct analysis of the problem. A data sampling plan should be studied in detail beforehand, which means that careful thought must be given to (a) the area of interest, that is, the population to be sampled; (b) the ability of the sampling process to give a representative sample of this population; and (c) the precision required in the answer, which in turn will help establish the size of the sample. Many of the problems involved in the processing of sample data are in no measure different from the problems of processing data collected on a complete count basis.

An efficient method of adjusting to the relative importance of groups within a certain population is the use of "stratification." Invoices, for one, can be divided into dollar groups and a different plan applied to each one of them; in this method, sample sizes are proportional to value. To select a plan of this type, a detailed analysis must first be made of the value and volumes of invoices handled, and of the extent of error. For instance, a study the writer did at a major manufacturing company in the Midwest, in order to establish the relative importance of dollar orders received, yielded the graph shown in Fig. 4. This type of study is necessary to obtain an idea of the distribution of the dollar value and of the extent and location of the error, both as to dollar value and quantity. It also can efficiently help the sales people orient their marketing strategy and their approach to the clientele. Top management can get a better idea of where profits lie; a comparison of Figs. 4(a) and 4(b) exemplifies this point.
FIG. 4 (a) Distribution of individual sales orders by dollar value. (b) Cumulative dollar value per dollar class of the individual sales orders.
Finally, as far as the accounting people are concerned, it is possible to determine critical cutoff points, and subsequently where it would be more economical to check all invoices because of the smaller volume involved and the greater risk due to high-value errors. Stratification can also be carried into greater detail by subdividing the population into various "risk groups." Smaller sample sizes can be taken as the risk decreases. A statistical analysis may indicate that the low-value invoices, usually representing a large volume, need not be checked at all.
This was the situation in the foregoing case; its implementation basically means management by exception. Random sampling, with its computed risks, provides greater accuracy at less cost, improves recovery with less fatigue, and results in lower error rates. Concentration on the lots containing most of the "errors" promotes more efficient checking. Sampling of accounting data permits a fast and reliable means of determining the areas of greatest error, so that effective corrective action can be taken.

In a systems approach, error reduction can be handled through the usage of control charts in connection with sampling. To date, statistical quality control charts have been used mainly in connection with industrial production. Their suggested use in accounting is based on the generic fact that the process of controlling "quality" is unchangeable; it is only affected by the dependent nature of a verification. The accountant carrying out the quality assurance operation may be affected by what he sees in determining whether the element is identical with the "truth" or whether there is a discrepancy. The use of mathematical statistics presents in itself solid safeguards that the outcome of an audit will not be biased by subjective judgment.

As a management tool, the statistical sample should constitute an integral part of the data network. The airlines in America, for one, have adopted a sampling system for their interline accounting.* Many passengers buy "through" tickets for journeys that utilize more than one airline. The whole fare is collected by the company that originates the journey, and so the problem is to account for the money that has to be paid to other lines for the later stages of the trip. Since there are journeys in the opposite direction, there also exists a reverse flow of credit. Hence, the cash payments necessary between airlines are a final balancing operation and are small compared with the actual turnover in each direction. It has, therefore, been agreed that the determination of the adjustment with "absolute accuracy" is not justified, because this would necessitate the clerical processing of all ticket vouchers. The solution that has been chosen calls for a sample of vouchers to be taken; the results from this sample determine the cash adjustments between the airlines.

* The railroads are said to proceed in a comparable manner in their accounting procedures, after having experimented with parallel runs to verify the accuracy of the inductive accounting approach.

To our inquiry on this matter, a leading airline answered as follows:

We utilize a sampling technique to settle interline balances with seven other large airlines. The following steps are involved:
a. Tickets are sorted into the following categories: First Class; Coach; Exclusions (this includes half-fare tickets, excess baggage tickets, and several other low-volume categories).
b. The exclusions are billed on an individual basis.
c. A 10% sample of the First Class and Coach categories is obtained by selecting tickets with serial numbers ending in a predetermined digit. The digit is selected by means of a table of random numbers and a new selection is made each month.
d. The sample tickets are priced and the remaining tickets are billed based on an average value that is developed from the sample tickets ... there are a number of built-in validity checks and safeguards against getting a biased sample.
Or, as another airline was to say:

The sample of passenger tickets selected is based on the terminal digits of the tickets. The size of the sample is decided by the number of terminal digits selected. The local (point-to-point) fare is determined for the population and the actual revenue earned is determined for just the sampled items. The relationship (regression estimate for domestic tickets and ratio estimate for international tickets) of local fare to actual fare for the sampled items is then applied to the population local fares in order to estimate the population actual fares.
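The arithmetic behind such a ratio estimate is simple enough to show in a short sketch. The following is illustrative only, in present-day Python notation and with invented field names; it is not the airlines' actual procedure:

    def select_sample(tickets, sample_digit):
        # Terminal-digit sampling: keep the tickets whose serial numbers
        # end in the predetermined digit (roughly a 10% sample).
        return [t for t in tickets if t["serial"] % 10 == sample_digit]

    def estimate_actual_fares(tickets, sample_digit):
        sample = select_sample(tickets, sample_digit)
        # The local (point-to-point) fare is known for every ticket;
        # the actual revenue earned is priced only for the sampled tickets.
        ratio = (sum(t["actual_fare"] for t in sample)
                 / sum(t["local_fare"] for t in sample))
        population_local = sum(t["local_fare"] for t in tickets)
        return ratio * population_local  # estimated population actual fares

    # A new sample digit would be drawn each month from a table of random numbers.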
Diverse areas of financial and accounting control, such as overtime work, absenteeism, budgetary measures, and performance evaluations, indicate a need for effective inspection and measurement methods to provide the company with sharper tools than are otherwise available. But if sampling is a means for implementing approaches to data collection, subsequent data reduction techniques must also be considered. Traditionally, the results of sample surveys are produced in the form of distributions, means, medians, or aggregates. With computer use, it is possible to determine, for instance, the second moment of a distribution, so that added insight is obtained into the performance of the accounting system. This brings our discussion back to the need of establishing and testing appropriate models for the composite examination of multiple variables.

The results of a statistically designed audit should also be accompanied by statements of the precision involved in their making. Structural to this requirement is the establishment of estimates of parameters from large masses of data. Such is the case with the determination of intraclass correlations at the level of clusters of small size, and of the analysis of variance. The mass-scale determination of correlation coefficients involved in ratio and regression estimates might lead to more efficient utilization of complex stochastic processes, requiring smaller samples while assuring precise results.

In conclusion, the use of mathematical statistics for financial control purposes falls within the framework of digital control functions. Accounting management has a structural similarity with a test of significance based on the null hypothesis. In the twenties, Shewhart treated the problems of quality control through a significance test adapted for workshop use. This helped point out the exceptions that needed attention: exceptions due to "assignable causes." But tests of significance are also particularly appropriate in helping managers to judge control information.
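To illustrate the control-chart idea applied to sampled accounting data, the sketch below computes conventional three-sigma limits for the proportion of invoices in error and flags the lots that fall outside them, i.e., the "assignable causes" needing attention. The figures are invented for the example:

    import math

    def p_chart_limits(error_counts, sample_size):
        # Average error proportion across all audited lots.
        p_bar = sum(error_counts) / (len(error_counts) * sample_size)
        sigma = math.sqrt(p_bar * (1.0 - p_bar) / sample_size)
        upper = p_bar + 3.0 * sigma
        lower = max(0.0, p_bar - 3.0 * sigma)
        return p_bar, lower, upper

    # Hypothetical audit: 200 invoices checked per lot.
    errors_per_lot = [4, 6, 3, 5, 18, 4, 2, 5]
    p_bar, lower, upper = p_chart_limits(errors_per_lot, 200)
    for lot, errors in enumerate(errors_per_lot, start=1):
        p = errors / 200
        if not (lower <= p <= upper):
            print(f"lot {lot}: error rate {p:.3f} outside ({lower:.3f}, {upper:.3f})")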
Chapter XXV

CONTROLLING A POWER PRODUCTION PLANT

Electric utility companies are constantly searching for methods of producing electric power more economically. This must be accomplished while maintaining safe, dependable operations and without risking service interruption to the customer. In turn, such requirements imply substantial improvements in the operating performance and reliability of individual plant equipment and a reduction of the operating costs of the entire system:

(1) The upward trend of fuel costs is met by the installation of larger, more efficient generating units. Power stations are built on the unit principle and there has been a great increase in the size of the individual units in the last few years: 500-MW and 1,000-MW units are currently projected. The nature of the load and the advent of nuclear power require that data control systems be implemented to maintain their efficiency.

(2) Rising labor costs have been offset by a reduction in plant operating personnel. A reduction in operating costs is achieved by consolidation of widely scattered boiler, turbine, and generator control panels into one central guidance network. For this network, an automatic real-time supervisory control system is required, to properly optimize certain manipulated variables and to perform calculations vital to improved operability.

Digital process control can provide a more effective plant equipment design and allow operation closer to design limits. It can detect equipment failure at an early stage of error development, and maintain high standards of safety.
Digital automation at the power factory level is often approached in a "step-by-step" fashion. A first approach is to establish a system able to collect, correlate, and compute data to produce information that would guide the operator to possible means of increasing plant efficiency. The following step involves expansion of the required equipment to accomplish automatic startup and shutdown of the entire generating unit. The third step
calls for a computer control system able to actually run the plant. The generating unit is directly under digital control, with the information machine correlating the operational data to produce and execute decisions.
INPUT AND THROUGHPUT ACTION FOR POWER PLANTS

As with all process control applications, the essential functions in guidance for power are: collection of operational data, correlation and reduction of the data to produce guidance information, evaluation and comparison of this information with references for decision making, and, finally, execution of the decisions to achieve the desired operating conditions. Implementation of these functions in power plants calls for storage of considerable amounts of data, such as norms, past performance, formulas, and the like, which are subject to frequent changes. The efficient collection of operational data implies the existence of a substantial number of:

• Contact sense points for detecting on-off conditions of the plant equipment.
• Contact operate points for on-off switching of the plant equipment.
• Analog inputs for measurements of plant variables.
• Analog outputs that represent the analog equivalent of calculated digital values to adjust the controllers' set points.
• Pulse counts to accept plant information in digital form from kilowatt-hour meters, etc.
• Priority interrupts that instantaneously interrupt the routine operation of the computer in order to handle high-priority occurrences in real time.

The frequency of automatic logging cycles cannot be decided in a general or arbitrary manner. It must be determined experimentally by the rate of change of the process variables. In the case of a base-load station, the routine logs usually need not be more frequent than, say, once an hour. The trend points might be logged at 24-hour intervals. For stations operating on a two-shift basis, it is advantageous to have a variable logging interval, so that during startup and loading the logging frequency can be greater than during normal running times. Every care should be taken to incorporate such fundamental conditions in the design of the data-logging equipment.

The sensory elements will be coordinated to the real-time computer through an "input interface." This needs to include interconnections between the plant measuring instruments, transducers, and the data-logging equipment. It is at this interface that difficulties usually arise, not only at the design stage but also during systems testing. Some of these problems may
result from the conversion of input signals into digital form.* The provision of a converter for each input channel brings about certain economic considerations, and for this reason a shared converter is often employed. This means that all the input signals must first be translated to a certain common language. A data logger organization is shown in Fig. 1.

The interface system must answer in an able manner questions of variety and of incompatibility. Apart from the electrical input signals from transducers associated with process variables, such as pressure and flow, the data logger must be capable of accepting inputs representing the output of a variety of other media. A distinction here is that the instrument signals are at relatively low level while, for example, the power measurements are derived from current and potential transformers, with operational characteristics at a higher voltage level. Also, many of the input signals will be derived from primary measurement instruments which have nonlinear relationships between the output of the transducer and the physical quantity being measured. The nonlinearities may be merely a slight departure from strict proportionality over the range or parts of the range. In other cases, the measurements may be of an implicit nature involving functions of a variable. The linearization function requires that the signal be modified, and this modification depends upon the value of the signal. All these aspects will have to be studied in a detailed and precise manner.

An interface operation should obviously reflect basic operational requirements, for instance, the continuous automatic scanning of inputs and printing records of any points that deviate from preset low and high limits. This brings into perspective the subject of output coordination. Instantaneous output will be needed to give the point identification number, the time of the alarm and the value, and the time and value when the point returns to normal. Depending on operational requirements, the occurrence of an abnormal condition may also need to actuate a visual or acoustic warning device to attract the attention of the operator.

The output devices may be strip-printers, page-printers, typewriters, or visual displays. Their selection should be based on technico-economic considerations, with technical requirements becoming imperative in the selection of media for recording the occurrence of alarms. A preferred format of the record today is a print-out of the time of the alarm followed by the point identification number, high or low limit symbol, and the measured value. In a more sophisticated data logger, a page-printer can be used for the alarm records, the additional printing space being used to record the high and low alarm limit settings. In a process control for boiler operations, for instance, the format
* See also Chapter VI.
FIGURE 1. Data logger organization, showing the comparator, the warning system, and the connection to input/output media and the central processing system.
of the log sheet can be conveniently divided into separate sections for the boiler and turbine points. The record will usually commence with the time of the log followed by a character identifying the type of log and the values of the points, each block of characters being separated by a space. The points within limits can be printed in black and the alarm points in red. This allows for rapid identification of abnormal conditions from the log sheet. The operator should also be able to select a particular input for visual display on some form of digital indicator. This is particularly useful during abnormal conditions or when corrective action is necessary.

To perform these input, throughput, and output operations for a power plant, the computer must contain preplanned, stored, and automatically available major routine programs for starting and stopping the auxiliary equipment; testing the stand-by, emergency, and running equipment at regular intervals; detecting and alarming off-normal conditions; starting and stopping subloop control systems; optimizing plant operations; taking proper action when a subloop or other equipment fails; and performing the normal corrective functions.*

In a digital control application carried out not long ago for power production purposes, two main programs were required to achieve normal operational control of the system. One program computes the power that each generator will be expected to provide through a grid of time scales, say at the half-hour level, to meet the predicted load in the most economic manner. This program then signals the power stations accordingly. The second routine computes the power required of each generator in a much denser time grid, say at a 3-minute interval, and provides data for the necessary changes in output. With respect to computation, because of the greater changes involved in the half-hour time scale, the work is more detailed and complex than that corresponding to the shorter-interval loading.

Throughout this projection, experimentation, and optimization, it is necessary to maintain, in the computer, an up-to-date picture of the operational system, by means of a mathematical simulator. All switching and plant changes have to be recorded before any new power requirements are scheduled. Similarly, account must be taken of the security of the power system. The computer must check, for a given distribution of load and generation, to make sure that the loss of, say, a large generator or a heavily loaded transmission line will not lead to an interruption of supply to the consumers. Hence, a reliability simulator should be incorporated into the guidance program, able to guarantee the systems assurance requirements.†

* See also Part V, "Programming for Real-Time Duty."
† See Chapters XVI, XX, and XXIX.
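The half-hour scheduling program described above is, in essence, an economic loading calculation. The sketch below is a minimal illustration, assuming quadratic fuel-cost curves and ignoring unit limits, transmission losses, and security checks, all of which the actual programs must handle:

    def dispatch(units, load, tolerance=1e-3):
        # units: list of (b, c) pairs; the incremental cost of a unit at
        # output P is b + 2*c*P.  Find the system incremental cost ("lambda")
        # at which the generators together meet the predicted load.
        low, high = 0.0, 1000.0
        while high - low > tolerance:
            lam = 0.5 * (low + high)
            outputs = [max(0.0, (lam - b) / (2.0 * c)) for b, c in units]
            if sum(outputs) < load:
                low = lam
            else:
                high = lam
        return outputs

    # Hypothetical half-hour schedule for three generators and a 900-MW load.
    half_hour_schedule = dispatch([(8.0, 0.004), (9.0, 0.006), (7.5, 0.008)], 900.0)
    # The denser routine would rerun the same calculation every few minutes
    # against the short-term load and issue trimming corrections.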
From the foregoing, we can define the object of a process control computer as being that of taking over as much routine work as possible. The operator can then devote himself to obtaining the best performance from the plant. The guidance system might also monitor and reset the main control loops, the computer giving commands to keep performance optimal. This is, in effect, a sizeable contribution. The importance of computer-made experimentation and the manipulation of variables can be better seen if one considers that the operator cannot read "efficiency" as simply as he does the temperature on a meter. He would have to make several readings and perform a complex calculation upon them. In doing so, he would take considerable time, and conditions of working might change so that the results no longer apply and the plant is necessarily operated in an inefficient manner.

Financially, the application of computers in steam electric stations can be justified on the basis of the tangible benefits, such as labor and fuel savings, and also the intangible benefits, such as reduction in the possibility of major equipment damage and reduction in generating unit outage time. Before making his choice of digital automation equipment, the design engineer must consider items such as reduction in equipment maintenance, simplification in backup electronics for instrumentation, and greater safety for men and machines, which result in definite savings, even though they are difficult to evaluate. Fuel and labor savings in existing units could be greater than in a new power production installation, because the older units are not automated to the same extent as modern conventional power plants and also because control boards are scattered throughout the plant, thus requiring a greater number of operators. Contrary to this, the savings from reduction in outage time are small, because the capacity of an old unit normally constitutes a small percentage of the total electrical system capacity, and it can be substituted with the system reserve, so the differential energy charges are small.

With both "new" and "old" power plants, the implementation of guidance action must be studied through a systems approach. A power-generating plant is a self-contained ensemble comprising a boiler, turbine, and alternator, together with their many auxiliaries such as fans, pumps, and, in coal-fired sets, pulverized fuel mills. Instrumentation, in the sense discussed in Chapters V and VI, is extremely important in the achievement of both safety and minimum running cost. A substantial number of units is involved in data pickup, dispatching, and corrective action. The specification of alarm-scanning and data-logging equipment will depend upon the nature, size, and complexity of the particular power station for which it is studied. Among the principal factors will be:
• The number of alarm and logging points
• The types of primary measurement instruments
• The physical construction of the power production units
• The disposition of the equipment
Other factors will be common to all systems, so that a generalized specification can be considered, the most obvious being that the data-logging equipment must be suitable for continuous operation under operational conditions. Environmental constraints might include air-borne coal dust and ash, and vibrations transmitted via the building structures from the turbine, the fan and pump motors, the pulverizing mills, etc. Allowances must also be made for operational accidents. As a matter of principle, the installation of the data-logging equipment must be in step with the boiler/turbine unit, and it should be operational during the commissioning and acceptance trials of the unit. It must be designed for reliability of operations, since it will be used continuously, year in and year out. Diagnostics and prognostics for fault location are essential if good availability figures are to be obtained.
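The variable logging interval mentioned earlier can be expressed as a simple lookup on the unit's operating state. The states and intervals below are invented for illustration:

    # Routine-log intervals in minutes, keyed by unit state (illustrative values).
    LOG_INTERVALS = {
        "startup": 5,          # frequent logs while starting up
        "loading": 10,         # and while picking up load
        "normal_running": 60,  # routine log roughly once an hour
    }
    TREND_INTERVAL = 24 * 60   # trend points logged at 24-hour intervals

    def routine_log_due(unit_state, minutes_since_last_log):
        # True when the next routine log should be produced.
        return minutes_since_last_log >= LOG_INTERVALS[unit_state]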
A PROCESS CONTROL SYSTEM FOR BOILER OPERATIONS

The following example comes from a data control system operating in a power plant. Figure 2 presents a diagram of the total power production system. The control is exercised by a computer, which, for identification purposes, we will call NEURON.* The NEURON system consists of a central computer, an input section, and various output devices. The computer is completely solid state, and it is designed specifically for utility applications. The input section of the computer accepts operational data directly from the process. Alphanumeric input data and the program are entered through the paper tape reader. Manual inputs are entered through the consoles. A scanner receives input signals, one at a time, for use by the computer. Two types of scanners are available with the system: a mercury-wetted relay matrix scanner, and a digital fast scanner, providing a rapid method of detecting the open-closed status of several contacts.

The boiler has a number of local automatic controllers, and a log of critical values is maintained. These are pressure at the superheater outlet, temperature at the superheater outlet, reheat temperature, and drum water level. The points where the temperature and pressure are of most
* Fictitious name. This is not the description of a single application. The writer has collected under one envelope the best points he observed in following the procedures of five different process control cases.
FIGURE 2. The total power production system: control-level data flowing to the "NEURON" computer, which is linked through teletransmission lines to on-line output devices.
interest are inlets and outlets of the various boiler sections. The outlet of one section may be physically so near the inlet of the next that it is not worth having two sets of instruments. Temperature differences between the inlet and outlet of the superheater sections are measured by thermocouples. Feed-water pressures and flows are measured in the drum and economizer. Automatic controllers have to be examined to check that the difference between the measured value and the controller's set point is insignificant. Conditions to be detected include:

• Inadequate pressure in the feed-water heater.
• Excessive main-steam or reheated-steam temperature.
• Leakage of any high-pressure portion of the boiler or interconnecting pipes, or, for that matter, any rupture. Leakage, for instance, will be indicated by an abnormal pressure or temperature distribution proportional to the seriousness of the fault.
• Excessive rate of change in drum, superheater, or reheater temperature.
• Excessively high or low water level in the drum.

The most crucial test for a digital control system in a power system will be "starting." Reference is made particularly to the condition of starting when the boiler is cold. When there exists a positive steam pressure in the boiler, the warm start is easier. Additional operations are required, in a cold start, to expel air and drain water from the physical ensemble. The computer has to monitor a piece of equipment in which many items are irrelevant at the start and are gradually introduced until normal load is reached, while others are relevant in the early stages and are discarded later. Hence, digital control at this point deals with a changing list of points and alarm limit values for each step of the sequence. This fact alone is enough to justify great caution on the part of the systems designer.

For automatic boiler operations, the starting sequence needs to be much more sophisticated than what a simple timetable for switching on various auxiliary items of equipment on the generating set could offer. Changes in production patterns, as a function of time, will be necessary to automatically guide and supply a given load demand. "Advancing the starting sequence" is the first step for bringing the boiler under full load. After taking this action, the computer must check that the preceding step has been carried out correctly. Each step leads to the issue of a computer output. Following the output, one or more input points will need to be checked to determine whether it has produced the correct result or not. Within the framework of this operation, the computer must be able to halt the "starting sequence" when a specified value of the critical measurements is attained. For control purposes it must also assure dependable operations before carrying out the next step. Ensuring that individual items remain healthy means that one of the following courses should be taken: halt the sequence until the fault has been corrected; if there is a stand-by device, switch and bring the stand-by in; if there is no stand-by, halt the sequence and begin an error routine until the point of difficulty has been successfully located and corrected.

This points to the magnitude and depth of the systems analysis work that needs to be done. The task is quite complex and the challenges are many. Not only do systems concepts not reflect, as yet, new frontiers in control and guidance knowledge, but also mechanical equipment and component manufacturers find it difficult to adjust themselves to the needs of digital automation. In this connection, Aswell* said that heavy equipment manufacturers seem
* Don Aswell, general superintendent of production, Louisiana Power and Light Company; from a lecture presented at the Eighth National Instrumentation Symposium, New York, May 1965.
to be ignoring the "facts" about systems control. He mentioned two manufacturers of boilers who now produce equipment with startup characteristics that require such fast response and accurate action by the controls that neither company, for safety reasons, will recommend that a man even be permitted to control these operations manually.

And what about our relative lack of experience in closed-loop performance? More than on-lineness, closed-loop performance requires a tremendous engineering job, since it is mandatory that every control action must have been defined in detail and converted into computer language. When all control actions are to be performed or even supervised on a real-time basis by a computer, the operating instructions placed in the computer must be very extensive and complete. In this, there exists no room for errors and omissions due to lack of experience, negligence, or any other type of excuse. As Aswell defines it, an outline of the computer instructions required in starting a pump supplying water to the boiler involves the following operations and checks:

• Sense need of water in the boiler.
• Signal pump to start after checking it to be sure water is available to the pump and that pump valves are in their proper position.
• Check to see that flow is established through the pump, that pump and bearings are not overheating, lubricating oil temperature is satisfactory, pump valves assume their proper position, pump vibration is satisfactory, and pressures are normal.
• If any of the transmitted signals indicates trouble, the computer must analyze the pump operation quickly: decide if the pump is in trouble, or if a transmitter is in error, then take corrective action or shut the pump down.

He further states that the actual testing of the program by simulation is necessary before the computer control can be applied to the plant operation:

Many man-years of engineering time have been expended so the utility engineer can become familiar with the computer and how it can be used in power station operation. . . . By analysis and program preparation, or flow charting, more has been learned about the plant equipment, its operation and preventive maintenance than would have been done for many years. If no other result had been obtained to date, this alone is worth much of the effort expended. Within our own company the present knowledge about exactly how a unit should be operated is available to all personnel long before the unit has been operated for many months. We know our equipment better than ever before because of this intimate study of the plant cycle.

The transmitted information to the computer requires reproducible and usable signals for proper action. Unfortunately, we have not always provided the operator with reliable information and perhaps have depended too much on his reasoning ability to determine
errors in input information. By being aware of this situation, we have taken steps to provide him with better and more reliable information.
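The kind of checklist Aswell outlines translates almost directly into program logic. The sketch below is illustrative only; the sensor and command names, and the rule for suspecting a transmitter, are invented:

    def start_feed_pump(sensors, commands):
        # Sense need of water in the boiler.
        if not sensors["boiler_needs_water"]:
            return "no action required"
        # Before starting, check that water is available to the pump and
        # that the pump valves are in their proper position.
        if not (sensors["suction_water_available"] and sensors["valves_in_start_position"]):
            return "hold: starting prerequisites not met"
        commands["start_pump"]()
        # Verify flow, bearing and lubricating-oil temperatures, valve travel,
        # vibration, and discharge pressures.
        checks = ["flow_established", "bearings_cool", "lube_oil_temperature_ok",
                  "valves_in_running_position", "vibration_ok", "pressures_normal"]
        failed = [name for name in checks if not sensors[name]]
        if failed:
            # Decide whether the pump or a transmitter is at fault, then
            # take corrective action or shut the pump down.
            if sensors.get("transmitter_suspect"):
                return "alarm: verify transmitters for " + ", ".join(failed)
            commands["stop_pump"]()
            return "pump shut down: " + ", ".join(failed)
        return "pump running normally"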
In the NEURON organization, the central computer, which carries on this guidance action, consists of the memory, the control, and the arithmetic unit. The memory stores the program, the program constants, and the data, and also provides working storage for intermediate calculations. The control section of the central computer interprets and executes the instructions automatically according to the sequence prescribed by the program in storage. Alternate sequences may be taken depending on certain conditions that exist at the time a particular instruction is executed.

Alarm scanning is carried out for points that normally lie within prescribed limits. The limit values are stored in the memory. The points are sampled sequentially and compared with the limits. Visual and audible warning is given when an abnormal value is detected. The value of alarm scanning and logging equipment lies in its timely monitoring of the plant. The operator is relieved of routine tasks, while the routine plant record is obtained on a single chart, which is easily interpreted and has a layout that can be expertly designed. This is, of course, the major advantage for similar installations all over the process industry.

With respect to systems, for alarm purposes, points are divided into homogeneous control groups. These groups are also used as the basic building blocks in producing a log sheet for the plant, both at definite time intervals and on demand by the operator. The values are read during a normal scanning cycle and stored in the memory ready for printing on a typewriter. Identifying information is preprinted on the chart. Daily totals are also printed for selected variables.

This is only a part of the machine output. The output section provides logs in a variety of forms to serve the purposes for which this information is used. For quick visual indication of the performance of some part of the system, the visual display method is utilized, just as the alarm printer serves as a record of the off-normal condition. The logging typewriter is valuable for the historic log on an hourly or on-demand basis. The paper tape punch may be used to provide common language input to another computer for compiling weekly or monthly summaries. Trend recorders are provided to record one or more quantities on a continuous graph form.

Scanning, alarming, and logging are interrelated functions. We have spoken of alarm scanning, but the boiler control system also scans each operational point on a time-shared basis, according to a prescribed sequence and frequency. After each point is scanned, the value is converted into measurement units and then compared against the pre-established limits for trend control purposes. These limits are stored in the memory of the computer and may be changed by the operator.
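A minimal rendering of this scanning cycle, with invented point records whose read and convert entries stand for the sampling hardware and the unit-conversion routine:

    def scan_cycle(points, warn):
        # Points are sampled sequentially; limits live in memory and may be
        # changed by the operator.
        for point in points:
            raw = point["read"]()             # sample the input signal
            value = point["convert"](raw)     # convert to measurement units
            point["last_value"] = value       # retained for logging and totals
            low, high = point["limits"]
            if not (low <= value <= high):
                # Visual and audible warning on an abnormal value.
                warn(point["ident"], value, low, high)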
The continuous scanning operation proceeds at all times and is not interrupted when an on-demand log is required or when it is time for the regular periodic log. When an on-demand log is requested, or when the time has arrived for a regular periodic log, the computer will perform the necessary calculations from the most recent set of scan quantities. Performance calculations include over-all heat rate, unit heat rate, boiler efficiency, and condenser efficiency. Whenever normal scanning and printing is carried out, an operation that occupies NEURON for a good part of its time, the program is unsophisticated and does not saturate the computer time available between successive throughput or output operations. In minimal machine configurations, NEURON's capacity is usually limited by the speed of the selector switch and printer rather than by the numerical computation.
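The performance quantities named here reduce to straightforward arithmetic once the scanned values are in hand. A sketch with invented variable names and customary units (heat rate in Btu per kWh, efficiencies as fractions):

    def unit_heat_rate(fuel_flow_lb_per_h, heating_value_btu_per_lb, net_generation_kw):
        # Heat supplied in fuel per kilowatt-hour generated.
        return fuel_flow_lb_per_h * heating_value_btu_per_lb / net_generation_kw

    def boiler_efficiency(heat_absorbed_btu_per_h, heat_input_btu_per_h):
        # Fraction of the fuel heat input absorbed by the working fluid.
        return heat_absorbed_btu_per_h / heat_input_btu_per_h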
OPERATIONAL CHARACTERISTICS AND LOGGING

Several types of output are available with the NEURON system. These include logs, alarms, trend recording, and a visual display. Logs are typed out under computer control, usually at intervals of an hour. A paper tape may also be provided to serve as common language input to off-line computers for weekly and monthly calculations and reports. All the important systems variables are logged. Many of the variables are averaged values for the past hour. In addition, the performance calculations are made and typed out on the hourly log. Since the computer is able to process the input data and log information, such as heat rate, much of the information normally logged hourly in a steam plant can be eliminated.

Aswell* masterfully treats this subject of communication between "the system" and its human component. In the Little Gypsy project, communication between the control board devices and the man was approached through the usage of "trend recorders, digital displays, mimic alarm panels, and high-speed printers which permit utilization of alphanumeric messages." With them, centralized controls have become more effective than ever. Quoting from a reference:

Computer-to-man communications . . . is a serious shortcoming of the present day computer system and is slowly being solved. As long as the philosophy that prevails today exists, that is, that the man must be the ultimate judge who determines when the computer is not doing an effective control job, then the man must be provided with rapid and easily understandable information. The foreseeable ultimate would appear to be the computer advising the man exactly where maintenance should be done if operating interference is
* See reference on page 72.
experienced; meanwhile, the computer would continue its other control functions. It is even considered possible now that this problem will be eliminated by omitting the manually operated portion of the control board; thus, if the automatic controls fail, the plant will be removed from service.
This is a design approach that has first to be tested and experimented upon in order to establish in unambiguous terms its technical feasibility and financial acceptability. In the model plan we are developing for discussion and demonstration, it will be appreciated that we have retained somewhat more conservative operational characteristics.

The NEURON system can also be used to provide pertinent historic and operating data, so that the operator is not inundated by a mass of typewriter paper. The log is usually typed on paper that has preprinted column headings. The operator has the option of calling for an on-demand log at any time during the hour. This on-demand log contains all of the information contained on the hourly log; however, average values will cover only the part of the hour since the last hourly log. Special logs for the end-of-shift or the end-of-day may also be provided.

When alarm limits are specified on the point summary sheet for a particular analog input variable, the variable is checked against the limits each time it is scanned. If the variable is outside of the normal limits, the computer notifies the operator by turning on an audible alarm and printing out in red, either on an alarm typewriter or on an alarm printer, the time, the point identification number, and the value of the point. The computer program has been prepared so that as soon as any point in a group goes into "alarm," the appropriate green light is extinguished and a red one flashes on, the common alarm bell rings, and a print is given on the alarm printer. The alarm is then acknowledged by the plant operator, the bell is silenced, and the red light becomes steady. If another point in that group goes into "alarm," the bell again rings, there is a print-out, and the red light starts to flash. Hence, each new alarm point has to be acknowledged, except for the case of a transient alarm of a point. This last one is not acknowledged by the operator until after it has returned to normal. Then, a complete printed record is made, but the flashing red light returns directly to a steady green when the "acknowledge" button is pressed. This procedure ensures that no transient alarm will escape attention.

One or more trend recorders can be connected to the NEURON system. A specified input can be called for by the operator and repeatedly recorded on a trend recorder. This is useful to the operator when he desires to record a point that has caused an off-normal alarm. A four-digit in-line indicator can be used to display the value of an input variable. If this is supplied, the operator can select an analog input point to be displayed repeatedly on the visual indicator.
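The acknowledge procedure just described behaves as a small state machine for each alarm group. The sketch below is illustrative; the panel, bell, and printer objects stand for the corresponding hardware interfaces:

    class AlarmGroup:
        def __init__(self, panel, bell, printer):
            self.panel, self.bell, self.printer = panel, bell, printer
            self.state = "normal"                 # steady green light

        def point_in_alarm(self, ident, value):
            self.printer.record(ident, value)     # print on the alarm printer
            self.bell.ring()                      # common alarm bell
            self.panel.flash_red()                # green out, red flashing
            self.state = "unacknowledged"

        def point_back_to_normal(self, ident, value):
            self.printer.record(ident, value)     # complete printed record
            if self.state == "unacknowledged":
                self.state = "transient"          # returned to normal before acknowledge
            else:
                self.panel.steady_green()
                self.state = "normal"

        def acknowledge(self):
            self.bell.silence()
            if self.state == "transient":
                self.panel.steady_green()         # straight back to steady green
                self.state = "normal"
            elif self.state == "unacknowledged":
                self.panel.steady_red()
                self.state = "acknowledged"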
As an alternative to the visual display, the selected point may be typed or printed.

The function of monitoring startup and shutdown involves checking the manual control operations performed by the plant operator during the various startup and shutdown procedures associated with operating the boiler-turbine-generator unit. The NEURON system can provide this function by checking the critical manual operations against the correct sequences and by going through a check list at each functional breakpoint for the entire startup procedure. The monitoring function can be divided into:

• Identifying for the operator the next step in the sequence.
• Identifying a portion of a previous sequence omitted.
• Checking the boiler warm-up rate and turbine acceleration rate.
• Guiding the operator in correcting excessive deviation from acceptable rates.
• Monitoring the manual control operations.
• Indicating by alarms any misoperation.

The monitoring function guides the actual sequence of operations by providing the human monitor with a visual indication of the next step to be performed in the control procedures. This indication is provided on an operator's sequence controller and operating panel, which would be located on the unit control board. The panel is an extension of the system's console. In addition to the push buttons, selector switches and indicating lights are included on the system console. The "sequence controller" and operating panel also contain rows of indicating lights. These lights are arranged to provide the operator with a visual indication of the sequence steps in progress, and the sequence steps that have been completed. Indication of a step completed would give the operator clearance to proceed to the next step in the control sequence.

The monitoring function also provides the operator with a visual indication of an excessive or low boiler warm-up rate or turbine generator acceleration rate. This is provided by indicating lights. Any deviation from the acceptable rates stored in the computer memory would initiate an alarm print-out, in addition. Furthermore, if any manual operation is performed prematurely, or out of sequence, and could result in a dangerous operating condition, an alarm is sounded, and the operation is identified.

Since all functions performed by the NEURON system are directly related to the operating variables being scanned, the most important part of the specifications which should result from a systems study relates to the description of all points to be scanned, the rate at which they are to be scanned, and the characteristics of the sensors to be employed. Additional functional information on each input point is also required. This includes such
information as whether or not the point is to be logged, averaged, integrated, checked against limits, alarmed, or trend recorded. The described information must be available before programming and the engineering of the input-output equipment can proceed. Careful thought should be given to this specification, since any changes or modifications made after programming and engineering have begun may inconvenience the user and the manufacturer. Spare points may be specified initially to ensure that the scanner will be of sufficient capacity.
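The point summary sheet amounts to one record per scanned point, carrying the attributes just listed. A sketch using invented field names:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class PointSpec:
        ident: str                                  # point identification number
        sensor: str                                 # e.g. thermocouple, pressure transducer
        scan_period_s: float                        # rate at which the point is scanned
        logged: bool = False
        averaged: bool = False
        integrated: bool = False
        alarm_limits: Optional[Tuple[float, float]] = None   # (low, high), if alarmed
        trend_recorded: bool = False

    # Illustrative entry; spare points can be carried the same way.
    superheater_outlet_temperature = PointSpec(
        ident="T-101", sensor="thermocouple", scan_period_s=5.0,
        logged=True, averaged=True, alarm_limits=(900.0, 1010.0))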
CASE STUDY ON DATA COLLECTION IN BOILER TESTING

This research project was conducted jointly by a major mechanical manufacturer and by a company producing specialized instruments. The object of the project was to reduce the manpower and time required for field-testing work on boilers. Equipment was visualized which could be installed in a power station and operated by a staff of two or three men. It was decided that the data output would be in a form ready for transmission to a remote computer by teletype, in order to reduce the time interval between testing and obtaining results. The study aimed at establishing the stability of the data-taking system and the ability to detect changes in boiler conditions.

An assemblage of automatic data-gathering equipment was installed on a boiler in Pennsylvania, and teletype communication was established with a computer in New York. The data taken in Pennsylvania were transmitted to New York, fed into a computer, and transmitted back to Pennsylvania in less than an hour after the test run was started. Rapid return of calculated results to the test site hinged on scheduling the computer time to correspond with the test schedule.

The first system built had a capacity of 149 data points: 120 for thermocouples, 20 for flow, pressure, and draft, and 9 for oxygen analysis. The numerical values were punched on teletype tape in about 100 seconds. The oxygen sampling took about 12 minutes to read; hence, a test run required about 13 minutes and 40 seconds. At the operating center, the output of the physical system was punched on paper tape in teletype code and used for the teletransmission of the data. This tape also contained certain computer commands, time, and test number.

With respect to systems, the basic building block was a device called an "analog scanner," consisting of a stepping switch, a standard self-balancing potentiometer, and a retransmitting slide-wire device. Its function is to convert the primary element or transducer signal to a standard analog signal representing the value of a measured variable. An analog scanner can be
connected to any one of 20 thermocouples or transducers through its stepping switch. Temperature, pressure, and flows may be intermixed on any given scanner. These analog scanners may be located anywhere throughout the boiler, provided such locations are convenient to the points to be measured. The signal from one transducer at a time flows to the analog unit, where it is converted to a slide-wire position and retransmitted as a percentage of the total resistance. Then, it is converted by a digitizer to a four-digit number, this number being the data value. The four digits of the data value are then loaded, simultaneously, into the four memory cells, releasing the digitizer for work on the next data value. The four digits are converted within the memory to teletype code and then read into the perforator, where they are punched, in code, on the tape. A "sequencer," synchronized by the perforator, performs the function of stopping the analog scanners, resetting the digitizer, loading and dumping the memory units, and sequentially reading out the loaded memories at the proper time and in proper sequence. The logical unit dictates the format of the tape words and permits nondata information to be entered on the tape.

The function of the analog scanner is to convert the thermocouple millivolts or pressure transducer output to a standard analog signal for conversion to a digital code. This unit consists of a stepping switch able to select 20 inputs, one at a time, and a servomechanism designed to convert the input to a shaft position. The analog output is the position of a slider of a potentiometer mounted on the shaft. Each analog scanner has six hand-adjusted resistors, which are used to insert into the data-gathering system items of data not readily measured automatically, such as heat content of the fuel, analysis of the fuel, and unburned combustible refuse. Temperatures are measured in units of millivolts, pressure in psi, draft in inches of water, and flows in thousands of pounds per hour.

Because the proper location of the points of measurement for boiler testing must always be carefully considered, the analyst installed an oxygen analyzer equipped with a step-switch and a series of electronically operated valves. It was used to analyze nine flue gas samples and two samples of bottled gas. The installation of thermocouples in the gas stream has two objectives: to determine the "local value" and the "average value" of the gas temperature. If the object of the measurement is to study contours and how they change with operating conditions, then a large matrix with a fine mesh is required.

In an application like this, there exist two sources of possible error, defined as static accuracy and random error. Random errors arise mainly from changes in the environment. The static accuracy can be found at three places in the system: the primary elements and transducers, the scanners, and the analog-to-digital conversion process. The changes in the temperature in the thermocouples were assumed to
be steady for the period of the test. The pressure, flow, draft, and differential transducers were calibrated prior to the test and at the end of the test, and they all remained constant. The scanner itself has three sources of errors:

• The value of the circuit resistors
• The linearity of the analog conversion
• The stability of the reference voltage

The circuit resistors and voltage were checked periodically, and no changes were found. Two computer output forms have been used: a printed result summary and a punched paper tape containing the major results. The punched tape is converted from computer code to teletype code for transmission. The major results calculated from the data obtained are:

• Boiler efficiency
• Heat absorption
• Gas temperature
• Tube bank conductances
Boiler efficiency is calculated using the heat-loss method. The losses due to dry gas, moisture in the fuel, and hydrogen in the fuel constitute some 80% of the total losses. The remaining 20% result from CO in the flue gas, combustibles in the refuse, radiation, sensible heat in the refuse, unburned hydrocarbons, and moisture in the air. Boiler efficiency has been calculated using three different fuels, assuming the same values for total air, exit-gas temperature, and inlet-air temperature. Little variation was found in this comparison.

The first model of the assemblage was in service on a central station boiler for six weeks. It met the specifications of portability, data point flexibility, simplicity of operation, and compatibility with an electronic data processing machine. It included about 200 relays, and no failure was encountered with any one of them. The boiler was instrumented to allow the calculation of its efficiency, the heat absorbed in each part of the convection pass, the gas temperature entering and leaving each section of the convection pass, and the over-all conductance of each major section of this pass.

During the subject tests, the two companies involved in the project split up their testing program. One company carried out all its tests at one time, whereas the other made the tests independently of each other. A comparison of the results for a period when data were taken together shows a maximum deviation between automatic and manual results of 0.8%. Of this deviation, 0.5% is accounted for by differences in the reported values of exit-gas temperature and oxygen analysis. The remaining 0.3% resulted from the use of preset fuel analyses which were different from the ones actually experienced.
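In present-day notation, the heat-loss method amounts to subtracting the individual losses, each expressed as a percentage of the heat input, from 100. The figures below are invented to match the rough proportions quoted above; in the actual test each loss is computed from the measured gas analysis, temperatures, and fuel analysis:

    # Individual losses as percentages of the heat input (illustrative values).
    losses = {
        "dry_gas": 5.0,
        "moisture_in_fuel": 3.5,
        "hydrogen_in_fuel": 3.5,        # these three make up about 80% of the losses
        "co_in_flue_gas": 0.6,
        "combustibles_in_refuse": 1.0,
        "radiation": 0.6,
        "sensible_heat_in_refuse": 0.3,
        "unburned_hydrocarbons": 0.3,
        "moisture_in_air": 0.2,
    }
    boiler_efficiency_percent = 100.0 - sum(losses.values())   # 85.0 in this example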
On the basis of the performed tests, management concluded that the results obtained by the automatic setup were at least equivalent to those obtained by manual means, but that the process in question is superior to a manual one in several other respects. For instance, an automatic setup such as the one in question can obtain all temperature, pressure, flow, and draft data on the boiler or turbine in about three minutes. Manual means require a longer time for collecting the same data. Furthermore, the computer process can effectively analyze nine flue-gas samples in nine minutes, three times an hour, over a long period. A man with an Orsat apparatus could analyze nine samples in one hour for the first few weeks; after a while he would need relief. The automatic process reads temperature and pressure at the rate of one per second. Man cannot keep up with this performance. Though the installation of a digital process control system involves a high initial cost, it has the competitive advantages of being faster, more consistent, and more accurate.
PART VII

APPLICATIONS IN THE METAL INDUSTRY
Chapter XXVI

COMPUTER USAGE IN THE STEEL INDUSTRY
Digital automation is being introduced in many steel industry operations, such as rolling mills, hot strip mills, continuous annealing lines, blast furnaces, and basic oxygen furnaces. The blast furnace is the biggest volume producer and appears to be an attractive area for application of automatic control systems. Benefits include improving the quality of the pig iron, increasing the throughput, and lowering the cost of the operation. The digital automation of a "typical" mill area would include most of the following functions:

• Order service activities and communications with the sales network of the company or associated dealerships.
• Sales analysis and comparative evaluations.
• Guidance of rolling and processing units, including setup adjustments, running, shutdowns, etc.
• The implementation of emergency corrective actions associated with the foregoing function.
• Production planning, including forecasting of labor requirements.
• Machine utilization problems and the planning of equipment replacement schedules.
• Systematic maintenance procedures for all equipment used in the factory, including diagnostic work.
• Automatic materials handling flow through the area.
• Ordering and expediting of products at all stages of processing, as well as raw materials, supplies, and spare parts.
• Maintenance of inventory levels of raw materials, steel in process, and the like.
• Packing, labeling, warehousing, and shipping of finished products.
• Accounts receivable and payable.
• Payroll and general accounting.
• Cost control and productivity evaluations.
• Automatic budgeting and budget updating procedures.

A number of outstanding applications of computing equipment have been achieved in the steel and related industries, within this broad area. For this reason, we will start the present chapter with a brief review of certain applications in the steel industries, to date. We will then consider case studies on the integration of discrete digital automation "islands" into an integrated network.

APPLICATIONS REVIEW IN STEEL WORKS

The first jobs programmed on computers in the steel industry were primarily billing, sales analysis, and payroll. These were followed by other accounting operations, such as the computation of salesmen's commissions. Production scheduling came next, and so did cost accounting, spare parts inventory, and technical computations. By means of its electronic digital computer, the engineering department of a submarine builder found able assistance in its work on atomic submarine design problems. These studies included vibration frequency analyses, expansion reaction and stresses in piping systems, the solution of problems in radiation technology and health physics, and heat transfer problems. But, both in business and in engineering, computer usage was made in an off-line manner. This remained the state of the art over a considerable period of time.

Among the developments that were the forerunners of digital automation were some applications concerning Monte Carlo simulation techniques. Experimentation along this line was made, for one, to study the movement of ingot trains. However, these first attempts were timid ones, for, in most cases, management wanted some assurance of success before committing a budget. Other studies focused on the rolling mill, where the problem existed of estimating the potential output of the hot strip mill. These were basically congestion problems, the presence of one slab at a certain point preventing work proceeding on another slab.

Though a number of research projects on the potential of process control were under consideration during the late fifties, few if any applications took place. Most of these projects were cosponsored between the computer manufacturer and the user and aimed at creating some "package deals." Instrumentation and mathematical simulation were two distinct but crucial problems which had to be resolved first.

Let us consider, for example, an experimental project carried out in relation to steel plating. The steel to be plated moved at a rate of up to
3000 feet per minute. It was sold on a per-foot basis and all controls had to be applied on this basis. Every foot of finished material had to be tested for conformity to customer specifications. Moreover, from instrument readings, the computer was required to determine what corrections, if any, were necessary. This amounted to keeping track of several dozen rapidly changing variables and of making a complete set of logical decisions and calculations on a millisecond basis. Typical choices to be made by the computer involved switching of plating current generators, calculation of line-speed assignments, and a possible decision to shut down the line when attempted corrections did not eliminate occurrence of quality defects. Also, the computer had to optimize economic usage of the line. For each customer's order, it was necessary to print out a delivery ticket, and to handle such tasks as recording incentive-pay data for the operating crew and printing of statistical reports to be issued over predetermined time periods.

Mathematical simulation was tackled in this connection, but here again experience was thin. The nature of the process to be controlled did not simplify matters either. Given that the operations under consideration interact with each other, it was not possible to work through as much as a whole operational cycle; the cycle had in turn to be resolved into components, and each component considered under the perspective of its own operational characteristics. A steel company, facing the rationalization of its furnaces, approached this problem by dividing the complete cycle of a furnace into:

• Tapping
• Fettling
• Solids charging
• Fixed-time melting
• Refining
In the course of fettling, for example, the furnaces were not interactive and had no associated activity in the service bays. The value of time for these components could, hence, be sampled and added to furnace time, since this time was not going to change, no matter what might happen elsewhere during the period. In fact, many more refinements in sequential process analysis will be necessary to give to a digital automation study its proper value.*

We have moved a long way since the time of a complete trial-and-error approach to steel works problems. By now, all but the smallest industrial plants have shown considerable evolution in this respect. Yet, much of this work has been performed in a nonhomogeneous manner. Some steel firms

* See also the section on "Mathematical Simulation in Steel Works," in "Systems and Simulation."
developed accounting and inventory control systems using digital computers, while others were satisfied with simply batching certain operations. Other steel mills have established separate systems and procedure groups to devise methods to improve the handling of data, on the correct assumption that the orderly flow of material from the open hearth to the shipping platform is an operation that can be efficiently planned and scheduled.

This has been the case with business related to steel. In these industries, the material in process appears uniform if observed at the proper distance, although in various lots or orders, the dimensions, analysis, and properties vary considerably. Consider a company serving rolling mills, crane builders, and the mining industry, where a need exists for high wear-resistant qualities in moving parts. This company manufactures wheels, gears, rolls, valves, fittings, piping and accessories, and steel fixtures for a wide variety of customers. The information necessary for handling customer orders, which are basically of a nonrepeating nature, must be distributed to engineering, metallurgy, purchasing, production, and shipping. Internal data handling became cumbersome and serious problems arose as the volume and complexity increased. Several and frequent adjustments were made in the order system, but the basic procedural methods in batch processing on a digital computer remained.

With this approach to data applications, it took a long time to move an order through the various departments for production. Too often, poor communications with the field sales staff resulted in customer irritation. Procedures incorporating a manual and punched card system, which has long since been abandoned, did not improve the situation. At one time the company had five different setups for handling the same type of job in its warehouses. Then management decided to make the following changes:

• A messenger service was replaced by wire communications between the general office and the warehouses.
• Repetitive manual typings gave way to punched tape.
• Punched tape gave way to teletransmission.

Some forty-six different documents were replaced by four basic forms, closely related to each other. With the aid of teletransmission media, warehouses were put under the control of the central office. This offered to company management:

• Immediate transmission of orders to production with complete data, including shipping date, bill of materials, and operation schedules.
• Centralized scheduling, materials, and production planning.
• Immediate transmission of production, cost, shipping, and consumption data.
• Immediate billing on merchandise shipped.
• Accurate perpetual inventory of in-process and of finished goods.
• A complete check on all data handled for financial control purposes.
• Reduced physical effort in preparing orders and reports.
Following the data reorganization, company warehouses receive a steady stream of shipping orders within minutes after credit approvals. Data needed for sales and production planning is captured on the original input document. This same document is used for processing sales analyses by item, territory, and salesman, along with a history of selling performance. The information is dispatched to the regional offices on a timely basis so that a salesman can review the buying history of a customer before he makes his sales call. He can see which items are probably in need of replenishment, thus increasing his sales efficiency.

For corporate use, a materials breakdown of the parts applied in the manufacturing cycle over set periods of time was plotted and the purchase of components forecasted. Careful planning from this data helped increase sales by insuring against inventory shortages. Costs were reduced by eliminating the necessity of small-lot manufacturing of parts for production. Since the weights of the entire order and its components were precalculated, labor savings were effected by discontinuing manual weighing of each shipment.

This example helps bring into perspective the fact that it does not suffice to think and talk about simulation and process control at the plant level, while forgetting the integration of managerial information. The integration of data handling procedures in a steel plant should start at the "order analysis" level. At present, the mechanized operation of order tracking and accounting is based on data manually collected at each process. This is the weakest link in the chain, and no system is better than its weakest link. Each customer's order must be analyzed, evaluated, screened, scheduled, followed, and given the proper prescribed treatment at each step in its path to the shipping platform. The steel in each order increases in value at every step in the process. It also suffers losses due to cropping, shearing, trimming, and accidents. No efficient data control could be established without taking these facts into account. The benefits to be derived are challenging enough to induce precalculating the financial functions and integrating the production data with those concerning optimization and efficiency within the total framework of company operations. This will necessarily include everything from sales to accounts receivable.

Steel companies are in fact becoming conscious of the total approach to the information problem in the steel industry, which we are hereby contemplating. The idea of a continuous stream of data and its interrelation to
FIGURE 1. (Diagram: the management level and the factory level linked through a central computer, with the product flow from ingots and slabs to the end product.)
the flow of the product through the factory is, indeed, the basis of this approach (Fig. 1). While in the "early days" of computer usage a carryover of procedures from mechanized data systems was acceptable, new developments require a high degree of sophistication if mathematical analysis is to find its proper place. At the factory end, we imply a high-speed data acquisition system and associated computer programming for data reduction. We want on-lineness, connecting the computer directly to the data acquisition scheme to function as system controller and to provide feedforward analysis of test data. Measurements of test unit variables, such as pressure, strain gauge readings, position, and temperature, have to be digested and recorded on magnetic tape. Such requirements bring forward specifications that completely change our past perspective of the subject. The following are some applications to which reference is being made when talking about digital control:
• Monitoring analog signals.
• Logging operation data.
• Controlling the annealing furnace.
• Classifying the finished product and directly preparing accounting and inventory records.
• Planning the finishing train at the hot strip mill.
• Calculating and setting mill stand screws, stand speeds, and automatic gauge control reference values.
• Monitoring and logging hot strip mill output.
• Controlling the operation of both main drive and edger in a reversing mill.
• Calculating screw settings and speeds for optimum production rates within the ratings of the drive motors.
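As an illustration of the data-reduction step implied above, the sketch below (Python is used purely for illustration; the variable names, sample values, and limits are hypothetical) averages a burst of raw sensor samples, checks the result against preset limits, and produces one reduced record of the kind that would be logged to magnetic tape.

# Minimal data-reduction sketch: raw samples -> one validated log record.
# All tag names and limit values are hypothetical.

from statistics import mean

LIMITS = {                      # engineering limits per measured variable
    "pressure_psi":   (0.0, 350.0),
    "strain_microin": (0.0, 1800.0),
    "position_in":    (0.0, 120.0),
    "temperature_F":  (60.0, 2450.0),
}

def reduce_samples(tag, samples):
    """Average a burst of raw samples and flag out-of-limit values."""
    value = mean(samples)
    lo, hi = LIMITS[tag]
    return {
        "tag": tag,
        "value": round(value, 2),
        "n_samples": len(samples),
        "in_limits": lo <= value <= hi,
    }

# Example: one scan burst per variable, as it might arrive from the
# data acquisition equipment.
scan = {
    "pressure_psi":  [212.4, 213.0, 212.7],
    "temperature_F": [2310.0, 2312.5, 2309.8],
}
for tag, burst in scan.items():
    print(reduce_samples(tag, burst))   # in practice, written to tape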
SYSTEMS STUDY IN A STEEL PROCESS
In the preceding section we made reference to certain "islands" of automation in a steel factory and expressed the opinion that this was the initial and, for that matter, easiest approach. In fact, it is relatively simple to identify and automate operations such as the soaking pit, slabbing mill, hot strip mill (with the reheat furnace), roughing mill, and finishing trains. More complex is the linking of these "islands" into a centralized digital automation scheme (Fig. 2).
FIGURE 2. (Diagram: individually automated areas of the mill, including the shear, linked into a centralized control scheme.)
We will consider as an example the blast furnace, a structure for reducing iron oxide to molten metal. The large amount of heat required for the process
is generated by the combustion of coke. This combustion also provides the carbon monoxide necessary for the reduction of the iron oxide. There are a number of independent control variables which may be manipulated by the operator, to a greater or lesser degree, in order to influence the furnace operation. These variables can be divided into two principal categories: charge variables and operating variables. The charge variables are:
• Coke-to-ore ratio
• Stone-to-ore ratio
• Proportion of each of the many iron-bearing materials to the total ore input
For example, one or more grades of sized ore or of sinter may be charged into the furnace, as well as a variety of different unprocessed ores which, in general, come from different mines and have, therefore, different analyses and structural properties. The charge variables are all expressed as ratios, rather than absolute flow rates. This follows from the assumption that the furnace level is kept at a given fill mark; charge materials are added in chosen proportions at the rate necessary to maintain the desired level. The operating variables are:
• Blast temperature
• Blast moisture
• Wind rate
• Top pressure
If the furnace is modified to permit oxygen enrichment and natural gas injection, the two variables O2 and CH4 are added to the list of independent, controllable operating variables. These quantities can be changed by the operator as desired; their influence is rapid compared to the long delay before adjustments in charge variables take effect. Smooth operation of the blast furnace is disrupted by several major disturbing factors. These may be viewed as independent variables which affect furnace behavior but are beyond the direct control of the operator, who may institute control action only in response to, or as a reaction to, the disturbance. Among the significant disturbances are:
• Variations in the analyses of the dry charge materials
• Moisture content of charge materials
• Shaft efficiency
• Heat losses
• Flue dust losses
• Porosity of the stock column in the furnace
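Because the charge variables are expressed as ratios against a burden kept at a constant stock line, a control program must at some point convert chosen ratios into absolute charging rates. The short sketch below (Python, with entirely hypothetical figures) illustrates that conversion; it is not drawn from any particular installation.

# Convert chosen charge ratios into absolute charging rates, given the
# total ore rate needed to hold the stock line. All numbers are hypothetical.

def charging_rates(ore_rate_tph, coke_to_ore, stone_to_ore, ore_mix):
    """ore_mix maps each iron-bearing material to its fraction of total ore."""
    assert abs(sum(ore_mix.values()) - 1.0) < 1e-6, "ore fractions must sum to 1"
    rates = {name: frac * ore_rate_tph for name, frac in ore_mix.items()}
    rates["coke"] = coke_to_ore * ore_rate_tph
    rates["limestone"] = stone_to_ore * ore_rate_tph
    return rates            # tons per hour of each charged material

# Example: 100 tons of ore per hour keeps the furnace at its fill mark.
print(charging_rates(
    ore_rate_tph=100.0,
    coke_to_ore=0.55,
    stone_to_ore=0.20,
    ore_mix={"sized ore": 0.40, "sinter": 0.45, "unprocessed ore": 0.15},
))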
The blast furnace may be operating smoothly at optimal conditions, but
each time a disturbance occurs, some changes must be made to the process in order to counteract the disturbance, or to take advantage of it if the disturbance is a desirable one. The problem is to detect each disturbance and find the new optimum set of conditions under which the process should be operated. The controllable input variables will then be changed accordingly in order to bring the process to the new optimum.
Variations in the shaft efficiency, heat losses, and moisture content of charge materials, for instance, drastically affect the heat requirements of the process. When such disturbances occur, the heat input to the system must be changed in such a way as to maintain satisfactory operation. Under current practices, the possibility of disturbance is taken into consideration by supplying somewhat more coke than necessary, thereby assuring an adequate heat supply for adverse circumstances. This represents a waste of coke, for the excess energy is removed from the furnace as sensible heat of the top gas. Substantial savings may be achieved by reducing the coke consumption. This measure would further result in increased production from a given furnace because a greater working volume would be available for processing the ore instead of the coke. Similarly, variations in the analyses of the charge materials and different dust losses alter the material balances of the process. A significant optimization problem also exists with respect to selecting the most economic combination of burden materials which satisfies the desired material relationships. In this area, the importance of a mathematical simulator is self-evident.
Consider, as a second example, a hot strip mill. As an operation in a steel factory it is preceded by the open hearth and followed by the cold mill. Digital control at this point would have two basic functions:
• Mill operating
• Mill scheduling
The "mill operating" digital control function will need to perform many routine decisions, and will put forward considerable storage requirements.* A prerequisite for the scheduling function is that the computer collect process information such as temperature, rolling times, gauge, and width, and correlate this information on a per-slab, per-coil, per-order, and per-turn basis. This is essentially a tracking operation. Slabs are tracked as they are pushed through the reheat furnaces. The identifying code for each slab is stored in the machine as the slab is charged into the furnace. Further, charging and discharging operations are confirmed to the computer by means of hot metal
* Due to current lack of experience and skill along this line, most installations still have the operator share functions with the computer. While keeping human interference to a minimum, they believe that this arrangement is flexible and advisable.
detectors, relay and limit switch closures, or by the furnace operators. The computer tracks the slabs as they come out of the furnace and progress through the mills, each data input being assigned to the proper slab or coil in machine memory, in order to serve for further evaluation purposes. Because of this continuous and complete data tracking operation, the computer knows what slab is in any specific location at a specified time. It can, hence, display the information required to set up the mill for the first slab of the next order. Depending on the degree of digital automation, it can initiate the operations necessary to set up the mills for the next order.
This same application can be described as an optimizing system. The inputs consist of the incoming slab thickness, width, and temperature received from sensors. The grade of steel and desired delivery gauge and speed must be input manually in case they vary from lot to lot; no program provisions have been made for such variation. The computer, then, will calculate the required rolling instructions in accordance with the optimization program. The operating tolerances set by the optimization program must include speeds, screws, guides, and gauges for the roughing mills and for the finishing mill, which will be fed automatically into the appropriate regulating controls. The digital control computer should observe through sensors the operating results, and compare these results with the specifications. Depending on the mathematical laws stored in its memory, the machine can update and revise rolling instructions for the next slab. Digital control should also consider the limits of operation, for instance, electrical drive rated limit, maximum roll separating force, minimum and maximum delivery speeds, as well as maximum loading and unloading specifications.
To achieve optimal design criteria, the preparation of operating tolerances for a digital control system should be realized jointly by the user and the computer manufacturer. These constitute the basic instructions to the programmers and also form the body of the contract between the user and the supplier. As such, they must contain the process operational description. While preparing the process control program, the analyst cannot be expected to develop, on his own, the description of the operation of the mill, which means a substantial list of articles. Among them: system objective and general operations, mill description and data, slab and order tracking, roughing train, finishing mill and other control functions, production logging, operating principles, system protection requirements, and the nature and sophistication of the necessary diagnostic work. The proper establishment of these data needs, basically, teamwork.
The description of the mill or process should be reasonably detailed, including a sketch showing the location and physical relationship of one part to another, and a flow diagram of the process itself. Consider, as an example, the process for producing strip steel: it starts from the point at which the
molten steel is available from the converters. From the customers' orders for steel strip the quality of the steel to be produced is known. The converters are, therefore, charged in such a manner as to produce this grade. The molten steel is poured into molds, the type of mold depending on the grade of steel. A diagram of the function, showing all related hardware and its interrelationships, can easily be obtained thus far. This is equally true of the following operations. Molds are assembled on a train at least two hours before teeming is to take place. A short period before this occurs, the steel is tested to see if it is of the required grade. Should it not be to specification, a different set of molds may be required, and, for this purpose, a spare train is held in readiness. After the steel has cooled sufficiently, the molds are removed, and ingots are obtained. Stripping is the following operation in the line. Since this does not always progress according to deterministic specifications, exception routines have to be developed to handle a variety of exceptional cases.
At this point the programmer should be given a complete description of what the function of the computer should be throughout this cycle. This must necessarily include the tables and values utilized to perform these operations under current manned or semiautomatic media. The conversion to digital automation at this stage poses substantial problems. If, for instance, the grade of steel produced is incorrect and the spare train used, the ingots may be kept at the ingot stockyard. The computer, then, must specify where the ingots are to be placed in the stockyard, and the schedules for the succeeding stages must be modified to take into account the resulting gap. Also, directions for dealing with the unused train of molds must be produced, and instructions given to assemble a further spare train. Repetitive as these exceptions may be within the steel industry, the organization and arrangement of the specification are up to the ingenuity of the individual, and one man's system will seldom be what the other man would choose.
Finally, the analyst should carefully consider the treatment of general management data, as discussed in the preceding section of this chapter. Digital control will not be complete until the necessary data analysis for management reports has been considered. Steel management, at all levels, must be supplied with essential up-to-date information. Data reduction will help management make more rapid decisions, because it eliminates the mass of mainly irrelevant documents and obsolete facts. Familiar, and generally applicable, as this may sound, it is nevertheless a point important enough for reiteration. Digital control computers operating on-line are capable of collecting all inventory and accounting data pertinent to each process. The real-time machine can reduce this data, convert it to the desired form, and feed it into the plant central accounting system quickly and
accurately. That is why an on-line system should be carefully designed to fit into the plant accounting system, and also why the plant accounting system should be redesigned so as to accept data from the digital equipment and make use of it in the most efficient manner.
With a little imagination, the list of "new specifications" can be extended significantly. And we need not limit the "imagineering" process to central digital applications alone. Consider, for instance, adaptive control systems in which the control device "learns" from experience, improving its performance on the basis of a combination of observation and analysis. In an integrated and complex operation, some major units can be controlled by satellite computers, receiving direction from a centralized digital control system able to coordinate many products and functions. This can be extended to include the use of digital computer systems for remote control of geographically dispersed operations. The resulting ensemble will then pose some most intricate problems with respect to design, programming, and reliability.
MATHEMATICAL SIMULATION IN STEELWORKS
Much of the variation in the refining time of an open hearth furnace cast can be considered as independent, random events. In turn, this is the cause of considerable difficulty when carrying out operational studies to estimate what will happen under conditions that cannot be studied directly. An example is the estimation of the number of soaking pits required to handle ingots from several steel-making shops and to ensure the supply of hot steel to a primary mill. The critical item here is that the arrival of steel at the soaking pits follows an irregular pattern. "Queuing theory" can be used to advantage in the foregoing sense, with the prerequisite that the variations of the times involved can be expressed in mathematical form. The hypothesis is made that ingot trains arrive at a stripper bay and a queue forms whenever there are more ingot trains requiring service than there are stripping cranes available. The delay is a measure of production time lost when plant equipment is not able to meet demands.
By using a mathematical model, the performance of a system can be studied over a suitable period of time. Certain characteristics of the system, such as the "laws" or rules governing its operations and data relating to its operational times, can be used in the form of frequency distributions derived from routine records or from special observations of the plant concerned. Some of the data may themselves be the product of computer simulation. Figure 3 presents the flow chart of a program prepared to that end.*
* See also Chapter XII in "Systems and Simulation."
FIGURE 3. (Flow chart of the simulation program described in the text.)
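By way of illustration only, the following sketch (Python; the arrival and service figures are invented) follows the queuing argument made above: ingot trains arrive at the stripper bay, wait whenever all stripping cranes are busy, and the accumulated waiting time is taken as a measure of lost production time. An actual study would, of course, sample from frequency distributions taken from plant records rather than from the simple random draws used here.

# A minimal Monte Carlo queuing sketch: ingot trains vs. stripping cranes.
# All timing figures are hypothetical.

import random

def simulate(n_trains=500, n_cranes=2, mean_interarrival=30.0,
             mean_service=50.0, seed=1):
    random.seed(seed)
    crane_free_at = [0.0] * n_cranes      # time each crane next becomes free
    clock = 0.0
    total_delay = 0.0
    for _ in range(n_trains):
        clock += random.expovariate(1.0 / mean_interarrival)  # next arrival
        crane = min(range(n_cranes), key=lambda i: crane_free_at[i])
        start = max(clock, crane_free_at[crane])               # wait if busy
        total_delay += start - clock
        crane_free_at[crane] = start + random.expovariate(1.0 / mean_service)
    return total_delay / n_trains          # average delay per train (minutes)

print("average delay per ingot train: %.1f min" % simulate())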
The frequency distribution of operational times shows the pattern in which the subject times occurred within certain time ranges which, cumulatively, constitute the total range of variation. Curves A, B, and C of Fig. 4 give three different distributions of open hearth furnace refining times. As such,
FIGURE 4. (Curves A, B, and C: three distributions of open hearth furnace refining times, plotted against time.)
they can be used in Monte Carlo simulation for varying effects on the queuing line mentioned. This places the experimenter in a position to study the total number of operations in a certain period of time, or the proportion of time when a slabbing mill is idle because no hot steel is available from the soaking pits. The accuracy of the estimate is itself a function of the length of time that is simulated and of the assumption made as to the exact nature of the original distribution.
Consider as an example the process illustrated in Fig. 5. The traditional Gantt chart is no longer valid for planning purposes due to the increasing complexity of the process. The application of PERT, through electronic computing media, will be valuable, but before this is done we would need to examine the time estimates between events and to experiment as to whether or not they reasonably reflect the actual situation.
Say, then, that the problem is to determine how the operating time of a
FIGURE 5. (Process flow diagram: open hearth, reversing mill, hot mill, annealing line, pickle line, temper mill, warehousing.)
120-ton open hearth furnace would be affected when the preheated combustion air is enriched with oxygen during the charging period. An analysis carried out on the routine furnace data for the trial period shows that the use of oxygen enrichment causes an increase of about 17% in the production rate. The possibility of increasing production by using oxygen enrichment prompted the company to ask for an estimate of the production capabilities under this approach, given that limitations imposed by ancillary equipment and by congestion delays had also to be taken into account.
Reasonably, an estimate of future furnace production must be based on assumptions regarding factors which, themselves, are dependent upon the operating policy of the department or of the factory as a whole. They include the availability of furnaces, the materials constituting the charge, the fuels (including oxygen), the available services for supplying materials, and the services for removing products from the work place. The subject factors also include the composition of the charge and the grades of steel that the company makes. Oxygen, for one, was a newcomer, and its effects upon production times were what the experimenters wanted to assess.
Given the foregoing disposition of the problem, the research workers felt it necessary to develop a method flexible enough to permit the study of all the possible sets of conditions that could be forecasted. The mathematical model for tonnage output included as parameters the most critical factors liable to change. This was a formidable job, since a melting shop is an operational unit where a substantial number of interactions occur between furnaces, which must share the same equipment. By using the services of a charging crane, one furnace may be denying such service to another furnace; or a delay to a furnace caused by an inadequacy of cranes in the casting bay may have effects in the scrap loading bay, since the time at which the furnace next requires scrap will also be delayed, and so on. The mathematical model to which reference is made covered eight-hour periods and included:
• Casting bay operations of furnace tapping
• Teeming
• Hot metal preparation
• Hot metal addition (scheduled on the basis of the movements of the two casting bay cranes)
A starting condition was set up on the chart and the computer acted as a production controller, simulating operations for a period of time long enough to give the required accuracy to the result. This result was calculated as tonnage output per week, summing the number of casts made in each weekly period and establishing the resulting frequency distribution throughout the experimentation.
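As a rough sketch of that weekly bookkeeping (Python; the cast weight, the refining-time distribution, and the run length are invented), operation times can be sampled from an empirical frequency distribution of the kind shown in Fig. 4, casts accumulated week by week, and the weekly tonnages collected into a frequency distribution.

# Sample refining times from an empirical frequency distribution and
# tally casts into weekly tonnage. All figures are hypothetical.

import random
from collections import Counter

REFINING = [(8.0, 0.2), (9.0, 0.5), (10.0, 0.3)]   # (hours, relative frequency)
CAST_TONS = 120.0
WEEK_HOURS = 168.0

def sample_time(dist):
    r, cum = random.random(), 0.0
    for value, freq in dist:
        cum += freq
        if r <= cum:
            return value
    return dist[-1][0]

def weekly_tonnages(n_weeks=52, seed=2):
    random.seed(seed)
    tons = []
    for _ in range(n_weeks):
        clock, casts = 0.0, 0
        while True:
            clock += sample_time(REFINING)
            if clock > WEEK_HOURS:
                break
            casts += 1
        tons.append(casts * CAST_TONS)
    return Counter(tons)          # frequency distribution of weekly output

print(weekly_tonnages())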
Certain built-in rules for decision were necessary during the subject simulation. When, for instance, a furnace is ready to tap, the program must decide which crane would teem the cast. If no crane is available it must record a furnace delay until a crane is free and the tapping can proceed. The same is true for a variety of other operations. The furnaces, which can be fired with fuel oil or coke oven gas, are serviced by slewing-type chargers which collect their scrap boxes from stands on the edge of the stage remote from these furnaces. Adjacent to this edge is the scrap loading bay, which has tracks for scrap wagons. Scrap is loaded into the boxes by cranes, while another crane in the scrap bay handles sets of three limestone boxes, and so on. To handle the variety of the resulting servicing alternatives, the computer must use many decision rules, which had to be established and programmed. This is exactly what the research workers did in preparing the simulator. It was, in fact, a web of decision rules.
Say that one of these rules concerns the selection of a casting bay crane. Then, if a selection is possible, the second event, "furnace taps," can occur, and is simultaneous with the first event. If no selection is possible, the furnace cannot tap but must join a queue for casting bay cranes, and receive service after any other furnace that may be in the queue at that time. Following the occurrence of the second event, a value of fettling time needs to be sampled through the use of random normal numbers and added to the cumulative furnace time. With the completion of this operation, the furnace becomes ready for charging; this constitutes the following event.
A good deal of the foregoing procedure can be described by straightforward mathematical equations. Melting time, for one, is related to charging time so that one increases as the other decreases. The value of melting time sampled should, hence, be modified to account for the relationship. After reaching the melt-point, the refining time must be sampled and added to furnace time to complete this particular section of the process. Similarly, following the activity "molds arrive in melting shop" there is a very simple procedure for choosing a teeming platform according to a preference list. Here again, the decision should be built into the simulator in a straightforward manner.
Most particular attention had to be given to the interrelationships between different activities. Charging times, for example, are not random but depend to some extent upon the number of furnaces that are charging at the time. Any such relationship might be accounted for, but due consideration should be given to the fact that the number of other furnaces that are charging may change. The sampled value is required at the beginning of charging, and the times when any other furnaces will be charging are not then known. Also, it may be desirable to consider some change of conditions in the charging and scrap bays, such as would permit a comparison of shop output for the
case of no crane breakdowns and for breakdown delays as they are at present. The simulator should also give the proper weight to the different delays that "might" occur. For instance, while the furnace is charging solids, three types of delay are possible. This furnace might require a charger, but all chargers happen to be occupied; or the charger might be available but there are no full scrap boxes on hand to be charged. Then, an assimilation delay might occur during which the furnace digests its scrap. Hence, when a furnace is ready to charge, the computer must decide whether or not an immediate action is possible, and so on, every time, recording the outcome and its respective frequency.
A number of assumptions are also necessary. Some of them concern the flow of materials into and out of the plant. For instance, the experimenters might assume that scrap will always be available in the scrap bay, and that mold trains will be removed from and returned to the casting bay as required by the melting shop. It is equally necessary that a deliberate attempt be made to achieve a certain flexibility in the simulator, to facilitate modifications when different conditions are examined. Such is the case with a varying number of furnaces and cranes in use, the economic number of storage vessels (to use with a tonnage oxygen plant), a comparison of several proposed layouts for a mold preparation bay, matters concerning the limitations imposed by port and wharf facilities upon the unloading of iron-ore ships, different iron/scrap ratios, or varying oxygen supply, as in the aforementioned example.
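The delay decision described above for a furnace that is ready to charge can be pictured as a small rule evaluation, as in the sketch below (Python; the state fields and the rule ordering are a guess at how such a rule might be coded, not a transcription of the original simulator).

# Decision rule for a furnace that is ready to charge solids.
# The three possible delays follow the text: no charger free, no full
# scrap boxes, and an assimilation delay while the scrap is digested.

from collections import Counter

outcome_counts = Counter()          # frequency of each outcome, as in the text

def charging_decision(chargers_free, full_scrap_boxes, assimilating):
    if chargers_free == 0:
        outcome = "delay: waiting for charger"
    elif full_scrap_boxes == 0:
        outcome = "delay: no full scrap boxes"
    elif assimilating:
        outcome = "delay: assimilation in progress"
    else:
        outcome = "charge immediately"
    outcome_counts[outcome] += 1
    return outcome

# Example evaluations as the simulated shift progresses.
print(charging_decision(chargers_free=0, full_scrap_boxes=3, assimilating=False))
print(charging_decision(chargers_free=1, full_scrap_boxes=0, assimilating=False))
print(charging_decision(chargers_free=1, full_scrap_boxes=2, assimilating=False))
print(outcome_counts)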
Chapter XXVII
EVALUATING THE DATA LOAD
With the decision that some or all of the functions of a data processing system are to be done on an on-line basis, a number of problems immediately become apparent. One of these problems is control of load fluctuation. By this is meant the efficient handling of the variations in the demand for service from the system by the environment that it serves. While most of the day-to-day functions of a data processing system may be conceived as an integral part of the documentation circuit, there exist some functions, such as the handling of exception reports, which, at least in the beginning, will constitute a subset of operations that:
• Do not have the immediate relevance to the operations in the environment that on-line functions do.
• Can be initially excluded from integrated processing so as to speed handling of the balance, or cut down the amount of required nonproductive work.
This approach may be acceptable as long as the necessary provisions have been made for eventual integration of the manned functions into the automatic total, through the proper system analysis and design. With this approach, it is essential that in establishing a data collection and processing system one should be able to:
• Easily adjust to operational differences between the headquarters and the plant.
• Install the system in an efficient manner and with minimum change in normal operations.
• Provide for easy linkage to the different departments, and for a "comprehensive cooperation" between the humans and the computer.
• Establish the framework for extension into the quality assurance stage.
• Provide for the total data integration of such functions as accounting, engineering, purchasing, and warehousing.
From a total system standpoint, it must be borne in mind that the preparatory stages of data automation are primarily concerned with the definition of existing decision-making processes and with the provision of data for model construction and test. This means that a sizable understructure is required to provide facilities for control and implementation of the solution arrived at.
ADVANTAGES TO BE GAINED FROM DIGITAL AUTOMATION
In certain process industries, and the steel industry is no exception, the classic functions of sales, engineering, and manufacturing do not present a problem as far as digital control and their integration into a network are concerned. Management is usually aware of their existence and more often than not realizes that their proper, efficient handling is necessary for corporate health. Nevertheless, this awareness is not an immediately apparent fact. Disciplines that originated in the fifties are still being partially ignored by some company managements, because of either a lack of interest in the function or a lack of understanding of the proper role of that function within the whole corporate structure. To stimulate the proper interest, it is necessary to establish the profitability of the faculties in question in facts and figures.
As a matter of principle, it is impossible to estimate, in a generalized manner, the full value of the return upon the investment in digital control equipment. The return will consist not only of tangible monetary savings in terms of reduced cost, reduced inventory levels, improved product quality, and decreased maintenance, but also of such intangibles as increased management control, improved market position, improved accounting and order system, accurate and complete records, and better customer service. Expressing these points in terms of money value requires knowledge of the specifics in plant observations and costing systems, and a thorough understanding of the proposed integrated control scheme.
The development of a fairly sound estimate of digital automation economics implies that specific proposals must be developed in terms of step-by-step application of control systems. The effects on the plant's operating economy can then be evaluated as a function of monetary return on the investment. Intangible returns, which may be as important as monetary returns on investment, can also be documented. This ground-floor approach provides quantitative data, to be used along with the guidelines of a certain process. We will now consider such a framework, applied to the steel industry.
Increased Mill Utilization
In the hot strip mill, for instance, the computer replaces the operator's "functions" of roll, screw, and speed settings for the various rolling schedules. Operators vary in their methods of operating the mill, thus, understandably, varying quality and quantity. Inversely, digital control would tend to exceed production at the highest manual operator level. Savings can then be estimated in terms of a reduction in operating time. Furthermore, the rolling rate of a mill is not dependent solely on the power rating but also upon the capacities of furnaces, coilers, and other auxiliaries, and upon operator efficiency. A digital control system, using the order tracking function tied into the input of the computer, would reduce mill delays due to setup time, would practically eliminate mistakes in setup, and would reduce delays due to check sheet errors, check board malfunctions, cold steel, and so on. Similarly, a digital control scheme will make possible more effective bookkeeping while facilitating a comprehensive study of mill operation.
Although the major benefits of the system are to be realized over a period of several years as a result of the subject conversion, considerable savings could be achieved in the initial stages simply by providing management with more effective control information and some fundamental analytical techniques. This is a point on which we have greatly insisted throughout the present work. No system can be better than the mathematical tools it uses, for it is these tools which will make feasible the increase of efficiency in systems performance. It is understood that the implementation of an effective simulator to cover all angles of systems operations implies in itself a substantial number of prerequisites which should be given due attention. One of the prerequisites is the preparation, in a clear and unambiguous manner, of a block diagram showing the operation sequence and the interrelations existing between the different posts. Similarly, data inputs have to be clearly and unambiguously defined. Here Fig. 1 presents a data-logging form to help identify one method of approach. The important aspect in this design is the simplification of data communication procedures, through the standardization of the crucial variables. To the need for starting with the fundamentals, we have made due reference throughout the present work.
Accurate Records and Timely Instructions
Instructions and mill data are manually entered, re-entered, and manipulated by operators, clerks, typists, key punch operators, and messengers. All these operations are potential sources of error, while their handling represents an additional time delay in obtaining process data vital to the operation of the mill.
FIG. 1. Data sheet for process control. (Form with columns for code, general information, data point, limits, frequency, and function.)
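A hypothetical rendering of one row of such a data sheet, useful mainly to show what "clearly and unambiguously defined" data inputs amount to in practice, might look like the following (Python; all field names and values are invented).

# One standardized data-point definition, as it might be transcribed from
# a data sheet of the kind shown in Fig. 1. All fields are hypothetical.

from dataclasses import dataclass

@dataclass
class DataPoint:
    code: str              # point identification code
    description: str       # general information
    units: str
    low_limit: float       # alarm / validity limits
    high_limit: float
    scan_seconds: int      # logging frequency
    function: str          # e.g. "log", "alarm", "control"

furnace_temp = DataPoint(
    code="F-101",
    description="reheat furnace zone 1 temperature",
    units="deg F",
    low_limit=1900.0,
    high_limit=2350.0,
    scan_seconds=30,
    function="log and alarm",
)
print(furnace_temp)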
Consider as an example a steel company with some twenty different manufacturing shops. Each operates in a relatively autonomous manner, making different products, under different conditions, with different machines and processes. The services of functional organizations, such as accounting, purchasing, and engineering, are organized at the headquarters for the whole manufacturing network. The shops are linked to each other through data communications which generally cover subjects such as:
• Payroll and wage incentives
• Equipment ordering and analysis
• Stock and inventory control
• Production planning
Orders placed by customers determine production loads in each shop, which means that manufacturing is organized mostly on a job-shop basis, except for a smaller part of the yearly business where orders are filled from stock.
The approach chosen by this company in handling its integrated data requirements was in contrast to that frequently taken, which covers one shop in depth for a variety of stages at one time, thus renovating the operation of the shop while leaving the rest of the company with the previous systems and methods. Here, management decided to choose functional areas of application and automate them across the board. The data network that resulted, for real-time data handling and process control, was composed of two large-scale systems in parallel, installed at the headquarters, and of satellite machines at conveniently selected geographic locations.
The resulting improvement in communications and control was impressive. Data concerning mill output, which usually is not available until after a turn has been completed, was instantaneously processed through the mill office, retyped or rewritten, and carried to the operating centers. In contrast to the "error and delay" procedures of the past, digital control efficiently presented machine-processed instructions for immediate order scheduling. Accurate reports were also made upon demand or on a regular pre-established basis. Mill schedules were changed almost instantaneously, if required, by the occurrence of cobbles, rejects, or product diversions. In fact, a variety of quality assurance information was handled in an effective manner.
At the management information end of the business most attention was paid to articles with capital financial impact, such as master shop scheduling, man-machine utilization, and budgetary control. Other, detailed data-handling sequences at the shop level were left purposely manned, with only marginal methods improvements made. An example is given in Fig. 2, with the dispatching of orders and the respective collection of production information. This made possible the selection of interface machines with very limited output media. Printing, for instance, is done on a typewriter, saving the cost of a printer for business purposes, a teleprinter being included for logged data. To the contrary, the interface computers were well equipped for direct process control duty, as can be appreciated through the following descriptive list.
Control. Ranges, unit allocation, and upper and lower alarm limits are selected through an internal program. A digital clock is included in the system and can be used for scan initiation. Time is logged at the start of each scan. A counter is provided to enable the state of on-off devices to be logged.
Channels. Because of varying requirements among shops, a very versatile channel construction is chosen, the interface being designed to provide facilities for logging from 30 to 200 channels.
Scan Initiation. The following modes of scanning are provided: (a) continuous scan, (b) periodic scans at intervals timed by the digital clock, (c) single scan, manually initiated, and (d) single scan, initiated after a preset count of plant operations.
FIGURE 2. (Dispatching of orders and collection of production information at the shop level.)
Alarms. Alarm warning is set on when pre-established limits are exceeded. A varying number of ten to thirty alarm channels is provided with adjustable upper or lower limits which may be set as desired. The upper and lower alarm limits are selected by the registered program.
Reversible Registers. Twenty 6-bit high-speed reversible registers are provided and can be used individually or in various cascaded combinations.
Output. Data output is recorded on either five- or eight-channel punched paper tape. A page teleprinter is available for printing out values of selected channels.
Teleprinter. At the high rate of scanning, the data channels can be logged on the teleprinter. While scanning at one point, all channels may be printed. Line feed is automatically inserted by command from the machine program.
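To make the interface description more concrete, the following sketch (Python serves only as pseudocode for the logic; the channel names, limits, and readings are invented) shows how one periodic scan might read a set of channels, compare each value against its alarm limits, and log the results with a time stamp.

# Sketch of a periodic scan with alarm checking, after the interface
# description above. Channel definitions and readings are hypothetical.

import random, time

CHANNELS = {                     # channel -> (low alarm, high alarm)
    "soaking pit 1 temp": (2150.0, 2400.0),
    "stand 3 screw position": (0.0, 12.0),
    "coiler speed": (0.0, 2200.0),
}

def read_channel(name):
    # Stand-in for the analog input equipment.
    low, high = CHANNELS[name]
    return random.uniform(low * 0.95, high * 1.05)

def scan_once():
    stamp = time.strftime("%H:%M:%S")        # time logged at start of scan
    log = []
    for name, (low, high) in CHANNELS.items():
        value = read_channel(name)
        alarm = value < low or value > high
        log.append((stamp, name, round(value, 1), "ALARM" if alarm else "ok"))
    return log

for entry in scan_once():        # a periodic scan would repeat on the clock
    print(entry)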
Reduced Operating Costs
Fuel costs, for one, represent an expensive item in the steel plant operating budget. These costs can be reduced in the soaking pits by means of digital control over ingot loading and unloading, and automated firing rates and soaking times. Fuel costs can be reduced in the reheat furnaces by adjustment of firing zone controls to compensate for slab pushing rates. Also, when mills are operating at or near the break-even point, a few percentage points' reduction in costs or a few more tons sold per month can mean the difference between profitable and unprofitable operation, and digital control (whenever carefully planned and instituted) can provide the framework for such optimization.
Similarly, digital automation helps integrate data vital to supporting services such as accounting and inventory records. In complex industrial operations, only through this paperwork can production be controlled, customers' orders tracked and scheduled, and mill performance supervised and improved; hence the interest in providing a direct link between the plant control system and the individual process control system. Finally, through simulation and experimentation at the level of the twin central computers, management obtained:
• Increased production rate per unit of time
• Systematic maintenance procedures for the production equipment
• Reduced in-process scrap loss and damage
• Reduced in-process inventory
• Avoidance of waste of raw materials
• Improved quality of the product
• Prompt filling of scheduled orders and timely delivery of opportunity orders
• Quick response to order inquiries
As far as financial controls are concerned, particular attention was paid to an accurate analysis of the monthly and the year-to-date costs into a substantial number of classifications and subclassifications. The general approach is presented in Table I.
TABLE I
COSTS

                       Monthly costs                                     Year to date
Expense item     Current month    Current month    Ratio        Actual    Budgeted    Ratios
                 actual           budgeted
  .  .  .
Total
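As a worked illustration of the ratios in Table I (invented figures; the ratio is taken here to be actual cost divided by budgeted cost), a short routine might produce the monthly and year-to-date comparison for each expense item.

# Compute actual/budgeted ratios for monthly and year-to-date costs.
# Expense items and amounts are hypothetical.

expenses = {
    # item: (month actual, month budgeted, ytd actual, ytd budgeted)
    "fuel":           ( 48_200,  45_000,  512_000,  540_000),
    "maintenance":    ( 21_700,  20_000,  198_500,  240_000),
    "scrap handling": (  9_300,  10_000,  101_200,  120_000),
}

print(f"{'Expense item':<16}{'Month ratio':>12}{'YTD ratio':>12}")
for item, (ma, mb, ya, yb) in expenses.items():
    print(f"{item:<16}{ma / mb:>12.2f}{ya / yb:>12.2f}")

month_total = sum(v[0] for v in expenses.values())
ytd_total = sum(v[2] for v in expenses.values())
print(f"{'Total':<16}{month_total:>12,}{ytd_total:>12,}")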
DATA LOAD EQUILIBRIUM THROUGH BATCHING
Consider the "Fine Steel Industries." This company has installed two computers, one at its headquarters and the other at the factory. Figure 3 shows the sequence of events and the flow of information as far as order handling procedures are concerned. * The initial order handling step involves the acceptance and allocation of orders. This is followed by issue of instructions to the works, the preparation of production line information, and the initiation of a production control plan, which isthen forwarded to the factory. The individual customer orders received are transcribed onto magnetic tape on a daily basis. This tape is introduced to the computer where it is compared with the customer master file, in order to ascertain whether a contract of sufficient size is outstanding. If so, the details of order number and tonnage are entered onto the production control tape. The computer produces appropriate notices to the sales department, for subsequently informing the customers whether or not it is possible to accept the order, together with approximate delivery dates. In this way, the production control
* Reference should also be made to the management information system outlined at the end of the present chapter.
FIGURE 3. (Order handling flow from the market through the headquarters, covering initial order handling, production control and scheduling, invoicing, acceptance notes, and management reporting, and on to the factory, covering production scheduling, factory accounting, and the rolling mill, finishing, and saw-cutting operations.)
file has details of the orders allocated, inserted against each proposed rolling of a particular section. Production planning and control data are teletransmitted to the factory, where a process control machine prepares the detailed production program. This same computer keeps an up-to-the-minute log of progress of production line operations against the scheduled orders placed on the mill. The procedure allows for the collection of information from the production line automatically. This makes it possible to compare data about the steel being rolled with information about the cast, ingot and bloom number, and the steel quality. Operations of this type pose a variety of problems. It is not possible, for instance, to label hot steel in any satisfactory manner. Hence, in addition to the production line flow, an information line flow corresponding to it must be established.
The process control computer, at the factory, time-shares plant operations having instantaneous deadline requirements with the preparation of the necessary administrative records and reports. Its tasks can be classified as:
• Data logging
• Alarm monitoring
• Fault recording
• Real-time calculation of process variables
• Production and accounting data handling
• Production efficiency improvement and cost control
• Total quality assurance
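The time-sharing arrangement described here amounts to a simple priority rule: plant duties with instantaneous deadlines are served first, and administrative work absorbs whatever computer time is left. A minimal sketch of such a dispatcher follows (Python; the task names and the two-level priority scheme are assumptions made for the illustration).

# Two-level dispatcher: deadline plant tasks pre-empt administrative work.
# Task names and ordering are hypothetical.

import heapq

PLANT, ADMIN = 0, 1                      # lower number = higher priority

def dispatch(task_list):
    queue = []
    for order, (priority, name) in enumerate(task_list):
        heapq.heappush(queue, (priority, order, name))   # order keeps FIFO within a level
    while queue:
        priority, _, name = heapq.heappop(queue)
        kind = "plant" if priority == PLANT else "admin"
        print(f"running {kind:5s} task: {name}")

dispatch([
    (ADMIN, "update accounting records"),
    (PLANT, "alarm monitoring scan"),
    (PLANT, "log rolling-mill data"),
    (ADMIN, "prepare production report"),
    (PLANT, "recalculate process variables"),
])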
The computer center at the headquarters links corporate management both with the factory and with a network of some twenty warehouses throughout the country. The transmitting stations are installed in the ordering-stock unit of each warehouse. Storeroom transactions are transmitted periodically to the company center, the particular procedures at each station being determined by store activity. A normal transmission consists of fixed information from a master card kept on file and variable information keyed in by the operator. As the transaction items are received in the computer center, they are processed on a priority basis. Those characterized as "not urgent" are batched for weekly handling. The machine program verifies the transactions that are valid, updates a master tape, and provides the warehouses with a list of all transactions received. When transactions are found to be invalid, the incident is listed with an error code for correction by the store. Once a week, the updated master is processed to provide:
A Warehouse Summary. This report lists, for each class of items or material and for the warehouse in total, the average and current investment and the change since the previous week, the cumulative current sales, the inactive items, and those that caused rush handling procedures because of acute variations in demand.
An Item Status Listing. This reflects stock on hand, investment, weeks of stock, cumulative usage, average demand, and generally all status information by item. The active items are underlined, while projections and extrapolations are performed and future fluctuations are determined.
An Open-Order Report. Its objective is to bring to management's attention the execution of the current orders and the forecasted needs for replenishment of stock. This report shows order number, order quantity, the date an order was placed, the balance left to be delivered, and the scheduled delivery dates. One version of the report is intended for the top management and is prepared on an "exceptions" basis.
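A compressed sketch of that transaction handling is given below (Python; the record layout, error codes, and urgency rule are invented): incoming storeroom transactions are validated against a master file, urgent items are applied at once, non-urgent ones are batched for the weekly run, and invalid ones are listed with an error code for the store.

# Validate storeroom transactions against a master file and split them
# into immediate, weekly-batch, and error lists. Layouts are hypothetical.

master = {"A-100": 340, "A-101": 25, "B-200": 0}     # item -> stock on hand

immediate, weekly_batch, errors = [], [], []

def handle(txn):
    item, qty, urgent = txn["item"], txn["qty"], txn["urgent"]
    if item not in master:
        errors.append((txn, "E01 unknown item"))
    elif master[item] + qty < 0:
        errors.append((txn, "E02 issue exceeds stock"))
    elif urgent:
        master[item] += qty                           # apply at once
        immediate.append(txn)
    else:
        weekly_batch.append(txn)                      # defer to weekly run

for txn in [
    {"item": "A-100", "qty": -40, "urgent": True},
    {"item": "A-101", "qty": +60, "urgent": False},
    {"item": "Z-999", "qty": -5,  "urgent": False},
]:
    handle(txn)

print("applied now:", immediate)
print("batched for weekly run:", weekly_batch)
print("errors for the store:", errors)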
The concept of "batching" data work inside the computer, though disputable in itself, is a means for handling in a more efficient manner the initial data capability of the system. Batched information would obviously be subject to what might be called a "restricted delay." It is possible to think of the data processing system as subject to demands having varying degrees of urgency, ranging from "very urgent" to "required within 24 hours." Batched work is done after the normal operating day, or during the time where there exists a valley in data load. At times, it may be profitable to re-evaluate the nature of the transactions originally proposed as on-line functions. These are the demands that cause the unevenness or the loading on the system. By rearranging them through
time, peaks could be lowered and valleys filled in. This can help the economic acceptability of a data control system, for, if it is designed to meet peak requirements over a substantial period of time, it will have a capacity greater than that required to handle the total load imposed upon it by its environment. If the excess capacity of the system is not great, the resulting degree of utilization will be high and the over-all picture will be favorable from an immediate economics point of view. Hence, the necessary steps should be taken from the very beginning to allow demands to be leveled.
The foregoing essentially means that to increase the utilization of a digital control system, it is necessary to separate the work to be done into instantaneous and postponable. It is, nevertheless, very difficult to define, in general terms, the nature of postponable work, since this would vary from one application to another. For example, the clearance and handling of sales orders involve a considerable amount of work in checking inventory levels, and for a manufacturing concern this is a time-independent job. Inversely, for a sales organization, it may well be time dependent:
(1) The time-independent transactions, by being shifted in schedule, help to maintain a certain evenness in the load. Transactions falling into this category need be processed neither at the time of initiation, nor at the time the events that they record occur. Examples are the preparation of invoice registers, changes to the bill of materials, and the production operations record.
(2) Time-dependent transactions include the analysis of customer orders, the scheduling and rescheduling of production, the control of critical inventories, and certain financial functions, for instance, specific accounts receivable or payable. Another class of time-dependent transactions is that of cash-on-hand control.
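The peak-shaving idea can be made concrete with a small scheduling sketch (Python; the hourly load profile, the capacity figure, and the split between time-dependent and postponable work are all invented): time-dependent work stays in its original hour, while postponable work is moved into the hours with the most spare capacity.

# Shift postponable work from peak hours into load valleys.
# Hourly volumes (in thousands of transactions) are hypothetical.

time_dependent = [5, 9, 12, 11, 7, 4, 2, 1]     # must run in its own hour
postponable    = [2, 4,  6,  5, 3, 1, 0, 0]     # may be moved to any hour
CAPACITY = 12                                    # system capacity per hour

leveled = list(time_dependent)                   # start with the fixed load
backlog = sum(postponable)                       # postponable work to place

# Fill the emptiest hours first until the backlog is exhausted.
for hour in sorted(range(len(leveled)), key=lambda h: leveled[h]):
    room = max(0, CAPACITY - leveled[hour])
    placed = min(room, backlog)
    leveled[hour] += placed
    backlog -= placed

print("peak before:", max(t + p for t, p in zip(time_dependent, postponable)))
print("peak after: ", max(leveled), "with unplaced backlog:", backlog)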
Data reduction and the updating of information that needs to be transmitted over long distances may well become time dependent because of the usage of data-transmission media. An example is that of receivable records from branch offices and the update and check of the stock level in the branch warehouses. Other classes of time-dependent transactions include functions such as checking on and off work, and the answering of inquiries concerning the processing status.
In cases of data inquiries, the eventual inability to answer specific questions can be attributed to four basic operations which tend to obliterate the information describing the events that are of interest. These operations are:
• Last-value recording of a single-event variable, thereby losing all preceding values of the variable.
• Consolidation of the values of many single-event variables, thereby losing the identity of individual events.
• Dissociation of connected events by distribution of data, thereby losing the associative characteristics of these events.
• Detail-discarding, thereby losing the characteristics of these details.
Such operations are not in effect because of ill-will or because of failure to recognize the limitations that they may impose. They are due to present technological restrictions on computer materials and components, and to financial considerations. Present data control systems, because of physical limitations of existing components, must account for these restrictive processes. Future systems may provide that questions are acted upon by a "question analysis and answer" generation program which would automatically analyze them and subsequently determine if they contain sufficient information and if they conform to certain pre-established conditions. The program will automatically extract the information necessary for answering the respective question, and it will present the answer to the questioner. Generation programs of this nature could also, after analyzing some of the answer information, notify a questioner of the estimated time in which an answer will be forthcoming and, for instance, if an answer should appear voluminous, indicate this to the questioner, and possibly suggest a good way of reducing the length of the answer. Such a service would impose a necessary configuration on the data control system, concerning, mainly, the storage capacity and the manner of storing and of retrieving information.
For an example of the necessary storage capacity, consider a computer specifically designed for on-line purchase order processing. Say that the daily volume of orders to be processed is at the level of 7000 and that there are about 250 characters of original input information associated with each order. This makes an information growth rate of 1,750,000 characters per day, or about 40 million characters per month. With the growth rate forecasted in the example mentioned, after three years one and a half billion characters need to be stored in the system. Assuming a fixed capacity system, e.g., at the one billion character level, it would obviously be necessary to remove 1.75 million characters from the system every day after the full character capacity of the ensemble has been reached. The selection criterion for removal of information would be its potential value, and this is a matter that requires very careful study. The identification of "potential value" criteria may bias the whole control system's operations if not done in the appropriate manner.* The alternative to a fixed capacity system is a system with an infinitely large storage capacity. If access time in such a system is nonuniform, it would be
* See also D. N. Chorafas, "L'Influence des Ordinateurs sur la Structure des Entreprises." Editions de l'Entreprise Moderne, Paris, 1964.
similar in operation to a "fixed" capacity system in which the information that has been "removed" is still accessible but pushed into slower memory devices. Growth rates in information handling can be more critical if one considers that information has the amazing ability to generate other information, and that there may be almost no end to where this process can go. Machine-generated information would obviously require storage space. One possibility for handling this problem is to store only input data in the digital control system, that is, to store no machine-generated data except for immediate use, and to recreate all such data from stored input information. Another possibility is to save on input data storage by making use of the fact that some information is essentially redundant.
The total load of an integrated data system can be shown in numbers of instruction executions, random file selects, ordered file selects, and input-output characters. As defined here, a "select" is either a read or a write operation in the file. A distinction is made between random and ordered file selects because a number of ordered selects can be done in one file search, since the transactions for like records can usually be arranged in the same sequence in which the records are filed. Hence, a number of ordered selects requires less time than an equal number of random selects.
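The storage arithmetic of the purchase order example can be checked in a few lines (Python; the figure of 23 working days per month is an assumption chosen to reproduce the 40-million-character figure quoted above, the rest follows directly from the numbers in the text).

# Storage growth for the on-line purchase order example in the text.
# Working days per month (23) is an assumption that reproduces the
# "about 40 million characters per month" figure quoted above.

orders_per_day = 7_000
chars_per_order = 250
working_days_per_month = 23

chars_per_day = orders_per_day * chars_per_order          # 1,750,000
chars_per_month = chars_per_day * working_days_per_month  # ~40 million
chars_after_3_years = chars_per_month * 36                # ~1.5 billion

capacity = 1_000_000_000                                   # fixed-capacity case
print(f"per day:        {chars_per_day:,}")
print(f"per month:      {chars_per_month:,}")
print(f"after 3 years:  {chars_after_3_years:,}")
print(f"daily overflow once {capacity:,} characters are reached: {chars_per_day:,}")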
EXAMPLE FROM A PROCESS-TYPE APPLICATION
The foregoing discussion on the load imposed on a data control system focused on manufacturing and sales operations. To make the picture more complete, we will consider an example from a process-type application. More precisely, this example is taken from an on-line computer controlling the manufacture of precision-deposited carbon resistors. ATLAS* is one of the first fully automated facilities for making precision-deposited carbon resistors. The process is monitored and controlled from a central digital computer. The automated line manufactures, inspects, tests, and packs resistors at a rate of 1200 units per hour. The central computer is used to control the number of resistors of each type produced, in accordance with programmed needs, and to perform statistical quality control analysis on a real-time basis as the units are fabricated. Inspections at critical stages of the process are performed automatically and the results fed back to the computer instantly for analysis, so the computer can alter any portions of the process as required to maintain desired product uniformity and quality.
* Fictitious name, but the text is a case study of an actual application.
The deposited film resistor is not basically a complicated device. It consists of a thin film of pure crystalline carbon, deposited on a short ceramic rod. After deposition, the carbon-coated rod has a spiral path cut along its length, like the thread on a screw. The pitch of the spiral cut determines the effective carbon path length, and hence its over-all resistance. The leads are then attached to the rod and it is mounted in a suitable enclosure. Despite the device's inherent simplicity, there are many potential pitfalls in its manufacture which can result in poor quality and short life. If any organic material comes into contact with the thin film during manufacture, it can result in a change in the resistance value in subsequent use or in short life. Several opportunities exist for such contamination in a nonautomated process of manufacture, imprecise control of the deposition process being one of the major reasons. Precise control with feedback during deposition of the carbon film and subsequent spiraling can eliminate the need for manual trimming operations and the resultant possibility of contamination.
In the ATLAS system, the deposition of the carbon film on individual ceramic rods, or cores, is automatically regulated. The central computer controls furnace temperature, the flow of methane gas, and the rate at which the cores pass through the furnace, which determines the thickness of the deposited film and hence the resistance. With computer control, the resistance of the core can be controlled to within 2% instead of the former 25%, eliminating the need for sorting before the spiraling operation. After deposition, the cores are transported to an inspection machine which checks the resistance of the deposited film and feeds the results to the central computer. If the cores show a significant shift from the desired value, the computer automatically adjusts the operating parameters of the deposition furnace. The next step in the process is the deposition of the gold terminations at both ends of the core by a vacuum sputtering process. Because resistors of different wattages are made, a different-size mask is required for each size. This is automatically selected on instructions from the computer. We need not be concerned with the detailed description of the other operations; suffice it to say that the computer automatically programs additional cores to provide for expected rejects in order to end up with the required number of resistors. If it finds that rejects are running higher than anticipated, it automatically increases the number of cores released to the furnace in order to compensate for the actual rejects.
A careful consideration of this and similar cases indicates that, with regard to data load calculations for an information system with real-time requirements, there exist some basic principles which cannot be disregarded simply because of a variation in the nature of the data.
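The reject compensation mentioned above for the ATLAS line reduces to a short calculation, sketched below (Python; the reject-rate figures are invented, and the release rule is only one plausible way of doing it).

# Decide how many extra cores to release so that, after expected rejects,
# the required number of finished resistors is obtained. Figures invented.

import math

def cores_to_release(required_good, expected_reject_rate):
    """Release enough cores to yield required_good after rejects."""
    return math.ceil(required_good / (1.0 - expected_reject_rate))

required = 10_000
planned = cores_to_release(required, expected_reject_rate=0.04)   # plan on 4%
print("initial release:", planned)

# Mid-run correction: rejects are running higher than anticipated.
good_so_far, rejected_so_far = 4_600, 400
observed_rate = rejected_so_far / (good_so_far + rejected_so_far)
still_needed = required - good_so_far

remaining_in_plan = planned - (good_so_far + rejected_so_far)
needed_at_observed_rate = cores_to_release(still_needed, observed_rate)
additional = max(0, needed_at_observed_rate - remaining_in_plan)
print("observed reject rate: %.1f%%" % (100 * observed_rate))
print("additional cores to release:", additional)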
Provided that a data control ensemble is technically feasible, operationally acceptable, and financially justifiable, its utilization in connection with an operating system should be approached with an open mind and in a concept-free manner. It is true that in a number of applications in the chemical and petroleum industry, optimal process control imposes its own data handling specifications. The industrial control computer must fit into an environment that cannot be selected by the computer designer, but which is in the domain of the process designer. The process control computer must be able to fit into a great variety of industrial production systems. In this respect, speaking of information machines available today, it differs markedly from what we are accustomed to calling "a business data processor," which can be designed as a unit of a planned system with all peripheral gear specified.

It is equally true that medium to large computers have the capability of handling data control requirements on an instantaneous basis. Non-process-type applications performed on-line and in an instantaneous manner can impose demands on the system which in their most basic sense have much in common with process-type applications. Common problems do exist in all types of on-line use of data systems, as, for instance, the deadlock problem. Similarly, common opportunities exist, as is the case with unified management operations.

To underline the generic aspects of a data control application, in the following paragraphs we outline the basic framework of a management information system we developed some six years ago for a manufacturing company in the Midwest. Since then we have used this circuit as the understructure for a good number of process-type applications:

Subsystem A: Initial Order Handling
• The client company received an average of 1000 orders per day, each order with one to four different items.
• A new number system with redundancy checking was developed to identify the items, so that the computer could automatically check the correctness of the order (a brief sketch of such a check is given at the end of this chapter).
• A completely automatic customer identification system was developed and tested. It provided a basis for the generation of customer numbers by the machine so as to eliminate the interference of human operators in data handling.
• Company sales exclusives were automatically checked and treated.
• Credit policies were implemented by the machine. Overdue accounts were identified; possible excesses of credit level were brought to the attention of management.
• All incoming orders were automatically classified into "cleared" and
"withheld't-sonly the former were processed to manufacturing operations. Subsystem B: Sales Analysis • Statistical tests such as X 2 , t, and analysis of variance were specified for the appropriate sales data. • Mathematical tests were made by product line, geographic area sales office, and using industry. Subsystem C: Inventory Control • Mathematical models were specified for inventory control of both raw material and semifinished products. • The whole inventory setup of the company was described in computeroriented terms. Special attention was paid to rush orders that received preferential treatment in inventories. • Automatic recorder points and automatic issue of orders for replenishment of inventories were specified. Subsystem D: Cost Control • Making use of the fact that the company had available an established standard cost system, statistical tests were specified to enable accurate and timely identification of variance from standards. • Cost data were integrated for the production, sales, and administrative divisions. Subsystem E: Production Control • Two separate production control systems were specified-conforming to the particular conditions faced by the company. • The first, for items in line production, made use of mathematical programming. • The second, for job-shop operations, kept track of the processing of production orders leaving the production planning function to human operators. Subsystem F: Payroll • Company payroll was kept on traditional type tabulating equipment. Conversion of the whole setup into electronic data processing was specified. • Furthermore, the wisdom of using the computer to calculate sales commissions was examined. Computation of commissions was integrated with sales analyses, since at that earlier step, sales by-product, sales office, and salesman had been established Subsystem G: A ccounts Receivable • By means of a sophisticated coding scheme, the data processor was able to perform a complete upkeep of accounts receivable, crediting and debiting customer accounts, and establishing which accounts are becoming overdue.
• Overdue accounts were carried to a separate tape and a close follow-up was made regarding their payment; reminders were printed out for customers, and credit management was automatically informed when the last delay had expired.

Subsystem H: Budgeting and Profitability Analysis
• This is a yearly or semiyearly subsystem. The sales analysis data established in Subsystem B were used as basic information for sales forecasts. Cost control data were used for a projection on cost. Both forecasts were given, respectively, to sales and financial management for re-evaluation.
• The subject information was used subsequently to establish budgetary data by department. This budget would then be tested, on the basis of criteria set by company executives, as to its profitability. This essentially amounts to an automatic budgeting procedure.

This project is of particular interest in that it was a "third-generation" computer application. The client had had years of experience with computers of various makes, and requested this assistance in developing a system to overcome the shortcomings that had been experienced. The result of this work was an integrated data processing system that prepares cost control reports, provides inventory control data based on mathematical models, prepares a comprehensive statistically based sales analysis, and in general provides management with currently adjusted budgetary data and permits testing current expenses against budgets. Data load calculations proved to be of crucial importance in structuring the subject information network.
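Before leaving this example, the redundancy-checked item number mentioned under Subsystem A can be illustrated with the brief sketch promised there. A weighted modulus-10 check digit is shown below in Python; the alternating weights are an assumption of ours, since the client's actual number system is not described here.

    # Illustrative weighted modulus-10 check: the last digit of an item number is
    # chosen so that the weighted sum of all digits is divisible by 10.  The
    # alternating weights (3, 1) are an assumption made for the sketch.

    WEIGHTS = (3, 1)

    def check_digit(body):
        """Compute the check digit for the numeric body of an item number."""
        total = sum(int(d) * WEIGHTS[i % 2] for i, d in enumerate(body))
        return str((10 - total % 10) % 10)

    def is_valid(item_number):
        """True if the trailing digit agrees with the recomputed check digit."""
        return check_digit(item_number[:-1]) == item_number[-1]

    if __name__ == "__main__":
        body = "47192"
        number = body + check_digit(body)   # -> "471923"
        print(is_valid(number))             # True
        print(is_valid("471928"))           # False: a mis-keyed final digit is caught

A check of this kind lets the machine reject most transcription errors at order entry, before any further processing is attempted.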
Chapter XXVIII

PRODUCTION AND INVENTORY CONTROL AT THE FORGE METAL WORKS

This case depicts a simplified version of an actual situation. The company in question manufactures sheet, pipe, and special alloys. It buys most of its raw materials from associate companies but imports some from abroad. The production process involves melting and mixing, in accordance with standard specifications, to obtain intermediate, semifabricated products for further fabrication, usually by other divisions of the company. On a company-wide basis, the basic processes are as shown in Fig. 1.
FIGURE 1. Raw material → intermediate products → manufacturing → finished products.
Though the volume of production in the company has been and is increasing at a satisfactory rate, top management has over a long period of time been concerned about poor profitability resulting from relatively high costs and low margins. Three main areas of substantial cost have been identified in this connection: raw materials, depreciation of equipment, and indirect labor. Other problems occurred in the area of serviceability to the customer firms. Forge Metal Works based its reputation on its ability to deliver on demand, delays being limited to twenty-four hours. This, in turn, implied high inventory costs and expensive distribution channels.

With respect to organization, the Forge Metal Works has five operating divisions (Fig. 2). Three divisions, the East, Central, and West, receive iron ore and produce iron and steel bars and strips according to customer specifications. The variations in type and quality are limited. Two of these divisions own iron mines. The Tubes and Plates division also
FIGURE 2. The five operating divisions of the Forge Metal Works.
produces on customer specifications, but it accepts virtually any specification, although the cost per unit varies greatly. The fifth division has Specialty Steel as its product line. It manufactures for stock and keeps inventories high enough to meet potential demand.

When the conversion to a third-generation computer with integrated data processing aspects was considered, a number of initial studies were made to gauge the depth of the problem. Subsequently, a plan of action was developed:
• The requirements of the operating divisions were studied individually.
• With the assistance of operating personnel, a general plan was developed for inclusion of each division into an integrated processing system.
• Individual areas requiring the collection of data and the development of ground rules and standards were identified.
• A final plan was developed and results were evaluated in the light of its financial and serviceability effects.
• Systems and procedures work, pertaining to data control, was completely redone before any application was attempted.

The company decided that central data processing would be concerned with the planning of over-all production, casting, purchasing, and selling control, while the divisional data centers would maintain limited responsibilities for raw materials management, production planning and control, payroll, general accounting, and other activities at the factory level. Of these, the most important involved:
• Inventory control of raw material, with the assigned task of effectively decreasing the level of raw materials stock held at each plant.
• Calculation of raw-material combinations, oriented toward the elimination of the dead stock of raw material that was sometimes caused by the extreme complexity of the calculations necessary to use it (a simplified sketch of such a calculation follows this list).
• Reduction of indirect labor cost, in spite of increasing production, by eliminating indirect labor involved in data handling and other supporting activities.
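The raw-material combination calculation referred to above can be sketched as follows. The Python fragment assumes, purely for illustration, a single alloying element, a tolerance around a target content, and a preference for the oldest lots; the company's actual rules were considerably more involved.

    # Illustrative sketch: choose a combination of raw-material lots whose blended
    # alloy content falls within tolerance of a target, preferring combinations
    # that consume the oldest (potentially "dead") stock.  The lot data, the
    # single-element blend model, and the tolerance are assumptions.

    from itertools import combinations

    # (lot id, weight in tons, alloy content as a fraction, age in days)
    LOTS = [
        ("A", 20.0, 0.010, 400),
        ("B", 35.0, 0.025, 120),
        ("C", 15.0, 0.018, 300),
        ("D", 40.0, 0.030,  30),
    ]

    def blended_content(lots):
        """Weighted-average alloy content of a set of lots."""
        total = sum(w for _, w, _, _ in lots)
        return sum(w * c for _, w, c, _ in lots) / total

    def best_combination(target, tolerance, max_lots=3):
        """Return the feasible combination with the greatest total age (oldest stock)."""
        best, best_age = None, -1
        for r in range(1, max_lots + 1):
            for combo in combinations(LOTS, r):
                if abs(blended_content(combo) - target) <= tolerance:
                    age = sum(a for _, _, _, a in combo)
                    if age > best_age:
                        best, best_age = combo, age
        return best

    if __name__ == "__main__":
        choice = best_combination(target=0.020, tolerance=0.002)
        print([lot_id for lot_id, _, _, _ in choice])   # -> ['A', 'B', 'C']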
EAST, CENTRAL, AND WEST DIVISIONS

Beyond these corporate guidelines, the East, Central, and West Divisions decided to carry out the following functions:
• Determine the acceptability of each order as received, on the basis of the mill's ability to produce.
• Develop promise dates for orders.
• Integrate orders into a complete order backlog.
• Select or develop manufacturing practices.
• Maintain in-process materials inventories at optimal levels.
• Load equipment and prepare schedules.
• Report production and materials usage.
• Invoice customers.
• Generate management control reports.
• Supply, either periodically or upon demand, data pertaining to cost, production, and quality.

The installation of such a data processing system in the subject divisions was expected to involve major changes in the structure and organization at the mill. Part of the job was the analysis of the quality information necessary for the acceptance, planning, scheduling, production, and shipment of an order, and of all the associated accounting functions. The approach further involved:
• The consolidation of all practice writing and product coding into one centralized practice section.
• The creation of a centralized order service section, for each division, to serve the divisional operations from one location.
• The computerization of the production control centers.

Line supervisors and staff heads were made directly responsible for developing and providing the necessary data. The coordination of interdepartmental relationships was to be accomplished through the designation of departmental representatives to a special committee. Production planning, accounting, industrial engineering, and metallurgy participated in this organization effort. The production control department, it was decided, should accumulate all order forms and make a "controlled" mailing to the computer center twice each day. The subsequent access to information is shown in Table I.

For intraworks orders, the originating division was assigned to initiate the order entry form as the only order document. All information required of production control was to be completed by the ordering division. In addition, information pertaining to product description (specified grade, coil weight, and inspection class number) was to be identified in the proper location on the form. The hot-mill production control department would then review the information and forward it to the metallurgical department. From this point on, the order was to be handled similarly to a trade order.

Throughout the systems analysis which preceded these decisions, particular attention was paid to questions of sales order entry. Received orders were to be entered automatically into the sales order backlog by a series of integrated computer operations. The first of these operations was designed
TABLE I

Subject                                Processor                Access to data
Inventories                            Computer                 Computer inquiry
Sales order backlog
Equipment loading
Standard product cost

Equipment schedules                    Computer                 Printer, daily runs
Projection reports
Accepted and rejected orders

Order progress                         Supervisor (a)           2-way telephone

Equipment breakdowns                   Equipment foreman (a)    2-way telephone
Local scheduling problems

(a) Local plant management had access to the information system through an interface unit.
to determine the desired mill rolling period from the requested shipping date. Similarly, the systems analysts paid particular attention to the computerization of the subsequent works. For instance, the selection of the appropriate factory equipment for processing the order was made by the computer on the basis of data available in the metallurgical specifications file, where quality considerations dictate the use of one or the other mill. Otherwise, the technological sequence of processing was determined by the acceptance standards and limit codes in the cycle prediction data. In this manner, any possible tonnage spread between mills can be efficiently coordinated. The equipment selection priority to be used by the computer is as follows:
• Earliest mill rolling in desired rolling period with unfilled acceptance limit.
• Next latest, until last mill rolling in desired rolling period with unfilled acceptance limit.
• Earliest mill rolling in desired rolling period with filled acceptance limit and variable limit code.
• Next latest, until last mill rolling in desired rolling period with filled acceptance limit and variable limit code.
• Repeat the first four steps for the other mills. Adjust rolling and shipping period relationships, if necessary.
• Next rolling on either mill, prior to desired rolling period, with variable limit code, filled or unfilled.
• Next rolling on either mill, after desired rolling period, with variable limit code, filled or unfilled.
• Reject order.
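This priority ladder translates almost directly into a program. The Python sketch below scans candidate rollings in the stated order; the record layout, the field names, and the mill designations are assumptions made for the illustration.

    # Illustrative sketch of the equipment-selection priority scan.  Each candidate
    # rolling carries its mill, its rolling week, whether its acceptance limit is
    # already filled, and whether its limit code is "variable".

    def select_rolling(rollings, desired_start, desired_end, mill_order=("No. 1", "No. 2")):
        """Return the first candidate satisfying the priority ladder, or None (reject)."""
        def in_period(r):
            return desired_start <= r["week"] <= desired_end

        by_week = sorted(rollings, key=lambda r: r["week"])

        # Steps 1-4 on the preferred mill, repeated (step 5) for the other mills:
        # earliest-to-latest rolling with an unfilled acceptance limit first, then
        # earliest-to-latest rolling with a filled limit but a variable limit code.
        for mill in mill_order:
            candidates = [r for r in by_week if r["mill"] == mill and in_period(r)]
            for r in candidates:
                if not r["limit_filled"]:
                    return r
            for r in candidates:
                if r["variable_limit"]:
                    return r

        # Steps 6-7: the next rolling on either mill outside the desired period,
        # provided it carries a variable limit code (filled or unfilled).
        for r in by_week:
            if not in_period(r) and r["variable_limit"]:
                return r

        return None     # step 8: reject the order

    if __name__ == "__main__":
        rollings = [
            {"mill": "No. 1", "week": 12, "limit_filled": True,  "variable_limit": False},
            {"mill": "No. 1", "week": 14, "limit_filled": True,  "variable_limit": True},
            {"mill": "No. 2", "week": 13, "limit_filled": False, "variable_limit": False},
        ]
        # Mill No. 1 is exhausted first, so its filled-but-variable rolling in week 14 wins.
        print(select_rolling(rollings, desired_start=11, desired_end=15))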
In this manner, a specific rolling week is selected for each incoming order. The computer next determines whether an inventory problem may exist in meeting this rolling date. The possible existence of an inventory problem is indicated by either of the following conditions:
• The rolling week is two weeks or less away
• The rolling week is four weeks or less away and the billet requirements represent characteristics historically difficult to obtain

If neither of these conditions exists, the computer will assume that billets can be made available by acquisition. If one or the other condition does exist, the computer will interrogate the unallocated balances in this inventory classification to determine material availability. If balances are sufficient, the requirement will be noted and billet availability assigned. If balances are not sufficient, the rolling date will be advanced so as to allow a sufficient period for billet acquisition while still staying within the customer's desired shipping period; this rolling date advancement will take place according to the rules previously stated. Subsequently, the computer will generate two reports to inform production control of what has occurred.

The Received Order Report. This report channels to production planning information on both new orders received and order changes that have been processed. The detailed data appearing on the received order report include:
• Order number
• Customer identification
• Catalog number
• Size
• Quantity
• Customer desired delivery period
• Planned rolling week
• Rolling mill selected for this order
• Reason code, indicating circumstances or conditions that became evident in the computer entry procedure

The question of the "reason code" necessitates further attention. This code may affect the planned rolling date by indicating, for instance, that the order's planned rolling week is not in conformance with the customer's desired time, or that a rolling of this size is planned during the time desired by the customer but has different acceptance limits. It is also possible that inventory does not exist, or is on order, to allow rolling during the customer's desired rolling period.

The Order Backlog Recap Report. This report has also been designed for the information of production planning. It is issued weekly. Its purpose is to indicate the status of the sales order backlog as related to rolling cycles. This information will make possible an effective analysis of rolling cycle
predictions and facilitate their revisions, to ensure proper order acceptance determinations by the computer. A secondary benefit is to aid in predicting the mills' operating levels and crew requirements for the next week or weeks. The report shows:
• Size, finish, and dimension expressed in decimal inches
• Shape code identifying rounds, squares, hexagons, and rods
• Rolling week: day, month, and year
• Acceptance limit and predicted cycle capacity for each size
• Limit code defining fixed and variable limit cycles, and cycles established for quality-restricted items
• Amount of rod orders accepted for rolling in this cycle
• Number of orders making up the tonnages listed above
• Per cent of acceptance limit filled to date
• The number of standard mill hours represented by the accepted tonnage for a specific size and week

The production control department must examine what revisions to the rod mill cycle are necessary. Decisions may involve changing acceptance limits; changing cycle size, capacity, or flexibility; adding new sizes; varying the predicted rolling sequence; etc. The resulting decisions are reported back to the computer center on rod mill rolling cycle prediction reports. These reports will also be used to request detailed cycle data from the computer, to aid in decisions involving deletions, changed rolling dates, and the like.

TUBE AND SPECIALITY DIVISIONS

In the studies conducted on the usage of integrated data processing media, the Tube, Plates, and Speciality Steel divisions placed particular emphasis on the following areas:
• Order entry
• Bookings report program
• Mill invoicing
• Warehouse inventory control programs

The two divisions maintain a joint sales force throughout the country. The data processing plan called for one major computer installation able to handle the sales information across the board, and for satellite centers located at the mills which would be able to communicate with the central machine through punched tape, magnetic tape, teletype, and telephone. The operational sequence which followed data integration is identified in Figs. 3 to 12. Throughout this description most emphasis has been placed on the work performed by the interface computer at the mill.
FIGURE 3. Order entry flow: from the customer, through the sales branch, to the customer service center at the headquarters; data insertions and corrections produce a new or corrected order master tape, with copies to the branch tape file and to credit and pricing.
The Information Circuit

The information circuit starts as follows: The account salesman, in a sales branch, receives orders from the customers either by telephone or by mail. It is the account salesman's responsibility to insure that customers' requirements are properly stated on the order, by reference to specifications or review against a master item card. Such a card is available if the customer has previously purchased that item from the Forge Metal Works. The salesman prepares an order entry work sheet showing just the variable data, to minimize the information volume, and hands it to a teletype order writer.
FIGURE 4. On-line data transmission to the two-divisional computer; order change, freight, and price change messages arrive from the central computer.
Both the master item card and the work sheet bear a master tape file index number, which indicates to the order writer the proper master tape to use in preparing the new sales order. This master tape reduces the time and effort required to prepare the new order, since only the information which varies, such as date, customer order number, quantity, etc., need be added. The order is typed off-line, creating a teletype order tape at the same time. It is then transmitted by teletype to the customer service center. The branch office files the order tape, which can be used for repeat orders, in its master tape file cabinet. A copy of the order is placed in a master card file maintained alphabetically by customer and by product.

The teletype room in the "customer service center" receives the subject message and produces copies of the order, and an order tape. A copy of the order is conveyed to the proper product station in the customer service center, and reviewed for completeness of description, specification, and
FIGURE 5
acceptability as to product and customer, in light of the customer's requirements. It is then reviewed for proper mill allocation and passed to a coding clerk for the application of traffic routing. Following this, it is returned to the teletype room, where, if any additions or corrections have been made, a new order tape is produced. If a corrected tape is not required, then the tape initially received is used to transmit the order by teletype directly to the appropriate mill or mill warehouse location. At the time of transmission, additional copies of the order set are produced and distributed to credit, data processing, and pricing. If, because of corrections or additions, a corrected order tape was produced in the customer service center, this corrected tape is mailed to the sales branch for its master tape file, for use in the preparation of repeat sales orders. Simultaneously with the transmission of the sales order to the mill, the information is transmitted to the data processing center. Other messages, such as order change, freight, and price change, may originate anywhere within the company, and be addressed to the computer center.
FIGURE 6
Computer Processing
The computer will process the aforementioned paper tapes against the old order master, sorted on a reel of magnetic tape. It will then write a new order master. At the same time, the old booking master, which is on a reel of magnetic tape, will be updated, and a new booking master tape written. While these two masters are being updated, the computer will print out control totals of all these data and, at the same time, write a work tape that will subsequently be used to print, in prescribed formats, the order register, principal orders reports, various bookings reports, and daily comparative reports.

As steel is assigned, in this process, from an existent supply in stock, a "steel move" order would be punched out. At the same time, a new inventory stock status would be printed and any order that would fail to apply would be written out on a magnetic tape containing nonapplied orders. These
FIGURE 7
nonapplied orders would then be listed, investigated, and returned for processing either that day or the following one. This would complete a delicate program, working within the restrictions supplied by management concerning authorized grades and allowable substitutions in process stock authorizations, minimum change size to properly provide finished size, etc. Following this program, and as the needs may dictate, the applied orders, as written out on the magnetic tape or with new specification standards, would go through a sorting operation. The paper tape that came in with the order would then be forwarded to the shipping section, awaiting the shipment of the order.

Before a discussion on data handling in the shipping section can take place, several other functions must be considered. The first of these concerns what goes on in data processing at the mill. Here, the sorted data from the preceding step are handled by a satellite computer against a master file. At this
FIGURE 8. Complete manufacturing steps from the data gathering control unit.
time, the order begins to become more explicit as to actual manufacturing steps. In one pass, each order is analyzed into the particular standard operations necessary to produce a finished product, dependent upon the physical characteristics of the steel applied. As a by-product of this pass, exception data are generated in printed form, concerning marginal profit items for management review, along with those orders for which no standard processing has been developed. In every case, the aforementioned situations are investigated, corrected, and readied for the next processing. New processing methods, or revisions to existing methods, are also applied against this master file, creating a new specifications master to be used in later processing. Orders will then be supplemented by data feedback from special units located throughout the mill.
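A compressed sketch of this order-explosion pass is given below in Python. The specifications master is represented as a simple table keyed by product and grade; the layout, the entries, and the 10 per cent profit threshold are assumptions, not the mill's actual files.

    # Illustrative sketch: explode each order into its standard operations from a
    # specifications master keyed by (product, steel grade), and generate exception
    # data for orders with no standard processing or with marginal profit.

    SPECS_MASTER = {
        ("strip", "1040"): ["pickle", "cold roll", "anneal", "slit"],
        ("bar",   "4140"): ["reheat", "hot roll", "straighten", "cut"],
    }

    def explode(order, profit_threshold=0.10):
        """Return (standard operations, exception notes) for one order record."""
        exceptions = []
        ops = SPECS_MASTER.get((order["product"], order["grade"]))
        if ops is None:
            exceptions.append("no standard processing developed")
            ops = []
        if order["profit_margin"] < profit_threshold:
            exceptions.append("marginal profit item, for management review")
        return ops, exceptions

    if __name__ == "__main__":
        order = {"number": 7741, "product": "strip", "grade": "1040", "profit_margin": 0.06}
        operations, notes = explode(order)
        print(operations)   # ['pickle', 'cold roll', 'anneal', 'slit']
        print(notes)        # ['marginal profit item, for management review']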
FIGURE 9
At the production floor level, the completed move orders must also be processed; this information is provided by material handlers within the mills. A computer sorting operation would be performed to arrange the above data in a sequence suitable for processing against the machine load master. For any manned production planning operation, one of the most difficult jobs is the order loading and unloading of all manufacturing units within the mill. This kind of data manipulation is one of the simplest for computer
FIGURE 10
applications. In one pass, updating the machine load master file from previous processing, the new orders are treated as debits to machine load, while the feedback from data gathering units throughout the mill is processed as credits to the load. The computer will relieve a particular unit of the orders it has processed during the previous shift, taking into consideration variables such as breakdowns, and then load into the unit all new orders, reprocessing what is necessary to complete the order to the customer promise. The logic necessary to perform this loading and unloading will be determined from established production levels of all units and priority processing categories for all classes of orders (key customer, emergency, stock, and the like). Similarly, a matrix of possible substitutions of materials and processing, by grade, condition, etc., will be used by the computer. As a joint product of this operation, a new machine load master file is established, with an accompanying machine load report which will spotlight the "bottleneck"
FIGURE 11. Departmental performance and order performance reports.
situations and, through careful analysis, enable management to develop new manufacturing techniques and establish revised parameters to meet current production levels and facility usage.

Utilizing the sorted data, as above, two distinct file maintenance operations will then be performed by the computer:
• Order file maintenance
• Inventory file maintenance

In the inventory file updating procedures, the steel application tickets reflecting the actual physical application of metal to an order are processed against the raw material inventory files to reflect current status. This updated file is used in the following day's processing as input to the steel application. Simultaneously with the updating of inventory, new orders, which have been carried throughout the entire data processing sequence thus far, are introduced into the open order master file as a part of the order file maintenance operation.
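These two file-maintenance passes can be sketched as follows; the record layouts, and the dictionaries standing in for the magnetic-tape files, are assumptions made for the Python illustration.

    # Illustrative sketch of the two file-maintenance passes: steel application
    # tickets debit the raw-material inventory file, and new orders are added to
    # the open-order master.  Dictionaries stand in for the magnetic-tape files.

    def maintain_files(raw_inventory, open_orders, application_tickets, new_orders):
        """Update both files in place and return them for the next day's run."""
        # Inventory file maintenance: each ticket records the pounds of a given
        # stock class actually applied to an order.
        for ticket in application_tickets:
            raw_inventory[ticket["stock_class"]] -= ticket["pounds_applied"]

        # Order file maintenance: new orders join the open-order master.
        for order in new_orders:
            open_orders[order["number"]] = order
        return raw_inventory, open_orders

    if __name__ == "__main__":
        inventory = {"billet-1040": 120000, "billet-4140": 80000}
        orders = {}
        tickets = [{"stock_class": "billet-1040", "pounds_applied": 14500}]
        incoming = [{"number": 9902, "promise_week": 23}]
        print(maintain_files(inventory, orders, tickets, incoming))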
Orders and Inventory

Completed manufacturing operations, as reported by the data gathering
FIGURE 12. Invoice register; invoices to the customer, etc.
units, are used as updating media for in-process orders, reflecting actual pounds and costs compared to standards. All completed orders are pulled to a work tape for subsequent processing and, as a result of this "completing" operation, teletype shipment tapes are prepared and transmitted to the central computer. An order status report is also produced, showing in detail the current status of all orders at a particular location. By exception reporting, manufacturing problems can be brought to light while the updating operation is taking place. These can be either problems that have been encountered or those that will be encountered unless corrective action is taken. The procedure is fairly simple. As soon as:
• Completed orders
• Open orders
• Current inventory
have been established by the machine, information is available to be sorted, manipulated, and classified to produce timely, accurate management reports, including inventory control and turnover, reports on low-profit operations, order execution, departmental performance, adherence to standards, and quality control histories. These reports are produced for factory management by the satellite computer.
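A minimal sketch of such an exception-reporting pass is given below; the record format and the 5 per cent tolerance are assumptions made for the Python illustration.

    # Illustrative sketch of exception reporting: compare actual pounds and cost
    # against standards for each in-process order and report only the deviations.

    def exceptions(orders, tolerance=0.05):
        """Yield (order number, field, actual, standard) for out-of-tolerance items."""
        for o in orders:
            for field in ("pounds", "cost"):
                actual, standard = o["actual_" + field], o["standard_" + field]
                if standard and abs(actual - standard) / standard > tolerance:
                    yield o["number"], field, actual, standard

    if __name__ == "__main__":
        in_process = [
            {"number": 9902, "actual_pounds": 15400, "standard_pounds": 14500,
             "actual_cost": 2210.0, "standard_cost": 2300.0},
        ]
        for line in exceptions(in_process):
            print(line)   # only the pounds figure is out of tolerance here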
Special "product progress reports" are prepared for production planning. They include order number, customer abbreviation, mill grade name, department number, machine number, sequence number, operation description, and standard data relating to the operation under consideration. The listings are established in a scheduling sequence and are accompanied by a set of more limited reports, whose characteristics depend on their subsequent use. These are separated and distributed as follows: • A copy is withheld in production planning which becomes a reference media for the determination of material movement and order status. • A copy is given to the foreman, so that he can know and direct the schedule operation within his area of responsibility. • A copy is given to the materials provider to help establish the sequence that he must observe in assuring that the material so scheduled becomes available for its processing in the manner that it is indicated to move from one area of scheduled operation to another. • A copy, plus a deck of interpreted cards in identical sequence, are given to the operating units. Upon completion of each scheduled operation, the mill or machine operator uses the subject input card as one media for immediate production recording. Special data gathering units distributed along the work centers are able to accept: • The tabulating card, which records "fixed" information. • Plastic or metal "slugs," which record "semifixed" information, such as operator and machine identity. • Variable information, manually posted, which cannot be known until the operation is actually performed. This includes produced pounds, scrap loss, and material conditions code. The operator inserts the various requirements of the message that he is about to transmit. He then presses the transmission button. This signals the remote station sequential scanner which is located at some interim point between the numerous remote stations and the data processing department. Its function is to establish direct connection with the central recorder for the receipt of one message at a time. It then sequentially seeks and establishes further connections from other remote locations as the need for transmitting service is indicated. The central recorder receives and records the address of the sending station. It assigns the time that the message was received. This information is automatically punched into paper tape. In turn, this tape will become immediate input to the satellite computer. The tabulating cards are referred
back to the production planning department, where they become a visible as well as machinable record of past performance. At the central computer, the order has already been updated. All that is now necessary is some limited information concerning the shipment. This would trigger the printing of an invoice, the updating of the central bookings tape, and the preparation of the necessary accounts receivable and sales analysis records. The computer can control the shipment data, establish shipment performance, and follow up open orders.
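The message assembled at a data gathering unit, and the stamping performed by the central recorder, might be sketched as follows; the field names, the station address format, and the use of an in-memory record in place of punched paper tape are all assumptions of the Python illustration.

    # Illustrative sketch: combine fixed (card), semifixed (slug), and variable
    # (manually posted) fields into one production-recording message, and let the
    # central recorder stamp it with the sending station and the time of receipt.

    from datetime import datetime

    def build_message(card, slugs, variable):
        """Merge the three classes of input captured at a data gathering unit."""
        message = {}
        message.update(card)       # "fixed" data from the tabulating card
        message.update(slugs)      # "semifixed" data: operator and machine identity
        message.update(variable)   # manually posted data: pounds, scrap, condition code
        return message

    def central_record(station_address, message, clock=datetime.now):
        """What the central recorder adds before punching the paper tape."""
        return dict(message, station=station_address, received=clock().isoformat())

    if __name__ == "__main__":
        msg = build_message(
            card={"order": 9902, "operation": "cold roll", "sequence": 30},
            slugs={"operator": "M-117", "machine": "CR-2"},
            variable={"pounds": 14250, "scrap": 310, "condition_code": "B"},
        )
        print(central_record("station-07", msg))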
Chapter XXIX

QUALITY ASSURANCE AS A REAL-TIME APPLICATION

Prior to the fifties, the pace of industry, the level of product complexity, and the importance of quality were all handled adequately by shop inspectors who were, usually, a part of the manufacturing organization. These inspectors were production men with a more or less good knowledge of the shop process and the functions of the hardware. They inspected what they considered important, took action "as required," and in general fulfilled a vital need in the organization. But technological evolution, with the mass production effects that followed it, put this effort in a new perspective. Product volume and complexity made "time-honored" artisan methods for quality assurance no longer valid. "Inspection" became a management problem and quality control organizations were brought into being. With the aid of advanced technology, the quality assurance function came to be characterized by the use of sampling techniques, the tailoring of the inspection job to measure the critical dimensions, the "bringing forward" of the quality subject to focus on engineering specifications, the classification of the importance of defects found in a production line, and, later, the establishment of the fundamental role of reliability.

With this, we experience the beginning of the approach to product assurance as an entity in itself. What is really new is the concept of continuity: that matters of product assurance constitute a process problem, like that of refining or of power production. This means that, even though quality evaluation trials have commonly been undertaken in the past, the present availability of electronic information systems gives them a new emphasis. The need for dependability makes the performance of "independent" and "unrelated" tests prohibitively inefficient. It is therefore essential to conduct trials that will provide information about product performance in such a basic form that it can, throughout the life of the product, be used to predict performance in new operational situations as they arise.
In studying matters concerning product assurance, a mathematical model of product use needs to be formulated and, subsequently, used to predict performance for conditions in which trials cannot be conducted. Theoretically, an improvement in evaluation procedures results if the trial conditions are statistically designed to reveal the effects of important parameters. Unless only very gross over-all effects are to be determined, a substantial sample of results is required for each condition, because of the statistical variability of performance. Practically, this is not always feasible, and this is another reason why industry has to establish a continuous data trial for quality follow-up.

The use of computers at the industrial production level has made this "continuous trial" idea possible. Computers provide the means to plan, operate, and control the advanced quality systems that mass production requires. This is valid provided the proper analysis has preceded the projected installation, and provided management realizes not only that product quality is important in itself, but also how it rationally relates to costs. A common industrial fallacy is that good quality is always costly, and that inferior design and materials, sloppy workmanship, inadequate testing, and careless servicing "save money." The risk is losing much more than one "gains," besides the fact that poor quality is the most expensive item one can put into a product. The analysis of short- and long-range quality trends does help bring this into perspective.

In Chapter XVI, we made reference to the foregoing concepts as applicable to the electronics industry, and more precisely to the design, manufacture, and operation of data systems. In the present chapter, we will consider how total quality assurance can be applied in the production process itself, with the computer used as an efficient means for data integration and treatment to product assurance ends.
QUALITY EFFECTS OF MASS PRODUCTION

By quality assurance of the mass products of industry, we understand their functional operation for a specific time period in a combination of conditions specified by standards and technical requirements. The effort should start at the plant laboratories which are performing functional tests, the findings of which are, more often than not, neither properly analyzed nor analytically evaluated. As a result, it remains practically unknown whether there is an improvement or a deterioration in the quality of the product, and whether, and to what degree, the combined production quality meets the standards and technical requirements.

Not only should a total approach be taken towards quality problems, but
also, at each level, quality test results must be analyzed objectively. This, too, is contrary to the current handling of quality matters, where the evaluation of test results bears a subjective nature and depends upon the experience and the "good will" acquired by the different inspectors. This is not meant to undervalue the established statistical quality control approaches, but often the volume of industrial testing does not guarantee the dependability that product quality evaluation should have. We finally come to realize that the rather haphazard inspection procedures, which have been used for many years with seeming success, are no longer economically acceptable or sufficiently effective for controlling:
• The quality of internal operations
• The adoption of subcontracting programs
• The enlarged scope of purchasing activities
• The advent of new materials
Current production processes have magnified a structural need which, somehow, managed to escape attention. The requirements of the mass market itself have focused our attention on the inadequacy and inefficiency of the present system of control and on the need to substitute for it a more formalized and analytic method; hence the interest in process control concepts to describe the operating practices and procedures that are being established in order to obtain built-in quality in manufactured items, to analyze the factors that cause variations, to control these variations, to increase processing effectiveness, and to decrease waste and error. Current advances in mathematics and technology allow us to redefine the need for establishing a continuous process to measure quality and to indicate specific causes of poor or substandard quality results. What we want is to establish ways for quickly detecting substandard material and to identify the structural reasons behind it. In turn, the implementation of such a practice requires the handling of large numbers of unit records during the process of accumulating and analyzing quality data. This is much more than the simple employment of certain mathematical or statistical techniques. Perhaps in no other sector of industrial effort can the need, the usage, and the benefits to be derived from integrated data processing be better exemplified than in quality assurance.

The fact that the use of applied mathematics alone does not guarantee product control can be demonstrated in many ways. In a study the writer did quite recently, in the high-precision instruments industry, he observed an abundance of quality charts where the QC limits were constantly crossed over by both the sample mean and the range. Justifying his case, the production manager argued that this mattered little, "since specification limits were too tight for the job, anyhow." Engineering answered by saying that specifications
had to be too tight, "since production would not have observed them, no matter what they were." This is not an isolated case, and it will continue to happen as long as data on quality are kept on a scalar basis at the shop level. The thesis hereby maintained is that, through a company-wide integration of quality information, the "errors" committed during tests in respect to uniformity and conformance can be effectively curtailed. Also, the subjectivity of answers as to the evaluation of these errors can be eliminated by introducing the concept of "standard quality," to indicate the conformity of the manufactured goods with standards and technical requirements. "Standard quality" should be measured by the process of selective plant tests, after pre-establishing the functional properties of each. The novelty here is the continuity and consistency this information will have.

Through the integration of "standard quality" data, the company can obtain a quantitative evaluation of how the production process goes, to its minutest detail. This requires the treatment of each type of test both separately and in a continuum, by all types of tests taken together. Management could predetermine the tendencies in production throughout the entire flow of goods. In turn, this will help measure the ability of the manufacturing organization to produce according to quality standards. An approach which only a few years ago might have been just a specialized application by larger firms in a narrow operational field might, through process-type data control, develop into a comprehensive system ranging significantly across the entire manufacturing process. This would effectively help enlarge the contribution of product assurance by bringing special emphasis on total quality. To be effective, this emphasis should be not just on quality for its own sake, but on quality in relation to production efficiency, cost performance, product reliability, and customer satisfaction.

The data integration for product assurance outlined so far is a natural evolution of quality control. In the sense of our discussion, while quality control deals chiefly with production phases, quality assurance starts earlier and goes further: from design, deep into customer use and product life. This last part requires a good deal of data feedback from the field; feedback which in an efficient manner can help maximize preventive action in product planning, minimize the need for corrective action in the manufacturing stages, optimize monitoring, and guarantee satisfactory experience in usage. Similarly, once data integration has been put into effect, management can efficiently examine cost/quality relationships, an approach of great economic significance and promise. This presupposes:
• Organization of quality history files concerning each phase of the overall product cycle.
• Mathematical-statistical definition of problem areas.
• Identification of specific trouble spots.
• Pre-establishment of corrective action reporting in terms of cost.
• Feedback and relationship of data from one phase to all other phases of the product cycle.
• Practical use of advanced mathematical techniques in effecting product quality.
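To make the control-chart situation described earlier in this chapter concrete, the Python sketch below computes conventional X-bar and R chart limits from subgroups of five measurements. The factors A2, D3, and D4 are the standard Shewhart constants for subgroups of five; the measurement figures themselves are invented for the example.

    # Illustrative sketch: X-bar and R control chart limits from subgroups of
    # five measurements, and a check of a new subgroup against those limits.

    A2, D3, D4 = 0.577, 0.0, 2.114     # standard factors for subgroups of size 5

    def chart_limits(subgroups):
        xbars = [sum(s) / len(s) for s in subgroups]
        ranges = [max(s) - min(s) for s in subgroups]
        xbarbar = sum(xbars) / len(xbars)
        rbar = sum(ranges) / len(ranges)
        return {
            "xbar": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),   # X-bar chart limits
            "range": (D3 * rbar, D4 * rbar),                      # R chart limits
        }

    def out_of_control(subgroups, limits):
        """Indices of subgroups whose mean is outside the X-bar limits or whose
        range exceeds the upper R limit."""
        flagged = []
        for i, s in enumerate(subgroups):
            mean, rng = sum(s) / len(s), max(s) - min(s)
            if not limits["xbar"][0] <= mean <= limits["xbar"][1] or rng > limits["range"][1]:
                flagged.append(i)
        return flagged

    if __name__ == "__main__":
        baseline = [[10.01, 10.03, 9.99, 10.02, 10.00],
                    [10.00, 10.02, 10.01, 9.98, 10.03],
                    [10.02, 9.99, 10.01, 10.00, 10.02]]
        limits = chart_limits(baseline)
        shifted = [10.12, 10.10, 10.15, 10.11, 10.13]     # a shifted subgroup
        print(out_of_control([shifted], limits))          # -> [0]: its mean is outside the limits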
Furthermore, the successful implementation of a computer-oriented quality evaluation will greatly depend on sophisticated programming. This programming effort has to reflect the usage of fundamental mathematical tools, and, with this, a computer-based system handling advanced quality information could be developed. This system can be used to monitor critical areas, in fabrication or assembly, collecting and comparing data in terms of cost and quality. In-plant feedback would assure that manufacturing and test data would be fed back to engineering for improvement of the immediate future articles, an operation to be performed by means of in-process analysis, in real time.

Though this is a perfectly true case for all industry, metals in particular, being a base industry, feel the pinch. Admiral Rickover, speaking to the 44th Annual National Metal Congress in New York, made the following point: "... in the development and production of nuclear propulsion system, I am shocked and dismayed to find that quality and reliability of the conventional items in the systems are inferior to the nuclear reactors themselves." The awareness about product assurance on the part of the nuclear reactor industry is in itself understandable when we consider the safety factors involved. It is also understandable that manufacturers of conventional components, such as valves, heat exchangers, or electrical gear, feel differently, because of inherent bias in that respect. They have been making these items for years and consider their processes to be "well under control," whatever this may mean. In a sense, this becomes a problem of leadership, and when the leader fails, the organization under him fails too.

Within the framework of the foregoing case, two examples can be taken.

Engineering design. In this case, statistical analysis helps determine reliability requirements. Necessary changes, and the chances of meeting these requirements, can then be predicted by the system. If predictions indicate that standards are set too high or too low, engineering tolerances would need to be reappraised, special items developed, or, inversely, standard items substituted for "special" ones, through the appropriate value analysis. With this, product balancing can be attained, improving quality and minimizing over-all costs.

Materials and Supplies. Here a total quality system may automatically
analyze data on purchased components and establish the type of action that should follow. The systems manager, then, would only need to re-evaluate specifications for an acceptable range of quality; the integrated quality information will show to what extent received materials come within the new specifications. This approach can also be most useful to suppliers, furnishing them with conformance-analysis reports. Such reports should detail where items fail to meet specifications, helping their recipient improve his techniques and quality and guarantee performance to the user.

Figure 1 presents the results from a study in the aeronautical industry. It concerns three endurance parameters:
• Survival curves
• Mean life
• Failure level

Survival curves and the failure level have been calculated through both an experimental and a theoretical approach. The point here is that, should a continuous quality recording process exist, it would be possible to simulate and "feed forward" product assurance information. This, in turn, will help tailor a program that ensures the technical requirements of the aircraft. Of what this program should consist, and what part it should play in the basic industry line (for metal suppliers, for instance), is a management determination based on the relationship to other crucial design factors. That this quality-oriented data network should not be allowed to grow and develop to a size and shape that is beyond its financial boundaries is as evident as the fact that the lack of proper weight would be detrimental to final product quality.
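Assuming that failure times have been recorded for a sample of articles, the empirical survival curve, mean life, and failure level of Fig. 1 can be approximated as sketched below; the failure times shown are invented, and a fuller treatment would also have to handle articles still surviving when the trial ends.

    # Illustrative sketch: empirical survival curve, mean life, and failure level
    # from a set of recorded failure times (in hours).

    FAILURE_TIMES = [120, 180, 210, 250, 260, 300, 340, 410, 440, 480]

    def survival(t, failure_times=FAILURE_TIMES):
        """Fraction of the sample still operating at time t."""
        return sum(1 for ft in failure_times if ft > t) / len(failure_times)

    def mean_life(failure_times=FAILURE_TIMES):
        return sum(failure_times) / len(failure_times)

    def failure_level(t1, t2, failure_times=FAILURE_TIMES):
        """Failures per unit time between t1 and t2, per article surviving at t1."""
        survivors = sum(1 for ft in failure_times if ft > t1)
        failed = sum(1 for ft in failure_times if t1 < ft <= t2)
        return failed / (survivors * (t2 - t1)) if survivors else 0.0

    if __name__ == "__main__":
        print([round(survival(t), 2) for t in (100, 200, 300, 400, 500)])
        print(mean_life())                       # 299.0 hours
        print(round(failure_level(200, 300), 4)) # 0.005 failures per hour per survivor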
FIG. 1. Endurance parameters: experimental and theoretical survival curves and failure level, plotted against time.
What we just said brings forward the subject of providing the most meaningful definitions of quality, as an applied concept and as a reporting practice. The specific objectives to be attained in this connection should include:
• Defining standard parameters of product assurance that would serve as a medium of effective communication on quality problems.
• Defining measures compatible with the mathematical theory of product assurance, and providing practical parameters that could be measured in the field.
• Providing measures of machine performance divorced as much as possible from the operator's performance.
• Conforming as closely as possible to the thus established industrial standards in quality and performance reporting, throughout the "use" domain of the equipment.
• Avoiding the application of terms that cause conflict and confusion.
USING PRODUCT ASSURANCE INDICATORS

Our discussion in this chapter has specifically implied that process control computers can be of major help in establishing product assurance history and implementing "feed forward" concepts. But, for this to be true, quality has to be built into the product within the whole developmental cycle: from design to prototype models, tests, manufacturing, and the performance of final acceptability evaluations, in a way consistent with the principles we have outlined. It is an organizational mistake when the functional services responsible for determining quality standards do not expand to include the development phases, manufacture, and field usage. The concept of "reliability" must become a corollary to development and "data feedback" a corollary to customer application and use, just as "quality control" is a corollary to production.

There exist, in fact, several aspects of the data integration for product dependability which are of practical importance. One is the direct result of a dependability-conscious organization, where there is constant pressure, from top management on down, for reports of the very latest performance figures. In attempting to satisfy this demand for known information, sampling problems are encountered. As a fundamental concept this extends well beyond the designer's board and the manufacturing floor, as will be demonstrated in the following paragraphs.

When making field measurements of article performance, it is desirable to obtain precise estimates of the dependability parameters. This has as a prerequisite the pre-establishment of the criteria for choice and of the value
ranges these parameters can have. It is also necessary to avoid frequent changes in the nature of the data collection system and in the criteria of choice. It may take months or even years to accumulate the quantity of data necessary to provide a high degree of statistical precision in the calculations. This brings forward the double aspect evaluation procedures should acquire:
(1) Scientific evaluation, or the determination and explanation of the reasons for the performance, and the discovery of any aspects in which improvements can be made.
(2) User evaluation, that is, the establishment of how appropriate the whole system is, provided that the task of achieving the serviceability objectives set by the producer has not been altered in a significant manner.

These two types of evaluation are not mutually exclusive. Economy in time and money demands that they be interwoven. As far as user evaluation is concerned, the precision with which any given "trial" can be recorded is limited by the accuracy of the field measuring techniques that must be used. A given article represents only one sample of a large population, all articles having manufacturing and setting-up tolerances within normal engineering limits and, for these reasons, having a "standard" performance within tolerance. The field feedback we suggest must reveal the true performance of the article under operational conditions. Field information must provide sufficient basic data about the performance and the factors that affect it, to allow predictions and projections to be made with confidence for likely operational conditions. The collected data must reveal those deficiencies or limitations of the product that can be removed or alleviated by evolutionary development within its lifetime.

Pertinent to this point is the need for the determination of "failure indicators," that is, information that can be interpreted as "evidence" and give rise to quality precalculations. We can better define the foregoing by stating that whenever a man-made system is not performing its assigned function "satisfactorily," this provides an "indicator." The data can be emitted by the system itself, or by a subsystem associated with it. The interpretation of failure, which in the past was open to argument, is now mathematically defined, so what interests us most is a method of operation. The idea in itself is not new, since failure indicators have been used as an aid in designing and in maintaining man-made equipment, though rarely has one been built into the system. This underlines another point: the double need for incorporating a failure indicator into supposedly reliable equipment and for providing it with signal emission, and possibly transcription, media. The need for such continuous indication implies that every part of the system is likely to fail. This implication is essentially an admission of our
inability to make all parts fail-proof. But the essential point here is that, since equipment fails, we need to build in the means for experimental evaluation and projection. Figure 2 illustrates this point. Failure rates can be reasonably well forecast, provided a continuous collection of quality information is made. Here the experimental curve is compared to three alternative theoretical curves. For the first 100 hours of operation, actual failure rate data coincide with those of theoretical curve 1. For the next 100 hours, actual failure rates average around the values of theoretical curve 2; then a scatter in failure rates starts, which in itself might well be a predictable characteristic. With this, for instance, the failure rate point at the 250-hours-of-operation level could be taken as an indicator for "feed forward" action.
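A minimal sketch of such a "feed forward" check follows. The theoretical curves are represented as simple tables of failure rates, and both the figures and the 15 per cent divergence band are assumptions made for the Python illustration.

    # Illustrative sketch: compare observed failure rates against theoretical
    # curves and flag the first time at which the observations fall outside a
    # chosen band around every curve, as an indicator for feed-forward action.

    THEORETICAL = {
        "curve 1": {50: 40, 100: 55, 150: 75, 200: 100, 250: 130},
        "curve 2": {50: 60, 100: 80, 150: 105, 200: 135, 250: 170},
    }

    def first_divergence(observed, band=0.15):
        """Return the earliest time at which the observed rate lies outside the
        band around every theoretical curve, or None if it never does."""
        for t, rate in sorted(observed.items()):
            within_any = any(
                abs(rate - curve[t]) <= band * curve[t]
                for curve in THEORETICAL.values() if t in curve)
            if not within_any:
                return t
        return None

    if __name__ == "__main__":
        observed = {50: 42, 100: 57, 150: 80, 200: 128, 250: 230}
        print(first_divergence(observed))      # -> 250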
FIG. 2. Estimates of failure rates.
Two basic types of failure indicators could be considered. One of them frequently occurs without any particular effort from the designer. It is in series with vital functions of a device and is itself vital to satisfactory performance. Rapid determination of the exact cause of failure for most series-type indicators would require special gear. Alternatively, failure information could be collected locally and transmitted to a computer which, from that point on, would be responsible for interpretation and the call for action. The other type of failure indicator is a hardware "identifier" incorporated in the design with the explicit mission of indicating "failure" when it occurs. This identifier is connected in parallel with a subsystem or component which performs a vital operational function. Hence, its data transmission will automatically identify the part of the process that is in trouble. The product performance program can, hence, follow each vital unit that fails and isolate the trouble in that unit. The problem of determining the optimum size of parallel quality assurance connections in itself involves technico-economic evaluations. Furthermore, if the combination of parts which performs one or more vital functions and a failure indicator which
monitors them is considered as a system, the possibility that the failure indicator itself may fail should also be considered. The total aspect, then, constitutes a subject of optimal programming for redundant systems. With this, then, we can say that data selectively collected becomes a vital factor in the product quality organization. If properly handled, it can be used to develop methods for predicting system performance, realizing error analyses, measuring quality, developing sampling plans, providing process controls, evaluating progress and programs, and ascertaining reliability. The inferences and subsequent corrective action can in turn be used to improve the product. The data should be selected from a variety of sources, including inspection and test reports from vendors, engineering, factory, test bases, and the field. The following is a summary classification within the conceptual framework which is presented in Fig. 3.
FIGURE 3. Manufacturing, information feedback, and quality control.
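Since the failure indicator wired in parallel may itself fail, the combined behavior can be examined numerically. The sketch below is a minimal illustration, assuming independent exponential failure laws and invented failure rates; it is not a model drawn from the text.

```python
# A minimal reliability sketch, assuming exponential failure laws for both
# the monitored unit and its parallel failure indicator. The failure rates
# and mission time are illustrative assumptions.
import math

def reliability(rate_per_hour, hours):
    """Probability of surviving the mission time under an exponential law."""
    return math.exp(-rate_per_hour * hours)

def undetected_failure_probability(unit_rate, indicator_rate, hours):
    """Probability that the unit fails during the mission while the
    indicator has also failed and therefore cannot report it."""
    p_unit_fails = 1.0 - reliability(unit_rate, hours)
    p_indicator_fails = 1.0 - reliability(indicator_rate, hours)
    # the two failures are treated as independent events
    return p_unit_fails * p_indicator_fails

if __name__ == "__main__":
    mission = 500.0  # hours
    print("unit unreliability:      %.4f" % (1 - reliability(1e-3, mission)))
    print("undetected failure risk: %.4f" %
          undetected_failure_probability(1e-3, 2e-4, mission))
```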
Development and Design
Throughout the phases of conceptual evaluation and preliminary design, reliability should serve as the integrating agency which assures coordination and compatibility between the various section programs. Much research activity will be involved at this point, and it is imperative to assure that at least the specified environmental and life limits will be observed. To ensure proper coordination, configuration histories should be maintained on each subsystem and component unit. This should include not only items produced during the development program in question, but also component units now in use with other ensembles. Such a history can be compiled from design,
manufacturing, and inspection data, and may be used for analysis purposes. In the foreground of the subject effort is the fact that no system is totally new. Its materials, its components, or its subsystems will have been used somewhere, somehow, in another system. This case was, for instance, recently faced by the writer when he was asked to evaluate the reliability of a receiver-emitter. The system was composed of six major units. Four of them had been in the field as subsystems of other ensembles for over three years, but no performance data were available. One unit was a prototype model, in use with military equipment. Here again, nothing was available about its quality behavior. Had there been data about these five subsystems, it would have been possible to proceed with the study, analyzing the sixth unit, the only one that was completely new, down to its most basic elements. In this sense, it is advantageous that, as a matter of policy, a design disclosure review should be conducted to insure that the designer's intent has been clearly put into effect, and that the design prerequisites have been completely communicated to the people who make, test, and inspect the hardware. In addition, this evaluation should provide for the necessary design, manufacturing, procurement, and inspection corrective action. Design optimization should also consider parts application characteristics. If it is assumed that many of these parts originate outside the company, product assurance specialists should review the projected applications, and, based on careful study and evaluation of their documentation and test results, determine whether or not the part will satisfactorily meet the requirements of the design. In turn, these data should be used to establish the numerical reliability goals for the complete system and for each of its subsystems. During design evolution, as data on equipment reliability become available, a continuous reassessment of the initial allocation within each subsystem must take place. Trade-off analyses must be conducted, considering a balance between reliability and performance in which, say, weight, operability, safety, cost, and schedule need to be taken into account. Reapportionment of requirements may then result, to assure an adequate and reliable design. Some of these reviews, particularly those of an interim nature conducted as the design develops, might conceivably be performed by means of electronic data processing. What we foresee as an automation of product assurance is, at least for the time being, the initial review, which will consider general factors such as adherence to specifications, reliability, safety, adequacy to the environmental specifications, and general capability. The computer can evaluate such details as fit, tolerances, assembly notes, and test instructions.
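The numerical reliability goals and their reapportionment mentioned above can be illustrated with a simple calculation. The sketch below assumes an equal apportionment scheme over a series system; the goal value and the subsystem names are hypothetical, and other apportionment schemes are of course possible.

```python
# Hedged sketch of numerical reliability apportionment for a series system.
# Equal apportionment is only one possible scheme; goal and names are assumed.

def equal_apportionment(system_goal, n_subsystems):
    """Each of n series subsystems receives the n-th root of the system goal."""
    return system_goal ** (1.0 / n_subsystems)

def series_reliability(subsystem_reliabilities):
    product = 1.0
    for r in subsystem_reliabilities:
        product *= r
    return product

if __name__ == "__main__":
    goal = 0.95          # required system reliability (assumed)
    subsystems = ["receiver", "emitter", "power supply", "antenna"]
    allocated = equal_apportionment(goal, len(subsystems))
    print("allocated goal per subsystem: %.4f" % allocated)

    # as field data arrive, reassess: observed values replace the allocations
    observed = {"receiver": 0.992, "emitter": 0.981,
                "power supply": 0.989, "antenna": allocated}
    print("reassessed system estimate:  %.4f" %
          series_reliability(observed.values()))
```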
A final design review will then be necessary to consider these evaluations and to insure that all requirements of the formal design checklist have been met.
Manufacturing Quality Inspection
The automation of this phase requires that the scope of acceptance inspection, necessary to insure that products conform to dimensional and process requirements, has been adequately defined. All comparisons, which are to be carried out using the data collected by standard measuring instruments, can be easily automated. This may be easier to visualize for a process industry, for instance, but there is no reason why other processes cannot offer fertile ground as well, provided that the proper analysis is made. The operation is, in fact, no different from the requirements for on-lineness, as can be seen in Fig. 4, which presents a block diagram for a soaking-pit-slabbing mill operation.
FIGURE 4. Block diagram for a soaking-pit-slabbing mill operation: slabbing mill, quality acceptance, quality test, product quality indicators, computer, alarm and control action indicators, quality management, and slabs.
Here we must admit that what is lacking most is experience in the field and initiative. The important thing to realize is that, once a production test plan has been prepared, it can be computer processed. The machine can be efficiently used to define the acceptance testing that is necessary to demonstrate continuing conformance to company and customer requirements. For a complex manufacturing industry, this is accomplished by determining, in conjunction with design and test specialists, the test requirements necessary for production hardware. In other cases, a simpler setting of quality rules may suffice. In this way, computer-implemented acceptance tests will need to be designed to determine the acceptability of a product by establishing whether or not the product complies with functional requirements. Those products having demonstrated throughout the process a high degree of conformance to specification would be inspected on the basis of statistical sampling techniques. To assure that the process quality data are accurate and precise, rigidly controlled calibration programs would also need to be implemented. Inspection and test are worthwhile only if founded on a sound data collection system.
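A production test plan of the kind just described often reduces, for sampled lots, to a single-sampling acceptance rule. The fragment below is a hedged sketch of such a rule and of its operating-characteristic computation; the sample size and acceptance number are assumptions, not values from the text.

```python
# Sketch of a single-sampling acceptance rule of the kind a computer-processed
# production test plan might embody. Plan parameters are illustrative.
from math import comb

def accept(sample_defectives, acceptance_number=2):
    """Accept the lot if the defectives found do not exceed the acceptance number."""
    return sample_defectives <= acceptance_number

def probability_of_acceptance(lot_fraction_defective, sample_size=50,
                              acceptance_number=2):
    """Binomial operating-characteristic value for the plan."""
    p = lot_fraction_defective
    return sum(comb(sample_size, d) * p**d * (1 - p)**(sample_size - d)
               for d in range(acceptance_number + 1))

if __name__ == "__main__":
    print("accept lot with 1 defective in the sample:", accept(1))
    for p in (0.01, 0.03, 0.06, 0.10):
        print("P(accept) at %d%% defective: %.3f"
              % (round(p * 100), probability_of_acceptance(p)))
```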
Field Use
To assure that the inherent product quality and reliability will be in constant evolution, field follow-up is absolutely necessary. This in turn means effective media for information feedback. Here, again, the computer can be used in a rational manner to perform "forward-looking" evaluations and diagnostics on failed hardware. Only thus can the actual primary cause of failure be determined, which in itself is an essential part of the corrective action feedback loop. When actual failure causes, as distinguished from apparent failure causes, are known, corrective action can be taken to prevent recurrence of the defect. For information feedback to be effective, continuous pressure must be maintained to assure full coverage on failures, malfunctions, and replacements. This type of data collection is a basic necessity in the performance of failure analysis, as the failed components are often available for testing. With adequate failure data, the data processing system will be able to analyze the failure and to inform on the necessary corrective action. Statistical treatment of data on "early" or minor troubles can often reveal failure trends that are not otherwise apparent. Potentially serious quality problems can then be investigated and corrected before these problems become catastrophic. With this, any industrial field can effectively establish a closed-loop system for product assurance, for the prevention of failure recurrence, and for timely spotting of actual incipient and potential troubles.
CASE STUDY IN A TIN PLATE PLANT
We will consider a case of the organizational aspects of quality assurance taken from the tin plate industry. Can companies are increasingly shifting to coil-form tin-plate orders. This switch induces tin-plate producers to install digital systems as quality analyzers for recording and examining the dimensional elements of the finished product and keeping a complete quality history. Digital automation starts from the entry section for loading and preparing the strip, goes through the processing section for doing the line's actual job, and finishes with the delivery section for finished product inspection and coil removal. Since the line is continuous, each coil entered into it is welded to the tail of the preceding coil so that a continuous band of strip is in process from the entry uncoiler to the delivery and winding reels. In a "typical" tin plate plant, at the ingoing end of the line, there is a provision for welding the start of one coil of steel strip to the tail end of the preceding one. The looping tower acts as a reservoir to supply the electrolytic tinning unit while the weld is made. As the strip emerges from the electrolytic tinning unit, it passes a number of automatic inspection devices, which detect pinholes and welds, and measure coating thickness and total thickness. There is also a length-measuring instrument, arranged to emit a signal as each "unit length" of tin plate passes. With respect to the quality history, the majority of the defects are of a type that cannot yet be automatically detected: scratches, oil spots, arcing marks, dirty steel, laminations, unflowed tin, anode streaks, dragout stains, wood grain, and wavy edges can only be identified by visual inspection. At the outgoing end there are at least two down-coilers, so that as the shear is operated a new coil can be started immediately. In the logging operation, the position of all defects must obviously be measured from the sheared end of the coil. Ideally, all detectors, automatic and human, should be situated at the shear blade; because this is not physically possible, a correction factor must be applied to each measurement in order to relate it to the common fiducial position of the shear. This calls for some simple computing facility. In an application along this line, the input system is designed to deal with three groups of variable data:
• Manual shift and coil information
• Automatic plant inputs
• Manual actuations and settings
The manual shift and coil information is channeled through an input console on which may be entered the date, the shift, ingoing and outgoing coil numbers, weights, width, gauge, and gauge tolerances, as well as the specified tin coating thicknesses for each side of the strip. There is also provision for setting a minimum acceptable figure for the proportion of prime
material contained in any one coil. The automatic plant inputs include the pinhole and weld detectors, thickness gauges, and a counter to count the footage pulses, as well as a contact switch to signal the operation of the shear. Further, the specific application we are considering provides for manual actuations and settings, made up of pushbutton switches operated by the human inspectors who examine the product for "visual" defects. A digital clock included in the system allows operations to be related to real time. With respect to the throughput, each order must be carefully followed through the processing lines to be sure that the prescribed treatment is given to the coils within that order. The identity of each coil must also be carefully preserved for accounting and inventory reasons. In practice, this order tracking is reduced to tracking and identifying the welds joining coils. A computer control system can and must perform this operation in order to synchronize coil identity and process instructions with the actual material in process. The necessary input/throughput system includes an information machine, which stores coil data, and pickup elements along the line, that is, position-measuring transducers. At the instant a weld is made, the computer reads the loop transducers and adds this strip footage value to the known fixed strip distance between the welder and the shear. At the same time the coil data are read. With this, digital control has the identity and processing instruction for the coil following the weld, and the footage from the weld to the delivery shear. To complete the aforementioned pickup network, a footage pulse tachometer may need to be located at the delivery section. It transmits to the computer one pulse for each foot of strip that passes the delivery shear. The subject pulses are subtracted from the measured welder-to-shear length, so that the computer knows at all times the position of the weld with respect to the shear. With respect to systems and concepts, this closely parallels the on-lineness of the steel industry operations which we have reviewed in Chapter XXVI. But other definitions are still necessary. Thus far we have given enough information to describe the basic philosophy of a very simple ensemble. The computer, knowing and tracking the position of each weld and also scanning line-operating speed, can warn the operators of the approach of the weld on a timely basis.* A warning light will be energized at the delivery desk, telling the operator that a weld is approaching. At a calculated time, depending upon the deceleration rate of the delivery section, the slowdown light will be turned "on," telling the operator to initiate slowdown, so that the weld is just before the shear when transfer
speed is reached. The final cut light will be turned "on" when the weld is at the shear. The digital computer can track through its own memory system the order data pertaining to each charged coil. A finished coil ticket can then be punched or printed at the instant each finished coil is sheared. Therefore, the identity and inventory data of each coil can be retained. With respect to quality, one of the most important functions of digital control is, of course, that of alarm detection. Alarm detection is achieved by comparing the value of each point with preset digital numbers corresponding to the desired minimum and maximum values of the process variable. The limits are set up and stored in computer memory, providing the necessary actuation depending on the nature and criticality of an alarm point. Depending on the type of control that will be desired, a variety of quality control elements can be instituted along the line to provide the computer with sound, accurate data, for inferences, quality projections, and estimates (Fig. 5).
*This reference is to an on-line, open-loop operation.
FIGURE 5. Quality scanning along the processing line: inspection units, evaluation of weld position and weight limits, quality scanning and rapid evaluation, computer, alarms, identification points, process logs, quality histories, and inference data.
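The weld-tracking arithmetic described above, the fixed welder-to-shear distance plus the loop storage, decremented by one for each footage pulse at the delivery shear, can be expressed compactly. The sketch below is illustrative only; the distances, speed, deceleration figure, and warning thresholds are invented.

```python
# A schematic sketch of the weld-tracking arithmetic; all figures are assumed.

class WeldTracker:
    """Tracks the footage between a weld and the delivery shear."""

    def __init__(self, welder_to_shear_ft, loop_storage_ft):
        # known fixed distance plus strip stored in the looping tower
        self.remaining_ft = welder_to_shear_ft + loop_storage_ft

    def footage_pulse(self):
        # one pulse is transmitted for each foot passing the delivery shear
        self.remaining_ft -= 1

    def signals(self, line_speed_ft_s, decel_ft_s2):
        """Operator guidance: warn, then slow down, then cut at the shear."""
        stopping_distance = line_speed_ft_s ** 2 / (2.0 * decel_ft_s2)
        return {
            "warning":  self.remaining_ft <= 3 * stopping_distance,
            "slowdown": self.remaining_ft <= stopping_distance,
            "cut":      self.remaining_ft <= 0,
        }

if __name__ == "__main__":
    tracker = WeldTracker(welder_to_shear_ft=600, loop_storage_ft=150)
    for _ in range(700):
        tracker.footage_pulse()
    print(tracker.signals(line_speed_ft_s=20.0, decel_ft_s2=1.5))
```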
For reasons of possible failure, alarm scanning is carried out. The transducer selection matrix is set to the desired input point, and a millisecond period is allowed during which the filter and signal amplifier switching transients decay. A signal proportional to the selected input is presented to the analog input of a comparator unit and the digital value of the limit is
set into the proper register. The comparator output indicates whether the input is within the limit. This "yes/no" signal is sent to the program unit which examines the state of the alarm memory for the point concerned; if the alarm state has changed since the previous scan, either an alarm will be initiated or a return-to-normal print-out is arranged as appropriate. The digital control program determines the complete scanning and logging sequences of the whole system. It controls the other units in a fixed logical sequence, ensuring that each point is tested for alarms, logged, or visually displayed as required. The system includes a clock unit consisting of a logical counter that receives, and counts, the half-minute pulses from the station clock. It presents time in units and tens of minutes, and units and tens of hours, for printing as required. It also provides the program unit with one-hour pulses for triggering the automatic logging cycles. With this setup, the digital control system can perform many of the operations common to the operation of the line itself. The operation of the central processing unit is, of course, the most important component of any processing line, even though the collection and analysis of process data by means of conventional strip chart recorders and analog indicating instruments remain laborious processes. What interests the tin line's quality of operations most is cause-and-effect analysis, and this is most difficult without special effort involving a number of programming considerations. As with all process control applications, the conversion of operating data to numbers for scientific analysis requires a special programming effort. The computer can read any number of analog signals from the vital portions of the process. It can correlate these with the order and material being processed at the time the readings are taken. It can then reduce the data and make any programmed analysis required. The data collected in this manner are immediately available in a control-oriented form. Here again, what we said about alarm action comes under proper perspective. Through real-time operations, the computer continually monitors line speed and alters the process instructions as required. As the head end of each order enters the processing section, the computer transmits these operating instructions to the process controllers. Tracking the order and the welds through the line, the computer checks the results of the processing action and calculates new directions to correct errors or omissions made in the course of the initial analysis, or deviations due to operating conditions. Sideguides and strip centering devices could be provided to keep the strip running straight through the process. These devices may be damaged when wider strip is joined to the tail end of preceding narrow strip and the sideguides are not reset in time. The computer, tracking the welds through the process and scanning the order data, can determine when these transmission welds are approaching each of the sideguides through the line.
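The alarm-scanning cycle just described can be summarized in a few lines of logic: compare each point against its limits and act only when the alarm state has changed between scans. The following sketch is a schematic rendering with hypothetical point names and limits, not the actual control program.

```python
# Minimal sketch of the alarm-scanning logic: each point is compared with its
# stored limits, and a message is issued only when the alarm state has changed
# since the previous scan. Point names and limits are hypothetical.

limits = {"coating_thickness": (0.25, 0.60), "strip_gauge": (9.8, 10.2)}
alarm_memory = {name: False for name in limits}   # False = normal

def scan(readings):
    for name, value in readings.items():
        low, high = limits[name]
        in_alarm = not (low <= value <= high)
        if in_alarm != alarm_memory[name]:
            alarm_memory[name] = in_alarm
            if in_alarm:
                print(f"ALARM   {name}: {value} outside ({low}, {high})")
            else:
                print(f"NORMAL  {name}: {value} back within limits")

if __name__ == "__main__":
    scan({"coating_thickness": 0.70, "strip_gauge": 10.0})  # alarm raised
    scan({"coating_thickness": 0.70, "strip_gauge": 10.0})  # no repeat print
    scan({"coating_thickness": 0.45, "strip_gauge": 10.0})  # return to normal
```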
On the output side, the most important document compiled in the logging operation is the "profile sheet," in which all the defects in the coil are identified by type and position. Such a log can be prepared for the manufacturer for quality control, while shortened or extrapolated versions could be forwarded to the customer with the coils, the coils and profile sheets being matched to each other. Again, for quality assurance purposes, when the production system gets out of control, a line speed guidance device will slow down the line before the shear is operated and when a new ingoing coil is welded to the end of the preceding one. In these circumstances new data must be set on some of the input switches; the line speed control output prevents the speed from being increased again until these new data have been supplied. In operation, the footage pulses are used to interrupt the printing program and so initiate a program that calls in all the plant inputs. The profile sheet is compiled more or less in synchronism with the running of the line, printing following detection of the defects with a delay which depends upon the frequency with which they occur. When a shear is operated to complete one outgoing coil and start a new one, the two log sheets are updated.
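The compilation of the profile sheet, with every defect position referred to the shear fiducial, amounts to a simple correction and sort. The sketch below illustrates the idea with invented detector offsets and defect types; an operating installation would work from the actual detector geometry of the line.

```python
# Illustrative sketch of profile-sheet logging: each defect observation is
# corrected by the fixed distance between its detector and the shear, so that
# all positions refer to the sheared end of the coil. Offsets and defect
# types are assumptions.

detector_offset_ft = {"pinhole_detector": 42.0, "thickness_gauge": 55.0,
                      "inspector_station": 18.0}

profile_sheet = []   # (corrected position, defect type)

def log_defect(detector, defect_type, footage_at_detection):
    """Record a defect, referring its position to the shear fiducial."""
    corrected = footage_at_detection - detector_offset_ft[detector]
    profile_sheet.append((round(corrected, 1), defect_type))

def print_profile():
    for position, defect in sorted(profile_sheet):
        print(f"{position:8.1f} ft  {defect}")

if __name__ == "__main__":
    log_defect("pinhole_detector", "pinhole", 1250.0)
    log_defect("inspector_station", "oil spot", 1310.0)
    print_profile()
```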
PART VIII
Chapter XXX AIRLINE RESERVATIONS SYSTEMS
The growth and progress of many services vital to air transportation must keep pace with the accelerating growth of air transportation if full aircraft potential is to be realized. One of these services is that of airline reservations. The present discussion focuses on a computer reservations system and has been based on concepts and approaches developed for major air carriers by leading computer manufacturers. The first large-scale study made on the subject of airline reservations systems evaluated total processing requirements and proposed a setup assuming a single computing center to handle the incoming transaction load. The resulting ensemble required two computers of substantial capacity coupled in parallel, a duplication necessary for dependability reasons. The researchers' primary aim was to bring forward an improvement over the then employed reservations systems. These setups worked only on availability and in keeping inventories. They did not act as central communications networks with message-switching capabilities. American Airlines says the following:
SABRE was set up by a special project team. The group consisted of operational (Reservations) personnel, computer hardware and software, and installation and training experts. This team worked with the manufacturers ... to develop functional requirements, program specifications, etc. They designed, built, and implemented the system. They also worked closely with the Communications Department ... in the layout of the country-wide communications network. The initial work started on this project approximately five years prior to the cut-over in this city. Conversion was done on a city by city basis over a 2 1/2-year period. Listed below is an approximate conversion schedule:
a. 18 months prior to the cut-over, start hiring temporary help for reservations control. Brief local reservations management on the impact of SABRE.
b. 1 year from cut-over, analyze installation and communications layouts.
c. 6 months prior-Train reservations instructors and start installation.
d. 3 months prior-Complete equipment installation and checkout.
e. 1 month prior-Complete reservations agents' training.
During the period of actual cut-over, the local reservations management staff was supplemented by a team of experts from the SABRE project who assisted in making the conversion a systematic and orderly effort.
Another leading airline, TWA, commented on the same matter as follows: ... the implementation of an airline real-time system is affected by such items as the requirement for multi-programming, the equipment used must be in duplex configuration, inquiry terminals are widely dispersed and number in the hundreds, system availability must be sustained, there are specified response-time requirements, and messages are variable in length and they must be assembled and edited. It was necessary to go into a great deal of detail before conversion to the TWA real-time system.
EVALUATING SYSTEMS REQUIREMENTS
A number of alternatives were investigated in the course of this initial study. One of these alternatives called for eight computing centers, using a total of twenty machines. But a systems evaluation showed that a configuration involving too many computers results in higher costs when compared to one center with two computers. With the multimachine system, the required total computing capacity is excessive because of duplicative functions and the use of less efficient data processors. Also, the file capacity and accesses are increased because of file duplication and module misfits. Advantages particular to the multiple-center configuration relate to the dependability of operations of this system. For example, losses due to external catastrophes in a ten-center automatic reservations ensemble are about half those of a single-center system. This was particularly valid as an argument with matters concerning military installations.*
*See also the discussion on SAGE, in Chapter XXXII.
But with a civilian data processing ensemble, "vulnerability" is not as crucial as for a military setup, since there is no intentional damage or destruction. Furthermore, while losses due to computer breakdown may be reduced by decentralization, studies have shown that, excluding malicious damage, the same number of computers at one location would still further reduce breakdown losses with less over-all cost. The same studies established that, as far as a complex computer network was concerned, the program storage required presented an immense problem. Each computer must have access to enough instructions to handle the most common on-line transactions immediately. In order to avoid excessive delay and piling up of the less frequent transactions, the computer must also have access to a much larger storage of instructions
in a time comparable to the processing time of one transaction. A multicomputer system would necessitate large-scale duplication of programs and of working storage, as well as memory space for intercommunication among the different centers. Considerations such as the foregoing, documented by a number of analytic studies, led to the conclusion that a centralized system using two computers to share the peak load would be the most economical. Dependability can be achieved at least cost by providing duplication of the centralized point. It was also established that, if vulnerability is felt to warrant the additional cost, a maximum of two centers dividing the peak load (but with duplicate files) should be used. This outcome called for two large-scale data processors in one central location. The basic assumption was made that a centralized system would require the execution of some 50,000 single-address instructions per second to keep up with the average traffic during the peak hour of the peak day of the peak month, some four years after the system's installation. This assumed assembly of complete messages in the primary multiplexer rather than in the computer. In contrast to the single-center concept, the researchers estimated that a 50% increase in computing capacity would be required because of the extra computing load caused by decentralization, and due to the fact that the computer module size would not precisely fit the requirements at each location. Most of the increased load is due to the need for complete availability posting at each location and to the other communications between centers. In addition, a 20% increase was forecast in order to keep the average waiting time the same as in a centralized system, since the faster centralized computer can operate closer to its maximum capacity for a given average delay time. With these considerations in mind, the resulting airline reservations system embraced four different sets of components:
• The central computers: two large-scale machines, coupled in parallel.
• The data transmission network: telephone lines or radio.
• The multiplexing equipment, necessary due to the multicommunication aspects of the system.
• The terminal sets, which were all man-operated.
The central computers were located at a master control point. These machines keep such information as the number of unsold seats available on each leg of each flight for a specified period of time: data required for sales analysis, such as listing of space sold by cities, and flight status. They also keep a customer reservation file and provide flight availability information. In this sense, they handle inquiry transactions from local agents' sets.
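The capacity figures quoted above lend themselves to a back-of-the-envelope check. The sketch below merely applies the 50% and 20% surcharges to the 50,000-instruction-per-second base; whether the two surcharges combine additively or multiplicatively is not stated in the text, and an additive combination is assumed here.

```python
# Back-of-the-envelope sketch of the capacity comparison quoted in the text.
# The additive combination of the two surcharges is an assumption.

def decentralized_requirement(central_ips=50_000,
                              decentralization_overhead=0.50,
                              waiting_time_overhead=0.20):
    return central_ips * (1 + decentralization_overhead + waiting_time_overhead)

if __name__ == "__main__":
    print(f"centralized requirement:   {50_000:,} instructions/second")
    print(f"decentralized requirement: "
          f"{decentralized_requirement():,.0f} instructions/second")
```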
FIGURE 1. Central control system (dual computer), with connections from and to the real-time channel and to and from the local input-output media.
The central computer system is composed of the processing unit, the high-speed memory, and a number of data and file channels (Fig. 1). To the data channels are attached the magnetic tapes, the printer, and punched card equipment. The file channels have attached to them two specially designed mass memory units. Every record is written on both files for dependability purposes. The high-speed memory holds some of the programs and all the work space. The magnetic tapes store information which might not be immediately required, for example, reservations information for other companies. A 30-day record of all passengers and transactions also exists on tape. "Past time" file maintenance is done during the night. In the course of the on-line operation, maximum delays are not created by system interlocks but by the queuing of multiple demands. Servicing is channeled by means of the multiplexers. The computer scans the multiplexers for waiting signals. Because of the cost of transmission, the more distant multiplexers are given first preference.
An extension of this system can easily be visualized to incorporate optimization techniques. An example of an optimization package is a recently written computer program with the objective of preparing flight plans. The purpose of this operation is to provide the best route that will accomplish the flight objectives at lowest cost and in the shortest time, based on the latest available weather information and other salient factors. Aside from computer processing, the system includes: a high-altitude facsimile network service; two teletypewriter systems, one for domestic weather information and one for international weather information; and a third network that provides operational information and notices, as required for the optimization routine. In information processing, the master tracks that cover the extreme limits of what may be expected in routings between the two terminals are selected on the basis of appropriate air speed and flight level. The computer prints out two tracks which produce the fastest flight times. In the event that both tracks require the same time to fly, the one with the shortest ground distance is indicated. The weather model is analyzed to provide the selection output of the entire route, the component segments, air miles, and flight time. At the completion of the throughput operations, man-machine communication is given in the form of airway check points and ocean coordinates. Any military airspace blockage of projected tracks is computed, based on information from the air-route traffic control, and reoptimized tracks selected in case of blockage. This procedure was not possible in manual operations due to the complexity it involves. Applications along this line touch the fundamentals of guidance approaches to air traffic, a subject with which we shall be concerned in Chapter XXXI.
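The track-selection rule, the fastest flight time with the shorter ground distance breaking ties, can be stated directly. The fragment below is a hedged sketch with invented track data; an operational flight-plan program would, of course, derive the times from the weather model described above.

```python
# A hedged sketch of the track-selection rule: pick the two fastest candidate
# tracks, breaking a tie on flight time by the shorter ground distance.
# Track data are invented for illustration.

def select_tracks(candidates, how_many=2):
    """candidates: list of (name, flight_time_minutes, ground_distance_miles)."""
    ranked = sorted(candidates, key=lambda t: (t[1], t[2]))
    return ranked[:how_many]

if __name__ == "__main__":
    tracks = [
        ("track A", 412, 3510),
        ("track B", 405, 3602),
        ("track C", 405, 3577),   # same time as B, shorter ground distance
        ("track D", 430, 3490),
    ]
    for name, minutes, miles in select_tracks(tracks):
        print(f"{name}: {minutes} min, {miles} air miles")
```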
THE BASIC SYSTEM'S SERVICE
Sales and cancellations are transmitted directly from any agent's position to the computer center through the same process as an availability inquiry, and are acknowledged as promptly (Fig. 2).
FIGURE 2. Local offices and their input-output units, high- or low-speed program scanners, input-output adapter, and central computer.
A single entry initiates all the required inventory adjustments and enters or deletes the item in the passenger's name record. This mode of operation guarantees both an up-to-the-minute inventory and agreement between inventory and passenger name records at all times. This, in turn, results in improved load factors. The availability of a flight segment is determined by the computer from the inventory record, with the rules by which availability is determined varied to meet current conditions. With respect to operations, the relatively long life
of a passenger name record in the central files allows the computer to establish a fare quotation as a part of every record on an off-line basis. The same program can be used to price last-minute reservations and changes on an instantaneous basis. This program will quote directly approximately eighty-five per cent of the fares. The fares not directly quoted will be referred automatically to a tariff specialist for computation. So, the present burden on agents for quoting fares can be substantially reduced. Other advantages to be derived from the system's usage are equally important. Passenger name records can be checked for expired ticket time limits when flight space becomes critical. When such a reservation is found, it may be automatically cancelled, referred to a supervisory agent in the proper city, or disposed of in any other manner that the airline wishes to adopt. The passenger name record file can be checked for duplicate records as flight space becomes critical. When duplicates are found they will be displayed to a supervisory position for appropriate action. With respect to throughput, reconfirmation is analogous to a ticket time limit since both involve a specified action by the passenger on or before a specified time.
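The duplicate-record check mentioned above can be approximated by grouping passenger name records on a normalized name, flight, and date key. The sketch below uses a hypothetical record layout; an operational system would, of course, work against the disk files rather than an in-memory list.

```python
# Illustrative sketch of a duplicate-record check: passenger name records with
# the same normalized name on the same flight and date are flagged for a
# supervisory position. Record layout and sample data are assumptions.
from collections import defaultdict

def find_duplicates(records):
    """records: iterable of dicts with 'name', 'flight', 'date' keys."""
    seen = defaultdict(list)
    for record in records:
        key = (record["name"].strip().upper(), record["flight"], record["date"])
        seen[key].append(record)
    return [group for group in seen.values() if len(group) > 1]

if __name__ == "__main__":
    pnrs = [
        {"name": "Smith/J", "flight": 101, "date": "12MAR"},
        {"name": "smith/j ", "flight": 101, "date": "12MAR"},
        {"name": "Brown/A", "flight": 101, "date": "12MAR"},
    ]
    for group in find_duplicates(pnrs):
        print("possible duplicate:", group)
```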
Since all operating records are stored in duplicate in independent disk files, the possibility of loss of a record is, for all practical purposes, insignificant. Furthermore, the failure of any communication channel will not affect the operation of the system as a whole. Only the station or stations on the defective channel will be out of the system, and this, only until alternate communication service can be obtained. Devices for detecting and correcting errors are built into the system, but simulation studies indicated that the actual use made of such devices will be far exceeded by the use made of the logical power of the data processing center in detecting agent errors. An example is the failure of the agent to enter all the information required for a complete passenger record. The computer can be programmed to detect such errors and call them to the attention of the agent. To handle errors due to transmission noise, an automatic error protection system is available on radio circuits. When an error is detected in transmission it is readily corrected since either the originating agent or the computer has the correct data. This last consideration is obviously a function of the structural aspects characterizing the airline's transmission network. From coast to coast, the airline presently has 100 offices where reservations can be made, each with several agents. These offices are grouped onto sixteen leased private telephone lines coupled in parallel for dependability purposes. These lines are designed for a capacity of 2000 bits per second. All transmission lines direct messages to the central data processing center in New York, a setup that allows no communication between offices. This means that all inquiries are directed to the data processing center (Fig. 3).
FIGURE 3. Several reservations agents in each office.
How this synthetic case compares with actual installations can be better appreciated by the answers we received from leading airlines in the course of our research. The fact that these answers focus on structural characteristics should be brought under correct perspective. TWA responded to systems design as follows:
Eastern Airlines outlined the structural aspects of the communications links between the central site at Charlotte and eight regional sales centers or "complexes" at New York, Chicago, Charlotte, Atlanta, Houston, Tampa, Miami, and Montreal. Links are provided by commercial telephone leased lines, but the interconnections demand a great deal of intermediate
equipment, involving a system of branches and "party-lines." By means of this network, each of hundreds of remote contact points can be given practically instantaneous access to the central processor. Customarily, long tables are arranged to provide working positions for sales agents in the regional centers, and UNISETS* are spaced to equip each two positions with one set. The placement of UNISET Programmers, Scanners, and Control Units is dictated to some extent by certain limitations on distance between units, so space must be provided to hold the intermediate equipment with due regard to concealment of cables for protection and appearance; cleanliness; ventilation and/or temperature and humidity control. Consideration must also be given to availability and dependability of power sources, and to possibility of future expansions. The leased data transmission lines must be of "high-speed" quality, permitting accurate transmission of 2000 "bits" or impulses per second. With due regard to cost of such lines, they still must be provided in such quantity and configuration so as to keep traffic loading within the desired levels, dictated by design and experience. Constant monitor of line performance indicates when additions or rearrangements are advisable. The objective, of course, is to keep the transaction response time as low as practicable; in Eastern's case, the response time rarely exceeds eight-tenths of one second at any point on the system. More than a quarter of a million transactions are handled daily. *Agent Sets.
To carry the computer communications net on past the regional sales offices, a few words about the "complexes" mentioned may be in order. As originally designed, Eastern's central computer was to be connected as described to reservations offices at most of the larger stations served. While the system was being developed, the Telephone Company devised an arrangement whereby attractive long-distance rates could be applied to high traffic volumes between predetermined points-the Telpak system. By the use of Telpak, all of Eastern's stations, including the very smallest, could be allocated to groups roughly geographical, and each given direct telephone service to a central reservations office for that area. Plans were revised then to establish the eight regional sales offices, with the result that connections to the computer center were simplified or condensed; but more importantly, a customer in even the smallest station now can dial the local Eastern telephone number, and his call will be answered and completely serviced by an agent in one of the complexes having immediate direct access to the computer records. In addition to the computer communications network purely for the reservations system, Eastern stations are also connected by a Teletype network operating at 100 words per minute for administrative and operational traffic. Teletype messages from any point are received by the computer over one of 58 input lines, analyzed for priority and addressee, and automatically retransmitted to the proper station over one of 65 output lines. The relay under normal load conditions is accomplished in one to three minutes, and normal load at this time is about 40,000 input messages per day. Many hundreds of messages daily are addressed to the computer for action on reservations or other matters, and these are analyzed and processed without human intervention. Likewise, many hundreds of messages daily are automatically generated by the computer from certain inventory and other activities, and transmitted by Teletype to the stations concerned. The number and configuration of the lines forming the Teletype network are also dictated by load experience, and are subject to change as required.
American Airlines answered the same question on structural and teletransmission aspects in the following manner:
All communication is routed to and from the ... computers via a real-time channel. A real-time channel and computer control program control the polling of 9 high-speed transmission lines (high-quality, 2000-bit-per-second telephone lines). Up to 30 terminal interchange devices are attached to these fully duplex telephone lines. The function of the terminal interchange is that of: (a) a line multiplexor, (b) buffering device, and (c) agent set control. We have anywhere from 1 to 4 of these in a given reservations office. For reliability purposes, we usually do not put more than one terminal interchange in one office on the same communications line. Up to 30 agent sets can be connected to a given interchange. We have two types of agent sets:
• Local agent sets which are located within 1500 feet of the terminal interchange.
• Remote agent sets which can be at variable distances since they are connected to the terminal interchange via data phone equipment and standard telephone lines.
All communication lines are leased on a 24-hour day, 30-day-a-month basis. The SABRE agent sets are manned by reservations sales agents who answer telephones. These telephone calls may be coming from the same metropolitan area that the reservations office is located in or in some cases we have foreign exchange lines in which a customer from another city dials a reservations office in a distant city. For example, the Fort Worth reservations office handles calls from Dallas, Oklahoma City, Tulsa, and Houston. As far as a customer is concerned, it would be the same as calling any local telephone number.
With all the special-purpose communication jobs being handled by the multiplexing equipment the only special facilities needed by each computer for handling the on-line communications are a serial input channel, an output channel, a program-interrupt feature to allow easy programming of off-line operations during slack periods, and a working store capable of holding the information needed to process all simultaneous transactions. The initial study had indicated that space for five transactions at 200 characters each would probably suffice. Since the terminals are man-operated, the ensemble is essentially bound at its input and output ends. The human-to-machine input-output device associated with the system is designed to have: • An alphanumeric, general-purpose, keyboard • A special-purpose keyboard composed of single-function keys • A serial page-printer, such as a typewriter, to display the machine output.
In ticketing installations, this printer must be able to print multiple-copy tickets, and a ticket reader is required at certain airport locations. Each unit is designed serial-by-character and parallel-by-bit. A terminal coupling unit is also required to convert data from an agent set into a serial-by-bit form for a communication circuit, and data from a communication circuit to an agent set into a parallel-by-bit, serial-by-character form; it is critical at all locations having only a single agent set. The agent's position includes the agent set equipment and air information cards, in addition to storage space for reference material, such as timetables. Two kinds of air information cards can be used, origin-destination cards and off-point cards. Both types are precoded and can be sensed by machine for speed and ease of handling. The origin-destination card shows the most common flights grouped according to destination cities. Thus, by having up to sixteen itineraries listed on each side of the card, the agent can handle most requests with a small number of cards. The off-point card shows all scheduled company flights arranged according to the cities served. One or more cards exist for each city, showing all company flights serving that city. Any flights not covered by the origin-destination cards are covered by the off-point cards. This will greatly reduce the agent's need to refer to other schedules except for connecting service. Flight number cards can be used in place of off-point cards. These show every flight, leg by leg, on one line of a card, permitting the agent to sell any leg of any flight. Four units constitute the basic physical structure of the agent's set:
• The air information device.
• The routine action pushbuttons for entering data, number of seats, and action requested.
• The input keyboard for entering passenger name data and other variable information.
• The display printer for recording the agent's keyboard entries, and the data processing center's responses to the agent.
A multiplexing unit is required to match the input and output of up to thirty-two agent sets to a single communication line, on a full duplex basis, that is, with simultaneous send and receive. Selection and serializing of the data generated by the agent sets would be performed by this unit. The output to a communication line will be the serial-by-bit, character-by-character, intermixed output of the associated agent sets. One circuit from a down-line secondary multiplexing unit, or terminal coupling unit, may be multiplexed through a secondary multiplexing unit. The latter unit must also distribute the computer output messages to the agent sets to which they are addressed. Finally, a primary multiplexing unit is needed to accept inputs over communication circuits from secondary multiplexing units, terminal coupling units, and other primary multiplexing units at a maximum rate of 1000 bps. A message received from a primary multiplexing unit is buffered and transmitted, causing a delay of approximately 10 milliseconds per character. One primary multiplexing unit is associated with each computer, as a message organizer. This is going to be essentially the same as the other primary multiplexing units, except that it must be able to handle the information rate to and from the computer: 160,000 bps. The automatic reservation system incorporates a number of "data channels" or subsidiary computers which execute a semi-independent stored program to control the flow of data between the computer memory and a group of input and output devices. Several data channels may operate concurrently in the described ensemble. They are used to link such devices as magnetic tape units, card equipment, and disk storage to the central processor. Data from communication lines enter the processing center through a specially developed data channel, which assembles message characters arriving on the communication lines into groups, checks these groups for errors, and moves them into the main computer storage unit.
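The delay figures quoted for the primary multiplexing unit and the computer channel can be related by simple arithmetic. The sketch below assumes a 200-character transaction (the working-store estimate given earlier) and a 6-bit character, both of which are assumptions made only for the calculation.

```python
# A rough sketch relating the figures quoted above: roughly 10 milliseconds of
# buffering delay per character in a primary multiplexing unit, and a
# 160,000-bit-per-second channel to the computer. The 200-character message
# and the 6-bit character size are assumptions.

def multiplexer_delay_seconds(message_characters, delay_per_char_s=0.010):
    return message_characters * delay_per_char_s

def channel_time_seconds(message_characters, bits_per_char=6,
                         channel_bps=160_000):
    return message_characters * bits_per_char / channel_bps

if __name__ == "__main__":
    chars = 200   # one transaction, per the working-store estimate above
    print(f"buffering delay:  {multiplexer_delay_seconds(chars):.2f} s")
    print(f"channel transfer: {channel_time_seconds(chars) * 1000:.2f} ms")
```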
SOME OPERATING CHARACTERISTICS
The crucial human operators in this ensemble are the agents. Each agent receives a visual display of what he is inputting to the system. To find the information he wants, the agent may select a timetable and display it on the screen of the agent set. He can select the origin and destination cities by pushing the proper buttons on each side of the display screen. He then
receives information on space availability and flight status, by means of visual displays. He selects a specific flight and punches the keys on the main keyboard to record the month, day, number of seats, and type of transaction (space inquiry, sell, cancel, or flight status) through push-button controls. Statistical control media could be used to advantage in this connection. When an airline flies some 30,000 persons daily, reservation failures can and do make trouble. The quality problem, therefore, is one of assuring accuracy in the final reservation product by keeping the detailed components in control. For example, through the aid of np, or number defective, control charts, and effectively arranged data sheets, quality findings can be listed in such a manner as to gain a factual picture of the process. Critical analysis can be made of work samples drawn daily from each office. The samples consist of written or typed reservation messages taken at random. By comparing contents of the messages with entries on the permanent chart to which they referred, it is possible to bring to light a variety of irregularities and errors which, when placed on the data sheet, reveal the "who," "what," and "how" of troubles. With this picture of where the principal defection lay, it becomes relatively simple to determine at what points one must concentrate corrective efforts. In an actual application of this type, the results were rather outstanding. When the work was begun in one office, the process quality level was so low that 28% of the work was defective to a greater or lesser degree. This was somewhat shocking, but comfort was taken in the knowledge that the researchers were getting to the root of the clerical difficulties. The greatest amount of defection was in errors of a minor category, or of such a character as not to cause the passenger loss of space or disservice. Generally, these amounted to violations of company regulations or procedures. There were, however, a sufficient number of the more critical types of errors to cause deep concern, and it was somewhat disconcerting to realize that the latter had existed in past performance for some time. These errors gave dependable guidance toward improvement. Virtually from the beginning of the project, errors in the work took a striking downward trend. Up-to-date availability on all future company flights, some for 60 days, others for a longer period, was to be available immediately to all agents by means of agent sets connected directly to the data processing center. By inserting either an origin-destination card or an off-point card (or flight number card), depressing keys for date and number of seats desired, and the availability button, an agent can determine which flights are open and which are closed. Each availability record in the data processing center reflects the status of each flight on the air information card. The addresses of the desired records are obtained directly from the code at the bottom of the card and from the date requested. The lights, then, display the status of the flight.
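The np (number defective) control chart mentioned above uses the conventional three-sigma limits around n times the average fraction defective. The sketch below applies that rule to an invented series of daily samples; the sample size and counts are assumptions for illustration only.

```python
# Sketch of an np (number defective) control chart with conventional
# three-sigma limits. Sample size and defective counts are invented.
import math

def np_chart_limits(sample_size, mean_fraction_defective):
    center = sample_size * mean_fraction_defective
    spread = 3 * math.sqrt(center * (1 - mean_fraction_defective))
    return max(0.0, center - spread), center, center + spread

def out_of_control(defectives_per_sample, sample_size):
    p_bar = sum(defectives_per_sample) / (len(defectives_per_sample) * sample_size)
    lcl, _, ucl = np_chart_limits(sample_size, p_bar)
    return [count for count in defectives_per_sample
            if count > ucl or count < lcl]

if __name__ == "__main__":
    daily_defectives = [11, 9, 14, 10, 8, 13, 27, 12]   # per 100 messages
    print("signals:", out_of_control(daily_defectives, sample_size=100))
```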
If the desired flight is closed, the agent can press a button to ask the computer to provide the first available date the desired flight or flights are open. The computer can scan succeeding or preceding sheet day records and reply back on the display printer, giving by reference to the line number of the card the first date on which that flight is open. Hence, the agent will not have to continuously push buttons to find the first available flight during peak travel seasons. For an availability request beyond 60 days, the computer will always look first at the 60th day availability record. A code within this record can signal the computer whether to give an automatic open response or to select the availability record for the day requested. Availability can be requested for a specific number of seats. The availability record would show if there are 1, 2, 3, 4, or over 4 seats remaining. A simple comparison within the computer can determine whether an open, closed, or request condition exists. Once the availability of a particular flight has been established, the agent can sell the space. When the agent selects the proper line on the air information card and pushes the sell button, the computer will direct the inventory action to the proper flight inventory record by using an internal index, with the card and line number providing the address of the flight record. As the itinerary in the passenger record is adjusted to reflect the agent's actions, the inventory records will also be updated, thus ensuring that the inventory records are in agreement with the passenger records. If desired, the agent may have the option of selling without checking availability, and the computer will determine whether the sale should be accepted or rejected. With each flight leg being updated, the status code will be tested to see if either the notice level or booking limit level has been reached. The notice level will be a predetermined number of bookings on each leg lower than the booking limit. The booking limit, in turn, will be the number of bookings that will be accepted on each leg: since it is defined by airline management, it may be either lower or higher than capacity. In order to retrieve a complete passenger record, the agent must enter the flight number, class, date, and passenger name. The computer contains an index of where the complete passenger name record can be found in the mass storage. The computer processing begins by finding if there is an exact match to the name. If an exact match is not found, then the program searches for the closest approximations. A cutoff level is set in this comparison routine, so that the computer does not continually search. If an agent makes a change in the passenger name record once it has been retrieved, the record will show it has been revised and the change, including who made it, will be retained in storage. If there is other airline space remaining after all of the user-company's space has been searched, the agent will inquire, by using the two-letter airline
code, flight number, class, date, and passenger name. The airline code will signal the computer to search for the address of the complete passenger name record, and display the complete record back to the agent. Whenever flights involve date changes, because of crossing the International Date Line, it is necessary to provide the computer with a method of determining that an agent desires a record for a particular flight/date where flight departure date differs from flight origin date. This can be done by inserting the three-letter city code for the departure city of the flight. When the computer sees this in a retrieval request, the inventory record can first be consulted to find out how many days away from the origin date the request is, and hence direct the request to the correct flight/date/class index. Alternatively, once the agent has supplied the departure city code, a small internal table can provide this departure information for each company flight number. Input to the wait list for each flight/date requires the assigning of a priority to the passenger. Wait-listed passenger records can be filed in the regular name index, with a tag to denote that they are on the wait list. If space becomes available, passenger records will be selected in a priority order. After monitoring, the record will be forwarded to the sales office concerned, and the customer contacted for confirmation of his request. A wait list confirmation from the agent can update the inventory, and at the same time notify control that the passenger has accepted the space. It is also possible to maintain a "hot file" within the computer to assure that agents confirm the space. Using predetermined keyboard format or punched cards, company personnel will have the ability to add flight numbers to the inventory records. Schedule changes that necessitate a passenger being rebooked using a different flight routing will be handled manually, and company personnel also have the ability to change flight numbers in passenger records. When flight numbers and passenger records have been changed, the selling office can be notified during nonpeak hours. A specified agent set is to be used so that messages from the data processing center will not interfere with agents who are in the process of serving customers. As an off-line operation, lists of flights that are to be rescheduled can be printed out for delivery to sales offices for appropriate action. During the off-peak hours, the computer is planned to perform periodic duplicate record checks, such as sorting and printing out duplicates or suspected duplicates in reservations. Sorting and rearranging is a continuous housekeeping need for reasons of effective control action. When a schedule change occurs, the computer must check to ensure that agents are using the proper card for the date requested. If the agent used the wrong card, the agent set printer must so indicate. Similarly, in the event an agent attempts to sell space on a flight
that does not operate on a given day, the computer should notify the agent that the flight does not operate on the date being requested.

A day numbering technique, similar to that used by many manufacturing concerns, has been chosen in assigning records to specified locations within the computer, thus keeping file maintenance routines to a minimum. Once a day number has been calculated, it remains the same, and therefore it will not normally be necessary to move records internally. The basic period for which this numbering technique is scheduled to operate is 60 days. The name record indexes for flights beyond 60 days are broken by flight number and date into blocks of ten names, with an overflow address at the end of each block for purposes of chaining blocks together for each flight number and date.

The transition to the automation of reservation procedures was planned to take place in two steps. The first phase was to centralize and mechanize space control and reservation handling. The second phase involved the installation of agent sets and their associated equipment and the transition to automatic operation, and was done one station at a time. The sale of space and maintenance of passenger name records were combined, reservation records centralized, and direct communication established between agents and the centralized records. The initial phase then included the central processing unit, the magnetic core memory, the real-time channel, and some of the disk storage units and data channels. Reservation posts were gradually cut over to the center for maintenance of seat inventories on a system-wide basis, and the handling of the associated message activity. While, for dependability-in-transition reasons, local records were scheduled to be maintained without substantial change, and availability devices were to continue to be used, it was decided that the data processing center would operate on an off-line basis to take the necessary inventory control action. This cautious approach to the changeover problem was expected to provide the necessary guarantees as to service assurance.

The procedural changes required to shift from the manual reservation system to phase I were reasonable. Training in new procedures was obviously required, but this also constituted the preparatory work for the second phase. Following some two to three months of operation in the first phase, automatic system operations were scheduled to commence. Procedures had to be radically changed to conform to the new requirements. A pilot operation of this kind for a few selected offices permitted installation plans, procedures, and programming to be finally proven, prior to cutting over other locations. Additional cities were then installed and converted to automation. The schedule for carrying out this program for orderly installation and conversion depended upon several time and cost variables.
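The day numbering scheme just described lends itself to a very simple addressing rule. The following sketch is offered only as an illustration of the idea; the epoch, record size, and slot layout are assumptions chosen to show how a fixed day number maps a flight/date onto a storage address that never needs to change, and are not taken from the actual installation.

```python
from datetime import date

PERIOD_DAYS = 60               # basic period covered by the numbering technique
RECORD_SIZE = 512              # assumed size of one inventory record, in characters
BASE_DATE = date(1966, 1, 1)   # assumed epoch for the day count

def day_number(flight_date):
    """Days elapsed since the epoch, folded into the 60-day operating period."""
    return (flight_date - BASE_DATE).days % PERIOD_DAYS

def inventory_address(flight_index, flight_date):
    """Fixed mass-storage address of the flight/date inventory record.

    Because the day number of a given date never changes, the record
    assigned here never has to be moved internally.
    """
    slot = flight_index * PERIOD_DAYS + day_number(flight_date)
    return slot * RECORD_SIZE

# Example: the record for the flight filed under index 41, departing 15 March 1966
print(inventory_address(41, date(1966, 3, 15)))
```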
ASSOCIATED SYSTEMS SERVICES

The principal records maintained by the automatic reservation systems are the passenger name records and the airline seat inventory. The recording and retention of this data in machine language serves as a decision-making source for management. Periodic management control reports include:

• Advance booking information, with the capability of comparison with previous years.
• Past booking history performance. Because the passenger name records can be retained on magnetic tape, the system is capable of reconstructing the complete booking history of any flight upon demand, as an off-line operation.
• Traffic performance, showing the number of seats available, the number of revenue passengers carried, and the resulting percentages.
• Post-flight analysis, showing the no-shows, no-goes, late cancellations, off-loads, and the like experienced. The percentages of the foregoing to passengers boarded are also computed.
• Record of demand for a flight or route that has been sold out, in order to guide future flight scheduling and the arrangement of extra sections, in accordance with the procedures outlined for agents in recording this demand.

To help identify "use frequencies" the computer can readily count the number of times a particular air information device card is applied. But, since more than one destination can appear on a given card, there is no way for the computer to determine which destination is being requested, unless the agent also uses a special identification key.

Other services can be performed in an efficient manner as well. Requests for miscellaneous items, such as car rental, may be entered directly into the agent set as a miscellaneous segment. These segments will follow all the airline space. In order to initiate a request to some other city, the agent will supply the proper addressing characters for that city. The agent will first select the miscellaneous segment key on the agent set keyboard, and then type in the segment information. The computer will be able to recognize the "tag" provided by this segment key, and process the record accordingly.

Other associated services, such as the control and sale of cargo and hotel space, may have a substantial bearing on the optimum utilization of the system. The automatic reservation network can furnish an accurate and rapid means of mechanizing these important functions. Cargo space availability, for instance, may be displayed at the agent set using an air information device card showing the cargo flight numbers to various area destinations. Availability within the computer will be in an open status when
the inventory record indicates more than a predetermined weight open for sale on passenger flights, or on cargo flights. When these limits are reached, the availability records will be automatically closed. Since agents will sell up to the aforementioned cargo limits, the weight of shipments will not be included in availability calls. Shipments exceeding these weight limits, or of abnormal density, or containing restricted articles, will be requested by the agents and printed out for action by control personnel. Following an availability call, the agent can use the keyboard to sell the space, since the weights can vary considerably. After these two actions have been completed, the agent may then use the agent set for the cargo record portion of the entry in the same manner as for passenger name records.

In a proposed setup, cargo inventory will be maintained by weight (kilos) and volume (cubic feet). Agents will make bookings by weight only. In each such case, the computer will assume one standard density, calculate the cubic footage, and adjust the volume inventory accordingly. Cargo of abnormal density will be requested and, if confirmed, control office personnel will adjust both weight and volume inventories accordingly. Cargo records will be retrieved using flight number, date, and airway bill number. The use of a programming technique very similar to that used with passenger name records will overcome most of the difficulties when, for example, a transposition occurs between two digits in the airway bill number.

The inventory record for cargo can contain the same levels as those for passengers. A notice level will indicate to control personnel when a certain weight or volume level on the flight has been reached, and this will cause an automatic print-out at space control. An internal code within the record will prevent the message from continually being printed. Should a cancellation later reduce the cargo inventory below this level, space control will be so notified. A second level, the booking limit, can also be established. This limit will be higher than the notice level and will be used in the same fashion to automatically indicate to control that a certain inventory condition exists on a given flight leg.
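The cargo booking rule just outlined reduces to a few comparisons per sale. The sketch below is purely illustrative: the standard density figure, the record fields, and the limit values are assumptions, not the airline's actual record format or routine.

```python
STANDARD_DENSITY = 10.0    # assumed kilos per cubic foot for normal-density cargo

def notify_space_control(record):
    # stand-in for the automatic print-out at space control
    print("NOTICE: flight", record['flight'], "has reached its cargo notice level")

def book_cargo(record, weight_kg):
    """Book a shipment by weight only; volume is derived from the assumed density.

    `record` holds the booked totals and the levels set by airline management
    (an illustrative layout only).
    """
    volume_cuft = weight_kg / STANDARD_DENSITY
    if (record['weight_booked'] + weight_kg > record['weight_limit'] or
            record['volume_booked'] + volume_cuft > record['volume_limit']):
        return 'request'                      # passed to control personnel for action
    record['weight_booked'] += weight_kg
    record['volume_booked'] += volume_cuft
    if not record['notice_sent'] and record['weight_booked'] >= record['notice_level']:
        record['notice_sent'] = True          # internal code stops repeated print-outs
        notify_space_control(record)
    return 'sold'

flight = {'flight': 'CX 204', 'weight_booked': 0.0, 'volume_booked': 0.0,
          'weight_limit': 4000.0, 'volume_limit': 450.0,
          'notice_level': 3500.0, 'notice_sent': False}
print(book_cargo(flight, 1200.0))
```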
For another example, availability for hotels might normally be maintained over, say, a 100-day period, with certain selected hotels during seasonal periods having a longer period. This availability can be maintained by day, for each of three categories or rates for rooms: minimum, moderate, and super. A card, similar to the air information card, can be used showing the names of the associated hotels. The agent would use one of the origin buttons on the top of the agent set to select the category for the hotel, select the line for which a display is desired, the number of rooms, and the date that the customer will enter the hotel. The display back to the agent on the lights will show the date requested and the availability for the next succeeding days. For the subsequent sale of space, the agent will be required to enter the checkout date so the computer can take the action to update the inventory record. In answer to a request response, the agent will operate in the same manner as for a passenger request, and the computer will not channel this through inventory. In retrieving a guest record, the agent will key in the hotel code, the room category, the date of occupancy, and the guest's name. This procedure will allow maintaining an inventory by hotel, date, and room category. In updating the inventory, the difference between date in and date out can be calculated and hotel space most efficiently used.

In all, electronic data processing holds good promise of achieving economy and greater efficiency through integrating the information requirements of several operating departments. This can have a very real impact on communications systems, achieving fully, accurately, on time, and at reasonable cost the aims of paperless data handling.
Chapter XXXI GUIDANCE APPROACHES TO AIR TRAFFIC
An example of computer usage in real-time operations is that of air-traffic control. Computer control of airplane and missile flights may range from a complete air-defense system, as we will examine in Chapter XXXII, to simple data generation in respect to the future position of a flying device. Developments along this line are a result of necessity. Congestion and delays in present and future civilian air traffic can be directly traced to inadequate information processing. But how could this situation be improved? In present-day air-traffic control:

• Aircraft locations may be obtained by radar.
• Aircraft altitude can be computed, or be made available, by radio.
• Identification and destination are available by radio.
• Schedules and flight plans can be teletransmitted on a machine-to-machine basis.
• Landing instructions and assignment of flight lanes and holding patterns are also transmitted by radio, between the control tower and the apparatus.

Although, with only minor exceptions, this is what is done at the present time (Fig. 1), there exists no high-speed automatic aid to the central information processing. In developments along this line, data networks can be used in providing future position information and in predicting potential conflicts for the flight controller. The writing of the necessary programs is not extraordinarily complex. In addition to an input routine which conditions the flight plans, two other programs are necessary: one, to process flights that will be flown over designated airways; the other, to process flights that will be flown by direct routes between specified points.
FIGURE 1. Radar and control tower.
Mathematical means can be used to determine the fixes that must be posted for the particular direct-route segment involved. The slope of a line representing the route of a flight would then be developed from x and y coordinates assigned to each fix. This slope might be used to index the program to one of several radial tables that serve a function similar to the airway table. A history of the speed of flights along various routes should also be maintained, based on position report data that are received from pilots. This historical information could be used to develop ground speed more accurately for subsequent flights over or near the same route. A similar approach can be applied with off-the-area data.

The units required in a flight plan will be the identification of a flight, the type of aircraft, the speed of the plane, the departure point, the altitude to be flown, the route of flight, and either the departure time or the estimated time over an entry fix, if the flight originates outside the area. The principal function of an input routine would, then, be that of identifying units of information in a flight plan and of organizing the flight plan data in the computer storage so that it can be processed by the other routines of the program. After the flight plan has been organized in memory, the next step is to determine the fixes for which flight progress strips are required. The computer should also develop the needed information for the flight progress strips, the fix time tables for conflict detection, the data for automatic communication, and the speed history as position data are received. Data control for discrete particles is, in fact, based on the foregoing fundamental steps, be it for airplanes, for railroads, or for motor traffic.
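The direct-route computation just described comes down to a little coordinate geometry plus simple bookkeeping of observed speeds. The sketch below is illustrative only: the number of radial tables, the smoothing weight, and the fix coordinates are assumptions introduced for the example.

```python
import math

def radial_table_index(fix_from, fix_to, n_tables=8):
    """Select one of n radial tables from the bearing of a direct-route segment.

    fix_from and fix_to are (x, y) coordinates assigned to the fixes.
    """
    dx = fix_to[0] - fix_from[0]
    dy = fix_to[1] - fix_from[1]
    bearing = math.atan2(dy, dx) % (2 * math.pi)
    return int(bearing / (2 * math.pi / n_tables))

def update_speed_history(history, route, observed_speed, weight=0.25):
    """Fold a newly observed ground speed into the running estimate for a route."""
    old = history.get(route)
    history[route] = observed_speed if old is None else (1 - weight) * old + weight * observed_speed
    return history[route]

history = {}
print(radial_table_index((0.0, 0.0), (120.0, 45.0)))      # table index for this segment
print(update_speed_history(history, 'JFK-ORD', 455.0))    # first report seeds the history
```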
SIMULATION FOR AIR-TRAFFIC CONTROL

As an introduction to computer-processed air-traffic control, we will start with AIRSIM,* a hypothetical simulator for environmental and traffic control studies. Its objective is to experiment with live-looking air-traffic situations for the purpose of evaluating new concepts, their impact, and operational relationships to the air-traffic controller.

* The name is fictitious, but both the fundamentals and the characteristic hardware features are based on actual and in-process applications.

Digital simulation is in itself a new approach to this subject. In the past, real-time simulation studies for air traffic were conducted on special-purpose analog computers:

• A "pilot" was assigned to "fly" each simulated aircraft.
• A radar simulator transformed the aircraft coordinates into signals for a radar display.
• Voice communication links were used by the traffic controller to communicate with the "pilots."

AIRSIM is based on a completely different concept, with digital simulation through computer-processed programs playing the major role. The programming work as developed is capable of presenting all aircraft control situations currently known or likely to appear in the near future. Radar signals and controller facilities share the same requirements of duplicating conditions, existing or anticipated. The system has ten primary and ten secondary radars simulating coverage of an area some 1500 miles in diameter. Design specifications require that as many as five hundred aircraft, subsonic and supersonic, should be under control during an exercise. A number of intruding aircraft, not under control, can also be accepted by the system. AIRSIM's radar simulators will take into account the intermixing of signals produced by fixed echo generators, moving echo generators, and the background noise. From computer output data, the radar simulators will present aircraft echoes, reflecting the vertical and horizontal coverage diagrams of the radars and of echo level for various beam widths.

The system's communications network gives each controller twelve available frequencies selected from sixty basic frequencies and forty telephone lines arranged in four groups of ten. The general schematic diagram of the installation is shown in Fig. 2. The air-traffic controller has access to a family of general lines providing simulated liaison with other centers. A recording system is included to register communications for playback. The main characteristics of UHF and VHF are fully simulated, including background noise, emergency transmitter/receiver, and jamming of controller's transmission.

FIGURE 2. General schematic diagram of the AIRSIM installation.

Flying spot tube
scanners are used in the synthetic echo generators, in a system based on the technique of film recording and scanning. A video map generator is included, employing a flying spot tube, scanned as a display, by an unmodulated spot. A photomultiplier cell produces the video signals which are distributed through a group of separate outputs. A digital communication system, now an AIRSIM component, is experimented with as a possible linkage to a live traffic control network. For identification reasons, we will call this digital set TRANSAIR and describe it as a new air-to-ground communications system designed to offer a general solution to the use of digital communications techniques. Its basic feature is a buffer storage unit designed to permit use of low-speed page or strip printers in the aircraft. The whole systems concept centers on the fact that a digital communication means must operate in conjunction with voice communication. With TRANSAIR, voice communication functions are not replaced, but the additions of digital channels are implemented in a scope adequate to accommodate all users of the airspace. Modular additions to TRANSAIR can provide canned message capability, automatic displays, a teleprinter keyboard, facilities simplifying clearance
procedures, and routine reports, message composers, lightweight printers, and integrated automatic displays. Most of these can be effectively actuated by the pilot operators and by the traffic controllers. In the AIRSIM concept ofsimulated air-traffic control, the collection ofdata on the movement of the aircraft, performance measures, and operational data was simplified by using the same computer to generate targets and collect data. Data were registered during the simulation run, and batch processed during nonpeak operational periods-this, of course, with the exception of the information that had a real-time impact. For efficiency in data handling, computer memory is divided into the following functional areas: • • • •
Data acquisition Operational control guides Simulated aircraft targets General data processing.
The simulated aircraft guides include the estimated aircraft characteristics, geometry of the area simulated, and wind forecasts. Data acquisition includes the sampling rate of the radar, the positional data of the operational airtraffic system, the communication lines, the simulated workload, and other performance variables and characteristics. This works in relation with a subroutine from the AIRSIM library, which reads the parameters specifying the conditions of the run. Input traffic flight plans are selected from memory. With this, tables needed by the operational control program are also computed. At the laboratory level, to which the subject system is addressed, a simulated environment is also required to study solutions to the air-traffic control problems. This requires a different set of services from the digital computer. The AIRSIM data processor is programmed with major emphasis put on aircraft heading, position, and velocity. It allows the inclusion of controlled navigational and speed error distributions, so the effect of errors on the simulated system can be examined. Through a monitor program, it can call-in incorporated subroutines such as a simulator of take-off acceleration or a navigation using the instrument landing system. The same is true for the simulation of a radar position acquisition system, a radar or beacon tracking system, etc. Buffered, general-purpose displays are used for controller and pilot situation reflection. These units are also capable of displaying table and other general alphanumeric information to the controller. This function is accomplished in a flexible manner, on a programmed display basis, with no built-in format restrictions. With respect to the organization, each control and pilot position is linked to the computer through a keyboard, in order to
update its information or to call down ancillary information to a printer. The "pilots" through their link are able to modify the aircraft's flight (Fig. 3). All parts of an exercise can then be played back for analysis reasons, the system being able to prepare and implement the exercises, test new procedures, theoretically, and perform scientific and statistical research.
FIGURE 3. Simulated environment, operating coordinates, and display information.
The mathematical models used by the simulator are especially designed to help in air-traffic scheduling and control, providing the ground basis for sophisticated digital experimentation. A program is developed to deliver aircraft to a destination, based on such criteria as safe maximization of the aircraft acceptance rate at an airport and the reduction of the average delay. The control function is then exercised to minimize the difference between the actual time the aircraft arrives at the destination and the scheduled time. In turn, control is effected by the air-traffic controller, control decisions being based on computer-calculated data. The so-developed air-traffic computer concept can then remain an integral part of the system when this system is eventually converted to on-line open-loop control.

The monitor program for AIRSIM has been designed to take full advantage of simultaneous input-throughput-output operations and of priority interrupts. Different interrupt levels have been established, the selection based on the real-time requirements. These include "control entry" interrupts with reference to information inputs by air-traffic controllers and pilots. Data are
entered into the computer via the keyboards. The function the computer performs at that time is interrupted as long as the machine program needs to read the keyboard message, check it for validity, and store it in the message input buffer area of the internal memory. Then control is returned to the program in progress at the time of the interrupt. With respect to output, AIRSIM is programmed to transmit data to the displays without tying up the central processing unit while sending this information. A special routine has the function of generating the simulated targets under the control of the pilot keyboards, an approach that allows the inclusion of controlled navigation and speed error distributions, to whose need reference has been made.

For identification purposes, all aircraft flights are assigned a simulator number when they enter the program. An assignment subroutine monitored by the executive program maintains an updated list of available simulator "positions." These positions are the key to all information concerning a given flight, and to the descriptive parameters associated with it. By this, reference is made to the coordinates, the air speed, performance characteristics, and so on. Look-up facilities are provided to simplify the retrieval problems presented in this connection. A number of data storage and retrieval methods have been experimented with, in order to identify the data storage organization that is most efficient in target generation and display programs.
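The assignment subroutine just mentioned amounts to the bookkeeping of a free list. The fragment below is only a sketch of that idea, written with an assumed capacity and an assumed set of descriptive parameters; it is not the AIRSIM routine itself.

```python
class SimulatorPositions:
    """A free list of simulator "positions" keyed to flight parameters."""

    def __init__(self, capacity=500):            # design figure cited for AIRSIM
        self.free = list(range(capacity))         # positions not yet assigned
        self.flights = {}                         # position -> descriptive parameters

    def assign(self, flight_id, parameters):
        """Give the entering flight the next available position number."""
        if not self.free:
            raise RuntimeError("no simulator position available")
        position = self.free.pop(0)
        self.flights[position] = {'id': flight_id, **parameters}
        return position

    def release(self, position):
        """Return a position to the free list when the flight leaves the exercise."""
        del self.flights[position]
        self.free.append(position)

positions = SimulatorPositions()
p = positions.assign('BA123', {'x': 10.0, 'y': 42.0, 'airspeed_kt': 480.0})
print(p, positions.flights[p])
```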
REAL-TIME AIR-TRAFFIC CONTROL

If the simulation idea is extended to the live coverage of the air space, the
electronic gear to which reference has been made will need to be reconsidered. Nevertheless, the fundamental concepts behind systems design remain basically the same: A computer network can be used to advantage for air control. Apart from being linked into the automatic data network, each on-line data processor can handle on a "local basis" the problems of the air section it has been assigned. The computers in the subject network should be able to operate on real time, for such operations as the estimation of flight arrivals and the determination of conflicts in flight plans. On the basis of generated data, in forecasting forthcoming schedule conflicts, the data system should go further, issuing timely command signals to the airplanes in flight, thus allowing immediate flight plan changes. Under the manned method, the one presently being used by most airports and air-traffic control centers, flight data are received from the air carrier companies, military air agencies, etc., by radio, phone, or teletype. Flight arrivals have to be computed by hand and written on paper flight progress
strips inserted into strip holders and racked in front of the traffic controllers. Any changes in flight plans necessitate manual refiguring of new data, which is both time-consuming and of questionable dependability as to minute changes. But with the increasing density in air traffic, the usage of electronic data systems for air-traffic control is coming particularly into focus. The capabilities for accurate forecasting presented by the automatic data system are, therefore, essential. The machine should calculate well in advance, and in an accurate manner, what flight conditions exist, so that controllers can plan ahead for flight safety. Two subjects are germane in this connection.

Data Collection and Reduction. The computer can be used to collect and evaluate data, on a real-time basis, while performing internal calculation functions. This in turn implies the development of data transmission methods that are extremely reliable in the face of interference from a variety of sources. Since the safety of a high-speed airliner would be dependent upon the split-second accuracy of information being exchanged with the control agency in a densely traveled terminal area, communications must be impervious to noise. To meet this problem, a number of approaches offer promise. One approach has been brought to light by communication theory developments, which indicate that the communication channel efficiency can be improved in the presence of noise by the manner in which the message is encoded, and by the form of the transmission symbols used. In Chapter VI, we have considered data conversion methods. With the usual type of analog input, continuously varying signals will be translated into bits. These bits will be transmitted in a variety of forms or wave shapes, designed to achieve an optimum match to the transmission medium and to the type of detector. The symbols used can be coded with redundancy to make possible error detection at the receiver.

Another approach uses transmission redundancy by the technique of space, frequency, and time diversity. This combats degradations in the transmission medium by sending information simultaneously along two paths in space, or on two frequency channels, or by simply sending the information in the same channel several times. These methods are quite effective in overcoming multipath fading, produced by canceling signals arriving at the receiver from different transmission paths. Still another approach in combating interference lies in the method of modulating or adding intelligence to the radio carrier.

A second basic need for air-borne communication is an increase in the information passed between the aircraft and the ground at a given time, and, simultaneously, a material simplification in the communicating process. This
need is rapidly becoming more urgent with the increasing aircraft complexity and speed, and the developing traffic density at the terminal area. Problems of this nature point to the need for data links between the plane and the ground, to automatically and reliably transmit required information at high speed. Data is sent with a discrete address to activate only the intended receiver. The information is prepared and transmitted in digital form, and utilized at the receiver to automatically activate the necessary functions, or to make the appropriate indications.

Optimal and Timely Changes in Schedule. A digital computer can take action in the direction of these changes, from run to run. By changing the operational variables, an entirely new air-traffic configuration can be developed, while the central computer facility is utilized on a time-shared basis to study many different air-traffic conditions. Computational and logical operations are, in fact, an important part of any general air-traffic control and navigation plan. An exchange of plane- and ground-gathered information on the location can be accomplished enroute. Along with it goes a continuous review and alteration of flight instructions as a result of comparisons of the flight plans and progress of other planes within the area. Data collection and transmission, no matter how efficient, must be supplemented with automatic computing capabilities, to develop flight plans, to report on flight progress, to precalculate paths for conflict detection, and, in general, to provide all crucial data to the flight and to the landing.

The airways specified in a flight plan can be used to index the computer program to an airway file or table that describes, in machine code, the route to be used. An airway table must specify information on the history of flight movement that concerns it, navigational aids and airway junctions inside its own area, and navigational aids and airway junctions outside its area. Processing a flight plan starts by locating the address of the entry fix in the airway table. Then the address of the next airway junction point is obtained. This will be the address of the destination if only one airway route is specified in the flight plan. If the two addresses are both within the "inside-area" portion of the airway table, the direction of flight can be obtained by a subtraction.

Information needed to predict future conflicts will also be necessary. This can be generated by the computer from the data developed for flight progress reports. The flight identity, altitude, and time that each aircraft is estimated to be over a fix might be stored in a fix table. Each time a time estimate for a flight is developed at a fix, the fix table can be searched to determine whether a potential conflict exists with any flights that have been processed previously. When potential conflicts are found, they may be printed on the flight progress strips, in the form of traffic notations.
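A fix-table search of this kind reduces to a comparison of estimated times and altitudes over the same fix. The sketch below assumes illustrative separation minima and a simple in-memory table; the actual criteria and record layout would be set by the control agency.

```python
def detect_conflicts(fix_table, fix, flight_id, altitude_ft, eta_min,
                     min_time_sep=10, min_alt_sep=1000):
    """Search the fix table for previously processed flights whose estimates
    over the same fix violate the assumed time/altitude separation minima,
    then enter the new estimate into the table."""
    conflicts = []
    for other in fix_table.get(fix, []):
        if (abs(other['eta'] - eta_min) < min_time_sep and
                abs(other['altitude'] - altitude_ft) < min_alt_sep):
            conflicts.append(other['flight'])
    fix_table.setdefault(fix, []).append(
        {'flight': flight_id, 'altitude': altitude_ft, 'eta': eta_min})
    return conflicts          # to be printed as traffic notations on the strip

fix_table = {}
detect_conflicts(fix_table, 'ALPHA', 'TW76', 25000, 830)
print(detect_conflicts(fix_table, 'ALPHA', 'UA11', 25000, 836))   # -> ['TW76']
```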
Real-time data inputs can be used to supplement the generated information. Position reports are received from pilots and from control towers, for flights landing in or leaving the area. Such position reports might indicate that, for instance, an aircraft is more than five minutes off its original estimate. In this case, corrective action would be necessary. Such data would be entered into the computer and the machine would proceed with the calculation of a new flight plan, which would subsequently be transmitted to all airplanes concerned.

A new real-time control program can be implemented most efficiently by maintaining a small group of specialists working closely together. Their responsibility should include systems analysis, functional evaluation, operational synthesis, and carry-through to the whole range of an air-traffic control application. The small group size and the full responsibility for all technical sections of the job should result in greatly reduced communications and in an effective coordination of requirements among the members of the group. Concurrently with the live application, the use of digital aircraft simulators would provide most helpful experimental data. In planning and programming for air-traffic control, the analysts should address themselves primarily to the salient computation and communication needs facing the air-transportation industry. Namely:

• An increase in the rates of data transmission without excessive increases in frequency bandwidth.
• A high reliability in data transmission without a serious reduction in information capacity.
• A universal data-link system that fits the needs of both military and civilian aircraft, while using the same terminals and facilities.
• The development of air-borne communications and navigation equipment that will not require extensive operator attention when switching from one terminal or route facility to the next.
• The design of both air-borne and ground computing equipment and programs providing an efficient coordination of air traffic.

The last point in the preceding list helps emphasize that, without advanced systems for communication and automatic navigation, aircraft performance would necessarily be limited. The services provided by electronic communications and navigational equipment can be regrouped into four basic functions:

• Navigation
• Weather prediction
• Air-traffic control
• General operational traffic.

Automatic navigation requires devices that provide information to help
the pilot determine his location, with respect to a known point or points, and to guide him to his destination. Electronic navigation media can give continuous checks of position and, further, be utilized to actuate plane controls and to automatically guide the plane. Electronic navigation is performed by one of two basic principles. The first is to obtain, by radio direction-finding techniques, relative bearings from the plane of two or more identified points, such as broadcast or range stations. The intersection of the radio bearings establishes the plane's location within the accuracy of the observation. Direction-finding equipment is still carried on most commercial and military aircraft, but it is being used more and more as auxiliary or backup to more automatic methods. The other basic electronic navigational method is the rho-theta method, where the relative bearing (theta) and the distance (rho) from a single known point establish the aircraft location.

We have then defined the other major function of aviation communications as being that of air-traffic control. To this, we said that, with the tremendous amount of traffic using present-day airspace, especially around air terminals, the air-traffic control problem is quite critical. This air-traffic control problem we can now reclassify into two basic functions:

• Enroute control
• Terminal and ground-approach control.

In each case, the object of the control agency is to move as much traffic as possible without running undue risk of collision or accident. To carry out this function, this agency must maintain tight control over the entire air space, often under conditions of poor weather and visibility when instrument flight rules apply. During this process, flight plans are adjusted by the control agency to allow adequate air space around each flight to prevent collision. To continually check on flight progress requires frequent communication with the plane. In terminal areas the plane is constantly observed by radar, and voice instructions are issued by radio to guide it into the landing center.

Another category of communications covers general operational traffic. This includes identification, for civil and military purposes, exchange of weather information, passenger and airline data, and emergency communications in any phase of flight. To certain fundamental aspects of this process we have paid due attention, but it would also be necessary to consider some aspects of weather data collection and simulation work.

WEATHER DATA COLLECTION

Weathermen, weather scientists, and air-traffic controllers now, more than ever before, require an automatic digital system able to:
• Predigest vast amounts of weather data from a variety of sensors.
• Reduce this data to its significant information content.
• Make specific projections concerning weather evolution.
• Rapidly communicate that information to all interested parties.
Four subjects will interest us in the following discussion:

(a) The present state of the art in weather information handling
(b) Advanced methods in weather data gathering
(c) Simulators for weather forecasting
(d) Eventual developments of "weather research" and laboratory activities.

These subjects are intimately related to the fact that, currently, the gathering of comprehensive weather data from ocean areas outside of the regular shipping lanes is haphazard and of an intermittent nature. Both the civilian and the military air-traffic authorities would be better off in predicting weather conditions, if they were able to receive continuous and systematic weather reports and process these reports in a timely and action-oriented manner.

A study in automating weather data collection and processing would necessarily start with a thorough consideration of the sensory pickups of the system. We have presently available equipment able to instrument automatic weather data stations. This is a fundamental requirement. Then the collected data should be transmitted to a station, coded, and transmitted again. As conceived in recent studies, an automatic station would translate information from each of a number of sensing elements into a specified code. Then it would transmit the coded signals on a pulse-modulated carrier frequency to a central control unit. A setup for an automatic weather data collection scheme is shown in Fig. 4.

FIGURE 4. Automatic weather data collection: identification and reference, water and air temperature sensors, master timer, program timer and element selector, bridge amplifier, code selector and keyer, pulse generator, and final amplifier.

The weather sensing elements convert variations of water surface conditions for measurement by a motor-driven self-balancing bridge circuit. The air and
water temperature sensing devices are thermistors. A precision barometer measures air pressure. This barometer is so modified that a slave needle rides above a resistance strip and is clamped to the strip at the time of measurement. An especially rugged three-cup anemometer drives a small magnetic generator whose output is applied to an electronic gadget. The wind vane is connected to a transmitter and receiver circuit activating a servo system. The servo positions a magnetic compass synchronously with the wind vane. Mounted on gimbals, the compass has a slave needle and a clamping system that gives resistance values corresponding to the wind direction relative to the magnetic north. Signals with weather data can be received on standard communications receivers and compared with a decoding table that gives numerical values for each of the meteorological variables measured. During the single transmission interval, the following items of information must be given:

• A coded signal identifying the station
• Air temperature
• Water temperature
• Barometric pressure
• Wind speed
• Wind direction oriented from magnetic north.

Developments along this line are by now well under way. An instrument system for rapidly gathering meteorological data by means of acoustic signals is, for instance, under refinement. Research on a sonic anemometer-thermometer system was undertaken to determine the feasibility of computing wind velocity and air temperature from measurements of the speed of sound taken simultaneously in each direction along a defined path. Vertical wind speed, horizontal surface divergence, vertical surface vorticity, or surface densities can be provided, through the subject hardware, over areas of ten acres or more. Headwind and crosswind components can be provided directly, in addition to air temperature. The weather measurements to which reference has been made are true averages obtained as a result of projection of sound between transducers located only at the ends of the sound paths. Except for the sound pulse transmission itself, data are obtained instantaneously. No material object is required to come into equilibrium with its surrounding environment before a true value is obtained, as would be the case with a mercury-bulb or other thermometer. Also, there are no solar radiation effects as obtained with a glass or other thermometer exposed to sunlight, the quantities measured being directly those of the motion of the air itself.

The installation of the necessary instrumentation consists of a field unit, containing pulsing and amplifying circuits, and of three field towers. Two
of the field towers include a single horn and driver, and a small pulsing chassis. A third tower includes two horns and drivers. The two sound paths are laid out perpendicular to each other, and are of equal length. The horn drivers are used both as high-power sound projectors and as receivers, through multiplexing. The field hardware is coordinated through a central digital installation. This includes all of the control, measuring, and computing equipment. The display unit contains five dials showing the two runway wind components, wind speed and direction, and on- or off-runway air temperature. In operation, the wind components and temperatures (along each of the sides of the runway) can be determined by comparing the transmission times of sound pulses sent simultaneously from opposite ends of each runway path. These pulses are separated from ambient noise by narrow-band filters, by pulse-character filtering, and by time selection circuits of both fixed and variable gate position.

A fundamental consideration for the successful operation of the sonic anemometer-thermometer at an airfield concerns the proper placement with regard to ambient noise. With sonic instruments, there is no time lag in response while physical sensing elements come into equilibrium with their surroundings, and no radiation effects confuse the air temperature measurements. Considerable differences are often observed between the runway and off-runway temperatures, making a difference of hundreds of feet in the ground-roll distance required for take-off. The control tower anemometer consistently presents an inaccurate indication of the winds over the runway, while the sonic anemometer-thermometer can make remote measurements over an operational area.

The U.S. Weather Bureau, for another case, acquired a new tool to check weather conditions. Five variables are measured:

• Wind direction
• Wind speed
• Barometric pressure
• Air temperature
• The difference between air and sea temperatures.
This system consists of a boat-type platform for weather observation in remote ocean areas. The platform contains meteorological sensors, data processing equipment, power source, and units for the automatic transmission of meteorological data from remote ocean areas to shore stations up to 1000 nautical miles away. The approach to gathering and processing data has been to use digital rather than analog techniques. Operating unattended for a year, the unit transmits data every 3, 6, or 12 hours, as selected. A storm sensor operational at all times activates
the station to broadcast hourly signals when wind velocity reaches 22 knots. Two radio transmitters make clear reception possible at the shore station under varying atmospheric conditions. The frequencies are in the 3-6 and 9-12 Mc bands. To save energy, the pulses, transmitted at a 100-wpm teletype write rate, utilize a code requiring transmission of only the transition points of a standard teletype code. A single bistable multivibrator at the receiving station converts the data into a signal that operates a standard teletype printer. Each message transmission of approximately 5 seconds is repeated at 15-second intervals for consecutive transmissions. This redundancy insures the correct reception of the message even if no single message is correct because of electrical or atmospheric disturbances.

Three means of activating the unit are provided: manual start for testing and repair, the master timer or chronometer, and the storm sensor for increasing the sampling rate to every hour. From the station activator the program timer sends signals to the computer, the five sensor inputs also being fed to the appropriate section. The sensor inputs, essentially analog in character, are converted into digital-type information using optical scan methods. Nine synchronizing pulses are sent first. At the receiving station, determination of the clearest transmitting frequency is made and reception begins. In addition to the information provided by the five sensing devices, the signal also includes the day of the week, the time of the day, the call letters of the transmitting station, latitude, longitude, and visibility. With the addition of a command receiver, the floating weather station could be interrogated by ships, land stations, and satellites.

In another development, a system for rapid distribution of weather data to major Air Force Commands on a base and nation-wide level has been installed at Griffiss Air Force Base, Rome, New York. Known as the Automatic Voice Link for Operational Weather, the hardware is a combination of audio and visual means of transmitting weather data. Hence, it is of direct reference to the case under discussion. The system in question consists of three basic units: input, data display, and voice outlet. The input unit has a panel control located in the weather observation site at the end of the runway. To transmit the weather data, the observer records the information on the input panel by pressing the correct numerical code. Following the computational process, the output is relayed to the data display unit and the voice outlet. The data display unit, located in the base weather station, consists of a display table, where information from the data input control is shown, and a television camera, which transmits this information on the display table to all authorized areas connected with the system. Of particular interest to this reference is the voice outlet, composed of two main units: the voice drum and the base-wide intercom and telephone
hookup. The voice drum is simply a magnetic tape that has a vocabulary of some 58 words. This tape is run through a cylindrical drum that amplifies and transmits the words throughout the base. For the words to make sense and have a meteorological meaning, the data fed through the input units are transmitted in a code that is deciphered by the equipment in the voice drum unit. With this, the words are stated in an intelligible manner to have a definite meteorological meaning. Similarly, technological evolution during the past few years has permitted a substantial increase in the amount of weather data collected, through a variety of electronic sensing devices, including satellites and radar. As with any real-time control application, weather data is highly perishable, and, unless one gets the proper data to the proper place fast, the information may be worthless. For example, the point may be reached where the pilot in a fast aircraft will be moving faster than the weather information he needs. Screening, evaluating, and transmitting the huge amount of information is the most critical problem today. This is the reason why in air-traffic control, among a substantial number of cases, we are looking forward to a system that will accept data transmitted from weather sensors, digitize it automatically, run it against a program that will eliminate unnecessary information, make a decision on the basis of the retained information, and then transmit that decision where it is needed. This essentially brings our discussion to the need for mathematical simulation.
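In outline, such screening is a filter followed by a decision rule. The sketch below is only an illustration of the idea: the significance thresholds and the record layout are assumptions, while the 22-knot storm criterion is borrowed from the floating-station example above.

```python
def screen_and_decide(raw_readings, last_sent, significance):
    """Pass on only readings that changed significantly since the last report,
    then apply a simple decision rule (thresholds here are illustrative)."""
    significant = {k: v for k, v in raw_readings.items()
                   if abs(v - last_sent.get(k, float('inf'))) >= significance.get(k, 0)}
    decision = None
    if raw_readings.get('wind_speed_kt', 0) >= 22:      # storm threshold from the text
        decision = 'broadcast hourly'
    return significant, decision

thresholds = {'air_temp_c': 0.5, 'pressure_mb': 1.0, 'wind_speed_kt': 2.0}
print(screen_and_decide(
    {'air_temp_c': 11.2, 'pressure_mb': 1013.0, 'wind_speed_kt': 24.0},
    {'air_temp_c': 11.0, 'pressure_mb': 1009.0},
    thresholds))
```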
USING WEATHER DATA FOR PREDICTIVE PURPOSES

After the information from the automated weather data collection system has been received, it must be subjected to certain operations so that it will become helpful for predictive purposes. Projects on mathematical weather prediction have been carried on by scientists for a certain period of time. We know, for instance, that it is not too difficult to predict the motion of a storm already in existence, but in the past it has not been possible to predict the start of a storm (cyclogenesis). Before the advent of the data processor, meteorologists were handicapped in their calculations because of the complexity of computations necessary for accurate forecasting. The whole framework of weather prediction routines has now basically changed, and Fig. 5 presents the general organization of an on-line data collection and weather prediction system.

FIGURE 5. The environment: sensors for wind direction, barometric pressure, air temperature, and other critical data; antenna system; central computer.

The mathematical aspects that should be considered in the weather research effort must be properly underlined. In constructing a hypothetical atmosphere, the analyst must first select a system of physical laws that are assumed to be most important in determining atmospheric movements and
evolutions. The laws must then be expressed in differential equations, which are numerically analyzed and programmed. The complexity of the model is limited by the capacity of the computer to be used, and, as we stated, early models described the motions of the atmosphere in a relatively simple manner. Based on this simulator, the computer evaluates the movements of the atmosphere over a series of time steps. The model should not be considered correct unless it realistically simulates possible atmospheric behavior over relatively extended periods of time.

The following is a brief description of an early mathematical model, which has given valuable information on weather development. Daily observations are received from more than 150 weather stations. Temperature, pressure, humidity, wind velocity, and direction are recorded; these are values obtained at various altitudes, depending on the type of observing equipment used. The observations are interpolated to obtain the values at each of 256 grid points (16 x 16). The subject grid covers the United States, and three levels are used, one each at 3000, 10,000, and 25,000 feet.
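The interpolation of scattered station reports onto the 16 x 16 grid can be done in several ways; the model itself does not prescribe one. The sketch below uses simple inverse-distance weighting, with station coordinates assumed to be already expressed in grid units, purely as an illustration of this step.

```python
def to_grid(stations, nx=16, ny=16, power=2.0):
    """Interpolate scattered station reports onto an nx-by-ny grid by
    inverse-distance weighting (one of several possible schemes).

    `stations` is a list of (x, y, value), with x and y in grid units.
    """
    grid = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            num = den = 0.0
            for (x, y, value) in stations:
                d2 = (x - i) ** 2 + (y - j) ** 2
                if d2 == 0.0:
                    num, den = value, 1.0   # station sits exactly on the grid point
                    break
                w = 1.0 / d2 ** (power / 2)
                num += w * value
                den += w
            grid[j][i] = num / den
    return grid

# Three hypothetical surface-pressure reports, in millibars
pressures = [(2.0, 3.0, 1012.0), (9.5, 8.0, 1004.0), (14.0, 2.5, 1018.0)]
print(to_grid(pressures)[0][0])
```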
The relative motion of large air masses can be calculated from equations in the field of fluid dynamics. The theoretical differential equations have been modified to the form of difference equations suitable for processing by digital techniques. The air mass motions are calculated at half-hour intervals for a total of twenty-four hours in advance. It is perfectly feasible for the data processor to perform the interpolation between the observations to obtain the grid point values, since this relationship remains constant. When the final pressure values have been calculated for the 24-hour period, they are printed by the computer directly onto a map of the United States in the appropriate positions. A weatherman can then draw the lines connecting points of equal pressure (isobars). This map serves as the basis for the actual weather forecast. For a given grid size and time interval, more time steps result in a smaller physical area of accurately forecasted values. Hence, for longer-range forecasts a very fine grid must be available.

Statistics can be used to advantage in weather prediction to eliminate the very complicated system of thermodynamic and hydrodynamic equations. The atmosphere repeatedly solves these equations in going from one state to another. Atmospheric processes are stochastic, at least in the way we sense them. A future state cannot presently be uniquely determined because of the incomplete way in which the atmosphere is measured both in time and in space. The nondeterministic nature of the atmospheric processes must in some way be handled by the probability relationships obtained from a statistical analysis. One of the statistical methods used represents the surface pressure distribution mathematically, with the parameters describing this distribution applied as predictors in a multiple linear regression equation. The predictant can be any weather element desired. Because of the complexity of the operation, computers have been used for the initial computation, such as the fitting of the parameters and the inversion of the covariance matrix.

The mathematical process considers the following: From the hydrodynamic equations, the distribution of pressure at the surface approximates the large-scale features of the horizontal field of motion. It is for this reason that meteorologists have considered the surface pressure distribution as a basic tool in their forecasting procedure. Because the number of points at which surface pressure is measured at a given synoptic observation is very large, certain steps need to be taken to reduce the number of variables necessary to represent the pressure distribution in a statistical study:

• First comes the reduction of the area to a size reasonable for the interval over which predictions are to be made.
• Second, the number of pressures observed in this grid is reduced to only those values at each of the five-degree latitude and longitude inter-
section points interpolated from the smoothed pattern drawn to a large number of observed pressures. • Third, the resulting pressure points are reduced still further by fitting a family of orthogonal surfaces to each pressure pattern. The parameters measuring the goodness of fit of these surfaces to the surface to be approximated are then used as predictors in a multiple linear regression equation. In a research project in this domain, the fitting of the polynomial surfaces for each of the surface pressure distributions was accomplished on a largescale computer. The same was done for the sums, sums of squares, and sums of cross products for the normal equations, the covariance matrix and its inverse. A medium-size computer was then used in the first stages of the evaluation of the prediction equations. The final step in the prediction procedure was to plot on a weather chart the tabulated numbers. The forecaster could then analyze this chart indicating such things as fronts and centers of high and low pressure. The precipitation probability was put on the chart with an estimate of the amount of precipitation that might be expected if it does occur. Along these lines, the U.S. Weather Bureau uses a large-scale system to produce numerical weather data on a simulated basis. Events ranging in scale from a single thundercloud to world-wide weather are considered, including a model of scale weather processes. This simulator provides the groundwork for experimental studies, and results are sought concerning the following fundamental questions: • Why does the atmosphere respond in the way it does to energy from the sun? • How and why does the atmosphere transform this energy from the sun through various stages before it is ultimately dissipated? • Of all the possible motions that one can imagine in a fluid such as the atmosphere, why do we observe only a few? • What governs the periodic wave pattern of the powerful jet streams found at the altitudes where modern aircraft fly? • What physical processes produce the boundary between the stratosphere and our ordinary weather zone? • To what extent do variations of the earth's surface determine our climate? • How are global weather patterns modified by the "local" characteristics of oceans and continents with their mountains and valleys, deserts and forests, steaming tropics and frozen polar ice caps? • What is the relationship between the circulation in the Northern and Southern Hemispheres?
• How are the stratosphere and lower atmosphere coupled?
• Are variations of the sun's radiation a significant factor in the weather we experience?

If man is to modify the weather or even to forecast it for long periods, these questions, among others, must be answered. Answers to such questions will enable meteorologists to improve their long-range forecasts, both in accuracy and in timeliness. As techniques are perfected, numerical weather simulation may also be used for investigating more generic problems and point the way toward some practical means of modifying weather conditions.

In this pioneering work, the basic processes of global weather are simulated in an extensive and fundamental manner. The simulator is designed to print out numerical information on the "weather" at each of nine different atmospheric levels, reaching up to the top of the stratosphere. This may be detailed information, either printed maps or listings, showing the simulated weather for 10,000 grid points at each level in terms of pressure, temperature, wind velocity, and relative humidity. The output may also be in the convenient condensed form of "integrated" numerical values that sum up the effects of global weather processes and thus provide sensitive indicators of cause-and-effect relationships. The computation for each of the 90,000 atmospheric grid points takes into account weather effects and variations due to latitude and season. In dealing with the complex radiation processes of heat lost from the earth as well as energy gained from incoming solar rays, the model computes radiation effects for the appropriate amounts of ozone, carbon dioxide, and water vapor at each of the atmospheric levels.
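Returning to the statistical prediction scheme outlined earlier in this section, the regression step itself is straightforward once the surface-fit parameters are in hand. The sketch below uses synthetic numbers in place of the fitted orthogonal-surface coefficients and is offered only as an illustration of the least-squares fit, not as the actual Weather Bureau program.

```python
import numpy as np

def fit_prediction_equation(predictors, predictand):
    """Least-squares fit of a multiple linear regression equation, with the
    surface-fit parameters as predictors (synthetic data for illustration)."""
    X = np.column_stack([np.ones(len(predictors)), predictors])   # add an intercept term
    coeffs, *_ = np.linalg.lstsq(X, predictand, rcond=None)
    return coeffs

def predict(coeffs, new_parameters):
    """Evaluate the fitted equation for a new set of surface-fit parameters."""
    return coeffs[0] + np.dot(coeffs[1:], new_parameters)

rng = np.random.default_rng(0)
params = rng.normal(size=(200, 6))      # six assumed orthogonal-surface coefficients per case
element = (2.0 + params @ np.array([1.5, -0.7, 0.3, 0.0, 0.9, -1.1])
           + rng.normal(scale=0.2, size=200))     # synthetic weather element to be predicted
coeffs = fit_prediction_equation(params, element)
print(predict(coeffs, params[0]))
```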
Chapter XXXII

AIRCRAFT DETECTION AND IN-FLIGHT TRACKING

In the preceding two chapters reference was made to the fact that computers operating on real time are not an exclusive characteristic of automation for factories with continuous or semicontinuous flow of products. Airline reservation systems, traffic control, and department store operations are examples of possible real-time applications with "discrete" particles. Industrial and business applications in this area hold great promise, even if, at present, the single largest area of application for real-time systems with discrete particles is that of the military.
AIRCRAFT DETECTION SYSTEMS

Aircraft detection systems are subject to a high degree of obsolescence as a result of rapid advances in airplane technology. Late World War II detection systems, for instance, depended to a substantial degree on the fact that a major portion of the energy returned to a radar from an aircraft was reflected by the propeller. In addition, the engine configuration, particularly for the very large horsepower engines of the propeller-type fighters, demanded large-diameter fuselage designs, even for relatively small airplanes. On such targets, the returned energy was sufficient to provide adequate warning for the speeds then available.

The jet aircraft, however, immediately provided a major increase in target velocity and slender, propellerless fuselages. The over-all result of these two changes was to reduce the effective range of detection systems by at least a factor of four and warning time by a factor of eight. Hence, nearly all of the radar systems so painstakingly developed during World War II were made obsolete, and development had to begin again. These difficulties have been further extended by the advent of supersonic
aircraft with an even smaller echoing area, and a velocity so great that the horizon-limited range of radar may be insufficient. This is especially true in view of the tremendous destructive potential of even a single military plane carrying nuclear weapons. The use of ballistic missiles as offensive weapons not only introduces a further increase in target velocity with a virtual minimum of radar-detection area, but detection systems must also provide information on the precise location of the target at longer distances.

Electromagnetic radiations covering the spectrum from radio to infrared are now utilized in electronic detection systems. Various frequencies have their individual advantages. The lower frequencies yield higher power, more sensitive receivers, and greater freedom from atmospheric attenuation. The higher radio frequencies favor higher antenna gain, smaller size, and finer target discrimination and detail. But more important than the development in the characteristic behavior of the different components is the evolution in the "systems concept." This can best be appreciated by studying the information network that forms the basic frame for early warning and control operations. A real-time military system for the control of the air space places missile batteries in a waiting line and evaluates whether or not the missiles will be fired according to the "received" data.* This distinct capability accounts for much of the difference between the state of the art of some twenty-five years ago and the present state.

Figure 1 presents a traditional operating system of World War II. Airplanes were sensed through man-machine units. This information was transmitted, often verbally or by telegraph, to a monitoring unit that was composed almost exclusively of humans; there, fire commands were initiated for the antiaircraft batteries, which again were man actuated.
FIGURE 1. Sensory unit (man-machine coordination), tactical monitoring (human), and man-actuated batteries.
* For a more detailed discussion on this subject, see the following section of the present chapter.
FIGURE 2. Sensory units (radar, etc.), automatic tactical monitoring, and master tactical monitoring (human).
Figure 2 presents an automatic system. Airplanes or missiles are sensed by the radar. The information received by the sensory units can be processed through paths a, b, or c. In every case, it ends in the automatic tactical monitoring unit, which is a specially designed computer. A complete instruction program is stored in the memory of the computer. On a real-time basis, each instruction is read and evaluated. Its execution depends on the "actual data" received at the center. The computer is thus able to direct the batteries of missiles, which are automatically actuated. The whole evaluation process takes only a few seconds.

If the information from the sensory units takes path "a," the whole operating process is completely automatic. But as it now stands this real-time control system includes a master tactical monitoring unit with human elements. In some cases, it is advisable to have the information screened by this unit before it is transmitted to the computer (path b). It may also be necessary to establish a strategic unit that will cross-examine the received data (paths c1 and c2, respectively). Though, from a technical point of view, these deviations weaken what might have been a perfect case of digital control, their implementation was deemed necessary because of our lack of experience with the total behavior of data automation ensembles.

In spite of the fact that a great deal of work still remains to be done, early warning of the approach of enemy aircraft is one of the outstanding applications of an automatic system. This is so because one of the major characteristics of a ground-based detection system is the necessity of establishing such data control setups as portions of a rather extensive defense network. As with a good number of industrial, nonmilitary applications, in the days of World War II it was possible to operate on a point-defense concept, and hence a
detection system could be associated with the immediate users of such information. The advance in aircraft capabilities to which we made reference, and the necessity of achieving very high attrition rates because of the nuclear capabilities of enemy aircraft, make this evolution to data automation mandatory.

This particular reference to the impact of the systems concept on both military and industrial applications of data ensembles, and to the transferability between certain aspects of military and of industrial usage of digital systems, should be put in proper perspective. In Chapter XXXI we spoke of guidance approaches to air traffic. The SAGE system, which we will examine in the following section, has in fact been considered for civilian air traffic use. Such is also the case with subsystems and components, as the following example helps indicate.

Texas Instruments' Apparatus division developed an experimental alphanumeric radar display generator as part of the FAA's Advanced Radar Traffic Control System in Atlanta. This equipment is designed to allow controllers to "write" on the radar screen the flight number and altitude of the aircraft, and a velocity vector to show where the aircraft will be at any future time. This information tag then follows the target blip as it moves across the radar screen.

With respect to design, the message tag is controlled by a computer that puts the tag in the proper place on the scope and also moves it in relation to the target blip on the radar screen. This tag is referenced to the position of an aircraft symbol, put on the scope by the traffic controller to distinguish between military, commercial, private, or other types of aircraft. The man-actuated control equipment positions the aircraft symbol by a joy stick on the console. By moving the stick, the controller positions a small light dot on the radar screen until it is on top of the radar blip. He then pushes a button on top of the stick, and the computer calculates the x, y coordinates for positioning the aircraft symbol on top of the target blip. At the same time, the operator writes on his console keyboard all flight information he wants displayed with the target symbol. A total of 59 letters, numbers, and symbols is available for data input purposes.

The systems operators also have available a choice of one hundred different functions for instructing the computer. These include initiate or terminate tracking, test modes, and an information-only mode. Positioning of the message tag in any of eight positions around the target symbol is also possible by selecting the right function. If two target blips move close enough together so that message tags overlap, the controller can switch the position of any or all target message tags. Similarly, he can blank out any part of the message tag he does not wish to examine.

The output format of the computer includes the address of the message
tag to the proper console or consoles, positioning information for the aircraft symbol in x, y coordinates on the screen, direction of the leader to the message tag, velocity components for the velocity vector line, and alphanumeric symbols in the message. A complete message tag can be written out in 1600 μsec, that is, within two radar sweeps. Furthermore, the data generation system is time shared with the radar video. Component units of this nature are of equal importance to civilian and to military approaches to air-traffic control.
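A minimal sketch of the kind of bookkeeping such a display computer performs is given below: it keeps the screen coordinates of a designated blip, attaches the flight information keyed in by the controller, and offsets the message tag to one of eight positions around the target symbol. The data structure, the offset table, and all numbers are illustrative assumptions, not the actual FAA equipment logic.

```python
from dataclasses import dataclass

# Eight candidate tag positions around the target symbol (screen offsets in
# arbitrary display units); the controller may switch among them when tags
# from nearby targets would otherwise overlap.
TAG_OFFSETS = {
    "N": (0, -30), "NE": (22, -22), "E": (30, 0), "SE": (22, 22),
    "S": (0, 30),  "SW": (-22, 22), "W": (-30, 0), "NW": (-22, -22),
}

@dataclass
class TrackedTarget:
    x: float                  # screen x of the radar blip
    y: float                  # screen y of the radar blip
    flight_info: str          # e.g. flight number and altitude, keyed in by the controller
    tag_position: str = "NE"  # which of the eight positions the tag occupies

    def tag_origin(self) -> tuple[float, float]:
        """Where the message tag should be drawn for the current blip position."""
        dx, dy = TAG_OFFSETS[self.tag_position]
        return self.x + dx, self.y + dy

    def update_blip(self, x: float, y: float) -> None:
        """Called each radar sweep: the tag follows the moving blip."""
        self.x, self.y = x, y

target = TrackedTarget(x=120.0, y=85.0, flight_info="EA214 FL310")
target.update_blip(124.5, 83.0)   # next sweep
print(target.flight_info, target.tag_origin())
```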
AN EARLY EXPERIMENT IN DATA AUTOMATION TECHNIQUES

The foregoing concepts are clearly illustrated by the establishment of the distant early warning line for the defense of the North American continent. For this line of outposts to be useful, the information gathered must be rapidly transmitted to a central point where effective warnings can be issued for all active and passive defense measures, including retaliation. SAGE, the semiautomatic ground environment system, further illustrates the necessity of a network of "sensory pickups," or detection stations.*

The central processor of the semiautomatic ground environment system stores information from flight plans, radar stations, radar picket ships and planes, weather stations, and ground observer posts (Fig. 3). It correlates this information with, for instance, aircraft identification. It also uses these data to compute course, altitude, and interception times.

FIGURE 3

Search media on early warning aircraft, early warning ship patrols, long-range land-based radar, etc., pick up all planes entering the air defense identification zone. They transmit information to the computer, which determines automatically whether planes are friendly or "unidentified." From radar data, the machine computes altitude, speed, and direction of approaching aircraft. If planes are unidentified, it works out the best interception method and, with the aid of radio-radar relay stations, orders countermeasure units into action. In this manner, the system can sight the approach of an attacker, compute its course, speed, and altitude, notify an interceptor to meet it, and guide the fighter to the kill. It can even fly the defending airplane, while the pilot sits with "hands off," or it can trigger a guided missile and steer it to the target.

Much of the importance of digital control action lies in the system's ability to handle in seconds the mathematical computations involved in an

* Though the SAGE system is by now obsolete, the "systems" concepts and approaches that it helped develop have opened new horizons to data control.
intercept. Problems of this nature would have taken the most skilled Air Force teams considerable time to solve and transmit, if they could be approached in the first place. In more detail, these operations are as follows: When a penetrating airplane is picked up on the radarscope, an operator must track it for a time in order to compute its speed, altitude, and direction. This information must be relayed to a direction center where a master plot of all aircraft in the area is maintained. Direction-center personnel should determine the identification of the plane by consulting prefiled flight plans of a number of known aircraft and comparing them with the path of the "unknown." If, in the course of the forementioned comparison, it is decided that the plane is "hostile," the aircraft is marked in red on a plotting board. Then, the course a defending fighter must take is automatically calculated. Throughout the interception, the radar station must continuously report the progress of the hostile aircraft. If it deviates from its initial course, the mathematics of the intercept must be recomputed and the fighter advised.

The SAGE system handles all these steps automatically. There is little conversation among members of the operating crews. What information must be passed from one station to another is transmitted by means of machine-to-machine communication. When one of the radar stations connected to the system picks up a plane, the information is relayed over telephone lines to the computer memory without human intervention. A radar blip immediately appears on all the scopes in the control center. As the plane continues on its course, the radar station tracks it and passes the data to the computer, which studies the information and automatically computes the plane's direction, speed, and altitude. A picture of the plane's flight path appears on the console scopes; next to it, in coded symbols, appear the performance data.

To perform its on-line control action, the SAGE system has stored in its memory an electronic map of the defense weapons, including available fighters, antiaircraft guns, and guided missiles. Their locations relative to the flight path of the hostile plane can easily be established. From this electronic map, the computer is able to determine quickly which air base is closest to the attacker. Knowing the performance of the fighter aircraft, it can also compute the point at which the interception should take place, the heading the fighter should take to intercept, and the altitude it should attain. All this is displayed pictorially on the scope. When the digital control system indicates that the attacker is over or approaching an antiaircraft or missile battery, SAGE's operator-observer turns the action over to another section, the antiaircraft console, manned by an officer. Since this machine operates at speeds faster than data can be received, it can project forward in time the future course of each aircraft's flight path.
In this manner, it provides for immediate identification of hostile planes from friendly ones and determines the most effective defense action to be taken. By means of stored subroutines, the computer performs automatically some data-generation functions, too. For example, given the direct line distance from a radar to an aircraft and the relation of this line to "true north" in the form of an angle, it can pinpoint the aircraft's actual position. At all times, on the basis of stored data and instructions, a complete air situation is presented by the computer to its human linkage via a special display system having over 100 television-type cathode ray consoles, projection screens, and other visual aid devices. Command decisions are then made by the Air Force team, with the computer maintaining electronic control of the intercepting aircraft's or missile's flight path to make the "kill" of the enemy aircraft.

To meet the special reliability requirements of the system for an around-the-clock air defense, scientists and engineers have been engaged in a continuing program of improvement and upgrading for the individual components and the over-all design. Through this program, component specifications have been raised well above minimum initial military procurement requirements, thus providing continually increasing equipment reliability and long life. Furthermore, automatic self-checking circuitry was built into the machine to assist in predicting, isolating, and subjecting failures to immediate correction without stopping the computer's operation. To guard against failure of the system at a crucial time, every computer center has two basic machines, with a spare memory working on a stand-by basis. Both subsystems store all the information fed to them, but only one actually operates. Should something go wrong, the stand-by machine automatically picks up where the other left off.

Meeting defense requirements with a computer system capable of operating around the clock, seven days a week, requires skills and knowledge ranging from advanced mathematical theories through the intricacies of memory and logical circuit development and design to an understanding of the versatility of modern electronic equipment. Though SAGE has shown several imperfections and, in certain cases, failed to pass air-control fail-safe tests, it remains, nonetheless, true that it constitutes a major step forward in the direction of digital control. Perhaps the best lesson we have learned from SAGE is that the role of the computer in air defense is to function as a vital element in an information processing system, thus structuring the capability for centralized control. We also have learned that, if the whole is to have an optimum response characteristic, all information systems functions must be entirely compatible. But is this not the fundamental requirement for all data control ensembles?

Man-made organizations, whether industrial complexes or air-defense
projects, are primarily systems self-adjusting in character. This imposes a minute analysis of their actual response characteristics if programmed systems are to be developed in an able manner. The information network should serve a "homeostatic" purpose*: a vast subject of which we have only recently become aware.

Speaking of national defense functions, Tetley identifies the following areas of consideration for data control systems:

• The local computer functions located within the confines of individual regions and associated directly with individual weapons and their support systems.
• The regional computer functions, which, by the use of local or slave computers, perform the so-called air-defense mission.
• The central computers, which, at national level, would manage and coordinate all regional air-defense activities.
• Numerous support computer functions outside of air-defense regions but which devote considerable frame time to the support of the air-defense mission.

In this, Tetley distinguishes between computer functions, computer systems, and the actual computers themselves. He defines "computer functions" as those closely associated with the basic air-defense steps, computer systems as networks consisting of the central computer function and its remote or slave functions,† and "actual computers" as essentially the hardware and the associated programs that perform compatibly within the computer system. This definition is perfectly compatible with industrial operations and can be generalized to that end.
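To make an earlier remark concrete (the stored data-generation subroutines that, for example, convert a radar's direct-line distance and its angle from true north into a map position, and project a track forward in time), a minimal sketch follows. The flat-earth geometry, the function names, and the constant-velocity projection are illustrative assumptions only, not SAGE's actual routines.

```python
import math

def position_from_radar(range_miles: float, azimuth_deg: float,
                        radar_x: float = 0.0, radar_y: float = 0.0) -> tuple[float, float]:
    """Convert a direct-line distance and its angle from true north
    (measured clockwise) into x, y map coordinates, flat-earth approximation."""
    theta = math.radians(azimuth_deg)
    return (radar_x + range_miles * math.sin(theta),
            radar_y + range_miles * math.cos(theta))

def project_track(x: float, y: float, speed_mph: float, heading_deg: float,
                  minutes_ahead: float) -> tuple[float, float]:
    """Dead-reckon the future position of a track, assuming constant speed and heading."""
    distance = speed_mph * minutes_ahead / 60.0
    theta = math.radians(heading_deg)
    return x + distance * math.sin(theta), y + distance * math.cos(theta)

# A hypothetical target 80 miles out, bearing 045 deg, flying south-west at 600 mph:
x, y = position_from_radar(80.0, 45.0)
print(project_track(x, y, 600.0, 225.0, minutes_ahead=5.0))
```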
COMPUTER ROLE IN SATELLITE AND MISSILE TRACKING

One of the current jobs with a reasonable degree of on-lineness concerns the use of a data processor in the digesting of data recorded from each guided missile firing, and in providing instantaneous results of the data reduction computations. Another important job is "flight simulation." In such studies, the behavior of the missile is mathematically described and processed through the computer in a simulation of actual flight.
* See also D. N. Chorafas, "Information Science." Akademie Verlag, Berlin (in preparation).
† For air-defense purposes these systems, in different degrees of complexity, would perform collectively at both regional and national levels. The same is true about message switching networks.
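As a toy illustration of what such a flight simulation involves, the sketch below integrates a point-mass trajectory under gravity and a constant thrust, and at each step reports where the vehicle would fall if its power were cut at that moment, which is the kind of impact prediction the range-safety computation described in the following paragraphs must make. All numbers, the drag-free flat-earth assumptions, and the function names are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_point_if_power_fails(x: float, alt: float, vx: float, vz: float) -> float:
    """Given the vehicle's current downrange position (m), altitude (m) and
    velocity components (m/s), return the downrange coordinate where it
    would fall if thrust stopped now (ballistic, drag-free approximation)."""
    # Solve alt + vz*t - 0.5*G*t**2 = 0 for the positive root.
    t_fall = (vz + math.sqrt(vz * vz + 2.0 * G * alt)) / G
    return x + vx * t_fall

def simulate_powered_ascent(thrust_accel: float, pitch_deg: float,
                            burn_time: float, dt: float = 0.5) -> None:
    """Crude point-mass simulation of the powered phase; prints a predicted
    impact point at every step, as a range-safety display might."""
    x = alt = vx = vz = 0.0
    theta = math.radians(pitch_deg)
    t = 0.0
    while t < burn_time:
        ax = thrust_accel * math.cos(theta)
        az = thrust_accel * math.sin(theta) - G
        vx, vz = vx + ax * dt, vz + az * dt
        x, alt = x + vx * dt, alt + vz * dt
        t += dt
        impact_km = impact_point_if_power_fails(x, alt, vx, vz) / 1000.0
        print(f"t={t:5.1f} s  alt={alt/1000:6.2f} km  predicted impact={impact_km:7.2f} km downrange")

simulate_powered_ascent(thrust_accel=25.0, pitch_deg=75.0, burn_time=10.0)
```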
Data are transmitted electronically from the observation stations to the data reduction center, and fed directly into the machine. The sensory means for data on missile flights can be surveying instruments, tracking the missiles in flight on film and recording azimuth and altitude measurements on each frame together with the picture of the missile. Since the instruments take as many as 5000 frames per second, the volume of in-flight data is enormous. It must be computed and analyzed after each flight, and this is exactly where the competitive advantages of the computer lie.

With respect to application, it is essential that data for each flight, including the pitches and yaws of the missile, as well as information obtained from scores of instruments in the missile itself, be checked and analyzed in order to make adjustments in future flights. Erratic behaviors have to be analyzed from flight data and their cause determined. Correspondingly, before any satellite-bearing rocket lifts off its Cape Kennedy launching pad, two large-scale computers are standing by to monitor, compute, and predict the flight of the satellite launching vehicle and its payload.

One of the two stand-by computers receives radar data immediately after launching of the missile. Its job is that of forecasting for the Range Safety Officer where the rocket would fall if its power failed at any moment during its climb up to orbital altitude. If the officer, watching the computer's radarscope-like picture, sees that the vehicle is heading out of the predetermined safety boundaries, he can destroy it. The ability of the computer to digest the situation in a fraction of a second, including all forces acting on a newly launched rocket, and from them predict its trajectory, gives the rocket launching program a significant margin of safety.

While the rocket is still coasting up to the altitude where it will fire its last stage, driving the satellite to a sufficient horizontal speed, the Cape Kennedy computer transmits preliminary data on the rocket's position and velocity to a central computer. This machine combines the coasting information with data on the expected performance of the rocket's last stage. From these, it produces the first computations of whether the satellite has a chance of orbiting and, if so, what the orbit will look like. Such calculations come off the computer 10 to 12 minutes after launching, or just after the rocket's third stage has burned itself out. On any given satellite launching attempt, the central computer calculates a preliminary orbit, and alerts can then be flashed to observation stations over which the satellite should pass. Following this preliminary orbital calculation, the computer continues to absorb data, refining its prior computations constantly, to reach the point where it can calculate an orbit with great precision.

Here, digital control must meet some basic characteristics. Calculating speed is essential, for the satellite's orbit must be known exactly, checked
and rechecked. Normally, the life of a satellite's broadcasting batteries is limited, and upper atmosphere scientists must extract all the information they can while the satellite's radio is operating. It is, therefore, of vital importance that the exact position of the satellite be known at the moment when any of a number of reports from space are received. Nearly as important is the necessity to predict the satellite's path with precision so that observation stations can be alerted as to where and when it will appear, in order that these stations can improve readings over those possible from chance sightings.

Antennas For Missile Tracking

The antennas used for gathering radio telemetry data from guided missiles and earth satellites are a very important part of the over-all launching facilities. Telemetering reception at the Atlantic Missile Range provides coverage for guided missiles from Cape Kennedy on the Florida mainland, through the Bahama Islands and Lesser Antilles, and across the equator to Ascension Island in the South Atlantic. This instrumentation chain may be likened to a large laboratory that is over 5000 miles long, whose purpose is to acquire scientific data during experiments. One of the most important data acquisition systems on the Atlantic Missile Range is the radio telemetry equipment, which gathers over 85% of the data during a typical test flight and records measurements that cannot be made by any other method.

The telemetry antennas used at Cape Kennedy must exhibit certain characteristics to accomplish the purpose of receiving unguided radio energy and providing a suitable source to the input of the receiving system. These antennas can be used at different frequencies. The field strength of the received waves has a large dynamic range, the polarization of the signal varies, and the environment is warm and humid, with salt spray and wind gusts of up to 50 miles per hour. The reason for these diverse requirements of telemetry receiving antennas can be found in the fact that Cape Kennedy needs the capability of receiving many telemetry transmissions simultaneously so as not to delay the test schedule of a missile program. A typical missile flight may carry four or more telemetry transmitters, each having a spectrum bandwidth of 500 kilocycles per second. Past experience has shown it to be impracticable to transmit on adjacent channels on any one missile because of transmitting antenna multiplexing difficulties. Also, during the test flight of one missile, another missile may be undergoing preflight checkout and simultaneously transmitting on its telemetry channels.

The forementioned factors point to the complexity of the over-all system. To these should be added the fact that the probability of future variations
is rather strong. This means that the established system should be able to accept the expected increases in requirements arising from the number of future tests. Hence, the telemetry band has been extended to include all frequencies from 216 to 260 megacycles per second.

When a missile is just rising off its launch pad, the nearby telemetry receiving antennas will be immersed in an intense field due to the proximity of the transmitter; but as the rocket recedes in the distance and finally disappears below the horizon, the field strength decreases to a very low level. The extremes of field strength that are encountered at Cape Kennedy vary from 1 to 20,000 microvolts per meter.

Wave polarization presents a special problem on the Atlantic Missile Range that is not ordinarily encountered in radio data links. Aerodynamic considerations usually dictate the use of linear polarization of the telemetry wave that is transmitted from the test vehicle. Moreover, no matter what the spatial attitude of the vehicle may be during the flight, satisfactory telemetry reception must be provided. In the event of a missile malfunction that causes unwanted gyration, tumbling, spinning, and so forth, good telemetry reception is very important so that the trouble may be located and corrected on subsequent tests. Erratic flights result in a change of polarization of the received wave to such an extent that the Cape Kennedy telemetry receiving antennas must have the capacity of receiving linearly polarized waves that are horizontal, vertical, and at all angles in between.

When a large missile is launched, the initial field strength of the telemetry wave is high, but as the missile continues to gain altitude, the received signal strength decreases for two reasons: first, the path attenuation increases according to the well-known inverse-distance law, and second, a large, erratic attenuation occurs as the missile speeds through the upper layers of the earth's atmosphere. Diminution of the telemetry signal caused by this second factor has led to loss of data during a critical portion of the trajectory, namely, burnout and staging. Data transmitted from the missile at this time are extremely important, and the problem of avoiding loss of data has been solved by two methods:

• The utilization of portable telemetry receiving stations sited north and south of Cape Kennedy.
• The utilization of a higher gain receiving antenna.

Generally, when a missile is passing through the region causing additional signal attenuation, it has sufficient altitude to permit line-of-sight reception conditions for the ground receiving stations at Cape Kennedy, Spruce Creek, Vero Beach, and Grand Bahama Island. Thus, even though each station may experience a brief loss of signal, complete telemetry coverage is provided.
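The dynamic range quoted above is easy to put in perspective with a short calculation. The sketch below expresses the 1-to-20,000 microvolt-per-meter span in decibels and shows how the inverse-distance law alone scales the received field strength; the reference distance and reference level are illustrative assumptions.

```python
import math

def db(ratio: float) -> float:
    """Express an amplitude (field-strength) ratio in decibels."""
    return 20.0 * math.log10(ratio)

# The extremes of field strength quoted in the text: 1 to 20,000 microvolts per meter.
print(f"dynamic range = {db(20_000 / 1):.1f} dB")   # about 86 dB

def field_strength(e_ref_uv_per_m: float, d_ref_miles: float, d_miles: float) -> float:
    """Inverse-distance law: free-space field strength falls off as 1/d.
    e_ref is the field strength measured at the reference distance d_ref."""
    return e_ref_uv_per_m * d_ref_miles / d_miles

# Illustrative numbers only: 20,000 uV/m measured 1 mile from the pad.
for d in (1, 10, 100, 1000):
    print(f"{d:>5} miles: {field_strength(20_000, 1, d):>10.1f} uV/m")
```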
A SATELLITE TRACKING NETWORK

Once a satellite is aloft, two immediate problems arise: how to prove the satellite is actually orbiting and how to determine the precise orbit it is following. The first, called the acquisition problem, is essentially that of locating the object. The second, called the tracking problem, is that of measuring its angular position and rate with sufficient accuracy to alert the nonacquiring stations and inform them as to the expected time and location of the object. The solution of these two problems is met by a network of tracking stations.

This tracking network utilizes an oscillator of minimum weight and size to "illuminate" pairs of antennas at the ground stations, where the angular position is determined.* Employing radio phase-comparison techniques, the system is independent of weather conditions, visibility, and time of day, so operation is assured whenever the satellite is within the ground station antenna beam.

Figure 4(a) shows how the phase-comparison approach works. The satellite S is the signal source; A1 and A2 are two receiving antennas. The signal arrives at antenna A1 at the same time it arrives at point P on the way to antenna A2; thus, it arrives at A2 some time later. If the phase of the radio signal arriving at antenna A1 is compared with the phase of the radio signal arriving at antenna A2, the phase difference will be a direct measure of the radio path PA2, where PA2 is A1A2 sin β. By the addition of another set of antennas, orthogonal to the original set [as shown in Fig. 4(b)], two direction cosines are measured, giving the complete angular position of the source. A tracking station has seven antennas, including five antenna-pairs.
FIGURE 4. Phase-comparison geometry: (a) the signal from satellite S reaches antenna A1 and point P at the same time, so the phase difference between the two antennas measures the path PA2 = A1A2 sin β; (b) a second antenna pair, orthogonal to the first, measures the second direction cosine.

* Reference is made to the American Tracking Network.
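A minimal numerical sketch of the phase-comparison principle follows: from a measured phase difference between the two antennas of a pair, it recovers sin β, the direction cosine along that baseline. The beacon frequency, the baseline length, and the assumption that the whole-cycle ambiguity has already been resolved (the job of the coarser antenna pairs listed below) are illustrative.

```python
import math

def direction_cosine(phase_diff_deg: float, baseline_m: float, wavelength_m: float) -> float:
    """Phase comparison: the extra path to the far antenna is
    PA2 = (phase difference / 360) * wavelength = baseline * sin(beta).
    Returns sin(beta), the direction cosine along the baseline.
    Assumes the whole-cycle (360 n degrees) ambiguity is already resolved."""
    path_difference = (phase_diff_deg / 360.0) * wavelength_m
    return path_difference / baseline_m

# Illustrative numbers: a 108-Mc/s beacon (wavelength about 2.78 m) on a 152-m baseline.
wavelength = 3.0e8 / 108.0e6
sin_beta = direction_cosine(phase_diff_deg=2520.0, baseline_m=152.0, wavelength_m=wavelength)
print(f"sin(beta) = {sin_beta:.4f}, beta = {math.degrees(math.asin(sin_beta)):.2f} deg")
```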
Figure 5 shows the placement of the seven antennas. The following list indicates the pairs that work together:

A1-A2: Fine-angle measurement in the East-West direction
A1-A3: Medium ambiguity resolution in the East-West direction
A4-A5: Fine-angle measurement in the North-South direction
A6-A7: Coarse ambiguity resolution in the North-South direction

FIGURE 5. Placement of the seven antennas at a tracking station (A = antenna).
For example, for a satellite at 300 miles altitude, this pattern will provide at each station a North-South coverage of about 600 miles and an East-West coverage of about 60 miles. Once the tracking station has the data from an observation, the information is sent to the computer center. The computer center has the over-all data control of the network. The mission of the data processing system, which has been installed at the center, is to receive the raw observational data, perform the necessary calculations, and provide orbital information specifying the predicted future motion of the satellite. The computer can also provide a comprehensive history of its past motion.

Because of the historical interest of the subject and its contribution to the evolution of programming systems for satellite and missile tracking, in the following we will briefly review the logic of the programming approach taken for the first Vanguard satellite. The programming effort for the satellite tracking system required about seven man-years of work, involving approximately 25,000 instructions. It is organized in semiprograms* consisting of a collection of subroutines linked together to perform a broad orbit computational function. One auxiliary storage magnetic tape of the data processor serves as the "systems tape."
* The word "serniprogram" has been preferred to the originally used word, in this system, "macro-operation." The reason is that the latter word has acquired a quite different meaning in programming literature.
Each block of information on this tape comprises the instructions, constants, and control information required by just one semiprogram. The internal calculations are generally performed in duplicate to check for the occurrence of random machine error. Most of the output is written in binary-coded-decimal form on magnetic tape for subsequent printing. One such tape is the "log tape," which preserves a detailed chronological history of the calculations performed, and of the results. In case of machine breakdown, another data processing center with the same type of computer is ready to take over the computational functions. This second machine is kept on an emergency stand-by basis.

The readings received from each tracking station, including phase difference readings, initial and terminal times of the reading, and certain information identifying and specifying characteristics of the observing station, are called a "message." A device at the computing center automatically converts the teletype tape to decimal punched cards, which will be subsequently fed into the machine. The following is the sequence of computing events:

Semiprogram 1. The message received is processed by the first semiprogram, which is composed of four principal subroutines:

• The first subroutine loads the message into the high-speed storage of the computer.
• The second subroutine performs an editing function, comparing the items of the message, which are in triplicate form, for exact agreement.
• The third subroutine proceeds with adjustments to the data for certain characteristics peculiar to the observing station, time lapse, radio refraction, and conversion of the phase readings to directional information. Each adjusted and converted observation provides the approximate direction of the satellite from the observing station at a certain instant of time. The distance of the satellite from the station has not been determined as yet; only the direction has been measured.
• The function of the fourth subroutine is to fit a least-squares parabola to each set of direction components derived collectively from East-West and North-South phase-difference readings.

The principal output of this semiprogram consists of a single "parabolically smoothed" direction of the satellite from the observing station, expressed in a local coordinate system and corresponding to an instant of time.

Semiprograms 2, 3, and 4. A preliminary orbit is computed in this series of semiprograms. Two observations which are suitably spaced in time are used to obtain a preliminary circular orbit in semiprogram 2. Semiprograms 3 and 4 determine two elliptic orbits from three or four observations which are neither too closely nor too widely
spaced in time. To do this, a reference position vector and a corresponding velocity vector are obtained for the satellite at some instant of time. The machine computes certain quantities serving to characterize the orbit in question in different ways, such as the period of revolution, the inclination of its orbit plane, and, in the cases of the elliptic orbits, such quantities as the semimajor and semiminor axes of the orbit and the perigee (closest approach to earth).

Semiprograms 5, 6, and 7. After a preliminary approximate orbit has been obtained, there comes the problem of improving and updating it. Semiprogram 5 consists of a procedure for numerical integration of the differential equations relating, by Newton's law, the components of the forces acting on the satellite to the components of the acceleration.
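A bare-bones sketch of the kind of numerical integration just described is given below: starting from a reference position vector and velocity vector, it steps Newton's inverse-square gravitational acceleration forward in time with a simple leapfrog scheme. The two-body, unperturbed force model, the step size, and the initial state are illustrative assumptions, not the Vanguard program itself.

```python
import math

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def acceleration(r):
    """Newton's law for the two-body approximation: a = -mu * r / |r|^3."""
    d = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
    return tuple(-MU * c / d**3 for c in r)

def integrate_orbit(r, v, dt=10.0, steps=540):
    """Leapfrog (kick-drift-kick) integration of the equations of motion,
    starting from a reference position vector r (m) and velocity vector v (m/s)."""
    a = acceleration(r)
    for _ in range(steps):
        v = tuple(vi + 0.5 * dt * ai for vi, ai in zip(v, a))
        r = tuple(ri + dt * vi for ri, vi in zip(r, v))
        a = acceleration(r)
        v = tuple(vi + 0.5 * dt * ai for vi, ai in zip(v, a))
    return r, v

# Illustrative near-circular orbit roughly 480 km up:
r0 = (6.858e6, 0.0, 0.0)
v0 = (0.0, 7.62e3, 0.0)
r, v = integrate_orbit(r0, v0, dt=10.0, steps=540)  # 90 minutes of flight
print("position after 90 min:", tuple(round(c / 1000) for c in r), "km")
```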
Chapter XXXIII

RAILROAD, SUBWAY, AND CAR TRAFFIC PROBLEMS

Throughout the present work, we have emphasized the impact automatic guidance and control systems have on "continuous processes." Seen from a macroscopic point of view, the operation of a railroad is indeed a continuous process taking place over tens of thousands of miles of line and in thousands of stations. This operation involves hundreds of thousands of men, tens of thousands of wagons, locomotives, and other gear. The work performed by each unit is closely linked to that performed by the units that precede and follow it. In all, this forms a complex ensemble whose component parts are in a state of perpetual processing, while the over-all condition on the network is itself in a state of constant development.

The operational control of a railway network implies an impressive number of interdependent functions. Such is the case with the calculation of train running, reservation and seat-load inventory, train and traffic control, the working of marshalling yards, the distribution of rolling stock, the determination of the most suitable loads for available empty wagons, and the like. The complexity of these functions makes it apparent that the advantage of process-control type applications lies essentially in the fact that such applications can determine favorable combinations with much greater ease than human operators could achieve.

The expected benefits to the railroad from a sound approach to digital automation were outlined at a recent conference as being the following six:

(1) Effective car utilization, resulting from an automatic, accurate, and current freight car inventory. Apart from the required maintenance periods, which, as expected, will be scheduled automatically by the control system, cars could through digital automation be used for the major portion of the twenty-four-hour period, each day of the year. This will appreciably lower the current high inventory of car units.
(2) Substantial increase in freight tons carried per train, in freight tons per operated kilometer of track, and in freight car ton capacity per car.
(3) More reliable service, with the assurance of consistency of total transit. This will eliminate the traditional "long waits" in yards to assemble economic trains and will take care of delays in intermediate yards for the reclassification of trains and other operations inherent in freight terminals.
(4) Appreciable increase in productivity, through efficiency in the utilization of men, machines, and installations.
(5) Rational pricing of customer charges as to both tariffs and rates. This would be made possible by the accumulation of precise cost data and the rational analysis of the cost components of freight and passenger operations.
(6) Effective control of a railroad system, so as to make it responsive to current critical needs. This presupposes the ability to define commercially the railroad's proper role, and the changing aspects of this role, as compared to other forms of transportation. It also implies developing the tools to effectively plan the realization of such a role and to analyze the obtained outcome in what concerns cost performance.
TOWARD RAILROAD AUTOMATION
Up to the present day, there have been several interesting applications of automatic control to specific areas of railroad operations. Most of these applications, though, have dealt with restricted problems, and very little work seems to have been done on the broader, more generic problems at hand. Because of the limited scope of these cases, dangers exist in deriving optimal solutions to one operational element without adequate consideration of the effect of this solution on the total. This is the case, for instance, with certain automatic control applications that deal with trains, train crews, and train scheduling, without paying due respect to the total outcome.

Nevertheless, as the idea of data control applications for railroads arrives at a relative maturity, the first signs of an interest in the direction of integrated applications start showing up. Planning, experimentation, and development of automatic train operation are now progressing reasonably well in several countries. Furthermore, the development of automatic marshalling yards is in process, and the approach taken here is usually determined by the local conditions in the various countries. Some of these developments were made imperative because of changes in the mode of rail operation itself. Very high speed passenger trains make necessary alterations in the established techniques of braking and signaling. Approaches to the problem are now being investigated, with indications of a successful conclusion. A number of
techniques are also being used in automating other facets of railroad operation, such as weighing loads in motion and handling the trans-shipment of goods traffic.

Because of the foregoing reasons, the operation of a railroad system is an open field for the development of methods of control and command. The practical control of this interconnected series of operating procedures depends essentially on the same criteria as any real-time project:

• The collection and dissemination of operational information, so that the situation on a network can be known exactly at all times.
• The constant appraisal of this situation and the forecast of its development over a substantial period of time.
• The quest for optimum solutions, and the subsequent decision-making process.
• The transmission of corresponding orders to the appropriate sectors.
• The follow-up on the execution of the orders and their analysis in relation to the development of the operational situation.

As in other process control cases, we must distinguish between on-lineness and the open- or closed-loop aspects of the system. With an open loop, the proper control action will be exercised by responsible officials, equipped with modern tools and techniques. But these human operators will essentially constitute the motor activity part of the system, the evaluation being accomplished by means of digital automation. Hence, the proper choice of a standard of appraisal plays a highly important, decisive part in determining the optimum control conditions.

Railroad automation should pay due attention to both the information feedback aspects and the hardware characteristics of the traffic network. A prominent member of the USSR Academy of Sciences properly underlined this dichotomy by identifying the need for two study groups:

• One working on the analysis and evaluation of documents in connection with planning and standards, transport plans for loading and moving wagons, marshalling plans, train graphs, and the technical standards of the operating procedures.
• The other working on the organization of the control proper of the network, including the allocation of the rolling stock.

The first is basically a management information subject, and it implies the examination of methods for obtaining operational statistical and accountancy data, auditing of receipts, compilation of paybills and pensions, supplies and stores management, supervision of certain kinds of traffic, seat reservations, and other problems. While the collection and storage of this information are major problems, one should not underestimate the requirements
which are also posed by the definition of the mathematical methods to be used in the solution of transport problems. These involve linear and dynamic programming, probability and statistics, and the determination of an optimum method of routing traffic, including the distribution of traffic between the various forms of transport.

The second subject focuses on the use of computers for:

• The compilation of train formation diagrams.
• The planning of the distribution of traffic between marshalling yards.
• The calculation of typical train running times.
• The compilation of train running diagrams, engine diagrams, and staff rosters.
• The studies of matters relating to line capacity and transport potentialities.
• The control of the whole process of railway operation, including rolling stock distribution, and the like.

Digital automation in the control of autonomous procedures would also require the use of electronic computers for train running control, supervision of station and marshalling yard working, driverless train operation, and so on.

In a practical application, the San Francisco Bay Area Rapid Transit District has begun testing several separate approaches to automatic train control. The rapid transit system, scheduled to begin operation in 1967, will be completely controlled by computers, with only one attendant on each train for emergencies. With speeds ranging from 50 to 70 miles an hour, and trains scheduled only 90 seconds apart, the design of the control system required the elimination of previous concepts in favor of an entirely new approach.

One of the competing companies has devised a closed-loop system in which control units at each station would be in continuous two-way communication with the trains, using a send-receive "wiggly wire" channel between the rails transmitting audio-frequency signals. An antenna mounted at each end of the train will feed signals of different frequencies into the channel, so that a series of pulses will be fed to the controller. A computer will use the pulse rate to determine the position, velocity, and acceleration of the train, and, from the frequency variations between the leading and trailing signals, determine the length of the train. The station's control computer will store a speed-distance profile for the section of track it controls. According to this particular approach, instructions to the train will be transmitted in combinations of audio-frequency tones along the "wiggly wire" channel. All control information is designed in digital form.
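A minimal sketch of what such a station controller might do with its stored speed-distance profile is shown below: given a train's position and measured speed (derived, in the system described, from the pulse rate on the wiggly-wire channel), it looks up the permitted speed at that point of track and issues a digital command. The profile values, the step-wise lookup, and the command format are illustrative assumptions, not the actual design.

```python
import bisect

# Hypothetical speed-distance profile for one controlled track section:
# (distance from section entrance in feet, permitted speed in mph).
PROFILE = [(0, 70), (4_000, 70), (6_000, 50), (8_000, 30), (9_500, 15), (10_000, 0)]

def permitted_speed(position_ft: float) -> float:
    """Step-profile lookup: hold the speed of the last breakpoint at or
    before the given position."""
    distances = [d for d, _ in PROFILE]
    i = bisect.bisect_right(distances, position_ft) - 1
    return PROFILE[max(i, 0)][1]

def speed_command(position_ft: float, measured_speed_mph: float) -> str:
    """Compare the measured speed (derived from the pulse rate) with the
    profile and return a digital command for transmission to the train."""
    limit = permitted_speed(position_ft)
    if measured_speed_mph > limit:
        return f"BRAKE to {limit} mph"
    if measured_speed_mph < limit - 5:
        return f"ACCELERATE toward {limit} mph"
    return "HOLD speed"

print(speed_command(position_ft=7_200, measured_speed_mph=62))  # over the 50-mph limit
```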
Finally, both the information and the hardware control aspects imply the development of the proper technical facilities for data transmission. Reliable systems are necessary for remote transmission and automatic data collection. This necessitates the proper study of transmission channels; of methods of coding data and of improving the reliability of its transmission over existing channels; of systems for automatic data reading, with particular reference to the identification of moving vehicles; and of a variety of other subjects that are structural in any real-time control application.
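By way of illustration only (the text does not specify the coding schemes the railroads adopted), one simple and common way of improving the reliability of automatically read identification codes is to append a check digit to each number, so that a single misread digit is caught at the lineside scanner. The sketch below uses a weighted-sum check digit; the wagon number is hypothetical.

```python
def check_digit(wagon_number: str) -> int:
    """Weighted-sum check digit (weights 2, 1, 2, 1, ... from the right),
    similar in spirit to common self-checking numbering schemes."""
    total = 0
    for i, ch in enumerate(reversed(wagon_number)):
        d = int(ch) * (2 if i % 2 == 0 else 1)
        total += d // 10 + d % 10          # add the digits of any two-digit product
    return (10 - total % 10) % 10

def encode(wagon_number: str) -> str:
    """Append the check digit for transmission or for the code plates."""
    return wagon_number + str(check_digit(wagon_number))

def is_valid(scanned: str) -> bool:
    """Lineside test: recompute the check digit from the leading digits."""
    return scanned[-1] == str(check_digit(scanned[:-1]))

code = encode("4871205")
print(code, is_valid(code), is_valid(code[:-2] + "9" + code[-1]))  # valid, then a misread caught
```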
PLANNING AND CONTROLLING THE ROLLING STOCK

For the purpose of controlling the rolling stock, some European and American railroads are giving serious thought to a system of wagon identification, whereby lineside equipment at strategic points scans some form of code (or responding apparatus) contained on the wagon. By means of these "identifiers" the digital automation media can efficiently recognize the wagon number, and any other variable data. The subject information will then be channeled to the appropriate control points for subsequent analysis and digestion.

The information feedback to which we make reference must be available on an instantaneous basis so that subsequent instructions are meaningful. The pickup elements are here a crucial item, for it would be impossible, for instance, to read wagon numbers by eye when the wagons are in motion. If we consider some 900 scanning points over a railway network generating information arising from, say, 35 trains a day past each point, with each train consisting of 40 wagons and each wagon equipped with 8 digital code plates, the total number of digits generated amounts to a little over ten million per day. A data reduction scheme, then, should be provided for up-to-date statistical information regarding transit times, pinpointing delays in transit and at terminals, and bringing forward the knowledge of where any consignment is at a particular time.

The forementioned data reduction scheme will, furthermore, necessitate the development of simulators able to provide a mathematical description of transport operations. This will make possible an effective reduction in the intermediate times of planning and regulating. At the actuator's end of the line, it will be advisable to depart from "conventional developments" in the field of automatic control. What is highly important here is to proceed with the evolution of homogeneous, reliable, and efficient digital systems.

The construction of an operational mathematical model will in itself require advances in the theory of "optimum strategy" of digital control for train traffic. Problems connected with the construction of optimum
systems are the hub of theoretical and technical guidance, particularly those concerning learning theory and the corresponding mechanisms. But very little research, if any, has yet been invested in the domain in question. Yet, in order to take full advantage of digital automation, railroads will be in great need of systems that are automatically adaptable to varying conditions. Both theoretical and experimental developments will be necessary, and the mathematical theory of self-adjusting and self-organizing devices should be re-examined in the light of operational imperatives in railroad traffic.

Similarly, the theory of structural reliability is closely related to railroad automation. The complexity of problems involved in digital control for train traffic makes it necessary to build appliances of high operational dependability. Consider, as an example, a junction control applied to a large volume of commuter traffic and say that it feeds two main terminals. What, then, could be the effect if the central nervous system fails at a peak hour? If backup is provided, where should the stand-by machine be located, so as to minimize the cost of the network at the digital end of it? Other questions pertaining to the optimal usage of the rolling stock must also be answered, the best solution being the one that can assure total systems performance.

Assume, for example, that presently at peak hours the conflicts arising from converging and crossing paths limit the capacity to about 30 trains per hour, each way. Assume, also, that the junction is controlled by two men working on a fixed timetable. If, as a result of delays, trains get out of path, the decision over priorities will be indicated by their priority in the working timetable. If trains get badly out of path, then it is at the signalmen's discretion to do the best they can.

The foregoing is a typical case where mathematical experimentation can pay the cost of the total digital automation system.* It is possible that, under certain conditions, signalmen do make sound decisions. But an analytic examination of knot control rail data proves that this has been the exception rather than the rule. Often, uneconomic solutions resulted because the information displayed in front of the human controllers did not cover more than half a mile or so on either side of their blocks. To this should be added the time lag for checking the condition at the terminals or down the line by telephone or telegraph. The total picture indicates that manned media are quite unacceptable during much of the operational time. With the best will, the individual signalman would be unaware of incidents occurring at the terminal stations, or of the likelihood of trains running out of path farther down the system; hence, the need for establishing a timely and detailed record of the entry time
* In fact, this has been the case in certain terminals, as will be discussed in the following paragraphs, where we consider approaches presently taken for wagon command in a station.
of the front of the train and the exit time of the back of the train into and out of a block section. And this is only a small part of the problem.

This example also helps bring into perspective some fundamental facts underlining digital automation. In the establishment of the "man-machine" dichotomy in computing systems, it is necessary to keep in mind the positive peculiarities and limitations involved in the transmission of data between subsystems via the "human" channel. To partially bypass human limitations, some railroad studies concentrate on the development of programs in which use is made of "heuristic" procedures. This approach resides in the incomplete study of certain methods, chosen somewhat arbitrarily, each of which supplies a key for guidance: informing the junction controller whether he is on the right way to a satisfactory solution, and has to reach that destination after a series of suppositions. But even at this relatively limited level of sophistication, current work is still far from any usable results.

It is an indisputable fact that, in order to reach efficiency in systems evolution, the generic aspect of planning and controlling the rolling stock should be properly underlined. Wagon command should receive great attention, for the potential dividends from this application are very large indeed. Let us consider a specific case. In order to move freight by rail, marshalling yards are needed at various points in the railway system. Trains of wagons arriving at these points are marshalled into new trains in accordance with the destinations of the individual wagons. Modern yards are almost always of the gravity type, in which cuts of wagons, from a single wagon on, are pushed slowly over a hump and then run, under gravity, through a switching zone into the many sorting sidings.

To prevent heavy collisions, which result in damage to wagons and contents, a method of speed control is necessary. The first hump yards employed chasers who ran about in the sorting sidings and operated the brakes of the wagons to minimize heavy collisions. This technique was not very efficient and involved a considerable labor force; in fact, the latest yards are automatic in operation, the correct release speed from the retarder being computed electronically, thus relieving the operator of all guesswork. To the need for study and research along these lines, we have made due reference when considering certain examples in the preceding paragraphs.

With an automatic hump yard, incoming trains enter the reception sidings from several directions and await their turn for humping. While a train is standing in the reception siding, a "cut list" is prepared, showing how the train must be uncoupled into cuts, and the number of the sorting siding to which each cut will be directed. This cut list is used to control the setting of the points in the switching zone as the train is humped. After passing over the hump, a cut traverses a maximum of six pairs of points on its way to the sorting sidings, and therefore the route can be
specified by a binary code, each digit defining the position of a pair of points. The cut list, which has been prepared in the reception sidings, is transmitted to the traffic office in the control tower. There it is typed onto a teleprinter, producing a page-printed record and a punched paper tape. A further page-printed record is produced in the control room.

When a train is to be humped, the appropriate punched paper tape is placed in a tape reader in the traffic office, and the destinations of the first four cuts are read from the tape. A unit converts the teleprinter characters of each destination into the binary code necessary to define the point positions. These codes are stored in indication registers which inform the operator of the destination of the first four cuts by means of illuminated figures on the control panel. As the cuts are humped, the destinations automatically progress through the indication registers, and new codes are fed in by the tape reader.

With this approach, the whole of the switching zone is track circuited. The track circuits are used to prevent points being operated while a cut is passing over them, and also to control the storage circuits that set the points ahead of the cut, according to its destination code. Each track circuit has registers associated with it capable of holding the destination code of a cut. The destination code of the first cut is fed from the first indication memory into the registers of the first track circuit, which pass it on to those of the second, and so on. At each pair of points, the appropriate digit sets their position and directs the rest of the code to the register of the next track circuit on the route.

Operationally, this is a straightforward method. When the cut is humped, it occupies each track circuit in turn along its route. When it occupies the first track circuit, the associated registers are isolated from the indication memory but continue to store the destination code. During this time, the contents of the indication memory progress, thus making the code of the second cut available. When the first track circuit becomes unoccupied, it cancels the code of the first cut in its storage circuit, and the new code in the first indication memory takes its place. The registers of the following track circuits are controlled in the same way, so that the cut progressively cancels its own code, allowing the code of the next cut to take its place and set the new route.

In order to put the maximum amount of traffic through the yard, the train should be humped as rapidly as possible, but the speed of humping is limited by the requirement that one cut must not catch up with another in the switching zone. If there is less than one track circuit length between consecutive cuts, it becomes impossible to change the points, where necessary, between the passage of cuts. Both primary and secondary retarders are used in the system. A cut running well will tend to catch up with a cut running badly, and the primary retarders
are used to minimize this effect by retarding the good runner more than the bad one. The function of the secondary retarders is to release the cuts at such speeds that they run into the sorting sidings and impact the wagons already there at minimal speeds. But, if the retarder is to be controlled to release the cut at the correct speed, the actual speed of the cut must be known. This can be measured by a Doppler radar unit mounted between the rails just beyond the exit of the retarder. A continuous wave signal at a frequency of about 10,000 megacycles per second is radiated toward the approaching cut from an aerial in the radar unit. Part of this signal is reflected back from the cut, received by the same aerial, and mixed with a proportion of the transmitter signal, producing a Doppler frequency proportional to the speed of the cut. A comprehensive system of alarms is provided, and indications are given both on this panel and to the operator on the main control panel. The alarms are arranged to show what section of the equipment is out of action and to inform the operator whether fully automatic, semiautomatic, or manual operation of the retarder is possible.
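To fix ideas, the relation between the Doppler return and the release decision can be put in a few lines of present-day code. The following Python sketch is purely illustrative: the metric units, the constants, and the simple brake-or-release rule are our own assumptions, not the control law of any installed yard.

    # Minimal sketch: the Doppler shift returned by a radar of about
    # 10,000 megacycles per second is proportional to the speed of the cut,
    # and the measured speed is compared with the desired release speed.
    C = 3.0e8                 # propagation speed, metres per second
    F_TRANSMIT = 10.0e9       # transmitter frequency, about 10,000 megacycles

    def speed_from_doppler(doppler_hz):
        """Radial speed of the cut (m/s) derived from the Doppler frequency (Hz)."""
        return doppler_hz * C / (2.0 * F_TRANSMIT)

    def retarder_action(doppler_hz, target_release_mps):
        """Decide whether the retarder should keep braking or release the cut."""
        speed = speed_from_doppler(doppler_hz)
        return "brake" if speed > target_release_mps else "release"

    # A 400-Hz Doppler return corresponds to about 6 m/s:
    print(speed_from_doppler(400.0))           # -> 6.0
    print(retarder_action(400.0, 4.5))         # -> 'brake'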
RESERVATION SYSTEMS FOR RAILROADS

In Chapter XXX we considered the workings of an airline reservations system. With a lag of a few years as compared to the airlines, railroads, too, are starting to show interest in the development and use of an automatic reservations network. This essentially means on-line booking. The ticket is handed directly to the customer at the counter, or sent by mail if payment has been received. Tickets prepared but not paid for should carry an expiration date, and provision should be made so that after that date they can neither be confirmed nor handed to the customer.

As with airlines, railroads put substantial emphasis on the waiting list. All passengers who could not get a seat have to be informed that they have been put on this list. Their names and addresses have to be retained in computer memory, as does the seat requested and all information pertaining to it. These requests could be fulfilled in chronological order, as soon as seats become available, or through a priority scheme, whose selection criteria will also have to be described and properly established. A special file will thus have to be created and inspected by the machine in order to look for priority passengers. The handling of waiting-list passengers could be performed as follows:

• A master file registers the "waiting-list passengers," subdivided into several categories.
• Train information would consist of seats canceled and seats obtained through expired options, also subdivided into categories.
• The clerk at the master position can start an automatic handling of waiting-list passengers if the number of seats requested per category does not exceed the number of seats available in that category.
• If in a given category the number of waiting-list passengers exceeds the number of seats available, the machine would branch into an optimizer subroutine in order to evaluate whether it pays to add extra cars or an extra train.
• If the decision is reached to add cars (or a train), the handling of the waiting-list passengers can proceed, but then a whole cascade of command operations affecting train availabilities and scheduling should follow.
• If the decision is not to add extra cars (or a train), the computer should print out the passengers' waiting list by category.
(A simple sketch of this category-by-category allocation is given at the end of this section.)

The foregoing operations raise certain problems. Among those directly connected to the reservation system are how to cancel a seat, and how to allocate canceled seats or seats becoming available through expired options. For the expired options, provision has to be made that a clerk cannot enter a confirmation for an option after the expiration date has been reached. This can be achieved by storing an indicator per train, to be checked by the program whenever a confirmation is entered. If the confirmation originated from a normal sales position after the expiration date had been reached, it would be refused. For waiting-list passengers to get a reservation, the computer should proceed at a predetermined time by:

• Closing the train for normal booking
• Opening seats available through cancellations and expired options to passengers on the waiting list
• Booking waiting-list passengers
• If there are still seats available, reopening the train for general sale.

Should the reservation system be able to hold preferential seats for certain establishments, and provide for the possibility of suppressing them, two problems would have to be handled: (a) a list of the number of seats required by individual organizations has to be kept, and a check made that these seats will be used only by authorized persons; (b) the suppression of seats not requested by that organization should be accounted for. This can be done in the same way as for expired options, the subject seats being treated as such as soon as the reservation period is open. Furthermore, it needs to be clarified which scheme is to be followed if a customer request cannot be satisfied. What, for instance, are the parts of the request that may be modified, and in which
sequence seats are to be distributed? What is the operation if the customer specifies, say, class only? In order to determine the reservations system's storage capacity, it is necessary to establish how many stations each train will serve and the train frequency and configuration for the whole network. Rules must also be clearly set and their data implications evaluated. For instance, should there be a restriction on how many times a seat may be reserved per itinerary? The way in which reservation information will be transmitted, and matters concerning the recipients of this information, should be weighed very carefully. Some stations, for instance, may not need alphabetic listings of the reserved seats, but only a "seats inventory." With this, special attention has to be given to the question of input-output devices, so that uniformity for all positions can be established. As far as data transmission and transcription are concerned, the use of teleprinters seems a promising solution. Teleprinters may be equipped with paper tape readers and perforators which would allow the preparation of inquiries off-line, thus reducing the line load and making it possible to check input data before transmission.

Similarly, in order to optimize the usage of freight trains, several approaches have been taken thus far. For instance, a railroad communications system has been established in the Midwestern United States to provide faster freight car movement information to shippers and receivers through the railroad's nation-wide network of traffic offices. This communications system is expected to eliminate a staggering load of paperwork and accounting. The system consists of a microwave network that spans the Rocky Mountains, high-speed facsimile sending and receiving equipment, a large-scale electronic computer, and a new nation-wide system of teletype communications. It can pinpoint the location of any one of thousands of railroad cars in less than a minute after they have passed a Rio Grande microwave transmission station. It will also digest thousands of facts and figures to determine earnings and control operating expenses on a daily basis. Waybill and train movement information transmitted by microwave to Denver headquarters is converted to punched cards, transmitted to the computer, and stored on magnetic tape for subsequent processing and to answer customer inquiries. The status of a given freight car can then be provided in a matter of seconds. In another operation, information contained on the punched cards is converted to punched paper tape, then transmitted over leased lines to "off-line" traffic agencies throughout the United States, to keep traffic agents, shippers, and receivers informed on the progress of various shipments. Information stored in the computer to keep tabs on all freight cars is also used for the railroad's computation of per diem payments to other rail-
roads for their cars while they are on the Rio Grande system, as well as rental due from other carriers for Rio Grande cars on their lines. The data processing system will maintain, on magnetic tape, a carload inventory of principal terminals. At any time, a report of cars on hand in any terminal, including the commodity, tonnage, and destination of each, can be printed out at the rate of 150 cars per minute.

The backbone of the system is a microwave radio relay network that occupies a bandwidth of 240 kilocycles, spans over 715 miles, and uses 18 stations. Six are terminals, located to correspond with the facsimile scanner and printer sites. Of the twelve repeater stations, eight are on mountains and four on relatively flat land. Digital, analog, or graphic source material is converted into video signals, transmitted, and directed to reproduce the original intelligence in printed form, or to produce an instantaneous display on a television tube. The transmitting unit accepts page-width documents of any length and converts the source material into signals sent over the wideband link to a printer which produces an exact copy of the original.
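Returning to the waiting-list handling outlined earlier in this section, the category-by-category allocation reduces to a simple routine. The Python sketch below is only an illustration; the data layout, the function name, and the sample passengers are assumptions made for the example.

    # Freed seats (cancellations and expired options) are offered to
    # waiting-list passengers category by category, in chronological order;
    # a category that cannot be satisfied is flagged for the "add extra
    # cars or an extra train" evaluation.
    from collections import defaultdict

    def allocate_waiting_list(waiting, freed_seats):
        """waiting: {category: [passenger, ...]} in chronological order.
        freed_seats: {category: number of seats now available}."""
        booked = defaultdict(list)
        needs_review = []                      # categories for the optimizer run
        for category, passengers in waiting.items():
            available = freed_seats.get(category, 0)
            if len(passengers) > available:
                needs_review.append(category)  # may pay to add cars or a train
            for passenger in passengers[:available]:
                booked[category].append(passenger)
        return dict(booked), needs_review

    booked, review = allocate_waiting_list(
        {"first": ["Durand", "Martin"], "second": ["Petit", "Leroy", "Moreau"]},
        {"first": 2, "second": 1})
    print(booked)    # both first-class passengers seated, one second-class
    print(review)    # ['second'] -> candidate for extra cars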
DEVELOPMENTS IN SUBWAYS

The operation of the subway trains for the New York City Transit Authority is being developed on a fully automatic basis under the control of a time clock and a perforated, preprogrammed tape. This tape, which starts the train at either end of its run, has four channels, one of which is manually selected by the dispatcher for daily, Saturday and holiday, or Sunday traffic loads. Additional wayside control equipment has the following functions.

First, it supervises the performance of the train from the time it starts a run from either station until it stops at the other end. The running speed of the train is under constant surveillance during the run and as it enters a station. If the speed, which is sensed by proximity detectors mounted along the tunnel wall in each of the speed zones, exceeds the established limits for that area, the wayside equipment actuates an automatic tripper to stop the train.

Second, it provides code signals to the train through the tracks or through a loop laid between the tracks at the end of the run. These signals control the speed of the train as it starts and enters each of the coded speed zones.

Third, it functions as an alarm and indication system. If, for example, the train takes 20 seconds more than its allotted time of 90 seconds to complete a run, a timing circuit will cause an alarm. The train speed is checked by proximity detectors as it enters the station.

A dispatcher's board and tape programming equipment place the train in or out of service, start the train at both ends of its run, and display the train's location at any time. If the train stops because of any failure during its run, it must be put under manual control; that is, a motorman must
bring it back to the station, and the programming equipment must be recycled. A "catch-up" feature of the system maintains the schedule of the train. If a passenger holds the doors at one station, delaying the train for, say, five seconds, this time will be shaved from its waiting time at the next station. Code generating equipment receives signals from the programmer and in response applies code pulses to the train. Also, the equipment supplies code signals that open and close the doors at the end of the run. Two ultrasonic transducers are located above the train at each end of the run to check the position of the lead car. They return a signal to the code generating equipment if the train is within 15 feet of its stopping point. Electronic equipment on the train starts and stops the train, opens and closes the doors, and changes the head and taillights and destination signs.

Correspondingly, the Moscow Metro is automating its whole operating system. The currently experimental automatic unit is expected to eventually handle almost everything. The automatic driver would stop and start the train and pull it into the station according to its length and shape. In case of a blocked door or other delays, the automatic unit would decide in which section it is most economic and safe to make up the lost time. Experimentation is scheduled to take place inside the computer, concerning questions of speed, braking time, and differential control, taking a range of critical factors into account to maintain a strict and optimal economic schedule. The next step is to automate the turnabout operations at the terminal and then switch over to a central panel control over the whole network. A color recognition device to react to red, yellow, and green is to be incorporated into that system.

Other major cities have also contracted work on digital control for subways. Analysts of a German electrical manufacturer are currently studying the automation of the Hamburg metro. The approach taken here is to follow the subway train through a criss-cross wire embedded between the tracks. By means of this pickup element,* the central computer will be able to effect automatic guidance, speed control, train location and detection, and the evaluation of the need for initiating alarm conditions.
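Two of the wayside rules just described for the New York installation, the overtime alarm and the "catch-up" of door delays, can be expressed compactly. In the Python sketch below only the 90-second allotted time and the 20-second margin come from the text; the function names and the sample dwell figure are assumptions made for the illustration.

    ALLOTTED_RUN_SECONDS = 90
    ALARM_MARGIN_SECONDS = 20

    def run_alarm(actual_run_seconds):
        """True if the timing circuit should raise an alarm for this run."""
        return actual_run_seconds > ALLOTTED_RUN_SECONDS + ALARM_MARGIN_SECONDS

    def next_dwell(scheduled_dwell_seconds, door_delay_seconds):
        """Shave a door delay off the waiting time at the next station,
        never reducing the dwell below zero."""
        return max(0, scheduled_dwell_seconds - door_delay_seconds)

    print(run_alarm(105))        # False: still within the 20-second margin
    print(run_alarm(115))        # True: alarm raised
    print(next_dwell(30, 5))     # 25 seconds of waiting time at the next stop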
MOTOR TRAFFIC CONTROL

A major application for real-time systems is motor traffic control. Computers can be used to advantage in this connection, with either "manual" or "automatic" processes. In a "manual" traffic control system, the computer would
*See also discussion on the usage of criss-cross wires for motor traffic guidance, in the following section.
collect, collate, and handle input traffic data; then compute and output the necessary instructions, which may be transmitted to inform drivers. In an "automatic" traffic control the computer would actually drive the cars, with humans (including the car driver) playing no role in the driving process.

Helicopter data collection, radio flash news on traffic, and perhaps mapping by closed TV networks are within the "manual" mode of traffic control. In that case, the role played by the computer consists mostly in data processing through simulation models to forecast the density of traffic flow and possible bottlenecks. Computer-directed neon lights on highways, parkways, etc., can give visual stimulus to the driver. Radio signals may become of universal use by making the radio mandatory in every car. Following along this line, the use of a "destinator" in the vehicle, which would be able to communicate with a central computer, is probably the first step towards "automation in driving."*

An autocontrol system, developed by a major car manufacturer, has already been tested on a small-scale highway. This is part of a long-range study of possible automatic vehicle controls for future turnpike or cross-country highways. Full-size, it would offer the motorist virtually constant electronic chauffeur service with automatic steering, speed control, and obstacle detection. Potential benefits are greater safety, speed, efficiency, and convenience. Because of the extremely complicated sensing and decision functions required for even light city traffic, emphasis throughout this project was placed on the limited-access type of road, where complete automatic control does seem feasible. Also, since, at present, car-based sensing equipment lacks the necessary discrimination to be practical, most of the control equipment in this research was installed in the road. This leaves only a minimum of car-based equipment, which could be either permanently or temporarily installed in the vehicle. The control functions of the subject system include:
• Automatic guidance
• Speed control
• Obstacle detection
• Warning lights for manual cars.
These functions are accomplished by means of electromagnetic induction techniques. Command signals originate from current-carrying wires installed below the road surface. Then, these signals, detected by suitable coils
* See also a complete model for automatic driving presented by the writer in the course of the October 1960 "Journees d'Etudes, SICOB," the proceedings of which have been printed by Dunod, Paris.
mounted on the car, act to control the car's direction and speed. In fact, providing steering control signals from the road is relatively simple. Since the desired path of the car is the same as that of the road, only the position error of the car on the road must be sensed for guidance. The required steering correction can then be determined, taking into account the car's response characteristics in order to assure system stability. The steering control element in the model highway is in the form of a criss-crossed wire, formed by two parallel wires embedded in the pavement down the center of the lane. Alternating current of about 50 kc in the wire generates a magnetic field along its entire path. This same medium is used for speed control, with two pickup coils mounted on the underside of the model car straddling the criss-crossed wire. Changes in voltage between the two coils, as determined by their position relative to the cable, automatically adjust the steering mechanism to keep the car on course.

The subject experiment calls for two automatic lanes and one manual lane for each direction of traffic. If the driver of a nonautomatic car wishes to pass a slower moving car, or one that is stalled, it is necessary to swing in and out of the automatic lane. This maneuver can be safely accomplished with the help of warning lights located along the left side of the automatic lane. The warning lights are turned on by a 2-kc signal from a coil on the automatic car to the speed command coil in the road. These lights "travel" along with the car to warn manual cars in the vicinity that the automatic lane is in use.

A number of fail-safe features are inherent in the model road; others would be required in a full-size installation. If the criss-crossed wire used for guidance and car speed measurement failed, the car would continue to guide on the speed command wire. If the speed command signal were interrupted, the car would stop, since the absence of a command signal corresponds to a "stop" command speed. In addition, stand-by oscillators could be installed to switch in upon failure of major components. Provision must also be made for the driver to switch to manual control in the event of an emergency.

Another development along the same line concerns a method of communicating verbal information to automobile drivers. Experiments were conducted in the very low frequency (VLF) radio band. The communications system can be arranged to blanket an area 50 to 100 feet wide and up to 2000 feet (or more) along a highway. This is done by careful control of the dimensions of a radiating loop laid alongside the highway and by properly adjusting the power of the transmitter. This guarantees that each message will be transmitted into a car at one specific location and no other. With respect to design, two transmitters are embedded in waterproof enclosures along the road. The first sends out a continuous signal from a short loop which "triggers" the car's receiver. This insures that only cars traveling in one direction will receive the message. The second transmit-
ter is connected to the long antenna loop. This carries the tape-recorded message. Car radios can be adapted to receive the messages, or small portable units can be hung on the driver's door. Reception could be arranged in one of two ways: either through special units with their own small speakers that attach to the driver's door, or through permanently attaching a receiver to utilize the car radio's speaker system. In the latter case, the system is designed to cut off a commercial radio program when the car is passing through an information zone, or to transmit the message even if the car radio is off. Experiments have shown that a 500-foot loop antenna stretched along a highway will allow a car traveling 65 miles an hour the opportunity to receive a three-second message twice before the car is out of range, and it was surprising to see the amount of information that can be worked into three seconds.

In freeway traffic control a new concept may mean major savings and reverse a trend that is diminishing the efficient use of freeways throughout the U.S. The reference is to "the Chicago system," which automatically controls access to a section of freeway where congestion is developing. Automatic ultrasonic detectors count the traffic and, when the traffic becomes too congested, the detectors automatically activate metered stop lights at the entry ramps behind the jam. Persons then wishing to enter the freeway find themselves confronted by a red light. It turns green only for limited intervals, which automatically controls the amount of entering traffic, thus preventing hopeless snarls. Though the reasoning behind this action is simple to follow, it is difficult to estimate how well this could be adapted to certain metropolitan areas because of the capacities of, and distances to, alternate north-south routes.

The basic idea behind this approach is that vehicular traffic over congested routes (air lanes, freeways, rail lines) will be automatically controlled through computers, switching from center to center as the vehicle traverses its route. Human piloting of automobiles will then be restricted to the uncongested roads. Computer-controlled equipment will also help reduce the likelihood of high-speed collisions at traffic lights. One such approach has been developed by the Road Research Laboratory in England.* The subject system operates when the lights change to amber as a driver approaches at a moderate speed on a derestricted road. There are certain critical distances within which drivers in this situation are going too fast to stop safely, yet not fast enough to clear the crossing before the red light. Detectors about 500 feet from the stop-lines measure the speed of an approaching vehicle. The time at which it will enter its critical section is estimated, and the signal is held at green until it clears the intersection.

*See also "Systems and Simulation," Chapter 19, Simulation for Motor Traffic. Academic Press, New York, 1965.
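The green-holding decision just described lends itself to a short sketch. In the Python fragment below only the 500-foot detector distance comes from the text; the length of the critical section, the cap on the extension, and the names are illustrative assumptions rather than the Road Research Laboratory's actual design.

    DETECTOR_DISTANCE_FT = 500.0
    CRITICAL_SECTION_FT = 80.0      # assumed length of crossing plus vehicle

    def green_extension_seconds(speed_ft_per_s, max_hold=8.0):
        """Seconds to hold the green so the vehicle clears the intersection,
        capped at max_hold; 0 means the signal may change at once."""
        if speed_ft_per_s <= 0:
            return 0.0
        time_to_clear = (DETECTOR_DISTANCE_FT + CRITICAL_SECTION_FT) / speed_ft_per_s
        return min(time_to_clear, max_hold)

    # A car approaching at 88 ft/s (60 mph) needs about 6.6 seconds to clear:
    print(round(green_extension_seconds(88.0), 1))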
In the course of our April 1966 visit to Tokyo,* and the discussions which we held with senior executives of automobile and heavy equipment companies, we were told that Japanese engineers from the Government's Mechanical Laboratory have worked on the development of a new control system for automatic car driving. Experimental cars built by them can be driven automatically at a controlled speed of 30 miles per hour on a test loop which has a speed limit of 36 miles per hour in the curve because of its small radius of curvature. The performance of the control systems developed by the Japanese engineers has been described as "very stable." Yet important problems remain unsolved: the addition of safety devices, the simplification of the control systems, the practical use of automatic control devices for automobile driving, and the like. We thought it necessary to add this reference in order to bring developments in other countries into correct perspective.
*This was part of an early 1966 world-wide study on international management tools. It covered both industrialized countries and countries in the process of development. The author traveled 37,000 miles covering four continents and some twenty different countries. Among the countries visited were England, France, Germany, Italy, Switzerland, Brazil, Argentina, Chile, India, and Japan. Meetings were held with senior engineering and management people of 110 companies, mostly firms of an international status. Control systems functions attracted a good deal of interest. The findings of this study documented in a solid manner the ideas, concepts, and applications presented in Volumes A and B of the present work.
Chapter XXXIV DIGITAL AUTOMATION IN BANKING
The application of computer programs to banking may seem remote from the main stream of process control operations. However, this is not true. As with airline and railroad reservations, air-traffic guidance, and car-traffic control, it constitutes an open field where human imagination and ingenuity are the gates to technological success.

The possibilities for on-line operations in the banking business are, indeed, fantastic. The president of the Bank of America, for one, has forecast that in the near future a paycheck will not have to be issued but will be automatically credited to the individual account, and the company's account correspondingly debited. He foresaw a similar approach to the payment of utilities and goods. If magnetic ink or optical scanning media are used for loan payments, savings deposits and withdrawals, and other transactions, through the adaptation of simple unit encoders to the teller proof function, then these various transactions can be proved, segregated, sorted, and posted in much the same fashion as in any other industrial or business real-time application.

LOOKING TEN YEARS AHEAD
During the next decade, while present-day users of magnetic ink character recognition and other advanced media in the banking field become accomplished practitioners of the art, the banking industry will be moving beyond the initial array of "easily apparent" applications. What, then, are the coming "events"? Speculation about the future impact of on-line equipment can be a science fiction game, or a reasonable extrapolation of present-day concepts and components through a 5- to 20-year period. Taking this second approach, we may consider the following projected developments with reasonable confidence:
• Direct memory-to-memory transmission over wire and microwave links for bank-to-bank, client-to-bank, and bank-to-client applications.
• Machine language communication between companies and the government, and between individuals and financial institutions.
• Rapidly expanding use of the credit card idea through automatic channeling of information by using interface media.
• Direct payment of insurance premiums by banks.
• General ledger accounting by banks for small, local establishments.

Many of these developments are well under way, though some 10 to 20 years ago they would have been considered radical, impractical, or uneconomical. The evolution in the approach now taken has a structural characteristic. Most personal charges, from the signing of a check to the extension of credit, result in transactions characterized by the presence of an on-the-spot decision regarding the type or amount of an item or service to be purchased. A manually operated recording device of some kind is then required for the translation of financial "decisions" to machine language.

Let us consider one example. Say that the meters of a utility company, such as gas or electricity, include transmission units for monthly collection of meter readings at a central location. The monthly billing operation on the company's computer will then need to produce no paper whatsoever. A magnetic tape recording will be enough; subsequent to its sorting according to financial institution, it is transmitted to the bank. The individual consumer, then, learns of the bill and its payment only when he sees it as an item on the bank's monthly statement. Similarly, telephone and water service can be billed and paid entirely without human intervention (and human mistakes) until the point of review and approval by the consumer; hence, some grace period may be necessary for the reversal of fund transfers involving disputed transactions.

Another class of financial transactions could be handled on-line even more simply. Payments on mortgages, insurance, installment loans, mutual investment funds, and special savings plans are regular repeated transfers of funds in constant amounts. These can be handled by semipermanent programs requiring only on-time input.

Discrete shopping items present slightly more problems. Consider the housewife's afternoon trip to the shopping center. First, she goes to two or three shops at the department store and then to specialty shops for children's wear, shoes, and the like. No presentation of cash or check is involved; no time is lost in preparation or signing of sales slips, change making, or credit verification. Instead, she may be able to use a universal credit card and a machine language input device. While at the shopping center, a stop at the automated grocery store is in order. Again, no cash or check is presented; no checkout stations are
involved. Each item of prepackaged food is ordered by insertion of the universal credit card in a sensing slot. The merchandise is automatically gathered in an electrified cart. The cart is routed to the wife's parked car, where she has previously used the credit card to buy parking time and to identify the car for merchandise deliveries.

Do the on-line aspects of this trip to the shopping center present any insurmountable technical problems? Certainly not. The essential data for the financial part of the business transactions were captured by inexpensive, simple recording devices in the department store, the specialty shops, and the grocery store. The data are gathered in the shopping center's central communication office, from where they are transmitted downtown to the bank. At the bank, shortly after the transactions occurred, the obligation for the wife's debts is recognized and the checking account is promptly charged for the worth of children's wear, shoes, and groceries. All of this is done with no exchange of cash, checks, invoices, receipts, or other paper. The bank's weekly statement adequately covers all record and advisory requirements. Fund transfers are accomplished internally to the bank's data processor if all participants have accounts at that same bank. If one or more participants have their accounts in another banking institution, then the first bank initiates interbanking communications operations, in machine language. This is the mechanized equivalent of today's clearing house and transit operations between banks.

So far we have considered only the debit side of the bank's accounting procedures. If the solvency of the individual accounts is to be maintained, the bank must receive some credits. The source and handling of the credits are relatively obvious matters. The employer prepares a payroll record on magnetic tape, just as the utility concern prepares its billing tape. Through internal and transit computer operations, the bank credits the individual account with the transmitted monthly salaries (less deductions), and debits the corporate fund account of the employer. Some other income sources will be handled just as automatically. The individual's income from bonds and common stock investments will be credited to the account directly. In other cases, transactions between individuals, such as rental payments to an individual owner, payment of lawyer and doctor fees, and generally low-volume transactions, may not be subject to mechanization through the recording devices utilized by, say, department stores. But again, it may be that, through the use of a relatively inexpensive facility such as the dial telephone, low-volume transactions will be automatically forwarded to the bank and the use of cash relegated to the very small area of petty purchases and other out-of-pocket expenses.

In fact, aspects other than simple cash transactions will open new frontiers to machine usage. A power production company may request, in machine
language form, bids for the manufacture of a million-kilowatt atomic generator. A computer network will be employed by all concerned firms for bid preparation and transmittal, bid evaluation, and order placement. Investment opportunities can be evaluated in a similar manner. A company planning a major investment can predecide the wisdom of this investment by experimenting, through the use of computer media, on the effects of a complete business cycle. Inversely, computer-calculated requirements for equipment renewal can be translated into subassembly and raw material orders to be placed with subcontractors and vendors. In a similar manner, step-by-step production schedules could be calculated and checked against input from the production departments and the received stocks. Labor, material, and burden costs will be accumulated and charged appropriately. Payments to vendors will be effected automatically through the bank's fund transfer activities, the payments being audited against receipts by computer matching of the bank statement against the vendor's machine language shipping order.

In the service industry, relatively inexpensive data transmission media, such as dial telephones, will be used for discrete operations that can absorb only minor burdens, such as the request and confirmation of hotel and travel reservations. This one-shot request and confirmation from an immediate access, on-line memory, combined with the universal credit card, will materially speed plane and train boarding and room occupancy. The service industry applications to which we make reference can go well beyond the airline reservations approach which we covered in Chapter XXX. For travel, no ticket may be required. If the reservation has been requested and confirmed through on-line devices, then insertion of the credit card in the gate slot can allow access to the airplane or train. For hotels, only advance registration (via coded telephone) would be needed. There will be no midnight wait at the registration desk: use of the credit card may automatically eject the appropriate key. Naturally, automatic billing and payment will follow in both cases. Commuting, via train or bus, will be even more convenient, involving no ticket and no ticket punching or scanning by a conductor. Use of the credit card in a slot on the train or bus will provide the data for billing, which will naturally be adjusted to the frequency and regularity of the commuter's use of the service. This will result in accurate traffic counts for the transportation company.

These examples of transactional activity are only indicative; they are not meant to be comprehensive. But do they make sense? Is there any basis in present-day accomplishments that leads us to believe them possible? Is there any impetus to make us want to bring them into being? We are now seeing million-bit microsecond memories, million bit per
second tape units, and widespread use of credit cards. To make the applications of the described nature practical, we do not need a great deal more in the way of equipment developments. Billion-character microsecond memories, improved transmission and terminal equipment for communications, and simple source data recorders will suffice if they are provided with reasonable price tags. Beyond the equipment, only some advanced concepts of planning and cooperative effort appear to be necessary. But how rapidly will we see the changes in equipment and concepts that are necessary to allow for the futuristic business world we are discussing? The answer is that such applications, although feasible, will not be immediate. The necessary skill to bring them about is not there, although the machines may be available. It has been the history of the business machine field that equipment capabilities precede user demand and imagination. Almost invariably, the equipment manufacturers, because of competitive pressure, have produced equipment innovations before the appearance of any concerted demand on the part of actual or potential users. Current and near-future announcements in the communication field will give us very reasonable costs for relatively good speed and point-to-point communication hookups. An order of magnitude change in cost will allow economical, high-speed data collection from widely spaced, low-volume, multiple transaction points.
PROBLEM DEFINITION AT THE BLOOMINGTON SAVINGS BANK

In order to bring the foregoing discussion into present-day perspective, we will consider a case study from the banking field. Purposely, this has been chosen to coincide with a time of conversion faced by a certain banking institution. Less cryptically, the principal assignment of a special committee at the Bloomington Savings Bank* was the definition and specification of the problems bank management was facing with the changeover to data automation. Of these problems, the following were the most outstanding:
• Input requirements
• Peripheral gear
• Time scheduling
• Inquiry specifications
• Form and cycle of statement.
Input Requirements
All "on-us" checks are prequalified in magnetic ink. The transit number
* Fictitious name.
and account number are prequalified prior to issuance. In certain cases, such as official checks and agency dividends, the check serial number is also prequalified. The amount and block/batch are encoded during the proof operation. On personal accounts, the checks are posted individually to the balances maintained in the master file. In the case of business accounts, an alternate method is used: the check totals are summarized for all accounts on which more than six checks are being presented for payment. Lists containing the details of these summarized entries are then prepared and filed with the checks. The personnel and equipment for producing such "short lists" are included in any proposal utilizing this alternative. The deposit ticket includes the account number, amount, transaction code, number of items deposited, and the amount of cash. These data are transcribed to magnetic ink coding.

Stop payments are held in a table for a period of three months. This table is automatically examined each day as checks are posted. If there is a stop payment in the table for the same account and amount, the check is still posted, but an exception is printed to initiate clerical action and to compare this check to the stop payment form. If the check is the one for which the stop payment was issued, an adjusting entry is prepared to reverse the posting on the following day. Holds are also maintained in the above-described table. The table provides for amounts to be held for a specified number of days, which will be stepped down each day during an updating run. This table is printed each day to provide reference before authorizing payment against held funds. When checks are presented against held or uncollected funds, the checks are posted and an advice is printed to initiate officer authorization for the transaction. If approval is not obtained, reversing entries are made and posted on the following day.

Peripheral Gear

In a structural sense, data automation in banking implies on-line teller processing; this requires seven basic subsystems:

(1) Teller units able to accept teller-indexed transaction messages and print computer-processed replies on customer passbooks, transaction tickets, or the transaction journal. In general, these are remote input/output units able to function at teller-specified deadlines.
(2) Local interfaces, with the capability of multiplexing teller messages for transmission to the computer, of receiving the computer-processed data, and of channeling these data to the recipient teller units.
(3) Teletransmission network capable of transmitting messages and replies between the local interface and the central data processor. This should
obviously include telephone lines and data terminal units, designed to afford the projected data load without unwanted delays in processing.
(4) Central interface, located by the central processor and able to receive incoming transaction messages at random intervals, store them until the CPU* is ready to process them, and return the processed replies to the local interfaces that originated the transactions.
(5) The main data processor, able to provide the necessary throughput capability and to monitor supervisory operations over the entire on-line system. This includes multiplexor-selectors and input/output units as required, with a good deal of emphasis on computing speed.
(6) Mass memory (random access) capable of providing storage of on-line records with account balance, available balance, unposted dividends, and the like. Access speed here is critical, and so is the simultaneity of access required for complete processing of transactions and the timely answering of inquiries.
(7) Serial access low-cost storage, based on a photographic or other principle, preferably on-line with the central processor. This is necessary to store legal, accounting, and other documents in order to satisfy, to the letter, the law regarding unbiased reproduction, and to provide the grounds for the eventual conversion to high-density storage at the central files.
The entire work obviously starts at the teller's level. The teller units are the primary input and output of the system, each operating similarly to a conventional window machine. Each must accept teller-indexed transaction messages for transmission to the center and print the processed replies. Through these units tellers can post payments and disbursements to customer accounts, inquire into the status of such accounts, and generally handle any type of window transaction and inquiry. In respect to systems and processing reliability, automatic features must be incorporated to protect against incorrect postings on the passbook. Similarly, teller-oriented design should provide maximum ease and efficiency of operation through a simplified keyboard, exclusive single entry-key operation, and buffering, permitting the teller to enter the entire transaction without delay even when other teller consoles are transmitting or receiving data.

Time Scheduling
The most critical time requirement is the availability of the printed trial balance by 9:00 a.m. each day. This trial balance reflects the condition of the accounts after posting the transactions received during the previous day. The time schedule is given in the following tabulation:
* Central Processing Unit.
9:30 a.m.    First deposits
11:00        Formal clearing
12:00        First cashed checks, branch over-the-counter items, and final cash
4:00 p.m.    Bank closes
4:30         Final deposits (branches and main office)
5:00         Final cashed checks
6:00         Final checks available out of proof
After receipt, the items are encoded and proven on unit inscribers or proof machines. This occurs during the day as items are received. The primary sorting, proving, and converting of check data on the sorter-converter takes place as checks are received from the inscribing operation. The final cashed and deposited checks are received from the inscribing unit at 6:00 p.m. Thus the period from 6:00 p.m. to 9:00 a.m. is available for the primary sorting and converting of the final transaction data, the posting of accounts, and the printing of the trial balance. The preparation of statements and the fine sorting of checks for filing occur outside of this period.

A peak of computer utilization occurs at the end of the month, when business statements are prepared. When the transactions received on the last day of the month are posted during the evening, exceptions resulting from that run must be processed and adjusting entries made in a second updating run before statement preparation can begin. The statements are processed on the computer, matched to the name and address file, and printed. These statements are available for mailing on the evening of the first business day of the new month.

Inquiry Specifications

The system meets the following requirements for reference and audit purposes.

First. A daily trial balance is printed each day, with all accounts shown in numerical sequence, regardless of whether an account was active. The trial balance contains the account number and balance; in addition, it shows whether any hold, uncollected funds, stop payments, or other restrictions are in force. This trial balance also contains the number of transactions for the period, the average balance for the period, the year the account was opened, a code indicating whether any postings have been referred for approval, and the last transaction. The trial balance is ready for distribution by 9:00 a.m. each day.

Second. A transaction journal is also printed daily, after all entries have been sorted into account number sequence. This journal includes the account number, batch number, amount, number of items (if a list posting), and the transaction code. Control totals are maintained and printed in the journal, also.
Third. Particular emphasis has been placed on providing an audit trail that is complete and permits easy reference. The significant control points are:
• A listing of all readable dollar amount checks, indicating the account number, amount, batch number, and pocket number.
• A zero proof of these items against the block total.
• A sort of the checks into six categories, namely: large-volume accounts, unreadable dollar amount rejects, other rejects, miscellaneous category accounts, business accounts, and personal accounts.
• An accumulation of the check totals in each of the following categories: large-volume accounts, all rejects except unreadable dollar amount, miscellaneous category accounts, and valid encoded items.
• A transcription of the check information on the valid encoded items to the computer.

Fourth. At the first reading of the check information into the computer, the check amounts are zero proven against the block totals, and out-of-balance blocks are printed.

Fifth. After sorting, control totals are accumulated by groupings (on blocks of account numbers), and the grand total is proven against the total of the blocks.

Sixth. A similar control of other inputs is established so that all transactions are proven prior to entry into the system.

Seventh. The daily trial balance includes a summary of old balances, transaction amounts, and new balances within control groupings to permit a daily balancing of accounts.

Form and Cycle of Statements

A summary-type statement is used for all regular personal accounts. Printing of these statements is cycled over 10 days of the month. The name and address file is maintained in such a manner that only that part of the file required for the statements being prepared is handled on statement days. Fully detailed statements will be made on business accounts. These statements are prepared:
• As full sheets occur during the month. A line is filled for each day that there is activity, or for each multiple of the three debits (checks or lists) and/or two credits to be printed on any day.
• As of the last day of the month, with all details of all activity occurring during the month other than that on full sheets previously prepared.
• At such other times during the month as customers may request.

A name and address file is maintained to supply addresses for the month-end statement preparation, but it is not necessary to supply names and
addresses from this file for filled sheets and special request statements. The master file includes the means of accumulating the debit and credit detail information required to prepare the statements. In addition, it is sometimes necessary to delete or adjust these data as a result of adjusting entries posted on subsequent days. The account numbering system shown in Table I is in effect at the Bloomington Savings Bank.

TABLE I

Category                     Category     Reassignment   Hyphen   Identification   Check
                             digit        digit                   digits           digit
Large-volume accounts        0            0              -        xxxxx            y
Business accounts            9            0              -        xxxxx            y
Special personal accounts    1            0              -        xxxxx            y
Regular personal accounts    2, 3, 4, 5   0              -        xxxxx            y
Utility accounts             6, 7         0              -        xxxxx            y
Miscellaneous                8            0              -        xxxxx            y
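Table I ends each account number with a check digit, but the text does not say how that digit is formed. The Python sketch below uses a simple modulus-10 sum purely to illustrate the idea; the actual rule in use at the bank may well be different, and the sample number is invented.

    def check_digit(body_digits):
        """Simple modulus-10 check digit over the given digits (illustrative only)."""
        total = sum(int(d) for d in body_digits if d.isdigit())
        return (10 - total % 10) % 10

    def valid_account(account):
        """Account of the form: category digit, reassignment digit, hyphen,
        identification digits, check digit, e.g. '20-12345' plus the check digit."""
        digits = account.replace("-", "")
        return int(digits[-1]) == check_digit(digits[:-1])

    number = "20-12345"
    print(number + str(check_digit(number.replace("-", ""))))   # -> 20-123453
    print(valid_account("20-123453"))                           # -> True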
AUTOMATING BASIC BANKING OPERATIONS

Figure 1 illustrates the method of handling the checks received each day from various sources from 8:00 a.m. to 6:00 p.m. (see also Fig. 2). These checks consist of the incoming "on-us" items received from the proof deposits. The first operation is the inscribing, on the checks, of the amount, block/batch number, and transaction code, in magnetic ink. At the same time a listing is prepared and the check amounts are zero proven to the batch total. A batch header slip is then encoded, and the batch is recorded in a control ledger and accumulated into blocks of about 2000 checks. A block header slip is also encoded.

The data encoded on the checks are read, and the checks are sorted by type. This sort is based upon the category or high-order digit of the account number, plus the ability of the sorter-reader to recognize large-volume accounts and to segregate unreadable dollar amount rejects and other rejects. A listing is made at this time of the check data, except for the unreadable dollar amounts. This listing includes the account number, amount, batch number, and pocket number. At the same time the amounts of the readable checks are accumulated into separate totals for the large-volume business accounts, the other rejects, the miscellaneous items, and the valid encoded items. In addition, all of these amounts are subtracted from the block total to establish a zero proof, with
FIGURE 1. (Flow chart: encode amount, batch number, and transaction code in magnetic ink; list and zero prove; accumulate "blocks" of checks; encode block header slip; to Fig. 2.)
the remaining balance, if any, being reconciled with the amount of the unreadable dollar amount rejects.

In the operation shown in Fig. 3, checks and deposits are sorted into groups, with header items containing the totals of groups of these checks. The groups are designated as blocks of items. The operation of the machine is as follows:
• Prove blocks. Add the amounts of the detail items in each block and prove against the block total contained in the header item.
• Write detail items on out-of-balance tape. If out of balance, use these data to print an out-of-balance listing. If in balance, rewind and write the next block.
• Write transaction tape. Write each detail item on the transaction tape with account number, amount, transaction code, and batch number.

The output of this operation is the out-of-balance blocks and a list of transactions.
FIGURE 2. (Flow chart: read the magnetic ink encoded data; sort checks into large-volume business accounts, unreadable dollar amount rejects, other rejects, miscellaneous, business, and personal; print the check data listing; add the amounts of readable dollar amount checks by type and subtract each from the block total; manually add the unreadable dollar amount checks and reconcile the block difference; convert computer-system checks to computer media.)
In the following operation (sort transactions, Fig. 4) the input consists of check and deposit data, and of other entries such as:
• Out-of-balance block adjustments
• Requests for off-cycle or special statements
• Reversal entries of certain of the referral items of the previous day
• Header entries with control totals of the above items.

The machine procedure involves reading the input data, accumulating amounts in control totals to prove against the header totals, reading the check and deposit transactions, sorting by account number, and writing the sorted transactions. The preparation of the transaction journal involves an operation on business accounts with more than six checks, accumulating and printing the
FIGURE 3. (Flow chart: check data, credits, and other transactions, by block and in random sequence.)
FIGURE 4. (Flow chart: transactions in; sorted transactions out.)
amounts and number of checks. The computer (a) prints the list data for these accounts and the individual transaction data for all other accounts, (b)
FIGURE 5. (Flow chart: sorted transactions.)
accumulates the amounts of the transactions in control totals, and (c) prepares the transaction data output. The output of this operation includes the lists for business accounts with more than n checks, the daily transaction journal, and the sorted transactions.

The business accounts master file, the transactions file, and the uncollected fund table are used as inputs to the business master file updating operation. The machine procedure consists of matching each transaction to the master file by:
• Adding the transaction amount to the account balance.
• Adding the number of items to the respective item total.
• Storing the amount and transaction code, for each transaction, on the statement history portion of the master record. If the transaction is a list, also storing the number of checks on the list.

Then, (1) for each account on which there is activity, the date and new balance are stored in the statement history portion of the master record; (2) the new average daily balance is computed and stored; and (3) the uncollected fund total is deducted from the balance, and an exception is written if payment has been made against uncollected funds.

The machine then prepares the daily trial balance data, checks for off-cycle
FIGURE 6. (Flow chart: business accounts master file, transactions, and uncollected fund table feeding the updating operation.)
statement requests and writes statement data where required; accumulates totals of old balances, transactions, and new balances by control groups, printing these totals on the trial balance as they occur; and writes the updated master file. In this way, the output consists of the updated master file, the off-cycle statements, the exception items, and the daily trial balance (Fig. 6).

Correspondingly, the input to the computer operation for updating the personal accounts master file involves the sorted transactions for personal accounts, the personal accounts master file, and the stop payment and uncollected fund table. The operation starts with the matching of transactions to the master file:
• Adding the transaction amount to the account balance.
• Adding the transaction amount to the accumulated debit or accumulated credit total for the period.
• Adding the number of debits to the total of debits for the period.
• If a deposit, adding the number of deposit items to the total for the period.
• Where necessary, checking for overdrafts and writing an exception item if the balance is overdrawn.
• Storing the new minimum, maximum, and accumulated average daily balance, as required.
• Examining the stop payment and uncollected fund table. If a stop payment for the amount of the check is in the file, writing an exception. If the account balance after posting (less the uncollected fund total) is a debit, writing an uncollected fund overdraft exception.

Then the computer accumulates totals of old account balances, transactions, and new balances by control groups, and prints a trial balance of all accounts showing:
• Account number
• Balance
• Average balance for the period
• Year the account was opened
• Code, if a hold is on file for the account
• Exception code (type of exception resulting from today's posting, if any)
• Number of transactions for the period.

The machine then proceeds by printing the trial balance and control totals as they occur, and by testing the statement cycles. The latter operation involves computing, and then deducting, the service fee to be charged from the account balance, and printing the service fee and the profit and loss analysis as follows:

(1) Average daily balance
    - Estimated daily float
    - 25% cash reserve
    = Funds available for investments
    × Earnings credit percent
    = Account income

(2) Service fee items × Standard unit cost = Account cost

(3) Account income + Service charge - Account cost = Profit or loss on account
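Steps (1) to (3) above amount to a few lines of arithmetic. The Python sketch below follows them literally; the sample figures, the earnings-credit rate, and the assumption that the 25 per cent cash reserve is taken on the collected balance are ours, introduced only for the illustration.

    def account_profit(avg_daily_balance, est_daily_float, earnings_credit_pct,
                       service_fee_items, standard_unit_cost, service_charge):
        # (1) funds available for investment and the income they earn
        collected = avg_daily_balance - est_daily_float
        funds_available = collected - 0.25 * collected   # less 25% cash reserve (base assumed)
        account_income = funds_available * earnings_credit_pct
        # (2) cost of servicing the account
        account_cost = service_fee_items * standard_unit_cost
        # (3) profit or loss on the account
        return account_income + service_charge - account_cost

    print(round(account_profit(avg_daily_balance=2000.0, est_daily_float=150.0,
                               earnings_credit_pct=0.003, service_fee_items=40,
                               standard_unit_cost=0.04, service_charge=1.50), 2))   # -> 4.06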
The following fields are then set back to their proper values:
• Minimum balance
• Average balance
• Number of debits
• Number of deposit items.
With this, the master file is updated. The output of the operation shown in Fig. 7 includes the service fee journal, the daily trial balance, the exception items, the cycled statement data, and the updated master file.

FIGURE 7. Personal account master file updating operation.

QUESTIONS OF OBSOLESCENCE

The foregoing case study was based on a current application. Questions of obsolescence are inevitably present when discussing the present and the future of automatic data handling. "Obsolete" is defined in the dictionary as "no longer in use; disused, as, an obsolete word, law, or tax," or, secondly, as "of a type or fashion no longer current; out of date; as, an obsolete machine"; whereas "obsolescence" is defined as "going out of use; becoming obsolete."

In technology, a device is functionally obsolete if it is either worn out or
no longer meets the requirements of the application. The first is self-explanatory; the second, that is, "no longer meets the requirements of the application," may result from the installed system being either too large or too small for the new requirements of the application. The usual occurrence of this type is that the device or system installed is outgrown by the application. An "obsolescence" of this kind may be due to growth in volume, to increased complexity, or to "abnormal" growth through mergers. It is also theoretically possible that the application's requirements could contract to such an extent that the installation would be too large for the revised requirements. The method of handling such changed requirements should, nevertheless, be planned in advance. One approach might be to add a second machine when the volume grows sufficiently to make this economically and operationally attractive. In other cases, it may be possible to expand a system sufficiently, by adding components, to accommodate the increased needs.

A man-made system can also become obsolete in regard to its ability to perform predetermined functions, or by wearing out. Here we might say that electronic computing gadgets would not, in general, have a chance to become functionally obsolete: electronic devices last for a long period of time, although some of their components may have to be replaced. In considerations of this type, the important thing is not technological obsolescence but the economic obsolescence that might result from technical advances. This is the case when a new device is sufficiently more efficient to make continued use of the old devices economically unattractive; the new machine must then be enough more efficient to justify the additional conversion cost as well. Some computers recently introduced to the market, for instance, are both faster and less expensive. It is here that the subject of economic obsolescence gains most of its momentum.

A careful analysis of technological developments indicates that true technological obsolescence probably occurs only with a technical breakthrough, rather than with continued but gradual improvements. Such breakthroughs may be occurring in some restricted applications in the input-output area. In nonimpact printing, for example, a number of printers currently in the development laboratories show great promise. Breakthroughs of this nature do not happen every day, and, in what concerns the particular interest of commercial banks, the first really major advance that might result in true technological obsolescence will occur when an internal memory can be made sufficiently large, compact, and inexpensive, and yet with fast enough access, to store a company's main files and provide direct access for both transaction posting and inquiry reference.
This is the kind of application to which we made reference earlier in this chapter, when we projected the day when the check itself would be obsolete and banking business would be performed through teletransmission and telecomputing media.
Index Boldface numbers refer to pages in this volume. Other page numbers refer to Volume A. Abnormal growth, 246 Abstracting process, 214 Abstraction of literature, 191 Abulated numbers, 195 Access random, 10, 37 serial,235 Access location, 308 Access time, 38, 83, 113 Account business, 237 large volume, 238 personal,234 Account cost, 244 Account number, 234 Account receivable, '57 Accounting, 47, 86, 103, 122 credit card, 59 cost, 86 gasoline, 49 interline, 62 Accounting control, 63 Accounting data, 62 Accounting system, 47, 63, 96 Accumulations-computations-setting, 242 Accumulator, 309 Accurate transmission, 166 Acrylonitrile, 13 Actual status determiner, 11 Adaptation, 235 Addition logical, 178, 181 successive, 158
249
Address, 38, 318 Address file, 59 Address register, 349 Addressing character, 174 Advance booking, 174 Advanced mathematical statistics, 194 Aeronautical industry, 143 Afferent, 33 Afferent neuron, 80 Agent's position, 168 Aircraft early warning, 201 supersonic, 198 Aircraft altitude, 177 Aircraft control stick, 207 Aircraft detection, 197 Air information card, 168 Airline management, 170 Airline reservation, 136, 159, 197 Air masses, 194 Air temperature, 189, 190 Air traffic, 177 Air traffic control, 179 Air traffic controller, 182 Air transportation, 159 Airway table, 185 Alarm, 75, 76, 107 audible, 76 off-normal, 76 Alarm conditions, 225 Alarm memory, 154 Alarm monitoring, 354, 110 Alarm organization, 308
250
INDEX
Alarm printer, 76 Alarm scanning, 12, 69, 74, 153 Alarming, 74 Al-Batani, 161 Alexandria, 65 Algebra, 160, 161 Algorithm, 7, 161,259 Allotted time,S Al-Khwarizmi, 161 Alkylation, 264 Al-Mansur, 161 Alphabetic listings, 223 Alphanumeric input, 70 Alphanumeric message, 75 Alsifr, 161 Aluminium oxide refractory, 83 Ambient temperature, 24 Ammonia, 13 Amplifier, 90 Analog, 51 Analog arithmetic, 160 Analog automation, 9 Analog devices, 99 Analog scanner, 78 Analog signal, 108 Analog-to-digital, 14, 78 Analog-to-digital conversion, 83 Analog-to-digital converter, 16, 85 Analysis, 7, 159 bibliographical, 191, 196 conformance, 143 correlation, 29 cost, 47 design, 7 discrimination, 354 failure, 150 feed-forward,90 mathematical, 9, 70, 71, 243, 258, 270, 14,
90, ... numerical, 12 order, 89 physical systems, 73 post flight, 174 profitability, 118 regression, 29, 8 sales, 86, 117, 118, 137 scientific, II, 17 statistical, 49, 355 stream, 78 systems, 3, 4, 20, 57, 102, 122 trade-off, 148 Analyst, 22, 43, 44,45, 374
applications, 229 mathematical, 244 operations, 71 systems; 22,60,66,67,72,54 Analytic cost accounting, 48 Analytical inventory control, 14 Anchor system (two and three), 18 Angular position, 209 Annealing, 13 Anode streaks, 151 Antenna beam, 209 Antenna pairs, 209 Antennas, 207 receiving, 208 Antwerp, 162 Applications, 4 Applications work, 4 Applied mathematics, 140 Aquisition problem, 209 Arabic book, 161 Arc cosine, 372 Arc sine, 372 Arc tangent, 372 Archimedes, 65 Aristotle, 64, 158, 161, 173 Arithmetic, 159, 160, 161, 173 theoretical, 160 Arithmetic-logic-instruction, 337 Arithmetic operations, 164 Arithmetic operator, 316 Arithmetic overflow, 351 Arithmetic system, 158, 170 Artificial intelligence, 8, 359 Artificial variables, 265 Artificial vector, 261, 265 Assemblage, 68, 69 Assembler-coordinator, 356 Assembler-generator, 363 Assembly time, 370 Astronaut maneuvering unit, 270 Aswell, 72 Asynchronous design, 348 Asynchronous operation, 344 Atmospheric level, 196 Atmospheric movement, 192 Audible alarm, 76 Audible warning, 74 Audio-frequency signals, 216 Auto-abstract, 212, 214 Autocontrol system, 226 Autoencoding,206 Automated data control, 268
INDEX Automated firing rates, lOS Automated grocery store, 230 Automatic abstracting, 193 Automatic access, 7 Automatic attenuating amplifier, 85 Automatic (continuous) blending, 5 Automatic budgeting, 86, 118 Automatic control, 3, I, 3, 14, ... Automatic channeling, 8 Automatic coding-decoding, 193 Automatic data collection, 84 Automatic data logging, 5 Automatic data system, 191 Automatic displays, ISO Automatic evaluation procedure, 5 Automatic gauge control, 91 Automatic guidance, 12 Automatic guidance profiles, 193 Automatic hump yards, 219 Automatic indexing, 193 Automatic information handling, 195 Automatic information regeneration, 193 Automatic information retrieval process, 194 Automatic information retrieval system, 1% Automatic interrupt, 10 Automatic interruption, 44 Automatic interruption technique, 346 Automatic issue, 117 Automatic language translation, 193 Automatic logging cycle, 28, 64 Automatic manned media, 1% Automatic marshalling yards, 214 Automatic material handling, 85 Automatic media, 196 Automatic message, 94 Automatic method, 211 Automatic navigation, 186 Automatic plant control, 204 Automatic process control, 26 Automatic processing, 200 Automatic recorder points, 117 Automatic request, 34 Automatic retrieval, 196 Automatic shutdown, 12 Automatic startup, 12 Automatic train operation, 214 Automatically gauging sentences, 194 Autonomous system, 33 Autosimulation, 193 Auxiliary storage, 12, 210 Availability cargo-space, 174
251
up-to-date, 170 Availability button, 170 Availability record, 170 Average daily balance, 242, 244 Axiom, 227 additive, 227 Axis crossing, 119 Babylonians, 159, 160 Babylonian astronomical observation, 202 Backlog, 122, 123 Backlog recap report, 124 Balance average (daily), 242, 244, 245 daily trial, 236 minimum, 245 remaining, 239 Ballistic missiles, 198 Banking, 229 Banking business, 229, 247 Base-load station, 65 Basic cell, 182 Basic cycle, 307 Basic star, 183 Batch, 4, 6, 15, ... Batch header slip, 238 Batch number, 239 Batch processing (system), 350, 3, 88 Batching, 111 Bauds, 128 BCD,91 Behavior, 5, 224, 228 erratic, 206 Bellman, 247 Bernoulli, 225 Bibliography, 194 Bid evaluation, 232 Billet cutting, 13 Billing, 86 monthly, 230 Billing information, 51 Bimetal strip, 81 Binary, 91, 162, 168, 169 Binary code, 108,313 Binary coded, 179 Binary coded decimal, 105, 162, 163 Binary form, 164 Binary pulse, 119 Binary system, 167 Binary variables, 180 Bistable multivibrator, 191 Bit synchronization, 104
252
INDEX
Black box, 91, 99, 102, 187, 244, 267, 279, 358,359 digital control, 243 Black box analysis, 186 Black box approach, 253 Blank out, 200 Blast furnace, 5, 85, 91, 92 Blast moisture, 92 Blast temperature, 92 Blending operation, 255 Block building, 5, 74 data building, 211 idea building, 210 out-of-balance, 239 prove,239r storage, 315, 319 Block batch number, 238 Block diagram, 104 Block of memory, 349 Block transmission, 323 Blocking, 234 on-line, 221 Blue collar, 61 Boiler, 68, 69 Boiler operations, 70 Boiler panel, 64 Boiler testing, 78 Booking advance, 174 normal, 222 past, 174 Booking master, 129 Boole (George) 173, 174, 180 Boolean algebra, 163, 173, 176,180,181,185, 198 Boolean function, 178, 180, 181, 182, 183 Boolean logic, 174 Boolean matrix, 185, 186 Boolean symbol, 174 Boolean system, 176 Boolean tool, 186 Boolean variable, 105, 183 Boolean vector, 184, 185 Boolean vector product, 184 Booiean vector sum, 184 Brain, 80 Brain stem, 33 Braking time, 225 Bridge-type synchronous detector, 120 Broadcasting batteries, 207 Budgetary control, 106
Buffer, 370 Buffer storage, 86, i16 Buffer storage facilities, 33 Buffering subsystem, 7 Building blocks, 5, 74 Bulbs of Krause, 79 Burst cartridge, 10 Business data processing, il Butadiene, i3 Calculating, 238, 239 Calculation, u, 226 orbital, 206 multicomponent distillation, 19 real time, 1I0 Calculus, 173 Calculus variation, 254 Car based equipment, 226 Car units, 213 Carbon monoxide, 92 Card file, 200 Card Punch, 272 Card reader, 272 Card-to-tape machine, 19 Cargo space availability, 174 Cataiyst performance, 10 Catalytic cracking, 264, 19 Catalytic deterioration, 24 Catalytic reforming alkylation, 13 Catastrophic deficiency, 352 Catastrophic indicator, 316, 3i7 Catch-up feature, 225 Cathode ray consoles, 204 Cathode ray tube, 11,202 Cell body, 33 Cement kiln, 255 Cement manufacturing, 254 Central computer, 135, 161 Central data processing equipment, 54 Central data processing unit, 77 Central interface, 235 Central nervous system, 14, 3i Central processing unit, 12, 154 Central processor, 31, 77, 85, 201 Central recorder, 136 Centralized control, 204 Centralized point, 161 Cerebral cortex, 230 Channel, 160 communication, 165 data, 108, 169 tributary, 141
INDEX trunk, 141 Channel capacity, 209, 210 Channel sequency number, 144 Channel service, 38 Channel word, 38 Character recognition, 229 Check information, 237 Checkpoint procedure, 355 Check up, 17 Checking account, 231 Chemical process controller, 89 Chromatograph, 85 Chronological, 197 Chronological order, 221 Cifra,161 Circuit(ry) electronic, 163 information, 125 integrated, 26 monglyth, 26 self-checking, 204 solid-state, 26 storage, 220 supporting, 232 Classification, 197, 198, 199 Clerical difficulties, 170 Closed loop, 63, 92, 101, 318 semi-closed, 25 Closed loop applications research, 2 Closed loop control, 22, 17 Coating, 83 Coaxial cable, 95 Code conversion, 140 Code generating, 224 Code signal, 224 Coding, 362 Coil, 93 Coke-to-ore, 92 Cold stimuli, 79 Combustion experts, 238 Command signals, 183 Communicating process, 184 Communication, 208 digital, 180 interbanking, 231 intercomputer, 9 two-way, 216 Communication channel, 165 Communication-gap, 4 Communications line, 169 Communications link, 34, 179 Communications network, 159
253
Communication-satellite, 6 Compatibility, 21 Competitive pressure, 233 Compile, 343 Compiler, 362 algebraic, 275 retrieval language, 206 Complementary, 169 Complement, 170 Component configuration, 44 Component specification, 204 Component testing, 14 Comprehension, 233 Comprehensive history, 210 Computer, 5, 307 central, 135 digital, 256, 88 electronic digital, 86 general purpose, 12 master, 1 multiarithmetic, 336 off-line, 345 on-line, 345 primary, 47 process control, 144 real time, 74, 81, 370 stand-by, 206 subsidiary, 169 Computer automation, 3 Computer breakdown, 160 Computer control, 307, 10, 115, 152 Computer-guided control, 244 Computer-guided mechanism, 244 Computer interrupt, 319 Computer network, 54 Computer-oriented application Computer-oriented control system, 244 Computer-oriented quality evaluation, 142 Computer operations Computer processing, 129 Computing, 3 hybrid, 92 Computing capacity, 161 Computing center, 160 Computing events, 211 Computing machine, 4 Concurrent trunk, 94 Conditioning, 9 Cones, 79 Conformance analysis, 143 Conjunction, 178, 234 Console, 7
254
INDEX
Console typewriter, 312 Contact closure input, 12 Continuous annealing lines, 85 Continuous automatic scanning, 66 Continuous processes, 213 Continuous trial, 139 Control, 9, 106 adaptive, 9 air traffic, 179 analytical inventory, 14 automated data, 268 automatic, 3, 1, 3, 14, ... budgetary, 106 cash-on-hand, 112 centralized, 204 closed loop, 22, 17 combustion, 12 computer, 307, 10, 115, 129 communication, 5 cost, 86, 110, ... data, 10, 13, 89, 121, ... digital, 4, 10, 58, 59, 103, 217 digital process, 24 direct, 5 electronic, 89 fire, 8 ground-approach, 187 highway traffic, 13 humidity, 166 input-output, 9 integrated refinery, 13 inventory, 88, 105, 117, 118, 119, 121, ... kiln, 13 manual, 224 missile range, 136 motor traffic, 225 multi-level, 31 multiple unit, 13 numerical, 7 one-level, 31 output, 35 process, 7, 78, 256, 266, 273,310,312,315, 318,363,6,60,86, ... process-type data, 141 product, 40 production, 117, 119, 122, 124, 125, railroad classification yeard, 14 real-time, 9, 77 remote, 131 seat, 136 steering, 227 traffic, 13
wayside, 224 Control action, 244, 256 Control buffer package, 346 Control charts, 62 Control cycle, 323 Control device, 15 Control element, 38 Control entry, 182 Control level, 2 Control loop, 44, 46, 12, 69 Control motor action, 310 Control oriented, 15 Control panel, 15, 64 Control personnel, 175 Control processes, 66 Control program, 3 Control setting, 244 Control situation, 204 Control storage, 38, 318 Control system, 9,37,57,60,101,313,342,68 Control system functioning, 374 Control system operation, 113 Control system programs, 310 Control techniques, 229 Control totals, 236 Control variable, 92 Control word address, 370 Control word field, 370 Convalescence, 281, 289, 290,291,292,293, 296,300 reliability, 295 Conventional notion, 208 Conversion method, 96 analog-to-digital, 83 Conversion schedule, 159 Converted observation, 211 Coordinates hyperbolic, 18 Coordination signal checks, 144 Corpuscles of Meissner, 79 Core storage, 118, 200 Correcting errors, 165 Correlation coefficient, 63 multiple, 373 partial, 373 Cost account, 244 standard product, 123 standard unit, 244 Cost accounting, 86 Cost analysis, 47
INDEX Cost control, 86, 110, ... Cost effectiveness, 35,40 Cost efficiency, 152 Cost efficiency evaluation, 14 Cost variables, 173 Courant, 247 Convariance matrix, 194 Cracking catalytic, 264 fluid catalytic, 13 thermal, 264 Credit card, 57, 58, 59, 60, 232 embossed plastic, 58 Credit card accounting, 59 Credit information, 51 Credit level, 116 Credit verification, 230 Cropping, 88 Cross examine, 199 Cross office function, 144 Crude distillation, 13, 264, 2 Crystal controlled oscillator III Cumulative bias, 226 Customer passbooks, 234 Customer remittances, 59 Cybernetics, 200 Cycle data, 125 Cycle counter, ·90 Cycle statements, 237 Cyclic basis, 337 Cyclic code, 1I3 Cyclic repetition, 316 Cypher, 161 DAC-I,273 Data, II Data acquisition, 271, 90, 181 Data automation, 18,229, 103 Data bases, 339 Data block, 43, 326 Data calculation, 148 Data carrier, 6, 115 Data channels, 108, 169 Data checks, 38 Data classification, 148, 192 Data collection, 83, 267,374,63,184,185, ... Data collection activities, 88, 96 Data communication, 104 Data control, 3, 13, 89, 121, ... Data control ensemble, 116 Data control master, 56 Data control network, 70
255
Data control programming, 353 Data control system, 5, 10, 112, ... Data display, 373 Data efficiency, 40 Data feedback, 141, 144 Data filtering, 59 Data gathering units, 133, 134, 136 Data generation, II, 12,6 Data generator, 194 Data handling, 10,271, 89, 106, 116, ... integrated, 60 internal, 88 real time, 106 Data implication, 223 Data integration, 3, 103, 125, 141 Data interpretation, 226 Data linkage, 336 Data load, 102, 109 Data logging, 5, 16, 103,272, 104, 110 Data logger 89,64 Data network, 51 Data phone, 123 Data processing, 3, 264, 273, 57, 130 Data processing functions, 236 Data processing machines, 206 Data processing system, 67, 102, 118, ... Data processor, 202 Data recording, 148 Data reduction, II, 12, 148,236,310,63,90, 184, ... Data reduction procedures, 193 Data reduction techniques, 247 Data redundancy, 236 Data retrieval, 148, 337 Data sampling, 59 Data storage, 83 Data support, 271 Data synchronizer, 117 Data system, 279, 293 Data tagging, 139 Data transceiver, 123, 125 Data transcription, 1I6 Data transformation, 29 Data transmission, 14,318,112,186,217,232 Data word, 38 Datum, 207 Deadline, 5, II relative, 33 Deadline basis, 190 Deadline capacity, 351 Deadlock, 329, 330 Deadlock detection method, 331
256 Deadlock problem, 52, 328 Dead pockets, 81 Dead Sea Scrolls, 202 Debugging, 358, 362 Decimal, 162, 167 pure, 162 Decimal classification, 197 Decimal digits, 164 Decimal numbers, 160, 168 Decimal system, 158, 163 Decision making, 199,225 Decoder, 372 Demand inquiry, 6 Density, 96 Deposited check, 236 Design analysis, 273 Design automation, 273 Design logic, 275 Desensitize, 101 Deterioration, 231 Deterministic approach, 9 Deterministic control mechanism, 9 Diagnosis, 14 Diagnostic application, 374 Diagnostic coordinator, 315 Diagnostic flag, 140 Diagnostic program, 18 Diagram engine, 216 train formation, 216 train running, 216 Dial telephone, 231 Dichotomy, 10, 11, 159, 198 man-machine, 219 Dictionary, 215 original, 216 Dictionary of key words, 214 Differential calculus, 233 Digesters, 13 Digital,51 Digital automation, 9, 342, I, 85, 86, 218 total, 218 Digital buffer module, 85 Digital clock, 152 Digital communication, ISO Digital computation, 173 Digital computer, 256, 88 Digital control, 4, 10, 58, 59, 103, 217 Digital control application, 243, 244 Digital control equipment, 4 Digital control function, 93 Digital control problem, 242
INDEX Digital control scheme, 104 Digital control system, 13,40,72,319,374,94 Digital data transmission, 273 Digital display, 90 Digital experimentation, 182 Digital fast scanner, 70 Digital guidance, 254 Digital information, 99 Digital input, 369 Digital manipulation, 160 Digital mathematics, 159 Digital master guidance Digital subject, 123, 126 Digital system, 200 Digital technology, 346 Digitizer absolute, 102 frequency conversion, 102 incremental, 102 Dimensional tolerance, 227 Dionysius, 65 Dip tube, 89 Direct hydrocarbon oxidation, 13 Direction-finding equipment, 187 Discrete particles, 158 Discriminatory power, 214 Di~unction, 178,234 Disk file, 37, 200, 165 Dispatching economic, 13 power, 12 Display automatic, ISO data, 373 digital,90 visual, 66, 75, 77 Display printer, 169 Disseminate, 191 Dissemination, 84, 203 Distributed storage libraries, 6 Division, 159, 160 Document scanner, 95 Doppler radar, 221 Double-precision arithmetic, 38 Drawbacks, 191 Due time, 336 Dump, 333, 337 Dumping condition, 334 Duplication, 158 Duplication of programs, 161 Dynamic optimal control, 253 Dynamic program, 22
INDEX Dynamic programming, 216 Dynamic response, 78 Early warning aircraft, 201 Economic dispatch, 13 Economic Statistics, 39 Economically unattractive, 246 Editing function, 211 Education, 194 Effective car utilization, 213 Effective memory, 232 Effective memory capacity, 231 Effector, 5, 14,31,56,77 Effector mechanism, 80 Effector nerves, 78 Efferent, 33 Efferent neuron, 80 Efficient size, 290 Egypt, 65 Egyptian calculation, 158 Egyptians, 158, 160 Electrified cart, 231 Electroencephalocardiogram, 258 Electromagnetic induction, 226 Electromechanical relays, 231 Electronic beam switching tubes, 109 Electronic circuitry, 163 Electronic chauffeur, 226 Electronic component manufacture, 13 Electronic computation, 9 Electronic computing media, 98 Electronic data processing, 202, 56, 117, ... Electronic digital computer, 86 Electronic information system, 138 Electronic machines, 162 Electronic media, 263 Electronic switching arrangement, 12 Electronically cataloging, 202 Electronically processed simulator, 8 Electronics information, 18 industrial, 9 Emergency handling, 354 Emergency stand-by basis, 211 Encoder, 273 Encoding feedback, 107, 108 magnetic, 107, 109 spatial, 108, III time, 110 time base, 107 End-of-block, 139, 140
257
transmit, 140 End-of-block-check, 139, 140 control, 139, 140 End-of-block detection, 370 End-of-file, 364 End-of-run functions, 373 End-of-storage-area, 139 Endogenous information system, 41 Energy dissipation, 232 Engine diagram, 216 Engine testing, 13 Engineering, 229, 103, 105 Engineering design, 11, 142 Engineering sales, 20 English, 161 Entropy, 209, 210 relative, 210 Epiglotis, 80 Equilibrium, 79 Equipment maintenance, 3 Equiprobable subdivision, 197 Erratic behavior, 206 Error, 270 correcting, 165 data, 270 program, 272 programming, 318 quiet, 270 round-off, 38 value, 61 Error bit, 150 Error control, 150 Error correction, 310, 318 Error rates, 62 Error reduction, 62 Error detection, 10,34,270,310,318,370 mathematical-statistical, 270 Error-finding process, 270, 271 Error-free results, 279 Error procedures, 22 Error process, 270 Error rate, 9 Error recording, 354 Error routine, 348 Error scans, 9 Established system, 208 Estimated daily float, 244 Ethylene, 13 Ethylene oxide, 13 Etruscans, 160 Euclid, 160, 161 Euristic, 161
258
INDEX
Evaluation bid,232 cost efficiency, 14 scientific. 145 user, 145 Evaluation of documents, 215 Evolution, 223 control systems, 20 human, 228 industrial, 13 manned,224 mental, 224 purposeful, 224 purposeless, 224 self, 235 system, 13 Evolutionary knowledge, 229 Exception reporting, 148 Execution chain, 354 Executive monitor, 343 Executive programme, 342,343,354,355,359, 360,362 Executive routine, 335, 342, 350, 351, 352, 353,355 Expansion reaction, 86 Experimental communications, 14 Experimental life curve, 85 Experimentation, 47 digital, 182 mathematical, 7 Exteroceptors, 79 Extrapolating guidance, 41 Extrapolation, 14 Facsimilile, 30, 117 high-speed, 223 Fail-safe, 34, 314,227 Fail-slow, 8, 278, 299 Fail-soft, 8, 278, 299 Failure, ISO computer, 17 data, 17,318 interrupting, 297 reservation, 170 systems, 301 Failure analysis, 150 Failure data, 150 Failure-free operations, 285 Failure indicator, 145, 146 Failure proof, 146 Failure rate, 286, 146 Fallback, 40
Fans, 69 Fault recording, 110 Feedback, 10, 11,66,279,133,142, ... data, 141, 144 field,7 information, 150, 150 negative, 200 nonfeedback, 150 Feedback control, 84 Feedback encoding, 107, 108 Feedback loops, 34, 150 Feedback system, 34, 16 Feed-forward, 8, 84, 47, 143, 144 Feed-forward analysis, 90 Feed-forward control, 84 Feed-forward facilities, 278 Feed plate matching, 20 Feed stock, 4 Feed-water flow, 71 Feed-water pressure, 71 Field coil power, 110 Field feedback, 7, 145 Field information, 145 Field size, 38, 318 Field use, ISO File address, 59 index, 59 master tape, 128 transaction, 242 File maintenance operation, 134 Financial transactions, 230 Fisher, 195,219 Fixed information, 136 Fixed time melting, 'i!:7 Flag bit, 38, 318 Flight arrivals, 183 Flight path, 203 Flight plans, 177 Flight process strip, 185 Flip, 323 Flip flop, 74 Flood control, 249 Flood control works, 248 Flood frequency, 248 Flood frequency curves, 248 Flow, 28 reverse, 62 Flow diagramming, 307 Flow rates, 92, 14 absolute, 92 Flowing stream, 82
INDEX Flue dust losses, 92 Fluid catalytic cracking, 13 Fluid dynamics, 194 Auid filling, 81 Forge Metal Works, 119, 126 Format generator, 312 Forward looking, ISO Fractions, 160 Freight car inventory, 213 Freight tons, 214 Frequency, 101 logging, 65 use, 174 vibration, 86 Frequency band width, 186 Frequency distribution, 190 Frequency signal, 119 Galvanization lines, 13 Ganglion collateral, 33 pre ganglion, 33 vertebral, 33 Ganglionic, 33 Gantt chart, 98 Gas-cooled reactors, 10 Gas distribution, I3 Gas purification, I3 Gasoline accounting, 49 Gasoline blending, 13,263,264,7, 17, 19 Gating, 234 Gauge, 93, 94, 121 Gauge control automatic, 91 Gauge tolerance, 151 Gauss, 372 General ledger, 230 Generation, 26 first, second, third Generating unit, 3 Generator panel, 64 Generic problem, 214 Geometry, 160, 173 Goethe, 194 Gradient methods, 254 Gradual improvements, 246 Graphic source material, 224 Greeks, 158 Grems, 195,219 Grid-type, 83 Grinding, I3 Ground-approach control, 187
259
Guidance, II, 30 automatic, 3 computer, 3 sophisticated, 5 systems, 249 traffic, 7 Guidance action, 69 Guidance approaches, 177 Guidance efficiency, 272 Guidance equipment, 4 Guidance function, 12, 325 Guidance information, 2 Guidance logic, 325 Guidance profile, 191,202 Guidance resource, 43 Guidance system, 10 Gypsy I and II, 11 Gyro apparatus, 207 Halving, 158 Hardware, 10, 11,229,309,313,314,323, 324, 348, 349, 350 machine, 190 simulated, 335 Hardware feature, 45 Hardware identifier, 146 Hardware implication, 150 Hardware reliability, 374 Hardware software complex, 273 Hardware system, 434 Header item, 239 Hearing, 79 Heat exchange (performance), 19, 20 Heat losses, 92, 93 Held funds, 234 Hellenistic period, 158 Herodotus, 158 Heuristic procedure, 219 Heuristic retrieval, 196 Hexadecimal, 162, 167, 168, 169 High-density recording, 232 High frequency, 195 High order plant, 204 High pressure polymerization, I3 High-speed buffers, 10 High-speed calculators, 232 High-speed data processors, 73 High-speed density magnetic tape, 26 High-speed facsimile, 223 High-speed memory, 231 High-speed random access memory, 6 High-speed switching devices
260
INDEX
Higher control level, 2 Hindu-Arabic notation, 160 Hindu numerals, 162 Historical ephemeris, 202 Historical information, 178 Homeostatic concepts, 244 Homeostatic mechanism, 229 Homeostatic purpose, 205 Hot file, 172 Hot metal addition, 99 Hot metal preparation, 99 Hot spots, 81 Human channel, 219 Human limitation, 219 Human linkage, 99 Human operators, 99 Human organism, 232 Human nervous system, 77, 229, 231, 232 Human subsystem, 36 Humidity control, 166 Hunger, 79 Huxley, 228 Hydrodynamic equations, 194 Hyperbolic coordinations, 19 Identification key, 174 Independent variable, 24, 92 Index, 215,289 analytical, 202 commutative, 176 distributive, 176 evaluation, 47 nodal, 206 retrieval, 215 sorted-classified-abstracted, 190 Index number, 265 Index register, 309 Index term number, 215 Indexing, 195, 199 analytical, 194 automatic, 193 multidimensional, 191 primigenial indexing for heuristic retrieval, 195 source, 195 Indexing contradictions, 194 Indication register, 220 Indication system, 224 Individual account, 229 Individual transaction, 241 Industrial Engineering, 122 Industrial evolution, 13
Industrial process control, 136 In-flight tracking, 197 Information billing, 51 check, 237 credit, 51 field, 145 fixed, 136 guidance, 268, 2 historical, 178 management, 215 operational, 215 process, 93 quality, 146 quick look, 92 reservation, 162 semifixed, 136 speculative, 237 stored, 191 tally, 139 train, 222 variable, 136 Information circuit, 126 Information feedback, 150, ISO Information processing, 204 Information retrieval, 136, 180, 190, 191,202, 211,214,215 automatic, 193,205 Information retrieval system, 192,216 Information retrieval subsystem, 194 Information storage, 136 Information system, 204 Information tableau, 204 Information tag, 200 Information technology, 10 Information theory, 4, 223, 227 Information understructure, 4 Ingots, 96, 108, ... Initial order handling, 116 Initialization, 242 In-memory data selection, 326 In-memory deadlock, 330 In-memory operations, 239, 325, 330 In-memory overlaps, 320 In-memory stored records, 332 In-process, 14 In-process operations, 325, 326 Input, 199, 236 alphanumeric, 70 contact closure, 12 manual, 70 Input coordinator, 105
INDEX Input document, 89 Input key board, 169 Input scan checks, 12 Input scanner, 62 Input variable, 93 Input/output balance, 355 Input/output coordinator, 281 Input/output devices, 281, 322 Input/output interrupt, 348 Input/output typewriter, 85 Input requirements, 233 Input ring, 138 Input signal, 70 Input/throughput system, 152 Inquiry specification, 236 Inquiry terminals, 160 Inquiry transaction, 161 Inscribing unit, 236 Installment loans, 230 Instantaneous response, 40 Instruction word, 38 Instrument system, 189 Integer, 159 positive, 150 Integer constant, 317 Integrated circuitry array, 204 Integrated communications, 3 Integrated data handling, 60 Integrated processing, 55 Integrated real-time operations, 17 Intelligence, 191, 239 artificial, 8, 359 Interbanking communications, 231 Intercommunicate, 6 Interface, 7, 314, 319 central, 235 local, 234 Interface control, 319 Interface electronics, 85 Interface instruments, 10 Interface mechanism, 10 Interface media, 230 Interface memory space, 335 Interface units, 146, 123 Interlacing, 151 Interline accounting, 62 Intermixing of signals, 179 Internal circuitry design, 163 Internal data handling, 88 Internal system, 309 Interoceptor, 79 Interpolation, 97, 98, 201, 202, 225, 243, 371
261
Interrupt command, 318 Interrupt information, 336 Interrupt program, 48, 49 Interrupt stream, 9 Interrupt techniques, 44 Interval scanning timer, 9 Intra-extrapolation, 193 Intrinsic characteristic, 195 Inventory control, 88,105,117,118,119, 121, ... process, 55 Inventory file maintenance, 134 Inventory record, 108, 171, 172, 175 Inventory retrieval, 196 Iron, 13 pig, 85 Iron oxide, 91, 92 Irredundant, 182 Irredundant coverings, 183 Isothermal flash equations, 17 Item status listing, 111 Iterative techniques, 38 Islands, 91 discrete digital automation, 86 Jet aircraft, 197 Jet stream, 195 Jewish, 161 Journal storage, 135 Junction controller, 219 Keyboard, 7, 272 Keyboard data, 85 Keyboard punching, 84 Keyboard word, 206, 321 Kinesthesis, 80 Lagrange multipliers, 254 Landing instructions, 177 Language, 208 human, 208 machine, 208 optimal electronics, 205 pseudo, 211 query, 12 Language translation, 20I Large-scale electronic data machine, 3 Large-volume accounts, 237 Laser, 130 Latin, 161 Lattice, 198 Lattice-like, 198
262 Lattice-model, 200 Law, 96 associative, 175 commutative, 175 distributive, 175 mathematical, 94 Newton's, 212 Law of error, 248 Learning, 9 Learning control device, 204 Lease meter, 53 Ledger, 56 general,56 subsidiary, 56 Length character, 38 Levers, 207 Limit words, 38, 318 Linear, 97, 159 Linear dependent, 183 Linear independent, 183 Linear programming, 254, 257, 266, 373 Linear programming routine, 261 Linear sequence, 205 Linear sweep, 110 Line capacity, 216 Line segment, 181 Line speed assignment, 'ir7 Line user, 12 Linguistics, 194 Liquid flow rates, 24 Literary data processing, 193 Literary sets, 199 Little Gypsy, 2, 75 Load master, 132, 133 Loan installment, 230 Loan payments, 229 Local interface, 234 Logarithmic, 227 Log, 75 Log entry, 312 Log sheet, 68 Log tape, 211 Logging, 86, 312, 74, 155 automatic data, 5 data, 69 production, 94 routine, 308 Logging cycle, 31, 154 automatic, 28, 64 Logging frequency, 65
INDEX Logging interval, 65 Logging typewriter, 312,74 Logging sequence, 154 Logger data, 89, 66 Logic, 173,223,234 Logic operator, 316 Logic system, 190 Loop, 316 closed, 63, 92, 101,318 control, 46, 12, 69 information, ISO open, 27, 63, 313 priority, 337 semi-closed, 25 sub, 68 Loop transducer, 152 Louisiana Power and Light Co., 9 Loss of data, 208 Lowell, 247 Low-frequency words, 216 Low-volume transaction, 231 Lubricating oil temperature, 73 Lugubrious profession, 194 Machine output, 168 Macroop, 371, 373 Macroop computers arctan, 372 Macrooperation, 139,364,370,371 Magnetic cores, 40 Magnetic drum, 37 Magnetic ink, 229, 238 Magnetic tape, 118, 124,211,90,109,125,129,
130 Magnetic tape drives, 272 Maintenance, 290,291 equipment, 3 inventory file, 134 order file, 134 systematic, 289, 290, 291, 293, 85 Maintenance dependability, 291 Maintenance features, 293 Maintenance programs, 291 Management information, 215 Management planning, 14 Manchester University, 194 Man-library communication, 191 Man-machine dichotomy, 219 Man-machine system, 35 Man-made information machines, 235 Man-made information systems, 231, 232 Man-made nanowatt devices, 233
INDEX
Man-made system, 2, 6, 66, 224, 232, 234, 237, 282 Manned,15(unmanned) unmanned abstracting, 194 unmanned system, 193 Manned media, 192 Manual control, 224 Manual digit entry switches, 313 Manual input, 70 Margin of safety, 206 Marshalling yard, 219 Mask cyclic, 113 improved encoding, 108, 113 photographic binary, 113 rectangular, 11, 112 redundant, 113 Mass market, 140 Mass memory, 235 Mass production, 139 Mass spectrometer, 85,5 Master booking, 129 load, 132, 133 Master computer, 1 Master data control, 56 Master item card, 127 Master order, 129 Master plot, 203 Master record, 242 Master tape, 111, 129 Master tape file, 128 Material-component interaction, 10 Mathematical analysis, 9, 70, 71, 243, 258, 270, 14, 90, ... Mathematical analytical study, 259 Mathematical description, 217 Mathematical doctrine of probabilities, 173 Mathematical experimentation, 7 Mathematical expression, 34 Mathematical forecasting, 14 Mathematical hypothesis, 243 Mathematical law, 94 Mathematical logical system, 234 Mathematical manipulation, 60 Mathematical model, 10,70,73,241,243,245, 246,252,267,274,307,99, ... Mathematical programming, 257, 262, 263,264,266,267,268,269,273, 117, ... Mathematical programming system, 223 Mathematical reliability, 282
263
Mathematical simulation, 12, 70, 73,245,307, 329,6,47, ffl, ... Mathematical simulator, 10,7,54,68,93, ... Mathematical statistical approach, 29 Mathematical statistical definition, 142 Mathematical statistics, 247, 52 Mathematical system, 73 Mathematical test, 117 Mathematical tool, 104, 142 Mathematics, 223 applied,9,140 Matrix, 188, 189,265 basis, 265, 266 covariance, 194 data, 265, 266 mercury wetted relay, 70 stochastic, 198 transformation, 265 Matrix addition, 186, 372 Matrix equation, 185 Matrix representation, 217 Matrix theory, 180, 185 Maurin, 2, 10 Maxterm, 179 Maze-like, 198 Maze-like structure, 200 Mean time between failure (MTBF), 285 Mean time to repair, 292 Medical research, 14 Megasystem, 6 Memory, 236, 239 alarm, 154 auxiliary, 37 bulk,85 dead, 78 external, 37 high-speed,26,162 in-transit, 140 large, 26 low-cost, 26 mass, 235 nondestructive, 83 sequential, 234 shared, 9 Memory access, 349 Memory address, 320 absolute, 371 shared,9 Memory dump, 332, 348 Memory dump operation, 333 Memory guard, 322 Memory print, 322
264 Memory protect equipment, 37 Memory specification,_ 37 Memory size, 39 Memory system, 153 Memory-to-memory,35 Memory-to-memory transmission, 230 Mercury-wetted relay matrix, 70 Merge routine, 322 Mesopotamia, 160 Message alphanumeric, 75 Message auditing, 86 Message capability, 180 Message control, 86 Message processing function, 140 Message relays, 6 Message switching, 135, 136, 145 Metallurgy, 122 Metaphysical, 173 Meter lease, 53 plant, 53 sales, 53 system, 53 well, 53 Meteorological data, 189 Methane gas, 115 Methanol, 13 Method of summing Microwave, 130, 131 Microwave channel, 95 Microwave link, 230 Microwave network, 223 Military digital communication, 136 Military installations, 160 Military logistics, 14 Mills, 94 cold, 93 hot strip, 85, 86, 93 pulverized fuel, 69 rod, 125 rolling, 85, 86, 88 roughing, 91, 94 slabbing, 91, 96, 149 Mill invoicing, 125 Mill operating, 93 Mill rolling, 123, 124 Mill schedule, 93, 106 Mill utilization, 104 Minimal forms, 181 Minor burden, 232 Minterm, 179
INDEX Missile checkout, 13 Missile test stand studies, 14 Missile tracking, 205, 207, 210 Mode request, 319 Mode status, 319 Model dynamic, 253 linear, 247 mathematical, 10,70, 73, 241, ... 99, ... reliability, 293 TVA, 249 Model building, 263 Model checking, 263 Modular components, 6 Modulation carrier, 119 Modulator-demodulator, 115 Monitor, 41,51,312,343 human, 352 Monitor function, 86 Monitor program, 353, 181 Monitor system, 360 Monitoring, 15, 16,315, 141 alarm, 354, 110 nuclear, 12 Monitoring program, 332, 357 Monitoring subroutine, 358 Monte Carlo, 49, 86, 98 Monthly billing, 230 Motor driven coded disk, 16 Motor traffic control, 225 Movement, 79 atmospheric, 192 Moving vehicles, 217 MTBF,300 Mucosal lining, 79 Multichannel capability, 26 Multicomponent distillation calculations, 19 Multicomputer, 6 Multifield memory, 52 Multifunctional planning, 12 Multiguidance ensemble, 236 Multilevel memory, 39 Multimachine system, 160 Multimeaning, 193 Multimemory, 236 Multipath distortion, 87 Multiple-access scientific computing system, 136 Multiple correlation, 373 Multiple level memory, 309 Multiple regression, 8 Multiple remote input/output terminals, 12
INDEX Multiple unit control, 13 Multiplexing, 10, 52, 190 Multiplexing approach, 31 Multiplexing equipment, 161 Multiplexing unit, 169 Multiplexor, 89 Multiplication, 158 Multiplication process Multiprocessing, 6, 26, 236, 299, 331 Multiprocessing considerations, 337 Multiprogramming, 26, 275, 298, 349 Multiprogramming-multiplexor, 354 Multiplying sequential, 158 Muscle spindles, 79 Natural information network, 32 Natural logarithm, 163 Natural organism, 228, 230, 232 Natural system, 228, 231 Navigation plan, 185 N-dimensional cube, 183 Negations, 234 Neophyte, 308 Nerve disturbance, 231 Nervous system human, 231, 232 visceral, 33 Neumann (v.), 230 Neural cybernetic, 34 NEURON, 70, 74, 75 Neurons, 33, 230,232 Newton, 360 Newton's law, 212 Nielen,236 Noise, 290 Noise detecting, 34 Nondeterministic retrieval duties, 191 Nonlinear function generation, 78 Nonlinear programming, 254 Nonlinear system, 204 Nonvolatile, 83 Normal booking, 222 NTP,2 Nuclear energy process, 13 Nuclear monitoring, 12 Number of debits, 245 Number of deposits, 245 Numbering technique, 173 Numerical codes, 217 Numerical data processing, 193 Numerical reliability, 148
265
Numerical system, 157, 159, 162 Nylar,13 Nylon, 13 Objective time for failure maintenance (OTFM),292 Observing equipment, 193 Obsolence, 245 Obstacle detection, 226 Oceanographic data, 90 Octal,91, 165, 167, 168 Octet, 346 Octet band wagon, 346 Odometer, 66 Off cycled statements, 243 Off grade production, 45 Off-line(ness), 11,26,63,4 Off-line application, 18 Off-line data transmission, 35 Off-line media, 270 Off-line operation, 168, 172 Off-line processing, 2 Off-line system, 343 Off-line traffic agencies, 223 Off-line work, 357 Off-normal alarm, 76 Olfactory cells, 79 On-linemess), 5, 6, 14,49, 86, 262, 314,318, 2,4,90 On-line application, 307 On-line basis, 102 On-line blocking, 221 On-line computer, 99, 308, 310 On-line computer system design, 347 On-line computing, 266 On-line data, 98 On-line data application, 263 On-line data collection, 86 On-line data processing, 86 On-line digital control, 26 On-line display consoles, 12 On-line inquiry, 338 On-line input, 35 On-line inventory control, 136 On-line machine, 337 On-line operations, 64, 357, 229 On-line order processing, 136 On-line peripheral gear, 122 On-line process control, 94 On-line processing, 2 On-line purchase, 113 On-line real-time system, 342
266
INDEX
On-line system, 52, 64, 84, 96 On-line teletype send-receive set, 338 On-liners, 10 On-the-process, 14 Op code, 38, 318 Open hearth (operation), 13, 88, 93, 96, 99 Open loop, 27, 63,313 Open order report, 111 Open seats, 222 Operating executing programs, 343 Operating process, 199 Operation time, 35 Operational information, 215 Operational planning Operational pulse, 8 Operational real-time system, 26 Operatuibak traffic, 187 Operationally attractive, 246 Operations research, 70, 72, 194 Oppenheimer, 60 Optimal electronic languages, 205 Optimal solution, 214 Optimization, 30, 47 process, 310 self,9 steady state, 253 systems, 354 Optimization package, 163 Optimization program, 94 Optimization routine, 163 Optimum solution, 215 Optimum strategy, 217 Orbit plane, 212 Orbital calculation, 206 Order analysis, 89 Order file maintenance, 134 Order master, 129 Order placement, 232 Order system, 88 Order tracking, 152 Ordinary rotation, 160 Organism, 5 Orifice, 2 Oscillatory conditions, 25 OTFM,292 Out of balance blocks, 239 Out of balance tape, 239 Output, 199,236,242, 108 Output pickup, 310 Output scheduling, 370 Output space, 335 Output terminals, 107
Overdraft exception, 244 Overdue accounts, 116 Overshoot, 82 Oxygen sampling, 78 Page printer, 66 Pain deep, 79 superficial, 79 Paper-pulp factory, 25 Paper tape, 124,74,129,130 Paper tape reader, 70, 223 Parasympathetic, 33 Parasympathetic impulse, 33 Parasympathetic impulse subsystem, 33 Parity checks, 150 Parity ordering, 200 Partial correlation, 373 Parts summation, 227 Party lines, 166 Passenger wait(ing)-list(ed), 172, 221, 222 Passenger name, 171 Passenger record, 171 Pattern task,268 Pattern recognition, 8, 239 advanced, 8 Payment loan, 229 per diem, 223 stop, 234 Payroll, 11,86, 105, ... Payroll distribution, 55 Pearson, 252 Pentads, 162 Per diem payments, 223 Performance characteristics, 183 Perigee, 212 Peripheral blood vessel, 33 Peripheral equipment coordination, 311 Persian, 161 Personal flight control, 207 PERT, 259, 262 PERT/Cost, 374 Petroleum application, 3 Petroleum engineering, 18 Phase reversal modulation, 120 Philips, 247 Photocell driver, 204 Photoelectronics, 89 Pickup sensory elements, 41
INDEX Pig iron, 85 Pilot plants, 13 Pipeline pressure, 267 Piping system, 86 Pitch,206 Place value system, 161 Plan picker, 11 Plant meter, 53 Plateau, 107 Plate industry, 151 Plate plant, 151 Plato, 70 Playback, 126, 179 Pneumatic differential pressure transmitter, 89 Polling, 368, 369,370 Polling status, 369 Polybolon, 65 Polymerization, 264 Polynomial, 179, 185,372,373 minimal, 185, 186 Position, 79 angular, 209 Position indicating pencil, 275 Position reports, 186 Positional notation, 158 Positive integer, 150 Post-execution, 356 Post-flight analysis, 174 Post maintenance, 356 Potential digital experimentation, 7 Power dispatching, 12 Power generation load, 24 Power production, 64 Predictive purposes, 192 Predetermined function, 246 Preliminary orbit, 211 Pressure, 28, 79 barometer, 189 barometric, 190 pipeline, 267 Primary sorting, 236 Printed bibliography listing title, 197 Printer alarm, 76 page, 66 strip, 66 Print-outs, 28 Priority signals, 344 Private lines, 118 Probability of success, 282 Probability theory, 200
267
Procedures work, 121 Process control, 7, 78,256,266, 273. 310,312, 315,318,362,363,6,60,86, ... automatic, 26 industrial, 71 statistical, 71 stochastic, 224 Process-control application, 41, 161,242, 243, 327 Process-control computers, 144 Process-control for boiler operations, 66 Process-control program, 315, 357, 374 Process-control programming, 307 Process-control purposes, 228 Process-control systems, 322 Process data, 41 Process diagnostics, 310 Process industry, 1 Process information, 93 Process inventory, 55 Process lag, 82 Process optimization, 310 Process quality data, ISO Process storage, 38 Process-type data control, 141 Process-type problems, 307 Process-type studies, 5 Process value, 28 Process variable, 27, 312 Process word, 38 Processing, 10 batch, 3, 88 computer, 129 data, 3, 264, 273, 57, 130, ... electronic data, 202, 56, 117, ... information, 204 integrated, 55 warehouse, 55 Processing unit, 334, 337 Product assurance, 14, 144, 148 Product control, 140 Productivity, 214 Production analysis, 338 Production control, 117, 119, 122, 124, 125 Production logging, 94 Production planning, 85, lOS, 110, 121, 124, 137, ... Production scheduling, 13, 86 Production test plan, 156 Profile sheet, 155 Profitability analysis, 118 Program
268
INDEX
control, 3 monitor, 353, 181 optimization, 94 verification, 56 Program control words, 9 Program relocation, 9 Programmer, 308 systems, 357 Programming, 374 computer processed, 264 linear, 254,257, 259,266, 373 (nonlinear, 254) mathematical, 257, 262, 263,264,266,267, 268,269,273, 117, ... micro, 345 Programming system, 210 Programming routines, 308, 309 linear, 261 Progration time, 81 Prone blocks, 239 Propellerless fuselages, 197 Proprioceptive sensibility, 79 Proprioceptor, 79 Protozoan, 238 Proximity, 195 Pseudo-instruction, 371 Pseudo-random number, 372 Pseudo real-time, 35 Psychology, 194 Ptolemy, 161 Pulse count, 91 Pulse-count feature, 92 Pulverized fuel mills, 69 Pumps, 69, 73 signal,73 Pump valve, 73 Pump vibration, 73 Punched card, 21, 88, 234 Punched tape, 21, 88, 125 Quadratic equation, 159 Quality assurance, 62, 138, 139, 146, 151 Quality control, 1, 63, 138, 141, 144, 155 statistical, 62, 114, ... Quality history, 278, 299, 151 Quality history logs, 268 Quality history records, 348 Quality information, 146 Quality inspection, 149 Quality-oriented data network, 143 Quality purpose, 14 Quality records, 14
Quarternary, 162 Quarter space, II Queue, 40, 96, 100 Queing message, 144 Queing line, 98 Queing message, 135 Queing multidemands, 162 Queing theory, 96 Query, 40, 153, 197,353,54 Query language, 12 Quick-look information, 92 Quinary, 162 Quinary decimal, 162 Quinary system, 163 Radar doppler, 221 Radar blip, 203 Radar detection area, 198 Radar simulator, 179 Radiation effect, 196 Radio data links, 208 Radio telemetry, 207 Railroad classification yard control, 14 Railroad problems, 213 Railroad system, 214 Random access, 10, 37 Random access memory, 26 Random events, 96 Random noise, 227 Random normal number, 99 Random number generator, 372 Random sampling, 62 Random trial-and-error, 9 Random walk procedure, 49 Randomize, 200 Range evaluation, 372 Reaction kinetics, 255 Reactive agent, 89 Realtime, 5,6, 10, 17,49,83,84,92, 137,202, 262,270,281,308,309,314,322,356, 366, lIS, 138, ... Real time application, 313, 315, 318,359, 362 Real time basis, 114 Real time calculation, 110 Real time clock, 254, 312 Real time computer, 270, 308, 357, 366 Real time computing, 273 Real time control, 17, 323 Real time control application, 1 Real time control subroutines, 374
INDEX Real time coordinator, 346 Real time data handling, 106 Real time data system, 16 Real time executive routines, 342 Real time jobs, 325 Real time machine, 95 Real time operation, 115, 256, 343, 347, 349, 374 Real time problem, 20 Real time process, 257, 271 Real time program, 312, 313, 327 Real time purpose, 307 Real time system, 3, 37,116,190,278,330,332, 344, 345, 358 Receiver-emitter, 148 Receiving antennas, 208 Receptor, 5 Receptor nerves, 78 Reclassification, 214 Record carrier, 123 Record making, 148 Recorder central, 136 Recording device, 231 Recreation of historical scripts, 201 Recycle flow, 89 Recycle stream, 89 Reduction data, 11, 12, 148,236,310,63,90, ... error, 62 Redundancy, 10, 210, 302 data, 236 high, 208 low, 208 Refining, ffl Reflex arcs, 80 Regeneration of information, 202 Register, 38, 78 address, 349 indication, 220 reversible, 108 storage, 323 word,48 Regression analysis, 29, 8 Regression coefficient, 373 Regression estimates, 63 Relay organ, 230 Reliable service, 214 Reliability, 4, 9, 279, 281, 284, 2%, 144 mathematical, 282 numerical, 148 standard, 287, 288
269
systems, 278 Reliability consideration, 374 Reliability feature, 293 Reliability level, 35 Reliability model, 293 Reliability requirements, 204 Reliability studies, 7, 14, 279 Remaining balance, 239 Remittance customer, 59 Remittance stubs, 59 Remote deposit accounting, 136 Remote device, 40 Remote distance, 51 Remote unit, 314 Requirement determination phase, 43 Research engineer, 162 Reservation failure, 170 Reservation information, 162 Reservation message, 170 Reservation procedure, 173 Reservation system, 169, 221 Resource allocation, 42 Respiratory movements, 79 Response time, 39, 40, 153 Response time requirement, 160 Resistance thermometer, 83 Resistance wire, 81 Restricted delay, 111 Retention time, 86 Retrieval, 195, 199,203 Retrieval language compiler, 206 Retrieval system, 206 Reverse flow, 62 Rickover (Admiral), 142 Risk groups, 61 Rod, 79, 83 Rod mill, 125 Roman, 160 Rolling mills, 13 cold, 13,85 hot, 13,85 Roughing mill, 91, 94 Rolling stock, 215, 217 Routine executive, 335,342,350,351,352,353, 355, 12 linear programming, 261 optimization, 163 presort, 322 programming, 308, 309 real time executive, 342
270
INDEX
sort, 322 table-lookup, 213 transaction tracing, 39 Routine logging, 308 Running speed, 224 Saccule utricle, 79 Safe choice, 225 Safe stop, 312 SAGE, 30, 201 Sales analysis, 86, 117, 118, 137 Sales meter, 53 Sampling, 60, 63 data, 60 oxygen, 78 random, 62 statistical, 60 Sampling system, 62 Sample size, 60,61 statistical, 62 Sample survey, 63 Satellite bearing rocket, 206 Satellite computer, 96 Satellite machine, 61 Satellite tracking, 209 Satisfactory performance, 290 Savings bank, 233 Savings deposits, 229 Scalar multiplication, 372 Scaling, 78 Scan error, 9 Scan cycle, 312 Scan initiation, 106 Scan readout, 109 Scan signals, 106 Scanner, 7, 85 analog, 78 digital fast, 70 document, 95 input, 62 Scanner speed, 62 Scanning, 196, 242, 307, 311, 312,14,58,74, 108, 152, ... alarm, 12,69, 153 Scanning cycle, 18,52,307,74 Scanning intervals, 312 Scanning rate, 16 Scanning sequence, 154 Schedule maker Science, 223
Science Library, 194 Science of logic, 173 Scientific analysis, 11, 17 Scientific arbitration, 199 Scientific classification, 192 Scientific evaluation, 145 Scientific methods, 202 Scopostic, 225 Scopostic management, Scopostic phenomena, 225 Seats opening, 222 Seats available, 222 Seats inventory, 223 Secondary retarders, 221 Segment, 160 Seismic data, 18, 19 Seismic migration, 19 Selection-in-memory, 326 Selective fading, 87 Self-adjusting, 237 Self-adjusting faculty, 330 Self-checking circuitry, 204 Self-correcting capability, 191 Self-correcting features, 34 Self-correcting feedback loop, 84 Self-optimization, 9 Self-optimizing logical system, 180 Self-reparing, 237 Self-transforming process, 191 Semantic, 194,209 Semantic channel capacity, 209 Semicircular canals, 79 Semiclosed loop, 25 Semifixed information, 136 Semiprogram, 332, 333, 334, 335, 336 Sensor-actuator system, 84 Sensor inputs, 191 Sensory, 14 Sensory device, 83, % Sensory element, 78 Sensory technique, 77 Sequential multiplying, 158 Service charge, 244 Servo mechanism, 14 Servo motor, 78 Set, 204 Set point, 34 Set-up time, 104 Sexagesimal notation, 159 Shaft efficiency, 92, 93 Shaft position, 78
INDEX Shannon, 208 Shared memory, 9 Shewhard, 63 Shearing, 89 Shift word, 372 Signal audio frequency, 216 command, 183 code, 224 input, 70 priority, 344 scan, 106 slave computer, 320 video, 224 Signal lines, 34 Signal processing, 99 Signal pump, 73 Signal voltage, 267 Simple counters, 89 Simulated basis, 195 Simulated hardware, 335 Simulated work load, 181 Simulation, 361, 88 mathematical, 12, 70, 73, 245, 307, 329, 6,47,87, ... Simulation studies, 49, 193 Simulation technique, 86 Simulator, 70, 199,228,253,256,307 digital,9 electronical processed, 8 mathematical, 10,7,54,68,93 radar, 179 Simultaneous transaction, 168 Single function key, 168 Sintering, 13 Sinusoidal variation, 246 Siphons, 65 Skeleton tables, 202 Slab, 93, 94 temperature, 94 thickness, 94 width,94 Slabbing mills, 91, 96, 149 Slack period, 168 Slack variable, 265 Slack vector, 261, 265 Slave computer signal, 320 Smoothening, 310 Smell,79 Soaking pit, 91, 149 Soda ash, 13 Software, 4, 343, 366
Software implication, 150 Software package, 374 Software specification, 190 Solenoid-operated valve, 89 Solid(s)-charging, 87 Solid-state circuitry, 26 Solid-state devices, 374 Sophisticated diagnostics, 8 Sorting primary, 236 Sorting operation, 130 Sorter-reader, 238 Species-formation, 235 Spectrophotometer, 85 Speculation, 238 Spinal cord, 33, 80 Square root, 78 Staff routers, 216 Standard cost prices, 55 Standard elements, 11 Standard product cost, 123 Standard quality, 141 Standard reliability, 287, 288 Standard unit cost, 244 Standby computer 206 Standby emergency, 68 Standby device, 72 Standby oscillator, 227 Star basic, 183 essential, 183 Station activator, 191 Statistical package, 373 Statistical process, 71 Statistical quality control, 62, 114, ... Statistical research, 182 Statistical sample, 62 Statistical solver, 372 Statistical treatment, 193 Statistics, 200 economic, 39 mathematical, 247, 52 Status alteration, 370 Status display, 7 Steady-state optimization, 253 Steam electric generating, 3 Steel industry, 85 Steel process, 91 Steel works, 86, 96 Steering control, 227 Steering correction, 227 Steinmetz, 72
271
272 Stereognosis, 79 Stimulus response, 8 Stochastic, 71, 187, 330 Stochastic distribution, 313 Stochastic matrix, 188 Stochastic process, 224 Stochastic processing, 213 Stochastic searching, 206, 211 Stock-brokerage, 136 Stone-to-ore, 92 Stop minus, 207 Stop payment, 234 Stop plus, 207 Storage, 203 auxiliary, 12, 210 control, 318 core, ns, 200 in transit, 135 journal, 135 process, 135 Storage blocks, 315, 319 Storage circuits, 220 Storage media, 210 Storage register, 323 Stored information, 191 Strain gauges, 83 Strain gauge readings, 90 Stratification, 81, 60 Strip printer, 66 Stripping, 9S Stripper bay, 96 Structural characteristics, 230 Subloop,68 Subset, 179, 182, 183, 199 Subsidiary computer, 169 Subsonic, 262 Subtraction, 160 Subtraetor, 78 Subway (problem), 213, 224 Successful operating time, 298 Successor, 352 SUMER, 65, 367, 374, ... SUMER assembler, 365 SUMER executive, 364, 365, 366 SUMER language, 370 SUMER macro operation, 364 SUMER programming system, 365 SUMER tester, 365 Sumerians, 158, 160 Summarized entries, 234 Superheater, 72 Superheater outlet, 70
INDEX Supersonic, 262 Supersonic aircraft, 198, 199 Supporting circuitry, 232 Surge tank, 89 Sweep voltage, III Switching, 74, 230 Switching mechanism, 230 Switching network, 181 Switching speed, 231 Symbolic language, 173 Symbolic program tape, 365 Syllogism, 199,234 Sympathetic, 33 Sympathetic division, 33 Sympathetic impulses, 33 Sympathetic subsystem, 33 Synapse transmittal, 231 Synchonizer, 86, 323 Synchronous detection, 199 Synchronous motor, 90 Synthetical alcohol, 13 Synthetical rubber, 13 Syrakusia, 65 Syrian, 161 Syringe, 65 System, 68, 72, 262 accounting, 63, 96 anchor, 18 arithmetic, 158 autocontrol, 226 automatic data, 54 autonomous, 33 binary, 167 control, 9, 37, 57,60, 101,313,342,68 data control, 5, 10, 112 data processing, 67, 102, 118 digital control, 13,40,72,319,374,94 electronic information, 138 established, 268 feedback, 34, 16 hardware, 343 indication, 224 information, 204 input/throughput, 152 instrument, 189 interlacing, 309 man-machine, 35 man-made, 2,6,66,224,232, 237,282 mathematical, 73 mega, 6 memory, 153 multicomputer, 31, 46
  natural, 228, 231
  numeric, 190
  numerical, 157, 159, 162
  off-line, 343
  on-line, 52, 64, 96
  operating, 5
  operational real-time, 26
  order, 88
  physical, 73
  piping, 86
  place value, 161
  process-control, 322
  processing, 335
  programming, 209
  railroad, 214
  real-time, 116, 190, 278, 330, 337, ..., 160
  real-time data-control, 58
  reservation, 169, 221
  retrieval, 206
  sexagesimal, 158, 160
  sub, 191
  technological, 228
  teletypewriter, 163
  time-sharing, 37
  time-varying, 204
  two-valued, 173
Systematic maintenance, 85
Systemeter, 8
Systems analysis, 3, 4, 20, 57, 102, 122
Systems analyst, 22, 60, 66, 67, 72, 54
Systems approach, 57, 62
Systems behavior, 73, 268
Systems concept, 8, 11, 13, 198
Systems controller, 90
Systems cost determination, 45
Systems design, 45, 229
Systems design phase, 44
Systems duplication, 309
Systems dynamics, 14
Systems engineer, 68, 69, 72, 152
Systems engineering, 69
Systems evaluation, 45
System(s) evolution, 13
Systems function, 57
Systems guidance, 249
Systems interlock, 162
Systems logs, 355
Systems manager, 143
System(s) meter, 53
Systems operation, 104
Systems optimization, 354
Systems performance, 10
Systems philosophy, 374
Systems programmer, 357
Systems reliability, 278
Systems requirement, 160
Systems stability, 227
Systems studies, 3, 91
Systems supervisor, 356
Systems tape, 210
Systems work, 67, 70, 121
Systems value, 298
Table-lookup routine, 213
Tabulating cards, 136
Tactile discrimination, 79
Tactile sensitivity, 79
Tally information, 139
Tally service, 142
Tank gauges, 5
Tape
  log, 211
  magnetic, 118, 124, 211, 90, 109, 125, 129, 130
  master, 110, 129
  out-of-balance, 239
  paper, 124, 75, 129, 130
  punched, 21, 88, 125
  systems, 210
  teletype shipment, 135
  transaction, 239
Tape logs, 355
Tape-to-tape, 35
Tapping, 87
  furnace, 99
Target velocity, 198
Taste, 79
Taste buds, 80
Taxinomy, 192
Technical breakthrough, 246
Teeming, 99
Telecommunication, 32, 194
Telemetering, 89
  radio, 207
Telephone, 125
Teleprinter, 108, 223
Telescope positioning, 13
Telesystem, 137, 139
Teletransmission, 6, 247
Teletransmission aspects, 167
Teletransmission media, 88
Teletransmission network, 234
Teletransmission subsystem, 42
Teletype, 125
Teletype order writer, 126
Teletype room, 128
Teletype shipment tapes, 135
Teletypewriter system, 163
Television program switching, 13
Teller-proof function, 229
Teller unit, 234
Temperature, 28, 79, 90
  air, 189, 190
  lubrication oil, 73
  water, 189
Tendon endings, 79
Terminal coupling, 168
Terminal operator, 10
Terminal sets, 161
Terminate pulse, 348
Ternary, 162, 163, 168, 169
Thermocouple, 16, 83, 71, 79
Thermocouple junction, 81
Thermodynamic transfer, 255
Test(ing)
  boiler, 78
  independent, 138
  mathematical, 117
  unrelated, 138
Test flight, 207
Thin film, 40
Thinking, 236
Thinking parts, 230
Thirst, 79
Throughput, 199, 236
Throughput dispatcher, 323
Throughput operation, 163
Ticket voucher, 62
Ticketing installation, 168
Time interval, 5
Time lag, 218
Timeliness, 52
Time-out, 318
Time-share, 13
Time-shared basis, 74, 185
Time-sharing, 5, 6, 12, 94, 269, 307, 311, 344, 358
Time-shar(ing) basis, 88
Time-sharing function, 357
Time-sharing system, 37
Time-scheduling, 233
Tin, 13
  unflowed, 151
Tin-plate orders, 151
Tinning
  electrolytic, 151
Tinning lines, 13
Topology, 180, 183
Top pressure, 92
Torque, 101
Total digital automation, 218
Total transit, 214
Touch, 79
Traced-in, 195
Traced-out, 195
Tracking operation, 93
Trade-off analysis, 148
Traffic control, 13
  air traffic control, 1
Traffic controller, 181
  air traffic controller, 182
Traffic notation, 185
Traffic performance, 174
Traffic problem, 213
Traffic reporting, 86
Traffic routing, 128
Train graphs, 215
Train formation diagrams, 216
Train information, 222
Train running diagrams, 216
Train traffic, 214
Trance, 322
Transaction
  financial, 230
  individual, 241
  low volume, 231
Transaction code, 238, 239
Transaction journal, 236, 240
Transaction tape, 239
Transducer, 14, 35
Transfer of storage, 318
Transit number, 233
Transition, 4
Transitional path, 2
Transmission, 10
  accurate, 166
  data, 14, 318, 112, 186, 217, 232
Transmission station, 223
Transmission time, 348
Transmittal, 203
  synapse, 231
Transport plans, 215
Transistor emitter, 105
Transonic, 262
Trans-shipment, 215
Trapping, 319
Trapping model, 17
Tree-like, 197, 198
Trend recording, 75
Trial-and-error procedure, 150
Tributary channel, 141
Trimming, 89
Trunk channel, 141
Truth table, 178, 179, 183
Tube
  cathode ray, 112, 202
  display, 275
  dip, 89
  electronic beam switching, 109
  monoscope, 113
  vacuum, 230, 232
Turbine, 68
Turbine panel, 64
Twin computer center, 137
Two-way communication, 216
Typewriter, 66
  logging, 312, 74
Ultrasonic detectors, 228
Uncollected fund (table), 242, 244
Union, 176
Unit
  astronaut maneuvering, 270
  car, 213
  command, 236
  data gathering, 133, 134, 136
  generating, 3
  inscribing, 236
  teller, 234
Unit application, 356
Unit encoder, 229
Unit fraction, 159
Universal, 176, 177
Universal credit card, 230
Updating run, 234
Up-to-date availability, 170
Utility, 281, 296, 297, 300, 301, 302
User evaluation, 145
Value, 281, 298, 299, 300, 301, 302
  real, 301
  systems, 298
Value error, 61
Variable, 315, 317, 319
  artificial, 265
  charge, 92
  control, 92
  cost, 173
  independent, 24, 92
  input, 93
  key, 209
  logged, 14
  operating, 92
  slack, 265
Variable information, 136
Variable length, 318, 345, 372
Variable word length, 346
Variable-size character, 346
Vasoconstrictor fibers, 33
Vector, 265
  artificial, 261, 265
  slack, 261, 265
Velocity, 90, 101
  target, 198
Velocity profiling, 19
Venetian treatise, 162
Vent loss, 89
Verification program, 56
Vertex, 181, 183
  essential, 183
Vertical wedge, 160
Vibration frequency, 86
Vibrator
  bistable, 191
Vibratory sensitivity, 79
Video signal, 224
Visceral organs, 79
Vision, 79
Visual display, 66, 75, 77
Visual warning, 74
Vinyl chloride, 13
Voice drum, 192
Voice outlet, 191
Volatility data, 20
Voltage, 101, 227
  binary trial, 108
  signal, 267
  sweep, 111
  trial, 109
Voltage encoder, 78
Volume, 53
Voucher, 55, 56, 62
  ticket, 62
  warehouse, 55
Voucher register, 56
Wage incentives, 165
Waiting lines, 40
Wait(ing) list(ed) passenger, 172, 221, 222
Warehouse processing, 55
Warehouse summary, 111
Warehouse voucher, 55
Warning
  audible, 74
  visual, 74
Warning lights, 226
Water drive, 19
Wave polarization, 208
Waybill, 223
Wayside control, 224
Wearing out, 246
Weather data reduction, 13
Weather forecasting, 188
Weather prediction, 186
Weather recognition, 14
Weather research, 188
Weather zone, 195
Weaver, 208
Well meter, 53
White collar, 61
Wind direction, 189, 190
Wind rate, 92
Wind speed, 189, 190
Wind tunnel, 262
Wind tunnel studies, 14
Word
  channel, 318
  data, 318
  limit, 38, 318
  process, 318
  shift, 372
Word register, 48
Work
  procedure, 121
  systems, 67, 70, 121
Work load
  simulated, 181
Worker, 11
Yard
  automatic hump yard, 219
  marshalling, 219
Yaw, 207, 206
Yield, 1, 5
Zero, 160
Zero offset, 78
Zero proof, 237