CAPE Edited by Luis Puigjaner and Georges Heyen
Related Titles Kai Sundmacher, Achim Kienle, Andreas Seidel-Morgenstern (Eds.)
Integrated Chemical Processes Synthesis, Operation, Analysis, and Control 2005
ISBN 3-527-30831-8
Kai Sundmacher, Achim Kienle (Eds.)
Reactive Distillation Status and Future Directions 2002 ISBN 3-527-30579-3
Frerich Johannes Keil (Ed.)
Modeling of Process Intensification 2006 ISBN 3-527-31143-2
Ulrich Brockel, Willi Meier, Gerhard Wagner
Best Practice in Product Design and Engineering 2007 ISBN 3-527-31529-2
Ullmann’s Processes and Process Engineering 3 Volumes 2004 ISBN 3-527-31096-7
CAPE Computer Aided Process and Product Engineering Edited by Luis Puigjaner and Georges Heyen
WILEY-VCH Verlag GmbH & Co. KGaA
The Editors
Professor Dr. Luis Puigjaner, Universitat Politècnica de Catalunya, Chemical Engineering Department, ESTEIB, Av. Diagonal 647, 08028 Barcelona, Spain

Professor Dr. Georges Heyen, Laboratoire d'Analyse et Synthèse des Systèmes Chimiques, Université de Liège, Sart Tilman B6A, 4000 Liège, Belgium
All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Library of Congress Card No.: Applied for

British Library Cataloging-in-Publication Data: A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek: The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.
© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form - nor transmitted or translated into machine language - without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Typesetting: Mitterweger & Partner, Plankstadt
Printing: betz-druck GmbH, Darmstadt
Binding: Litges & Dopf GmbH, Heppenheim
Cover Design: Mattes + Traut Werbeagentur GmbH, Darmstadt

Printed in the Federal Republic of Germany
Printed on acid-free paper

ISBN-13: 978-3-527-30804-0
ISBN-10: 3-527-30804-0
I"
Table of Contents

Preface XIII
Foreword XXI
List of Contributors XXV
Volume 1

Introduction 1

Section 1 Computer-aided Modeling and Simulation

1 Large-Scale Algebraic Systems 15
Guido Buzzi Ferraris and Davide Manca
1.1 Introduction 15
1.2 Convergence Tests 17
1.3 Substitution Methods 20
1.4 Gradient Method (Steepest Descent) 20
1.5 Newton's Method 21
1.6 Modified Newton's Methods 24
1.7 Quasi-Newton Methods 27
1.8 Large and Sparse Systems 28
1.9 Stop Criteria 30
1.10 Bounds, Constraints, and Discontinuities 30
1.11 Continuation Methods 31

2 Distributed Dynamic Models and Computational Fluid Dynamics 35
Young-il Lim and Sten Bay Jørgensen
2.1 Introduction 35
2.2 Partial Differential Equations 35
2.3 Method of Lines 40
2.4 Fully Discretized Method 58
2.5 Advanced Numerical Methods 68
2.6 Applications 75
2.7 Process Model and Computational Fluid Dynamics 98
2.8 Discussion and Conclusion 102
3 Molecular Modeling for Physical Property Prediction 107
Vincent Gerbaud and Xavier Joulia
3.1 Introduction 107
3.2 What is Molecular Modeling? 108
3.3 Statistical Thermodynamic Background 112
3.4 Numerical Sampling Techniques 115
3.5 Interaction Energy 121
3.6 Running the Simulations 124
3.7 Applications 125
3.8 Conclusions 132
4 Modeling Frameworks of Complex Separation Systems 137
Michael C. Georgiadis, Eustathios S. Kikkinides, and Margaritis Kostoglou
4.1 Introduction 137
4.2 A Modeling Framework for Adsorption-Diffusion-based Gas Separation Processes 138
4.3 Modeling of PSA Processes in gPROMS 148
4.4 Efficient Modeling of Crystallization Processes 149
4.5 Modeling of Grinding Processes 160
4.6 Concluding Remarks 166

5 Model Tuning, Discrimination, and Verification 171
Katalin M. Hangos and Rozalia Lakner
5.1 Introduction 171
5.2 The Components and Structure of Process Models 171
5.3 Model Discrimination: Model Comparison and Model Transformations 174
5.4 Model Tuning 179
5.5 Model Verification 183

6 Multiscale Process Modeling 189
Ian T. Cameron, Gordon D. Ingram, and Katalin M. Hangos
6.1 Introduction 189
6.2 Multiscale Nature of Process and Product Engineering 189
6.3 Modeling in Multiscale Systems 193
6.4 Multiscale Model Integration and Solution 203
6.5 Future Challenges 218

7 Towards Understanding the Role and Function of Regulatory Networks in Microorganisms 223
Krist V. Gernaey, Morten Lind, and Sten Bay Jørgensen
7.1 Introduction 223
7.2 Central Dogma of Biology 228
7.3 Complexity of Regulatory Networks 229
7.4 Methods for Mapping the Complexity of Regulatory Networks 236
7.5 Towards Understanding the Complexity of Microbial Systems 247
7.6 Discussion and Conclusions 259
Section 2 Computer-aided Process and Product Design

1 Synthesis of Separation Processes 269
Petros Proios, Michael C. Georgiadis, and Efstratios N. Pistikopoulos
1.1 Introduction 269
1.2 Synthesis of Simple Distillation Column Sequences 272
1.3 Synthesis of Heat-integrated Distillation Column Sequences 279
1.4 Synthesis of Complex Distillation Column Sequences 285
1.5 Conclusions 292

2 Process Intensification 297
Patrick Linke, Antonis Kokossis, and Alberto Alva-Argaez
2.1 Introduction 297
2.2 Process Intensification Technologies 299
2.3 Computer-Aided Methods for Process Intensification 303
2.4 Concluding Remarks 324

3 Computer-aided Integration of Utility Systems 327
François Maréchal and Boris Kalitventzeff
3.1 Introduction 327
3.2 Methodology for Designing Integrated Utility Systems 330
3.3 The Energy Conversion Technologies Database 334
3.4 Graphical Representations 339
3.5 Solving the Energy Conversion Problem Using Mathematical Programming 349
3.6 Solving Multiperiod Problems 367
3.7 Example 369
3.8 Conclusions 379

4 Equipment and Process Design 383
I. David L. Bogle and B. Erik Ydstie
4.1 Introduction 384
4.2 The Structure of Process Models 384
4.3 Model Development 390
4.4 Computer-aided Process Modeling and Design Tools 390
4.5 Introduction to the Case Studies 393
4.6 Conclusions 416

5 Product Development 419
Andrzej Kraslawski
5.1 Background 419
5.2 Definition Phase 424
5.3 Product Design 431
5.4 Summary 439
Volume 2
Section 3 Computer-aided Process Operation

1 Resource Planning 447
Michael C. Georgiadis and Panagiotis Tsiakis
1.1 Introduction 447
1.2 Planning in the Process Industries 448
1.3 Planning for New Product Development 460
1.4 Tactical Planning 462
1.5 Resource Planning in the Power Market and Construction Projects 465
1.6 Solution Approaches to the Planning Problem 469
1.7 Software Tools for the Resource Planning Problem 472
1.8 Conclusions 474

2 Production Scheduling 481
Nilay Shah
2.1 Introduction 481
2.2 The Single-Site Production Scheduling Problem 483
2.3 Heuristics/Metaheuristics: Specific Processes 487
2.4 Heuristics/Metaheuristics: General Processes 488
2.5 Mathematical Programming: Specific Processes 489
2.6 Mathematical Programming: Multipurpose Plants 493
2.7 Hybrid Solution Approaches 500
2.8 Combined Scheduling and Process Operation 501
2.9 Uncertainty in Planning and Scheduling 502
2.10 Industrial Applications of Planning and Scheduling 506
2.11 New Application Domains 508
2.12 Conclusions and Future Challenges 509

3 Process Monitoring and Data Reconciliation 517
Georges Heyen and Boris Kalitventzeff
3.1 Introduction 517
3.2 Introductory Concepts for Validation of Plant Data 518
3.3 Formulation 520
3.4 Software Solution 527
3.5 Integration in the Process Decision Chain 527
3.6 Optimal Design of Measurement System 528
3.7 An Example 534
3.8 Conclusions 538

4 Model-based Control 541
Sebastian Engell, Gregor Fernholz, Weihua Gao, and Abdelaziz Toumi
4.1 Introduction 541
4.2 NMPC Applied to a Semibatch Reactive Distillation Process 543
4.3 Control of Batch Chromatography Using Online Model-based Optimization 552
4.4 Control by Measurement-based Online Optimization 556
4.5 Nonlinear Model-based Control of a Reactive Simulated Moving Bed (SMB) Process 565
4.6 Conclusions 572

5 Real Time Optimization 577
Vivek Dua, John D. Perkins, and Efstratios N. Pistikopoulos
5.1 Introduction 577
5.2 Parametric Programming 578
5.3 Parametric Control 581
5.4 Hybrid Systems 584
5.5 Concluding Remarks 589

6 Batch and Hybrid Processes 591
Luis Puigjaner and Javier Romero
6.1 Introduction 591
6.2 The Flexible Recipe Concept 597
6.3 The Flexible Recipe Model 601
6.4 Flexible Recipe Model for Recipe Initialization 602
6.5 Flexible Recipe Model for Recipe Correction 610
6.6 Final Considerations 617

7 Supply Chain Management and Optimization 621
Lazaros G. Papageorgiou
7.1 Introduction 621
7.2 Key Features of Supply Chain Management 623
7.3 Supply Chain Design and Planning 624
7.4 Analysis of Supply Chain Policies 630
7.5 Multienterprise Supply Chains 635
7.6 Software Tools for Supply Chain Management 637
7.7 Future Challenges 639
Section 4 Computer-integrated Approaches in CAPE

1 Integrated Chemical Product-Process Design: CAPE Perspectives 647
Rafiqul Gani
1.1 Introduction 647
1.2 Design Problem Formulations 648
1.3 Issues and Needs 654
1.4 Framework for Integrated Approach 658
1.5 Conclusion 663

2 Modeling in the Process Life Cycle 667
Ian T. Cameron and Robert B. Newell
2.1 Cradle-to-the-Grave Process and Product Engineering 667
2.2 Industrial Practice and Demands in Life-Cycle Modeling 675
2.3 Applications of Modeling in the Process Life Cycle: Some Case Studies 681
2.4 Challenges in Modeling Through the Life Cycle 689

3 Integration in Supply Chain Management 695
Luis Puigjaner and Antonio Espuña
3.1 Introduction 695
3.2 Current State of Supply Chain Management Integration 697
3.3 Agent-based Supply Chain Management Systems 702
3.4 Environmental Module 707
3.5 Financial Module 711
3.6 Multiagent Architecture Implementation and Demonstration 718
3.7 Concluding Remarks 727

4 Databases in the Field of Thermophysical Properties in Chemical Engineering 731
Richard Sass
4.1 Introduction 731
4.2 Overview of the Thermophysical Properties Needed for CAPE Calculations 732
4.3 Sources of Thermophysical Data 733
4.4 Examples of Databases for Thermophysical Properties 733
4.5 Special Case and New Challenge: Data of Electrolyte Solutions 740
4.6 Examples of Databases with Properties of Electrolyte Solutions 741
4.7 A Glance at the Future of the Properties Databases 744

5 Emergent Standards 747
Jean-Pierre Belaud and Bertrand Braunschweig
5.1 Introduction 747
5.2 Current CAPE Standards 751
5.3 Emergent Information Technology Standards 755
5.4 Conclusion (Economic, Organizational, Technical, QA) 765
Section 5 Applications

1 Integrated Computer-aided Methods and Tools as Educational Modules 773
Rafiqul Gani and Jens Abildskov
1.1 Introduction 773
1.2 Integrated Approach to CAPE Educational Modules 776
1.3 Conclusion 797

2 Data Validation: a Technology for Intelligent Manufacturing 799
Boris Kalitventzeff, Georges Heyen, and Miguel Mateus
2.1 Introduction 799
2.2 Basic Aspects of Validation: Data Reconciliation 799
2.3 Specific Assets of Information Validation 806
2.4 Advanced Features of Validation Technology 815
2.5 Applications 821
2.6 Conclusion 826

3 Facing Uncertainty in Demand by Cost-effective Manufacturing Flexibility 827
Petra Heijnen and Johan Grievink
3.1 Introduction 827
3.2 The Production Planning Problem 829
3.3 Mathematical Description of the Planning Problem 830
3.4 Modeling the Profit of the Production Planning 832
3.5 Modeling the Objective Functions 836
3.6 Solving the Optimization Problem 840
3.7 Sensitivity Analysis of the Optimization 845
3.8 Implementation of the Optimization of the Production Planning 849
3.9 Conclusions and Final Remarks 851

Authors' Index 855
Subject Index 857
Preface

Computer Aided Process and Product Engineering (CAPE): Its Pivotal Role for the Future of Chemical and Process Engineering
Chemical and related industries are at the heart of a great number of scientific and technological challenges involving computer-aided process and product engineering. Chemical and related industries, including process industries such as petroleum, pharmaceutical and health, agriculture and food, environment, textile, iron and steel, bituminous, building materials, glass, surfactants, cosmetics and perfume, and electronics, are evolving considerably due to unprecedented market demands and constraints stemming from public concern over environmental and safety issues. To respond to these demands, the following challenges faced by these process industries involve complex systems, both at the process scale and at the product scale:

1. Processes are no longer selected on a basis of economic exploitation alone. Rather, compensation resulting from the increased selectivity and savings linked to the process itself is sought after. Innovative processes for the production of commodity and intermediate products need to be researched, where patents usually do not concern the products but the processes. The problem becomes more and more complex as factors such as safety, health, environmental aspects including nonpolluting technologies, reduction of raw materials and energy losses, and product/by-product recyclability are considered. The industry, with large plants, must supply bulk products in large volumes, and the customer will buy a process that is nonpolluting and perfectly safe, requiring computer-aided process engineering (CAPE).

2. New specialities, active material chemistry, and related industries involve the chemistry/biology interface of agriculture, food, and health industries. Similarly, they involve upgrading and conversion of petroleum feedstock and intermediates, and conversion of coal-derived chemicals or synthesis gas into fuels, hydrocarbons or oxygenates. This progression from traditional chemistry is driven by the new market objectives where sales and competitiveness are dominated by the end-use properties of a product as well as its quality. It is important to underline that today, 60% of all products sold by chemical companies are crystalline, polymer, or
amorphous solids. These complex and structured materials have a clearly defined physical shape in order to meet the designed and the desired quality standards. This also applies to plastics, ceramics, soft solids, paste-like products, and emulsions. New developments require increasingly specialized materials, active compounds, and special-effect chemicals. The chemicals are much more complex in terms of molecular structure than traditional industrial chemicals. Control of the end-use property (size, shape, color, aesthetics, chemical and biological stability, degradability, therapeutic activity, solubility, touch, handling, cohesion, rugosity, taste, succulence, sensory properties, etc.), expertise in the design of the process, continual adjustments to meet changing demands, and speed to react to market conditions are the dominant elements. For these specialities and active materials the client buys the product that is the most efficient and first on the market. He will have to pay high prices and expect a large benefit from these short life-time and high-margin products, requiring most often computer-aided process and product engineering. The triplet molecular processes-product-process engineering (3PE) approach requires the tools of CAPE.

Today, chemical and process engineering are concerned with understanding and developing systematic procedures for the design and optimal operation of process systems, ranging from nano- and microsystems to industrial-scale continuous and batch processes: this is illustrated by the chemical supply chain concept. In the supply chain, it should be emphasized that product quality is determined at the micro- and nanolevel and that a product with a desired property must be investigated for both structure and function. A comprehension of the structure-property relationship at the molecular (e.g., surface physics and chemistry) and microscopic level is required. The key to success is to obtain the desired end-use property of a product, and thus control product quality by controlling complexity in the microstructure formation. This will help to make the leap from the nanolevel to the process level. Moreover, most chemical and biological processes are nonlinear, belonging to the so-called complex systems for which multiscale structure is common nature. Therefore, an integrated system approach for a multidisciplinary and multiscale modeling of complex, simultaneous, and often coupled momentum, heat and mass transfer processes is required:

- Different time scales (10⁻¹⁵ to 10⁸ s) are used, from femto- and picoseconds for the motion of atoms in a molecule during a chemical reaction, nanoseconds for molecular vibrations, hours for operating industrial processes, and centuries for the destruction of pollutants in the environment.
- Different length scales (10⁻⁸ to 10⁶ m) are used, from nanoscale for molecular kinetic processes; microscale for bubbles, droplets, particles, and eddies; mesoscale for unit operations dealing with reactors, columns, and exchangers; macroscale for production units; and megascale for the environment and dispersion of emissions (see the following figure).
Therefore, organizing scales and complexity levels in process engineering is necessary in order to understand and describe the events at the nano- and microscales, and to better convert molecules into useful products at the process scales. This multiscale approach is now also encountered in biotechnology, bioprocesses, and product engineering, to manufacture products and to better understand and control biological tools such as enzymes and micro-organisms. In such cases, it is necessary to organize the levels of increasing complexity from the gene, with known properties and structures, up to the product-process relation, by modeling coupled mechanisms and processes at different length scales: the nanoscale is used for molecular and genomic processes and metabolic transformations; the pico- and microscales are used for enzymes and integrated enzymatic systems, and biocatalysts and active aggregates; the mesoscale is used for bioreactors, exchangers, and separators; and the macro- and megascales are used for production units and interactions with the biosphere. Thus, organizing levels of complexity at different length scales, associated with an integrated approach to phenomena and simultaneous and coupled processes, is at the heart of the new view of biochemical engineering (see next figure). Indeed this capability offers the opportunity to apply genetic-level controls to make better biocatalysts and novel products, or to develop new drugs, new therapies, and biomimetic devices. Understanding an enzyme at the molecular level means that it may be tailored to produce a particular end-product. Also, the ability to think across length scales makes chemical engineers particularly well poised to elucidate the mechanistic understanding of molecular and cell biology and its large-scale manifestation, i.e., decoding communications between cells in the immune system.
These examples are at the center of the new view of chemical and process engineering: organizing levels of complexity, by translating molecular processes into phenomenological macroscopic laws to create and control the required end-use properties and functionality of products manufactured by a continuous or batch process. I have defined this approach as the triplet molecular processes-product-process engineering (3PE): an integrated system approach to complex pluridisciplinary nonlinear and nonequilibrium processes and phenomena occurring on different length and time scales, involving a strong multidisciplinary collaboration between physicists, chemists, biologists, mathematicians, computer-aided specialists, and instrumentation specialists. Today's tools are wide-ranging for the success of chemical and process engineering in modeling complex systems at the different scales encountered in process and product engineering. It is possible to understand and describe events at the nano- and microscale in order to convert molecules into useful products at the process and unit scales thanks to significant simultaneous breakthroughs in three areas: molecular modeling (both theory and computer simulation), scientific instrumentation and noninvasive measurement techniques, and powerful computational tools and capabilities for information collection and processing. At the nanoscale, molecular modeling assists in maintaining better control of surface states of catalysts and activators, obtaining increased selectivity and facilitating asymmetrical synthesis, e.g., chiral technologies. Molecular modeling also assists in explaining the relationship between structure and activity at the molecular scale in order to control crystallisation, coating and agglomeration kinetics.
At the microscale, computational chemistry is very useful for understanding complex media and all systems whose properties are controlled by rheology and interfacial phenomena. At the meso- and macroscales, computer fluid dynamics (CFD) is required for scaling up new equipment, or for the design of new operation modes for existing equipment such as reversed flow, cyclic processes, and unsteady operations. It is especially useful when rendering multifunctional processes with higher yields in chemical or biological reactions coupled with separation or heat transfer. It also provides a considerable economic benefit. At the production unit and multiproduct plant scale, dynamic simulation and computer tools for simulation of entire processes are needed more and more. These tools analyze the operating conditions of each piece of equipment in order to simulate the whole process in terms of time and energy costs. New performances (product quality and final cost) resulting from any change due to a blocking step or a bottleneck in the supply chain will be predicted in a few seconds. It is clear that such computer simulations enable the design of individual steps and the structure of the whole process at the megascale, and place individual processes in the overall context of production, emphasizing the role and the place of computer assistance in process and product engineering.

The previous considerations on the necessary multidisciplinary and multiscale integrated approach for managing complex systems encountered by chemical and related process industries in order to meet market demands led to the proposal of four main parallel objectives involving the tools of CAPE. The first objective concerns a total multiscale control of the process to increase selectivity and productivity by the nanotailoring of materials. The nanotailoring can be produced with controlled structure, or by supplying the process with a local "informed" flux of energy and materials, or by increasing information transfer in the reverse direction, from process to man, requiring close computer control, relevant models, and arrays of local sensors and actuators.

The second objective concerns process intensification by the design of novel equipment based on scientific principles, new operating modes, and new methods of production. Process intensification with multifunctional equipment that couples or uncouples elementary processes (transfer-reaction-separation), involving the reduction in the number of equipment units, leads to reduced investment costs and significant energy recovery or savings. Cost reductions of between 10% and 20% are obtained by optimizing the process. But the use of such hybrid technologies is limited by the resulting problems with control and simulation, leading to interesting but challenging problems in dynamic modeling, design, operation and strong nonlinear control. Also, process intensification using microengineering and microtechnology will be used more and more for high-throughput and formulation screening. Indeed microengineered reactors have some unique characteristics that create the potential for high-performance chemicals and information processing on complex systems. Moreover, scale-up to production by replication of microreactor units used in the laboratory eliminates costly redesign and pilot plant experiments, thus accelerating the transfer from laboratory to commercial-scale production.
The third objective concerns the extension of chemical engineering methodology to product-focussed design and engineering using the multiscale modeling of the above-mentioned approach, 3PE. Indeed, to be able to design and control the product quality of structured materials, and make the leap from the nanolevel to the process level, chemical and process scientists and engineers face many challenges in fundamental concepts (structure-activity relationships at the molecular level, interfacial phenomena, adhesive forces, molecular modeling, equilibria, kinetics, and product characterization techniques); in product design (nucleation, growth, internal structure, stabilization, additives); in process integration (simulation and design tools based on population balances); and in process control (sensors and dynamic models). It should be underlined that much progress has been made in product-oriented engineering and in process control using the scientific methods of chemical engineering. The methods include examination of thermodynamic equilibrium states, analysis of transport processes and kinetics when they are separate and linked by means of models with or without the help of molecular simulation, and by means of computer tools of simulation, modeling and extrapolation at different scales for the whole supply chain up to the laboratory scale. But how can operations be scaled up from laboratory to plant? Will the same product be obtained and will its properties be preserved? What is the role of the equipment design in determining product properties?

This leads to the fourth main objective, which is to implement the multiscale application of computational chemical engineering modeling and simulation to real-life situations, from the molecular scale to the overall production scale, in order to understand how phenomena at a smaller length scale relate to properties and behavior at a longer length scale. The long-term challenge is to combine the thermodynamics and physics of local structure-forming processes like network formation, phase separation, agglomeration, nucleation, crystallization, sintering, etc., with multiphase computer fluid dynamics. Indeed, through the interplay of molecular theory, simulation and experimental measurements a better quantitative understanding of structure-property relations evolves, which, when coupled with macroscopic chemical engineering science, forms the basis for new materials and process design (CAPE). Turning to the macroscopic scale, dynamic process modeling and process synthesis are also increasingly developed. Moreover, integration and opening of modeling and event-driven simulation environments in response to the current demand for diverse and more complex models in process engineering is currently taking a more important place. The aim is to promote the adoption of a standard of communication between simulation systems at any time and length-scale level (thermodynamics, unit operations, numerical utilities for dynamic, static, batch simulations, fluid dynamics, process synthesis, energy integration, process control) in order to simulate processes and allow the customers to integrate the information from any simulator into another one. Thus expanding and developing interface specification standards to ensure interoperability of CAPE software components justifies the creation of a standardization body (CAPE-OPEN Laboratories Network, CO-LaN) to maintain and disseminate the software standards in the CAPE domain that have been developed in several international projects.
The Present CAPE Book: a Vade Mecum in Process Systems Engineering
It is clear, and I have shown, that chemical and process industries have to overcome challenges linked to this complexity. They need to master phenomena in order to produce products "first on the market" with "zero pollution, zero accident, and zero defects" processes. Therefore, never before have enterprises invested so much in information processing and computer-controlled production, which proved in many cases capable of reducing costs and increasing flexibility more than any other technology in past decades. Moreover, information and communication technology offered a great number of standardized but also specific possibilities of applications and solutions as never before. Therefore, a strategy aiming at strengthening the competitiveness of production should obviously incorporate CAPE as a guideline for the reunion of flexible production, the technical and administrative data processing, and the complete penetration of the enterprise activities with data processing.

Within the European Federation of Chemical Engineering, the CAPE Working Party has been very active in this area since the end of the 1960s, as shown by the success of the ESCAPE series of symposia. The activity of the Working Party is also shown by the publication of this book, which aims to present and review the state-of-the-art and latest developments in process systems engineering. Its contents illustrate the modern-day multidisciplinary and multiscale integrated approach to integrated product-process design and decision support for the complex management of the entire enterprise. It also highlights the use of information technology tools in the development of new products and processes, in the optimal operation of complex and/or new equipment found in the chemical and process industries, and in the complex management of the supply chain. Actually, this book, based on the competences of scientists and engineers confronted with industrial practice, is a reference tool. Its ensuing and clear objectives in the topic of process systems engineering are:

- the necessary multidisciplinary bases required for understanding and modeling the phenomena occurring at the different scales, from the molecular range up to the global supply chain and extended enterprise;
- the experimental and knowledge-based methods and tools available to assist in the conception of new processes and product design, and in the design of plants able to manufacture the products in a competitive and sustainable way;
- the presentation of needed advances to fight the ever-increasing complexity involved within the product-process life cycle;
- some tutorial examples and case studies aiming at state-of-the-art computer-aided tools.
The theoretical and practical aspects of computer-aided process engineering covered in this book, involving computer-aided modeling and simulation, computer-aided process and product design, computer-aided process operation, and computer-integrated approaches in CAPE, should find use in libraries and research facilities and make a direct impact in the chemical and related process industries.
This book, judiciously titled "Computer Aided Process and Product Engineering - CAPE", is a vade mecum in process systems engineering. It is a valuable and indispensable reference for the scientific and industrial community and should be useful for engineers, scientists, professors and students engaged in the CAPE topic. Bravo and many congratulations to our colleagues Prof. Puigjaner and Prof. Heyen for this publication, and to the many authors involved in the CAPE Working Party of the European Federation of Chemical Engineering who have made this vade mecum a reality.

Prof. Dr. Ing. Jean-Claude Charpentier
President of the European Federation of Chemical Engineering
Foreword

The European Working Party on Computer Aided Process Engineering has been an important and highly effective stimulus for and promoter of research and educational advances in process systems engineering for over 40 years. The 1991 redirection of the Working Party from the broad and all-inclusive scope of embracing "the use of computers in chemical engineering", which was its theme for some thirty years, to its present emphasis on product and process engineering has had important consequences. It has reenergized the organization, sharpened its focus and promoted higher levels of technical achievement. Indeed, over the past decade the ESCAPE series of annual conferences sponsored by the Working Party has become a vital world forum for disseminating, discussing and analyzing progress in state-of-the-art methodologies to support product and process design, development, operation and management. This volume represents a well-directed effort by the Working Party to capture the current status of developments in this field and thus to give that field its current definition. To be sure, the volume is an ambitious undertaking because the process systems engineering field has expanded enormously from its traditional primary focus on the design, control, and operations of continuously operated commodity chemical processes and its secondary concern with the design and control of batch unit operations. That expansion has included methodologies to support product design and development, increases in both scope and complexity of the processing systems under consideration, and approaches to quantify the risks resulting from technical and market uncertainties and incorporate risk-reward trade-offs in design and operational decisions. The scope of systems encompassed by process systems engineering now ranges from the molecular, biomolecular and nanoscale to the enterprise-wide arena. The levels of complexity include self-assembly processes at the nanoscale, self-regulating processes at the cellular level, the combination of mechanical, electrical and surface energetic phenomena in heterogeneous particulate systems, the interplay between thermodynamic, reaction and transport phenomena in integrated reaction/separation operations, and even the decentralized and semiautonomous interactions of customers, suppliers, partners, competitors and government regulators at the enterprise level. Certainly, these developments have been greatly facilitated by remarkable advances in computing and information technologies. However, at least as important has been the expanded scope of the models that underpin design and operational decisions as well as key advances in the tools for creating, analyzing, solving and maintaining these models over the life cycle of the associated product/process.
The models of interest are now defined not just in terms of the traditional algebraic and differential equations, but also include systems of partial differential and integral equations, graphs/networks, logical relations/conditions, hybrids of logical conditions/relations and continuous equations, and even object-oriented representations of information, decision and work flows. Has this volume succeeded in addressing the expanded role of models, the thrusts in product design and development, the much enlarged scope and complexity of applications, and the innovative approaches to addressing uncertainty and risk? While it is impossible to address the full scope of these developments within the limited pages of a single volume, the editors and authors, all active contributors of the Working Party, have indeed done remarkably well in capturing and highlighting many of the most important developments.

In Section 1, we find coverage of fundamental issues such as the development of modeling frameworks, model parameter estimation and verification methods, approaches to the treatment of multiscale models, as well as numerical methods for the solution of algebraic, differential and partial differential systems. The applications to particulate-based processes such as crystallization, grinding, and granulation are of continuing special interest. Computational fluid dynamics and molecular modeling tools, which have become integrated into the process systems engineering toolbox, are reviewed, and the state of methods for the modeling and analysis of microorganisms is presented. The computational biology domain is receiving a high level of attention by the systems engineering community and will certainly receive even more extensive coverage in future reviews. The second section, which principally treats process design, reviews current developments in overall process synthesis as well as synthesis of reaction, separation and utility subsystems. The area of process intensification, which seeks to capture the potential synergies from exploiting the complex interactions of reaction-separation phenomena, is noted, discussed and recognized as an important direction for further research in process systems engineering. The third section on process operations covers important developments in the well-established functional levels of the process operations hierarchy: monitoring and data reconciliation, model-based control, real-time optimization, scheduling, planning, and supply chain management. Additionally, issues related to the operation of flexible batch plants are reviewed. Key to progress in developments in scheduling, planning, supply chain and flexible batch plant operations have been advances in the formulation and solution of large-scale mixed integer optimization problems. The importance of these and the need for continuing advances cannot be overemphasized.

The fourth section treats three key integration issues as well as two supporting technology developments. While the basic features of product design are reviewed in Section 3, the progress in meeting the challenges of integrating product and process design is addressed in Section 4. As noted in that chapter, to date most of the progress has been in applications such as structured or formulated products in which the linkage between product and process is very close, but further developments are on the horizon. The modeling technology required to support the product/process life cycle raised in one of the chapters is a key issue facing many industry sectors. This
issue is not yet as intensively addressed in the process systems community as it should be, given its importance in capturing product/process knowledge and managing corporate risk. The chapter on integrated supply chain management at roots deals with strategic and tactical enterprise-wide decisions. Uncertainty and risk are critical components of such decisions requiring more attention and intense future study. The sections on physical property estimation and databases, as well as open standards for CAPE software, discuss components that comprise an essential infrastructure for CAPE developments. The importance of tools for physical property prediction/estimation is evident in domains such as pharmaceutical products, in which the absence of such predictive tools has significantly retarded CAPE efforts in product and process design. The volume concludes with several enlightening case studies spanning the technologies reviewed in the preceding sections that are well chosen to make these technologies, their strengths and weaknesses more concrete. Given the limitations of a single volume, there necessarily are additional topics that will in the future require more intensive review and discussion. These include process intensification research at the micro and even nanoscales. While there have been research on microscale process design and control, the essential complexity of the phenomena that occurs at micro and nanoscales makes work in this area both challenging and of potential high impact. There has been progress in rigorous treatment of the full range of external and internal parameter uncertainties and promising computational methods for generating risk-reward frontiers that deserves notice, including the integration of discrete event simulation and discrete optimization methods. Algorithms and strategies for addressing multistage stochastic decision problems and incorporating the full valuation of the decision flexibility in multistage decision frameworks are receiving increased attention. Finally, large-scale optimization methods for attacking enterprise level decision problems of industrial scope are emerging and will become even more prominent in the near future. In summary, this volume is remarkably thorough in capturing the current state of development of the process systems engineering field and representing its broad scope. It will serve this field well in stimulating further research and in encouraging students to learn and contribute to a vital and growing body of knowledge that has important applications in broad sectors of the chemical, petrochemical, specialty, pharmaceuticals, materials, electronics and consumer products industries. The editors and authors are to be congratulated for a job well done. G. V. Reklaitis
List of Contributors

Jens Abildskov
CAPEC, Department of Chemical Engineering, Søltofts Plads, Building 229, 2800 Kgs. Lyngby, Denmark

Bertrand Braunschweig
Institut Français du Pétrole, Division Technologie, Informatique et Mathématiques Appliquées, 1 and 4 avenue de Bois Préau, 92852 Rueil-Malmaison Cedex, France
Alberto Alva-Argaez
Hankyong National University Department Chemical Engineering Kyonggi-do Anseong 456-749 Korea
Ian T. Cameron
The University of Queensland School of Engineering Division of Chemical Engineering St Lucia QLD 4072 Australia
Jean-Pierre Belaud
National Polytechnic Institution of Higher Learning at Toulouse INPT Department of Process Systems Engineering 118, route de Narbonne 31077 Toulouse Cedex 4 France
Vivek Dua
University College London Department of Chemical Engineering Centre for Process Systems Engineering Torrington Place London WClE 7JE UK
I. David L. Bogle
University College London Centre for Process Systems Engineering Department of Chemical Engineering Torrington Place London WClE 7JE UK
Sebastian Engell
University of Dortmund Department of Biochemical and Chemical Engineering Process Control Laboratory Emil-Figge-Str.70 44221 Dortmund Germany
Antonio Espuña
Universitat Politècnica de Catalunya, Chemical Engineering Department, ESTEIB, Av. Diagonal 647, 08028 Barcelona, Spain

Gregor Fernholz
Process Systems Enterprise Limited, Merlostrasse 12, 50668 Köln, Germany

Guido Buzzi Ferraris
Politecnico di Milano, CMIC Department, Piazza Leonardo da Vinci, 32, 20133 Milano, Italy

Rafiqul Gani
CAPEC, Department of Chemical Engineering, Søltofts Plads, Building 229, 2800 Kgs. Lyngby, Denmark

Weihua Gao
GE Global Research Center, Real Time/Power Controls Laboratory, 1800 Cailun Road, Zhangjiang High-tech Park, Pudong New Area, 201203 Shanghai, P. R. China

Michael C. Georgiadis
Imperial College London, Centre for Process Systems Engineering, Department of Chemical Engineering, Roderic Hill Building, South Kensington Campus, London SW7 2AZ, UK
Vincent Gerbaud
Laboratoire de Génie Chimique, BP 1301, 5 rue Paulin Talabot, 31106 Toulouse Cedex 1, France

Krist V. Gernaey
Technical University of Denmark, Department of Chemical Engineering, Søltofts Plads, Building 229, 2800 Kgs. Lyngby, Denmark

Johan Grievink
Delft University of Technology, Faculty of Applied Sciences, Department of Chemical Technology, Julianalaan 136, 2628 BL Delft, The Netherlands

Katalin M. Hangos
Hungarian Academy of Sciences, Computer and Automation Research Institute, Process Control Research Group, Kende u. 11-13, PO Box 63, 1518 Budapest, Hungary

Petra Heijnen
Delft University of Technology, Department of Technology, Policy and Management, PO Box 5015, 2600 GA Delft, The Netherlands

Georges Heyen
Laboratoire d'Analyse et Synthèse des Systèmes Chimiques, Université de Liège, Allée de la Chimie 3-B6, 4000 Liège, Belgium
Gordon D. Ingram
Margaritis Kostoglou
CSIRO Land and Water Private Bag 5 Wembley WA 6913 Australia
Aristotle University of Thessaloniki Department of Chemical Technology, School of Chemistry Box 116 54124 Thessaloniki Greece
Sten Bay Jørgensen
Technical University of Denmark, Department of Chemical Engineering, Søltofts Plads, Building 229, 2800 Kgs. Lyngby, Denmark

Xavier Joulia
ENSIACET - Laboratoire de Génie Chimique, 117 Route de Narbonne, 31077 Toulouse Cedex 4, France

Andrzej Kraslawski
Lappeenranta University of Technology, Department of Chemical Technology, PO Box 20, 53851 Lappeenranta, Finland

Rozalia Lakner
Department of Computer Science, Pannon University, Egyetem Street 10, PO Box 158, 8200 Veszprém, Hungary
Boris Kalitventzeff
BELSIM s.a., Rue Georges Berotte 29A, 4470 Saint-Georges-sur-Meuse, Belgium

Eustathios S. Kikkinides
University of West Macedonia, School of Engineering and Management of Energy Resources, Sialvera and Bakola Street, 50100 Kozani, Greece

Antonis Kokossis
University of Surrey, Centre for Process and Information Systems Engineering, Guildford, Surrey, GU2 7XH, UK

Young-il Lim
Natural Resources Canada, CETC - Varennes, Energy Technology and Programs Sector, PO Box 27043, Calgary, Alberta T3L 2Y1, Canada

Morten Lind
Technical University of Denmark, Ørsted DTU, Automation, Elektrovej, Building 326, 2800 Kgs. Lyngby, Denmark

Patrick Linke
University of Surrey Centre for Process and Information Systems Engineering Guildford, Surrey, GU2 7XH UK
Davide Manca
Politecnico di Milano, CMIC Department, Piazza Leonardo da Vinci, 32, 20133 Milano, Italy

François Maréchal
Ecole Polytechnique Fédérale de Lausanne, Industrial Energy System Laboratory, LENI-ISE-STI-EPFL, Station 9, 1015 Lausanne, Switzerland

Miguel Mateus
Belsim s.a., Rue Georges Berotte 29A, 4470 Saint-Georges-sur-Meuse, Belgium

Robert B. Newell
Daesim Technologies Pty Ltd, PO Box 309, Toowong QLD 4066, Australia

Lazaros G. Papageorgiou
University College London, Centre for Process Systems Engineering, Department of Chemical Engineering, Torrington Place, London WC1E 7JE, UK

John D. Perkins
University of Manchester Institute of Science and Technology, Sackville Street, PO Box 88, Manchester M60 1QD, UK
Efstratios N. Pistikopoulos
Imperial College London Department of Chemical Engineering Centre for Process Systems Engineering Roderic Hill Building South Kensington Campus London SW7 2AZ UK Petros Proios
Imperial College London Department of Chemical Engineering Centre for Process Systems Engineering Roderic Hill Building South Kensington Campus London SW7 2AZ UK Luis Puigjaner
Universitat Politècnica de Catalunya, Chemical Engineering Department, ESTEIB, Av. Diagonal 647, 08028 Barcelona, Spain

Javier Romero
Universitat Politècnica de Catalunya, Chemical Engineering Department, ESTEIB, Av. Diagonal 647, 08028 Barcelona, Spain

Richard Sass
DECHEMA e.V., Department of Information Systems and Databases, Theodor-Heuss-Allee 25, 60486 Frankfurt am Main, Germany
Nilay Shah
Panagiotis Tsiakis
Imperial College London Department of Chemical Engineering Centre for Process Systems Engineering Roderic Hill Building South Kensington Campus London SW7 2AZ UK
Process Systems Enterprise Ltd., Bridge Studios, 107a Hammersmith Bridge Road, London W6 9DA, UK
Abdelaziz Toumi
Bayer Technology Services GmbH, PMT-AMS-APC, Bld. E41, 51368 Leverkusen, Germany
B. Erik Ydstie
Carnegie Mellon University, Department of Chemical Engineering, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213-3890, USA
Introduction

In 1991 the working party (WP) of the European Federation of Chemical Engineering with the title "The Use of Computers in Chemical Engineering" adopted new terms of reference and a new title: the Working Party on Computer Aided Process Engineering. This decision followed an internal debate on issues involving concepts like computer integrated manufacturing (CIM), computer-aided process operations (CAPE) and computer-aided process design (CAPD). The consensus reached in the new title naturally embraced computer-aided process operations as a subset of CIM, which is focused on the application of computing technology to integrate and facilitate the key technical decision processes which arise in chemical manufacture. Thus, CAPE focuses on algorithms, procedures and frameworks which automate the operating/design decisions for those functions which are automatable, and support those operating decisions for which human intervention is necessary or desirable (Pekny et al. 1991). This restructuring of the WP reflected the actual trends in the process industries. Never before had enterprises invested so much in information processing. Computer-controlled production proved in many cases that it was capable of reducing costs and increasing flexibility far more than any other technology in the past decades. Information and communication technology offered plenty of standardized but also specific possibilities of applications and solutions as never before. Therefore, a management strategy aiming at strengthening the competitiveness of production should obviously incorporate CAPE as a guideline for the reunion of flexible production, the technical and administrative data processing, and the complete penetration of all enterprise activities with data processing (Westkamper 1992).

During this last decade these trends have consolidated and expanded. Thus, CAPE is a network with separate functions which are linked to each other and/or integrated by using common data and information. CAPE systems are used in the development and design, operations planning and production equipment planning departments. Otherwise, order processing is carried out by process control systems (materials resource planning). Both systems deliver data to an information system, which is the basis for the operation of the production with its flexible and control systems. Moreover, as the system boundaries have expanded, CAPE contributions and opportunities are present in the integrated product-process design and decision support for the complex management of the entire enterprise.
This book aims to present and review the state of the art and latest developments in process systems engineering. It seeks to highlight the use of information technology tools in the development of new products and processes, in the optimal operation of complex equipment found in the process industries, and in the complex management of the whole business. The book is intended as a valuable reference to the scientific and industrial community, and should contribute to the progress in computer-aided process and product engineering. This work should be also useful for teachers of postgraduate courses in these areas. Following this introduction, the book consists of 27 chapters organized into five major sections: Section 1 Section 2 Section 3 Section 4 Section 5
Computer-aided Modeling and Simulation, Computer-aided Process and Product Design, Computer-aided Process Operation, Computer-integrated Approaches in CAPE, Applications
Section 1 presents a review on actual trends and shows new advances in mathematical modeling and digital simulation techniques that permit practitioners to solve the complex scenario that describes real engineering systems. Basic techniques needed to develop and solve models are reviewed this extends from applied mathematics to model validation and tuning, model checking and initialization, and to the estimation of physical properties. In Chapter 1 steady state process simulation is introduced, which involves largescale algebraic systems. The most known solving algorithms are first presented. Alternative methods of the quasi-Newtonfamily are then described and some issues such as convergence, ill-conditioned and singular Jacobian matrices are also discussed. Then, it focuses on large and sparse algebraic systems, how to work with bounds, constraints and discontinuities as well as how to deal with the stopping criteria to be adopted. Finally, a short introduction to continuation methods is provided. Chapter 2 deals with distributed dynamic models, that is, partial differential equations (PDEs) or partial differential algebraic equations (PDAEs) incorporating convection, diffusion, reaction and/or thermodynamic property terms. Numerical methods for solving PDEs are reviewed in three sections treating first semidiscretized (method of lines) and fully discretized methods before discussing adaptive and moving mesh methods. Some applications are discussed, such as preparative chromatography, fured-bed reactors, slurry bubble columns, crystallizers and microbial cultivation processes. Finally, an approach for combining computational fluid dynamic (CFD)technology with process simulation is discussed. Process and product design studies rely on good knowledge of materials behavior and physical properties. Chapter 3 deals with estimation methods based on molecular modeling, describing the behavior of atomic and molecular systems subject to energetic interactions. It allows running numerical experiments and as any experiment, it can provide not only accurate physicochemical data but also increase the knowledge on the system studied. Molecular modeling concepts are presented so as
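To make the Newton-type iterations surveyed in Chapter 1 concrete, the short sketch below is an illustration added here, not code from the chapter; the two-equation test system, tolerances and all names are invented. It solves a small nonlinear algebraic system with a damped Newton step and a finite-difference Jacobian, and stops on a residual-norm convergence test of the kind discussed in the chapter.

    import numpy as np

    def fd_jacobian(f, x, eps=1e-7):
        """Forward-difference approximation of the Jacobian of f at x."""
        f0, n = f(x), x.size
        J = np.empty((n, n))
        for j in range(n):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (f(xp) - f0) / eps
        return J

    def damped_newton(f, x0, tol=1e-10, max_iter=50):
        """Solve f(x) = 0 by Newton's method with simple step halving."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            fx = f(x)
            if np.linalg.norm(fx) < tol:        # convergence test on the residual
                break
            dx = np.linalg.solve(fd_jacobian(f, x), -fx)
            lam = 1.0
            while lam > 1e-4 and np.linalg.norm(f(x + lam * dx)) >= np.linalg.norm(fx):
                lam *= 0.5                      # damp the step until the residual decreases
            x = x + lam * dx
        return x

    # Illustrative two-equation system (hypothetical, not from the book):
    g = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
    print(damped_newton(g, [1.0, 1.0]))

Quasi-Newton variants replace the repeated finite-difference Jacobian with a cheap update such as Broyden's formula, which becomes essential for the large and sparse systems treated later in the same chapter.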
1 Introduction
to demystify them and stress their interests for chemical engineers. Mdtiscale approach including molecular modeling is not illustrated due to restricted space. Rather, routine examples on the use of several molecular techniques suitable to get accurate vapor-liquid equilibrium data when no data is available are provided. Chapter 4 presents a critical review of modeling frameworks for complex processing systems with emphasis not only on the models themselves but also on specialized techniques for the efficient solution of these models. More specifically, due to their increased industrial interest a general modeling framework for adsorptiondiffusion-based gas separation processes is presented in detail with focus on pressure-swing adsorption and membrane-based processes for gas separations. For subsequent sections of this chapter, a critical review of models and specialized solution techniques for crystallization and grinding processes is made. Finally, concluding remarks are drawn up and future research challenges are discussed. Process models are of increasing size and complexity, therefore the methods and tools for their tuning, discrimination and verification are of great importance. The widespread use of process models for design, simulation and optimization requires the proper documentation, re-use and retrofit of already existing models, which also need the above techniques. Thus, Chapter 5 deals with computer-aided approaches and methods of model tuning, discrimination and verification that are based on a formal structured description of process models. Besides the length and time scales, a detail scale could also be considered which seeks to develop models with varying degrees of fidelity in relation to the real world phenomena. This is the subject of Chapter 6, which presents coverage of multiscale modeling through a discussion of the origins of such phenomena in process and product engineering, as well as discussing the approaches to the modeling of systems that seek to capture multiscale phenomena. The chapter discusses the development of the partial models that make up the multiscale model particularly focusing on the characteristics of those models. The issue of partial model integration is also developed through the use of integrating frameworks. Those frameworks are analyzed to understand the important implications of model coupling and computational behavior. Throughout the chapter reference is made to granulation processing that helps to illustrate the concepts and challenges in this important area. Finally, Chapter 7 of Section 1 addresses one of the current challenges facing the modeling community: the description of regulatory networks in micro-organisms. Micro-organisms constitute examples of entire autonomous chemical plants, which are able to produce and reproduce despite shortage of raw materials and energy supplies. Understanding the intracellular regulatory networks of micro-organisms is important to process systems engineering for several reasons: microbial systems still constitute relatively simple biological systems, the study and understanding of which may enable better understanding of higher biological systems such as human beings. Furthermore, microbial systems are used, often following genetic manipulation, to produce relatively complex organic molecules in an energy efficient manner. Understanding how to couple the microbial regulatory functions and the higher level process and production control functions is a prerequisite for process engineering.
Section 2 of this book brings together process engineering and product design. One major use of models is the development and improvement of processes and products. This is a multidisciplinary approach, and it requires consideration of many aspects of the behavior of a production plant: equipment design, steady-state and dynamic operation of integrated processes, raw materials and energy usage, economics, health and safety. Section 2 reviews the methods currently available to integrate knowledge from different disciplines, and presents tools available to assist in the conception of new products, and in the design of plants able to manufacture them in a competitive and sustainable way.

Section 2 starts with a comprehensive review of the process separation synthesis problem with emphasis on complex distillation systems (Chapter 1). First, a critical overview of the synthesis of simple column sequences is presented, with emphasis on the novel generalized modular representation framework developed at Imperial College London. Then, the synthesis problem of heat-integrated distillation trains is thoughtfully reviewed. Current state-of-the-art methodologies and algorithmic frameworks for the synthesis of complex distillation sequencing are also critically discussed.

The term process intensification is associated mainly with more efficient and compact processing equipment that can potentially replace large and inefficient units commonly used in chemical processing, but it also includes methodologies, such as process synthesis methods, that enable the systematic development of efficient processing units. Chapter 2 provides an overview of current process intensification technologies and presents a number of recently developed systematic computer-aided process intensification methods and tools. They enable the systematic screening and scoping of large numbers of alternative processing options and can identify novel options of phenomena exploitation that may lead to higher efficiencies. Such tools provide the basis for systematic approaches to novelty in process intensification and have the potential to identify processing options that can easily be missed in design activities that rely on intuition and past experience.

Industrial processes require the use of energy and other utilities such as water and solvents, and produce wastes that need to be treated. The system performance relies not only on the efficiency of the process but also on the quality of its integration, considering the energy conversion technologies, the possible combined heat and power production, the water usage and the waste treatment techniques. Chapter 3 presents computer-aided methods for solving the optimal integration of utility systems. Graphical representations support the engineer's creativity; they are used to define the characteristics of a utility system, to analyze the potential of combined heat and power production and to analyze the quality of subsystem integration. From the requirement analysis, a utility system superstructure can be developed, to be later optimized using mathematical programming techniques. Several formulations are presented and discussed in order to integrate the different types of utility subsystems (e.g., combined heat and power, heat pumps and refrigeration) and optimize their integration with the processes. The problem of water circuit integration can be addressed using similar concepts.
The computational basis for equipment and process design in the chemical manufacturing industries is introduced in Chapter 4. Problems encountered are discussed through the use of case studies that cover the modeling, simulation and optimization of existing and proposed processes. The work described in the case studies represents recent developments and trends in the industry in the area of Computer Aided Process Engineering (CAPE). The applications focus on the use of optimization techniques for obtaining optimal designs and better approaches for controlling the processes close to or at the optimal point of operation. Designs use rigorous models based on thermodynamics, conservation laws and accurate models of transport and fluid flow, with particular emphasis on dynamic behavior and uncertainty in market conditions.

Product development and design is becoming a third paradigm in chemical engineering. This emerging field of research is undergoing a phase of defining its scope and methods as well as generalizing the existing industrial experience. Chapter 5 introduces the main phases of product development and the classes of the applicable methods. Special attention is given to the definition phase: methods for translating consumer requirements into product parameters and approaches to the generation of product ideas. Also given is an introduction to the experimental and knowledge-based methods and tools for product design. Finally, the challenges that face CAPE in the field of product development and design are presented.

Section 3 reviews the current problems facing process operations. It presents the state of relevant methods and technology, and the advances needed to combat ever-increasing complexity. The scope covers resource planning and production scheduling, extending to the analysis of the supply chain. Process monitoring and measurement validation are described, as being preliminary steps for real-time process optimization and model-based process control.

This section starts with a comprehensive review of state-of-the-art models, algorithms, methodologies and tools for the resource planning problem covering a wide range of manufacturing activities (Chapter 1). First, the long-range planning problem in the process industries is considered, including a detailed critical discussion on the effect of uncertainty, the planning of refinery operations and offshore oilfields, the campaign planning problem and the integration of scheduling and planning. Then, the planning problem for new product development in the pharmaceutical industries is discussed in some detail. Next, the tactical planning problem is briefly presented, followed by a description of the resource planning problem in the power market and construction projects. Recent computational solution approaches to the planning problem are reviewed, while available software tools are outlined in the penultimate section of this chapter. Finally, concluding remarks are drawn and future challenges in this area are proposed.

The complex problem of what to produce and where and how to produce it best is considered through an integrated hierarchical approach. Chapter 2 deals with production scheduling, focussing on the single-site problem. Problem solution using heuristics is described, before presenting solution methods based on mathematical programming. Hybrid solutions are also mentioned, as well as the combined solution of scheduling and optimal operation. The state of the art for industrial applications
is described, before concluding with new application domains and future challenges.

Measurements are needed to monitor process efficiency and equipment condition, but also to ensure that operating conditions remain within acceptable ranges, so as to guarantee good product quality and avoid equipment failure and hazardous conditions. However, measurements are never error free. Model-based data reconciliation techniques allow the detection and correction of random experimental errors, taking advantage of redundant measurements. Chapter 3 deals with process monitoring and online estimation of performance indicators. This also includes fault detection capability, and it is required as part of a model-based control system, since model tuning should be based on validated plant data. The design of effective redundant sensor networks is also addressed.

Operating a real plant at its optimal design conditions does not guarantee optimal operation. Some plant-model mismatch cannot be avoided, nor can the effect of disturbances; this is why some sort of feedback control is needed. Chapter 4 deals with model-based control, i.e., the use of rigorous process models for feedback control by model-based on-line optimization. Several implementations with increasing levels of complexity are discussed. Some plant inputs can be fixed by an off-line optimization while other inputs are controlled to keep some key process parameters on target. When nonlinear process models are available this leads to nonlinear model predictive control (NMPC), where the future values of the controlled variables are predicted over a finite horizon (the prediction horizon) using the model, and the future inputs are optimized over a certain horizon (the control horizon). As an extension of this concept, feedback control can be combined with model adaptation and re-optimization. Such a control scheme is presented for the example of batch chromatographic separations, including experimental results. Structural plant-model mismatch is another major problem addressed. A solution is the use of optimization strategies that incorporate feedback directly; this idea is presented in detail and the application to batch chromatography is used to demonstrate its potential. To conclude, the problem of controlling quasicontinuous chromatographic separations is formulated as an on-line optimization problem where the measured outputs have to meet the constraints on the product purities, but the optimization goal is not the tracking of a pre-computed trajectory but optimal process operation.

With the increasing fundamental understanding of the underlying physicochemical phenomena of various processes, and strict environmental, safety and energy consumption constraints, the need for efficient real-time optimization tools has reached unprecedented levels. A better understanding of the processes is leading to high-fidelity but complex mathematical models that cannot always be solved efficiently in real time. The computation of the best operating or control strategy, given these models, is further complicated by the presence of constraints on control variables. In Chapter 5 a parametric programming approach is presented, which moves the real-time computational effort off-line. This is achieved by a priori computing the optimal control variables as a set of explicit functions of the current state of the plant, where
these functions are valid in certain polyhedral regions in the space of the state variables. This reduces real-time optimization to simple function evaluations.

Actually, many production facilities constitute large hybrid systems, making it necessary to consider the continuous-discrete interactions taking place within an appropriate framework for plant and process simulation and optimization. The next chapter of Section 3 (Chapter 6) briefly discusses existing modeling frameworks for discrete/hybrid production systems embodying different approaches. A very recent framework for process recipe initialization that integrates a recipe model into the batch plant-wide model is introduced. The on-line and off-line recipe adaptation from real-time plant information is presented. Finally, a model-based integrated advisory system is described. This system gives on-line advice to operators on how to react in case of process disturbances. In this way, an enhanced overall process flexibility and productivity is achieved. Application of this promising approach is illustrated through examples of increasing complexity.

Process operation management and business competitiveness cannot be understood without considering supply chain activities. The main aim of Chapter 7 is to provide a comprehensive review of recent work on supply chain management and optimization, mainly focused on the process industry. The first part describes the key decisions and performance metrics required for efficient supply chain management. The second part critically reviews research work on enhancing decision-making for the development of the optimal infrastructure (assets and network) and planning. Next, different frameworks are presented, which capture the dynamic behavior of supply chains by establishing efficient replenishment inventory management strategies. Finally, available software tools for supply chain management are outlined and future research needs for the process systems engineering community are identified.

Section 4 focuses on recent developments aiming at the integration of different components in the CAPE world that offer different degrees of practical implementation. Supporting databases and a presentation of the necessary emergent standards in the CAPE domain are also included here.

As chemical product design involves different disciplines, different types of data and tools, different solution strategies, etc., the need for a framework for integrated chemical product-process design becomes a subject of paramount importance. Moreover, there are chemical products where the reliability of the manufactured chemical product is more important than the cost of manufacture, while there are those where the cost of manufacture of the product is at least as important as the reliability of the product. Thus, product-centred process design is also very important. Identifying a feasible chemical product, however, is not enough; it needs to be produced through a sustainable process. The objective of Chapter 1 of Section 4 is first to define the general integrated chemical product-process design problem, then to identify the important issues and needs with respect to its solution and finally to illustrate, through examples, the challenges and opportunities for CAPE/PSE methods and tools. Integrated product-process design, where modeling and supply chain issues play an important role, is also highlighted.
Chapter 2 deals with the important issues of where, why and how models of various types are used throughout the life of an industrial or manufacturing process. The
chapter does not deal specifically with the modeling of the life cycle process but concentrates on the use of models to address a plethora of important issues that arise during the many stages of a process' life, from the cradle to the grave. In this chapter, the life cycle concept is first discussed in relation to a "cradle to the grave" viewpoint, and then in subsequent sections consideration is given to specific issues related to the modeling goals and realizations. Some important issues are discussed which surround model development, reuse, integration, model documentation and archiving. Consideration is also given to the future needs of such modeling approaches and the important implications of life cycle modeling for corporations. Throughout this chapter the authors refer to several specific industrial case studies that help illustrate the importance of modeling throughout the life cycle as well as the challenges of doing so. What is evident in the following sections of this chapter is that there is a huge range of modeling used to help answer vital sociotechnical questions through the life cycle of the process or product. It is important to appreciate that process and product engineering have vital links to social and human factors within a holistic approach to modeling. Major infrastructure projects continually reinforce a more complete view than that which is often taken by process and product engineers. In this chapter the vision of modeling within the process or product life cycle is expanded to see just what has been achieved and where the challenges lie for the future.

An introductory chapter (Section 3, Chapter 7) on the supply chain (SC) network has already presented the elementary principles and systematic methods of supply chain modeling and optimization. In Chapter 3 of Section 4, the need for an integrated management of the SC is further emphasized and novel challenging solutions are presented. As seen, supply chain management (SCM) comprises the entire range of activities related to the exchange of information and materials between customers and suppliers involved in the execution of product and/or service orders in an extremely dynamic environment. Successful management of the supply chain requires direct visibility of the global results of a planning decision in order to include this global perspective. This requires significant integration across multiple dimensions of the planning problem for nonconventional manufacturing networks and multi-site facilities over their entire supply chain. Objectives such as resource management, minimum environmental impact, financial issues, robust operation and high responsiveness to continuous needs must be considered simultaneously, along with a number of operating and design constraints.

All integrated applications needed to design and operate a plant during its whole life cycle need access to reliable physical properties for all chemicals and materials occurring in the process. Chapter 4 presents an overview of the thermophysical properties needed for CAPE calculations and describes the major sources of such data currently available. Several databases for pure component physical properties are described. Phase equilibrium data collections are also reviewed. The quality of the data inside the calculation modules is essential: inaccurate data may lead to very expensive misjudgements on whether to proceed with a new process or a modification of it.
Inadequate or unavailable data may cause a promising and profitable process to be delayed or, in the worst case, rejected, only because it was not properly modelled in a simulation. The text also provides up-to-date references to information sources available on the Internet.

The lack of software standards in computer-aided process and product engineering has been a subject of concern for years, as a source of unnecessary costs, delays and inconsistencies between data produced and consumed by different nonintegrated systems using different bases, different calculation principles, different units of measurement, running on different computers under different operating systems and written in different languages. Chapter 5 introduces software standards intended to remove these problems by providing the desired interoperability between software tools, platforms and databases. With appropriate machine-to-machine interface standards, using the best available tools together becomes a matter of plug-and-play, supposedly as easy as connecting USB devices or hi-fi systems. Moreover, not only do these standards enable the putting together of several software pieces available on your local PC, but they also allow the interoperation of heterogeneous software modules available on your organisations' intranet, or on the Internet. The chapter starts with a discussion of the concepts of openness and of open standards; then some of the most significant operational standards in computer-aided process and product engineering are examined. Following this, the authors look at some of the current software interoperability technologies that will power future systems, namely web services, service-oriented architectures and ontologies for the Semantic Web. The chapter concludes with a brief look at the organisational and economic consequences of the trend towards interoperability and standards in CAPE.

Section 5 presents tutorial examples and case studies aiming to illustrate typical problems that can be solved using state-of-the-art computer-aided tools. The goal is not only to show the benefits of using CAPE methods, but also to indicate some current limitations and point out areas where future research and development should be directed.

Chapter 1 analyses the increased use of computers in chemical engineering education. The authors present a set of computer-aided educational modules that have been specially developed with the aim of avoiding the dangers of misuse of the software. The motivation is to help the students not only to understand the concepts but also to appreciate how the theory can be applied to solve chemical engineering problems. The computer-aided educational modules presented here involve property prediction (suitable for a course on thermodynamics or product design), extractive distillation-based separation (suitable for courses on separation processes, distillation, or process design) and model derivation and solution (suitable for courses on modeling, simulation and/or numerical methods). The students are encouraged to first assemble modules for the corresponding calculation steps into their own software for simple problems and then use specialized software for larger, more complex problems. At this stage, an integrated computer-aided system also becomes very useful, and data transfer between the various calculation steps and the corresponding software options takes place automatically, thereby saving
considerable time, which can instead be spent on understanding the problem better and analyzing the results.

Chapter 2 describes industrial case studies in plant operation and process monitoring. The application and benefits of data validation are illustrated by several examples, taken from a range of industrial environments: oil and gas, chemicals, power plants. Besides plant monitoring and fault detection, the on-line evaluation of key performance indicators is also illustrated. It is shown how the use of more detailed models (e.g., starting with component mass balances, adding energy balances, and later equilibrium constraints) contributes to improving the quality of the results.

Chapter 3 describes the application of a production planning method for a multiproduct manufacturing plant, which optimizes profit under uncertainties in product demands. Flexibility refers exclusively to the planning problem here and is not coupled with a plant re-design. The development and the application of the method are highlighted by means of a case study taken from a food additives plant. This method is considered practical for plant management, because the required input data for the demand and process models and the profit function are easy for the users of the method to obtain. The generated output facilitates an easy interpretation of the sensitivities of the optimized production planning in terms of common economic and product demand specification parameters.

Overall, the theoretical and practical aspects of computer-aided process engineering covered in this book should find wide use in libraries and research facilities, and a direct impact in the chemical industry, particularly in production automation, utility networks, supply chain and business management with embedded computer-integrated process engineering.

The editors would like to acknowledge the many authors who have made this book a reality. We would also like to thank everyone who has assisted in producing the material for this book, and in particular Waltraud Wust at Wiley-VCH for her help in editing the final copy.

Luis Puigjaner
Georges Heyen
Section 1 Computer-aided Modeling and Simulation
Section 1 presents a review of actual trends and shows new advances in mathematical modeling and digital simulation techniques that permit practitioners to solve the complex scenarios that describe real engineering systems. The material in this section is organized in seven chapters covering basic methods and techniques needed to develop and solve models: these extend from large-scale algebraic systems found in steady-state process systems (Chapter 1) to partial differential equations (PDEs) or partial differential algebraic equations (PDAEs) encountered in distributed dynamic models (Chapter 2).

In Chapter 1 a description of direct substitution, gradient and Newton's basic methods is followed by the presentation of more elaborate alternative methods of the quasi-Newton family and continuation methods. Distributed dynamic models are dealt with in Chapter 2; they involve the solution of PDEs or PDAEs incorporating convection, diffusion, reaction and/or thermodynamic property terms. Since analytical solutions exist only in a few cases, due to nonlinearity and complexity, computational methods (or numerical analyses) are in general required to solve such distributed dynamic models. This chapter ends with an approach for combining computational fluid dynamics (CFD) technology with process simulation, which is illustrated and discussed through motivating case studies.

Chapter 3 presents molecular modeling concepts involved in the multiscale modeling approach for process study in a broader framework that promotes computer-aided integrated product and process design. Molecular modeling is presented as an emerging discipline for the study of energetic interaction phenomena. A molecular simulation performs numerical experiments that obtain accurate physicochemical data, provided sampling and energy force-field issues are addressed carefully. Still, computer-demanding molecular modeling tools will likely not be used "online" or be incorporated in process simulators. But, rather like computational fluid dynamics tools, they should be used in parallel with existing efficient simulation tools in order to provide information at the molecular scale on energetic interaction phenomena and increase the knowledge of processes that must manufacture ever more demanding end products.

Modeling frameworks for complex processing (specifically separation) systems, with emphasis not only on the models themselves but also on specialized techniques for the efficient solution of these models, are considered in Chapter 4. Specifically, modeling frameworks on pressure-swing adsorption and membrane-based processes for gas separations, crystallization, and grinding processes are presented due to their increased industrial interest. The increased complexity and size of process models requires appropriate methods and tools for their tuning, discrimination, and verification. An extension to model validation and tuning, model checking and initialization is made in Chapter 5, followed by a coverage of multiscale modeling through a discussion of the origins of such phenomena in process
and product engineering, as well as discussing the approaches to the modeling of systems that seek to capture multiscale phenomena (Chapter 6). Finally, Chapter 7 of Section 1 addresses one of the current challenges facing the modeling community: the description of regulatory networks in micro-organisms, as examples constituting entire autonomous chemical plants.
1 Large-Scale Algebraic Systems
Guido Buzzi-Ferraris and Davide Manca
1.1 Introduction
In this section, we address the solution of a system of N nonlinear equations:
f(x) = 0
(1)
in N unknowns, x, with particular attention given to large systems. It is worth noting that the equations of system (1) must not necessarily be algebraic but may originate, for example, from the solution of a differential system with some initial conditions, or from the evaluation of the upper limit of an integral equation. The solution of nonlinear equations is therefore significant not only as a problem per se, but it is also connected to the solution of differential-algebraic equation (DAE) and ordinary differential equation (ODE) stiff problems. In the following, we will describe some iterative methods for the solution of system (1). With the term "iteration" we mean that, given a previous point x_i, the following one is determined by the equation:

x_{i+1} = x_i + α_i p_i    (2)
The numerical methods for the solution of nonlinear systems (NLSs) are characterized by the selection of the direction p_i and by the amplitude of the movement α_i along p_i. Some methods require the evaluation of the Jacobian matrix, defined as:

J_{jk}(x) = ∂f_j(x)/∂x_k,   j, k = 1, ..., N    (3)

In the following, f_i represents f(x_i) and J_i represents J(x_i). As far as large systems are concerned, it is instinctive to try to reduce the dimensions of the problem. Using this idea, some previous numerical techniques, such as tearing and partitioning, were developed to automatically rearrange the system in order to minimize the number of equations to be solved simultaneously.
Unfortunately, the following problems should not be underestimated:
• It is not certain that the solution of a small NLS requires less time than a larger one.
• In an NLS, the role of unknowns within an equation is not symmetric. In other words, a function can be easily solved with respect to a variable but it can be difficult to find the solution when another unknown is involved.
• In spite of the original NLS being well-conditioned, the reduced NLS obtained from the original one can be ill-conditioned.
The first problem arises from the fact that it is not possible to evaluate the nonlinearity of a system of equations. In other words, contrary to the linear case, it is not possible to determine a priori the time required to solve an NLS as a function of its dimension. For example, system (4) was shown to be easier to solve than the smaller system (5) by Powell (1970).
The second problem is once again bound to the nonlinearity of the system. An example is given by the conversion c in an adiabatic reactor as a function of the reaction temperature T. Often, it is possible to write the following energy balance:

c = g(T)    (6)

Evidently, it is trivial to determine the conversion when T is assigned. Conversely, it may not be easy to evaluate the temperature that produces a specific conversion.
The third problem is due to the fact that a rearrangement of the NLS can introduce an ill-conditioning that was not originally present. Let us suppose we have to solve the following system:

x_1 + 1000 x_4 = 1001
1000 x_1 + x_2 = 1001
1000 x_2 + x_3 = 1001
x_1 + x_2 + x_3 + x_4 = 4    (7)

whose solution is x_1 = x_2 = x_3 = x_4 = 1. System (7) can be rearranged into:

x_1 = 1001 − 1000 x_4
x_2 = 1001 − 1000 x_1
x_3 = 1001 − 1000 x_2
f_4 = x_1 + x_2 + x_3 + x_4 − 4 = 0    (8)
The new problem (8), although characterized by only one equation with unknown x_4, has an extremely ill-conditioned form. As a matter of fact, if x_4 is evaluated numerically as x_4 = 0.99999, then the following values are obtained: x_1 = 1.010014, x_2 = −9.013580, x_3 = 10,014.58 and f_4 = 10,003.58. It should be emphasized that system (8) is linear (therefore a simpler problem) and that it is reduced to a very simple equation in one unknown: x_4.
Quite often, it is better to avoid any manipulation of the system if the goal is to reduce its dimensions. Actually, it is advisable to leave the NLS in its original form since it comes from modeling a physical phenomenon. Doing so, there are more guarantees that the numerical system is well-posed because it describes a real problem. What should be done is something that is apparently similar to rearranging the system but is conceptually quite different: it is advisable to try to exploit the structure of the system without manipulating it. A very simple example is represented by the solution of a steady-state distillation column. The liquid-vapor equilibria and the material balances of the unit should not be solved stage by stage, in a top-bottom sequence, while iterating towards convergence through the overall material balance of the column so as to have the input/output flowrates consistent. By doing so, the physical structure of the problem would shift to obey a sequential mathematical algorithm that would solve several apparently simplified subproblems. Such an approach would not respect the physical structure of the equilibrium stage, in the sense that it would be equivalent to solving a flash problem starting from the known composition of one output stream to determine the compositions of the input and second output streams. On the contrary, the intrinsic structure of a distillation column brings a tridiagonal organization to the related mathematical problem. Such a tridiagonal structure should be exploited to efficiently solve the numerical problem.
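To make the sensitivity of the rearranged form (8) concrete, the following short sketch (in Python, not part of the original text) reproduces the back-substitution with a slightly perturbed x_4; the amplification factors noted in the comments are approximate.

import numpy as np

# System (7): the original form, whose solution is x1 = x2 = x3 = x4 = 1
A = np.array([[1.0,    0.0,    0.0, 1000.0],
              [1000.0, 1.0,    0.0,    0.0],
              [0.0,    1000.0, 1.0,    0.0],
              [1.0,    1.0,    1.0,    1.0]])
b = np.array([1001.0, 1001.0, 1001.0, 4.0])
print(np.linalg.solve(A, b))           # -> approximately [1. 1. 1. 1.]

# Rearranged form (8): back-substitution starting from a slightly wrong x4
x4 = 0.99999                           # error of about 1e-5 on x4
x1 = 1001.0 - 1000.0 * x4              # ~1.01,  error amplified ~1e3 times
x2 = 1001.0 - 1000.0 * x1              # ~-9.0,  error amplified ~1e6 times
x3 = 1001.0 - 1000.0 * x2              # ~1e4,   error amplified ~1e9 times
f4 = x1 + x2 + x3 + x4 - 4.0
print(x1, x2, x3, f4)                  # the residual f4 is of order 1e4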
1.2 Convergence Tests
Working on the implementation of a numerical algorithm for the solution of NLSs, a quite important matter must be addressed: how do we determine whether the new estimate, x_{i+1}, is better or worse than the previous one, x_i? The typical approach is to accept the new value, x_{i+1}, if:

||f(x_{i+1})||_2 < ||f(x_i)||_2    (9)

or, equivalently, if there is a decrease in the merit function:

Φ(x) = (1/2) Σ_{j=1}^{N} f_j²(x) = (1/2) f^T f    (10)
The criterion represented by Eqs. (9) and (10) should be avoided within a general-purpose program whenever the functions f are unbalanced and have significantly different orders of magnitude. In those cases, the equations with lower orders of magnitude do not contribute to the merit function. As an example, we can report the evaluation of a flash, or the modeling of a distillation column. The stoichiometric equations (order of magnitude: 1) stay together with the enthalpy balance equations (order of magnitude: 1.E6-1.E9) and significant differences in terms of orders of magnitude are present in the resulting NLS. An improvement of the previous criterion is given by weighting each equation with a suitable weight, w_j. Consequently, the merit function becomes:

Φ_W(x) = (1/2) Σ_{j=1}^{N} w_j² f_j²(x)    (11)

By introducing the diagonal matrix, W, which has elements equal to the weights, w_j, the matrix notation follows:

Φ_W(x) = (1/2) (Wf)^T (Wf) = (1/2) f^T W² f    (12)
More generally, the weights can vary with the iterations. Consequently, the weight matrix becomes W_i. A reasonable criterion for the definition of the weights is to make all equations have the same order of magnitude. To do so, it is sufficient to use weights equal to the inverse of the order of magnitude of the corresponding equations. This criterion can be implemented in the following ways:
• The user directly writes the equations in a dimensionless form.
• The user assigns the weights to be used by the numerical solver.
• The numerical solver evaluates the weights of Eq. (11).
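As a minimal illustration of the weighted merit function of Eqs. (11) and (12), the sketch below (Python; the residual values are hypothetical) compares the unweighted and weighted criteria when equations differ by many orders of magnitude.

import numpy as np

def weighted_merit(f, w):
    """Phi_W(x) = 0.5 * sum_j (w_j * f_j)^2, i.e., Eqs. (11)/(12)."""
    wf = w * f
    return 0.5 * wf @ wf

# Example residuals: two balance equations of order 1 and two of order 1e8
f = np.array([2.0e-3, -5.0e-4, 3.0e5, -1.0e6])
w_unweighted = np.ones_like(f)
w_scaled = 1.0 / np.array([1.0, 1.0, 1.0e8, 1.0e8])   # inverse orders of magnitude

print(weighted_merit(f, w_unweighted))   # dominated by the large equations
print(weighted_merit(f, w_scaled))       # all equations contribute comparably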
Another approach is to consider whether the vector x is sufficiently near the solution of the problem. Let us suppose we have the following linear system:

Ax = b    (13)

The distance of a point x_i from one of the planes, j:

a_{j1} x_1 + a_{j2} x_2 + ... + a_{jN} x_N = b_j    (14)

is determined by calculating the point at which the orthogonal line passing through x_i intersects the plane itself. The square of the distance between that point and x_i is:

d_j = [a_{j1} (x_1)_i + a_{j2} (x_2)_i + ... + a_{jN} (x_N)_i − b_j]² / Σ_{m=1}^{N} a_{jm}²    (15)

By adopting the following weights:

w_j² = 1 / Σ_{m=1}^{N} a_{jm}²    (16)
every term of summation (11) evaluated at the point x_i represents the square of the distance between such a point and the planes of system (13). As far as NLSs are concerned, matrix A becomes an approximation of the Jacobian, J. Since the Jacobian matrix changes with the iterations, the weights should also be modified. This strategy may be adopted to automatically evaluate the weights in Eq. (11). Unfortunately, the aforementioned strategy does not benefit from the following property (Buzzi-Ferraris and Tronconi, 1993): given a merit function, F(x), applied to a linear system, it is assumed that if F(x_{i+1}) < F(x_i) then point x_{i+1} is closer to the solution than point x_i. The criteria described so far do not benefit from this property except for the linear system (13) consisting of orthogonal planes.
With reference to system (13), let us suppose we know the exact solution, x_s, that makes the residuals null:

b − A x_s = 0    (17)

Given a point x_i, other than x_s, we have the residual:

b − A x_i = f_i    (18)

By subtracting Eq. (17) from Eq. (18) we obtain:

A (x_s − x_i) = f_i    (19)

Formally, the Euclidean norm of the distance x_i − x_s is:

||x_i − x_s||_2 = ||A^{-1} f_i||_2    (20)

and the geometric interpretation of Eq. (20) is that the quantity ||A^{-1} f_i||_2 measures the distance of x_i from the solution x_s. With regard to NLSs, Eq. (20) is a measure of the distance of point x_i from the solution of the linearized system, where A = J_i represents the Jacobian matrix evaluated using x_i. Finally, the distance of a new point x_{i+1} from the solution of the same system is:

||x_{i+1} − x_s||_2 = ||J_i^{-1} f_{i+1}||_2    (21)

Whenever a nonlinear system is concerned, the new point x_{i+1} must be accepted if:

||J_i^{-1} f_{i+1}||_2 < ||J_i^{-1} f_i||_2    (22)

given that, in the case of linear systems, Eq. (22) means that x_{i+1} is closer to the solution than x_i is. It is worth highlighting that the Jacobian of Eq. (22) is kept constant while f_i and f_{i+1} are the residuals at points x_i and x_{i+1}.
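A possible implementation of the acceptance test of Eq. (22) is sketched below in Python; it assumes the Jacobian J_i has been (or can be) LU-factored so that the two extra solves are cheap, as noted in the text.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def accept_step(J_i, f_i, f_new):
    """Acceptance test of Eq. (22): ||J_i^{-1} f_new||_2 < ||J_i^{-1} f_i||_2.

    The Jacobian J_i is kept constant while the residuals f_i (current point)
    and f_new (candidate point) are compared."""
    lu, piv = lu_factor(J_i)              # factorization reused for both solves
    d_old = lu_solve((lu, piv), f_i)
    d_new = lu_solve((lu, piv), f_new)
    return np.linalg.norm(d_new) < np.linalg.norm(d_old)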
If Newton's method is adopted to solve the NLS (see subsequent paragraphs), then the Jacobian matrix J_i has already been factored to solve the linear system produced by the method itself. The evaluation of the merit function using the two points, x_i and x_{i+1}, is therefore straightforward and manageable.
Often, besides normalizing the functions, it is also advisable to normalize the variables. A practical way to implement the normalization is to scale the variables by multiplying them by a coefficient so that all the variables have the same order of magnitude. By indicating with D a suitable diagonal matrix of multiplying coefficients, the proposed transformation is:

z = Dx    (23)

Consequently, the merit function with the new variables becomes:

Φ_{W,D}(z) = (1/2) f(D^{-1}z)^T W² f(D^{-1}z)    (24)
1.3 Substitution Methods
Before applying the substitution method to the solution of an NLS it is necessary to transform the equations into the following formulation:
h(x) = q(x)
(25)
where system h(x) should be easily solvable if the value of q(x) is known. The method consists of applying the iterative formula:

h(x_{i+1}) = q(x_i)    (26)

where x_{i+1} is obtained from x_i. The easiest iterative formula is:

x_j = g_j(x_1, x_2, ..., x_{j−1}, x_{j+1}, ..., x_N),   j = 1, ..., N    (27)
where each variable is obtained explicitly from the corresponding function. The procedure shown in (27) has the same shortcomings as the monodimensional case. Moreover, it is quite difficult to find a proper formulation that converges to the solution.
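The following sketch (Python; the 2×2 example system is hypothetical and chosen only because its g is contractive) shows the bare substitution iteration of Eq. (26).

import numpy as np

def substitution_solve(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{i+1} = g(x_i), Eq. (26)/(27); converges only if g is contractive."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("substitution iteration did not converge")

# Illustrative 2x2 system already rewritten in the form x = g(x)
g = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0]) + 0.1])
print(substitution_solve(g, [0.0, 0.0]))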
1.4 Gradient Method (Steepest Descent)
The gradient of a function is a vector; the function changes most rapidly in the direction of the gradient. With reference to the merit function (10), the gradient in x_i is given by:

g_i = ∇Φ(x_i) = J_i^T f_i    (28)

Consequently, the vector

p(x_i) = p_i = −g_i = −J_i^T f_i    (29)
describes the direction along which the merit function (10) decreases most rapidly. Obviously, the direction of the gradient changes whenever a different merit function is adopted. When the merit function (11) is involved, the search direction becomes:

p_i = −J_i^T W_i² f_i    (30)
If the variables are also weighted and the merit function (24) is adopted, then the search direction becomes:

p_i = −D^{-2} J_i^T W_i² f_i    (31)
The evaluation of the space increment, α_i, is performed by a monodimensional search. The procedures that adopt the gradient (steepest descent) method as the search direction have major limits if used alone. Actually, such methods are efficient only at the initial steps of the solving procedure. The gradient method may be efficiently coupled with Newton's method since it is quite complementary to it: Newton's method is rather efficient in the final steps of the search, while the gradient method is efficient in the initial ones.
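A minimal steepest-descent step using the direction of Eq. (29) might look as follows; the backtracking rule used to pick α_i is an illustrative choice, not the one prescribed in the text.

import numpy as np

def steepest_descent_step(f, J, x):
    """One gradient step for Phi(x) = 0.5 f^T f, with p = -J^T f (Eq. (29))."""
    fx = f(x)
    p = -J(x).T @ fx                       # steepest-descent direction
    phi0 = 0.5 * fx @ fx
    alpha = 1.0
    for _ in range(40):                    # crude backtracking on the merit function
        x_new = x + alpha * p
        f_new = f(x_new)
        if 0.5 * f_new @ f_new < phi0:
            return x_new
        alpha *= 0.5
    return x                               # no improvement found along p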
1.5 Newton’s Method
If the functions f_i of the NLS can be expanded in a Taylor series:

f(x_i + d_i) = f_i + J_i d_i + O(||d_i||²)    (32)

and the point x_i is rather close to the solution, it is possible to stop the expansion at the first-order terms. In this case, the correction vector, d_i, to be summed with point x_i comes from the solution of the system:

f_i + J_i d_i = 0    (33)

The following iterative procedure represents the elementary formulation of Newton's method:

x_{i+1} = x_i + d_i    (34)

where d_i comes from the solution of the linear system (33):

J_i d_i = −f_i    (35)
Consequently, Newton's method has the following search direction:

p_i = d_i    (36)

with α_i = 1. Whenever Newton's method converges, its convergence rate is quadratic.
It is possible to identify one difference between the solution of nonlinear systems and multidimensional optimization. Usually, the Jacobian matrix of system (35) is not symmetric. Thus, it is not possible either to solve the linear system with the Cholesky algorithm or to halve the memory allocation. The most efficient methods adopted for the Jacobian factorization require twice as much time as the Cholesky algorithm.
The correction, d_i, obtained from system (35) is independent of either a change of scale in the variables or the merit function. In fact, by introducing the scale change:

y = Cx + c    (37)

the new Jacobian, with respect to the variables y, becomes:

J_y = J C^{-1}    (38)

and the Newton's method estimate for the x variables is:

x_{i+1} = x_i − J_i^{-1} f_i
As a result, the Newton's method estimate is invariant with respect to a linear transformation of the variables x as well as of the merit function (22). The vector d_i represents a direction along which all the merit functions decrease. Actually:

g_i^T d_i = (J_i^T f_i)^T (−J_i^{-1} f_i) = −f_i^T J_i J_i^{-1} f_i = −f_i^T f_i < 0    (42)

g_{W,i}^T d_i = (J_i^T W_i² f_i)^T (−J_i^{-1} f_i) = −f_i^T W_i² J_i J_i^{-1} f_i = −f_i^T W_i² f_i < 0    (43)

g_{W,D,i}^T d_i = (D^{-2} J_i^T W_i² f_i)^T (−J_i^{-1} f_i) = −D^{-2} f_i^T W_i² f_i < 0    (44)
Such a property is valid if the following two conditions are satisfied:
1. the Jacobian matrix is not singular, i.e., the inverse matrix must exist;
2. the Jacobian in Eq. (41) must be a good approximation of the true Jacobian matrix and not a generic matrix B_i, otherwise:
J_i B_i^{-1} ≠ I    (45)

and the previous Eqs. (42)-(44) may not be true.
It is also possible to outline another difference between the solution of nonlinear systems and multidimensional optimization. As far as multidimensional optimization problems are concerned, the matrix B_i may also be a bad approximation of the Hessian (provided it is positive definite) and at the same time be able to guarantee a reduction of the merit function. Conversely, the matrix B_i involved in the solution of NLSs should be a good estimate of the Jacobian.
Besides the previously mentioned advantages, Newton's method also presents some disadvantages that suggest not using it with the trivial iterative formulation of Eqs. (34) and (35). Three different categories for the classification of the problems related to Newton's method can be outlined:
1. Problems related to the Jacobian matrix
• The method undergoes a critical point if the Jacobian is either singular or ill-conditioned.
2. Problems related to the convergence of the method
• The method may not converge to the solution;
• The new prediction may be worse than the previous one with respect to all merit functions.
3. Problems related to the Jacobian evaluation and the linear system solution
• Every new iteration requires the evaluation of the Jacobian matrix. If the Jacobian is evaluated numerically, this means that the nonlinear system (1) is called N times;
• Each new iteration requires the solution of the linear system (35).
The algorithms derived from the original Newton's method may be divided into two classes depending on how the Jacobian matrix is evaluated. The first class comprises the modified Newton's methods, where the Jacobian is evaluated analytically or approximated numerically at the point x_i. The second class comprises the quasi-Newton methods, which update the Jacobian by means of the information gathered during the iterative process. For both classes, as soon as the Jacobian matrix has been either evaluated or updated, it is recommended to immediately execute a Newton's iteration in order to exploit the efficiency of such a method. Consequently, both of the aforementioned classes first verify the point:

x_{i+1} = x_i + d_i = x_i − J_i^{-1} f_i    (46)

Such a point is accepted if it satisfies at least one of the following tests:
f_{i+1}^T W_i² f_{i+1} < f_i^T W_i² f_i (1 − γ)    (48)
The γ parameter guarantees a satisfactory improvement of the merit functions.
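Putting the pieces together, the sketch below implements the elementary Newton iteration of Eqs. (35) and (46) with the weighted acceptance test (48), using a forward-difference Jacobian in the spirit of Eq. (64). The parameter values, the fallback half-step and the stopping tolerance are illustrative assumptions; it is run here on system (7).

import numpy as np

def numeric_jacobian(f, x, h=1e-7):
    """Forward-difference Jacobian, one column per perturbed variable (cf. Eq. (64))."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for k in range(x.size):
        xp = x.copy()
        xp[k] += h
        J[:, k] = (f(xp) - fx) / h
    return J

def newton_solve(f, x0, w=None, gamma=1e-4, tol=1e-10, max_iter=50):
    """Elementary Newton iteration x_{i+1} = x_i - J_i^{-1} f_i with test (48)."""
    x = np.asarray(x0, dtype=float)
    w = np.ones(x.size) if w is None else np.asarray(w, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(w * fx) < tol:
            return x
        J = numeric_jacobian(f, x)
        d = np.linalg.solve(J, -fx)              # Eq. (35)
        x_new = x + d                            # Eq. (46) with alpha_i = 1
        f_new = f(x_new)
        # acceptance test (48): sufficient decrease of the weighted merit function
        if (w * f_new) @ (w * f_new) < (w * fx) @ (w * fx) * (1.0 - gamma):
            x = x_new
        else:
            x = x + 0.5 * d                      # crude fallback; see Section 1.6.2
    return x

# usage on system (7) of this chapter
def sys7(x):
    return np.array([x[0] + 1000*x[3] - 1001,
                     1000*x[0] + x[1] - 1001,
                     1000*x[1] + x[2] - 1001,
                     x[0] + x[1] + x[2] + x[3] - 4])
print(newton_solve(sys7, np.zeros(4)))           # -> approximately [1. 1. 1. 1.]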
1.6 Modified Newton's Methods
In these methods the Jacobian matrix is evaluated analytically or is approximated numerically. Since the Jacobian is recalculated at each iteration it is also necessary to solve the linear system (35). Several expedients and precautions are necessary to reduce the drawbacks of Newton's method.
1.6.1 Singular or Ill-conditioned Jacobian Matrix
The solution of system (35) is performed through the factorization of the Jacobian matrix. Whichever factorization is adopted, it is mandatory to evaluate the condition number of the system and to operate properly if such a number is too high. Actually, it is possible to introduce the auxiliary function:

(1/2) (f_i + J_i d)^T (f_i + J_i d)    (49)

The minimum of function (49) is:

J_i^T J_i d_i = −J_i^T f_i    (50)

which is equivalent to the prediction of Newton's method (35) applied to the solution of the nonlinear system. If the Jacobian is well-conditioned then the correction, d_i, is achieved by solving system (35) instead of system (50). Conversely, the use of function (49) becomes interesting when the Jacobian matrix is quite ill-conditioned or even singular. As a matter of fact, the system matrix, J_i^T J_i, is the Hessian of function (49) and it is symmetric. By using function (49) instead of the merit function (10), there is the advantage of knowing the Hessian without having to evaluate the second derivatives. At the same time, it is possible to apply to the matrix J_i^T J_i all the expedients exploited when an ill-conditioned minimum problem is involved. Two algorithms implement the aforementioned idea:
• The Levenberg-Marquardt method modifies system (50) in the following way:

(J_i^T J_i + μI) d_i = −J_i^T f_i    (51)
The μ parameter may be chosen so as to transform the matrix (J_i^T J_i + μI) into a well-conditioned matrix. Besides being an artifice to reduce the ill-conditioning of the Jacobian, the Levenberg-Marquardt method is also an algorithm that couples Newton's method to the gradient one. Actually, if the Jacobian is not singular, the solution of system (51), with μ = 0, is equivalent to Newton's estimate. Conversely, when high values of the parameter μ are involved, the search direction tends to the gradient of the merit function (10).
• The Gill-Murray criterion represents the second alternative. The idea is to make the diagonal coefficients of the matrix J_i^T J_i positive. If the Jacobian J is QR factored and the matrix R is worked out in order to avoid any zeros in the main diagonal, then the matrix J^T J = R^T R is symmetric positive definite.
Buzzi-Ferraris and Tronconi (1986) showed a new methodology for the modification of the Jacobian matrix. If the Jacobian is ill-conditioned or singular, then some equations in system (35) are linearly dependent. Consequently, it is possible to eliminate those linearly dependent rows. Since the resulting system becomes underdimensioned, it is appropriate to adopt the LQ factorization, which produces the solution with minimum Euclidean norm for the vector d_i. Thus, it is possible to avoid an excessively large correction on such a vector. The numerical solution satisfies not only the subsystem but also the equations that were removed, since, if compatible, they are almost a linear combination of the others. This criterion is often efficient and is preferable to the previous one since it is not influenced by the merit function. At the same time, it does not produce a false solution. By the term false we mean a solution of the minimum problem that is not the solution of the NLS.
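A minimal sketch of the Levenberg-Marquardt correction of Eq. (51) follows; the values of μ and the small test matrix are illustrative only.

import numpy as np

def lm_direction(J, f, mu):
    """Levenberg-Marquardt correction from Eq. (51): (J^T J + mu*I) d = -J^T f."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), -J.T @ f)

# mu = 0 recovers the Newton direction (if J is nonsingular);
# a large mu turns d towards the steepest-descent direction -J^T f.
J = np.array([[3.0, 1.0], [1.0, 2.0]])
f = np.array([0.5, -1.0])
for mu in (0.0, 1e-2, 1e2):
    print(mu, lm_direction(J, f, mu))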
1.6.2 The Convergence Problem

As mentioned, the Newton's estimate is not satisfactory whenever the point x_{i+1} does not meet conditions (47) and (48). In such a case, it is possible to adopt the following strategies:
• A monodimensional search is performed along the same direction as the Newton's one.
• The region where the functions are linearized is reduced.
• An alternative algorithm to Newton's method is adopted.
Before addressing these points, the following feature should be emphasized: an NLS may not have a solution. Moreover, if a solution exists, we are not sure that it will be possible to determine it. A numerical program should warn the user about its incapability of solving the problem.

1.6.2.1 Monodimensional Search
Usually, since the monodimensional optimization is both not time-consuming and quite efficient, it is adopted in all-purpose solvers. Normally, the monodimensional
search algorithm is not pushed to the extreme. Actually, the optimization is intended to identify a new point where Newton's method might easily converge. The merit function that is usually adopted is the weighted one (12). At the outset, the following data are known:
• the value of Φ_W at the point x_i;
• the gradient g_W at the point x_i and its value along the direction d_i, g_W^T d_i;
• the value of Φ_W at the point x_i + d_i.

Since there are three data in the direction d_i, the merit function Φ_W may be approximated by the parabola:

y(t) = y(0) + t y'(0) + t² [y(1) − y(0) − y'(0)]    (52)

The minimum of the parabola is:

u = −y'(0) / (2 [y(1) − y(0) − y'(0)])    (53)

In the following, we will assume we have a good estimate of the Jacobian matrix. Consequently, the following equation may be adopted:

y'(0) = g_W^T d_i = −f_i^T W_i² f_i    (54)

and the minimum of the parabola becomes:

u = f_i^T W_i² f_i / (f_{i+1}^T W_i² f_{i+1} + f_i^T W_i² f_i)    (55)

It is recommended to check that u is not too small by imposing:

u > 0.1    (56)
Since the point x_{i+1} does not satisfy Eq. (47), an upper limit for u is automatically set. If the minimization is not successful then a new artifice should be exploited, otherwise the program must stop with a warning.
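The parabolic step reduction of Eqs. (52)-(56) can be sketched as follows (Python); it assumes y'(0) is taken from Eq. (54), i.e., that a good Jacobian estimate is available.

import numpy as np

def parabolic_step(f_i, f_new, w):
    """Minimum of the interpolating parabola, Eqs. (54)-(55), with safeguard (56).

    f_i, f_new: residual vectors at x_i and at the rejected point x_i + d_i;
    w: equation weights (diagonal of W)."""
    a = (w * f_i) @ (w * f_i)          # f_i^T W^2 f_i
    b = (w * f_new) @ (w * f_new)      # f_{i+1}^T W^2 f_{i+1}
    u = a / (a + b)                    # Eq. (55)
    return max(u, 0.1)                 # safeguard of Eq. (56)

# If the full Newton step is rejected, the retry point would be x_i + u * d_i.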
1.6.2.2 Reduction of the Search Zone

The Levenberg-Marquardt method may be considered from three distinct perspectives:
• as an artifice to avoid the ill-conditioning problem of the Jacobian matrix;
• as an algorithm that couples Newton's method to the gradient one;
• as a method exploiting either a reduced step or a confidence region.

The third point is the most interesting when a reduction of the search zone is concerned. In this case, it is required to identify the correction d_i that minimizes the auxiliary function (49) with the constraint:

||d_i||_2 ≤ δ    (57)
where δ has a specified value. A valid alternative to the Levenberg-Marquardt method is represented by the dog leg method, also known as Powell's hybrid method (1970). Once again, such a method couples Newton's method to the gradient one. The original version of Powell's method was close to the concept of either a confidence region or a reduced step. Powell proposed a strategy for the modification of the parameter δ subject to both the successes and failures of the procedure.

1.6.2.3 Alternative Methods
Whenever Newton's method fails, it is necessary to switch to an alternative method. For this reason, the most commonly used method is the gradient of a merit function. There are several alternatives. It is possible to perform a monodimensional search along the gradient direction. Even better, the two methods may be coupled, as happens with the Levenberg-Marquardt algorithm or the dog leg method. Another choice is to perform a bidirectional optimization on the plane defined by both search directions. Unfortunately, there are no heuristic methods for the solution of NLSs. Only for very specific problems can a substitution method be expressly tailored and coupled to Newton's method. As an example, in the field of chemical engineering, the boiling point (BP) method may be implemented and applied to distillation columns. In the following, we will introduce the continuation methods. Such methods transform the functions of the NLS and solve an equivalent and dynamically easier problem.
1.7 Quasi-Newton Methods
Let B_i be an approximation of the Jacobian matrix at the point x_i. As mentioned before, the matrix B_i must be a good approximation of the Jacobian. Consequently, also in the case of quasi-Newton methods, it is necessary to evaluate either analytically or numerically the Jacobian matrix. If during the search of the solution the rate of convergence should decrease, re-evaluating the Jacobian is recommended. Therefore, it is not possible to implement a quasi-Newton method without a modified Newton's method. During the solution procedure the values of the functions f_i and f_{i+1} are known at the points x_i and x_{i+1}. Such points must not necessarily correspond to previous Newton's method estimates. Given:
Δx_i = x_{i+1} − x_i    (58)
if the distance between the two points is not significant, it is possible to link the function values f_i and f_{i+1} through a Taylor expansion:
f_{i+1} = f_i + B Δx_i    (59)
where the Jacobian, B, is evaluated at a suitable point between x_i and x_{i+1}. Specifically, it is possible to impose that the Jacobian satisfies the following condition:
f_{i+1} = f_i + B_{i+1} Δx_i    (60)
Equation (60) does not allow univocal evaluation of all the components of the Jacobian when the number of equations N > 1. In this case, N − 1 more conditions are necessary. In 1965 Broyden proposed to choose the conditions to be added to Eq. (60) so as to keep invariant the product between the Jacobian, evaluated in x_i and in x_{i+1}, and any vector orthogonal to Δx_i. Generally, for any given vector, q_i, with:

q_i^T Δx_i = 0    (61)

it must result that:

B_{i+1} q_i = B_i q_i    (62)
This condition is reasonable if we consider Eq. (59). Actually, it is possible to modify the Jacobian in the direction Δx_i so as to satisfy condition (60). On the contrary, in a direction orthogonal to the previous one, there is no additional information and the behavior of the Jacobian, with respect to a Taylor expansion in that direction, should be invariant. By coupling conditions (62) and (60), it is possible to univocally identify the Jacobian in x_{i+1}:

B_{i+1} = B_i + (f_{i+1} − f_i − B_i Δx_i) Δx_i^T / (Δx_i^T Δx_i)    (63)
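The Broyden update of Eq. (63) is a rank-one correction and can be written compactly, for example as in the sketch below.

import numpy as np

def broyden_update(B, dx, df):
    """Rank-one Broyden update, Eq. (63):
    B_{i+1} = B_i + (df - B_i dx) dx^T / (dx^T dx),
    where dx = x_{i+1} - x_i and df = f_{i+1} - f_i."""
    dx = np.asarray(dx, dtype=float)
    df = np.asarray(df, dtype=float)
    return B + np.outer(df - B @ dx, dx) / (dx @ dx)

# The updated matrix satisfies the secant condition (60), B_{i+1} dx = df,
# while B_{i+1} q = B_i q for every q orthogonal to dx (condition (62)).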
1.8 Large and Sparse Systems
When the number of equations and variables is quite large, each equation often depends on a reduced set of variables. As far as Newton's and quasi-Newton methods are concerned, it is necessary to exploit the sparsity of the Jacobian matrix so as to reduce the memory allocation while saving CPU time. In particular, the following expedients are essential:
• The solution of system (35) should be made by the method that best exploits the Jacobian sparsity and structure.
• If the Jacobian has no specific structure that can be directly exploited, it is worthwhile rearranging both the variables and equations so as to reduce the CPU effort and memory allocation required by the factorization of the Jacobian matrix.
• The null Jacobian components should not be evaluated. This happens automatically if the Jacobian is evaluated analytically. Conversely, whenever the Jacobian matrix is approximated numerically, the following computations:
J_{jk} = [f_j(x_i + h_k e_k) − f_j(x_i)] / h_k    (64)
should be avoided if it is a priori known that f_j(x_i + h_k e_k) = f_j(x_i).
• It is possible to exploit some formulas to update the Jacobian that are able to preserve its sparsity. At the same time, if some elements are constant, they should not be updated by those formulas. Schubert (1970) proposed a modification of the Broyden formula (1965), while Buzzi-Ferraris and Mazzotti (1984) proposed a modification of the Barnes formula (1965). These formulas take into account the coefficients that are known and do not modify them. The update is performed only on the coefficients that are unknown.
• If the Jacobian is evaluated numerically, it is not convenient to increment one variable at a time and to perform a call to the nonlinear system for each of them.

This last point must be emphasized. If Eq. (64) is adopted to evaluate a Jacobian matrix that is supposed to be full, then the vector e_k is the null array except for position k, where the element is equal to 1. In this case, system (1) is called N times to evaluate the derivatives of the functions with respect to the N variables. Let us now consider the sparse Jacobian matrix of Fig. 3.1, where the symbol x represents a nonzero element. It can be observed that, when the system is called to evaluate the derivatives with respect to variable x_1, the only functions to be modified are those that depend on x_1. If at the same time variable x_2 were also incremented, it would be possible to evaluate the derivatives with respect to this variable as well, since x_2 influences a different set of functions (among them f_7). Going on with this reasoning, it is possible to show that only three calls to the system of Fig. 3.1 are sufficient to evaluate the whole Jacobian matrix. In fact, with the first call it is possible to increment the variables x_1, x_2, x_3, x_4, x_6, x_9. With the second call we increment the variables x_5, x_8. Finally, with the third call we increment the variables x_7, x_10. When the system is sparse, the total number of calls necessary for the evaluation of the Jacobian matrix can be drastically reduced.
It is not easy to identify the sequence of variable groupings that minimizes the number of calls to the nonlinear system. Curtis, Powell and Reid (1972) proposed a heuristic algorithm that is often optimal and can be easily described. We start with the first variable and identify the functions that depend on it. We then check whether the second variable interferes with the functions with which the first variable interacts. If it does not, we go on to the third variable. Any new variable introduced in the sequence also increases the number of functions involved. When no additional variables can be added to the list, this means that the first group has been identified, and we can go on with the next group until all N variables of the system have been collected. It is evident that the matrix structure of the Jacobian must be known for this procedure to be applied. This means that the user must identify the Boolean matrix of the Jacobian, i.e., the matrix that contains the dependencies of each function on the system variables (see Fig. 3.1).
Figure 3.1 The Boolean matrix describes the Jacobian structure and the dependency of each function on the variables of the nonlinear system.
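The grouping step of the Curtis-Powell-Reid heuristic can be sketched as follows. This is a minimal illustration, assuming the Boolean dependency matrix of Fig. 3.1 is available as a dense 0/1 array; the names `boolean_jacobian`, `cpr_groups` and the example nonzero pattern are our own, not from the original text.

```python
import numpy as np

def cpr_groups(boolean_jacobian):
    """Greedy Curtis-Powell-Reid grouping: variables whose columns share no
    nonzero row can be perturbed together in a single call to the system."""
    n_eq, n_var = boolean_jacobian.shape
    unassigned = list(range(n_var))
    groups = []
    while unassigned:
        used_rows = np.zeros(n_eq, dtype=bool)   # rows already touched by this group
        group = []
        for k in list(unassigned):
            col = boolean_jacobian[:, k].astype(bool)
            if not np.any(used_rows & col):       # no interference with current group
                group.append(k)
                used_rows |= col
                unassigned.remove(k)
        groups.append(group)
    return groups

# Hypothetical Boolean structure (10 equations x 10 variables):
B = np.zeros((10, 10), dtype=int)
nonzeros = [(0, 0), (4, 0), (1, 1), (6, 1), (2, 2), (3, 3), (5, 4),
            (7, 5), (8, 6), (5, 6), (9, 7), (0, 8), (6, 9), (2, 9)]
for i, j in nonzeros:
    B[i, j] = 1
print(cpr_groups(B))  # each sublist of variables can share one perturbed call
```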
1.9 Stop Criteria
When the problem is supposed to be solved, or when insurmountable problems arise, some tests are needed to bring the iterations to an end:
• It is advisable to implement a limit on the maximum number of iterations.
• If the weighted function (12) is lower than an assigned value, there is a good chance that a solution has been reached.
• The procedure is stopped if the Newton step, d_i, has all components reasonably small. As in multidimensional optimization, it is not sufficient to check whether a norm of the vector d_i is lower than an assigned value; it is advisable also to check the following relative criterion:
Even if a quasi-Newton method is used, a good approximation of the Jacobian is available, so this criterion is adequately reliable. Nonetheless, it should be emphasized that this test is correct only when the difference between two consecutive iterations, d_i = x_{i+1} − x_i, comes from a Newton-like method and the Jacobian is not singular.
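A minimal sketch of how such tests can be combined in practice; the tolerance values and the weighted-function value `phi_x` passed in are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def should_stop(phi_x, d, x, it, max_it=100, f_tol=1e-10, step_tol=1e-8):
    """Combine the three stop tests: iteration limit, weighted merit function,
    and a component-wise relative check on the Newton step d = x_new - x."""
    if it >= max_it:
        return True, "maximum number of iterations reached"
    if phi_x < f_tol:
        return True, "weighted function below tolerance"
    # relative smallness of every component of the Newton correction
    if np.all(np.abs(d) <= step_tol * (np.abs(x) + 1.0)):
        return True, "Newton step negligible in every component"
    return False, ""
```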
1.10 Bounds, Constraints, and Discontinuities
Some problems have solutions that are not acceptable because they belong to unfeasible regions. In these situations, it can be worthwhile to assign bounds to the variables in order to prevent the solution from falling into those unfeasible regions. This issue requires the adoption of specifically tailored numerical algorithms that are able to
depart from the unfeasible attractor while moving towards the feasible region. Similar considerations apply when discontinuities are involved. In this case, the numerical algorithm should be able to work across the discontinuity while avoiding a breakdown due to its presence. The discontinuity can be either in the function itself or in its derivatives (Shacham and Brauner 2002).
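One common, generic way to honor simple variable bounds inside a Newton-type iteration is to shorten the step so the new iterate stays inside the feasible box; the sketch below is an illustration of that idea, not the specific algorithm referenced in the text.

```python
import numpy as np

def bounded_newton_step(x, d, lb, ub, shrink=0.5, max_cuts=20):
    """Apply the Newton correction d, but keep the iterate inside [lb, ub]
    by shortening the step until the new point is feasible."""
    alpha = 1.0
    for _ in range(max_cuts):
        x_new = x + alpha * d
        if np.all(x_new >= lb) and np.all(x_new <= ub):
            return x_new
        alpha *= shrink
    return np.clip(x + alpha * d, lb, ub)   # fall back to a projection
```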
1.11 Continuation Methods
Let us suppose we have a nonlinear system of N equations whose solution is quite difficult. For reasons that will be explained in the following, we suppose that a vector of parameters z, of M elements, appears in the equations of the system. Therefore, the NLS can be rewritten as:
f(x, z) = 0    (66)
The system must be solved with respect to the N unknowns, x, for a specific value, z = z_F, of the parameter vector. Let us now suppose we know another system, q(x, z) = 0, in some way related to the previous one, whose solution for a given value of the parameters z is quite easy. In this case it is possible to write a new system that is a linear combination of the previous two:

h(x, z, t) = t f(x, z) + (1 − t) q(x, z) = 0    (67)
The parameter t in Eq. (67) is called the homotopy parameter. As the parameter t varies in the interval 0 ... 1, system h is solved for a value of x that moves gradually from the solution of system q to the solution of system f. The parameters z can be any function of the parameter t, provided that for t = 1 we have z = z_F. The most straightforward functional dependency between t and z is the linear one:

z = z_0 + (z_F − z_0) t    (68)
where z_0 corresponds to the initial value of the parameters. Another functional dependency is the following one:
There are several alternatives for the auxiliary system q(x, z). The common characteristic is that for t = 0 the solution of system q(x_0, z_0) = 0 should be effortless. The following are some choices:

• Fixed point homotopy: q(x, z) = (x − x_0) + (z − z_0)
  h(x, z, t) = t f(x, z) + (1 − t) [(x − x_0) + (z − z_0)] = 0    (70)

• Homotopy with scale invariance: q(x, z) = J(x_0, z_0) [(x − x_0) + (z − z_0)]
  h(x, z, t) = t f(x, z) + (1 − t) J(x_0, z_0) [(x − x_0) + (z − z_0)] = 0    (71)

• Newton or global homotopy: q(x, z) = f(x, z) − f(x_0, z_0)
  h(x, z, t) = f(x, z) − (1 − t) f(x_0, z_0) = 0    (72)

• Parametric continuation method: q(x, z) = 0
  h(x, z, t) = f(x, z) = 0    (73)
The fourth criterion (73) deserves some explanation, since one could think that the original problem has not been modified. In many practical cases, a problem may have a simple solution for a value z_0 of the parameters, while there are numerical difficulties for z_F. In this case the system is first solved by setting t = 0 and z = z_0. We then get a first solution x_0 that satisfies:

h(x_0, z_0, 0) = f(x_0, z_0) = 0    (74)
Successively, we change t from 0 to 1 in order to modify the parameters continuously from z_0 to z_F. By doing so, several intermediate problems are solved through a step-by-step procedure. It is worth highlighting two cases:
1. The parameters, z, correspond to some specifications that should be satisfied. Often, the problem can be easily solved if the specifications are mild, while it becomes hard when the requirements are tight. A typical example is a distillation column, where the continuation parameter can be the product purity. If the required product purity is quite high, the numerical solution may run into problems. In this case, it is recommended to start with a lax specification. Once that solution has been evaluated, the problem is slightly modified by tightening the specification and solved again, adopting the previous solution as the first guess. By continuing the procedure, it is possible to reach the final product purity.
2. The problem can be solved easily by introducing some simplifications. The continuation method then modifies the simplifying hypotheses continuously, carrying the system towards the detailed model. In the case of separation units, one of the difficulties may be the evaluation of the liquid-vapor equilibrium constants, k. If the k vector depends strongly on the compositions, it can be difficult to identify first-guess values that make Newton's method converge. In this case, it is convenient to start by considering the system ideal. By solving the simplified problem under the hypothesis of ideal k values, a solution is easily obtained. Such a solution becomes the first guess for the continuation procedure that takes the system towards the hypothesis of nonideal liquid-vapor equilibria. The parameter vector z comprises the k values as follows:
Initially, when t = 0, all parameters are equal to the ideal k values. The homotopy parameter, t, then evolves from 0 to 1; by doing so the k values change continuously from the ideal to the real values. The same reasoning can be applied to the enthalpies of the mixture. The main advantage of the continuation method is that the intermediate problems have a physical meaning; consequently, each intermediate problem has a solution in which the variables take up reasonable values. Another approach to the solution of the continuation problem is to implement an ODE system that integrates the x variables and the z parameters from an initial condition (easy problem) to a final time (difficult problem). Seader and coworkers (Kuno and Seader 1988; Seader et al. 1990; Jalali and Seader 1999; Gritton et al. 2001) have worked extensively on this approach.
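A minimal sketch of the step-by-step parametric continuation described above, using SciPy's general-purpose nonlinear solver; the model f, the linear parameter path and the number of steps are illustrative assumptions rather than the methods cited in the text.

```python
import numpy as np
from scipy.optimize import fsolve

def continuation_solve(f, x0, z0, zF, n_steps=10):
    """Solve f(x, z) = 0 for z = zF by marching z from an easy value z0 to zF,
    reusing each converged solution as the first guess for the next step."""
    x = np.asarray(x0, dtype=float)
    for t in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        z = z0 + (zF - z0) * t          # linear dependency, as in Eq. (68)
        x, info, ier, msg = fsolve(lambda xx: f(xx, z), x, full_output=True)
        if ier != 1:
            raise RuntimeError(f"continuation failed at t = {t:.2f}: {msg}")
    return x

# Hypothetical scalar example: the root moves smoothly as z grows
f = lambda x, z: np.array([x[0] ** 3 + z * x[0] - 1.0])
x_final = continuation_solve(f, x0=[1.0], z0=0.0, zF=10.0)
```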
References
1 Barnes J. G. P. An Algorithm for Solving Nonlinear Equations Based on the Secant Method. Comput. J. 8 (1965) p. 66
2 Broyden C. G. A Class of Methods for Solving Nonlinear Simultaneous Equations. Math. Comput. 21 (1965) p. 368
3 Broyden C. G. A New Method of Solving Nonlinear Simultaneous Equations. Comput. J. 12 (1969) p. 94
4 Buzzi-Ferraris G., Mazzotti M. Orthogonal Procedure for the Updating of Sparse Jacobian Matrices. Comput. Chem. Eng. 8 (1984) p. 389
5 Gritton K. S., Seader J. D., Lin W. J. Global Homotopy Continuation Procedures for Seeking all Roots of a Nonlinear Equation. Comput. Chem. Eng. 25 (2001) p. 1003
6 Buzzi-Ferraris G., Tronconi E. BUNLSI - A Fortran Program for Solution of Systems of Nonlinear Algebraic Equations. Comput. Chem. Eng. 10 (1986) p. 129
7 Buzzi-Ferraris G., Tronconi E. An Improved Convergence Criterion in the Solution of Nonlinear Algebraic Equations. Comput. Chem. Eng. 17 (1993) p. 419
8 Curtis A. R., Powell M. J. D., Reid J. K. On the Estimation of Sparse Jacobian Matrices. Report TP 476, AERE Harwell, 1972
9 Jalali F., Seader J. D. Homotopy Continuation Method in Multi-phase Multi-reaction Equilibrium Systems. Comput. Chem. Eng. 23 (1999) p. 1319
10 Powell M. J. D. A Hybrid Method for Nonlinear Equations. In: Numerical Methods for Nonlinear Algebraic Equations (Ed.: Rabinowitz). Gordon and Breach, London, 1970
11 Schubert L. K. Modification of a Quasi-Newton Method for Nonlinear Equations with a Sparse Jacobian. Math. Comput. 24 (1970) p. 27
Further Reading
1 Allgower E. L., Georg K. Homotopy Method of Approximating Several Solutions to Nonlinear Systems of Equations. In: Numerical Solution of Highly Nonlinear Problems. Foster, Amsterdam, 1980
2 Davidenko D. On a New Method of Numerically Integrating a System of Nonlinear Equations. Doklady Akademii Nauk USSR 88 (1953) p. 601
3 Eberhart J. G. Solving Equations by Successive Substitution - The Problems of Divergence and Slow Convergence. J. Chem. Edu. 63 (1986) p. 576
4 Gupta Y. P. Bracketing Method for On-line Solution of Low-Dimensional Nonlinear Algebraic Equations. Ind. Eng. Chem. Res. 15 (1995) p. 239
5 Jalali F. Process Simulation Using Continuation Method in Complex Domain. Comput. Chem. Eng. 22 (1998) p. S943
6 Jsun Y. W. Multiple-step Method for Solving Nonlinear Systems of Equations. Comput. Appl. Eng. Edu. 5 (1997) p. 121
7 Karr C. L., Weck B., Freeman L. M. Solutions to Systems of Nonlinear Equations via a Genetic Algorithm. Eng. App. Artif. Intell. 11 (1998) p. 369
8 Kuno M., Seader J. D. Computing all Real Solutions to Systems of Nonlinear Equations with a Global Fixed-Point Homotopy. Ind. Eng. Chem. Res. 27 (1988) p. 1320
9 Neumaier A. Interval Methods for Systems of Equations. Cambridge University Press, Cambridge, 1990
10 Paloschi J. R. Bounded Homotopies to Solve Systems of Algebraic Nonlinear Equations. Comput. Chem. Eng. 19 (1995) p. 1243
11 Paterson W. R. A New Method for Solving a Class of Nonlinear Equations. Chem. Eng. Sci. 41 (1986) p. 135
12 Rice R. J. Numerical Methods, Software and Analysis. Academic Press, London, 1993
13 Seader J. D., Kuno M., Lin W. J., Johnson S. A., Unsworth K., Wiskin J. W. Mapped Continuation Methods for Computing all Solutions to General Systems of Nonlinear Equations. Comput. Chem. Eng. 14 (1990) p. 71
14 Shacham M., Kehat E. Converging Interval Methods for the Iterative Solution of a Nonlinear Equation. Chem. Eng. Sci. 28 (1973) p. 2187
15 Shacham M., Brauner N. Numerical Solution of Non-linear Algebraic Equations with Discontinuities. Comput. Chem. Eng. 26 (2002) p. 1449
16 Shacham M., Brauner N., Cutlip M. B. A Web-based Library for Testing Performance of Numerical Software for Solving Nonlinear Algebraic Equations. Comput. Chem. Eng. 26 (2002) p. 547
17 Schnepper C. A., Stadtherr M. A. Application of a Parallel Interval Newton/Generalized Bisection Algorithm to Equation-Based Chemical Process Flowsheeting. Interval Comput. 4 (1993) p. 40
18 Sundar S., Bhagavan B. K., Prasad S. Newton-preconditioned Krylov Subspace Solvers for Systems of Nonlinear Equations: a Numerical Experiment. Appl. Math. Lett. 14 (2001) p. 195
19 Wayburn T. L., Seader J. D. Homotopy Continuation Methods for Computer-Aided Process Design. Comput. Chem. Eng. 11 (1987) p. 7
20 Wilhelm C. E., Swaney R. E. Robust Solution of Algebraic Process Modeling Equations. Comput. Chem. Eng. 18 (1994) p. 511
2 Distributed Dynamic Models and Computational Fluid Dynamics
Young-il Lim and Sten Bay Jørgensen
2.1 Introduction
Chemical and biotechnical processes are often described by distributed dynamic models, that is, partial differential equations (PDEs) or partial differential algebraic equations (PDAEs) incorporating convection, diffusion, reaction and/or thermodynamic property terms. The PDAE models represent temporal as well as spatial variation of state variables. Since analytical solutions exist only in a few cases, due to nonlinearity and complexity, computational methods (or numerical analyses) are generally required to solve such distributed dynamic models. In this chapter numerical methods for solving PDEs are reviewed in the following three sections, first treating semidiscretized (method of lines) and fully discretized methods before discussing adaptive and moving mesh methods. Several applications of distributed models appearing in preparative chromatography, fixed-bed reactors, slurry bubble columns, crystallizers and microbial cultivation processes are treated in Section 2.6 as a means to introduce various relevant aspects of the solution of PDE/PDAE models for chemical and biotechnical processes. Finally, in Section 2.7 an approach for combining computational fluid dynamics (CFD) technology with process simulation is illustrated and discussed.

2.2 Partial Differential Equations
Chemical and biotechnical processes often take place in spatially distributed systems and are therefore most appropriately described by distributed dynamic models, that is, partial differential equations (PDEs) incorporating convection, diffusion, reaction and thermodynamic property terms (Heydweiller et al. 1977; Kohler et al. 2001). For example, the material, energy, and momentum balances on moving fluid phases result in PDEs with respect to time and one or more space dimensions. Partial time derivatives occur as a direct consequence of the transient operation, while convective
and diffusive (or dispersive) effects normally lead to first- and second-order partial space derivatives, respectively. Material and energy balances on stationary phases (e.g., the solid adsorbent in a packed-bed adsorber/reactor) may not involve any convective or diffusive terms and are therefore free of partial space derivatives. The properties of such a stationary phase at any single point obey ordinary differential equations (ODEs). Algebraic equations (AEs) are often used to define chemical equilibria, physical properties (e.g., enthalpy in terms of temperature, pressure and composition), or other intermediate quantities appearing in the differential equations. Therefore, physical models are generally expressed as PDEs coupled with AEs, i.e., so-called partial differential algebraic equations (PDAEs), subject to initial conditions and boundary conditions. For the purpose of this review, a PDAE system with one spatial coordinate can be expressed as follows:

∂u/∂t = −∂F(u)/∂x + D ∂²u/∂x² + r(u, θ)    (1a)
0 = g(u)    (1b)
where u(t, x) is the state variable as a function of time (t_0 ≤ t ≤ t_f) and space (x_0 ≤ x ≤ x_f), F(u) is the convection flux, D is the diffusion (or dispersion) coefficient, r(u, θ) is the reaction rate expression depending on the state variables (u) and parameters (θ), and g(u) is a nonlinear algebraic equation. On the right-hand side of Eq. (1a), the first, second and third terms take into account convection, diffusion and reaction, respectively. The partial differential equations govern a family of solutions. A particular member of the family of solutions is specified by auxiliary conditions such as initial and boundary conditions. For a PDE containing a first-order time derivative, one initial condition (IC) is required at an initial time level, t = t_0, along the space coordinate x:

u(t_0, x) = u_0    (2)
For a PDE containing a second-order spatial derivative like Eq. (1a), two boundary conditions are required at the physical boundaries of the solution domain. For example, the well-known Danckwerts boundary conditions (BC) can be imposed for Eq. (1a):

F(u) − D ∂u/∂x = f(u)|_in   at x = x_0 for all t    (3a)
∂u/∂x = 0   at x = x_f for all t    (3b)

where f(u)|_in is the inlet flux prescribed by the operating conditions. In the literature, Eq. (3b) is called a Neumann BC and Eq. (3a) is a mixture of a Dirichlet BC and a Neumann BC. Proper specification of the auxiliary conditions is a necessary condition to obtain a well-posed problem (Hoffman 1993).
Physical mathematical models like partial differential equations (PDEs) have a continuous form, while for solution purposes they have to be discretized into a semidiscrete form (e.g., using only spatial discretization, Δx) or a fully discrete form (e.g., combining temporal and spatial discretization, Δt and Δx) in order to represent the models in the temporal and spatial (or computational) domain. Among the large number of numerical methods developed for the solution of PDE or PDAE systems, the following is a well-established classification:
• Method of lines (MOL), including finite difference methods, finite element methods and finite volume methods (Finlayson 1980; Schiesser 1991; Leveque 1998; Lim et al. 2001a; Mantzaris et al. 2001a and 2001b).
• Fully discretized methods (Hoffmann 1993; Chang 1995 and 2002; Lim et al. 2004).
• Adaptive mesh refinement (AMR) or adaptive grid methods (Berger and Oliger 1984; Berger and LeVeque 1998; Vande Wouwer et al. 1998).
• Moving grid methods (Miller and Miller 1981; Dorfi and Drury 1987; Huang and Russell 1997; Li and Petzold 1997; Lim et al. 2001b).
The semidiscretized method is called MOL, where PDEs (or PDAEs) are converted into a system of ODEs (or DAEs) with respect to time by spatial discretization (see Section 2.3 for details). The main advantage is that well-established time integrators, e.g., Runge-Kutta or backward differentiation formula (BDF) methods, can be used for solving the resulting large set of ODEs or DAEs. A main drawback is, however, that it is difficult to control and estimate the impact of the spatial discretization error (Oh 1995). For fully discretized methods (Section 2.4), a system of nonlinear algebraic equations is obtained after temporal and spatial discretization. Adaptive and moving grid methods seem to be the most promising, since the idea is to use a numerical method in which nodes are automatically positioned in order to follow or anticipate steep moving fronts (Section 2.5). The node positioning may be achieved using two basic strategies, namely AMR (i.e., local mesh refinement) and moving mesh methods (i.e., continuous spatial redistribution of a fixed number of mesh points). These two types of methods are appropriate for solving PDEs in the presence of steep moving fronts or discontinuities. One of the key challenges facing process modeling today is the need to describe the interactions between fluid flow and phenomena models such as chemical reactions, mass transfer and phase equilibrium (Bezzo et al. 2003). Process simulations taking convection, diffusion and reaction into account and using computational fluid dynamics (CFD) for the fluid hydrodynamics are important tools for the design and optimization of chemical and biochemical processes. The two technologies are largely complementary, each being able to capture and analyze some of the important process characteristics (Bezzo et al. 2000). Their combined application can therefore lead to significant modeling and simulation benefits, as will be discussed in Section 2.7. Before proceeding with this review, several preliminary concepts are summarized to facilitate presentation of the numerical methods.
2.2.1 ODE (or DAE) Integration
In the MOL framework, the numerical solution of PDEs (or PDAEs) is obtained by time integration of the ODEs (or DAEs) resulting from spatial discretization. The general form of the ODEs is expressed as

M(t) u̇ = h(t, u)    (4)

where u̇ denotes the time derivative du/dt. When the matrix M(t) is singular, Eq. (4) represents a DAE system rather than an ODE system. Solving the DAE system is more complicated than solving the ODE system (see Section 2.2), because the DAE system only has a solution if the initial conditions u_0 are consistent, in the sense that the equation M(t_0) u̇_0 = h(t_0, u_0) has a solution, u̇_0, for the initial slope. Computations in a DAE integrator do not require M(t) to be nonsingular (Ascher and Petzold 1998). If the time-dependent ODE has a condition number that is large, then the problem is stiff. In other words, system (4) is stiff if the Jacobian J = ∂h/∂u (in the neighborhood of the solution) has eigenvalues λ_i for which |λ_max| / |λ_min| >> 1. For stiff ODE/DAE systems, implicit BDF time integrators, such as DASSL (Petzold 1983), LSODI (Hindmarsh 1980), DISCO (Sargousse et al. 1999) and ode15s in Matlab (The MathWorks Inc., MA, USA), are used for accurate evaluation of the time derivatives.
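As an illustration of using a BDF-type integrator on a stiff system, the sketch below uses SciPy's solver rather than the Fortran codes cited above; the Robertson kinetics problem is a standard stiff test case chosen here as an assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, u):
    """Robertson chemical kinetics: a classic stiff ODE test problem."""
    u1, u2, u3 = u
    return [-0.04 * u1 + 1.0e4 * u2 * u3,
            0.04 * u1 - 1.0e4 * u2 * u3 - 3.0e7 * u2 ** 2,
            3.0e7 * u2 ** 2]

sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-6, atol=1e-10)
print(sol.y[:, -1])   # composition at the final time
```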
2.2.2 Accuracy and Computational Performance
How close the numerical solution is to the true solution (or analytical solution, if one exists) for finite temporal and spatial step sizes (Δt and Δx) is assessed by evaluating the accuracy of the discretization. As Δt and Δx converge to zero and as the approximation order of the derivatives increases, the approximation error generally diminishes. However, one must also account for computational efficiency, since the computational time increases as the accuracy rises. In this context, there is a tradeoff between accuracy and computational efficiency. Thus, simultaneously minimizing the approximation error and the computational time can be considered as a multiobjective problem (Lim et al. 2001a). The set of AEs or ODEs obtained after discretization of PDEs differs from the original PDE by the presence of the truncation error terms, which implicitly contribute to numerical diffusion or dissipation. The truncation error related to accuracy is always present in the finite approximation of a PDE. An appropriate numerical method should be selected for a given PDE system in order to meet a tolerable numerical error within a reasonable computational time.
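This accuracy/cost trade-off can be quantified by estimating the observed order of a discretization from errors obtained with two step sizes; a small sketch follows, in which the error values are placeholders only.

```python
import math

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    """Estimate the convergence order p from errors at two mesh sizes:
    err ~ C * h**p  =>  p = log(err_coarse/err_fine) / log(h_coarse/h_fine)."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# placeholder errors from two runs with a halved mesh size
print(observed_order(4.0e-3, 1.1e-3, 0.02, 0.01))   # close to 2 for a 2nd-order scheme
```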
2.2.3 Automatic Differentiation
The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of some functions. Probably the best known are the gradient methods for optimization (e.g., successive quadratic programming, Powell 1971; see also Section 2.4), Newton's method for the solution of nonlinear algebraic equations (see Section 2.1), and the numerical solution of ODEs and PDEs. Both the accuracy and the computational requirements of the derivative computation are usually of critical importance for the robustness and speed of the numerical solution (Bischof et al. 1992). Taking as an example a system of ODEs converted from a partial differential equation with the MOL (see Section 2.1), the system is given as

du/dt = h(t, Δx, u)    (5)

where the state variable u = [u_1 ... u_N] and the nonlinear function h = [h_1 ... h_N] are represented (or approximated) on N discrete spatial mesh points (or finite elements). ODE solution methods, such as implicit Runge-Kutta and BDF methods, require an (N × N) Jacobian ∂h/∂u, which is either provided by the user or approximated by a difference quotient, also called divided differences. In fully discretized methods (e.g., the conservation element and solution element (CE/SE) method, see Section 2.2) for the numerical solution of a PDAE, a nonlinear system is obtained as a function of time, spatial step sizes, and state variables:

0 = h(Δt, Δx, u)    (6)

For fixed time and spatial step sizes, Eq. (6) is solved by a Newton-type iteration requiring the Jacobian ∂h/∂u. Therefore, the computation of derivatives (or of the Jacobian) is a crucial ingredient in the numerical solution of PDEs or PDAEs. Hand-coding is increasingly difficult and error prone, especially as the problem complexity increases. Numerical approximation by divided differences has the advantage that the function is only needed as a black box. For example, a central divided difference is expressed as:

∂h/∂u_j ≈ [h(u + δ_j e_j) − h(u − δ_j e_j)] / (2 δ_j)    (7)

The main drawback of divided differences is that their accuracy is difficult to assess. In addition, they are computationally expensive. The basic idea of automatic differentiation (AD) is to avoid not only numerical approximations, which are expensive and contain rounding errors, but also hand-coded differentiation, which is error prone. Automatic differentiation techniques rely on the fact that every function, no matter how complicated, is evaluated on a
computer as a sequence of elementary operations such as additions, multiplications and elementary functions (Bischof and Hovland 1991). By applying the chain rule over and over again to the composition of those elementary operations, one can compute derivative information of h(u) exactly and in a completely mechanical fashion. Several AD packages, such as Automatic Differentiation in FORTRAN (ADIFOR) (Bischof et al. 1998), are available from the AutoDiff organization Web site (http://www.autodiff.org/).
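The mechanics can be illustrated with a tiny forward-mode AD sketch based on dual numbers; this is our own illustration of the chain-rule principle, not the ADIFOR implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """Value together with its derivative; operations propagate the chain rule."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def dual_exp(x: Dual) -> Dual:
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# exact d/du of h(u) = u * exp(u) + 3 at u = 1.2
u = Dual(1.2, 1.0)                    # seed derivative du/du = 1
h = u * dual_exp(u) + Dual(3.0)
print(h.val, h.der)                   # derivative equals (1 + u) * exp(u)
```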
2.2.4 Fixed, Adaptive and Moving Grids

The numerical study of evolutionary PDEs with steep moving fronts has demonstrated the need for numerical solution procedures with time and space adaptation. Over recent years, a great deal of interest has developed in adaptive mesh methods (Vande Wouwer et al. 1998). The objective of such approaches is to obtain solutions as accurate as those that would be obtained if a fine mesh were used over the entire physical domain, but at a significantly lower computing cost. One would normally like to concentrate a large proportion of the nodes in regions where the solution exhibits rapid variation with respect to space. In the solution of many chemical engineering problems, steep moving profiles also appear. Common examples are (1) concentration breakthrough curves in fixed-bed absorbers (Kaczmarski et al. 1997), (2) the particle (or crystal) size distribution governed by a population balance equation (Kumar and Ramkrishna 1997), and (3) heat conduction problems with a phase change (Mackenzie and Robertson 2000). The fixed grid method uses a constant spatial mesh size (Δx) during time integration, whereas the moving mesh method continuously moves a fixed number of nodes to the regions of rapid solution variation over time. In the adaptive grid method (or AMR), meshes are locally added or removed at certain time levels according to the solution steepness. In Section 2.5 adaptive and moving mesh methods are reviewed and compared.

2.3 Method of Lines
Time-dependent PDEs can be solved by means of the following two-stage procedure. First, the spatial variables are discretized on a selected spatial mesh so as to convert the original PDEs (or PDAEs) into a system of ODEs (or DAEs) with time as the independent variable. Secondly, the discretization in time of the ODE/DAE system then yields the required fully discretized scheme, normally using an ODE/DAE solver. This two-stage approach is often referred to as the method of lines (MOL) in the literature.
The spatial discretization means that the physical spatial domain (x ∈ R^d, in d dimensions) is discretized, replacing the analytical domain by its discrete equivalent (computational domain, ξ ∈ R^d) satisfying the original PDE in a finite number of discrete points distributed over the physical domain. The discretization of PDEs on spatial domains normally leads to a Jacobian matrix whose elements lie within a narrow band (banded Jacobian matrix). However, this banded structure will be destroyed by the equations resulting from the boundary conditions, recycle streams or other nonlinear features; hence a sparse matrix is often encountered. In the ODE/DAE solver, the user defines the appropriate type of Jacobian matrix, to be evaluated by numerical differences, user-provided code or automatic differentiation, as discussed above. The discretization techniques are important, since not satisfying the local conservation equations will give meaningless results. The numerical scheme has to closely mimic the behavior of the original PDEs and guarantee local conservation of flow properties. To achieve this, it is necessary to use not only a conservative formulation of the governing equations but also a conservative numerical scheme. The discretization of the spatial derivatives in Eq. (1) can be accomplished using three main categories of methods: the finite difference method (FDM), the finite volume method (FVM) and the finite element method (FEM). The grid system may be a fixed grid, an adaptive grid or a moving grid.
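The banded Jacobian structure produced by a one-dimensional spatial discretization can be made visible by assembling its sparsity pattern directly; a small sketch using SciPy's sparse matrices, where the three-point coupling between neighboring mesh points is the assumption.

```python
import numpy as np
from scipy.sparse import diags

N = 10
# each discretized equation i couples u[i-1], u[i], u[i+1]  ->  tridiagonal pattern
jac_pattern = diags([np.ones(N - 1), np.ones(N), np.ones(N - 1)],
                    offsets=[-1, 0, 1], format="csr")
print(jac_pattern.toarray().astype(int))
```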
2.3.1 Finite Difference Methods

The finite difference approximation of Eq. (1a) can be expressed in a simple way on N mesh points as follows:

du_i/dt = −[F(u_i) − F(u_{i−1})]/Δx + D [u_{i+1} − 2u_i + u_{i−1}]/Δx² + r(u_i, θ)    (9)

where the first-order spatial derivative is approximated by a first-order upwinding scheme, under the condition ∂f/∂u ≥ 0 (i.e., positive convective flow), and the second-order derivative by a central difference scheme. For i = 1 and N, boundary conditions such as Eq. (3) are applied. The spatial discretization of the parabolic PDE (1) may cause stiffness, due to the second-order spatial derivatives, while that of a convection-dominated PDE (i.e., a large convection velocity relative to the diffusion coefficient) may cause instability, associated with oscillatory behavior of the solution, due to the first-order spatial derivatives (Finlayson 1980). This instability is encountered when using central schemes and higher-order upwinding schemes. To improve the accuracy of the FDMs for such PDEs, numerous attempts have focused on approximating the first-order spatial derivative, ∂f/∂x in Eq. (1a). Some guidance on the selection of upwind methods in the FDM solution is provided in Saucez et al. (2001).
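A minimal MOL sketch of Eq. (9) for a linear convection-diffusion-reaction problem with constant velocity a (so F(u) = a·u) and a first-order consumption term; the parameter values and the simplified inlet/outlet treatment (Dirichlet inlet, zero-gradient outlet instead of the Danckwerts conditions) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, L = 200, 1.0
dx = L / N
a, D, k = 1.0, 1e-3, 0.5          # velocity, diffusivity, first-order rate (assumed)
u_in = 1.0                         # inlet value (simplified Dirichlet inlet)

def rhs(t, u):
    """First-order upwind convection, central diffusion, first-order reaction."""
    up = np.concatenate(([u_in], u[:-1]))        # upstream neighbour (inlet at x0)
    ghost = np.concatenate(([u_in], u, [u[-1]])) # ghost cells for the diffusion term
    return (-a * (u - up) / dx
            + D * (ghost[2:] - 2.0 * u + ghost[:-2]) / dx**2
            - k * u)

u0 = np.zeros(N)
sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF")
```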
In the traditional finite difference discretization (i.e., the fixed-stencil approach to be introduced later), the stencil (S_i) used to approximate spatial derivatives is fixed in both size (number of grid points) and position of the stencil points over the discretization procedure. Fixed-stencil (FS) approximations may not be adequate near discontinuities or steep fronts, where they may give rise to oscillations. These problems have motivated the idea of an adaptive stencil (AS) (Shu and Osher 1989) and a weighted stencil (WS) (Jiang and Shu 1996), in which the left stencil shift (r) changes with the location x_i (see Fig. 2.1), while retaining the total number of points in the stencil. We consider a cell-centered grid rather than a vertex-centered grid, as shown in Fig. 2.1. Cells (C_i), cell centers (x_i), and stencils (S_i) in one spatial dimension are defined by

C_i = [x_{i−1/2}, x_{i+1/2}],   x_i = 0.5 (x_{i−1/2} + x_{i+1/2})    (10)
S_i = [C_{i−r}, C_{i−r+1}, ..., C_i, C_{i+1}, ..., C_{i+s}]    (11)
S_i = [x_{i−r−1/2}, x_{i−r+1/2}, ..., x_{i−1/2}, x_{i+1/2}, ..., x_{i+s−1/2}, x_{i+s+1/2}]    (12)

where r and s denote the left and right stencil shifts, respectively. The approximation order (k) is defined as

k − 1 = r + s    (13)

Consequently, FS approximations can be classified according to the left stencil shift (r) at a given kth-order accuracy (see Table 2.1). For the AS methods (e.g., the essentially nonoscillatory (ENO) schemes, Shu and Osher 1989), the left stencil shift (r) changes with the location (x_i) in order to avoid including a discontinuous (or steep-front) cell (C_i) if possible. Just one stencil is selected out of the candidate stencils (obtained by varying r) when doing the reconstruction, retaining the same order of accuracy.
Figure 2.1 Stencil (S_i) and cell (C_i) structures in one-dimensional problems (left and right stencil shifts r and s; the example shown has r = 3, s = 3).
As a result, there are no oscillations and peaks are sharpened. The weighted stencil (Jiang and Shu 1996), however, uses all candidate stencils, each being assigned a nonlinear weight that depends on the local smoothness of the numerical solution. For convective conservation laws, the one-dimensional hyperbolic PDE is expressed as

u_t = −f(u)_x    (14)

where the subscripts t and x indicate temporal and spatial partial derivatives (∂u/∂t and ∂f/∂x), respectively. If a function h(x) satisfies, at a discrete point x_i,

f(x_i) = (1/Δx) ∫ from x_{i−1/2} to x_{i+1/2} of h(ξ) dξ    (15)

then its derivative with respect to x (i.e., f_x) can be expressed as follows:

f_x(x_i) = [h(x_{i+1/2}) − h(x_{i−1/2})] / Δx    (16)

Therefore, if a numerical flux f̂_{i+1/2} approximates h(x_{i+1/2}) to kth-order accuracy, the convection term can be discretized into the kth-order accurate conservative form

f_x(x_i) ≈ (f̂_{i+1/2} − f̂_{i−1/2}) / Δx    (17)

where f̂_{i+1/2} and f̂_{i−1/2} are the numerical upflux and downflux, respectively, and a uniform mesh size (Δx) is used. Note that in the spatial direction, Eq. (17) can be considered as a finite volume discretization (see Section 2.3.3). The two numerical fluxes are exactly symmetrical, shifted by one mesh distance, so only the numerical upflux is defined in this text.

2.3.1.1 Fixed-Stencil Approach

In the FS approach, the stencil (S_i) is fixed both in the number and in the position of its points. The numerical flux is approximated in a conservative manner from the flux point values, with constants chosen to meet kth-order accuracy:

f̂_{i+1/2} = Σ_{j=0}^{k−1} c_j f_{i−r+j}    (18)

The constants c_j are shown in Table 2.1, only for r ≥ 0 (Shu 1997). Note that the constants c_j are obtained from the derivative of the Newtonian interpolation polynomial (see Eq. (25) for details). For instance, FS-upwind-1 stands for a fixed-stencil method with first-order accuracy (k = 1) in the upwind direction. When the convection velocity ∂f/∂u ≥ 0 in Eq. (14), its numerical upflux/downflux are given by Eq. (18) and Table 2.1:
Table 2.1 The constants c_j up to fifth-order accuracy (r ≥ 0)

Accuracy order (k) | Left stencil shift (r) | c_0 | c_1 | c_2 | c_3 | c_4 | Reference name in this section
1 | 0 | 1 | | | | | FS-upwind-1
2 | 0 | 1/2 | 1/2 | | | | FS-central-2
2 | 1 | -1/2 | 3/2 | | | | FS-back-2 (TPB)
3 | 0 | 1/3 | 5/6 | -1/6 | | |
3 | 1 | -1/6 | 5/6 | 1/3 | | | FS-upwind-3
3 | 2 | 1/3 | -7/6 | 11/6 | | |
4 | 0 | 1/4 | 13/12 | -5/12 | 1/12 | |
4 | 1 | -1/12 | 7/12 | 7/12 | -1/12 | | FS-central-4
4 | 2 | 1/12 | -5/12 | 13/12 | 1/4 | |
4 | 3 | -1/4 | 13/12 | -23/12 | 25/12 | |
5 | 0 | 1/5 | 77/60 | -43/60 | 17/60 | -1/20 |
5 | 1 | -1/20 | 9/20 | 47/60 | -13/60 | 1/30 |
5 | 2 | 1/30 | -13/60 | 47/60 | 9/20 | -1/20 | FS-upwind-5
5 | 3 | -1/20 | 17/60 | -43/60 | 77/60 | 1/5 |
5 | 4 | 1/5 | -21/20 | 137/60 | -163/60 | 137/60 |
f̂_{i+1/2} = f_i,   f̂_{i−1/2} = f_{i−1},   i.e.,   f_x ≈ (f_i − f_{i−1}) / Δx   for ∂f/∂u ≥ 0    (20a)

which was also introduced in Eq. (9). When the convection velocity is ∂f/∂u < 0, Eq. (20a) is modified symmetrically in the opposite direction:

f_x ≈ (f_{i+1} − f_i) / Δx    (20b)

The first-order upwind scheme (FS-upwind-1) in Eq. (20) gives a very stable solution but poor accuracy, because of its low order (k = 1). When k = 2 and r = 0, FS-central-2 is obtained,

f_x ≈ (f_{i+1} − f_{i−1}) / (2 Δx)

which is called the second-order central scheme. The FS-back-2 is equivalent to the three-point backward (TPB) method (Wu et al. 1990). In approximating the first-order spatial derivative, the central difference formulas (e.g., FS-central-2 and FS-central-4) tend to induce phase errors that appear in the form of numerical oscillations, as mentioned earlier. Higher-order upwinding schemes (e.g., FS-upwind-3
and FS-upwind-5 in Table 2.2) cannot remove the numerical oscillatory behavior in steep regions (Lim et al. 2001a).
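To make the fixed-stencil reconstruction of Eq. (18) concrete, a small sketch computing the numerical upflux for the third-order stencils of Table 2.1; the array layout and the periodic wrap-around are our own illustrative choices.

```python
import numpy as np

# C3[r] = (c_0, c_1, c_2): third-order (k = 3) constants from Table 2.1
C3 = {0: (1/3, 5/6, -1/6),
      1: (-1/6, 5/6, 1/3),
      2: (1/3, -7/6, 11/6)}

def fs_upflux(f, r):
    """Numerical upflux f_{i+1/2} = sum_j c_j * f_{i-r+j} on a periodic grid."""
    return sum(cj * np.roll(f, r - j) for j, cj in enumerate(C3[r]))

f = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 50, endpoint=False))
flux = fs_upflux(f, r=1)                        # FS-upwind-3 choice of stencil shift
dfdx = (flux - np.roll(flux, 1)) / (1.0 / 50)   # conservative difference, Eq. (17)
```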
2.3.1.2 Adaptive Stencil Approach

Finite difference ENO schemes were developed by Harten et al. (1987). They employed adaptive stencils in order to obtain information on solution gradients from smooth regions near discontinuities. This provides a sharp, essentially nonoscillatory shock transition, coupled with formal uniformly high-order accuracy in smooth regions. Shu and Osher (1988, 1989) have proposed an efficient implementation of ENO schemes on the basis of fluxes rather than cell averages. The numerical fluxes are evaluated using high-order interpolating polynomials constructed from adaptive stencils in the upwind direction. The primitive of h(x) from Eq. (15), H(x), can be defined by:
Once H ( x ) is approximated by a k*-order Newtonian interpolation polynomial P(x), using central divided differences (DD) at the k + 1 points, k
j-1
,
We can obtain the numerical flu J+1/2
=
through the derivative of the above equation.
dP(xi+1/2)
dx
The O* degree divided differences (DDdi('))are defined by: (0)
DDi
=J
(26)
[~i-1/2.x i + l / ~ I
and in general the kth degree divided differences, for k . DD(k)tx,-r-1/2,
~i-r+1/2,.
DD'"''[~i-~+lp
. ., xi-r+j-1/2,
~i-r+j+1/21
2
1, are defined by:
=
. . . x;-,+j+1/2] - DD(k-1)[~i-r-1/2 . . . ~ i - ~ + j - ~ p ] Xi-r+j+1/2
- xi-r-1/2
(27)
46
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
For first-, second-, and third-degree divided differences, DD1)i+l/l,DD!2)and DDJ)i+lj2 are defined, respectively, as flux point values (6):
In the case of a fmed left stencil shift (r),Eq. (25)becomes equal to Eq. (18).Note that the constants ci (see Table 2.1) are obtained from Eq. (25), when the accuracy order (k) and the stencil shift (r) are given. In the AS approach, the stencil shift (r) is adaptively chosen in Eq. (25). Since a smaller I D D ~ ~ implies that the function is smoother in that stencil, a smaller one (i.e., r) is chosen through the comparison of two relevant divided differences (e.g., IDDil and IDDi-ll). The E N 0 schemes are nonlinear even for linear problems and are especially suitable for problems containing both shocks and complicated smooth flow structures (Shu and Osher 1989). The E N 0 schemes also have some drawbacks. One problem is the freely adaptive stencil, which could change by a rounding error perturbation near zeroes of the solution and its derivatives. Also, this free adaptation of stencils is not necessary in regions where the solution is smooth. Another problem is that the E N 0 schemes are not cost-effective because the E N 0 stencil selection procedure involves many logical statements (i.e., if/then statements). 2.3.1.3 Weighted Stencil Approach
The weighted stencil (WS)method (i.e., WENO scheme) is an approach used to overcome the aforementioned drawbacks while keeping the robustness and high-order accuracy of E N 0 schemes. The idea of WENO is to use a convex combination of all candidate stencils instead of approximating the numerical flux by using only one of the candidate stencils (Jiang and Shu 1996). For the third order WENO scheme in the upwind sense, two candidate stencils are used to define the numerical upflux That is, based on the FS approximation at k = 2 (see Table 2.1), two numerical fluxes (i.e., qo and ql) from the two stencils (Si and Si+lfor r = 0) are incorporated with the weighting:
where, wo =
a0
ffo + f f l ~
and w1 =
ff1
ffo + f f l ~
.
2.3 Method of Lines
qr is obtained as in the FS approach of Eq. (18) for k = 2 in Table 2.1:
C 1
40
=
Qjfi+j= COO&
j=O
+ ~ l f i + =l (J; + ~ + 1 ) / 2
1
41 =
C
Cljj+j-1
I
= c10J-l
+ SIJ
= (-J-I
+ 3J)/2
j=O
The question now is how to define the weighting parameters (aoand al), such that the E N 0 property is achieved. The weighting parameters are calculated by divided from Eq. (28): differences (DDi)i+112 213
(334
1/3
where E is a small positive real number that is introduced to avoid the denominator becoming zero (often, E = lo-' lo-"'). It is suggested in Jiang and Shu (1996) that dJ+l2 . is the power p = 2 is adequate to obtain E N 0 approximations. If a flow speed 1
-
dX
negative, the numerical flux is defined in the reverse order as fi, fi-l). Thus, the WENO scheme is a type of upwinding scheme. The WENO scheme has the following properties: (1)it involves no logical statements, which do appear, however, in the basic E N 0 schemes, (2) the WENO scheme based on the (k - 1)* order EN0 scheme is a k* order approximation in smooth regions, (3) it achieves the E N 0 property by emulating EN0 schemes at discontinuis a smooth function ities, (4)it is smooth in the sense that the numerical flux fi+1/2 and (5) the WS method combines the FS method with the AS method. Hence, the WENO scheme (Jiang and Shu 1996) improves on the EN0 scheme in robustness, smoothness of fluxes, convergence properties and computational efficiency (Shi et al. 2000). 2.3.1.4 Comparison of FS, AS and WS Approaches
Table 2.2 displays the formulation of 12 spatial discretization methods. The notation
X-Y-k is used, where X stands for stencil type (FS, AS or WS), Y indicates the high-
lighted characteristics (upwind, central or backward), and k is the approximation (or accuracy) order. Table 2.3 shows how the stencil structure (i.e., mesh points used for the numerical upflux) changes the position of a shock. In the FS and WS approaches, the position and number of the mesh points do not vary with the shock position. However, a stencil xi, xi+l] 0fj+lj2for the WS scheme is composed of two substencils Si-1/2[x,-l,
47
If-&)/2 h (3f;-+f;., +f;r)/2 h (2f;+, + 3f;- Gf; +f;-2)/6 h
(-f;+2+ 8f1 - 8 x 1 +X2)/12 h
FS-central-2 FS-back-2 FS-upwnd-3 FS-central-4
AX DD&:
=
&+I
Name
Numerical flux
Adaptive stencil (AS)
L ( x), refer to Eq. (28). 2 - x,
+ 15J-2 -
first order divided differences, e.g., DD!&
(- 3f;+z + 30J+l + 20J 2f;-3)/60 h
v;-f;-l)/h*
FS-upwnd-l
FS-upwind-5
Spatial discretization (fJ
Name
Fixed stencil (FS)
Classification of the flexible stencil methods (FS, AS and WS) in the upwinding sense, n the flow speed is positive
le 2.2
Numerical flux &+,,')
-
Name
-
Weighted stencil (WS)
2.3 Method of Lines Table 2.3 a
Stencil structures o f third order FS/AS/WS methods for numerical upflux in a positive flow velocity FS (FS-upwind-3)
shock Right
t
n
t" XI1
*
XI1
+
x,,
x,
X,+,
l
AS (AS-upwind-3)
WS (WS-upwind-3)
XI*?
~
XI1
~
XI1
x,
I
XI
x,+,
X,+?
XI1
xi2
xi,
x, x*,
X,+i
x] and Si+l12[x,xi+l] weighted with respect to the magnitude of neighboring divide differences (DD). In the third-order AS scheme in Table 2.3, the position of mesh points for the numerical upflux shifts to avoid a cell involving the shock. That is, the stencil adapts to solution variations. To compare numerical performance for the three approaches, a linear equation of the conservation law is tested with an initial condition (uo) of various wave forms:
+ u,
= 0, -1.0 < x < 1.0,
(34)
u(x, 0 ) = uo(x)
(35)
where the initial condition
u'=
I
(Jmax(1 - 1OO(x - 0.495)2,0)
I' 10
49
j--#fl~j-j--f-&
The arrows indicate the position of shocks.
Ut
I
fJmax(1 - 1OO(x - 0.505)2, 0 ) +4Jmax(l - 1OO(x - 0.5)2,0 )
0.4 5 x 5 0.6;
otherwise
50
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
contains a smooth but narrow combination of Gaussians, a square wave, a sharp triangle wave and a half-ellipse (Jiang and Shu 1996). Since this PDE only has a linear convective term, its analytical solution has the same shape as the initial condition, i.e., u(x, t) = u_0(x − t). Using a BDF ODE integrator (DISCO, Sargousse et al. 1999), the ODE system is solved on a PC. The banded Jacobian matrix is evaluated numerically. To check the accuracy at a given time level, the L1-error is measured as
(U(x)analytical- U(x)numerical
sx=l x=-1
I dx
(36)
Table 2.4 shows the benchmarking results achieved with the 12 discretization methods on 200 uniform mesh points (Δx = 2/200). The L1-error is measured at t = 0.4 s, and the computational time reported is that required for a time integration of 0.4 s. Instability is indicated by spurious oscillatory behavior in the numerical solutions. Figure 2.2 depicts the numerical performance in the L1-error vs. CPU time space from the data of Table 2.4. In general, as the approximation order increases, the error decreases. However, the fourth-order central discretization method (FS-central-4) produces a much larger error than FS-upwind-3, because of strong oscillations near the shock. Minimizing both the L1-error and the computational time simultaneously, six methods (FS-upwind-1, AS-upwind-3/4, and WS-upwind-3/4/5) are selected as the effective methods, with consideration of the stability of the numerical solution (Lim et al. 2001a). It is found that AS-upwind-2 takes an abnormally long computation time, due to excessive iterations required for convergence. In Fig. 2.3, the numerical solutions of FS-upwind-1 (shortest computational time) and WS-upwind-5 (smallest L1-error) are compared to FS-upwind-5. The numerical results of FS-upwind-1 are stable but not accurate, due to the truncation error.
Name
Accuracy ( b enor)
Computational performance (CPU time, s)
FS
FS-upwind-1 F S-central-2 FS-back-2 FS-upwind-3 FS-central-4 FS-upwind-5
0.2696 0.1878 0.1345 0.0538 0.1219 0.0379
0.8
AS-upwind-2 AS-upwind-3 AS-upwind-4
0.0961 0.0548
9.1 4.6 4.8
0 0 0
WS-upwind-3 WS-upwind4 WS-upwind-5
0.0841
3.5 4.9 5.7
0
AS
ws *
0.0440
0.0452 0.0421
Stability evaluation: 0 (stable), X (not stable)
Stability*
1.1 1.1 1.1 1.3 1.4
0 0
-4
10 -
-(l)FS
8-
(2) AS -9-(3)WS
-
h
J
-A-
6-
.a 2 48
* 2
07
-Analytic solution 0 FS-upwind-1 -4 FS-upwind-5 0 WS-upwind5
Figure 2.2 Error and computational time comparison of FS, AS and WS methods for the linear convection law (each curve follows the point from right to left). (1) FS: FS-upwind-1 + FS-central-2 + FS-back-2+ FS-upwind-3 + FS-central-4 + FS-upwind-5, (2) AS: AS-upwind-2 + AS-upwind-3 + AS-upwind-4, (3) WS: WS-upwind-3 + WS-upwind-4 + WSupwind5
n -0.2 Axial direction (x)
Figure 2.3 Numerical solutions of a linear convection equation according t o discretization methods on 200 fixed-grid points at t = 0.4 s and -1
.o 5 x 5 1.o
FS-upwind-5 method yields a stable solution in smooth regions, but produces some oscillations near discontinuities. One of the WS approaches, WS-upwind-5,is stable and accurate over all regions but computationally somewhat prohibitive (see Table 2.4).
2.3.2 Finite Element Methods
The finite element method (FEM) divides the physical domain into many smaller subdomains (elements) and applies weighted residual methods within each element. Each physical variable over the entire domain is expressed as a sum of finite elements. Additional restrictions are introduced to ensure various degrees of continuity
52
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
of the solution at the element boundaries. In principle, any weighted residual methods can be combined with the finite element concept to yield a corresponding finite element method. In practice, the most commonly used method is the orthogonal collocation on finite elements (Finlayson 1980). 2.3.2.1 Orthogonal Collocation Method on Finite Elements
The method of orthogonal collocation on finite elements was presented in Villadsen and Michelsen (1978). An orthogonal collocation method approximates the solution by weighted combinations of orthogonal polynomials of degree M, and demands that the describing equations be satisfied exactly at a finite set of points called collocation points, which are the zeroes of an orthogonal polynomial. Table 2.5 lists the normalized collocation points for the orthogonal Legendre polynomials of degree 2, 3 and 4 (Finlayson 1980).
Table 2.5 Normalized collocation points for orthogonal Legendre polynomials of degree 2, 3 and 4
Degree of polynomial    Collocation points
0, 0.5, 1 0, 0.21132, 0.78868, 1 0, 0.1127, 0.5, 0.8873, 1
2 3 4
In many areas, such as reaction engineering, the orthogonal collocation method has proved to be a powerful method leading to accurate results. However, when the solution has steep gradients, it is more beneficial to use it in conjunction with a finite element approach. As shown in Fig. 2.4, the physical domain is divided into a number of elements and an orthogonal collocation method is applied within each element. This gives rise to the orthogonal collocation method on finite elements. The position of the jth point in element i is denoted by x_ij. The approximated solution ũ(x) in element i can be given by:
;(xi)
M
zz
U(~y)Ly(g), i = 1.. . N
(37)
j=O
-
--
--
--
2.3 Method of Lines
92 - sk is the Lagrange interpolation polynomial of degree M, where ~ ~ ( 3 GE0 2 ) J
k#j
xj
-
xk
N is the number of elements, and is the normalized position within the element i: - xi0
X"
osi="
Ax
(1 -
where Ax is the equidistant element length From Eq. (37), the first-order derivative of the approximated solution +I) k in element i becomes:
l M du(xik)x U(xij)A$, dx A x j=o
i = 1 . ..N ,k = 0 . . . M
at position
(39)
where AYk is a constant (M + 1) X (M + 1)matrix defined by dLy(92k)
AM
~
Jk
d32
, j , k = O ...M
From the definition of the Lagrange polynomial and the normalized collocation points in Table 2.5, the constants AMfor M = 2, 3 and 4 are evaluated:
[
A'=
-3.0 fl
-1.0
-7 8.19615 A =[ -2.19615 1
1 ;4]
(414
-2.73205 1.73205 1.73205 -0.73205
0.73205 -1.73205 -1.73205 2.73205
2.19615 -8.19615 7
-13 -5.32379 14.78831 3.87298 -2.66667 2.06559 1.87836 -1.29099 -1 0.67621
1.5 -3.22749 0 3.22749 -1.5
-0.67621 1.29099 -2.06559 -3.87298 5.32379
3
1 -1.87836 2.66667 -14.78831 13 I 1
(4V
The second-order derivative of the approximated solution a ( x ) at position k in element i can be obtained using a similar procedure: M
d2k(X;k) 1 x-cU(~jj)B$, dx2 Ax2 j=o
i=l
... N , k = O ... M
(42)
where BYk is a constant (M + 1) x (M + 1) matrix defined by BM Jk
d2LJM(?k) , j , k = O ... M d922 =
(43)
I
53
54
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
Using Eq. (37), the formula for the integral of ii(x) over any one of the subintervals Xi,k+lI can be derived as: Xi,k+l
M
G ( x ) dx x AX
C ii(~ij)C’,
i = 1 . .. N , k = 0.. . M - 1
j=O
l i k
where Cykis a constant (M + 1) x M matrix defined by:
Using the orthogonal collocation method on finite elements, Eq. (la) can consequently be approximated by:
It is interesting to note that the FDM and FEM presented here can be derived in a very similar fashion, namely by defining and manipulating interpolating polynomials over a finite set of points (see Eqs. (24) and (37)), despite their apparent differences. In both cases, spatial derivative approximations at a point xi (or integral approximations over an interval xi+llz])involve the values of the function at a set of neighboring points. The main difference between the two methods is in the composition of the set. For the FDM, this normally involves a fixed number of points (i.e., stencil shift in Section 2.3.1) to the left and right of the current point xi. For orthogonal collocation of the FEM, it involves all points within the element to which xi belongs (Oh 1995). 2.3.2.2 Continuity at Element Boundaries
An important facet of all finite element methods is the treatment of the boundaries between elements. In general, the solution values are assumed to be continuous at the element boundaries, and this normally corresponds to physical reality. One could also make the first spatial derivative continuous across the interface, thus resulting in continuous solution approximations throughout the domain. However, in some cases (e.g., for inhomogeneous domains), it may be more appropriate to enforce continuity of some other quantity at the element boundaries, for instance, the dispersive mass flux or the conductive heat flux (Oh 1995). The continuity of the first derivative at each boundary can be written as:
and can be described in the computational domain as follows: M
M
j=O
j=O
C i i ( ~ q ) A g= C U(xi+lj)A$,
i = 1 . .. N - 1
2.3 Method of Lines
- --
-
--
In general, the discretization of a PDE and its associated boundary conditions using the orthogonal collocation method on finite elements results in three different dasses of relation being applied at three different types of points (see Fig. 2.5): 0
0
0
the appropriate boundary conditions are applied at the physical domain boundary such as Eq. (3) + 2 AEs (or 2 ODEs). the discretized PDE is enforced at the collocation points within each element + N(M - 1) ODES element boundary continuity is enforced at the boundaries between elements -+ (N- 1) ODES
Therefore, (NM + 1) DAEs (or ODEs) are obtained for one PDE. 2.3.3 Finite Volume Methods
For conservation laws, it is often preferable to use a finite volume method rather than a finite difference method, in order to ensure that the numerical method conserves the appropriate quantities of the physical PDEs (Leveque 1998). Consider the numerical solution of the time-dependent one-dimensional conservation laws,
for (x,t ) ES2 = (xL,xR) x (0,r), where r(u) accounts for all considered source and sink terms. We consider that the domain S2 is partitioned into strips such that
where Ntimeis the number of time steps. Each strip is made up of two spatial grids in the case of nonuniform spatial grid, while in the fxed grid strips of control volume are rectangular as shown in Fig. 2.6.
I
55
56
I
2 Distributed Dynamic Models and Computational Fluid Dynamics Figure 2.6 Time and space control volume filled with dots (Q:) and its path line (any) used on a uniform fixed grid
X
The midpoints of spatial grids (xi+li2) are defined at t" simply as
The finite volume approximation of Eq. (49) over the control volume Q: is derived from the original integral expression of the conservation law
1 (e+ ""> at
ax
dx dt =
J, r( u) dx dt
Application of the divergence theorem based on Green's theorem' to Eq. (52) yields a line integral along the boundary, 8Qp.Performing the line integral on the left hand side of Eq. (52) yields:
is obtained. For the right hand side, the source term is simply approximated: ~ ( udxdt ) = T(u)AxiAt,
(54)
where Axi = xi+lj2- ~ i - ~ / ~ Atn a n d= t"+l- P. A n approximation of Eq. (53) is obtained using a numerical quadrature. A number of possibilities are available that give rise to either explicit or implicit methods. For example, using the midpoint formula to integrate along the bottom and top edges of Q y we get the approximations: 1) 1 Green's theorem: Let R be a closed region bounded by C in the xy-plane. Let P(x,y) and Q(x,y) be functions
defined and continuous first partial derivatives.Then
2.3 Method of Lines
For the right and left hand side edges, we use the following family of approximation:
p
Finally, the numerical results of the line integral yield: aay
[f(u) dt - udx] = axi (u:+~ -
.r>
+A;;;*
+ 0.5 (&2
-fi’lt1/2
-A:;;2)
(59)
For the explicit form of f(u), approximatingf”+1i+l/2 and fl+’i-1/2 to f ? + l p andf?-1,2, respectively, we can obtain a simple numerical form of the conservation law as follows,
which is a fully discrete formula for Eq. (49)where the convection term is discretized by the second-order central scheme (i.e., FS-central-2 in Section 2.3.1). Therefore, it seems that the explicit FVM for one-dimensional conservation law has almost the same formulation as the conservative FDM of Eq. (15). However, there are differences between the FVM and the FDM in accordance with the definition of the numerical f l w t e ~ f ? +In ~ ~Eq. ~ . (60),f?+l12is in fact an approximation to the average flux at x = along the finite volume Q::
and for the conservative FDM at t
= t,,
like Eq. (15):
Note that a complete FVM (namely the CE/SE method) in space and time domains is introduced in Section 2.4.3.
I
57
58
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
2.3.3.1 Spatial Finite Volume Method
Rather than attempting to discretize simultaneously in space and time, our attention is paid to discretization of spatial derivatives in the MOL using an adaptive time integrator, e.g., a BDF ODE solver. A naturally conservative spatial discretization procedure is provided by finite volume methods, where the discrete value viewed as an in Fig. 2.1, approximation to the average value of f(xi, P)over a cell Ci[xi-l,z, rather than as an approximation to a point-wise value of f(xi,t") (Leveque 1998).
where
The advantage of the semidiscretized approach is to achieve high accuracy in space. The cell average is simply the integration of f(x, t) over the cell divided by its area, so conservation can be maintained by updating this value based on fluxes through the cell edges. Although the derivation of such methods may be quite different from that of the conservative FDM,the resulting formulas are identical to Eq. (17). The flux function$+l,2 delivering high-order accuracy in space can be obtained by using higher-order interpolation polynomials like E N 0 schemes and WEN0 schemes (see Section 2.3.1).
2.4 Fully Discretized Method
The generic PDE with convection, diffusion and reaction terms can also be solved by temporal and spatial discretization of the original PDE. The time discretization procedure can be explicit or implicit. Several fully discrete schemes are introduced below in finite difference and finite volume approximations.
2.4.1 Explicit Time Discretization
In this section we consider a PDE with the flow velocity (a) and diffusivity (D) like in Eq. (la): ut = -au,
- Dux, - r(u)
(65)
The above equation is discretized by forward-time methods. For example, the Leapfrog scheme can be expressed on equidistant Δt and Δx as:
.;+" = UP ,-
where v
2.4 Fully Discretized Method
V
- (UP ,+I -
+P
u:-1)
-
2u:
+
+ Atr (ur)
(66)
a At = - is
called the Courant-Friedrichs-Lewy (CFL) number or convection Ax number and p = DAt is the diffusion number (Hoffman 1993). The partial time Ad derivative (u,) is approximated by a first-order forward difference and the partial space derivatives (u. and uxx)are approximated by a second-order central difference. Since the central scheme of the first-order spatial derivative is unconditionally unstable as mentioned in Section 2.3.1, an upwind scheme is given by the equation below when a 2 0:
The method is shown to be convergent but only conditionally stable. It introduces significant amounts of implicit numerical dissipation into the solution in the presence of steep fronts. The Lax-Wendroff scheme (1960)is a very popular explicit finite difference method for hyperbolic PDEs (i.e., D = 0 in Eq. (65)).To suppress numerical instability caused by the central discretization, an artificial diffusion term is introduced by a secondorder Taylor expansion in time:
ur'" x ur
1 + ( u t ) l A t + -(utt):At2 = u: 2
- a(ux)lAt
1 2 (u,,)lAt + -a 2
2
(68)
where the time derivatives ut and uttare determined directly from ut = -au,. Applying central discretization to Eq. (68),Lax-Wendroff scheme is given for Eq. (65):
+
(69) + u,?_,)+ Atr ( u r ) From a stability analysis, the method is stable only if1 ' 1 5 1. However, it is not V
un+" = UP - -
(UP 1+1 -
u:,)
V2
(uG1 - 24'
often used to solve convection-diffusion PDEs (Hoffman 1993). The Dufort-Frankel method (1953) proposed a modification of the Leapfrog scheme Eq. (GG), which yields a conditionally stable explicit method. In this modification, u? is replaced by the approximation u l =
+
At
U:fl
-
$1
2
(r(uf+l)
+ +-I))
Solving Eq. (70) for u:+' yields:
+
(1 2y)u;+l
=
-v
( U S " - U,P_")
+ (1
-
At +2 (r(uf+') + r@-1))
2y)uy-l
+2 p
-
(71)
I
59
60
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
The scheme cannot be used for the first time step because of the term u7-l. The 5 1. However, large values of p result in inaccurate solumethod is stable only if tions and a starting method is required to obtain the solution of the first time step. The MacCormark method (1969)is based on the second-orderforward time Taylor series as Eq. (68).
(YI
u:+'
u:
1 + (ut):At + -(utt);At2 2
2:
u:
1
+ -2
((ut):
+ (u,):")
At
(72)
where (h); is obtained from the approximation (u)C= (-au? + D(u.)"&. The method is composed of a predictor and a corrector step. In the predictor step, ur+' is approximated by the first-order forward difference: ;;+I
= u: - v (u? 1+1 - ur)
+ p (u,?+'- 24' + u,?-,) + Atr(ur)
(73)
For the corrector step, u?' is given: l
=1
2
(.:+ q + 1 )
- "(;;+I
ui-')
- *n+l
+ ?p(":
- 26";
+ a;::) + Atr (uF)(74)
The two-sep method that shows second-orderaccuracy in both time and space is very popular for solving Eq. (65) and is conditionally stable. A numerical solution can be convergent only if its numerical domain of dependence contains the true domain of dependence of the PDE, at least in the limit as At and Ax go to zero. The necessary condition is called the CFL condition. All fully discrete explicit schemes require fulfillment of the CFL condition:
For stiff PDEs, implicit time discretization is usually preferred. The method of lines (see Section 2.3) using implicit time integrators is originally motivated to solve stiff PDEs, as mentioned above. In the next section, implicit methods fully discretized in both time and space are presented.
2.4.2 Implicit Time Discretization
The implicit Euler central difference method is: U!+1
p(
= UP - -
.?+I 1+1 - .?+I 1-1
) + p(uz:
Rearranging Eq. (76a)yields:
-(i v +
p ) u:
+ (1+ 2p)u:" +
-
2u7"
)::+7~
:':4.1
= ur
+ Atr (u:)
(76a)
+ Atr (ur)
(76b)
2.4 Fully Discretized Method
Eq. (76) cannot be solved explicitly for un+'i,because the two unknown neighboring ~ i ~ "++ ' ~i + also ~ appear in the equation. Due to the implicit feature, this values ~ ~ + and scheme, which has first-order accuracy in time and second-order accuracy in space, is unconditionally stable and convergent (Hoffman 1993). This implicit Euler method yields reasonable transient solutions for modest values of Y and p. The Crank-Nicolson central difference scheme is constructed by a second-order approximation in both time and space: .?+I
=
V un, - [(u::;'
-
- 2u;+l
+
At 2 [r(u;+l)
+
qy)+
(U>l
-
+ u;-+;) + (u;+l
.;-,)I
- 2u;
+ u;-l)]
(77)
I($)]
The Crank-Nicolson method is also unconditionally stable and convergent. The implicit nature of these methods yields a set of nonlinear algebraic equations, which must be solved simultaneously. Therefore, the iterative calculation requires substantial computational time, especially for multidimensional problems. Recently, an explicit fully discrete method called the CE/SE method has been developed as a finite volume approach to solve fluid dynamics problems. The CE/SE method enforces flux conservation in space and time, both locally and globally. The method is explicit and, therefore, computationallyefficient. Moreover, it is conceptually simple, easy to implement and readily extendable to higher dimensions. Despite its second-orderaccuracy in space, this method possesses low dispersion errors (Ayasoufi and Keith 2003).
2.4.3 Conservation Element/Solution Element Method
The CE/SE method has many nontraditional features, including a unified treatment of space and time, the introduction of conservation element (CE) and solution element (SE) and a novel shock capturing strategy without special techniques. Spacetime CE/SE methods have been used to obtain highly accurate numerical solutions for l D , 2D and 3D conservation laws involving shocks, boundary layers or contacting discontinuities (Chang 1995; Chang et al. 1999). The CFL number insensitive Scheme I1 (Chang 2002) has recently been proposed for the Euler equation (i.e., convection PDEs for mass, momentum and energy conservation). Stiff source term (e.g., a fast reaction) treatment for convection-reactionPDEs (Yu and Chang 1997) is also presented for the space-time CE/SE method. The extension to a PDAE system (Lim et al. 2004) derived from the original CE/SE method (Chang 1995) and Scheme I1 (Chang 2002) is presented in the following. Consider a PDE model like Eq. (65): Ut
=
-6 - p ( u )
(78)
I
62
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
where the flux (F, implying convection and difision is defined as
f = au - D U X
(79)
Thus, Eq. (78) is identical to Eq. (G5). By the divergence theorem the equation is equal to flux conservation as follows:
(80)
V.h=p
. By using Gauss's divergence theorem
(or Green's
theorem) in a space-time E,, it can be shown that Eq. (80) is the differential form of the integral conservation law:
where S (V) is the boundary of an arbitrary space-time region V in E2, and ds = d (a . n with d a and n,respectively, being the area and the outward normal vector of a surface element on S(V). Note that, because h . ds is the space-time flux of h leaving the region Vthrough the surface element ds, Eq. (81) simply states that the total space-time flux of h leaving Vthrough S (V) is equal to the integral of p over V. Also, since in E,, dais the length of a differential line segment on the simple closed curve S (V), the surface integral on the left-hand side of Eq. (81) can be converted into a line integral. In fact, Eq. (81)is equivalent to (Chang 1995):
i;?;
(-udz+fdt)
=
s,
pdv,
where the notation C.C. indicates that the line integral should be carried out in the counterclockwise direction. In Fig. 2.7, the mesh points (e.g., points A, C and E) are marked by circles. They are staggered in space-time. Any mesh point 0,n ) is associated with a solution element S E 0 , n) and two conservation elements CE-0, n) and CE+(I',n). By definition, SEO, n) is the interior of the shaded space-time region depicted in Fig. 2.7a. It includes a horizontal line segment, a vertical line segment, and their immediate neighborhood (Chang 1995).Also, by definition, (1) CE-(j, n) and CE+(j, n), respectively, are the rectangles ABCD and ADEF depicted in Fig. 2.7a and b; and (2) CE(j, n) is the union of CE_(j,n) and CE+(j, n),i.e., the rectangle BCEF. Let the coordinate of any mesh point (j,n) be (3,t") with xj = j A x and t" = nAt. Then, for any ( x , t ) E SE(j, n),u(x, t ) , f ( x , t ) and h(x, t ) , respectively, are approximated by a first-order Taylor expansion: .(xj, t " )
= uj" + (U.)j"(X
- Xj)
+ (ut)j"(t- t " )
(83)
2.4 Fully Discretized Method 1-112
1-1
I
1+1l2
l+1
Br
-
C
CE+W
Figure 2.7 Solution element (SE) and conservation element (CE) atPh position and nfhtime level (Chang 1995). (a) Space-time staggered grid near SE(j, n). (b) CE-(j, n) and CE+(j, n)
so that, q x j , t")
= (J'(Xj,
t"),
qzj, t")) .
(85)
Here u;, (ux);, (ut)T,J, (f")?and v;); are constants in SEG, n). In the CE/SE framework, (ut);,J, &); and, v;)) are considered as functions of (u); and (ux);.These functions will be defined as follows. According to Eq. (79),one has:
fJ " = QU; - D(U,);
(86)
Also, by neglecting the contribution from the second-order derivative, 6); may be obtained using the chain rule:
In order that (ut)? can be determined in terms of (ux);,it is assumed that for any (x, SEG, n),
t) E
v .q*j,
t") = 0
63
f-lF E
CEUA
I
(88)
Thus, within SE(j, n),the contribution of the source term (p) that appears in Eq. (80) is not modeled in Eq. (88).Note that (1)because it is the interior of a region that covers a horizontal line segment, a vertical segment and their immediate neighborhood, as shown in Fig. 2.7, SEO, n) is a space-time region with an infinitesimally small volume; and (2) as will be shown, the contribution of source terms will be modeled in a numerical analogue of Eq. (82). As a result, Eq. (88)implies:
Distributed Dynamic Models and Computational Fluid Dynamics
(J)Jn
=
(”In (y a. 25
- (f.);
. (f.); (&);
-a 2 (ux)jn
at
Note that, by using Eqs. (86), (87), (89) and (9O),J, V;)j”, (ut)j”and v;): can be determined explicitly in terms of uj” and (.&. 2.4.3.1 Iterative CE/SE Method
The approximated conservation flux,
Fn =
#
C.C.
S(CE(j,n))
in Eq. (82), is defined within CElj, n):
(-iidx+fdt)
With the aid of Eq. (83) and (84), the line integral in Eq. (91) results in:
The approximated source term flux (Pj”) is obtained within V(CE(j, n)) as:
?,‘f
~fv(p)jndV
The volume integral in Eq. (94) leads to: Ax
y=pjnl
d x l
At/2
dt=-
AxAt 2 Pj”
The numerical analogue of Eq. (82) becomes:
With the aid of Eqs. (92) and (95), Eq. (96) implies that:
Equation (97) is a nonlinear algebraic equation in terms of uy, which originates from a nonlinear source term (py), Since this system of equations should be solved iteratively (e.g., using a Newton’s iteration method), it is called the iterative CE/SE method, where Jacobian matricesf, and pu are required in Eq. (90) and (97). Here, A x and At are user-supplied parameters. How their values should be chosen is problem-dependent.A small spatial step size (Ax)should be chosen for a problem associated with steep moving fronts. Also, a small CFL number
2.4 fully Discretized Method
ferred for a problem that is stiff with respect to time. Note that the stability of a CE/ SE scheme requires that the CFL number IYI < 1 (Chang 1995), as mentioned for explicit schemes in Section 2.4.1. Without using special techniques that involve ad hoc parameters, the numerical dissipation associated with a CE/SE simulation with a fixed total marching time generally increases as the CFL number decreases. As a result, for a small CFL number (say J Y ~< O.l), a CE/SE scheme may become overly dissipative. To overcome this shortcoming, a new CFL number insensitive scheme, i.e., the so-called Scheme 11, was introduced in Chang (2002).The new scheme differs from other CE/SE schemes only in how (ux)Yis evaluated. Refer to Chang (2002) or Lim et al. (2004) for the detailed formulation. 2.4.3.2 Noniterative CE/SE Method
The noniterative CE/SE method is simply obtained from a first-order Taylor approximation of the source term (Molls and Molls 1998; Lim and Jarrgensen2004).
jP J =p"p"(X-xj)+p;(tJ xj
J
t")
(98)
Using the above equation, Eq. (95) is replaced by
where the time and space derivatives of source terms (p) are reformulated through the chain rule: p[
ZE
ap a u au at
- - =puU[
(100a)
(100b) With the aid of Eq. (99), Eq. (97) evaluated on CE(j, n) is replaced by:
At where w;;l)lzz= - (4p;;;%z+ AxpZ;'{h+ Atp$#$). (u,),!'is also evaluated by Scheme I1 8 proposed by Chang (2002). Thus, two unknowns (u5 u5) are obtained from four known values (u;!y: at the previous time level (tn-'I2).u; in Eq. (101) is obtained without nonlinear iteration procedure. This scheme is a noniterative CE/SE scheme, where Jacobian matrices fu and pu evaluated at the previous time level ( t = t"-'I2) are required in Eq. (101).
U;;Y,~)
I
65
66
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
2.4.3.3 Boundary Conditions
Boundary conditions (atj = 1 and Nmesh)for state variables (u) and its spatial derivatives (u,.) are needed only at each integer-time level (n = 0, 1,2, 3, ...) because of the staggering mesh structure and the intrinsically space-time triangle computational elements (see Fig. 2.7). At each half-time level (n = 1/2, 1 + 1/2, 2 + 1/2, ...), the Values of u and u, for all mesh points (j = 1 + 1/2, 2 + 1/2, ..., NmeSh-1/2)are calculated on the basis of the values at the previous integer-time level without requiring boundary values. When the Danckwert boundary condition Eq. (3) is applied, conservative boundary conditions (BCs) at x = ~0 and x = xfcan be constructed within CE+(I, n) and CE-(Nmesh, n), respectively. Performing a line integral along CE+(l, n) and using Eq. (3a), the boundary condition at x = xo (i.e., j = 1) for the iterative CE/SE method is obtained: At 2 J
u? - - p ? + q ? J
J
= u n-112 . - n-112 J + ~ P 'j+1/2
(102a) (102b)
Ax At --fyj At2 andfk is the inlet flu predefined by the 4 A$' 4Ax operation condition. Eq. (102)leads to a nonlinear equation with respect to two variables, urand uz'when j = 1. The boundary condition at x = xf (i.e.,j= Nmesh) for the iterative CE/SE method is obtained in the same way but by performing a line integral along CE-(N-h, n):
where q;
=
-u
(103a) u,", = 0 J
(103b)
At Here, since q; can reduce to q;= - - with the aid of Eq. (90) and (103b), ur is A J computed from Eq. (103a)through a nonlinear iteration. For the noniterative CE/SE simulation, Eq. (102a) and (103a)are replaced, respectively, by:
When other boundary conditions are imposed, appropriate formulations can be derived in a conservative manner within the conservation elements (CE, (j, n)).
2.4 fully Discretized Method
2.4.3.4 Comparison of CE/SE Method with Other Methods
The iterative CE/SE method, at each time level, is associated with a block diagonal Jacobian matrix. Let the number of PDEs and spatial mesh points be N p D E and Nmesh, respectively. The maximum number of nonzero Jacobian elements for the CE/SE method, JgZLsE, is: = (NPDE x NPDE) x Nmesh
(106)
In the case of linear source terms or noniterative CE/SE simulations, the Jacobian matrix is further reduced to a diagonal form: CE/SE -
Jmi,
- (NPDE x 1) x Nrnesh
(107)
When an implicit ODE integrator is used in the MOL framework for Eq. (78),a band matrix is obtained. Let the length of the upper and lower band matrix be M U and M L dependent on the spatial discretization and nonlinearity of the PDE considered. The maximum number of nonzero band-Jacobian elements for the MOL is known as (Lim et al. 2004):
JEzL= NPDE
'
Nrnesh(ML
+ M U + 1) - -21M L ( M L + 1)
-
1
-MU(hfU 2
+ 1)
(108)
For example, in the simple case where the convection term is discretized by a firstorder backward scheme and the diffusion term by a central scheme like Eq. (9),ML = M U = N p D E . The smallest number of nonzero Jacobian elements in this case,Jip, can be approximated at each time step: (2NPDE X NPDE)X Nmesh
(109)
As a result, the following relation can be derived CE/SE
Jmi,
<
]:YE IJ:p IJgL
JizL,
Eq. (110) means that the number of nonzero Jacobian elements for the MOL, is not less thanJ$EfE. The computational time is normally proportional to the number of nonzero Jacobian elements u) multiplied by the number of time steps (N,im), i.e., J x Ntime. Therefore, it is expected that the computational time of the CE/SE method is shorter than the MOL for the same number of time steps. Especially for nonstiff systems (e.g., chromatographic adsorption problems), the CE/SE method will save computational time because a small number of time steps can be used (Lim et al. 2004). In Section 2.6, the MOL and the CE/SE methods are compared for several PDE problems in terms of accuracy and computational efficiency. In Chang et al. (2000), the CE/SE method is compared with the Leapfrog, LaxWendroff, DuFort-Frankel and MacCormarck schemes (see Section 2.4.1).Here, the CE/SE method shows promising performance compared to these fully discrete methods.
68
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
While the implicit ODE integrator has a self-adaptive feature, i.e., variable order and time stepsize (At), the present CE/SE method has a fixed value of At satisfylng the CFL condition. Thus, for stiff problems a main disadvantage of the CE/SE method could be the futed time step (At).
2.5 Advanced Numerical Methods
Adaptive mesh methods can improve the accuracy and efficiency of the numerical approximations to evolutionary PDE systems that involve large gradients or discontinuities. As the solution changes in an evolutionary PDE, the mesh must also change to adaptively refine regions where the solution is developing sharp gradients, and to remove points from regions where the solution is becoming smoother (Li 1998). Over the past years, significant interest has been devoted to adaptive mesh methods. Various sophisticated techniques have been proposed. For example, adaptive mesh refinement (AMR) removes/adds the nodes at discrete time levels and moving grid methods function by moving the nodes continuously over time (VandeWouwer et al. 1998). Adaptive mesh methods have important applications for a variety of physical and engineering problems (e.g., solid/fluid dynamics, combustion, heat transfer, etc.) that require extremely fine meshes in a small part of the physical domain. Successful implementation of the adaptive strategy can increase the accuracy of the numerical approximation and also decrease the computational cost. This section addresses two different strategies: AMR and the moving mesh method.
2.5.1 Adaptive Mesh Refinement
The AMR approach (Berger and Oliger 1984; Berger and LeVeque 1998) has been shown to be one of the most effective adaptive strategies for PDEs and refines in space and/or time. The AMR process is composed of three steps: error estimation, mesh refinement and solution interpolation. An AMR package called the conservation laws package (CLAWPACK) from the University of Washington is available from http://www.amath.washington.edu/claw/.
-
2.5.1.1 Error Estimation
One way to estimate errors is to use a weighted combination of first and second solution differences. The error (Ei) at x = xi is estimated to be: Ei =
NPDE
C k=l
W1 Iuk,i+l - 4 , i l
+ w2 Iuk,i+~
-
24.i
+ Uk,i-ll
1
i = 2 . . . (Nm&
-
1) (111)
2.5 Advanced Numerical Methods
where NpDEdenotes the number of PDEs and the weighting factors w, and w 2are user-defined. In the adaptive algorithm, the mesh is refined in portions of the physical domain where the inequality Ei 2
F
is satisfied and where E is a user-specified error tolerance.
(112)
2.5.1.2 Mesh Refinement
AMR adds new refinement grids where the error is estimated to be large. The refine-
ment grids are usually aligned with the underlying base grid. The refinement grids are arranged in a hierarchy, with the base grids belonging to level one, the next grids being added to level 2 and so on. Grids on level m are refined by a refinement ratio r, (usually 2 or 4)from the grids on level (m - 1).The grids are normally properly nested so that a grid on level m is completely contained in the grids on level (m - 1). A hierarchical block grid structure for two-dimensional AMR with r = 2 is shown in
overall structure
t
I
I
I
I
I
I
I
r
>,
e
f
.I
Figure 2.8
A hierarchical block grid structure of AMR
I
I
70
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
Fig. 2.8 (Li 1998). Each refinement level consists of several blocks. A block is a logically rectangular grid. After each refinement, the refined cells are clustered into several blocks. Buffer zones and ghost boundaries may be added to each block. All the blocks are managed by a hierarchical data structure. The data structure in the AMR algorithm is complex due to the existence of several levels (m).If u(xi,t", m) were used to store the data, it would be a waste of memory, because at higher refined levels, only a small part of the grid is used. In order to efficiently manage all the discrete points at the same level, we need to cluster them into several disconnected segments, called patches, and treat the patch as the basic data unit (Li 1998). The patches are building blocks of the hierarchical grid structure. 2.5.1.3 Solution Interpolation
The regridding includes computing the physical locations for each fine grid and copying or injecting the solution from the old grid to the new grid. The physical mesh positions are easy to compute by linear interpolation. The solution needs more attention. Although the solution can be obtained from the coarse grid by injection or interpolation, a more accurate solution is obtained from the old grid at the same level, which partially overlays the new grid. One of the secrets behind the success of the AMR algorithm is that flow discontinuities always fall within the overlay regions between the new and old grid. Thus the adaptation process cannot introduce further errors in these problem regions by solution interpolation. The method used to interpolate the solution from the coarse grid needs to be chosen with care. Conservative interpolation is useful in regions near a discontinuity. The coarse grid solution is assumed to be piecewise linear. The slopes for each grid are found by applying a MinMod limiter function to the forward and backward slopes between cell centers (Li 1998). So, for a coarse cell i, ui+lp - ui-lp = MinMod(ui+l - ui, ui - ui-1) where MinMod(a, b) =
("
sign(a) . min(la1, Ibl),
ifab < 0 elsewhere
2.5.1.4 Boundary Conditions
There are two types of boundaries in an AMR system: external boundaries and internal boundaries. External boundaries are given by the problem definition and internal boundaries are generated by refinement. Each patch in one-dimension has two ends: the left and the right. The boundary values are often collected only at the backward time t"-' just before the integration from t"-' to t". This causes a problem when the time integration is performed by an implicit or higher-order MOL, because the boundary values at t" are usually required to compute the intermediate time derivatives of the boundary cells in an MOL approach. This problem can be solved by collecting the values for the internal boundaries from the parent coarse grid at the forward time t" before integrating the current time level.
2.5 Advanced Numerical Methods
2.5.2 Moving Mesh Methods
Although AMR or local mesh refinement is quite reliable and robust, Furzeland et al. (1990)stated that it is cumbersome in some cases to apply it because of the interpolation procedure, nonfxed number of grid points, restart of integration at certain time steps, etc. The moving grid methods (Miller and Miller 1981; Dorfi and Drury 1987; Huang and Russell 1997), where the number of mesh points is kept unchanged, could be very powerful due to the continuous grid adaptation with the evolution of the solution. In the MOL framework, Furzeland et al. (1990) consider the moving finite difference (MFD) approach (Do15 and Drury 1987; Huang and Russell 1997) as promising with respect to reliability, efficiency and robustness. The moving finite element (MFE) approach (Miller and Miller 1981; Kaczmarski et al. 1997; Liu and Jacobsen 2004) that enables one to handle more complicated physical domains may be considered difficult to use because of tuning parameters and to be computationally inefficient. Moving mesh methods have traditionally used a finite difference method (normally with a simple three point central difference) to discretize both the physical PDE and moving mesh PDE (MMPDE). The MFD approach using the central discretization proposed by Huang and Russell (1997) and Dorfi and Drury (1987) is still unstable in some cases because of the central discretization of first-order derivatives. The E N 0 and WEN0 methods (Shu and Osher 1989; Jiang and Shu 1996) are uniformly high-order accurate right up to the discontinuity. Moreover these methods may well be applied to the moving grid method due to the reliable numerical results of first-order derivatives. Li and Petzold (1997) presented a combination of the moving grid method of Dorfi and Drury (1987) with the E N 0 schemes (Shu and Osher 1989) in order to improve stability and accuracy in the discretization procedure. We are interested in the numerical solution ofwell-posed systems of PDEs, e.g., in Eq. (la). If meshes are moving continuously with time, i.e, xi= xi@),by the chain rule, the solution of Eq. (la) satisfies the following equation (Dorfi and Drury 1987):
where U and x denote the time derivatives of u and x , respectively,when nonuniform physical meshes (xi)are transformed into uniform computational meshes (5;).Mesh movement is governed by m(u, x, 3;). The PDE and the mesh equation are intrinsically coupled and are generally solved simultaneously. 2.5.2.1 Equidistribution Principle
The grid equation, m(u, x, X), is induced from the equidistribution principle (EP), which means that the grids are spaced in order to make each arc length of discrete solutions equally distributed at each grid step. Therefore, the nodes are concentrated
I
71
72
I
2 Distributed Dynamic Models and Computational Fluid Dynamics Figure 2.9
of the solw
Arc length (ds =
tion a t a time level (t)
0
1
X
in steep regions. The one-dimensional EP can be expressed in its integral form with the computational coordinate (0 I 5 5 1)and the monitor function, M(x, t), as a metric of each arc length (see Fig. 2.9):
where M (x, t) is called the monitor function. For example, as shown in Fig. 2.9, the arc length monitor function is given: ds M ( x ,t ) = - = dx
d
m 1
Let the total arc length of a numerical solution at a time t be 0 ( t ( = J-M ( x , t ) dlj. 0 Therefore, Eq. (115) is replaced by:
where lji = i/Nmcshis the uniform computational coordinate mentioned above. O(t) is fmed at a given time regardless of 5 and is unknown. As it is difficult to treat this unknown term, O ( t ) , in the numerical procedure, it is eliminated by differentiating Eq. (117)with respect to once and twice. A quasistatic EP is so obtained
From equation (118)one can obtain various mesh equations involving node speeds (X), the so-called moving mesh PDE (MMPDE),which are employed to move a mesh having a fxed number of nodes in such a way that the nodes remain concentrated in regions of rapid variation of the solution (Huang and Russell 1997). For most discretization methods of Eq. (114),abrupt variations in the mesh will cause deterioration in the convergence rate and an increase in the error (Huang and Russell 1997). Moreover, most discrete approximations of spatial differential operators (e.g., u, in Eq. (114)) have much larger CFL condition numbers
2.5 Advanced Numerical Methods
(e.g., =
($+ k
73
) e in Eq. (114))on an abruptly varying mesh than they do on a
gradually varying one. The ill-conditioned approximations may result in stiffness in the time integration in the framework of MOL. Robust mesh equations spatially smoothed are proposed by Dorfi and Drury (1987) and Huang and Russell (1997). As an illustration, the well-known Burgers’ equation with a smooth initial condition is considered: U t = -uux U(X,
+ 10-~~,, o 5
0) = sin(2nx)
I 1
(119)
+ sin(rrx)/2
(120)
The boundary condition is given as u(0, t ) = 0.0 and u(1, t ) = 0.0. A uniform grid structure is used as the initial grid position. The solution is a wave that develops a very steep gradient and subsequently moves towards x = 1. Because of the zero boundary values, the wave amplitude diminishes with increasing time. This is quite a challenging problem for both fmed and moving mesh methods. Proper placement of the fine mesh is critical, and a moving grid method tends to generate spurious oscillation as soon as the mesh becomes slightly too coarse in the layer region, just like nonmoving mesh methods with a central difference (Lim et al. 2001b). Figure 2.10 shows numerical results of the Burgers’ equation solved by the MOL with the third-order W E N 0 scheme on 40 moving grid points (Lim et al. 2001b). The mesh points are well concentrated on the moving front and adapt to physical fluid flow. In Fig. 2.11, grid evolution with time is shown for this case. The mesh points move continuously according to variation of the solution. The moving grid method attains a resolution corresponding to 5000 equidistant grid points near the shock, and to 7.5 equidistant grid points near the smooth regions. 1.5 1 .o
a
2
0.5
Y
a 0.0 -0.5 -1.0 1
I
Figure 2.10 Numerical solutions of Burgers’ equation on 40 moving grid points
74
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
1.5
1.25
1
0.75 E .w
0.5
0.25
0
0.2
0.6
0.4
0.8
1
mesh, x Figure2.11 Grid evolution with time on 40 moving grid points
2.5.3
Comparison between AMR and Moving Mesh Method
In the numerical analysis of PDAEs, discretization methods on fixed mesh points are generally more robust and easy-to-use than those on adaptive mesh points. In the cases involving steep moving fronts, the adaptive mesh methods are efficient with respect to accuracy and computational time. AMR and moving mesh methods are two of the most successful adaptive mesh methods. However, some care is needed to successfully use them. AMR has been developed for explicit temporal integration, while the moving mesh method works efficiently for implicit temporal integration (e.g., MOL in Section 2.3) because of stiffness of the grid. Moving-grid methods use a fixed number of spatial grid points, without need for interpolation. Moving mesh methods implemented via implicit time integration take advantage of the fully automatic adaptation of temporal and spatial stepsize (At, and Ax,). However, simultaneous solution procedures of physical and mesh equations typically suffer from the large computation time due to highly nonlinear coupling between the two equations, often requiring an excessive Newton iteration at each time step. This problem is further exacerbated by the dense clustering of mesh points near discontinuities, which degrades the convergence of the iteration (Stockie et al. 2001). Moreover, the extension of moving mesh methods from one dimension to higher dimensions in not straightforward (Li 1998).The two-dimensional moving mesh equation is much more complicated, because it includes many factors such as temporal smoothness, orthogonality and skewness of the mesh (Huang and Russell
2.6 Applications
1999). Simplicity and efficiency for the extension from 1 D to 2D/3D motivate development of the local refinement method such as AMR (Li 1998). Data structure and algorithms in AMR for a one-dimensional grid can be extended to higher dimensions without difficulty. Adaptive mesh methods also introduce overhead. For the moving mesh, such overhead includes evaluation of the monitor function, regularization of the mesh function, computation of the mesh velocity and solving additional mesh equation for the node positions. Compared with the moving mesh method, the overhead for the AMR method is much less. The evaluation of the monitor function is much cheaper and there is no need for regularization of the mesh function. Most of the overhead comes from the refinement and management of the hierarchical data structure (Li 1998). Recently, the combination of the two adaptive mesh strategies was presented (Hyman et al. 2003) and the two methods are compared and reviewed.
2.6 Applications
This section illustrates applications of the introduced numerical methods for the solution of PDE or PDAE systems in several dynamic chemical/biochemical processes. In Table 2.6, the five examples to be presented are characterized according to the type of equations and physical dominant phenomena. Each of the five problems is described by a time-dependent process model within one-dimensional space. First, chromatography columns modeled by a PDAE system are presented in Section 2.6.1. Here, numerical performances of several MOL methods and the CE/SE method are compared for both linear and nonequilibrium adsorption. In the fmedTable 2.6
Classification o f application examples
Section
Type o f equations
Physical meanings related
Characteristics
2.6.1
Chromatography
PDAE with source term
Convection, diffusion, and adsorption
Steep moving fronts
2.6.2
Fixed-bed reactor
PDE with source term and recycle
Convection, diffusion, and reaction
Mass and heat recycle and oscillation profiles
2.6.3
Sluny bubble column reactor
PDE with source term
Convection, difFusion, and reaction
Chemical reaction related to three-phase hydrodynamics
2.6.4
Population balance equation
Integro-PDE with source terms
Growth, nucleation, agglomeration, and breakage
Dynamic behaviors of the particle size with discontinuous fronts
2.6.5
Cell population dynamics
Integro-PDE with source terms
Growth and cell division
Oscillatory behaviors of cell populations
I
75
76
I
2 Distributed Dynamic Models and Computational Nuid Dynamics
bed reactor model (see Section 2.6.2), oscillatory behaviors of state variables caused by mass/energy recycling are examined and several numerical methods are compared. In Section 2.6.3, a slurry bubble column reactor for Fischer-Tropschsynthesis is considered, where three-phase hydrodynamics are modeled by empirical equations given by De Swart and Krishna. (2002). The dynamics of gaslliquid concentrations and temperature are predicted at the beginning of operation. A population balance equation modeling crystal growth, nucleation, agglomeration and breakage is solved in Section 2.6.4, where discontinuous moving fronts appear due to initial seed crystals. Finally, Section 2.6.5 considers cell population dynamics in microbial cultures described by cell population balance equation (PBE) coupled to metabolic reactions relevant to extracellular environment (Zhu et al. 2000).
2.6.1 Chromatography
Packed-bed chromatographic adsorption between the stationary and mobile phases leads, for each component, to a partial differential algebraic equation (PDAE) system involving one partial differential equation (PDE), one ordinary differential equation (ODE) and one nonlinear algebraic equation (AE) (Lim et al. 2004): (121a)
dn _ - k(n* - n)
(121b)
0 = g(C, n*)
(121c)
dt
where vLis the interstitial velocity, D,, is the axial dispersion coefficient, a is the volume ratio between the two phases, and k refers to the mass transfer coefficient. The liquid and solid concentrations for each component are referred to as C and n, respectively. n* is the equilibrium concentration (or adsorption isotherm). Since the Peclet number (ratio of convection to diffusion, Pe
=
YLL
Da,
where L, is the column
length) is often large in chromatographic processes (Poulain and Finlayson 1993), Eq. (121) is classified as a convection-dominatedparabolic PDAE system. The padted-bed chromatographic problem in Eq. (121), is solved for one component with the volume ratio a = 1.5, the fluid velocity v L = 0.1 m/s, the axial dispersion coefficient D, = 1.0 x lo-’ m’/s, and the adsorption rate coefficient k = 0.0129 s-’. A linear adsorption isotherm is used for the algebraic Eq. ( 1 2 1 ~ ) . n* = 0.85C
(122)
The column length is in the interval 0 5 z 5 1.5 and the integration time is 0 5 t 5 10 s. As the initial condition, C(0, z )= 0, n(0,z ) = 0 and n*(O, z ) = 0 for all z except z = 0 and z = 1.5.
2.G A p p h t i o n s
Suppose that the Danckwert's boundary condition for Eq. (121a) is imposed as below: aC At z = 0 and Vt, U L ( C- Cin) = D,, . (123a)
ac --
Atz=l.SandVt,
az
(123b)
az - 0
where Ci, is a known feed concentration just before entering to the column. Here, an inlet square concentration pulse is considered as follows: Ci, = 2.2, for 0 5 t 5 2.0s
( 124a)
Ci, = 0.0, for 2.0s 5 t 5 10.0s
(124b)
The numerical solutions are obtained on 201 equidistant spatial mesh points. The CFL number for the iterative CE/SE method (Lim et al. 2004; see also Section 2.4.3) is given at Y = 0.4. The reference solution is obtained on 401 equidistant mesh points through the iterative CE/SE method. The error is estimated using Eq. (36). Table 2.7 reports numerical performance on accuracy, computational efficiency and stability for the chromatographic adsorption problem with axial dispersion on 201 mesh points. The second-order central and fifth-order upwinding schemes give spurious oscillatory solutions near steep regions. Thus, the two methods seem to be inadequate for convection-dominatedproblems as mentioned in Lim et al. (2001a). The first-order upwinding scheme (called first-order upwind, or FS-upwind-1)is not accurate because of its low order of accuracy. The two WEN0 schemes (third-order and fifth-order) enhance accuracy and stability but at the cost of longer computation time. The CE/SE method gives, in this case study, the most accurate solution with very short calculation times in a stable manner. In Fig. 2.12 numerical solutions of the fluid concentration (C) are depicted near z = 0.9 at t = 10 s for the adsorption problem. The reference solution is a smeared square profile at z = 0.8 and 1. The CE/SE method shows the best solution without Table 2.7 Accuracy, temporal performance and stability evaluation for a chromatographic adsorption PDAE with axial dispersion and square input concentration on 201 mesh points Accuracy (LI error)**
MOL
FS-upwind-1 FS-central-2 FS-upwind-5 W S-upwind-3 WS-upwind-5
Iterative CE/SE
(CFL= 0.4)
**
Unstable numerical solution. L1 error at t = 10 s.
*t*
CPU time during 10-s integration time.
-L
CPU time (s)***
0.2075 0.0979" 0.0060" 0.0449 0.0168
1.6 1.9 2.9 11.3
0.0087
1.3
7.7
I
77
78
I
2 Distributed Dynamic Models and Computational Fluid Dynamics Reference Solution ..... .. FS-upwind-1 FScentral-2 W S-upwind-5
I
+
2.5 1
A
2.0 1.5
v
CFISE 0 . 4
C
.=0 1.0
e c 8 0.5
c.
0.0 0 -0.5
0
Axial
M
Figure 2.12 Fluid concentration (C) profiles for different numerical schemes around z = 0.9 at t = 10 s for the single component chromatographic adsorption problem with axial dispersion (Dax= 1.0 x lo-') and square input concentration on 201 mesh points
spurious oscillation of the six schemes tested. As expected, first-order upwind (or FSupwind-1) is not accurate and second-order central (or FS-central-2)is highly oscillatory. The MOL with fifih-order WENO (or WS-upwind-5)and the CE/SE with CFL = 0.4 exhibit similar resolution in steep regions. Figure 2.13 shows the propagation of steep waves with time. Note that the fifthorder WENO scheme (circles)and the CE/SE method (solid line) have a nondissipative
0
c'
.-
0
.I-
2
.I-
C 0,
2
8
o.:i1 0
0
0.5
1
0 r^ 0 ..a-
2
.a-
C
Q
0 C
8
1.5
axial direction, z
axial direction, z
I
2
0
c-
2
-
0
1.5
.-0 2 1 C
C
CI
0 .c.
w
4-
2 0)
8
E l C
a,
2
0.5
0
1.5
0
0.5
1
axial direction, z
1.5
8
0.5
0
0.5
0
1
axial direction, z
Figure 2.13 Fluid concentration (C) propagation with time for a chro. matographic adsorption problem with axial dispersion on 201 mesh points (dashed line: first-order upwind; circles: fifth-order WENO; solid lines: CE/SE with CFL = 0.4)
2.6 Applications
I
79
feature owing to conservative discretization, since the waves do not widen with time. In contrast, the peak of the first-order upwinding scheme (dashed line) broadens continually as time increases.
2.6.2 Fixed-Bed Reactor
Recycling is often used in industrial processes to reduce the costs of raw materials and energy. The nonlinear effects of introducing recycling on a futed-bed reactor are considered in plant-wide process control and bifurcation analysis (Recke and Jmgensen 1997). This nonlinearity has the most pronounced effect around bifurcation points, i.e., points where the system solutions change stability and/or number of possible solutions. The fEed-bed reactor we consider here is a packed-bed tubular reactor with a single irreversible exothermic reaction of hydrogen’s catalytic oxidation to form water (Hansen and Jnrrgensen 1976).
The reactor is mass- and heat-integrated, which means that unconverted reactants are recycled and the reactor effluent is used to preheat the reactor feed in an external heat-exchanger, as shown in Fig. 2.14. The reaction is assumed to be first order in oxygen concentration with Arrheniustype temperature dependence. The model describing the reactor with both mass and energy recycling is given by:
I
Fresh feed
Figure 2.14 Schematic drawing of the fixed-bed reactor with mass/ energy recycles
80
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
where the ratio of mass to thermal residence time x = 1/600, the dimensionless flow rate Y ) 1.0, the axial dispersion mass Peclet number Pe, = 270, the Damkohler number Da = 0.376, the Arrhenius number y = 9.0, the axial dispersion heat Peclet number PeH = 118, the Biot number Bi = 0.5, the dimensionless surrounding temperature 0, = 0.79, and the dimensionless heat reaction Be = 0.49 are used for simulation. The variables t, 6,y and O are the dimensionless time, axial direction, oxygen concentration and temperature, respectively. The mass and energy recycling are assumed to follow first-order dynamics:
where ,z and re denote mass and energy recycle time lag constants with the units [t-'1, respectively. The above two ordinary differential equations have analFc solutions as follows: Yrec
= ~ C =-I (Yrec,o
erec =
e,=l
- YO=I,O)
e
-rmt
- (&c,o - ec=l,o)ecTet
(130) (131)
where yreC,(,and y5=l,o are the initial conditions for yrc0 and ybl and Orec,Oand OE=~,Oare those for Ore, and E = ~Tqe . ~ e q w q h tipe s hay q o v m a v t a z, = 30 and teare used in this Simulation. The Danckwert boundary conditions at the inlet point (6= 0) are expressed as:
where the dimensionless feed oxygen concentration ( y f e d ) and temperature ( Ofeed) are given as y f d = 1.0 and Ofe, = 0.8 and the mass and energy recycle ratios are assumed to be cr, and ae. The boundary conditions at the outlet point (ij= 1)are given as:
The bed is initially set to no reactant (i.e., yo = 0 for all g at t = 0). The initial bed temperature is Oo = 0.79 for all 5. The model is solved by the MOL and the noniterative CE/SE method for solution comparison. In the framework of the MOL, the convection term is discretized on uniform 201-mesh points by the first-order upwinding scheme (FS-upwind-1)and
2. G Applications
the third-order WEN0 scheme (WS-upwind-3),and the diffusion term by the central scheme (see Section 2.3.1). The boundary conditions Eqs. (132)-(134) are converted into nonlinear algebraic equations (AEs) by spatial discretization. The band Jacobian structure is broken by mass and energy recycles. The resulting system is thus a set of DAEs with a sparse Jacobian matrix. Using the noniterative CE/SE method, the two coupled PDEs are fully discretized on uniform 201-mesh points at CFL = 0.6 The resulting system has 402 linear algebraic equations at each time level for uyand u respectively.
zj
-y
+y -y
x
*
0
02
04
(FS-upmd-1) (WS-up~d-3) (WSEmethod) theta (FS-upwmd-I) theta (WS-upwmd-3) theta (WSEmethod)
06
Reactor length
(u
04
06
08
I
08
I
2
? iu-
15
n =
A-
Y?!
@ a o e 1
0
c g 2kE
gg
05
0
0
02
Reactor length ( x ) Figure 2.15 Comparison o f numerical solutions for dimensionless oxygen concentration ( y ) and temperature (0) variations with respect to the reactor length (x) at (a) t = 3 and (b) t = 3 5
I
82
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
Figure 2.15 shows the spatial distribution of dimensionless oxygen concentrations (y) and temperatures (8) at two time levels, t = 3 and t = 3.5, for each of the three numerical methods. Even though smooth fronts move with time, the solution profiles depend highly on the numerical methods used due to mass/energy recycling. A different numerical method provides a different oscillatory frequency, phase degree, and/or amplitude. In Fig. 2.16, it is shown that a steady-statesolution is reached differently depending on the numerical method used.
2
-yy (FS-upwind-1) (WS-upwmd-3) U
-9-y
1
2
3
( W S E method)
5
4
6
time (t)
2
(b) 1 5 4 4 0
y (F'S-upwind-1) UY (WS-upwind-3)
d
y (WSEmethod) theta (FS-upwind-1) x theta (WS-upwind-3) o theta (WSEmethod)
___
0
15
16
17
18
19
time (t) Figure 2.16
Comparison o f numerical solutions for dimensionless oxygen concentration ( y ) and temperature (0) variations with respect to time, (a) 1 < t < 6 and (b) 1 5 < t < 20, at the reactor outlet point (5= 1)
20
2. G Applications
Liu and Jacobsen (2004)stated that some discretization methods such as finite differences and finite elements can result in spurious bifurcation and erroneous prediction of stability. To minimize discretization error, they proposed a moving mesh method (see Section 2.5.2), i.e., an orthogonal collocation method on moving finite elements, for solving a futed-bed reactor model with energy recycling. For the futed bed reactor system, there is clearly a need for checking the approximation error of spatial and temporal derivatives as the fronts move. This would mean for the CE/SE method that an adaptive mesh method is applied. In addition, it is questionable whether the exit boundary conditions Eq. (134) are physically reasonable, especially when steep fronts are moving out of the reactor.
2.6.3 Slurry Bubble Column Reactor
Slurry bed reactors are applied increasingly in the chemical industry. The specific example selected here focuses on Fischer-Tropsch (FT) synthesis. FT synthesis technology, such as fluidized bed, multitubular futed-bed, and three-phase slurry bed, forms the heart of many natural gas conversion processes that have been developed by various companies in recent years (e.g., SASOL, Shell, Exxon, etc.). The FT reaction converts the synthesis gas (H,+CO) into a mixture of mainly long straight chain paraffins. This example concerns the Fe-based (or Co-based)catalyk slurry bed reactor, as shown in Fig. 2.17. Unconverted gas
t
I+
Slurry
Model
-
I
-
Synthesis gas Figure 2.17 Hydrodynamic model of slurry bubble column reactor (SBCR) in the heterogeneous flow regime (Van der Laan et al. 1999)
I
83
84
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
A highly exothermic reaction takes place on the Fe-based catalytic surface at high temperature (about 250°C): 1 + (1 + E)H2 + -CnH, + H2O + 165kJ/mol n CO + H 2 O ++ C 0 2 + H 2 + 41 kJ/mol
CO
(135)
where n is the average length of the hydrocarbon chain and m is the number of hydrogen atoms per carbon. Since hydrogen is considered the limiting component, balance equations can be set up for hydrogen only (De Swart et al. 2002). The complex hydrodynamics of gas bubbles are simplified by gas holdups (&&big) of large bubbles (dg,big= 20-80 mm) and those ( E ~ , ~ , , , ~of U ) small bubbles (dg,smll= 1-6 mm), as shown in Fig. 2.18. The dimensionless mass and energy balances for hydrogen are described for the three phases: 0
gas phase
ay big & g , b i g F = -
0
liquid phase
0
solid phase
(1
1f
acont
+ acont . Ybig) 2
(usg
- udf) aYbig ugo
at
The dimensionless energy balance for the slurry phase is:
The dimensionless variables are denoted ybig = CH2,g,big/ CH2g07ysmall= C H ~ , ~ , ~ ~ xU / C H ~ , ~ ~ ~ = mCH,,L/CH,,g@8 = T/T,, = h/H and z = tugO/H,where the initial hydrogen concentration CHz,go = 0.38412 kmol/m3,the distribution coefficient of hydrogen between gas and liquid phases m = 5.095, the heat-exchanger wall temperature T, = 501 K, the slurry reactor height H = 30 m, and the inlet gas velocity ugo= 0.14 m/s are preliminarily given for simulation.
2. G Applications
For the gas phase mass balance, the gas holdup of large bubbles ( ~ ~ , can b ~ be ~ ) estimated by the following relation with the gas superficial velocity (usg= 0.14 m/s), the small bubble superficial rising velocity (u& the gas density (pg 7 kg/m3 at P 40 atm and T- 500 4, and the reference gas density (& 1.3 kg/m3 at P = 1 atm and
-
-
-
The gas holdup of the small bubbles is given where the transition from the homogeneous to the churn turbulent flow regime occurs (van der Lann et al. 1999):
where the small bubble holdup in solids-freeliquid is .&= 0.27 and the solid holdup is given as q,= 0.25. The gas contraction factor is assumed to be a,,,, -0.5 in Eq. (136). The gas phase Peclet numbers for large and small bubbles are assigned to be Peg,big 100 and Peg,small= u@G/EL 80, respectively. The Stanton numbers of the large bubbles (St,b, = kL,H2,bigabigH/m/ug0 4.51) and the small bubbles (St,,,lI = kL,Hl,smallClsmallH/m/upo 24.7) are calculated as an empirical correlation proposed by Calderbank and Moo-Young (1961).The superficial velocity of small bubbles, udf, is defined as:
-
-
-
-
-
Pa . s, surface tension u= 0.019 Pa . m, liquid density 7.0 kg/m3 and gravity g = 9.81 m/s2.
= 680
kg/tn3, gas density
=
For the liquid phase mass balance, the liquid hold up is determined by: &L =
- Ep -
(&g,big
+ Eg.small(1 - &g.big))
(144)
The liquid phase Peclet number (PEL = u@H/Er) is assumed to have the same value as the small bubble Peclet number. The liquid Stanton number for large bubbles and small bubbles are defined as Stg,big = ~,~~,bipa~,i~H/u@ (- 22.97) Stg,rmall = kL,H2,srnaiias. mallH/~gO (- 125.87), respectively. The superficial slurry velocity is equal to us,= 0.01 m/s in the simulation and the average catalyst concentration fraction is = 0.25. The Damkohler number as the dimensionless pre-exponential kinetic factor is defined as:
rs
Da = AmELH/ugo
(145)
where the preexponential kinetic factor (or collision frequency factor) is A = 5.202 X 10” s-’. The Arrhenius number is given from the kinetic data (De Swart et al. 2002):
I
85
86
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
1.175 x lo5 J/mol = 28.209 8.314 J/mol/K. 501 K
E,
y=-=
RT,
For the liquid phase energy balance, the heat transfer Peclet number ugo H %ff a, (Pe, = -x 7Q)the heat transfer Stanton number ( S ~ H= EL
~
Ps cps ugo
x 7.0),
and the dimensionless heat reaction (Be = -AHRCH2’go x 0.0488) are from mpsCpsTc De Swart et al. (2002). The model described by the above set (i.e., Eqs. (136)-(140)) of partial differential equations (PDEs), which include convection, diffusion and reaction, is solved with initial conditions and Danckwert’sboundary conditions (see Eq. (3)). De Swart et al. (2002) solved the above model numerically using the MOL with finite difference method (FDM) and a BDF ODE integrator. We here use the noniterative CE/SE method (see Section 2.4.3) to solve the model. Figure 2.18 shows dynamic contours 0.4,
d C d
= -
J
0.3
’$ 6
8% 2 2 0.2
$n a
-P
C
a 0
5
35 a g
0.1
0 0
0.2
0.4
0.6
column height (5)
0.8
0.25 0.2
0.15
3
0.1
R(
0.05 0
0
1
0.2
0.4
0.6
column height (5)
0.8
250,
0
8
sf
Y
P 0
0.2
0.6 column height 0.4
(5)
0.8
1
1
I
245 240 235 230 225‘ 0
0.2
0.4
0.6
column height (5)
Figure 2.18 Unsteady-state concentration contour o f (a) large bubble gas concentration; (b) small bubble gas concentration; (c) liquid concentration; and (d) slurry temperature with respect to the reactor height, within 5 min
0.8
I
1
2.G Applications
of large bubbles, small bubbles and liquid concentrations of hydrogen, and bed temperatures along the reactor height. These profiles show how to reach steady state within 5 min. From the model-based dynamic simulation we can predict conversion ratio and temperature changes with feed composition, heat-exchanger temperature, feed flow rate, and catalyst types. The optimal operating conditions can be obtained for a given objective function (e.g., cost-benefit function) using the method of nonlinear programming (NLP, see also Section 2.4). The three-phase bubble column shows complex hydrodynamics of reactant gas bubbles at elevated pressures (e.g., 10-40 atm). Several recent publications have established the potential of computational fluid dynamics (CFD) for describing the hydrodynamics of bubble columns (Krishna and van Baten 2001). Using a commercial CFD code (CFX, AEA Tech., UK) to solve mass/momentum conservation equations in the three phases, fiishna and van Baten (2001)predict the gas holdup and the liquid velocity within a cylindrical two-dimensional reactor at different column dimensions, pressures, and superficial gas velocity. The empirical correlations of the gas holdups in Eqs. (141)and (142)can be verified or predicted for different column dimensions by the CFD simulation results (Krishna et al. 2000). In Section 2.7, we will present in detail a combination of process simulation and CFD.
2.6.4 Population Balance Equation
The population balance equation (PBE) has been demonstrated to describe the particle size distribution (PSD) in various chemical/biological engineering problems such as crystallization, polymerization, emulsion, and microbial cultures. Indeed, modeling with the PBE provides a good description for parameter identification, which may be used for determination of operating conditions, and for process design and control. In crystallization processes, the PBE, which governs the crystal size distribution (CSD), is solved together with mass/energy balances and crystallization kinetics such as nucleation, crystal growth, breakage, and agglomeration. The system, which often leads to hyperbolic-like integro-partial differential equations (IPDEs), is complex due to a lot of feedback relationships between the equations (Wey 1985). To determine the CSD, all equations (e.g., PBE, mass, and energy balances) must be solved simultaneously. An inaccurate solution of a PBE will affect particle nucleation and subsequently particle growth and results in an incorrect CSD. Therefore, a numerical procedure to obtain an accurate solution of PBEs is necessary (Lim et al. 2002). The crystal size distribution (CSD) is usually expressed as the crystal number (N, no.) or number density (n,no./m, or no./m3)with respect to the crystal size (L,m) or volume (v,m3).A simple relationship between the crystal number (N) and the crystal number density (n) is given as follows using the finite volume approach:
I
87
88
I
2 Distributed Dynamic Models and Computational Fluid Dynamics
Ni =
Li
Li+l
ndL
n;(Li+l - Li)
or Ni =
Jci
"if1
ndu x ni(ui+l - ui)
(147)
Both bases (i.e., Nand n) can give a good description of the CSD. However, the CSD based on the number (N) is often preferred, for conservation of the mass and the number of crystals, in the cases involving agglomeration and breakage kinetics (Kumar and Ramkrishna 1997). A number-based PBE as a governing equation ofthe CSD is usually described in terms of the birth of nuclei, their growth, agglomeration, and breakage:
+ (-ddtNi )breakage
dt where
Nrnesh
yoAL.
(148)
La
2. N.J'
i=l
j=i+l Lj '
yo AL .
La Li
La
. N;
Nmesh
La
j=i+l
Lj
+ 2y0A L .
yoAL. 2 . Ni - yoLq . Ni, Li
. N j - y0Lq . Ni,
i = 2 . . . (Nmesh- 1) = Nrnesh
(152)
2.G Applications
2.6.4.1 Method of Characteristics
It is well known that for the scalar linear conservation law (e.g., PBE considered here) there usually exists a unique characteristic curve along which information propagates. If the solution moves along the path line of propagation, the convection term a(GN) in the PBE disappears. Hence, numerical error and instability caused by 8L approximation of the convection term is removed. Kumar and Ramkrishna (1997) derived a modified MOC formulation for the PBE:
!Lo-($) ,
dt
nucleation
+(%)
(153a)
agglomeration
dL; dt
(153b)
- = G;
where Eq. (153b)is the mesh movement equation. The MOC formulation is numerically solved by using the MOL. To overcome the nucleation problem, a new mesh of the nuclei size (Ll) is added at given time levels. The system size can be kept constant by deleting the last mesh at the same time levels. Since the number of crystal nuclei can vary with the number of mesh points added or deleted, a proper number of added mesh points should be selected according to stiffness of nucleation. Suppose that a stiff nucleation takes place only at a minimum crystal size ( L , = as a function of time: n(t, L1) = 100
+ lo6 exp (lOP4(t
- 0.215)2)
(154)
5 L 5 2.0, the nuclei grow and the crystals aggregate as Within the size range well as break for 0.0 5 t 5 0.5. A square initial condition as seeds is also given:
n(0, L ) = 100, for 0.4 5 L 5 O h n(0, L ) = 0.01, elsewhere
(155)
The kinetic parameters are given: G = 1 (linear growth rate), = 1.5 X (constant agglomeration kernel) and y = 1.0 x L2 (breakage kernel). See Lim et al. (2002) for details. The discretized PBE based on the crystal number (N,)or the crystal density (ni) is solved by using the implicit BDF O D E integrator in the framework of the MOL. 2.6.4.2 Nucleation and Growth
When the PBE with the nucleation and growth terms is considered on the basis of the crystal density (n), its analytic solution is derived from the MOC:

$$n(t, L) = 100 + 10^6 \exp\left(-10^4\,\big((Gt - L) - 0.215\big)^2\right), \quad \text{for } 0.0 \le L \le Gt \qquad (156a)$$

$$n(t, L) = 100, \quad \text{for } 0.4 \le (L - Gt) \le 0.6 \qquad (156b)$$
Figure 2.19 CSDs for the stiff nucleation case without agglomeration and breakage (analytic solution compared with the WS-upwind-3/5A and MOC-50p results; abscissa: crystal size L)
$$n(t, L) = 0.01, \quad \text{elsewhere} \qquad (156c)$$
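Since the analytic solution of Eq. (156) is what the numerical schemes are benchmarked against below, a short Python sketch evaluating it on a size grid may be useful. It follows the reconstruction of Eqs. (156a)-(156c) given above (seed band 0.4-0.6, pulse centered at 0.215), so the constants should be treated as read from the text rather than as authoritative:

```python
import numpy as np

def csd_analytic(t, L, G=1.0):
    """CSD of Eq. (156): background, nucleation wave and translated square seed."""
    n = np.full_like(L, 0.01)                               # Eq. (156c): elsewhere
    wave = (L >= 0.0) & (L <= G * t)                        # region already reached by nuclei
    n[wave] = 100.0 + 1e6 * np.exp(-1e4 * ((G * t - L[wave]) - 0.215) ** 2)   # Eq. (156a)
    seed = ((L - G * t) >= 0.4) & ((L - G * t) <= 0.6)      # Eq. (156b): translated seed
    n[seed] = 100.0
    return n

L = np.linspace(0.0, 2.0, 201)          # 200 fixed grid intervals, as in the text
n_end = csd_analytic(0.5, L)            # analytic profile at the end time t = 0.5
```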
In the solution, a discontinuous front (due to the square seed) and a narrow wave (originating from nucleation) move along the propagation path line, L = L₁ + Gt. The numerical tests are carried out on 200 fixed grids for both the numerical MOC and the WENO schemes. In Fig. 2.19, the numerical results of the WS-upwind-3/5A (see Lim et al. (2002) for details) and MOC-50p (i.e., numerical MOC with an additional 50 mesh points) are compared to the analytic solution, Eq. (156), at the end time (t = 0.5). While moving fronts are smeared near discontinuities using the WENO schemes on the weighted stencil, the numerical MOC-50p shows quite good resolution even at the discontinuous fronts.

2.6.4.3 Nucleation, Growth, Agglomeration, and Breakage
When the agglomeration and breakage kinetics are added to the previous PBE, the analytic solution cannot be derived. The numerical solution of Eqs. (148) or (153) is obtained on 101 points of the uniform grid, using MOC-20p or WS-upwind-5. Employing the MOC-20p (inserting/deleting 20 mesh points), the following mesh equations are used:

$$\frac{dL_1}{dt} = 0, \qquad \frac{dL_i}{dt} = 1 \quad \text{for } i = 2 \ldots 101 \qquad (157)$$
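To make the procedure concrete, here is a minimal Python sketch of the modified MOC idea expressed by Eqs. (153), (154), and (157): nodes are convected with the growth rate, a node carrying the nucleation density is inserted at the minimum size, and the last node is deleted to keep the system size constant. The explicit time stepping, node counts, and per-step insertion schedule are illustrative simplifications (not the BDF/MOL setup of Lim et al. (2002)), and the agglomeration and breakage sources are omitted:

```python
import numpy as np

def moc_step(L, n, t, dt, G=1.0, L1=1e-3):
    """One step of a bare-bones modified MOC: move the mesh with the growth rate
    (Eq. (153b)), insert a node with the nucleation density of Eq. (154) at the
    minimum size, and drop the last node so the system size stays constant.
    With a size-independent G and no agglomeration/breakage, the density carried
    by each node is unchanged along its characteristic."""
    L = L + G * dt                                            # dL_i/dt = G
    n_nuc = 100.0 + 1e6 * np.exp(-1e4 * (t - 0.215) ** 2)     # stiff nucleation pulse
    L = np.insert(L, 0, L1)
    n = np.insert(n, 0, n_nuc)
    return L[:-1], n[:-1]

# square seed: density 100 for 0.4 <= L <= 0.6, 0.01 elsewhere
L = np.linspace(1e-3, 2.0, 200)
n = np.where((L >= 0.4) & (L <= 0.6), 100.0, 0.01)
t, dt = 0.0, 2.5e-3
while t < 0.5:
    L, n = moc_step(L, n, t, dt)
    t += dt
```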
In Fig. 2.20, CSD changes obtained from MOC-20p are depicted according to the kinetics used. The solid line is the analytic solution for the pure growth problem without agglomeration and breakage. Since the numerical diffusion error is small, high resolution is observed at the corners of the steep fronts. Due to the agglomeration term, the CSD spreads out and the population of large crystal sizes increases (see Fig. 2.20b).
Figure 2.20 CSD changes obtained by the MOC-20p on 102 meshes according to the growth, agglomeration, and breakage terms (solid line: analytic solution for pure growth)
In contrast, the breakage term increases the population of small crystal sizes (see Fig. 2.20c). The CSD of the PBE with all four kinetics is dispersed more broadly, as shown in Fig. 2.20d. Using the fifth-order WENO scheme (WS-upwind-5), Fig. 2.21 also shows the effects of the growth, agglomeration, and breakage terms on the CSD. Considerable numerical dissipation is found in steep regions (or discontinuities) in Fig. 2.21, as was also shown in Fig. 2.19. However, comparing Fig. 2.20d with Fig. 2.21d, the two solutions are similar due to the effects of agglomeration and breakage on the CSD. Though the modified MOC gives more accurate numerical results than the WENO scheme, there are some limitations to using it, such as the need for careful determination of adding/deleting time levels and a unique mesh velocity equation (or growth rate, see Eq. (153b)). Using spatial discretization methods (e.g., the MOL with WENO schemes) to circumvent these limitations, attention must be paid to the discretization of the growth term (convection term), which can cause much numerical error and instability in the presence of steep fronts or discontinuities.
Figure 2.21 CSD changes obtained by the WS-upwind-5 scheme on 101 meshes according to the growth, agglomeration, and breakage terms (solid line: analytic solution for pure growth)
2.6.5 Cell Population Dynamics
Cell cultures are composed of discrete microorganisms whose population dynamics play an important role in bioreactor design and control. Cell cultures are known to exhibit autonomous oscillations that affect bioreactor stability and productivity. To increase the productivity and stability, it is therefore desirable to derive a dynamic model that describes the oscillatory behavior and to develop a control strategy that allows modification of such intrinsic reactor dynamics (Henson 2003). As a model example for cell culture dynamics, consider a segregated/unstructured model based on the cell population balance equation (PBE) coupled to metabolic reactions that are relevant for the extracellular environment (Zhu et al. 2000; Mhaskar et al. 2002). The segregated/unstructured model provides a realistic description of the cell cycle events that lead to sustained oscillation in cell cultures, under the assumption that oscillations arise as a result of interactions between the cell population and the extracellular environment.
The cell population dynamics, including cell growth and cell division, is described by a partial differential equation including a convection term ∂(v_g W)/∂m, a newborn-cell birth term (∫ 2pΓW dm'), a mother-cell division death term (−ΓW), and a dilution loss (−DW):

$$\frac{\partial W(m,t)}{\partial t} = -\frac{\partial\big(v_g(S')\,W(m,t)\big)}{\partial m} + \int_m^{\infty} 2\,p(m,m')\,\Gamma(m',S')\,W(m',t)\,dm' - \big[D + \Gamma(m,S')\big]\,W(m,t) \qquad (158)$$
where W(m, t) is the cell number concentration as a function of mass (m) and time (t), v_g(S') is the overall single cell growth rate at the substrate concentration (S'), p(m, m') is the newborn-cell mass distribution function with newborn-cell mass m and mother-cell mass m', Γ(m', S') is the division intensity function, and D is the dilution rate. Detailed models and their parameters are given below on the basis of Mhaskar et al. (2002) for an S. cerevisiae (or yeast) culture. For convenience, all of the masses have the units [×10⁻¹³ g] and the cell number concentration W(m, t) has the units [×10⁻¹³ no./g]. The division intensity function, introduced to account for the probabilistic nature of cell division, is given by Eq. (159),
where m_t* is the cell transition mass, m₀ = 1 is the minimum cell mass for division, and m_d* is the division mass; ε = 5 and γ = 200 are the constant parameters that determine the transition rate and the maximum intensity value, respectively. Sustained oscillations are generated through the introduction of a synchronization mechanism in which the transition and division masses are functions of the nutrient concentration through the saturation functions of Eqs. (160a) and (160b), where the substrate concentration is S' = G' + E' and the constants are given as S_l = 0.1 g/l, m_t0 = 4.55, m_d0 = 10.75, K_t = 0.01 l⁻¹, and K_d = 3.83 l⁻¹.
The newborn cell probability function p(m, m') has the form:

$$p(m, m') = \begin{cases} \alpha\, e^{-\beta\,(m - m_t^*)^2} + \alpha\, e^{-\beta\,(m - m' + m_t^*)^2}, & m' > m \text{ and } m' > m_t^* \\ 0, & \text{elsewhere} \end{cases} \qquad (161)$$
Here the constant β = 40, and α is set as given in Mhaskar et al. (2002). This function yields two Gaussian peaks in the cell number distribution, one centered at m_t* corresponding to mother cells and one centered at m' − m_t* corresponding to daughter cells. Oscillatory yeast dynamics are observed in glucose-limited growth environments. Under such conditions, both glucose and the excreted product ethanol can serve as substrates for cell growth. The relevant metabolic pathways are glucose fermentation, glucose oxidation, and ethanol oxidation, with the corresponding reaction sequence and consumption-rate expressions given by Eqs. (162) and (163),
where G' and E' represent intracellular glucose and ethanol concentrations, respectively, and O is the dissolved oxygen concentration. h_gf = 30 (×10⁻¹³ g/h), h_go = 3.25 (×10⁻¹³ g/h), and h_eo = 7 (×10⁻¹³ g/h) are maximum consumption rates; K_mgf = 40 g/l, K_mgo = 2 g/l, K_mgd = 0.001 g/l, K_meo = 1.3 g/l, and K_med = 0.001 g/l are saturation constants; and K_inhibit = 0.4 g/l is a constant that characterizes the inhibitory effect of glucose on ethanol oxidation. The overall single cell growth rate v_g(S') is the sum of the growth rates due to the three metabolic reactions:

$$v_g(S') = K_{gf}(G') + K_{go}(G', O) + K_{eo}(G', E', O) \qquad (164)$$
For the intracellular glucose and ethanol concentrations (G', E') and the liquid oxygen concentration (O) in Eq. (163), the mass balance equations of these substrates are:

$$\frac{dG'}{dt} = k_g\,(G - G') \qquad (165)$$

$$\frac{dE'}{dt} = k_e\,(E - E') \qquad (166)$$

$$\frac{dO}{dt} = k_o a\,(O^* - O) - \left(\frac{192}{180}\,\frac{K_{go}(G')}{Y_{go}} + \frac{96}{46}\,\frac{K_{eo}(E')}{Y_{eo}}\right) N_{total} \qquad (167)$$
where G and E are extracellular concentrations, k_g and k_e = 20 h⁻¹ are glucose and ethanol uptake rates, respectively, k_o a = 1500 h⁻¹ is the oxygen mass transfer rate, and Y_go = 0.65 g/g and Y_eo = 0.5 g/g are the yield coefficients in the glucose oxidation and ethanol oxidation reactions, respectively. The total cell number (N_total) of microorganisms is defined by

$$N_{total} = \int W(m, t)\,dm \qquad (168)$$
The saturation oxygen concentration, O*, is obtained from the oxygen solubility, which is assumed to be governed by Henry's law: O* = H_o (RT/M_w,O₂) O_out, with the Henry's rate constant (H_o = 0.0404 g/l/atm), the gas constant (R = 0.082057 l·atm/mol/K), the absolute temperature (T = 298 K), the molecular weight of O₂ (M_w,O₂ = 32), and the oxygen concentration in the gas exhaust stream (O_out). The gas phase oxygen balance is:

$$V_g\,\frac{dO_{out}}{dt} = F\,(O_{in} - O_{out}) - k_o a\,(O^* - O)\,V_l \qquad (169)$$
where V_g = 0.9 l and V_l = 0.1 l are the gas phase and liquid phase volumes, respectively, F = 90 l/h is the volumetric air-feed flow rate, and O_in = 0.275 g/l (= 0.21 atm) is the oxygen concentration in the air-feed stream. For the extracellular glucose and ethanol concentrations (G and E), the substrate mass balance equations are given by Eqs. (170) and (171),
where D = 0.18 h⁻¹ is the dilution rate, G_f = 30 g/l and E_f = 0 g/l are the feed glucose and ethanol concentrations, respectively, and Y_gf = 0.15 g/g is the yield coefficient in the glucose fermentation reaction. The total cell number for microorganisms related to ethanol excretion, which appears in Eq. (171), is the f(m)-weighted integral of W(m, t) over mass (Eq. (172)).
Experimental data suggest that key products, such as ethanol, are excreted primarily by budding cells. This behavior is modeled by the function f(m) of Eq. (173), where γ_e = 1.25, ε_e = 15, and m_e = 1.54 are constant parameters.
The liquid phase carbon dioxide balance (C) is given by Eq. (174), where k_c a = 1500 h⁻¹ is the CO₂ mass transfer rate and C* is the saturation CO₂ concentration modeled by C* = H_c (RT/M_w,CO₂) C_out, with the Henry's rate constant (H_c = 1.48 g/l/atm at pH = 5.0), the molecular weight of CO₂ (M_w,CO₂ = 44), and the CO₂ concentration in the gas exhaust stream (C_out). The gas phase CO₂ balance is:
$$V_g\,\frac{dC_{out}}{dt} = F\,(C_{in} - C_{out}) - k_c a\,(C^* - C)\,V_l \qquad (175)$$
where C_in = 0.00054 g/l (= 0.0003 atm) is the carbon dioxide concentration in the air-feed stream. In summary, this cell PBE model is described by a PDAE system containing one PDE for the cell population (W(m, t)) and eight ODEs for the eight substrate variables (G, E, G', E', O, C, O_out, and C_out). The single cell growth rate (v_g(G', E', O)) is computed in Eq. (164). The initial condition of W(m, t) is set to W(m, 0) = 0.5 e^(−5(m−6)²), 1 ≤ m ≤ 11. The boundary condition of W(m, t) is also given as W(11, t) = 0 for 0 ≤ t ≤ 6 h. For the eight substrate concentrations, the initial values are G' = G = 0.8, E = 0.01, E' = 0.0001, O = 0.008, C = C_out = 0.003, and O_out = 0.275. For the solution of cell PBE models, Zhu et al. (2000) and Mhaskar et al. (2002) used the orthogonal collocation FEM (Finlayson 1980). Motz et al. (2002) reported that the CE/SE method gives better performance in terms of accuracy and computational time than a flux-limited finite volume method. Mantzaris et al. (2001a,b) compared several numerical methods such as finite difference methods (FDM) with explicit/implicit time integration and finite element methods (FEM) with explicit/implicit time integration, where they suggested that (i) the time-explicit scheme (e.g., Runge-Kutta method) is better for computational efficiency than the implicit time-integration scheme (e.g., BDF-types) and (ii) the finite difference method is preferred to the finite element method for multidimensional PBEs due to computational efficiency. Fortunately, this model can be solved by the modified MOC (see Eq. (153)), since there is a unique growth rate, Eq. (164). The integration terms appearing in Eqs. (158), (168), and (172) are simply evaluated by the trapezoidal rule:

$$\int g(m)\,dm \approx \sum_{i=1}^{N_{mesh}-1} \frac{g(m_i) + g(m_{i+1})}{2}\,(m_{i+1} - m_i) \qquad (176)$$

where N_mesh is the number of mesh points.
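As a small illustration, the trapezoidal rule of Eq. (176) is a one-line operation on the (possibly nonuniform) mass mesh; the mesh and profile values below are only placeholders:

```python
import numpy as np

def trapz_mesh(g, m):
    """Trapezoidal rule of Eq. (176) on the mass mesh m_1 ... m_Nmesh."""
    return np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(m))

m = np.linspace(1.0, 11.0, 82)              # mass mesh (x1e-13 g), 82 points as in the text
W = 0.5 * np.exp(-5.0 * (m - 6.0) ** 2)     # initial cell number concentration profile
N_total = trapz_mesh(W, m)                  # Eq. (168): total cell number
```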
When the numerical MOC is used in the framework of the MOL, it is not easy to provide the full Jacobian matrix by hand-coding due to the strong nonlinearity, and it would be prohibitive to evaluate the Jacobian numerically because of the large system size (i.e., the number of equations is (2 × N_mesh + 8), and the Jacobian matrix has (2 × N_mesh + 8)² entries). The automatic differentiation technique is appreciated in this case for accuracy and computational efficiency. Figure 2.22 shows the dynamics of the cell population number density (W(m, t)) solved by the numerical MOC on 82 mesh points (N_mesh), where 135 mesh points are added at m = 1 and also deleted at m = 11. Due to cell division, the cell number density of small sizes tends to increase with time, and the oscillatory behavior of the cell number density is regularized after about t = 6 h. The oscillatory behaviors of the cell number (N_total) in Eq. (168) and the cell mass (= Σ_{i=1}^{N_mesh−1} N_i (m_i + m_{i+1})/2) are shown in Figs. 2.23a and b. The cell number variation affects the extracellular glucose/ethanol (G and E) and the oxygen/carbon dioxide (O_out and C_out) concentrations in the gas exhaust stream (see Figs. 2.23c-f, respectively). As ethanol is excreted primarily by budding cells (see Eq. (173)), it is shown that the extracellular ethanol concentration (E) slowly reaches a regular oscillatory state in Fig. 2.23d. Figures 2.23g,h depict the dynamics of the consumed oxygen concentration ratio and the evolved carbon dioxide concentration ratio (both in %), respectively.
Figure 2.22 Distribution of cell population concentration W(m, t) over mass and time
Figure 2.23 Oscillatory behaviors in time of (a) cell number; (b) cell mass; (c) extracellular glucose (G); (d) extracellular ethanol (E); (e) oxygen in exhaust gas stream (O_out); (f) carbon dioxide in exhaust gas stream (C_out); (g) evolved oxygen ratio; and (h) evolved carbon dioxide ratio
All of the parameters used for this simulation need to be adjusted to experimental data, as shown in Mhaskar et al. (2002).
2.7 Process Model and Computational Fluid Dynamics
A multiscale model is a composite mathematical model formed by combining partial models that describe phenomena at different characteristic length and time scales. For example, modeling of a packed-bed catalytic reactor involves microscale chemical kinetics at the active sites on the catalyst, mesoscale transport processes through the pores of the catalyst pellets, and macroscale flow and heat exchange at the reactor vessel level. Computational tools such as molecular dynamics (MD), computational fluid dynamics (CFD), and process simulation have been used to help fill particular
length- and timescale gaps. In general, despite their obvious connection, phenomena at different characteristic scales have usually been studied in isolation (see Section 2.5). One of the key challenges facing process modeling today is the need to describe complex interactions between hydrodynamics and the other physical/chemical phenomena. This is particularly important in the case of complex systems (e.g., polymerization, crystallization, and agitated bioreactors) in which the constitutive phenomena interact with mixing and fluid flow behavior.

Process simulation tools, which play an increasingly central role within most process engineering activities, are able to represent (i) multicomponent, multiphase, and reactive systems, (ii) individual unit operations, multiple interconnected units, or entire plants, and (iii) thermodynamic properties. However, most of the models used by process simulation tools either ignore spatial variations of properties within each unit operation (invoking the well-mixed tank assumption) or are limited to simple idealized geometries. Moreover, the treatment of fluid mechanics is usually quite rudimentary (Bezzo et al. 2003).

CFD techniques solve fundamental mass, momentum, and energy conservation equations (e.g., the Navier-Stokes equations) in complex three-dimensional geometries. From CFD simulation, some valuable information for process simulation (e.g., mass flow rate, heat transfer coefficient, velocity, etc.) can be obtained. However, CFD's ability is still limited in application to complex reactive systems and multiphase processes with multicomponent phase equilibria. Furthermore, performing realistic dynamic simulation often requires excessive computational time. In view of the above, CFD and process simulation technologies are highly complementary (Bezzo et al. 2000). The combination of process simulation and CFD can therefore lead to significant advantages in accurate modeling of processes.
2.7.1 Computational Fluid Dynamics
The CFD technique has focused on the solution of PDEs representing conservation equations describing fluid flow over domains of often complex geometry. There are several commercial CFD packages such as Fluent (Fluent, Inc.), CFX (AEA Tech., Harwell, UK) and FemLab (COMSOL). The CFD packages usually comprise three distinct elements, namely preprocessing (geometry specification, model selection, parameter specification, and grid generation), the numerical solution procedure, and post-processing (visualization and data treatment). In the solution procedure, mass/momentum/energy conservation equations are solved within the specified geometry. Generic conservation equations may be described by PDEs with advection, viscosity/diffusivity and source terms:
$$\frac{\partial \phi}{\partial t} + \frac{\partial}{\partial x}\left(u\,\phi - \Gamma_\phi\,\frac{\partial \phi}{\partial x}\right) - s_\phi = 0 \qquad (177)$$
where φ is a conserved quantity such as mass, energy or momentum, Γ_φ the viscosity or diffusivity, s_φ the source or sink, and x the set of spatial dimension variables. The models are called the compressible (or incompressible) Navier-Stokes equations. When the viscosity/diffusivity terms can be neglected due to their relatively small influence on the result, the inviscid Euler equation is obtained. Let ρ, u, p, and e be the mass density, velocity, pressure, and energy per unit volume, respectively. The inviscid Euler equation of a perfect gas can be expressed as:
$$\frac{\partial u}{\partial t} + \frac{\partial f(u)}{\partial x} = 0 \qquad (178)$$

where u = (ρ, ρu, e)ᵀ and f = (ρu, p + ρu², u(e + p))ᵀ. We may write e = ρe_internal + ½ρu², where e_internal is the internal energy per unit mass. Therefore, this equation is a three-dimensional hyperbolic PDE. Most CFD packages do not use the MOL approach (see Section 2.3) of reducing PDEs into ODEs in time. Instead, they choose to discretize both the temporal and the spatial dimensions (i.e., fully discrete methods, see Section 2.4), thereby reducing the PDEs into a set of nonlinear algebraic equations (Oh 1995).

The process simulation models mentioned in Section 2.6 represent specific and simplified conservation equations related to the macroscopic process level. Here, complex fluid dynamics is lumped into parameters or simple empirical equations (e.g., the three-phase bubble column model in Section 2.6.3). Microscopic chemical reactions are simplified by kinetic equations as a function of temperature, pressure, and concentrations. A full CFD simulation including complex reactions, thermodynamics, population dynamics and hydrodynamics would be practically infeasible because of the high computing load and the lack of existing tools. To effectively take into account the interactions between hydrodynamics and the other physical/chemical phenomena, a hybrid approach, namely multizonal/CFD simulation (Bauer and Eigenberger 1999, 2001; Bezzo et al. 2003), has been proposed (see Fig. 2.24). Several zones, assumed to be well-mixed compartments, are described by process models (e.g., AEs, ODEs, PDEs or PBEs) with the exception of the fluid-flow ones, and the CFD simulation provides each zone with the mass flow rate at the interzonal interfaces and additional fluid dynamical properties such as the mass transfer coefficient and the turbulent energy dissipation rate (Bezzo et al. 2003).
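Stepping back to the fully discrete methods mentioned above, the following Python sketch advances the 1-D inviscid Euler equations with a first-order Lax-Friedrichs update. This deliberately simple scheme is meant only to show how both time and space are discretized at once, turning the PDE into algebraic updates; commercial CFD codes use far more elaborate discretizations and geometries:

```python
import numpy as np

def euler_flux(U, gamma=1.4):
    """Flux f(U) of the 1-D inviscid Euler equations, with U = (rho, rho*u, e)."""
    rho, mom, e = U
    u = mom / rho
    p = (gamma - 1.0) * (e - 0.5 * rho * u**2)      # perfect-gas closure
    return np.array([mom, mom * u + p, u * (e + p)])

def lax_friedrichs_step(U, dx, dt):
    """One fully discrete update; boundary cells are left untouched for brevity."""
    F = np.stack([euler_flux(U[:, j]) for j in range(U.shape[1])], axis=1)
    Un = U.copy()
    Un[:, 1:-1] = (0.5 * (U[:, :-2] + U[:, 2:])
                   - dt / (2.0 * dx) * (F[:, 2:] - F[:, :-2]))
    return Un
```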
2.7.2 Combination of CFD and Process Simulation
Figure 2.24 shows the structure of the general multizonal/CFD model. The spatial domain of interest is divided into several zones (z1-z5). Each single zone (z) is considered to be well mixed and homogeneous. Two zones can interact with each other via an interface that connects a port (p) of one zone with a port of the other. The flow of material and/or energy across each interface is assumed to be bidirectional. The transient behavior of a zone is described by a set of algebraic equations (AEs), ordinary differential equations (ODEs), partial differential equations (PDEs) or population balance equations (PBEs).
22.7 Process Model and Computational Fluid Dynamics
Multizonal model (all phenomena except fluid dynamics)
physical properties
I
1
I
1
I
I
l
l
I
r
l
l
l
l
l
l
l
CFD model (total mass and momentum conservation only) Figure 2.24 al. 2003)
l
l
1
Structure of the general multizonal/CFD model (Bezzo et
The multizonal model uses detailed dynamic modeling of all relevant physical phenomena, with the exception of fluid flow, over a physical domain divided into a relatively small number of zones. Mixing parameters and interzonal mass flow rates are determined by solving a detailed CFD model over the same physical domain. The CFD model focuses solely on fluid-flow prediction, trying to do this as accurately as possible by dividing the space into a relatively large number of cells and solving the total mass and momentum conservation equations. Thus, the CFD model does not attempt to characterize intensive properties such as composition, temperature or particle size distribution. The transient behavior is ignored, based on the assumption that fluid-flow phenomena operate on a much shorter time scale than all other phenomena. The solution of the CFD model will require knowledge of the distribution of physical properties (e.g., viscosity, density, compressibility factor, etc.) throughout the physical domain of interest. These properties are usually a function of the system intensive properties and are computed within the multizonal model. The hybrid model is formed by the coupling of the multizonal model with the CFD model, both representing the same spatial domain. The mapping between zones and cells is achieved by means of appropriate disaggregation and aggregation procedures (Bezzo et al. 2003). The multizonal concept and CFD simulation have been applied to bubble column reactors (Bauer and Eigenberger 1999, 2001) and to a bioreactor processing and mixing a highly viscous fluid (Bezzo et al. 2003).
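The zone-cell mapping can be pictured with a very small sketch in which CFD cell quantities are aggregated into volume-weighted zone values that the multizonal model can use. This is only a schematic of the aggregation step; the actual disaggregation/aggregation procedures of Bezzo et al. (2003) are considerably more involved, and all names and values below are hypothetical:

```python
import numpy as np

def aggregate_to_zones(cell_values, cell_volumes, cell_zone):
    """Volume-weighted average of a CFD cell field over each zone."""
    zones = np.unique(cell_zone)
    return {z: np.average(cell_values[cell_zone == z],
                          weights=cell_volumes[cell_zone == z])
            for z in zones}

# hypothetical CFD field: 1000 cells assigned to 5 zones
rng = np.random.default_rng(1)
eps = rng.random(1000)                      # e.g., turbulent energy dissipation rate per cell
vol = np.full(1000, 1e-6)                   # cell volumes
zone_id = rng.integers(0, 5, 1000)          # zone index of each cell
zone_eps = aggregate_to_zones(eps, vol, zone_id)
```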
2.8 Discussion and Conclusion
In many chemical and biotechnical processes, partial derivatives result as a consequence of the dynamic behavior of mass, energy and momentum in space. The derivative or algebraic terms describe fluid flow, physical phenomena and/or constitutive relations in the different phases. The description of convective and diffusive (dispersive) fluxes introduces first and second order spatial derivatives. Mass exchanges between the fluid and stationary phases (e.g., reactions and adsorptions) are described by time-dependent differential equations. Equilibrium relations and physical properties are described with algebraic equations (AEs). Thus process models are in general represented as partial differential equations (PDEs) coupled with algebraic equations (AEs), i.e., PDAEs with pertinent initial and boundary conditions.

In this chapter, a large class of one-dimensional PDAE models is presented in Section 2.6 to introduce the most frequently applied numerical methods for their solution (Sections 2.3-2.5). Finally, the complementary relations between process simulation and computational fluid dynamics, most often employed to solve models with a fully developed flow field, are demonstrated (Section 2.7).

A very important element for a correct discretization is to ensure that the discretized formulation (or numerical approximation) indeed converges to the continuous formulation (or physical model) as the discretization (or spatial and/or temporal stepsize) is refined. However, this issue contains several subtleties and is not dealt with in detail in this chapter. To improve the accuracy and efficiency of numerical solutions, it is desirable to select appropriate numerical methods according to the physical models considered. The method of lines (MOL), which includes time integration and spatial discretization, is adequate to solve stiff problems such as diffusion-dominated models and fast reaction problems (Section 2.3). In the presence of steep moving fronts, the MOL incorporating adaptive and moving mesh methods to capture large spatial variations provides efficient solutions (Section 2.5). When there is a unique solution propagation path line, the method of characteristics (MOC) can be formulated. The MOC formulation can be solved in the framework of the MOL (Section 2.6.4). Since the solution moves along the propagation path line, the numerical error and instability caused by approximation of the convection term are avoided. In solving convection-dominated models, the conservation element and solution element (CE/SE) method is appreciated for accuracy and efficiency due to its finite volume approach and explicit time integration (Section 2.4.3). The CE/SE method possesses low numerical dissipation error, but a fine time stepsize is needed for stiff models.

Most CFD packages use fully discrete methods (Section 2.4) rather than the MOL (Section 2.3) for solving conservation laws for fluid flow within an often complex multidimensional geometry. The combination of process simulation and CFD is useful to describe complex interactions between fluid hydrodynamics and other physical/chemical phenomena (Section 2.7).
Although many efficient methods have emerged for solving PDEs/PDAEs, several challenges remain. The numerical solution of PDEs with stiff nonlinear source terms is one of the currently active research areas (Ahmad and Berzins 2001; Hyman et al. 2003), where mesh refinement by an efficient control of space-time errors is needed. Simulation methods for processes considerably influenced by fluid hydrodynamics should be improved by combining CFD technologies. Hybrid dynamic systems that exhibit coupled continuous and discrete behaviors have also attracted much attention (Mao and Petzold 2002). When a reactant within a phase of a spatially distributed reactor disappears and appears, special considerations are required for the hybrid system to be properly handled by the numerical methods.
References

1 Ahmad I., Berzins M. MOL solvers for hyperbolic PDEs with source terms. Mathematics and Computers in Simulation 56 (2001) p. 115-125
2 Ascher U. M., Petzold L. R. Computer Methods for ODEs and DAEs. SIAM, Philadelphia 1998
3 Ayasoufi A., Keith T. G. Application of the conservation element and solution element method in numerical modeling of heat conduction with melting and/or freezing. 13(4) (2003) p. 448-471
4 Bauer M., Eigenberger G. A concept for multi-scale modeling of bubble columns and loop reactors. Chem. Eng. Sci. 54 (1999) p. 5109-5117
5 Bauer M., Eigenberger G. Multiscale modeling of hydrodynamics, mass transfer and reaction in bubble column reactors. Chem. Eng. Sci. 56 (2001) p. 1067-1074
6 Berger M. J., Oliger J. Adaptive mesh refinement for hyperbolic partial differential equations. J. Comput. Phys. 53 (1984) p. 484-512
7 Berger M. J., LeVeque R. J. Adaptive mesh refinement using wave-propagation algorithms for hyperbolic systems. SIAM J. Numer. Anal. 35(6) (1998) p. 2298-2316
8 Bezzo F., Macchietto S., Pantelides C. C. A general framework for the integration of computational fluid dynamics and process simulation. Computers and Chemical Engineering 24 (2000) p. 653-658
9 Bezzo F., Macchietto S., Pantelides C. C. General hybrid multizonal/CFD approach for bioreactor modeling. AIChE J. 49(8) (2003) p. 2133-2148
10 Bischof C., Hovland P. Using ADIFOR to compute dense and sparse Jacobians. Technical Memorandum ANL/MCS-TM-159, Mathematics and Computer Science Division, Argonne National Laboratory, USA 1991
11 Bischof C., Carle A., Corliss G., Griewank A., Hovland P. ADIFOR: Generating derivative codes from Fortran programs. Scientific Programming 1(1) (1992) p. 11-29
12 Bischof C., Carle A., Khademi P., Mauer A., Hovland P. ADIFOR 2.0 User's Guide. Technical Report ANL/MCS-TM-192, Mathematics and Computer Science Division, Argonne National Laboratory, USA 1998
13 Calderbank P. H., Moo-Young M. B. The continuous phase heat and mass-transfer properties of dispersions. Chem. Eng. Sci. 16 (1961) p. 39-54
14 Chang S. C. The method of space-time conservation element and solution element - a new approach for solving the Navier-Stokes and Euler equations. J. Comput. Phys. 119 (1995) p. 295-324
15 Chang S. C. Courant number insensitive CE/SE schemes. 38th AIAA Joint Propulsion Conference, AIAA-2002-3890, Indianapolis, USA 2002
16 Chang S. C., Wang X. Y., To W. M. Application of the space-time conservation element and solution element method to one-dimensional convection-diffusion problems. J. Comput. Phys. 165 (2000) p. 189-215
17 Chang S. C., Wang X. Y., Chow C. Y. The space-time conservation element and solution element method: a new high-resolution and genuinely multidimensional paradigm for solving conservation laws. J. Comput. Phys. 156 (1999) p. 89-136
18 De Swart J. W. A., Krishna R. Simulation of the transient and steady-state behavior of a bubble column slurry reactor for Fischer-Tropsch synthesis. Chem. Eng. Processing 41 (2002) p. 35-47
19 Dorfi E. A., Drury L. O'C. Simple adaptive grids for 1-D initial value problems. J. Comput. Phys. 69 (1987) p. 175-195
20 DuFort E. C., Frankel S. P. Stability conditions in the numerical treatment of parabolic differential equations. Mathematical Tables and Other Aids to Computation 7 (1953) p. 135-152
21 Finlayson B. A. Nonlinear Analysis in Chemical Engineering. McGraw-Hill, New York 1980
22 Furzeland R. M., Verwer J. G., Zegeling P. A. A numerical study of three moving-grid methods for one-dimensional partial differential equations which are based on the method of lines. J. Comput. Phys. 89 (1990) p. 349-388
23 Hansen K., Jørgensen S. B. Dynamic modeling of a gas phase catalytic fixed-bed reactor I-III. Chemical Engineering Science 31 (1976) p. 473-479, 579-598
24 Harten A., Engquist B., Osher S., Chakravarthy S. Uniformly high order essentially non-oscillatory schemes III. J. Comput. Phys. 71 (1987) p. 231-303
25 Henson M. A. Dynamic modeling of microbial cell populations. Current Opinion in Biotechnology 14 (2003) p. 460-467
26 Heydweiller J. C., Sincovec R. F., Fan L. T. Dynamic simulation of chemical processes described by distributed and lumped parameter models. Comp. Chem. Eng. 1 (1977) p. 125-131
27 Hill P. J., Ng K. M. New discretization procedure for the breakage equation. AIChE J. 41(5) (1995) p. 1204-1216
28 Hindmarsh A. C. LSODE and LSODI: two new initial value ordinary differential equation solvers. ACM SIGNUM Newsletter 15 (1980) p. 19-21
29 Hoffman J. D. Numerical Methods for Engineers and Scientists. McGraw-Hill, Mechanical Engineering Series, Part III: Partial Differential Equations (1993) p. 371-774
30 Huang W., Russell R. D. Analysis of moving mesh partial differential equations with spatial smoothing. SIAM J. Numer. Anal. 34 (1997) p. 1106-1126
31 Huang W., Russell R. D. Moving mesh strategy based on a gradient flow equation for two-dimensional problems. SIAM J. Sci. Comput. 20(3) (1999) p. 998-1015
32 Hyman J. M., Li S., Petzold L. R. An adaptive moving mesh method with static rezoning for partial differential equations. Computers and Mathematics with Applications 46 (2003) p. 1511-1524
33 Jiang G., Shu C. W. Efficient implementation of weighted ENO schemes. J. Comput. Phys. 126 (1996) p. 202-228
34 Kaczmarski K., Mazzotti M., Storti G., Morbidelli M. Modeling fixed-bed adsorption columns through orthogonal collocations on moving finite elements. Comp. Chem. Eng. 21 (1997) p. 641-660
35 Köhler R., Gerstlauer A., Zeitz M. Symbolic preprocessing for simulation of PDE models of chemical processes. Mathematics and Computers in Simulation 56 (2001) p. 157-170
36 Krishna R., van Baten J. M. Eulerian simulations of bubble columns operating at elevated pressures in the churn turbulent flow regime. Chemical Engineering Science 56 (2001) p. 6249-6258
37 Krishna R., van Baten J. M., Urseanu M. I. Three-phase Eulerian simulations of bubble column reactors operating in the churn-turbulent regime: a scale up strategy. Chemical Engineering Science 55 (2000) p. 3275-3286
38 Kumar S., Ramkrishna D. On the solution of population balance equations by discretization - III. Nucleation, growth and aggregation of particles. Chem. Eng. Sci. 52(24) (1997) p. 4659-4679
39 Lax P. D., Wendroff B. Systems of conservation laws. Comm. Pure Appl. Math. 13 (1960) p. 217-237
40 LeVeque R. J. Finite Difference Methods for PDEs. Lecture notes, Department of Mathematics, University of Washington 1998
41 Li S. Adaptive Mesh Methods and Software for Time Dependent PDEs. PhD thesis, University of Minnesota 1998
42 Li S., Petzold L. Moving mesh methods with upwinding schemes for time-dependent PDEs. J. Comput. Phys. 131 (1997) p. 368-377
43 Lim Y. I., Jørgensen S. B. A fast and accurate numerical method for solving simulated moving bed (SMB) chromatographic separation problems. Chem. Eng. Sci. 59(10) (2004) p. 1931-1947
44 Lim Y. I., Chang C. S., Jørgensen S. B. A novel partial differential algebraic equation (PDAE) solver: iterative conservation element/solution element (CE/SE) method. Comput. Chem. Eng. 28(8) (2004) p. 1309-1324
45 Lim Y. I., Le Lann J. M., Joulia X. Accuracy, temporal performance and stability comparisons of discretization methods for the solution of partial differential equations (PDEs) in the presence of steep moving fronts. Comp. Chem. Eng. 25 (2001a) p. 1483-1492
46 Lim Y. I., Le Lann J. M., Joulia X. Moving mesh generation for tracking a shock or steep moving front. Comp. Chem. Eng. 25 (2001b) p. 653-663
47 Lim Y. I., Le Lann J. M., Meyer X. M., Joulia X., Lee G. B., Yoon E. S. On the solution of population balance equations (PBE) with accurate front tracking methods in practical crystallization processes. Chem. Eng. Sci. 57 (2002) p. 177-194
48 Liu Y., Jacobsen E. W. On the use of reduced order models in bifurcation analysis of distributed parameter systems. Computers and Chemical Engineering 28 (2004) p. 161-169
49 MacCormack R. W. The effect of viscosity in hypervelocity impact cratering. AIAA Paper 69-354 (1969)
50 Mackenzie J. A., Robertson M. L. The numerical solution of one-dimensional phase change problems using an adaptive moving mesh method. J. Comput. Phys. 161(2) (2000) p. 537-557
51 Mantzaris N. V., Daoutidis P., Srienc F. Numerical solution of multi-variable cell population balance models: I. Finite difference methods. Computers and Chemical Engineering 25 (2001a) p. 1411-1440
52 Mantzaris N. V., Daoutidis P., Srienc F. Numerical solution of multi-variable cell population balance models: III. Finite element methods. Computers and Chemical Engineering 25 (2001b) p. 1463-1481
53 Mao G., Petzold L. R. Efficient integration over discontinuities for differential-algebraic systems. Computers and Mathematics with Applications 43 (2002) p. 65-79
54 Mhaskar P., Hjortsø M. A., Henson M. A. Cell population modeling and parameter estimation for continuous cultures of S. cerevisiae. Biotechnol. Prog. 18 (2002) p. 1010-1026
55 Miller K., Miller R. N. Moving finite elements I. SIAM J. Numer. Anal. 18 (1981) p. 1019-1032
56 Molls T., Molls F. Space-time conservation method applied to Saint Venant equations. J. Hydraulic Eng. 124(5) (1998) p. 501-508
57 Motz S., Mitrovic A., Gilles E.-D. Comparison of numerical methods for the simulation of dispersed phase systems. Chem. Eng. Sci. 57 (2002) p. 4329-4344
58 Oh M. Modeling and Simulation of Combined Lumped and Distributed Processes. PhD thesis, University of London 1995
59 Petzold L. R. A description of DASSL: a differential/algebraic system solver. In Scientific Computing, eds. R. S. Stepleman et al., North-Holland, Amsterdam (1983) p. 65-68
60 Poulain C. A., Finlayson B. A. A comparison of numerical methods applied to nonlinear adsorption columns. Int. J. Num. Methods Fluids 17(10) (1993) p. 839-859
61 Powell M. J. D. On the convergence of the variable metric algorithm. J. Inst. Math. Appl. 7 (1971) p. 21-36
62 Recke B., Jørgensen S. B. Nonlinear dynamics and control of a recycle fixed bed reactor. Proceedings of the 1997 American Control Conference, vol. 4 (1997) p. 2218-2222
63 Sargousse A., Le Lann J. M., Joulia X., Jourda L. DISCo: un nouvel environnement de simulation orienté objet. Proceedings of MOSIM 1999, Nancy, France (1999) p. 61-66
64 Saucez P., Schiesser W. E., Vande Wouwer A. Upwinding in the method of lines. Mathematics and Computers in Simulation 56 (2001) p. 171-185
65 Schiesser W. E. The Numerical Method of Lines - Integration of Partial Differential Equations. Academic Press, New York 1991
66 Shi J., Hu C., Shu C. W. A technique of treating negative weights in WENO schemes. Brown University Scientific Computing Report Series BrownSC-2000-15 (2000), USA
67 Shu C. W., Osher S. Efficient implementation of essentially non-oscillatory shock-capturing schemes. J. Comput. Phys. 77 (1988) p. 439-471
68 Shu C. W., Osher S. Efficient implementation of essentially non-oscillatory shock-capturing schemes II. J. Comput. Phys. 83 (1989) p. 32-78
69 Shu C. W. Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws. ICASE Report No. 97-65 (1997)
70 Stockie J. M., Mackenzie J. A., Russell R. D. A moving mesh method for one-dimensional hyperbolic conservation laws. SIAM J. Sci. Comput. 22(5) (2001) p. 1791-1813
71 Van der Laan G. P., Beenackers A., Krishna R. Multicomponent reaction engineering model for Fe-catalyzed Fischer-Tropsch synthesis in commercial scale slurry bubble column reactors. Chem. Eng. Sci. 54 (1999) p. 5013-5019
72 Vande Wouwer A., Saucez P., Schiesser W. E. Some user-oriented comparisons of adaptive grid methods for partial differential equations in one space dimension. App. Num. Math. 26 (1998) p. 49-62
73 Villadsen J., Michelsen M. L. Solution of Differential Equation Models by Polynomial Approximation. Prentice-Hall, Englewood Cliffs, New Jersey 1978
74 Wey J. S. Analysis of batch crystallization processes. Chem. Eng. Commun. 35 (1985) p. 231-252
75 Wu J. C., Fan L. T., Erickson L. E. Three-point backward finite-difference method for solving a system of mixed hyperbolic-parabolic partial differential equations. Comp. Chem. Eng. 14 (1990) p. 679-685
76 Yu S. T., Chang S. C. Treatment of stiff source terms in conservation laws by the method of space-time CE/SE. AIAA 97-0435, 35th Aerospace Sciences Meeting, Reno, USA 1997
77 Zhu G.-Y., Zamamiri A., Henson M. A., Hjortsø M. A. Model predictive control of continuous yeast bioreactors using cell population balance models. Chem. Eng. Sci. 55 (2000) p. 6155-6167
3 Molecular Modeling for Physical Property Prediction
Vincent Gerbaud and Xavier Joulia
3.1 Introduction
Multiscale modeling is becoming the standard approach for process study in a broader framework that promotes computer-aided integrated product and process design. In addition to the usual purity requirements, end products must meet new constraints in terms of environmental impact, safety of goods and people, and specific properties. Engineering achievements can be startling from the user perspective, like aqueous solvent paint that is still washable after drying! This can only be done by improving process knowledge and performance at all scales, right down to the atomic scale. Current experimental and modeling approaches assess such submicronic scales with difficulty. In experiments, there is the question of how to conceive experimental devices small enough and how to introduce them into molecular systems without irreversibly affecting the phenomena that they look at. In modeling and simulation, the questions are: which hypotheses are still relevant? How does one handle boundary effects? Numerical difficulties may arise along with the necessity of defining new parameters. They will be adjustable ones, as no experiments can obtain them. This latter statement is particularly true for energetic interaction parameters, like the binary interaction parameters in current liquid-vapor equilibrium macroscopic thermodynamic models based on the activity coefficient approach or on the equation of state approach. In the study of any process, phenomena attributed to energetic interactions have always been left for another time, but that time has come. Indeed, molecular modeling is a field of study that is interested in the behavior of atomic and molecular systems subject to energetic interactions. It is then a natural complement of experimental and modeling approaches to expand multiscale approaches towards smaller scales. Besides, process flows primarily concern molecules, from raw materials to end products. Therefore, at any process development step, the challenge of knowing the physical properties and thermodynamical state of molecules is critical. However, the future of this challenge is dim when one thinks
about the millions of chemical compounds referenced in the Chemical Abstracts series. Neither experimental approaches nor current thermodynamic models can handle the combination of properties needed. In some cases experiments are not even practical because of material decomposition or safety issues. Universal group contribution methods are a pipe dream, and existing ones are efficient but restricted to specific areas like petrochemical and small molecular systems. As providers of accurate physicochemical data, molecular modeling methods offer an alternative to an intensive and expensive experimental campaign once molecular models are available, which is becoming increasingly the case (Case, Chaka, Friend et al. 2004).

But this first goal is nothing compared to the main interest of molecular methods, that is, probing matter at the molecular level (Chen and Mathias 2002; De Pablo and Escobedo 2002; Sandler 2003). Indeed, molecular modeling can be seen as a "third way to explore real matter" (Allen and Tildesley 1987). Like a theoretical approach, it is based on a model system of the real one. But unlike theory, no hypothesis and no transcription of key phenomena into equations or correlations is performed. Rather, molecular modeling performs numerical experiments to simulate directly the behavior of the model system. The concept of a numerical experiment is strong. First, the model system is made of a bounded molecular system and of an interaction model, analogous to an experimental sensor, that enables one to compute the internal energy of the model system. Second, think of the pseudoconstant thermometer temperature and of the Brownian motion of atoms in a liquid that generates a fluctuating temperature. More generally, any macroscopic property value measured by an experimental probe is a time-average over many instantaneous fluctuating values. Statistical thermodynamics postulates that this time-average equals an ensemble average over a statistically significant number of model system configurations. Molecular modeling generates them numerically using methods like molecular dynamics or Monte Carlo methods. Any property of interest is then derived, using thermodynamical laws, from instantaneous property value averages and correlation factors. Third, the numerical standard deviation associated with the ensemble average is the equivalent of experimental accuracy.

This chapter presents molecular modeling concepts so as to demystify them and stress their interest for chemical engineers. Multiscale approaches including molecular modeling are not illustrated due to restricted space. Rather, routine examples on the use of several molecular techniques suitable for acquiring accurate vapor-liquid equilibrium data when no data are available are provided.

3.2 What is Molecular Modeling?
Molecular modeling includes computer theoretical chemistry and molecular simulation. Computer theoretical chemistry calculations are carried out at 0 K and solve Schrödinger's equation to obtain nuclear and electronic properties such as conformation, orbitals, charge density, and the electrostatic potential surface in fundamental or excited states. Computation time is huge, being proportional at best to N_electrons^2.5,
which restricts its use to small systems. The precision of the results is significant because the only assumptions are linked to the approximations carried out to solve Schrödinger's equation. In particular, there are no adjustable parameters. Besides, it provides crucial information on the electronic distribution that enables one to evaluate electrostatic interactions in molecular simulation. Molecular simulation is a numerical technique used to acquire the physicochemical properties of macroscopic systems from the description, on an atomic scale, of the elementary interactions and from the application of statistical thermodynamics principles. It concerns the calculation of a model system's internal energy at a positive temperature. Computation time is proportional to N_molecules, which makes it a technique adapted to the study of real systems: phase properties, interfaces, reaction, transport phenomena, etc. Molecular simulation carries out dynamic modeling of the system subjected to realistic temperature and pressure conditions thanks to an adequate sampling of the system configurations. A configuration is a set of particle coordinates and connections. Inaccuracy may arise from the energetic models, which contain fitted but physically meaningful parameters, or from the system configuration sampling techniques, which must comply with statistical thermodynamic principles. Molecular simulation offers the most potential for process engineering. Wherever energetic-interaction-related phenomena have a prevalent place, molecular simulation deserves to be considered for use in studying and looking further into the knowledge of the phenomena at the heart of the processes. In particular, it is suitable for the study of phase equilibrium, interfacial properties (specific adsorption on catalysts), transport coefficients, chemical reactivity, activity coefficients, etc.
3.2.1 Scientific Challenges of Molecular Modeling in Process Engineering
The difficulty of using molecular simulation in process engineering lies mainly in establishing the link between the macroscopic properties and their energetic description, or that of significant parameters, at the mesoscopic or molecular scale. The micro-macro relation can be simple: in distillation, the knowledge of phase equilibrium data enables one to run an extensive study and design of the process. In tablet processing, the relation is more complex: the tablet properties (compactness, friability, dissolution) are related to the pellet's cohesion and to the substrate's solubility. Obviously, the energetic interaction is a key phenomenon and is taken into account through solubility parameters, which can be broken down into primary energy contributions (van der Waals repulsion-attraction, Coulombic interaction, etc.), precisely the domain of applicability of molecular simulation. But particle size and solvent effects on the aggregate size and homogeneity are equally important, notwithstanding operational process parameters, and are still difficult to address at a molecular scale. So, identifying the limiting phenomena is a priority before any molecular simulation. The size of the model systems is not an unsolvable problem, as periodic boundary conditions can be applied to replicate the original system box and mimic a homogeneous macroscopic phase. Rather, the scientific challenges concern issues often
encountered in experiments: the sensor challenge, the sampling challenge, and the multiscale challenge.

3.2.1.1 The Sensor Challenge
For data-oriented simulations, accurate force fields/sensors are needed to evaluate energetic interactions precisely. The study of highly polar systems, reliable and relevant extrapolation of carefully set force field parameters, and the absence of temperature dependency of these parameters are key improvements of molecular simulation models over existing macroscopic models. The model system is usually a parallelepiped box filled with particles whose energetic interactions are described by a force field enabling one to compute the system internal energy. In order to mimic a homogeneous phase, the box is usually replicated in 3-D by applying periodic boundary conditions. The typical size ranges from 20 to 1000 Å and may vary during simulations. Edge effects are to be envisaged and can be attenuated by increasing the box size. The development of a force field requires a strong collaboration with theoretical chemists and physicists. Indeed, different kinds of force fields can arise: some based on quantum chemistry concepts and some based on molecular mechanics (Sandler 2003). Quantum-based models are used in static modeling and naturally in computer theoretical chemistry calculations. Solving the Schrödinger equation, they provide the nuclear and electronic properties of the system and consequently the true energy of the system (e.g., the energy of ionization), which is physically measurable. Molecular mechanics models are used in molecular simulation to calculate intensive properties (T, P) and extensive properties, among which is the internal energy of the system, which is not directly measurable by experiment but enables one to calculate other thermodynamic properties by using thermodynamic laws. Properties like the vaporization enthalpy, connected to differences in internal energy, are computed and can be compared to experiments. Quantum models (QM) are practical on a few tens of atoms at best and are being used more and more in combination with molecular mechanics models for some part of the system where an accurate electronic distribution is needed, e.g., a reactive zone, or to provide a description of the electronic distribution. Molecular mechanics (MM) models are the most used and are based on a springs-and-beads mechanistic description of the intermolecular interactions and of the intramolecular bonds. They allow calculations on several hundreds of particles, which enables one to model real systems in a satisfactory way. They contain physical parameters evaluated from quantum calculation but also empirical parameters, which must be regressed from experimental data. However, this empiricism is attenuated by some physical significance attributed to the parameters. Moreover, MM force fields show amazing properties. Valid over a large pressure and temperature range, they can be used to compute many properties, and all molecules can be described from a small set of parameters if careful parameterization is conducted, which constitutes the first challenge.
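As a concrete (and deliberately minimal) picture of the nonbonded part of such an MM force field, the sketch below evaluates a Lennard-Jones repulsion-attraction term plus a Coulombic point-charge term for one pair of sites. The parameter values, the nm/kJ unit convention, and the electrostatic prefactor are illustrative assumptions, not taken from a specific published force field:

```python
def pair_energy(r, sigma, epsilon, qi, qj, ke=138.935):
    """Nonbonded MM pair energy: Lennard-Jones + Coulomb.
    r, sigma in nm; epsilon and the result in kJ/mol; charges in elementary charge
    units; ke ~ 138.935 kJ mol^-1 nm e^-2 is the Coulomb prefactor in this unit system."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * epsilon * (sr6 ** 2 - sr6)    # van der Waals repulsion-attraction
    coulomb = ke * qi * qj / r               # point-charge electrostatics
    return lj + coulomb

# e.g., two weakly charged united-atom sites 0.4 nm apart (hypothetical parameters)
u_pair = pair_energy(r=0.40, sigma=0.35, epsilon=0.50, qi=0.10, qj=-0.10)
```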
Figure 3.1 Process engineering and molecular modeling
3.2.1.2 The Sampling Challenge
The second challenge requires a strong involvement of process engineers. Novel and smart methods must be developed to sample specific states of the model system which are of great interest for process engineering: for instance, transition states that set the reaction energetic barrier, azeotropes that strongly affect the distillation process feasibility and design, dew points, etc. Usually, existing molecular simulation methods sample nonspecific states like a vapor-liquid equilibrium point. Unlike measurement time, its experimental equivalent, numerical sampling can be advantageously biased to sample the specific state of interest, but it requires expertise to comply with the statistical thermodynamics principles which permit a bridging of the microscopic and macroscopic scales. Furthermore, for existing methods based on molecular dynamics or Monte Carlo methods, sampling efficiency should be improved, in particular for complex molecules like macromolecules, even if the alternative solution of running more simulations is still the leading choice as computer power increases. With this second challenge, process engineering finds a new use for molecular modeling: it cannot be solely data-oriented, but also discovery-oriented, and it assumes its status of numerical experiment.

3.2.1.3 Molecular Modeling in a Multiscale Approach
The integration of molecular modeling in applicable models for the study of macroscopic systems and their properties is of the utmost importance for process engineering. Indeed, often considered as decisive, phenomena related to energetic interactions have often been left aside during a process study because of a lack of suitable tools or incorporated into parameters. Thermodynamic models used in phase equilibrium calculations are a good example: binary interaction parameters must be found empirically despite their solid physical meaning. The first illustrative example addresses the issue of calculating binary parameters by molecular modeling methods.
Process engineering models are knowledge-based models. In most domains, process study requires a multiscale approach. As a technique of experimentation, molecular modeling makes it possible to visualize physicochemical phenomena on a molecular scale. It can thus be used to develop or revisit theories, models or model parameters, and therefore improve our knowledge of processes and increase the capacity for prediction and extrapolation of existing models.
3.3 Statistical Thermodynamic Background
Suggested Readings

1 McQuarrie D. A. Statistical Thermodynamics. Harper and Collins, New York, 1976. ISBN 0060443669
2 Allen M. P., Tildesley D. J. Computer Simulation of Liquids. Oxford University Press, Oxford, UK, 1987. ISBN 0198556454
3 Frenkel D., Smit B. Understanding Molecular Simulation. From Algorithms to Applications. Academic Press, San Diego, 1996. ISBN 0122673700
3.3.1 A Microscopic Description of Macroscopic Properties
Traditional thermodynamics and statistical thermodynamics address the same problems but differ in their approach: thermodynamics provides general relations without any consideration of the intrinsic constitution of matter, while statistical thermodynamics supposes the existence of atoms, molecules, and particles to calculate and interpret thermodynamic properties at the molecular level. The objective of statistical thermodynamics is to describe the behavior of a macroscopic system in terms of the microscopic properties of a system of molecular entities. The main idea is to evaluate an average property value and its standard deviation from a statistically significant number of configurations, much like a real experiment. Indeed, the temperature reading on a thermometer appears falsely constant. At the molecular level, a positive temperature is the result of atomic vibrations and collisions occurring on a time scale many orders of magnitude shorter than the sampling period of the experimental sensor (e.g., 10⁻³ s) (Fig. 3.2). Using statistical thermodynamic concepts, molecular simulation will do the same and perform a numerical experiment. Each instantaneous configuration (atomic positions and moments) of the system exists according to a probability distribution. The most probable ones will have the largest contribution to the computed average value. For the experimental system, the macroscopic property X value is a time-average over a set of configurations Γ(t) sampled during the measurement time t_meas (Eq. (1)).
Figure 3.2 Measurement of a "mean" temperature and its relation with the instantaneous temperature (the measurement time, about 10^-3 s, is much longer than the time scale of the fluctuations)
But knowing all configurations Γ(t) is impractical, because the number of particles (6.023 × 10^23 for a mole) and thus the number of positions and moments are incommensurable. Statistical thermodynamics was developed to solve this problem statistically. The first postulate of statistical thermodynamics is that "the value of any macroscopic property is equal to its average value over a sample of the model system configurations," as shown here:
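Assuming the conventional definition of this sample average, Eq. (2) can be written

\bar{X} = \langle X \rangle_{ensemble} = \frac{1}{t_{total}} \sum_{\tau=1}^{t_{total}} X(\Gamma_\tau) \qquad (2)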
where t_total is the number of sampled configurations. The notation ⟨ ⟩_ensemble refers to a statistical ensemble. By definition it consists of a significant number of subensembles having the same macroscopic properties. The thermodynamic state of a macroscopic system is perfectly specified by a few parameters, for example the number of moles N, the pressure P, and the temperature T. From them, one can derive a great number of properties (density, chemical potential, heat capacity, diffusion coefficient, viscosity coefficient, etc.) through equations of state and other thermodynamic relations. Reproducing the conditions occurring in experiments, the "canonical" NVT and the "isobaric-isothermal" NPT ensembles are quite useful. The notations NVT and NPT mean, respectively, that the number of moles N, the volume V, and the temperature T, or the number of moles N, the pressure P, and the temperature T, are kept constant for each system configuration during simulations run in those ensembles. One considers that the postulate of statistical thermodynamics applies during simulations in a statistical ensemble on systems with a few thousand particles replicated by periodic boundary conditions, provided that averages are made over a few million configurations. The sampling size and quality are often the Achilles' heel of molecular simulations.
3.3.2 Probability Density
Equation 2 states that configurations have, on average, the same weight and the same probability of existence. This is the second postulate of statistical thermodynamics: "All the accessible and distinct quantum states from a closed system of fixed energy
('microcanonic' NVE) are equiprobable." Equation 2 is therefore rewritten:

\langle X(\Gamma(t)) \rangle_{time} = \langle X \rangle_{ensemble} = \sum_{\Gamma(t)} X(\Gamma(t)) \, \rho_{ensemble}(\Gamma(t)) \qquad \text{(second postulate)} \qquad (3)
where ρ_ensemble(Γ) is the probability density, i.e., the probability of finding a configuration with positions and moments Γ(t). In the NVT ensemble, any configuration probability density is connected to its energy E and to the total partition function, namely the sum over all configurations, by the Boltzmann formula:
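The standard Boltzmann form of this relation in the canonical (NVT) ensemble, given here as the usual textbook expression, is

\rho_{NVT}(\Gamma) = \frac{\exp(-E(\Gamma)/k_B T)}{Q_{NVT}}, \qquad Q_{NVT} = \sum_{\Gamma} \exp(-E(\Gamma)/k_B T) \qquad (4)

where Q_NVT is the total partition function.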
Two points are noteworthy:
1. The knowledge of the partition function allows the calculation of all thermodynamic properties. But this can never be done exactly; it is approached imperfectly through the generation of a statistically representative number of configurations.
2. A model is required to evaluate the energy of any configuration in order to calculate the partition function. This is done through a force field.
3.3.3 Average, Fluctuations, and Correlation Functions
Equation 2 is the usual mean formula to calculate an average value (molar fractions, etc.). Other properties (heat capacity, etc.) are calculated from the variance expressing the fluctuations around the mean:
\sigma^2(X) = \langle X^2 \rangle_{ensemble} - \langle X \rangle_{ensemble}^2 \qquad (5)
Correlation coefficients give access to properties describing the dynamic state of the system. The nonnormalized form of the correlation coefficient over τ configurations is:

\mathrm{correl}_{XX}(\tau) = \langle X(\tau) \cdot X(0) \rangle = \frac{1}{t_{total}} \sum_{t_0=1}^{t_{total}} X(t_0) \cdot X(t_0 + \tau) \qquad (6)
The integration of the nonnormalized correlation coefficients enables one to directly calculate macroscopic transfer coefficients (diffusion, viscosity, or the thermal diffusivity coefficient). Their Fourier transform can be compared with experimental spectra.
3.3.4 Statistical Error
Molecular simulation is a numerical experiment. Consequently, the results are prone to systematic and statistical errors. The systematic errors must be evaluated, and then eliminated. They are caused by size effects, bad random number generation, and an insufficient equilibration period (see below). Statistical errors are inversely proportional to sampling and would thus vanish for infinite sampling. On the assumption that the Gauss law applies, the statistical error is given by the variance (Eq. (5)). However, sampling a large but finite number of configurations induces a correlation between the t_total configurations that persists over a certain number of successive configurations. A statistical inefficiency factor s is introduced to evaluate the number of correlated successive configurations. The t_total configurations are cut into n_b blocks of τ_b configurations, upon which the block average ⟨X⟩_b and its variance σ²(⟨X⟩_b) are computed. By selecting several increasing values of τ_b, the statistical inefficiency s and the statistical error σ²(⟨X⟩_total) are evaluated:
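A common form of these block-averaging relations (a sketch, assuming the standard definition of the statistical inefficiency, as in Allen and Tildesley) is

s = \lim_{\tau_b \to \infty} \frac{\tau_b \, \sigma^2(\langle X \rangle_b)}{\sigma^2(X)} \qquad (7)

\sigma^2(\langle X \rangle_{total}) \approx \frac{s \, \sigma^2(X)}{t_{total}} \qquad (8)

so the statistical error of the run average grows with the correlation between successive configurations and decreases with the total sample size.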
3.4 Numerical Sampling Techniques
The generation of a statistically representative sample of the model system configurations is mainly done by two techniques: molecular dynamics and the Monte Carlo method. They obey the principles summarized in Fig. 3.3. The two methods differ in their applications. The Monte Carlo method is adapted to the study of static phenomena (equilibrium and static interfaces), while molecular dynamics is suitable for the study of dynamic phenomena (shear-induced flow). A phase equilibrium easily computed using Monte Carlo methods would be difficult to reach in molecular dynamics because of the time needed and of boundary effects near the interface.
3.4.1 Molecular Dynamics
Molecular dynamics generates a trajectory by integrating the classical equations of motion over time steps δt, starting from an initial configuration whose particle positions and velocities are known (Fig. 3.3). The nth configuration can be traced back to the initial one by reverse integration. In the equation of motion (Fig. 3.3), the forces F_i acting on the particle of mass m_i are equal to the derivative of the potential V_i(r) describing the interactions of particle i with its surroundings.
Figure 3.3 Basic concepts of the Monte Carlo method and molecular dynamics (molecular dynamics: a trajectory generated by integrating the equation of motion; Monte Carlo: randomly generated configurations)
The integration of the differential set of equations is carried out mainly by Verlet-like algorithms rather than by the Gear-like algorithms that are widespread in process engineering. The Verlet algorithm calculates the new particle positions r(t + δt) using a third-order Taylor expansion; replacing the second derivative by the forces thanks to the equation of motion, one gets a formula with no velocity term:

r(t + \delta t) = 2 r(t) - r(t - \delta t) + \frac{F(t)}{m} \, \delta t^2 + O(\delta t^4) \qquad (9)

Velocities are computed afterwards:

v(t) = \frac{dr(t)}{dt} = \frac{r(t + \delta t) - r(t - \delta t)}{2 \, \delta t} + O(\delta t^2) \qquad (10)
This algorithm shows several interesting characteristics: (1) It is symmetrical with regard to δt, which makes the trajectory reversible in time. (2) It preserves the total energy of the system over long periods of integration, a key point in obtaining long trajectories and deducing correlation functions with accuracy. In particular, it is more precise than the Gear-like algorithms for large δt (the reverse is true for small δt), making it suitable for simulating long trajectories, which is our goal. (3) It requires less data storage than Gear-like algorithms. Transport coefficients (self-diffusion, thermal diffusivity, and viscosity) are computed from autocorrelation coefficients through the "Green-Kubo" formulas; for instance, the coefficient of self-diffusion D_i is related to the particle velocities:
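The standard Green-Kubo expression for the self-diffusion coefficient, which is the usual form assumed here for Eq. (11) and is based on the velocity autocorrelation function, is

D_i = \frac{1}{3} \int_0^{\infty} \langle v_i(t) \cdot v_i(0) \rangle \, dt \qquad (11)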
Similarly, the viscosity is obtained from the autocorrelation coefficient of the shear stress tensor, related to the pressure exerted on the particles, and the thermal diffusivity is obtained from the energy flow autocorrelation coefficient.
A challenge with molecular dynamics run in a statistical ensemble where the temperature is set constant is to keep it constant, since moving and interacting particles inevitably heat the system. A solution is to place the system in a large thermostatted bath periodically set in contact with the model system, through techniques like the Andersen or Nosé-Hoover methods.
3.4.2 Monte Carlo Method
The Monte Carlo method generates system configurations randomly. The nth configuration is related to the preceding one, but it is impossible to go back to the initial configuration. First of all, randomness is particularly critical and has given its name to the method, in reference to the Monte Carlo casino. The advice is to always use a published, robust random number generator and never to try to build one, or to use the falsely random precompiled "Ran" function of a computer. Systematic deviations and repetitive sequences can be checked by running simple tests. The second key issue is sampling (Fig. 3.4). Uniform sampling allows a good estimate of the partition function needed to compute all macroscopic properties, but at the expense of sampling high-energy and thus improbable configurations. Preferential sampling (or Metropolis sampling) samples the configurations with the largest contribution to the calculation of the partition function and of the averages. The disadvantage of Metropolis sampling is that the partition function (equivalent to the surface under the curve) is no longer correctly evaluated. Thus, the question arises of how to generate the configurations with a correct probability distribution without having to calculate the partition function that occurs in the definition of the probability density (Eq. (4)). The solution is to obey the microscopic law of reversibility. Given an old (o) and a new (n) configuration, their probability densities ρ are proportional to exp(−E_(o)/k_B T) and exp(−E_(n)/k_B T) in the NVT ensemble (Eq. (4)). Defining the transition probability M(o → n) of going from (o) to (n), microscopic reversibility states that at equilibrium the number of transitions from (o) to (n) and from (n) to (o), weighted by the probability densities, must be equal.
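Written out, this microscopic reversibility (detailed balance) condition, in the standard form assumed here for Eq. (12), is

\rho_{(o)} \, M(o \to n) = \rho_{(n)} \, M(n \to o) \qquad (12)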
In addition, an acceptance criterion acc(o → n) is introduced, along with an a priori probability α(o → n) of trying to go from (o) to (n), which is supposed to be symmetrical (α(o → n) = α(n → o)):

M(o \to n) = \alpha(o \to n) \cdot acc(o \to n) \qquad (13)
Then, by exploiting the symmetry of α, Eq. (12) becomes:

\frac{acc(o \to n)}{acc(n \to o)} = \frac{\rho_{(n)}}{\rho_{(o)}} = \exp\left(-\frac{E_{(n)} - E_{(o)}}{k_B T}\right) \qquad (14)
In this equation, the partition function, which is so difficult to calculate, no longer appears. The Metropolis idea is to choose acc(o → n) asymmetrically. As indicated in Fig. 3.5:
• If the new configuration energy is lower than the old one, the transition is always accepted.
• If the new configuration energy is higher than the old one, one picks a random number ξ between 0 and 1:
  - if ξ ≤ exp(−(E_(n) − E_(o))/k_B T), the transition is accepted;
  - if ξ > exp(−(E_(n) − E_(o))/k_B T), the transition is rejected.
Applied to Eq. (14), this acceptance criterion enables one to define the acceptance probability of a random displacement in general and in the NVT ensemble:
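In the standard Metropolis form, assumed here to be the content of Eq. (15), this acceptance probability reads

acc(o \to n) = \min\left[1, \frac{\rho_{(n)}}{\rho_{(o)}}\right] = \min\left[1, \exp\left(-\frac{E_{(n)} - E_{(o)}}{k_B T}\right)\right] \quad \text{(NVT ensemble)} \qquad (15)

which reproduces exactly the two cases listed above.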
Last, a symmetrical α(o → n) is chosen to allow sampling that is effective in terms of acceptance and efficient in terms of coverage of the configuration space. Usually one defines a maximum value associated with the transition, like a maximum displacement d_max, which is fixed in order to reach a rate of about 50 % of accepted transitions. If several movements are possible (e.g., translation, rotation, and volume change), the type of movement is chosen randomly from a predetermined statistical distribution. Again, one insists on the randomness of the choice of particle and of the type of movement in order to respect microscopic reversibility. In ensembles other than NVT, the probability densities are corrected so as to respect the microscopic reversibility law.
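As an illustration of how these rules translate into practice, the following sketch (in Python; the names metropolis_sweep, energy_fn, d_max and k_B_T are illustrative assumptions, not code from this chapter) performs one sweep of single-particle translation moves in the NVT ensemble and returns the acceptance rate that is typically used to tune the maximum displacement:

import math
import random

def metropolis_sweep(positions, energy_fn, d_max, k_B_T):
    """One sweep of single-particle translation moves (NVT Metropolis sampling)."""
    n = len(positions)
    n_accepted = 0
    for _ in range(n):
        i = random.randrange(n)              # random choice of particle (microscopic reversibility)
        old_pos = list(positions[i])
        e_old = energy_fn(i, positions)      # energy of particle i in the old configuration
        # symmetric random displacement within +/- d_max in each Cartesian direction
        positions[i] = [old_pos[k] + (2.0 * random.random() - 1.0) * d_max for k in range(3)]
        e_new = energy_fn(i, positions)
        dE = e_new - e_old
        if dE <= 0.0 or random.random() < math.exp(-dE / k_B_T):
            n_accepted += 1                  # accept: keep the new position
        else:
            positions[i] = old_pos           # reject: restore the old configuration
    return n_accepted / n                    # acceptance rate, typically tuned to about 50 %

In a production code the same structure is kept, but the energy difference would be evaluated with the force field of Section 3.5 and d_max periodically adjusted to hold the acceptance rate near the target value.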
Figure 3.4 Uniform and preferential sampling of the configurations
Figure 3.5 Metropolis preferential sampling: criterion of acceptance as a function of ΔE = E_(n) − E_(o)
3.4.3 Phase Equilibrium Calculations using Gibbs Ensemble Monte Carlo
The Gibbs ensemble was developed by Panagiotopoulos in 1987 to simulate vapor-liquid equilibria. Simulations are carried out in an NVT ensemble on two microscopic boxes located within two homogeneous phases, far from any interface. Each box is simulated with periodic boundary conditions. The constant total volume V and total number of particles N are divided between the two phases (V1, N1 and V2, N2). The temperature is set constant in the simulations and random movements are performed to satisfy the phase equilibrium conditions, as described in Fig. 3.6:
• displacements (translation, rotation) within each phase, to ensure minimal internal energy;
• a change of volume shared between the phases, ΔV1 = −ΔV2, so that the total volume is constant; this should satisfy the pressure equality;
• transfer of a particle from one box to the other, to equalize the chemical potentials.
The acceptance probabilities of the various movements in the case of a single-component system are given below. For a translation within each box:
For the change of volume, V1 is increased by ΔV and V2 is decreased by just as much:

with ΔV chosen by generating a uniform random number ξ between 0 and 1; δV_max is the maximum volume change, adjusted to obtain a fixed percentage (e.g., 50 %) of acceptance of the move:

\Delta V = \xi \cdot \delta V_{max} \cdot \min(V_1, V_2) \qquad (18)
For the transfer of a particle from box 2 to box 1:
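The standard single-component Gibbs ensemble expression for this transfer move (the usual form assumed here for Eq. (19), as given for instance by Frenkel and Smit) is

acc(2 \to 1) = \min\left[1, \; \frac{N_2 \, V_1}{(N_1 + 1) \, V_2} \exp\left(-\frac{\Delta E_1 + \Delta E_2}{k_B T}\right)\right] \qquad (19)

where ΔE_1 and ΔE_2 are the energy changes caused by inserting the particle in box 1 and removing it from box 2.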
One of the main difficulties of the Gibbs ensemble Monte Carlo method resides in the transfer of particles needed to satisfy the chemical potential equality, because of the difficulty of inserting polyatomic molecules into the dense phase. An alternative is to seek open spaces where insertion of the particle is easier. This affects randomness and introduces a statistical bias, like the configurational bias method, which consists of inserting a molecule segment by segment into a phase. The probability of acceptance of the particle transfer represented by Eq. (19) is then modified by introducing the energy differences ΔE_i into weighting factors W_i that represent the total energy of interaction of the inserted molecule with its surroundings. For an L-segment molecule inserted in m possible directions:
More generally, the introduction of a bias consists of defining an a priori probability α(o → n) which is no longer symmetrical. Equations 14 and 15 then become:
Figure 3.6 Principles of phase equilibrium simulations in the Gibbs ensemble (movement types: internal displacements, minimizing the internal energy E; volume change, with ΔV1 = −ΔV2 and the equilibrium condition P1 = P2; particle transfer, with the equilibrium condition μ1 = μ2)
To conclude this section, both molecular dynamics and Monte Carlo methods require the calculation of the interaction energy: molecular dynamics to derive the forces exerted on the system particles, and the Monte Carlo method to calculate the acceptance criterion. The following section reviews the main features of the force fields that enable the calculation of intra- and intermolecular interaction energies.
3.5 Interaction Energy
Suggested Readings
1 Leach A. R. Molecular Modelling, Principles and Applications. Longman, Harlow, UK, 1996
2 Karplus M. Porter R. N. Atoms and Molecules. Benjamin Inc., New York, 1970
3.5.1 Quantum Chemistry Models
Quantum chemistry models are never used alone in molecular simulation because of the still prohibitive computation time. However, they must be considered, as they can provide less-sophisticated molecular mechanics models with partial electronic charges and various dipoles useful for computing Coulombic and dipolar interactions. They can also provide accurate spring constant values describing the bonding intramolecular interactions associated with the various oscillatory modes within the molecules (stretching, bending, and torsion). In quantum chemistry, only atomic nuclei surrounded by revolving electrons are considered. Calculations provide the nuclear and electronic properties of the system and the true total energy of the system. The total energy is related to the general time-dependent wave function Ψ(r, t) by means of the generalized Schrödinger equation, in which the Hamiltonian H is a mathematical operator with kinetic and potential energy contributions. Apart from an analytical solution for the sole hydrogen atom, Schrödinger equation solutions are always approximate to some degree, because a compromise must be made between computation time and accuracy. Three levels of approximation are considered, namely ab initio methods, mean field methods like density functional theory (DFT), and semiempirical methods. Among ab initio methods, configuration interaction (CI) methods are the most accurate, but the slowest. Calculated energy values have a precision comparable with
experimental ones (0.001 eV). CI solutions are obtained by minimizing a linear combination of the wave functions associated with the system fundamental state and all excited states. The self-consistent field molecular orbital concept considers atomic orbitals that represent wave functions of electrons moving within a potential generated by the nucleus and by an average effective potential generated by the other electrons. The best such wave functions are Hartree-Fock wave functions; they solve the Schrödinger equation for a given electronic configuration (e.g., the fundamental state) without any empirical parameter, and they can be used for CI calculations. Atomic orbital wave functions are approximated using Gaussian functions, which leads to peculiar denominations like STO-3G Hartree-Fock calculations (use of a basis set of three Gaussian functions). The larger the basis set, the longer and the more accurate the calculation. The semiempirical methods are the most approximate quantum methods: Hückel calculations can be done on a sheet of paper; finer semiempirical models enable one to obtain with good precision the ionization energy, optimal conformations, and the electronic surface potential. However, they present the disadvantage of calculating the wave functions approximately, by replacing various integrals with fitted empirical parameters. Between the two levels of approximation, one finds the mean field methods of the popular density functional theory. The idea is that rather than seeking to solve the exact Hartree-Fock problem in an approximate way, one could seek to solve an approximate problem in an exact way. That consists of modifying the Hamiltonian operator and replacing the exchange-correlation term accounting for multiatomic orbital interactions by the electronic density ρ_i. The results are obtained with satisfactory accuracy and much faster than Hartree-Fock calculations, enabling one to study even periodic systems of interesting size. Nevertheless, all quantum mechanical calculations are performed for a static configuration of the system under 0 K conditions. But as providers of key properties like the electronic distribution, they should be systematically used in any molecular simulation aiming to be quantitative.
3.5.2 Molecular Mechanics Models
Molecular simulation uses molecular mechanics models to calculate the internal energy of the system. It considers that the molecules can be represented by centers of forces like beads and the bonds can be represented by springs (Fig. 3.7). As Fig. 3.7 shows, the total internal energy is the sum of intramolecular or bonding interactions and of intermolecular or nonbonding interactions. The set of molecular mechanics parameters is called a force field. Intramolecular energy takes into account vibration phenomena between bonded centers of forces. As the beads and spring model suggests, they are described by harmonic functions and handle stretching, bending, torsion as well as improper rotation
if needed. Average parameters l_0 and θ_0 and harmonic constants k_i are usually fitted to accurate vibration energy calculations made with quantum mechanical methods. Intermolecular energy takes into account the two-body interactions between the centers of forces. Three-body interactions are rarely included. Short-range interactions can be described by a van der Waals potential modeled by a 12-6 Lennard-Jones function. The 1/r^12 term represents the repulsive contribution, which becomes significant below 3 Å. The 1/r^6 term represents the attractive contribution related to the dispersive effect of induced dipoles. More rigorous forms may include additional inverse-power terms or other functional forms (Buckingham potential, "exponential-6" potential, etc.). Electrostatic interactions are a major contribution to the intermolecular energy, as they are long-range interactions felt up to a distance of 25 Å for multicharged ions. Permanent dipoles and multipoles are rarely included, but Coulombic interactions related to partial atomic charges q_i and q_j are a must. All electronic parameters (dipoles, partial charges) should be fitted to electronic surface potentials computed by quantum mechanics to improve the quantitative predictions of molecular simulations. Hydrogen bonding interactions are either modeled explicitly by a 12-10 Lennard-Jones function or assumed to be implicitly taken into account in the van der Waals interaction. The functional form of the molecular mechanics energy shows that it is not a true energy that can be measured experimentally. Rather, for a single molecule, it is zero at its most stable conformation, whereas the true zero energy corresponds to the protons, neutrons, and electrons infinitely split apart. Molecular mechanics predictions of conformations are in excellent agreement with experimental ones. The practical use of molecular mechanics is nevertheless great because, for a system of several molecules, it computes the thermodynamic internal energy from which many interesting properties can be derived. Force fields can be of the all-atoms (AA) type, as shown in Fig. 3.7, in which there is a center of force on each atom. Their names are Dreiding, Universal Force Field, Compass, OPLS, etc. But other types exist where atoms are grouped (e.g., -CH3) under a single center of force in order to reduce the computing time of short-range interactions. This leads to united atoms (UA) force fields. In all cases, the long-range electrostatic interaction is split over as many centers as possible, usually on all atoms and sometimes on virtual centers. A similar idea is at the origin of polarizable force fields like the anisotropic united atoms (AUA) model, which intends to take into account the electronic cloud shift when two particles approach: the charged center is displaced along the resultant of the nearby bonds. Since all intramolecular parameters and electrostatic parameters are systematically derived from quantum mechanical calculations, molecular simulations using molecular mechanics force fields have greatly improved their accuracy. However, even if for a particle i the Lennard-Jones parameters σ_i and ε_i are associated, respectively, with the collision diameter (the distance for which the energy is null) and with the potential well, they must still be fitted to some extent, as will be shown later, using experimental data (enthalpies, formation energies, densities, etc.).
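As a compact sketch of the nonbonded terms just described (a generic form, not the specific parameterization of any of the force fields named in this chapter), the pair interaction between two centers of force i and j is typically written

u_{ij}(r) = 4 \, \varepsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r} \right)^{12} - \left( \frac{\sigma_{ij}}{r} \right)^{6} \right] + \frac{q_i \, q_j}{4 \pi \varepsilon_0 \, r}

with the 12-6 Lennard-Jones term for the van der Waals contribution and the Coulomb term for the electrostatic contribution.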
Figure 3.7 Typical molecular mechanics force field: 1 stretching, 2 bending, 3 torsion, 4 van der Waals, 5 Coulomb
For multicomponent systems, the diameter σ_ij and the energy parameter ε_ij are obtained from the pure-substance parameters using traditional mixing rules like those of Lorentz-Berthelot:
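The Lorentz-Berthelot combining rules referred to here take the standard form (an arithmetic mean for the diameter and a geometric mean for the energy):

\sigma_{ij} = \frac{\sigma_{ii} + \sigma_{jj}}{2}, \qquad \varepsilon_{ij} = \sqrt{\varepsilon_{ii} \, \varepsilon_{jj}}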
These very simple rules have rarely been questioned, further proof of the strong physical basis of molecular simulation. Furthermore, they highlight that the study of a system with M different centers of forces only requires the knowledge of 2M parameters, whereas a traditional approach with a thermodynamic model with binary interaction parameters would require M(M+1)/2 such parameters. Even if the main functional forms of the potentials (stretching, bending, torsion, van der Waals, and Coulomb) are present in all the quoted force fields, the choice must be made knowing the type of experimental data used to regress the Lennard-Jones parameters and the way the electrostatic interactions are described. Mixing Lennard-Jones parameters from several force fields without any confrontation with experimental data is acceptable only if qualitative results are sought.
3.6 Running the Simulations
How should one represent the behavior of macroscopic systems when the model system typically contains only a few thousand particles? The problem is solved by adopting periodic boundary conditions that duplicate in all directions identical images of the model system. In molecular dynamics, care should be taken that any particle moving through one wall of the main image reenters at the opposite wall with the same velocity. Notice that the interaction energy of a particle must include the interactions with all the replicated particles. However, for long-range interactions this would require too much computer effort, and limiting techniques are implemented: rough ones, such as a "cutoff" distance beyond which the interaction is supposed to be null, or more accurate ones like the Ewald summation. In the case of a cutoff, it is necessary to include long-range corrections.
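As a minimal sketch of such an energy evaluation (in Python; the function lj_total_energy, the single Lennard-Jones site type and the cubic box are illustrative assumptions, not the chapter's own code), the following combines periodic boundary conditions, the minimum-image convention, a spherical cutoff and the standard Lennard-Jones tail correction:

import math

def lj_total_energy(positions, box_length, sigma, epsilon, r_cut):
    """Total 12-6 Lennard-Jones energy of a periodic cubic box with a spherical
    cutoff (minimum-image convention) plus the standard long-range tail correction."""
    n = len(positions)
    u = 0.0
    r_cut2 = r_cut * r_cut
    for i in range(n - 1):
        for j in range(i + 1, n):
            r2 = 0.0
            for k in range(3):
                d = positions[i][k] - positions[j][k]
                d -= box_length * round(d / box_length)   # minimum-image convention
                r2 += d * d
            if r2 < r_cut2:
                s6 = (sigma * sigma / r2) ** 3
                u += 4.0 * epsilon * (s6 * s6 - s6)       # 4*eps*[(sigma/r)^12 - (sigma/r)^6]
    # standard tail (long-range) correction for the truncated Lennard-Jones potential
    rho = n / box_length ** 3
    u += (8.0 / 3.0) * math.pi * rho * n * epsilon * sigma ** 3 * (
        (sigma / r_cut) ** 9 / 3.0 - (sigma / r_cut) ** 3)
    return u

The double loop scales as the square of the number of particles; production codes reduce this cost with neighbor or cell lists and replace the simple cutoff by an Ewald summation when partial charges are present.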
The initial particles in the box are usually set along a periodic network to avoid overlaps that would result in an infinite energy. Then, a statistical ensemble and a sampling technique are chosen. Force field parameters are associated with all force centers and, for molecular dynamics simulations, a statistical distribution of initial velocities is set. Finally, the simulation is launched. It consists of a phase of equilibration and a phase of production. The purpose of the equilibration phase is to bring the system from an initial configuration to a configuration representative of the system: a random distribution of the molecules and of the velocities within a system with imposed thermodynamic conditions (those of the chosen statistical ensemble). In molecular dynamics under a fixed temperature T, the system is gradually heated to the set value of T. The production phase starts when key properties like the potential energy, pressure, and density fluctuate around mean values. Each configuration then generated is kept to calculate the macroscopic properties, from averages and fluctuations to correlation coefficients. As the statistical error decreases when the number of configurations increases, at least 10^6 configurations should be generated.
3.7 Applications
Vapor-liquid equilibrium calculations are a major field of investigation because of the importance of processes like distillation. Too often, data are missing. We present two approaches that use molecular modeling to obtain such data. The first example aims at computing binary interaction parameters occurring in the UNIQUAC activity coefficient model. The second example directly computes the equilibrium compositions using a Gibbs ensemble Monte Carlo method.
3.7.1 Example 1: Validation of the UNIQUAC Theory
3.7.1.1 Overview of the UNIQUAC Model
The practical calculation of vapor-liquid equilibrium (Eq. (26)) involves an activity coefficient γ_i to describe the nonideality of the liquid phase due to energetic interactions.
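At low pressure this equilibrium condition is commonly written in the modified Raoult's law form (a standard expression, assumed here to correspond to Eq. (26)):

y_i \, P = x_i \, \gamma_i \, P_i^{sat} \qquad (26)

where x_i and y_i are the liquid and vapor mole fractions, P the total pressure and P_i^sat the pure-component vapor pressure.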
By applying the Gibbs-Duhem thermodynamic relation, one connects the individual activity coefficients γ_i with the excess Gibbs energy G^E:
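The standard thermodynamic relation implied here (assumed to be the content of Eq. (27)) is

RT \ln \gamma_i = \left( \frac{\partial (n \, G^E)}{\partial n_i} \right)_{T, P, n_{j \neq i}} \qquad (27)

so that each activity coefficient is the partial molar excess Gibbs energy.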
The UNIQUAC model proposes an expression for G^E with two contributions: a combinatorial part that describes the dominant entropic contribution, and a residual part that mainly arises from the intermolecular forces responsible for the mixing enthalpy. The combinatorial part is related to the composition x_i and to the molecule shape and size; it requires only pure-component data. The residual part depends, in addition, on the interaction forces embedded in two binary interaction parameters A_ij and A_ji:

\frac{G^E}{RT} = \left( \frac{G^E}{RT} \right)_{combinatorial} + \left( \frac{G^E}{RT} \right)_{residual} \qquad (28)
Parameters r_i, q_i, and q_i' are molecular constants for each pure component i, related respectively to its size, its external geometrical surface, and its interaction surface. q' can be different from q, in particular for polar molecules. The model system upon which the UNIQUAC theory was developed considers interacting molecules. Then, the two binary interaction parameters A_ij and A_ji can be expressed in terms of the interaction energies U_ij between dissimilar molecules i and j, and U_ii between similar molecules i:
where N_A is the Avogadro number. The Wilson activity coefficient model also proposes two binary interaction parameters. It is a simplification of the UNIQUAC model in which the parameters r, q, and q' are all set to unity. Interaction surfaces are not taken into account and molar volumes are eliminated in the equation. The relationship between the Wilson and UNIQUAC parameters is as follows:

where V_i and V_j are the molar volumes of components i and j. The traditional approach consists of regressing A_ij and A_ji from experimental data, with all the drawbacks associated with this approach: data-specific parameters, poor extrapolation capacity, temperature and pressure dependency, and the need for experimental data. A few years ago, an attempt to calculate the binary interaction parameters directly was made; it is reported below.
3.7.1.2 Calculation of UNIQUAC Binary Interaction Parameters Using Molecular Mechanics
In 1994, Jonsdottir, Rasmussen and Fredenslund (1994) computed interaction energies between isolated couples of molecules. They used molecular mechanics models not in a molecular simulation perspective, but rather like a quantum mechanical approach. For a given orientation of the two molecules, an energy minimization is run to reach a stable conformation. Many orientations are selected and the mean interaction energies U_ii and U_ij are evaluated by weighting each value with its Boltzmann factor exp(−U_ij/k_B T). This corresponds to a rough sampling, obviously not statistically representative, as only a few hundred couples are investigated. This questions the validity of the first statistical thermodynamic postulate, which equates the ensemble average and the macroscopic time-average value. Rightfully, the authors claim to perform a molecular statics approach, in between the quantum mechanics and molecular simulation approaches. The consistent force field parameters are optimized for the alkanes and ketones that are the molecules of interest, but no particular values and no partial atomic charge values are provided. Alkane conformers are taken into account, however, which is an advantage of molecular modeling approaches over classical parameter fitting. The U_ii and U_ij interaction energies are computed as the difference between the energy of the couple of molecules and the energy of each isolated molecule:
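A standard form of this energy difference (assumed here to correspond to Eq. (31)) is

U_{ij} = E_{i+j} - E_i - E_j \qquad (31)

where E_{i+j} is the minimized energy of the pair of molecules and E_i, E_j are the energies of the isolated molecules.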
Simulation results are used with the UNIQUAC equation to predict vapor-liquid equilibrium data (Eq. (26)), which are compared with experimental ones:
• For the alkane/alkane systems (n-butane/n-pentane, n-hexane/cyclohexane (Fig. 3.8) and n-pentane/n-hexane), the relative error ranges from 1.1 to 4 % for the pressure and the absolute error ranges from 0.011 to 0.042 for the molar fractions.
• For the alkane/ketone systems (n-pentane/acetone, acetone/cyclohexane (Fig. 3.8) and cyclohexane/cyclohexanone), the relative error ranges from 4.3 to 17.6 % for the pressure and the absolute error ranges from 0.016 to 0.042 for the molar fractions.
In conclusion, the error increases along with the molecule polarity. One may question the force field's ability to handle electrostatic interactions, in addition to a likely insufficient sampling of the system configurations. Finally, the authors tested the Wilson model and found errors four times greater for the n-pentane/acetone system. They concluded that the UNIQUAC equation has a better physical basis than the Wilson equation. This result was foreseeable, since the Wilson model is a simplified form of the UNIQUAC model.
Figure 3.8 Bubble curve at 298.15 K for the n-hexane/cyclohexane (top) and acetone/cyclohexane (bottom) systems. Solid line: simulation. Stars: experimental data (reprinted from Jonsdottir, Rasmussen and Fredenslund (1994) with permission from Elsevier)
3.7.1.3 Ab Initio Calculation of UNIQUAC Binary Interaction Parameters
Compared to the work of Jonsdottir, Rasmussen and Fredenslund (1994), Sum and Sandler (1999) used quantum mechanics models to improve the representation of electrostatic interactions. Often in such calculations, no rigorous sampling is performed, the emphasis being placed on minimizing the total system energy. For each binary system, eight molecules are considered (four of each). A stable system conformation is found by minimization using semiempirical methods. Then the energy is minimized using ab initio methods (Hartree-Fock method with an extended basis set 6-311++G(3d,2p)). Couples of molecules are then isolated and their interaction energy is computed according to Eq. (31). The average interaction energy is computed on at best 10 pairs of molecules. Then binary interaction parameters are derived for the UNIQUAC and Wilson equations, which enables one to
compute vapor-liquid equilibrium data using Eq. (26). The systems studied are highly polar: water-methanol, water-ethanol, water-formic acid, water-acetic acid, and water-acetone. Simulation data are then compared with experimental data and with predictions using the UNIFAC activity coefficient group contribution method. The results obtained with the Wilson model are never quantitative and are even qualitatively wrong, as it does not manage to reproduce the azeotropic behavior of the water-ethanol mixture (Fig. 3.9). On the other hand, despite the poor sampling, simulations with the UNIQUAC model give good quantitative results, comparable with the experimental data and the UNIFAC-predicted data. Two points are significant: no experimental data were used at any stage and no temperature or pressure conditions were set, which is an advantage over regressed binary parameters. Indeed, the same set of water-acetone parameters is used to generate accurate data over a large temperature and pressure range (Fig. 3.9). In conclusion, when no experimental data are available, vapor-liquid equilibrium data can be predicted using UNIQUAC binary interaction parameters directly computed with molecular modeling methods. Even if the sampling issue is not yet settled, quantum mechanics methods, which accurately describe the electronic distribution, have demonstrated their usefulness, while force field approaches did not for polar systems. The next example shows that accurate predictions can be made with carefully set force field approaches using efficient sampling of phase equilibrium systems, thanks to the Gibbs ensemble Monte Carlo method.
Figure 3.9 Vapor-liquid equilibrium at 298.15 K for the water-ethanol (left) and water-acetone (right) systems (reprinted with permission from Sum and Sandler (1999))
3.7.2 Example 2: Direct Prediction of Nitrile Vapor-Liquid Equilibrium
As highlighted before, the development of molecular simulation as a systematic provider of accurate physicochemical data is impeded by the limited availability of accurate force fields. In the 1980s, force fields were derived to reproduce physicochemical data of monophase systems, or were devoted to macromolecules of biological interest (amino acids, proteins, etc.). The simulation of multiphase systems was neglected until the Gibbs ensemble Monte Carlo method appeared and new AA-, AUA-, and UA-type force fields were actively developed, like OPLS by Jorgensen, Madura and Swenson (1984), TraPPE by Martin and Siepmann (1998), NERD by Nath, Escobedo and de Pablo (1998), Exp-6 by Errington and Panagiotopoulos (1999), and AUA by Toxvaerd (1990, 1997), all adding to this worldwide effort. As stated before, force field development consists of deriving short-range van der Waals interaction parameters like the σ and ε Lennard-Jones parameters. But the challenge is to obtain generic values that can be used for many molecules, much like in a group contribution approach, and for many properties with various sampling techniques, such as phase equilibrium data (Gibbs ensemble Monte Carlo), transport coefficients (molecular dynamics), and adsorption isotherms (Monte Carlo). By comparison, no existing macroscopic model can compute such a wide variety of properties using so few parameters. In the AUA4 model, generic parameters have been derived for linear, branched, and cyclic alkanes, aromatics, hydroxyl, carboxyl, and thiol groups (Delhommelle, Granucci, Brenner et al. 1999; Ungerer, Beauvais, Delhommelle et al. 2000; Delhommelle, Tschirwitz, Ungerer et al. 2000; Bourrasseau, Ungerer, Boutin et al. 2002; Bourrasseau, Ungerer and Boutin 2002). For the nitrile group -C≡N, we proceeded as follows (Hadj-Kali, Gerbaud, Joulia et al. 2003):
1. Quantum mechanics calculations using DFT for the acetonitrile molecule, for which many experimental data are available, to find a stable conformation, determine harmonic constants for the intramolecular contribution of the force field potential, and determine discrete partial atomic charges from quantum electrostatic surface potentials.
2. Setting up the acetonitrile (CH3CN) force field, for which the CH3 Lennard-Jones parameters are taken from the generic databank of the AUA4 force field. The same general expression as shown in Fig. 3.7 is used.
3. Running Gibbs ensemble Monte Carlo simulations to identify the missing (ε_N, σ_N) and (ε_C, σ_C) Lennard-Jones parameters of the nitrile group. The acetonitrile molecule is fully flexible and long-range electrostatic interactions were evaluated with a cutoff and tail corrections. The reference experimental data (Francesconi, Franck and Lentz 1975; Chakhmuradov and Guseinov 1984; Kratzke and Muller 1985; Warowny 1994) are the saturated vapor pressure ln(P_sat) at 433.15 and 453.15 K, the vaporization enthalpy ΔH_vap, and the liquid density ρ_liq at 273.15, 298.15, 433.15, and 453.15 K. The optimization method is a simple gradient method and the objective function is a root-mean-square function with uncertainty values set equal to 0.1 for ln(P_sat), 1 kJ mol-1 for ΔH_vap, and 10 kg m-3 for ρ_liq.
4. Once the (ε_N, σ_N) and (ε_C, σ_C) values reproduce the acetonitrile data accurately, their genericity is evaluated by predicting, with no further parameter adjustment, the vapor-liquid equilibrium data of other linear nitriles (propionitrile, butyronitrile). In these molecules, (ε_N, σ_N) and (ε_C, σ_C) are taken as equal to the values obtained for acetonitrile, whereas the CH2 and CH3 Lennard-Jones parameters are extracted from the AUA4 databank. The harmonic constants, partial charges, and stable conformations are obtained from quantum DFT calculations.
DFT electrostatic surface potentials are fitted to partial atomic charges using either the simple Mulliken population analysis, which equally splits the electronic distribution according to the van der Waals radius, or the MEP method, which mimics the electrostatic potential surface with a least squares fit. As shown below, the MEP analysis gives the best results, but does not pass the genericity test. Each atom bears a partial charge. All quantum-calculated conformations and dipolar moments agree with experimental data (Goldstein, Buyong, Lii et al. 1996) and the harmonic constants agree with literature reference values (Goldstein, Buyong, Lii et al. 1996; Ungerer, Beauvais, Delhommelle et al. 2000). Parameter regression requires that each optimization cycle (two are used) performs 16 Gibbs ensemble Monte Carlo simulations to compute the gradients, varying [(ε_N + δε_N, σ_N); (ε_C, σ_C)], [(ε_N, σ_N + δσ_N); (ε_C, σ_C)], [(ε_N, σ_N); (ε_C + δε_C, σ_C)] and [(ε_N, σ_N); (ε_C, σ_C + δσ_C)] for each of the four temperatures considered.
Each simulation takes 20 h on a Linux Pentium IV, 1.9 GHz with 512 Mb RDRAM. The equilibration period requires 10^6 configurations and the production period ranges from 2.3 × 10^6 to 4.5 × 10^6 configurations. The results of the optimization of the Lennard-Jones parameters are shown in Table 3.1. As indicated below (Fig. 3.10), the set of MEP parameters gives the best results for acetonitrile, with a mean standard deviation over all reference values of 1.9 % and a very good estimate of the critical point. With an underestimation of the vapor densities, an overestimation of the liquid densities at elevated temperature and a poor estimation of the critical point, the Mulliken set gives an error of 3.1 %. But the MEP generic character is poor, whereas the Mulliken one is excellent for propionitrile and butyronitrile vapor pressure predictions (Fig. 3.11). A possible explanation for the poor MEP predictions lies in the charge values computed for propionitrile and n-butyronitrile (Hadj-Kali 2004). The least squares fitting of the quantum-calculated electrostatic potential surface has in that case led to unphysical values, with a positive nitrogen atomic charge, in contradiction with the well-known electronegative character of this atom. Also, the MEP nitrile σ_C parameter value is too elevated compared to the other σ_C values associated with other carbonated chemical groups of the AUA4 force field (Table 3.2). Correctly, the σ_C value with the Mulliken distribution follows a decreasing trend as the carbonated chemical group size decreases. So, two criteria for a generic set of Lennard-Jones parameters able to model the van der Waals interaction are: (1) physically meaningful values of the Lennard-Jones σ and ε parameters, and (2) a physically meaningful set of atomic charges representing the electrostatic potential surface of the molecule. The importance of the representation of the electrostatic potential surface has also been acknowledged in COSMO
approaches for the computation of physical properties, which recently won the first industrial fluid properties simulation challenge (Sandler 2003; Case, Chaka, Friend et al. 2004). Critical points in Fig. 3.10 are obtained using the Ising method (Frenkel and Smit 1996). For direct simulations near the critical point, it is difficult to achieve convergence. As shown in Fig. 3.12 for the density versus configuration plot of a simple Lennard-Jones fluid, the fluctuations increase and the boxes interchange as the reduced temperature nears 1. Experimental observations of a similar phenomenon are well known and demonstrate that molecular simulation is indeed a numerical experiment that can not only compute accurate physicochemical data, but can also behave as an efficient sensor of system behavior on a molecular scale.
3.8 Conclusions
Molecular modeling is an emerging discipline for the study of energetic interaction phenomena. A molecular simulation performs numerical experiments that enable one to obtain accurate physicochemical data, provided the sampling and force field issues are addressed carefully. Still computationally demanding, molecular modeling tools will likely not be used "online" or be incorporated into process simulators. However, rather like computational fluid dynamics tools, they should be used in parallel with existing efficient simulation tools in order to provide information on the molecular scale about energetic interaction phenomena and to increase the knowledge of processes that must manufacture ever more demanding end products.
Figure 3.10 Acetonitrile results: simulations (Mulliken charges) compared with experimental data
Figure 3.11 Predicted saturated vapor pressure for propionitrile and butyronitrile (Mulliken charges), compared with the experimental data of Chakhmuradov and Guseinov (1984)
Figure 3.12 Density versus configuration number in the vicinity of the critical point for a Lennard-Jones fluid (reduced temperatures Tr = 0.8, 0.94, and 0.98)

Table 3.1 Optimal (ε_N, σ_N) and (ε_C, σ_C) parameter values for the nitrile group
                                 ε_N/k_B (K)   ε_C/k_B (K)   σ_C (Å)   σ_N (Å)
Charges: MEP optimization          50.677        65.470       3.5043    3.3077
Charges: Mulliken optimization     95.52        162.41        3.2183    3.5638
Table 3.2 Comparison of σ_C parameters for various chemical groups in the AUA4 force field

AUA4 chemical group   -CH3     =CH2     =CH      -C (MEP charges)   -C (Mulliken charges)
σ_C (Å)               3.6072   3.4612   3.3625   3.5043             3.2183
References
1 Allen M. P. Tildesley D. J. Computer Simulation of Liquids. Oxford University Press, Oxford, UK, 2000
2 Bourrasseau E. Ungerer P. Boutin A. Fuchs A. Monte Carlo Simulation of Branched Alkanes and Long Chain n-Alkanes with Anisotropic United Atoms Intermolecular Potential. Molec. Sim. 28(4) (2002) p. 317
3 Bourrasseau E. Ungerer P. Boutin A. Prediction of Equilibrium Properties of Cyclic Alkanes by Monte Carlo Simulations - New Anisotropic United Atoms Intermolecular Potential - New Transfer Bias Method. J. Phys. Chem. B 106 (2002) p. 5483
4 Case F. Chaka A. Friend D. G. Frurip D. Golab J. Johnson R. Moore J. Mountain R. D. Olson J. Schiller M. Storer J. The First Industrial Fluid Properties Simulation Challenge. Fluid Phase Equilibria 217(1) (2004) p. 1 (also see http://www.cstl.nist.gov/FluidSimulationchallenge/)
5 Chakhmuradov C. G. Guseinov S. O. Iz. Vys. Uc. Zav. (1984) p. 65-69 (in Russian)
6 Chen C. C. Mathias P. M. Applied Thermodynamics for Process Modelling. AIChE J. 48(2) (2002) p. 194
7 Delhommelle J. Tschirwitz C. Ungerer P. Granucci G. Millie P. Pattou D. Fuchs A. H. Derivation of an Optimized Potential Model for Phase Equilibria (OPPE) for Sulfides and Thiols. J. Phys. Chem. B 104 (2000) p. 4745
8 Delhommelle J. Granucci G. Brenner V. Millie P. Boutin A. Fuchs A. H. A New Method for Deriving Atomic Charges and Dipoles for Alkanes: Investigation of Transferability and Geometry Dependence. Mol. Phys. 97(10) (1999) p. 1117
9 De Pablo J. J. Escobedo F. A. Perspective: Molecular Simulations in Chemical Engineering: Present and Future. AIChE Journal 48(12) (2002) p. 2716
10 Errington J. R. Panagiotopoulos A. Z. A New Potential Model for the n-Alkanes Homologous Series. J. Phys. Chem. B 103 (1999) p. 6314
11 Francesconi A. Z. Franck E. U. Lentz H. Die PVT-Daten des Acetonitrils bis 450 °C und 2500 bar. Ber. Bunsen-Ges. Phys. Chem. 79 (1975) p. 897 (in German)
12 Frenkel D. Smit B. Understanding Molecular Simulation. From Algorithms to Applications. Academic Press, San Diego, CA, 2002
13 Goldstein E. Buyong M. A. Lii J. H. Allinger N. L. Molecular Mechanics Calculations (MM3) on Nitriles and Alkynes. J. Phys. Org. Chem. 9 (1996) p. 191
14 Hadj-Kali M. K. Gerbaud V. Joulia X. Lagache M. Boutin A. Ungerer P. Mijoule C. Dufaure C. Prediction of Liquid-Vapor Equilibrium by Molecular Simulation in the Gibbs Ensemble: Application to Nitriles. Comput. Aided Chem. Eng. 14 (2003) p. 653-658 (proceedings of ICheaP-6, ed. Pierucci S., Pisa, Italy, June 8-11, 2003)
15 Hadj-Kali M. K. Application de la simulation moleculaire pour le calcul des equilibres liquide-vapeur des nitriles et pour la prediction des azeotropes. These de Doctorat en Genie des Procedes, Institut National Polytechnique de Toulouse, Toulouse, France, 2004 (in French)
16 Jonsdottir S. O. Rasmussen K. Fredenslund A. UNIQUAC Parameters Determined by Molecular Mechanics. Fluid Phase Equilibria 100 (1994) p. 121-138
17 Jorgensen W. L. Madura J. D. Swenson C. J. Optimized Intermolecular Potential Functions for Liquid Hydrocarbons. J. Am. Chem. Soc. 106 (1984) p. 6638
18 Kratzke H. Muller S. Thermodynamic Properties of Acetonitrile 2. (P, ρ, T) of Saturated and Compressed Liquid Acetonitrile. J. Chem. Thermodyn. 17 (1985) p. 151
19 Martin M. G. Siepmann J. I. Transferable Potentials for Phase Equilibria. 1. United-Atom Description of n-Alkanes. J. Phys. Chem. B 102 (1998) p. 2569
20 Nath S. K. Escobedo F. A. de Pablo J. J. On the Simulation of Vapor-Liquid Equilibria for Alkanes. J. Chem. Phys. 108 (1998) p. 9905
21 Sandler S. I. Quantum Mechanics: A New Tool for Engineering Thermodynamics. Fluid Phase Equilibria 210(2) (2003) p. 147
26 Sum A. K. Sandler S. I. A Novel Approach to Phase Equilibria Predictions Using Ab Initio Methods. Ind. Eng. Chem. Res. 38 (1999) p. 2849
27 Toxvaerd S. Equations of State of Alkanes I. J. Chem. Phys. 93 (1990) p. 4290
28 Toxvaerd S. Equations of State of Alkanes II. J. Chem. Phys. 107 (1997) p. 5197
29 Ungerer P. Beauvais C. Delhommelle J. Boutin A. Rousseau B. Fuchs A. H. Optimisation of the Anisotropic United Atoms Intermolecular Potential for n-Alkanes. J. Chem. Phys. 112(12) (2000) p. 5499
30 Warowny W. Volumetric and Phase Behavior of Acetonitrile at Temperatures from 363 K to 463 K. J. Chem. Eng. Data 39 (1994) p. 275
4 Modeling Frameworks of Complex Separation Systems
Michael C. Georgiadis, Eustathios S. Kikkinides, and Margaritis Kostoglou
4.1 Introduction
Process modeling has always been an important component of process design, from the conceptual synthesis of the process flow sheet to the detailed design of specialized processing equipment, such as advanced reaction and separation devices, and the design of their control systems. Recent years have witnessed the traditional modeling approach being extended to the design of complex processes such as fuel cells, hybrid separation systems, distributed systems, etc. Inevitably, the process modeling technology has had to evolve to fulfil the demands posed by such a diverse range of applications on different scales of complexity (Marquardt et al. 2000; Pantelides 2001). At the Foundations of Computer Aided Process Design Conference in 1994, Pantelides and Britt (1995) presented a comprehensive review of some of the early developments in the area of multipurpose process modeling environments, i.e., software tools aiming at supporting multiple activities based on a common model. Recently, Pantelides and Urban (2004) presented a critical review of the progress achieved over the past decade and identified the key challenges for the next decade. In recent years, complex processing systems such as periodic pressure-swing adsorption processes, zeolite membranes, and hybrid separation processes have been finding increasing application as energy-efficient alternatives to other traditional separation techniques (such as cryogenic separation), and much progress has already been achieved in improving their performance with respect to both the process economics and the attainable purity of the products (see, for instance, Ruthven et al. 1994). The performance of these processes is critically affected by a number of design and operating parameters (design of the processes, duration of the various processing steps, operating levels at each step, etc.). Therefore, their accurate modeling in a compact and robust way is a necessity, so as to minimize the capital and operating costs of the process while ensuring that minimum purity and throughput specifications are met.
This chapter presents a review of modeling frameworks for complex processing systems, with an emphasis not only on the models themselves but also on specialized solution techniques related to these models. More specifically, due to their increased industrial interest, a general modeling framework for adsorption-diffusion-based gas separation processes is presented in Section 4.2, with a focus on pressure-swing adsorption and membrane-based processes for gas separations. The subsequent sections present a critical review of models and specialized solution techniques for crystallization and grinding processes. Finally, concluding remarks are drawn in Section 4.4.
4.2 A Modeling Framework for Adsorption-Diffusion-based Gas Separation Processes
4.2.1 General
Gas separation is important in many industries, ranging from the development of natural gas and oil resources to petrochemicals and foodstuffs. Moreover, separation and recovery from gaseous industrial effluents are issues of considerable environmental significance at a worldwide level and constitute a major problem demanding efficient solutions. It is generally accepted that the greatest energy consumption generally derives from the separation sections of the processes, which may also account for in excess of 50 % of the total capital costs. The principal gas separation technologies include absorption, fractional distillation, and adsorption-diffusion-based processes. The market leaders are absorption and distillation, both of which are capital and energy intensive. Adsorption-diffusion-based processes, compared with the other two, possess several advantages:
• low energy requirements,
• small, easily operated, low-cost units,
• compactness and light weight,
• non-labor-intensive operation,
• modular design allowing easy expansion or operation at partial capacity.
The selection of separation techniques depends primarily on the process scale. Distillation and, to a lesser extent, absorption exhibit large economies of scale. Conversely, adsorption-diffusion-based separation techniques are modular, with relatively fixed capital/throughput ratios for a given separation, and hence are favored for smaller scale operations (Yang 1987; Ruthven et al. 1994). The basic requirement in an adsorption separation process is the existence of an appropriate material (adsorbent) that preferentially adsorbs one component from a gas mixture. The selectivity of each adsorbent depends on a difference in adsorption equilibrium or kinetics (diffusion through the pore space of the adsorbent). All adsorption separation processes involve two major steps: (1) adsorption, during which the preferentially adsorbed species are captured from a feed mixture by the
adsorbent, and (2) desorption, during which the adsorbed species are removed from the adsorbent in order to regenerate the material. It is evident that the effluent during the adsorption step corresponds to the light (weakly adsorbed) product of the separation process (often called the raffinate), while the effluent during desorption corresponds to the heavy (strongly adsorbed) product of the process. The need for process commercialization has led to the use of cyclic or periodic adsorption separation processes, where fixed beds packed with adsorbent operate in a certain sequence and are periodically regenerated by a total or partial pressure decrease (pressure-swing adsorption (PSA), vacuum-swing adsorption (VSA)), or less often by a temperature increase (temperature-swing adsorption (TSA)). Periodic adsorption processes are thus dynamic in nature and operate in a periodic mode having fixed adsorption and desorption cycle times. The periodic excitation is achieved by regular periodic variation of the boundary conditions of certain properties of the gas mixture (temperature, pressure, concentration, velocity) at the two ends of each bed, and by the connectivity between two or more beds that operate in a certain sequence depending on the complexity of the process. After a certain number of cycles, each bed approaches a so-called "cyclic steady state" in which the conditions at the beginning and at the end of each cycle are identical to each other. Over the last two decades PSA-VSA processes have gained increasing commercial acceptance over TSA, which is preferred only if the preferentially adsorbed species is too strongly adsorbed, imposing high vacuum demands for adequate adsorbent regeneration (Ruthven et al. 1994).
4.2.2 Process Description

4.2.2.1 The PSA Processes
A typical PSA-VSA process consists of a high-pressure adsorption step, during which the gas is fed through the bed co-currently and separation is achieved, followed by the recovery of the light product (raffinate), and a low-pressure desorption step, where the bed is regenerated (usually in counter-current fashion) with the possible simultaneous recovery of the heavy product (extract). These two basic steps are interconnected through the necessary depressurization (blow down) and pressurization steps that are employed either co-currently or counter-currently in the respective beds, depending on the specific needs of each particular application. These four basic steps constitute a single PSA or VSA cycle, which is repeated until cyclic steady conditions are achieved. Note that the desorption step is achieved by purging the bed with a fraction of the light product at low pressure (PSA) or by evacuation of the bed using pumps (VSA). The former method is favored in terms of energy savings since the use of a pump is avoided, but on the other hand it produces a light product with significantly reduced recovery. The basic four-step cycle described above requires the use of only two beds and is shown schematically in Fig. 4.1.
Figure 4.1 A typical four-step two-bed (Skarstrom) PSA cycle
In practice, more beds are normally employed in typical industrial applications, based once again on the specifications of each application and the economics of the process. The performance of a PSA process is assessed on the basis of several important output quantities. These are the (light) product purity collected during the adsorption step and the (light) product recovery, defined as the amount of light product collected in the adsorption step minus the amount of product used to purge the bed in the desorption step, normalized by the amount of the light product in the feed. If this amount is normalized on the basis of the amount of adsorbent used in the bed, then one defines the (light) product productivity per unit time. Although in most cases PSA processes involve the recovery of the light product in a gas mixture, there have been a few theoretical and experimental studies in the literature that deal with the additional recovery of the heavy product from the exhaust, during the blow down and purge steps (Ritter and Yang 1989; Kikkinides and Yang 1991, 1993; Kikkinides et al. 1993, 1995). The performance of the PSA process is critically affected by a number of design and operating parameters. The first category includes the size of the bed(s) in the process and the physical characteristics (e.g., particle size) of the adsorbent. On the other hand, important operating parameters include the duration of the various steps and the overall cycle, and the pressure and/or temperature levels in each step. The process designer is therefore confronted with an optimization problem, typically aiming to minimize the capital and/or operating costs of the process while ensuring that minimum purity and throughput specifications are met. In view of the large number of degrees of freedom, a mathematical programming approach to the optimization of PSA appears to be highly desirable, but this has to address the intrinsic complexity of the processes being studied and in particular the complications arising from their periodic nature. To this end, the optimization of periodic PSA systems has received some attention by the process systems engineering community. Smith and Westerberg (1991) presented a mixed-integer nonlinear program (MINLP) to determine the optimal design of PSA separation systems (operating configuration, size and operating conditions) using simple models and simple time-integrated balances to describe the initial and final concentrations and temperature profiles for each stage. The work of Nilchan and Pantelides (1998) is a key contribution to the optimization of periodic adsorption processes. They presented a rigorous mathematical programming-based approach to the optimization of general periodic adsorption processes. Detailed dynamic models taking account of the spatial variations of properties within the adsorption bed(s) are used. A new numerical method was proposed for the solution of the optimization problem and the calculation of the cyclic steady state, employing simultaneous discretization of both spatial and temporal variations to handle the complex boundary conditions. The approach is capable of handling interactions between multiple beds. Bechaud et al. (2001) investigated stability during cyclic gas flow with dispersion and adsorption in a porous column, as encountered during PSA. Ko et al. (2003) presented a mathematical model and optimization procedure of a PSA process using zeolite 13X as an adsorbent.
Serbezov and Sotirchos (2003) investigated a semianalytical solution of the local equilibrium PSA model for multicomponent mixtures. The solution involves simple algebraic and ordinary differential equations and can provide the basis for quick evaluation of different design alternatives
and optimization studies. Recently, Cruz et al. (2004) presented a strategy for the evaluation, design, and optimization of cyclic adsorption processes. Jiang et al. (2003) developed a direct determination approach using a Newton-based method to achieve fast and robust convergence to the cyclic steady state of PSA processes. An efficient, flexible and reliable optimization strategy that incorporates realistic detailed process models and rigorous solution procedures was investigated.
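The cyclic steady state formulation that both cycle-to-cycle simulation and Newton-based direct determination rely on can be illustrated with a short sketch. The one-cycle map below is a hypothetical linear placeholder standing in for a full bed model, so the code only shows the fixed-point structure of the problem, not the algorithms of the papers cited above.

import numpy as np
from scipy.optimize import fsolve

def simulate_one_cycle(x):
    # Hypothetical one-cycle map: maps the bed state at the start of a cycle to the
    # state at its end. A real implementation would integrate the bed PDEs through
    # the pressurization, adsorption, blow down and purge steps.
    A = np.array([[0.6, 0.2], [0.1, 0.7]])   # placeholder cycle dynamics
    b = np.array([0.5, 0.3])                 # placeholder contribution of the feed
    return A @ x + b

# Successive substitution: simulate cycle after cycle until the state repeats.
x = np.zeros(2)
for _ in range(200):
    x_new = simulate_one_cycle(x)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

# Direct determination: solve the fixed-point residual x - Phi(x) = 0 with a Newton-type solver.
x_css = fsolve(lambda s: s - simulate_one_cycle(s), np.zeros(2))
print(x, x_css)   # both approaches should return the same cyclic steady state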
4.2.2.2 Membrane Processes for Gas Separations

Contrary to the PSA-VSA process, where there is a certain degree of complexity in synchronizing the cyclic operation of two or more fixed beds, the case of a membrane separation process is much simpler. In the latter case the membrane unit has a simple geometrical arrangement operating at steady state conditions. In this arrangement there are two main compartments: the retentate at high pressure, where the feed is introduced, and the permeate at low pressure, where the product is collected. The two compartments are separated by the membrane layer, which controls the separation performance of the process and the production rate of the product. Since the driving force for permeation is the difference between the partial pressures of the product in the two compartments, it is obvious that there will be a back-pressure effect that deteriorates the separation performance of the membrane. In order to reduce the back-diffusion effect one needs to reduce the partial pressure of the product in the permeate side as much as possible. This is once again achieved either by evacuating the permeate section with the use of a pump or by sweeping the product away with the use of a sweep or purge inert gas that lowers the partial pressure of the permeate product. In most commercial applications the sweeping is done counter-currently to the feeding, achieving the maximum possible separation performance for the membrane unit. A typical single-stage membrane unit for separation is shown in Fig. 4.2. Again, practice has prompted the use of more complex configurations using recycle streams or membrane cascades, depending on the specifications and the economics of each application.
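This driving-force picture is commonly written, in the usual solution-diffusion notation and not as a formula taken from the chapter, with $P_i$ the permeability of species $i$ and $\delta$ the effective membrane thickness:

$$J_i = \frac{P_i}{\delta}\left(p_i^{ret} - p_i^{perm}\right)$$

where $p_i^{ret}$ and $p_i^{perm}$ are the partial pressures of species $i$ on the retentate and permeate sides. The back-pressure effect mentioned above corresponds to the flux vanishing as the permeate-side partial pressure approaches the retentate-side value.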
Figure 4.2 Single stage (a) co-current and (b) counter-current membrane processes
The optimization of membrane-based gas separation systems has received limited attention. This can be mainly attributed to the complexity of the underlying mathematical models. Tessendorf et al. (1999) presented various aspects of modeling, simulation, design and optimization of these processes. A membrane module model was developed, capable of handling multicomponent mixtures and considering the effects of pressure drop and the energy balance. The module has been implemented and tested in an external process simulator. Kookos (2002) proposed a superstructure representation of the membrane-based gas separation network along with a targeting approach to the synthesis of membrane networks. Using simple models, the membrane material is optimized together with the structure and the parameters of the network. Vareltzis et al. (2003) presented a mathematical programming approach to optimize complex structures of zeolite membranes using detailed models. Various tradeoffs between different optimization objectives were systematically revealed. The impact of detailed modeling on the optimization results was investigated through a comparison with corresponding results obtained using simple models.
4.2.3 A General Model of Adsorption-Diffusion-based Gas Separation Processes
In modeling the separation performance of each process we will assume, for the sake of generality, that the same material is used to develop the microporous adsorbent particles and the membrane layer. Furthermore, we will assume that essentially the same material is used to make the macroporous binder of the adsorbent particles and the macroporous support on top of which the thin membrane layer is formed. This assumption will enable us to uncouple any effects that depend on the physicochemical characteristics of the materials used in the two separation processes from the effect of the inherent process characteristics of each process. Of course this assumption often does not hold in practice, since there are materials that are easier to make in the form of adsorbent particles than coherent membrane layers and vice versa.

Mass Balance in the Fixed Bed and/or the Membrane Compartments
For the sake of simplicity we will consider 1D transport along the axial direction neglecting any radial variations. This assumption is seen to be valid for the majority of theoretical and experimental cases found in the open literature. Thus, the mass balance for component i in the interstitial fluid is given by the following equation:
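A standard axially dispersed plug-flow form, consistent with the accumulation, convection, axial dispersion ($D_{L,i}$) and film transfer ($k_{f,i}$) terms discussed below and given here as a reconstruction (the specific surface area $a_p$ is introduced for illustration and may be lumped differently in the original), is:

$$\varepsilon_b \frac{\partial C_i}{\partial t} + \frac{\partial (u\,C_i)}{\partial z} = \frac{\partial}{\partial z}\left(\varepsilon_b\, D_{L,i}\,\frac{\partial C_i}{\partial z}\right) - (1-\varepsilon_b)\, a_p\, k_{f,i}\left(C_i - C_{R,i}\right) \qquad (1)$$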
for i = 1, ..., N, where N is the total number of species. Alternatively, it is equivalent to write the above equation for the first N - 1 species and include an overall material balance. The total concentration of the gas mixture C is related to the temperature T and total pressure P through an equation of state, which in most cases is represented by the ideal gas law:
$$C = \sum_{j=1}^{N} C_j = \frac{P}{RT} \qquad (2)$$
The flux term $k_{f,i}(C_i - C_{R,i})$ in the above equations accounts for interparticle (film) diffusion in the gas phase of component i, transported from the interstitial fluid to the surface of the adsorbent particles or the membrane layer. For a membrane process, Eqs. (1) and (2) hold for the case of gas transport through the retentate and permeate units by simply neglecting the accumulation term $\partial C_i/\partial t$, due to the steady state operation of the process, and by setting $\varepsilon_b = 1$ since in this case the compartments are completely empty. Also in this case the values of the mass transfer coefficients $D_{L,i}$ and $k_{f,i}$ will be different for the same reason.

Equilibrium Adsorption Behavior at the Particle or Membrane Surface
In many cases the adsorption equilibrium behavior of a multicomponent gas mixture at the surface of the adsorbent is adequately represented by the Langmuir isotherm:
$$q_i = \frac{q_i^{sat}\, b_i\, C_i}{1 + \sum_{j=1}^{N} b_j\, C_j} \qquad (3)$$
The advantage of the Langmuir equation is that it is relatively simple and can be easily inverted and solved for the gas phase concentrations, while the parameters $q_i^{sat}$ and $b_i$ can be evaluated from the respective single-component equilibrium isotherm data. The main drawback of the above equation is that it predicts a constant separation factor, an assumption that is often violated, especially for the case of nonideal gas mixtures (Krishna 2001; Karger and Ruthven 1992). In the latter case more involved models based on the ideal adsorption solution theory (IAST) should be employed.

Mass Balance in the Microporous Particle or Membrane Surface
Where Ni is the flux of component i transported through the pore space of the particle. It is straightforward to show that:
Where $\bar{q}_i$ denotes the volume-averaged adsorbed phase concentration. For a membrane process, Eqs. (4) and (5) hold after neglecting the respective accumulation terms, due to steady state operation.
Evaluation of the Flux Terms
Evaluation of the flux terms $N_i$ in Eqs. (4) and (5) requires the identification and description of the major transport mechanisms that take place in the pore space of the adsorbent particles. Transport in the pores can take place through various mechanisms, depending on the strength of the interaction of the gas molecules of one species with the molecules of the other species and with the pore walls, and on the relative magnitude of three different length scales characterizing the size of the molecules, the distance between the pore walls and the fluid density in the pores, respectively. In many cases the adsorbent particle consists of two interpenetrating networks of pores, one representing the pore structure of the particle crystallites (e.g., zeolites, silica gel, etc.), which consists of micropores (0.1-1.0 nm) according to the IUPAC classification, and another one that represents the structure of the binder or support and consists of meso- (1-50 nm) and macropores (50-5000 nm) in the IUPAC classification (Gregg and Sing 1982). It is evident that different mass transfer mechanisms prevail in each pore network, with the micropores providing the necessary features for gas separation by selective diffusion through the micropores and/or adsorption at the pore surface. The effect of the binder or support, on the other hand, is an additional resistance to the mass transfer of the species, with no selective features in either adsorption or diffusion through the macropores of the material. Thus it is desirable to minimize as much as possible the effect of transport through the support in order to achieve a better separation performance. Fortunately, in many cases the resistance to mass transport through the micropores of the crystallites is much stronger compared to that through the support and thus the latter can be either completely ignored or approximated through a linear driving force expression assuming fast diffusion or permeation kinetics in the pore space of the support. In this case the additional diffusion resistance is incorporated into the film diffusion coefficient $k_{f,i}$. The generalized Maxwell-Stefan (GMS) equations provide an adequate basis for the accurate description of multicomponent mass transfer in porous media with minimum unary data (Krishna and Wesselingh 1997; Kapteijn et al. 2000; Krishna 2001; Karger and Ruthven 1992). The basis of the Maxwell-Stefan theory is that the driving force for movement, acting on a species, is balanced by the friction experienced by that species, and each friction contribution is considered to be proportional to the difference in the corresponding diffusion velocities. The application of this theory to microporous or surface diffusion yields:
$$-\frac{\theta_i}{RT}\,\nabla\mu_i = \sum_{\substack{j=1\\ j\neq i}}^{N} \frac{\theta_j\,N_i - \theta_i\,N_j}{q_i^{sat}\,q_j^{sat}\,D_{ij}} + \frac{N_i}{q_i^{sat}\,D_i} \qquad (6)$$
The driving force for diffusion is the chemical potential gradient ($\nabla\mu_i$). Parameters $D_{ij}$ and $D_i$ are the Maxwell-Stefan surface diffusivities and represent inverse friction factors between molecules, and between molecules and the solid surface, respectively. In the case of a binary mixture with adsorption equilibrium behavior represented by the Langmuir isotherm, and after some algebraic manipulations, the surface flux $N_i$ is given by the following expression:
Note that if we assume $D_{ij} \to \infty$ the above equation becomes:

$$N_i = -N_{i0}\,\frac{(1-\theta_j)\,\nabla\theta_i + \theta_i\,\nabla\theta_j}{1-\theta_i-\theta_j} \qquad (8)$$
The above equations correspond to the GMS ($D_{ij} \to \infty$) model, which basically assumes negligible diffusional adsorbate-adsorbate interactions and has been frequently employed to describe diffusion of binary mixtures in zeolites (Ruthven et al. 1994). Finally, in the limit of dilute systems ($\theta_i \ll 1$), Eq. (8) becomes:

$$N_i = -N_{i0}\,\nabla\theta_i \qquad (9)$$
which is the classic Fick's law applied in the microporous adsorbent. For the case of fast diffusion kinetics in the pore space of the microporous particle it has been shown that the adsorbed phase concentration has a parabolic profile in space. Combining this assumption with Eqs. (3), (5), and (7) we arrive at the linear driving force (LDF) approximation often employed to describe diffusion kinetics in the adsorbed phase:
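The classical Glueckauf form of the LDF approximation, given here as a plausible reconstruction with $R_p$ the particle radius and $q_i^*$ the adsorbed-phase concentration in equilibrium with the local gas phase, reads:

$$\frac{\partial \bar{q}_i}{\partial t} = \frac{15\,D_i}{R_p^2}\left(q_i^* - \bar{q}_i\right)$$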
Heat Effects
The exothermic nature of the adsorption process can result, under certain conditions and system sizes, in significant temperature variations, resulting in the heating and cooling of the bed during adsorption and desorption, respectively. Considering 1D adiabatic heat transport along the axis and assuming negligible variation between the temperature in the solid and fluid phase, the following heat balance equation holds in the bed:
Where $\rho_g$ and $c_{p,g}$ are the total density and heat capacity of the mixture in the gas phase and $\rho_s$ and $c_{p,s}$ are the density and heat capacity of the adsorbent. Note that several alternative heat balance models with different levels of complexity can be developed, depending on the available degree of information for the above quantities. For the case of a membrane unit the heat effects are in most cases negligible and the process can be safely considered as an isothermal one. In the rest of the manuscript, and without loss of generality, we will consider isothermal operation in order to
have a better comparison between the two types of processes. Inclusion of the heat balance is straightforward and only adds more unknown parameters without significant changes in the solution approach during the simulation and optimization procedures.

Pressure Drop Effects
In many applications the use of long beds and/or small adsorbent particles can induce a pressure drop of appreciable magnitude that results in certain changes in the propagation of the concentration and temperature waves through the bed. The pressure drop along the axial direction of the adsorption bed is usually determined by the Ergun equation:
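The Ergun equation in its usual form, reproduced here as a reconstruction (the exact notation and velocity definition in the original may differ), is:

$$-\frac{\partial P}{\partial z} = \frac{150\,\mu\,(1-\varepsilon_b)^2}{\varepsilon_b^3\, d_p^2}\,u + \frac{1.75\,(1-\varepsilon_b)\,\rho_g}{\varepsilon_b^3\, d_p}\,u\,|u|$$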
Where $\mu$ is the viscosity of the gas mixture and $d_p$ is the diameter of the adsorbent particles. For the case of a membrane unit the simpler Hagen-Poiseuille equation for parallel laminar flow is used to calculate the pressure drop along the axial direction of each section (Pan 1983; Giglia et al. 1991). Nevertheless, the pressure drop effects are in many cases negligible and can be safely ignored.

Boundary Conditions
The appropriate boundary conditions for the solution of the problem are described by the following set of equations: 1. Adsorption step or retentate compartment
$\partial C_i/\partial z = 0$ at $z = L$;  $u(0) = u_f$ at $z = 0$;  $P(0) = P_H$ at $z = 0$   (13a)-(13d)

2. Desorption step or permeate compartment

$\partial C_i/\partial z = 0$ at $z = 0$;  $u(L) = u_P$ at $z = L$;  $P(L) = P_L$ at $z = L$   (14a)-(14d)

3. Blow down step

$\partial C_i/\partial z = 0$ at $z = 0$ and $z = L$;  $u(L) = 0$ at $z = L$;  $P(0)$ prescribed as a function of time at $z = 0$   (15a)-(15d)

4. Pressurization step

$\partial C_i/\partial z = 0$ at $z = L$;  $u(L) = 0$ at $z = L$;  $P(0)$ prescribed as a function of time at $z = 0$   (16a)-(16d)
Note that the pressure histories at the feed end of the bed during the pressurization and blow down steps, respectively, are known functions of time.
4.3 Modeling of PSA Processes in gPROMS
The advanced distributed process modeling capabilities of gPROMS (trademark of Process Systems Enterprise Ltd.) permit a detailed description of the complex phenomena taking place inside adsorption columns. A major advantage of gPROMS is its ability to describe detailed operating procedures and to handle discontinuities arising from major changes in the structure of the underlying models. In the context of PSA processes this is particularly important since the boundary conditions depend on the operating stage. Furthermore, the entire operation involves successive transitions between the various operating stages, thus introducing extra discontinuities into the model. For the simple case of one PSA column, but with no loss of generality, the boundary conditions can be efficiently implemented in gPROMS as seen in Fig. 4.3. The implementation of the PSA operating schedule is also a complex simulation task, given the transitions between the various processing steps. Figure 4.4 illustrates the process scheduling task for a specified number of cycles (one PSA column).
SELECTOR
  OperationMode AS (Pressurisation, Depressurisation)

BOUNDARY
  # At the feed end
  CASE OperationMode OF
    # Pressurisation step
    WHEN Pressurisation :
      FOR i := 1 TO NoComp DO
        Pfeed * Yfeed(i) = C(i,0) * R * Tfeed ;
      END # For
      P(0) = Pfeed ;
    # Depressurisation step
    WHEN Depressurisation :
      FOR i := 1 TO NoComp DO
        PARTIAL(C(i,0), Axial) = 0 ;
      END # For
      P(0) = Pwaste ;
  END # Case

  # At the product end, Pressurisation / Depressurisation step
  PARTIAL(C(,BedLength), Axial) = 0 ;
  U(BedLength) = (Qvol*Patm) / (BedArea*P(BedLength)) ;

Figure 4.3 Boundary conditions of a single PSA column in gPROMS
4.4 Efficient Modeling of Crystallization Processes

4.4.1 General
Crystallization from solution is one of the oldest and economically most important industrial separation processes. It is applied both as a large-scale continuous process for the production of inorganic (e.g., ammonium sulphate) and organic (e.g., adipic acid) material and as small-scale batch processes for the production of high purity pharmaceuticals or fine chemicals (e.g., aspartame). In order to optimize and control the crystallization process, reliable mathematical models are necessary. Detailed modeling of the crystallization process requires knowledge of phenomena on a microscopic as well as on a macroscopic scale. On the microscopic scale the basic phenomena are the primary (heterogeneous or homogeneous) nucleation, secondary
SCHEDULE
  SEQUENCE
    Cycle := 1 ;
    WHILE Cycle <= NoCycles DO
      SEQUENCE
        CONTINUE FOR CycleTime/2
        SWITCH
          Column.OperationMode := Column.Depressurisation ;
        END # Switch
        CONTINUE FOR CycleTime/2
        SWITCH
          Column.OperationMode := Column.Pressurisation ;
        END # Switch
        RESET
          # Oxygen product purity
          Column.Purity := OLD(Column.M_product(2)) / SIGMA(OLD(Column.M_product)) ;
          # Oxygen product recovery
          Column.Recovery := OLD(Column.M_product(2)) / OLD(Column.M_fed(2)) ;
        END # Reset
        Cycle := Cycle + 1 ;
        REINITIAL
          Column.M_fed, Column.M_product, Column.M_waste
        WITH
          Column.M_fed = 1E-6 ;
          Column.M_product = 1E-6 ;
          Column.M_waste = 1E-6 ;
        END # Reinitial
      END # Sequence
    END # While
  END # Sequence
END # Task OperateColumn

Figure 4.4 Operating schedule of a single PSA column in gPROMS
nucleation, crystal growth, coagulation between crystals, and crystal fragmentation. A great variety of models with different degrees of complexity have been presented in the literature for the above processes. On the macroscale, the macromixing in the crystallizer is very important. Coagulation and fragmentation phenomena depend on local energy dissipation, which can vary by orders of magnitude in a stirred tank.
The modeling of crystallization processes poses special problems not encountered in more conventional process operations. The state of such systems is usually characterized by particle size distribution functions instead of, or in addition to, standard point properties such as concentrations. Moreover, the steady state and dynamic behavior of these systems is described by population balance equations rather than simple mass balances. Finally, the physical properties of solids encountered in crystallization processes are generally much less well characterized than those of fluids. Traditionally, most process modeling and simulation tools have been aimed primarily at the mainstream chemical and petrochemical industry. Commercial steady state simulation packages have now reached a high degree of sophistication, encompassing extensive libraries of unit operation models, as well as large compilations of physical property data and calculation techniques. However, given the differences outlined above, it is hardly surprising that the area of crystallization and grinding processes has not been served well by tools now used routinely by process engineers in other areas. It has long been realized that even with relatively sophisticated general process modeling tools, the modeling and simulation of particulate processes still presents serious difficulties. One key problem is the mathematical complexity of the models: population balances invariably lead to partial differential equations, and these are often coupled with other equations describing the evolution of properties in the fluid surrounding the particles through integral terms. This results in systems of integral-partial differential equations, which may be very difficult to solve. In fact most current equation-oriented modeling frameworks cannot even describe directly such distributed parameter systems (Pantelides and Oh 1996). To this end the rest of this part of the chapter will focus on presenting state-of-the-art techniques for reducing the modeling complexity of crystallization and grinding processes, without any loss of accuracy and generality, to a level where standard modeling tools can be used for simulation and optimization purposes.
4.4.2 A Comprehensive Modeling Framework of Crystallization Processes
A generally accepted concept for the modeling of dispersed phase systems is the population balance approach, introduced for the particular problem of crystallization by Randolph and Larson (1971). Each crystal in the system is described by a vector of properties x (internal coordinates) and its position in the crystallizer r (external coordinates). A very detailed model of the process would require as many internal coordinates as possible (which can be supported by experimental findings) and must be spatially distributed. The most extensively used crystallization model until today is based on a spatially homogeneous population balance (zero external variables) with one internal variable (crystal volume or linear size). Recently, several efforts have been made towards an increase of the number of internal or external variables to describe the process more accurately. Here, this relatively simple model will serve as the basis for a comprehensive discussion of the difficulties, existing solution techniques and possible extensions of current crystallization process models.
Apart from the common variables used for the modeling of any chemical reactor (temperature, concentrations, etc.), an additional variable used for the crystallizer is the crystal size distribution (CSD), which is described by the differential crystal volume distribution function f(x, t), where x is the crystal volume and f(x, t)dx is the number concentration of crystals with volumes between x and x + dx. The evolution of the CSD f(x, t) is determined by the following population balance equation:
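A reconstruction consistent with the term-by-term description in the following paragraph (growth, nucleation at size a(c), coagulation, fragmentation, and flow through the vessel with residence time $\tau$) is:

$$\frac{\partial f(x,t)}{\partial t} + \frac{\partial \left[G(x,c)\,f(x,t)\right]}{\partial x} = B(c)\,\delta\!\left(x - a(c)\right) + \frac{1}{2}\int_0^x K(x-y,y;c)\,f(x-y,t)\,f(y,t)\,dy - f(x,t)\int_0^\infty K(x,y;c)\,f(y,t)\,dy + \int_x^\infty b(y)\,p(x,y)\,f(y,t)\,dy - b(x)\,f(x,t) + \frac{f_{in}(x) - f(x,t)}{\tau} \qquad (17)$$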
where the vector c contains the composition and temperature of the liquid phase and is used to denote the dependence of the various phenomena on them. The function G(x, c) is the volumetric growth rate of a crystal with volume x. The function K(x, y; c) is the so-called coagulation kernel, defined such that the expression K(x, y; c) f(x, t) f(y, t) dx dy is the rate of coagulation events per unit volume of fluid between a crystal with volume in [x, x + dx] and a crystal with volume in [y, y + dy]. This is in general a symmetric function with respect to x and y. The function b(x) is the fragmentation frequency for a crystal of volume x, while B(c) is the nucleation rate (i.e., the rate of generation of nuclei, which are crystals with size a(c)). The function p(x, y) is called the fragmentation kernel and is such that p(x, y)dx is the probability of having a fragment of volume in [x, x + dx] as a result of fragmentation of a crystal with volume y. Finally, $\tau$ is the residence time in the system and $f_{in}(x)$ is the inlet particle size distribution function. The initial condition for the solution of Eq. (17) is f(x, 0) = f_0(x). The above equation is rather comprehensive in the sense that it includes processes with different features and different computational requirements (continuous versus batch system, precipitation versus crystallization). For the sake of clarity of our discussion it is useful to discriminate between the processes of crystallization (Mullin 1993) and precipitation (Sohnel and Garside 1992). Although the physical phenomena are the same (precipitation is a type of crystallization), the features exhibited by the two processes (e.g., crystal sizes, supersaturation, etc.) are quite different, requiring a different modeling approach. From the practical point of view, precipitation is the crystallization of sparingly soluble substances (mainly salts). The mass of the active species in precipitation is small, thus the final volume fraction of the solid phase is also small, and the crystal radius does not usually exceed 1 μm. On the other hand, in crystallization the mass fraction of the solid phase can be large and the size of crystals is of the order of millimeters. We can now focus our attention on the modeling of the different phenomena described in the generic Eq. (17). Generally, the nucleation rate is the sum of the primary nucleation rate (given by the theory of homogeneous nucleation) and the secondary nucleation rate (production of small crystals from the fragmentation of the large ones) (Dirksen and Ring 1991). In the case of precipitation there is no crystal fragmentation and secondary nucleation,
so B(c) and a(c) can be directly computed from homogeneous nucleation theory (a slight modification is needed for heterogeneous nucleation, which is usually the case). In the case of crystallization, the nucleus size is extremely small in comparison with the mean crystal size in the system, so it can be assumed equal to zero and the nucleation term in Eq. (17) can be replaced by the following boundary condition on the particle size distribution (PSD).
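The standard nucleation-flux condition, given here as a reconstruction, states that the number flux of crystals entering the population at zero size equals the nucleation rate:

$$G(0, c)\,f(0, t) = B(c)$$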
From the mathematical point of view this is an important simplification, which alleviates the problem of the multiple crystal size scales existing in Eq. (17). Although the secondary nucleation can be rigorously simulated using an appropriate fragmentation kernel (of attrition type), it is more convenient to include it in B(c) as a term proportional to the total solid mass concentration (Mahoney and Ramkrishna 2003). The coagulation rate is the product of the collision frequency and the collision efficiency. For the case of precipitation the collision between the crystals is due to their Brownian motion and the carrier fluid flow field. The collision efficiency is the result of the microscopic interaction between the crystals (given by the DLVO theory) and does not depend on the concentration c. Rigorous models based on first principles can be derived for the above phenomena (Elimelech et al. 1995). On the other hand, in the case of crystallization the coagulation phenomenon is included in a purely phenomenological manner to achieve a fit of the model to the experimental data. The collision rate is usually assumed constant and the coagulation efficiency is associated with the creation of solid bridges between the collided particles, so it depends on the growth rate and thus on c. The growth rate for the case of precipitation is computed rigorously, taking into account the bulk diffusion and surface reaction steps for each substance participating in the crystal growth (e.g., Kostoglou and Karabelas 1998). The growth rate used for the case of crystallization is of empirical nature and several expressions can be found in the literature (Abegg et al. 1968). In many cases a surface reaction dominated growth rate is assumed in combination with a diffusive (in the crystal size coordinate) term to account for the stochastic nature of the crystal growth phenomenon (Tavare 1985). Regarding the fragmentation kernel and rate, several empirical functions have been used for crystallization modeling, whereas the phenomenon does not exist in precipitation processes. Several attempts to increase the number of internal coordinates of the model for a better description of the crystals have been made. To mention but a few, these include the use of the intrinsic crystal growth rate as a second internal coordinate (Janse and de Jong 1976) and the case of different growth rates for different faces of the crystal (Ma et al. 2003). The heterogeneity in the crystallizer may be very important and must be modeled in some way. In the case of crystallization (usually a continuous process) compartmental modeling is the appropriate compromise between accuracy and computational efficiency. The crystallizer is approximated with a few well-mixed regions interconnected with material streams. The flow rate of the streams can be found by CFD calculations. For the case of crystallization there may be two-way coupling of
the CFD, since the extent of crystallization influences the flow properties of the fluid. In the case of precipitation (usually a batch process) a one-way coupling is always enough (due to the small solid mass fraction). The nature of the process is such that the compartmental model is not appropriate and a fine grid, similar to that used by the CFD module, is needed. So the direct implementation of Eq. (18) in a CFD framework is necessary (Seckler et al. 1995). Furthermore, in the presence of strong nucleation the extremely strong nonlinearity of the nucleation term makes the usual averaging procedures for turbulent flows inapplicable, calling for the use of the complete probability density function (PDF) approach (Marchisio et al. 2002). It is worth noting the existence of a user-friendly software package (PARSIVAL, particle size evaluation) designed for solving general integral-differential equations with one internal and zero external variables (Wulkov et al. 2001). The main application of the package is the simulation of crystallization processes. The algorithm behind the package is fully adaptive in both the particle size and time coordinates. The size discretization is based on the Galerkin h-p method and the time discretization is of Rothe type. The package is not capable of handling control aspects.
4.4.3 Efficient Solution Approaches
In the general case Eq. (17) does not have an analytical solution and therefore it must be solved numerically. Its numerical solution is by no means a trivial task since the problem combines the following features:

- an extraordinarily wide range for the independent variable x (particle volume), since the particle radius can range from the order of nanometers (nucleus) to the order of millimeters;
- a highly localized nucleus size distribution in the x variable domain, imposing difficulties on the use of polynomials for the approximation of the PSD;
- a convolution type integral and the associated nonlinearity imposed by the coagulation term;
- the hyperbolic form of the growth term and its ability to move discontinuities in the x domain, which makes its discretization difficult.
In the literature there are well known techniques to face each of the above problems, but their simultaneous consideration is still a very challenging task. The conventional finite difference discretization is not even capable of conserving integral properties of the system (e.g., total particulate mass), which are of paramount importance for crystallization applications, so special techniques must be developed. In general the available methods for solving Eq. (17) can be divided into the following six categories: (1) analytical solutions, (2) finite element methods, (3) higher order methods, (4) Monte Carlo methods, (5) sectional (zero order) methods, and (6) methods of moments. A brief overview of each of these methods is presented below.
Analytical solutions of Eq. (17) have been derived for certain simple forms of the growth rate (constant and linear) and coagulation kernel (constant and sum), for certain combinations of the phenomena described by Eq. (17), and for batch or steady state conditions. Special reference should be made to the work of Ramabhadran et al. (1976), deriving an analytical solution for combined nucleation, growth and coagulation in batch conditions, and the work of Saleeby and Lee (1995) for the case of nucleation, growth and stochastic crystal growth dispersion in continuous stirred tank reactors (CSTR). Although the value of analytical solutions for the simulation of realistic crystallization processes is limited, they have been extensively used as tools for the assessment of numerical techniques for the solution of Eq. (17). The finite element approach (for x discretization) to the solution of Eq. (17) is not a usual choice, but some particular versions of the technique have been used over the last years. In particular Gelbard and Seinfeld (1978) used collocation on finite elements using third order polynomials, with continuous first derivatives along the element boundaries, as basis functions. Tsang and Huang (1990) used Petrov-Galerkin finite elements to account for the hyperbolic character of the growth term. More recently, Nicmanis and Hounslow (1998) used a finite element Galerkin approach with Lagrangian third order polynomials for the steady state case. Rigopoulos and Jones (2003) developed a collocation finite element technique using linear basis functions. Finally, Mahoney and Ramkrishna (2002) used the Galerkin finite element technique with linear basis functions to solve the linear size-based version of Eq. (17). Special care is taken to capture and follow discontinuities appearing in the PSD. In all the above approaches a geometric grid based on particle volume is used, except in the final one where the grid is linear and based on particle diameter. The higher order methods imply the global approximation of the PSD with a polynomial multiplied by a proper function. Lacatos et al. (1984) used a collocation procedure employing Laguerre polynomials. Recently, Hamilton et al. (2003) developed a collocation method based on Hermite polynomials defined on a grid moving in order to fulfill some integral conditions. Also, collocation with wavelets as basis functions has been used. The higher order methods offer very high accuracy (at the cost of large computational effort and complex code implementation) but they have the drawback of requiring special treatment of singularities (e.g., a monodisperse initial distribution). The Monte Carlo method has a long history as a tool for the simulation of particulate processes. Van Peborgh Gooch and Hounslow (1996) developed a stochastic approach for the particular process of crystallization with an arbitrary number of internal variables. Falope et al. (2001) also used another variant of the Monte Carlo method for crystallization with two internal variables. The significance of the Monte Carlo method increases sharply as the number of internal variables increases, making the solution of the deterministic problem anything from difficult to impossible. On the other hand, there is not a simple and efficient way to use the Monte Carlo method for the spatially distributed case. The sectional methods and the methods of moments are capable of transforming Eq.
(17) into a conventional ODE-DAE system that can be readily solved by existing integrators, and can be easily incorporated into existing modeling tools for flow sheet
simulation, including crystallization processes. Due to their practical importance both methods will be discussed in further detail.

4.4.3.1 Sectional Methods
According to the sectional methods (equivalent to finite volumes) the particle volume coordinate is partitioned using a number of points $v_i$ (i = 0, 1, 2, ..., L). Particles with volume between $v_{i-1}$ and $v_i$ belong to the ith class and their number concentration
is denoted as $N_i$, i.e., $N_i = \int_{v_{i-1}}^{v_i} f(x,t)\,dx$. Until the 1990s the most widely used schemes for
the discretization of Eq. (17) were those of Gelbard et al. (1980) and Gelbard and Seinfeld (1980). The corresponding codes, although developed for aerosol processes, were extensively applied to crystallization processes. As regards the coagulation terms, the particle number-based approach of the discretization scheme does not conserve the total particle mass and the grid must be geometric with a ratio larger than 2 for an efficient implementation. To overcome this deficiency Hounslow et al. (1988) developed a discretization method for crystallization applications conserving both particle number and mass, having the disadvantage that the only choice for the grid is geometric with ratio 2. The method of Hounslow et al. (1988) was extended to a geometric grid with ratio the qth root of 2 (q is an integer), permitting grid densification (Litster et al. 1995). The most general discretization scheme is that of Kumar and Ramkrishna (1996), which also conserves particle number and mass but admits a completely arbitrary grid. The crystal growth term in Eq. (17) makes it of hyperbolic form. The inability of fixed grid (Eulerian) discretizations to handle this type of problem properly is well known. The direct finite volume discretization does not conserve particle mass. Kostoglou and Karabelas (1995) developed first and second order schemes that conserve particle number and mass simultaneously. Their first order scheme (of upwind type) is unconditionally stable but suffers from numerical diffusion. The second order scheme shows much less diffusion but numerical dispersion appears (a source of instability). An efficient treatment of the crystal growth terms requires a grid moving along the characteristics of the hyperbolic Eq. (17) (Lagrangian approach). The implementation of the moving grid approach is easy in the absence of coagulation and fragmentation (Gelbard 1990), but for the numerical solution of the complete Eq. (17) the moving grid must be compatible with the discretization of the coagulation or fragmentation terms. This compatibility can be achieved only using the Kumar and Ramkrishna (1996) discretization scheme. Their method made possible for the first time the use of a sectional approach with a moving grid for the solution of Eq. (17) (Kumar and Ramkrishna 1997). The generation of new particles by nucleation makes necessary the addition of more and more sections during a particular simulation. This is not a desirable feature for any kind of numerical algorithm. The advantage of having a fixed number of ODEs instead of a variable (possibly uncontrolled) one is very important and it is strongly believed that a fixed grid (Eulerian) approach is preferable to the Lagrangian one. The increase of computing efficiency has made possible the use of a large number of sections (a few hundred versus the few tens seen ten
years ago), leading to a great reduction of the numerical diffusion error of the Eulerian methods. The proposed discretized form of Eq. (17) is (the fragmentation terms are not shown since they can be found in the corresponding Section 4.4):
where the $x_i$ are such that $x_i = (v_{i-1} + v_i)/2$, $\delta_{im}$ is the Kronecker delta in L-dimensional space, the integer m is such that $v_{m-1} \leq a(c) \leq v_m$, and

$$A_i = \frac{2}{(v_{i+1} - v_i)(v_i - v_{i-1})}\int_{v_{i-1}}^{v_i} G(x, c)\,dx$$
4.4.3.2 Methods of Moments
A proper discretization of Eq. (17) with the sectional method leads to a model with at least 50 degrees of freedom (number of ODEs). This renders the sectional method computationally intractable for the case of spatially distributed problems. The handling of complex spatially distributed problems imposes the need for low-degree-of-freedom approaches to the solution of Eq. (17). This has made the method of moments a necessity for efficiently solving the population balance equation. These methods have a longer history than the sectional ones and they are based on the transformation of Eq. (17) into a system of equations for some moments of the unknown distribution. The system is closed by defining a closure relation, relating the moments appearing on the right hand side of the system to those on the left hand side. The method of moments achieves an enormous reduction of the computational effort (typically 3-8 degrees of freedom), sacrificing the information content and the accuracy of the solution. Only some moments of the PSD can be computed, with accuracy less than that of the sectional method, but they are considered adequate for practical applications. There is a great variety of methods of moments. The older variants were used only for the case of very simple rate and kernel functions. Other methods assume a particular shape for the PSD (e.g., log-normal, gamma, Weibull; see Williams and Loyalka 1991) and the results are reliable only if the actual solution resembles the assumed form (e.g., problems with bimodal PSD cannot be attacked by these methods).
Recently, a quite general method of moments (generalized method of moments) was developed by Marchisio et al. (2003) and applied to crystallization systems. This method can be used without restriction on the sophistication of the models for the occurring phenomena (i.e., rates and kernels) or on the actual shape of the PSD. The evolution equations for the moments of index $a_i$ of the PSD are:
$$\frac{dm_{a_i}}{dt} = \cdots + \left(1 - \delta(a_i)\right) a_i \sum_{j=1}^{P/2} x_j^{a_i - 1}\, G(x_j, c)\, w_j + B(c)\, a(c)^{a_i} \qquad (20)$$
The evolution of $x_j$, $w_j$ (j = 1, 2, ..., P/2) is given implicitly by the following nonlinear algebraic system:
$$\sum_{j=1}^{P/2} w_j\, x_j^{a_i} = m_{a_i}, \qquad i = 1, 2, \ldots, P \qquad (21)$$
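A minimal numerical sketch of this closure step, with hypothetical moment orders and moment values (a production code would normally use the product-difference algorithm rather than a generic nonlinear solver), might look as follows:

import numpy as np
from scipy.optimize import fsolve

# Hypothetical tracked moment orders a_i and moment values m_{a_i} (P = 4, so P/2 = 2 nodes).
orders = np.array([0.0, 1.0, 2.0, 3.0])
moments = np.array([1.0, 0.5, 0.4, 0.45])   # placeholder values

def residual(z):
    w = z[:2]   # weights w_j
    x = z[2:]   # abscissas x_j
    # Enforce sum_j w_j * x_j**a_i = m_{a_i} for every tracked moment order.
    return [np.sum(w * x**a) - m for a, m in zip(orders, moments)]

w1, w2, x1, x2 = fsolve(residual, [0.5, 0.5, 0.3, 1.0])
print(w1, w2, x1, x2)   # quadrature weights and abscissas reproducing the moments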
The system of Eqs. (20) and (21) can be easily solved with a traditional ODE-DAE integrator. To derive the full crystallization model Eqs. (20) and (21) must be coupled with other equations describing the behavior of the crystallization process (component mass balances, energy balances, physical property models, etc.). For example, assuming a three-phase continuous mixed suspension mixed product removal crystallizer, by appropriate heating or cooling, a product is generated in the form of crystals. A vapor phase is also formed because of the evaporation of part of the liquid which comes into the crystallizer. The contents of the crystallizer are removed by means of the top outlet stream (vapor) and the bottom outlet stream (slurry with product crystals and solution). The material balance of each component is mathematically expressed as follows:
The energy balance has the following general form:
Supplementary relations for the mass balance equation involve calculation of the total amount of the solute in the crystallizer, the combined concentrations of the other components, the volume of the crystallizer, the mass fractions in the liquid and vapor phase, the total amount in the solid phase, vapor-liquid equilibrium relations, etc. As regards the energy balance, supplementary relations serve to calculate the specific internal energies for each phase, the fraction of the suspension volume and the rate of heat removal of the crystallizer. Due to space limitations, details of these relations are not presented here.
4.4.4 Modeling and Optimization of Crystallizers
The selection of the optimal method for solving the population balance equation strongly depends on the particular features of the mathematical problem in question. Some general guidelines for the selection of the appropriate method are given in Table 4.1. It is important to emphasize that the efficient solution of the population balance equation can provide the basis for model-based design, control and optimization studies of large-scale crystallizers. The presence of very little work in this area can be attributed not only to the lack of rigorous models but also, and perhaps more importantly, to the lack of techniques for efficiently solving the population balance equation without loss of accuracy. It is clear that the population balance equation constitutes the basic component of the overall crystallization modeling framework. Component mass balances, the energy balance and auxiliary algebraic equations describing the physical properties complement the model. The design and optimization of crystallizers is a very challenging problem. There is a lack of systematic procedures for developing optimal operating policies and design options for complex crystallization systems. Kramer et al. (1999) presented a formal approach of design guidelines considering the influence of crystallizer geometry, scale, operating conditions and process actuators on the process behaviour and product quality. Kramer et al. (2000) developed a compartmental modeling framework to describe the crystallization process of evaporative and cooling suspension crystallizers. The framework has been implemented in the SPEEDUP environment and is capable of predicting a large supersaturation profile in a large-scale crystallizer. Ma et al. (2002) presented a rigorous compartmental crystallization model to achieve optimal control. Ge et al. (2000) have illustrated the application of mathematical optimization to the problem of batch crystallization. A targeting approach to the optimization of multistage crystallization networks has been investigated by Sheikh and Jones (1998). Bermingham et al. (2003) presented a formal optimization approach for the design of solution crystallization processes using rigorous models. A large-scale industrial case study was used to illustrate the applicability and usefulness of the overall optimization methodology.
Table 4.1 Optimal method of solution for the crystallization model versus number of external and internal coordinates of the model

Internal coordinates number    External coordinates: 0    External coordinates: 1    External coordinates: More
1                              Sectional                  Sectional, Moments         Moments
2                              Monte Carlo, Moments       Moments                    Moments
More                           Monte Carlo                Not available              Not available
Recently, Choong and Smith (2004a,b) proposed an optimization framework based on a stochastic optimization algorithm for optimizing batch cooling and batch, semi-batch and nonisothermal evaporative crystallization operations. The results demonstrate significant improvements over conventional approaches and heuristic rules. The deviation from well-mixed behavior in a crystallizer is primarily caused by the hydrodynamic conditions, which lead to temperature, supersaturation and particle concentration profiles in the crystallizer. An approach which can overcome the shortcomings of the well-mixed models is the employment of a multizonal representation, which divides the equipment volume into a network of interconnected zones where an idealized mixing pattern is assumed for each zone. Urban and Liberis (1999) used a hybrid multizonal/CFD modeling approach for the modeling of an industrial crystallizer. Each zone incorporates a detailed description of the crystallization phenomena in terms of population balance equations. Both homogeneous and heterogeneous crystal nucleation are taken into account, the latter being a strong function of the turbulence energy dissipation. A CFD model of the process is used to determine the directionality and rate of flow between adjacent zones, and the mean energy dissipation rate within each zone. Zauner and Jones (2002) adopted a compartment mixing model to predict the effect of mixing on crystal precipitation. The population balance is solved simultaneously with the mass balance using data obtained by CFD calculations. Recently, Bezzo et al. (2004) presented a formal multi-scale framework based on a hybrid multizonal/CFD model. The framework is applicable to systems where the fluid dynamics operate on a much faster time-scale than other phenomena, and can be described in terms of steady state CFD computations involving a (pseudo)homogeneous fluid, the physical properties of which are relatively weak functions of intensive properties. A crystallization process was used to illustrate the overall modeling approach.
4.5 Modeling of Grinding Processes

4.5.1 A General Modeling Framework of Grinding Processes
Fine grinding of solid materials is of prime importance in many industrial applications. In addition to mineral processing, it is also widely used in the manufacture of paints, ceramics, pharmaceuticals, etc. The grinding process can be performed under wet or dry conditions using a large variety of equipment. In general, the grinding of the feed material is performed by mobile pieces (e.g., spheres, cylinders) or by large fixed-in-space elements (e.g., rollers) made from hard material. Irrespective of the details of the particular grinding process, the aim is always the reduction of the particle size (the range can be from millimeters to micrometers) and the simulation approach is the same. Traditionally, the simulation of processes associated with the grinding of solids is based on the solution of a particular and rather simplified form of the population balance equation known as the fragmentation equation. The main feature of grinding modeling is the application of a population balance, and the selection
of appropriate independent variables. Hence, the usual approach taken in the literature is based on solution through the use of population balance equations to produce models which can simulate grinding. Today, a significant challenge which faces those who model grinding is the lack of a modeling tool which allows the easy implementation of the many different forms and evolutionary changes in the grinding models. This is coupled with the need for robust solution techniques for the various integral-partial-differential equations found in utilizing the population balance approach. Recent simulation examples in the literature include the grinding of coal in ring-roller mills (Sato et al. 1996), the wet stirred ball attrition of alumina and synthetic diamond (Shinohara et al. 1999), the attrition of alumina hydrate in a tumbling mill (Frances and Laguerie 1998), the dry stirred ball attrition of quartz (Ma et al. 1998) and the wet stirred bead attrition of carbon (Varinot et al. 1999). According to the particular formalism, the evolution of the particle size distribution (PSD), as described by the differential particle volume distribution function f(x, t), can be calculated from the solution of the following linear integral-differential equation:
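A reconstruction consistent with the term-by-term description that follows is:

$$\frac{\partial f(x,t)}{\partial t} = \int_x^\infty b(y)\,p(x,y)\,f(y,t)\,dy - b(x)\,f(x,t) \qquad (22)$$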
The first part of the right hand side of the above equation stands for the generation of particles of size x by fragmentation of larger than x particles and the second term for the loss of size x partides due to their fragmentation.The function b(x)is the fragmentation frequency for a particle of volume x (the term rate is also used instead of frequency). The function p ( x , y ) is called fragmentation kernel and is such that p ( x , y)dx is the probability for having a fragment of volume in [ x , x + dx] as a result of fragmentation of a particle with volume y. The grinding equipment usually operates under batch conditions so the corresponding form of the fragmentation equation (which can also be directly used for plug flow continuous operation) is examined here. The above formalism is purely phenomenological so there is not space for considerable improvements. The extension of the formalism using external coordinates is meaningless since the equation cannot be considered to describe a local phenomenon in the physical space. An axial external coordinate has been used in a purely phenomenological manner by Mihalyko et al. (1998) to account for the partial mixing (axial dispersion) during plug flow grinding process. Also attempts to add more internal variables for the characterization of the particles have not been made. In generally Eq. (22) is considered adequate to describe the grinding process. The challenge for the development of efficient techniques for its solution is coming not from the need to incorporate it (describing the “local”phenomenon) in CFD codes but for the need to incorporate efficient grinding equipment submodels to the large flow sheet simulators. The fragmentation functions b(x), p ( x , y) should satisfy the following requirements in order to give physically meaningful results: 1. lim b(x) = 0 to avoid the generation of smaller and smaller particles without limit. X
2. $\int_0^y x\, p(x, y)\, dx = y$; this is the mass conservation condition and stipulates that the total volume of particles resulting from the breakup of a particle of volume y must be equal to y.

3. $\nu(y) = \int_0^y p(x, y)\, dx$, where $\nu(y)$ is the number of fragments generated during the breakup of a particle with volume y. In all cases it should be $\nu(y) \geq 2$.

4. $\int_0^k x\, p(x, y)\, dx \geq \int_{y-k}^{y} (y - x)\, p(x, y)\, dx$ for $k < y/2$. This condition is usually overlooked, with the consequence that fragmentation kernels without physical meaning are used for fitting experimental data.

The fragmentation frequency used in the grinding literature (Varinot et al. 1997, 1999) has the following form:
This is a composite law resulting from the matching of the two asymptotes (power laws) $b(x) \propto x^{b-a}$ ($x \gg x_R$) and $b(x) \propto x^{b}$ ($x \ll x_R$) in the region of $x \approx x_R$. Usually b − a is a small number, implying an almost size-independent fragmentation rate for large particles. As the particle size decreases, the relatively large exponent b dominates, preventing further fragmentation of the smaller particles. The most general fragmentation kernel employed for grinding simulation (Eyre et al. 1998) is:
The values of the parameters $C_1$, $C_2$, $a_1$, $a_2$ should be carefully chosen in order to satisfy the above-mentioned requirements 2 through 4. This kernel exhibits the very important property of homogeneity. A breakage kernel is called homogeneous if the shape of the fragment size distribution does not depend explicitly on the parent particle size y but only on the ratio x/y. A homogeneous kernel can be written as $p(x, y) = \frac{1}{y}\,\bar{p}\!\left(\frac{x}{y}\right)$.
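Requirements 2 and 3 above are easy to check numerically for any candidate kernel before it is used in a grinding model. The following Python sketch is only an illustration (it is not part of the original chapter); the uniform binary kernel p(x, y) = 2/y is chosen purely as an example, and the two conditions are verified by quadrature.

```python
import numpy as np
from scipy.integrate import quad

def uniform_binary_kernel(x, y):
    # Example homogeneous kernel: fragments uniformly distributed in (0, y),
    # i.e., p(x, y) = (1/y) * pbar(x/y) with pbar(u) = 2 (binary breakage).
    return 2.0 / y

def check_kernel(p, y):
    """Check requirements 2 and 3 for a parent particle of volume y."""
    mass, _ = quad(lambda x: x * p(x, y), 0.0, y)   # should equal y
    nfrag, _ = quad(lambda x: p(x, y), 0.0, y)      # should be >= 2
    return mass, nfrag

for y in (1.0, 10.0, 250.0):
    mass, nfrag = check_kernel(uniform_binary_kernel, y)
    print(f"y = {y:7.1f}:  fragment mass = {mass:.6g} (target {y}),  "
          f"nu(y) = {nfrag:.3f} (must be >= 2)")
```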
4.5.2 Solution Approaches

4.5.2.1 Sectional Methods
The solution methods for Eq. (22) with the frequency in Eq. (23) and the kernel in Eq. (25) can be organized into five categories as follows:
a) Analytical (Ziff and McGrady 1986) and large-time asymptotic solutions (Ziff 1991) exist only for the particular case a = C1 = 0 in Eqs. (24) and (25).
b) Stochastic (Monte Carlo) methods (Mishra 2000). In the case of grinding simulation the addition of extra internal variables is not an option, so stochastic methods are of little importance.
c) Higher-order (polynomial approximation) methods. In particular, the Galerkin weighted residual formulation using B-splines (Everson et al. 1997) and wavelets (Liu and Tade 2004) as basis functions has been used.
d) Sectional (zero-order) methods.
e) Moment methods.

The last two categories include methods which allow a direct and unconditionally stable transformation of Eq. (22) into a system of ODEs that can be solved and further processed by existing integration codes. For this reason these methods will be discussed in detail. According to the sectional method, the particle volume coordinate is partitioned using the points $v_i$ (i = 0, 1, 2, ..., L). The particles with volume between $v_{i-1}$ and $v_i$ belong to the ith class and their number concentration is denoted as $N_i$,
i.e., $N_i = \int_{v_{i-1}}^{v_i} f(x, t)\, dx$. The characteristic particle size for class i is taken to be $x_i = (v_{i-1} + v_i)/2$. The direct sectional (finite volume) equivalent of Eq. (22) is (i = 1, 2, ..., L):

$$\frac{dN_i}{dt} = \sum_{j=i}^{L} n_{ij}\, b_j\, N_j - b_i\, N_i \qquad (26)$$

$$b_i = b(x_i), \qquad n_{ij} = \int_{v_{i-1}}^{v_i} p(x, x_j)\, dx \qquad (27)$$
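As a concrete illustration of Eqs. (26)–(27), the following Python sketch assembles and integrates the sectional system for a hypothetical power-law frequency and a uniform binary kernel (both chosen only for the example, not taken from the chapter). The naive midpoint evaluation of n_ij used here does not conserve mass exactly, which is precisely the deficiency discussed next.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative fragmentation functions (assumptions of this sketch)
b = lambda x: x**0.8                              # power-law fragmentation frequency
p = lambda x, y: 2.0 / y if x <= y else 0.0       # uniform binary kernel

L = 40
v = np.geomspace(1e-4, 1.0, L + 1)                # class boundaries v_0 < ... < v_L
xc = 0.5 * (v[:-1] + v[1:])                       # characteristic (pivot) sizes x_i

bi = b(xc)
# n_ij = integral of p(x, x_j) over class i, Eq. (27); crude midpoint rule
nij = np.array([[p(xc[i], xc[j]) * (v[i + 1] - v[i]) for j in range(L)]
                for i in range(L)])

def rhs(t, N):
    # Eq. (26): birth from breakage of larger particles minus loss by own breakage
    return nij @ (bi * N) - bi * N

N0 = np.zeros(L)
N0[-1] = 1.0                                      # all particles initially in the top class
sol = solve_ivp(rhs, (0.0, 50.0), N0, method="LSODA")

mass = sol.y.T @ xc                               # total particle volume versus time
print("relative mass error:", (mass[-1] - mass[0]) / mass[0])
```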
This system of ODEs can be solved analytically (Reid 1965) and has been used extensively in the grinding literature as the fundamental equation rather than as a simplification of the continuous form, Eq. (22). Experimental values of $N_i$ can be supplied directly from sieve analysis, and the $n_{ij}$ can be found by fitting the model of Eq. (26) to the experimental $N_i$ sequentially. The problem arises from the fact that the discretized form of Eq. (26) does not conserve the integral properties of the PSD, in particular the total particle mass. This problem was considered by Hill and Ng (1995), who used two sets of unknown constants multiplying the two terms of the right-hand side of Eq. (26). These constants were computed from the requirement of internal consistency with respect to the total particle mass and the total particle number. The term "internal consistency" of a discretization scheme with respect to a particular moment of the PSD refers to the ability of the discretized system to reproduce the discretized form of the evolution equation for that moment. Although this is a highly desired property (for the total particle mass it is equivalent to total mass conservation), it cannot guarantee the exact computation of the moment. The procedure developed by Hill and Ng (1995) may require complex analytical derivations that depend on the particular form of the fragmentation kernel and on the particular grid used for the discretization. The above authors made these
derivations for three forms of the fragmentation kernel, two forms of the grid (equidistant and geometric), and exclusively for a power-law fragmentation rate. The major drawback of their procedure is that it cannot be directly generalized for arbitrary parameters and implemented in a computer code. Vanni (1999) improved the situation by replacing the requirement of internal consistency with respect to the total particle number with a better handling of the second term of the right-hand side of Eq. (26) (the death term). This new version (slightly less accurate than its predecessor) can be fully automated, i.e., computed numerically regardless of the fragmentation rate and kernel. A different approach for the development of a quite general sectional method with arbitrary fragmentation functions and an arbitrary grid, exhibiting internal consistency with respect to two arbitrary moments, has been investigated by Kumar and Ramkrishna (1996). In this case the internal consistency is achieved by the proper sharing of the fragments resulting from a fragmentation event among the respective sections. The coefficients $n_{ij}$ for the particular case of internal consistency with respect to the total particle mass are:
$$Z(a, b, c) = \int_a^b p(y, c)\, dy \qquad (28)$$
where δ is the Dirac delta function. Several improvements have been proposed for the above sectional approach. As an example, Attarakih et al. (2003) developed a method where the pivot (characteristic) size for each class is free to move between the boundaries of the class and, in addition, the grid moves as a whole to capture the features of the PSD better. The improved methods of this type can be implemented only through custom codes and cannot cast the problem into the form of a system of ODEs directly solvable by commercial integrators.

4.5.2.2 Methods of Moments
The idea that led to the development of the method of moments is that in some cases the amount of information on the PSD given by a sectional method can be sacrificed in favor of a reduction in the computational requirements. For example, in a large plant simulator the grinding submodel has to be solved as efficiently as possible, even at the expense of having as the only output the total particle number concentration and the mean particle size. From the technical point of view, the method of moments is a generalization of the method of weighted residuals having a trial function more general than a linear superposition of basis functions, i.e., f(x, t) = F(x, t; c), where the function F has a known form and the vector c = (c_1, c_2, ..., c_P) contains P unknown time-dependent parameters which can be found in the following way. Eq. (22) is multiplied by the P power-law test functions $x^{a_i}$ (i = 1, 2, ..., P) and then integrated for x between 0 and ∞ to give the system of equations (assuming a homogeneous fragmentation kernel):
$$\frac{dM_{a_i}}{dt} = (J_{a_i} - 1) \int_0^\infty x^{a_i}\, b(x)\, F(x, t; \mathbf{c})\, dx \qquad (29)$$

$$\int_0^\infty x^{a_i}\, F(x, t; \mathbf{c})\, dx = M_{a_i} \qquad (30)$$

where $J_{a_i} = \int_0^1 x^{a_i}\, \bar{p}(x)\, dx$. The most widely used forms of the distribution F are the
lognormal distribution, $f(x, t) = \frac{c_1}{x} \exp\!\left[-\frac{(\ln(x/c_2))^2}{c_3}\right]$, and the gamma distribution, $f(x, t) = \frac{c_1}{\Gamma(c_2 + 1)} \left(\frac{x}{c_3}\right)^{c_2} e^{-x/c_3}$, with $(a_1, a_2, a_3) = (0, 1, 2)$, where Γ is the gamma function (Kostoglou and Karabelas 2002; Madras and McCoy 1998). For the case of a power-law fragmentation rate b(x), the integrations in these equations can be performed in closed form, leading to a simple system of ODEs with respect to the $c_i$. If b(x) is not of power-law form, the integral in Eq. (29) must be computed numerically. The Hermite and Laguerre quadratures are ideally suited for the lognormal and the gamma distribution, respectively. A systematic way to improve the lognormal method is the so-called interpolation between the moments method (Kostoglou and Karabelas 2002). This method can be applied only for a power-law breakage rate, using the set of $a_i$: $(a_1, a_2, \ldots, a_P) = (0, 1, 2, \ldots, P-1)$. An explicit form of F is not assumed, and each moment appearing in the right-hand side can be calculated from the integer moments of the PSD by the following interpolation rule:
For P = 3 the lognormal method is recovered, while improved results can be found using P = 4 and P = 5. Larger values of P cannot improve the solution, because the amount of information contained in the higher moments of the distribution is limited. According to the generalized method of moments, the PSD is approximated by a set of Dirac delta functions with unknown strengths and locations, i.e. (P is an even number):
$$F(x, t) = \sum_{j=1}^{P/2} w_j\, \delta(x - x_j) \qquad (32)$$
Substituting into Eqs. (29) and (30) leads to the following system of ODEs-DAEs:
$$\frac{dM_{a_i}}{dt} = (J_{a_i} - 1) \sum_{j=1}^{P/2} w_j\, x_j^{a_i}\, b(x_j) \qquad (33)$$

$$\sum_{j=1}^{P/2} w_j\, x_j^{a_i} = M_{a_i} \qquad (34)$$
This method is quite general and can be used for any fragmentation rate and kernel. It was developed independently for the solution of the aerosol growth equation (quadrature method of moments; McGraw 1997) and of the aerosol coagulation equation (generalized approximation method; Piskunov and Golubev 1999). Kostoglou and Karabelas (2002) used it for the solution of the fragmentation equation (generalized method of moments). Typical values of P are 4 (Kostoglou and Karabelas 2004) and 6 (Marchisio et al. 2003), and the best choice for the $a_i$ seems to be $(a_1, a_2, \ldots, a_P) = (0, 1/3, 2/3, \ldots, (P-1)/3)$. The system of Eqs. (33) and (34) can be solved directly using an ODE-DAE solver (Kostoglou and Karabelas 2002, 2004) or using an ODE solver for Eq. (33) together with special procedures from the theory of Gaussian integration to find the weights $w_j$ and abscissas $x_j$ (Marchisio et al. 2003).
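To make the quadrature-type closure concrete, the sketch below (a minimal illustration, not the authors' code) tracks the first four integer moments, recovers a two-point quadrature (weights w_j, abscissas x_j) from them at every evaluation of the right-hand side, and advances Eq. (33). The uniform binary kernel and the power-law rate are assumptions of the example; for this kernel J_a = 2/(a+1), so conservation of the mass moment M_1 (J_1 = 1) provides a built-in check.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative choices (not from the chapter): power-law frequency and uniform
# binary kernel, for which J_a = int_0^1 u^a * 2 du = 2/(a + 1).
b = lambda x: x**0.8
J = lambda a: 2.0 / (a + 1.0)
a_exps = np.array([0.0, 1.0, 2.0, 3.0])          # tracked moment orders (P = 4)

def quadrature_from_moments(M):
    """Two-point quadrature (weights w, abscissas x) matching the moments M0..M3."""
    A = np.array([[M[1], M[0]], [M[2], M[1]]])
    beta, alpha = np.linalg.solve(A, -np.array([M[2], M[3]]))
    x = np.roots([1.0, beta, alpha])              # abscissas
    w = np.linalg.solve(np.array([[1.0, 1.0], x]), M[:2])
    return w, x

def rhs(t, M):
    w, x = quadrature_from_moments(M)
    S = np.array([np.sum(w * x**a * b(x)) for a in a_exps])
    return (J(a_exps) - 1.0) * S                  # Eq. (33) with quadrature closure

M_init = np.array([1.0, 1.0, 2.0, 6.0])           # moments of f0(x) = exp(-x)
sol = solve_ivp(rhs, (0.0, 5.0), M_init, method="LSODA", rtol=1e-8)
print("particle number M0:", sol.y[0, -1])        # grows as particles break up
print("particle mass   M1:", sol.y[1, -1], "(conserved, since J_1 = 1)")
```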
4.6 Concluding Remarks
Compared to the traditional tools and approaches for the modeling and simulation of complex separation systems, significant progress has been achieved in the last decade. Today's modeling tools provide advanced modeling languages and frameworks, based either on process engineering concepts or on mathematical perspectives, that are suited to represent complex structural and phenomenological aspects of process systems engineering. However, a number of issues must still be considered open today. Significant challenges remain in all of the specific processing systems reviewed, and these have been identified in the corresponding sections. A more general challenge is how to allow the incorporation of ideas originating from academic research into tools for industrial use. The emergence of open software architectures now provides reasonably straightforward routes for academic developments in some areas, such as physical properties and numerical solvers, to be used directly in process modeling tools. However, the situation is more problematic in areas of research that are related to the fundamentals of process modeling, as reviewed in this chapter for specific processes. Arguably, the task of testing academic ideas and, ultimately, transferring them to commercial use has become more difficult in recent years due to the complexity of modern process modeling software and the degree of advanced software engineering that it entails (Pantelides and Urban 2004). Von Wedel et al. (2003) emphasized that the development of complex chemical process models can be improved towards a formal theory to automatically generate, manipulate, and reason about these models. Such a theory will have a strong impact on the capabilities of future modeling tools for complex chemical processing systems, but it constitutes an ongoing open research issue. It will enable inexperienced researchers to effectively use model-based techniques for a wide range of
applications such as parameter estimation, process control and optimization. The excellent book by Hangos and Cameron (2001) provides the basic principles in this direction. As mentioned in the excellent review by Pantelides and Urban (2004), the increasing power of process modeling technology brings new perspectives to the development and deployment of model-based solutions throughout the process lifecycle, from initial process development to the detailed design of individual items of processing equipment and entire plants, and their control systems. To a large extent, this has been a natural evolution of earlier trends in this area, and it is particularly true for the processes reviewed in this chapter. For example, a very interesting development in recent years has been the increasing permeability of the boundary between "off-line" and "on-line" applications for crystallization processes. This permeability has two distinct but related positive aspects. First, the process models themselves are reused for both design and operational tasks, although in many cases simpler models may be required in view of the special efficiency and robustness requirements posed by real-time and other applications, as illustrated in the review of crystallization and grinding processes. Secondly, standard process modeling software tools such as gPROMS are employed for tasks on both sides of the boundary,
References
1 Abegg C. F., Stevens J. D., Larson M. A. AIChE J. 14 (1968) p. 118
2 Attarakih M. M., Bart H. J., Faqir N. M. Chem. Eng. Sci. 58 (2003) p. 1311
3 Bakker W. J. W., van den Broeke L. J. P., Kapteijn F., Moulijn J. A. AIChE J. 43 (1997) p. 2203
4 Bechaud C., Melen S., Lasseux D., Quintard M., Bruneau C. H. Chem. Eng. Sci. 56 (2001) p. 3123
5 Bermingham S. K., Verheijen P. J. T., Kramer H. J. M. Trans. IChemE Part A 81 (2003) p. 893
6 Bezzo F., Macchietto S., Pantelides C. C. Comput. Chem. Eng. 28 (2004) p. 513
7 Burggraaf A. J. in Burggraaf A. J., Cot L. (eds) Fundamentals of Inorganic Membranes, Science and Technology, Membrane Science and Technology Series 4, Elsevier, Amsterdam 1996
8 Choong K. L., Smith R. Chem. Eng. Sci. 59 (2004a) p. 313
9 Choong K. L., Smith R. Chem. Eng. Sci. 59 (2004b) p. 329
10 Cruz P., Santos J. C., Magalhaes F. D., Mendes A. Chem. Eng. Sci. 58 (2003) p. 3143
11 Dirksen J. A., Ring T. A. Chem. Eng. Sci. 46 (1991) p. 2389
12 Elimelech M., Gregory J., Jia X., Williams R. Particle Deposition & Aggregation: Measurement, Modeling and Simulation, Butterworth-Heinemann, Oxford 1995
13 Everson R. C., Eyre D., Campbell Q. P. Comput. Chem. Eng. 21 (1997) p. 1433
14 Eyre D., Everson R. C., Campbell Q. P. Powder Technol. 98 (1998) p. 265
15 Falope G. O., Jones A. G., Zauner R. Chem. Eng. Sci. 56 (2001) p. 2567
16 Frances C., Laguerie C. Powder Technol. 99 (1998) p. 147
17 Ge M., Wang Q. G., Chiu M. S., Lee T. H., Hang C. C., Teo K. H. Chem. Eng. Res. Des. 78 (2000) p. 99
18 Gelbard F., Seinfeld J. H. J. Colloid Interface Sci. 78 (1980) p. 485
19 Gelbard F., Seinfeld J. H. J. Comput. Phys. 28 (1978) p. 357
20 Gelbard F., Tambour Y., Seinfeld J. H. J. Colloid Interface Sci. 76 (1980) p. 541
21 Giglia S., Bikson B., Perrin J. E. Ind. Eng. Chem. Res. 30 (1991) p. 1239
22 Gregg S. J., Sing K. S. W. Adsorption, Surface Area and Porosity, Academic Press, New York 1982
23 Hamilton R. A., Curtis J. S., Ramkrishna D. AIChE J. 49 (2003) p. 2328
24 Hangos K. M., Cameron I. T. Process Modelling and Model Analysis, Academic Press, New York 2001
25 Hill P. J., Ng K. M. AIChE J. 42 (1995) p. 1600
26 Hounslow M. J., Ryall R. L., Marshall V. R. AIChE J. 34 (1988) p. 1821
27 Janse A. H., de Jong E. J. in Industrial Crystallization, Plenum Press, New York 1976, p. 145
28 Jiang L., Biegler L. T., Fox V. G. AIChE J. 49 (2003) p. 1140
29 Kapteijn F., Moulijn J. A., Krishna R. Chem. Eng. Sci. 55 (2000) p. 2923
30 Karger J., Ruthven D. M. Diffusion in Zeolites, Wiley, New York 1992
31 Kikkinides E. S., Yang R. T. Ind. Eng. Chem. Res. 30 (1991) p. 1981
32 Kikkinides E. S., Yang R. T. Chem. Eng. Sci. 48 (1993) p. 1169
33 Kikkinides E. S., Yang R. T., Cho S. H. Ind. Eng. Chem. Res. 32 (1993) p. 2714
34 Kikkinides E. S., Sikavitsas V. I., Yang R. T. Ind. Eng. Chem. Res. 34 (1995) p. 255
35 Ko D., Siriwardane R., Biegler L. T. Ind. Eng. Chem. Res. 42 (2003) p. 339
36 Kookos I. K. J. Membrane Sci. 208 (2002) p. 193
37 Kostoglou M., Karabelas A. J. Chem. Eng. Commun. 136 (1995) p. 177
38 Kostoglou M., Karabelas A. J. Ind. Eng. Chem. Res. 37 (1998) p. 1536
39 Kostoglou M., Karabelas A. J. Powder Technol. 127 (2002) p. 116
40 Kramer H. J. M., Dijkstra J. W., Verheijen P. J. T., Van Rosmalen G. M. Powder Technol. 108 (2000) p. 185
41 Kramer H. J. M., Bermingham S. K., Van Rosmalen G. M. J. Crystal Growth 198/199 (1999) p. 729
42 Krishna R. Int. Commun. Heat Mass Transfer 28 (2001) p. 337
43 Krishna R., Wesselingh J. A. Chem. Eng. Sci. 52 (1997) p. 861
44 Kumar S., Ramkrishna D. Chem. Eng. Sci. 52 (1997) p. 4659
45 Kumar S., Ramkrishna D. Chem. Eng. Sci. 51 (1996) p. 1311
46 Lacatos B., Varga E., Halasz S., Blickle T. in Industrial Crystallization, Elsevier, Amsterdam 1984, p. 185
47 Litster J. D., Smith D. J., Hounslow M. J. AIChE J. 41 (1995) p. 591
48 Liu Y., Tadé M. O. Powder Technol. 139 (2004) p. 61
49 Ma D. L., Tafti D. K., Braatz R. D. Ind. Eng. Chem. Res. 41 (2002) p. 6217
50 Ma Z., Hu S., Zhang S., Pan X. Powder Technol. 100 (1998) p. 69
51 Ma D. L., Tafti D. K., Braatz R. D. Comput. Chem. Eng. 26 (2002) p. 1103
52 Madras G., McCoy B. J. AIChE J. 44 (1998) p. 647
53 Mahoney A. W., Ramkrishna D. Chem. Eng. Sci. 57 (2002) p. 1107
54 Mahoney A. W., Doyle F. J., Ramkrishna D. AIChE J. 48 (2002) p. 981
55 Marchisio D. L., Barresi A. A., Garbero M. AIChE J. 48 (2002) p. 2039
56 Marchisio D. L., Pikturna J. T., Fox R. O., Vigil R. D., Barresi A. A. AIChE J. 49 (2003) p. 1266
57 Marchisio D. L., Vigil R. D., Fox R. O. J. Colloid Interface Sci. 258 (2003) p. 322
58 Marquardt W., von Wedel L., Bayer B. Perspectives on Lifecycle Process Modeling, in Malone M. F., Trainham J. A., Carnahan B. (eds) Foundations of Computer-Aided Process Design, AIChE Symposium Series 323, vol. 96, 2000, pp. 192-214
59 McGraw R. Aerosol Sci. Technol. 27 (1997) p. 255
60 Mihalyko C., Blickle T., Lacatos B. G. Powder Technol. 97 (1998) p. 51
61 Mishra B. K. Powder Technol. 110 (2000) p. 246
62 Nicmanis M., Hounslow M. J. AIChE J. 44 (1998) p. 2258
63 Nilchan S., Pantelides C. C. Adsorption 4 (1998) p. 113
64 Pan Y. AIChE J. 29 (1983) p. 545
65 Pantelides C. C., Oh M. Powder Technol. 87 (1996) p. 13
66 Pantelides C. C., Britt H. I. Multi-purpose Process Modeling Environments, in Biegler L. T., Doherty M. F. (eds) Proceedings of the Conference on Foundations of Computer-Aided Process Design 1994, CACHE Publications, 1995, pp. 128-141
67 Pantelides C. C. New Challenges and Opportunities for Process Modeling, in Gani R., Jørgensen S. B. (eds) European Symposium on Computer-Aided Process Engineering 11, Elsevier, Amsterdam 2001
68 Pantelides C. C., Urban Z. E. Process Modeling Technology: A Critical Review of Recent Developments, in Proceedings of the Conference on Foundations of Computer-Aided Process Design 2004, Princeton University, NJ 2004
69 Piskunov V. N., Golubev A. I. J. Aerosol Sci. 30 (1999) p. S451
70 Ramabhadran T. E., Peterson T. W., Seinfeld J. H. AIChE J. 22 (1976) p. 840
71 Randolph A. D., Larson M. A. Theory of Particulate Processes, Academic Press, New York 1971
72 Reid K. J. Chem. Eng. Sci. 20 (1965) p. 953
73 Rigopoulos S., Jones A. G. AIChE J. 49 (2003) p. 1127
74 Ritter J. A., Yang R. T. Ind. Eng. Chem. Res. 30 (1991) p. 1023
75 Ruthven D. M., Farooq S., Knaebel K. S. Pressure Swing Adsorption, VCH Publishers, New York 1994
76 Saleeby E. G., Lee H. W. Chem. Eng. Sci. 50 (1995) p. 1971
77 Sato K., Meguri N., Shoji K., Kanemoto H., Hasegawa T., Maruyama T. Powder Technol. 86 (1996) p. 275
78 Seckler M. M., Bruinsma O. S. L., Van Rosmalen G. M. Chem. Eng. Commun. 135 (1995) p. 113
79 Serbezov A., Sotirchos S. V. Sep. Purif. Technol. 31 (2003) p. 203
80 Shinohara K., Golman B., Uchiyama T., Otani M. Powder Technol. 103 (1999) p. 292
81 Sheikh A. Y., Jones A. G. AIChE J. 44 (1998) p. 1637
82 Smith O. J., Westerberg A. W. Chem. Eng. Sci. 46 (1991) p. 2967
83 Sohnel O., Garside J. Precipitation: Basic Principles and Industrial Applications, Butterworth-Heinemann, Oxford 1992
84 Strathmann H. AIChE J. 47 (2001) p. 1077
85 Tavare N. S. Can. J. Chem. Eng. 63 (1985) p. 436
86 Tessendorf S., Gani R., Michelsen M. L. Chem. Eng. Sci. 54 (1999) p. 943
87 Tsang T. H., Huang L. K. Aerosol Sci. Technol. 12 (1990) p. 578
88 Urban Z., Liberis L. in Proceedings of the Chemputers 1999 Conference, Düsseldorf, Germany 1999
89 Van Peborgh Gooch J. R., Hounslow M. J. AIChE J. 42 (1996) p. 1864
90 Vanni M. AIChE J. 45 (1999) p. 916
91 Vareltzis P., Kikkinides E. S., Georgiadis M. C. Trans. IChemE Part A 81 (2003) p. 525
92 Varinot C., Berthiaux H., Dodds J. Powder Technol. 105 (1999) p. 228
93 Varinot C., Hiltgun S., Pons M.-N., Dodds J. Chem. Eng. Sci. 52 (1997) p. 3605
94 Von Wedel L., Marquardt W., Gani R. Modeling Frameworks, in Braunschweig B., Gani R. (eds) Software Architecture and Tools for Computer Aided Process Engineering, Elsevier Science, Amsterdam 2002, pp. 89-126
95 Williams M. M. R., Loyalka S. K. Aerosol Science, Pergamon Press, New York 1991
96 Wulkow M., Gerstlauer A., Nieken U. Chem. Eng. Sci. 56 (2001) p. 2575
97 Yang R. T. Gas Separation by Adsorption Processes, Butterworth, Imperial College Press and World Scientific Publishers, Boston, MA 1987
98 Zauner R., Jones A. G. Chem. Eng. Sci. 57 (2002) p. 821
99 Ziff R. M., McGrady E. D. Macromolecules 19 (1986) p. 2513
100 Ziff R. M. J. Phys. A: Math. Gen. 24 (1991) p. 2821
5 Model Tuning, Discrimination, and Verification
Katalin M. Hangos and Rozália Lakner
5.1 Introduction
Process models are increasing in size and complexity in current computer-aided process engineering. Therefore the methods and tools for their tuning, discrimination and verification are of great importance. The widespread use of process models for design, simulation and optimization requires the proper documentation, reuse, and retrofit of already existing models, which need the above techniques. This chapter deals with computer-aided approaches and methods of model tuning, discrimination and verification that are based on a formal structured description of process models.

Basic assumptions. For the majority of process control and diagnostic applications, lumped dynamic process models are used. This model class, which is considered throughout this chapter, is obtained under the following basic modeling assumptions:
- Only lumped models are considered (ordinary differential and algebraic equation models).
- Only initial value problems are considered.
- All physical properties in each phase are assumed to be functions of the thermodynamic state variables (temperature T, pressure P, compositions c_k) only.
5.2 The Components and Structure of Process Models
The formal description of process models and their structure is the basis of any method in computer-aided process systems engineering. If one considers process models as structured knowledge collections with underlying syntax and semantics, then the formal methods of computer science can be applied for model discrimination and verification. The fundamentals of such an approach are briefly described in this section.
5.2.1 The Modeling Problem and the Modeling Goal
A process model is jointly determined by the process system it describes and by its modeling goal (Hangos and Cameron 2001). The specification of a process system includes the definition of the system boundaries and the interactions between the system and its environment, together with the description of the internal structure (subsystems, mechanisms, etc.) of the system itself. The effect of the modeling goal is much less investigated, despite its importance for constructing a process model.

The modeling goal. Any process model is developed for a specific use or possibly multiple uses. These uses influence the goals that the model must fulfill. For example, the application areas of process design, control, optimization or diagnosis usually lead to different model representations for the same physical system. Meeting the stated modeling goal provides a means of determining when the modeling cycle (see below) should terminate. A set of process models is functionally equivalent with respect to a modeling goal if every model of the set fulfills the inequalities in the modeling goal.

The seven-step modeling procedure. Good modeling practice requires a systematic way of developing the model equations of a process system for a given purpose. Although this procedure is usually cyclic, in that one often returns to an already completed step, it can be regarded as a sequence of modeling steps (Hangos and Cameron 2001) that include:
1. problem setup for process modeling;
2. selection of important mechanisms;
3. analysis of data;
4. construction of model equations;
5. model verification;
6. model solution;
7. model calibration and validation.
Model tuning, discrimination and verification techniques are applied in the last four steps of this procedure.
5.2.2 The Model Equation Constructing Subprocedure and its Steps
The construction of model equations is the fourth step in the above procedure; it is a cyclic procedure in itself with the following steps:
1. system and subsystem boundary and balance volume definitions;
2. establishment of the balance equations;
3. transfer and reaction rate specifications;
4. property relation specifications;
5. balance volume relation specifications;
6. equipment and control constraint specifications;
7. selection of design variables.

Incremental building of balance equations. The steps of the model building procedure should be carried out in a sequential-iterative manner. This means that the model equations are built up incrementally, repeating the steps of the model equation constructing subprocedure in the following order of conserved extensive quantities:
- Overall mass submodel. The terms and variables in the conservation balances for the overall mass in each balance volume appear in all other conservation balances. Therefore this subset of model equations is built up first.
- Component mass submodel. With the given conservation balances for the overall mass in each balance volume, it is easy to set up the conservation balances for the component masses. This subset of model equations is added to the equations originating from the overall mass balances.
- Energy submodel. Finally the subset of model equations induced by the energy balances is added to the equations.
In this way the kernel of the model equation constructing subprocedure is repeated several times for every balance volume.
5.2.3 Model Equations, Initial and Boundary Conditions, and Model Parameters
The conservation balances of mass, component masses and energy are described by ordinary differential equations in a lumped process system model (Hangos and Cameron 2001). These are called conservation balance equations, and they are accompanied by suitable algebraic constitutive equations. Constitutive equations describe the underlying static relationships between model variables dictated by physics and chemistry. The process model is then a set of ordinary differential and algebraic equations (DAEs), where there are underlying semantic relationships between the various variables, equations, and equation terms. In addition to the equations themselves, it is required to specify the initial conditions of the ordinary DAE system in order to solve the problem. Initial conditions set the values of the differential variables at the initial time (t = 0). Note that in the case of distributed parameter systems, partial differential equations (PDEs) are used for describing the conservation balance equations of the model. In these models, boundary conditions specifying the values of the differential variables for all time on each of the system boundaries, and the specification of initial conditions for the whole spatial region of interest, are also part of the process model.
5.2.4 Hidden Components
Besides the model elements above, a systematically constructed process model contains elements that are usually not stated in an explicit way, but are important for model discrimination and verification. These are as follows:
- The application domain determines the validity region of the model.
- Inequality constraints bound the values of parameters or variables, often dictated by the underlying physics and chemistry (e.g., temperature should be positive).
- Modeling assumptions describe the decisions of the modeler in an explicit formal way.

The importance of the modeling assumptions is explained by the fact that model building itself can be seen as a sequence of specifying, simplifying, or enlarging assumptions on the process system to be modeled (Hangos and Cameron 2001a). In this way, a uniform assumption-driven approach can be developed, where modeling assumptions are regarded as artifacts of the modeling steps and allow a rigorous formal description of the modeling process and its result.
5.3 Model Discrimination: Model Comparison and Model Transformations
Model discrimination is based upon systematically comparing different process models to find relationships between them. For this purpose we briefly review various model description forms and their transformations that form the basis of model discrimination.
5.3.1 Formal Representation of Process Models and their Transformations
Model elements. The differential-algebraic equation set that forms a lumped process model can be seen as a hierarchically structured knowledge collection constructed from the following main model elements:
- The balance volumes are the basic elements in process modeling, as they determine the regions in which the conserved quantities are contained.
- The conserved extensive quantities (differential variables) are the additive properties of a system, and they are used for describing the conservation principles (such as mass, component mass and energy conservation) in the balance volumes.
- The balance equations reflect the conservation principles for each conserved extensive quantity.
- The transport mechanisms, such as convection, transfer, reaction, etc., correspond to an effect on the conserved extensive quantities, so they appear as additive terms in the balance equations.
- The constitutive equations are algebraic relations that complete the model equations. They describe property relations, extensive-intensive relationships, transfer and reaction rate relations, equipment and control relations, and balance volume relations.
- The algebraic variables are the nondifferential variables appearing in balance equations and constitutive equations in the form of thermodynamic state variables, transfer and reaction rate variables, equipment and control variables, constants, specification variables, etc.
Any process model can also be seen as a collection of mathematical elements, like variables and equations of the following types:
- differential equations that originate from the conservation balance equations;
- algebraic equations describing the constitutive equations, the transport mechanisms, etc., that are evoked by the conservation balance equations;
- differential variables, with their first time derivatives present in the differential equations;
- algebraic variables, including constants and specified (design) variables.
Finally, there can be other auxiliary elements, such as surfaces, for constructing a process model.

Hierarchy of model elements. Driven by their role in the process model, these model elements can be organized into the following natural hierarchy levels:
- L1: balance volume level
- L2: balance equation level
- L3: transport mechanism level
- L4: constitutive level

A simple process model of a jacketed tank reactor with all of the above-mentioned model elements is shown in Fig. 5.1.

Modeling assumptions. A modeling assumption can be expressed as a natural language sentence and formally described by a triplet (Hangos and Cameron 2001a) given by:

variable-name relation keyword,

where 'variable-name' refers to a process model element described previously in this subsection, 'relation' is an "=" ("equals") or "is" symbol in most cases, and 'keyword' is a symbolic or numerical constant or another 'variable-name'. Thus, a modeling assumption is understood as an assignment to the 'variable-name' and is usually translated either into additional mathematical relationships between model variables and parameters, or into constraints between already introduced variables and parameters.
[Figure 5.1 A simple process model example: conservation balances (tank and cooler), constitutive equations, and the modeling assumptions (two lumped balance volumes (tank, cooler); three components in the tank (A, B, solvent); one component in the cooler; constant mass holdups; constant physico-chemical properties; A -> B first-order exothermic reaction in the tank).]
The modeling assumptions can either be elementary assumptions, consisting of just a single triplet, or composite assumptions, being the conjunction (logical AND) of elementary assumptions. The model equations can be formally seen as a structured string obeying syntactical and semantic rules, and the modeling assumptions can then be regarded as formal modeling transformations on these equations, resulting in another set of model equations. The effect of an assumption on a given set of equations is computed by following all of the implications of the assumption through the syntactical and semantic rules. Formally this is performed by substituting the assignment equations describing the assumption into all of the original model equations and then performing rearrangements using algebraic transformations.

5.3.2 Algebraic Transformations, Algebraically Equivalent Models
A set of functionally equivalent process models can be algebraically equivalent, when one can transform any member of the set into any other one using algebraic transformations. Algebraic transformations can be applied to model equations and to model variables (including both differential and algebraic ones). Examples of algebraic transformations on a set of process model equations are multiplying an equation by a constant, adding two equations together, and substituting one equation into another by expressing it in terms of a variable and substituting that variable into every other equation. It is important to note that the variables do not change when applying algebraic transformations to the equations of a model, but the model equations do.
[Figure 5.2 The substituted process model: the balance equations of the jacketed tank reactor of Fig. 5.1 written in intensive variables (T, c_A, c_B, T_c) after substitution of the constitutive equations.]
The formal description of algebraic transformations of a set of algebraic equations, together with a canonical set of primitive algebraic transformations and their effect on computational properties, can be found in Leitold and Hangos (1998). There it is shown that certain computational properties of DAE models, such as the differential index, do not change under algebraic transformations, but others, like the decomposition of the model, may change quite drastically. Another type of transformation applicable to a model is when one applies a linear or nonlinear algebraic transformation to some of its variables and writes the model using these new transformed variables. This is analogous to a coordinate transformation in geometry and is therefore called a coordinate transformation. It is important to note, however, that the general locally invertible nonlinear transformations, which are useful and widely used in nonlinear system theory, are not well accepted in process systems engineering, because they change the engineering meaning of the variables. The extensive-intensive constitutive algebraic equations, however, are widely used to transform a process model into its intensive variable form, which is suitable for process control and diagnostic applications. Here the set of differential variables in a balance volume, originally equal to the set of conserved extensive quantities, is transformed into the set of overall masses (left unchanged) and the measurable intensive quantities (such as temperatures, compositions and pressures).

Model classes. Algebraic transformations may change the mathematical form of a model, but an algebraically transformed model is the same from a process engineering point of view. Such algebraically equivalent models form a model class. Figure 5.2 shows an algebraically equivalent form of the simple process model of the jacketed tank reactor depicted in Fig. 5.1, where the constitutive equations have all been substituted into the differential ones.
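To illustrate how substituting the constitutive equations yields an algebraically equivalent (intensive-variable) model, a short symbolic sketch is given below. It is only an illustration of the idea using the SymPy library; the symbol names and the single component balance are assumptions of the example, not the chapter's notation.

```python
import sympy as sp

# Illustrative symbols for one component mass balance of the tank (names assumed)
t = sp.symbols("t")
V, v, cA0, k0, E, R = sp.symbols("V v c_A0 k_0 E R", positive=True)
cA = sp.Function("c_A")(t)
T = sp.Function("T")(t)
mA, r, k = sp.symbols("m_A r k")

# Canonical (extensive) balance: dm_A/dt = v*c_A0 - v*c_A - V*r
balance = sp.Eq(sp.Derivative(mA, t), v*cA0 - v*cA - V*r)

# Algebraic equivalence transformation: substitute the constitutive equations
intensive = (balance
             .subs(r, k*cA)                        # reaction rate relation
             .subs(k, k0*sp.exp(-E/(R*T)))         # Arrhenius law
             .subs(mA, V*cA)                       # extensive-intensive relation
             .doit())
intensive = sp.Eq(intensive.lhs / V, sp.expand(intensive.rhs / V))
print(intensive)   # dc_A/dt = v*(c_A0 - c_A)/V - k_0*exp(-E/(R*T))*c_A
```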
5.3.3 Model Simplification and Model Building Transformations
Modeling assumptions can be regarded as representations of the engineering activity and decisions during the whole modeling process in constructing, simplifying and
analyzing process models, and they act as modeling transformations on the process models. Assumption-driven modeling works directly with modeling assumptions, thus enabling the definition and handling of process models as structured knowledge with defined syntax and semantics.

Model building assumptions. The modeling assumptions applied in the model building phase determine the structure of the process model, while the assumptions applied to an existing model modify the equations and may even change the structure of the model. The model building procedure is seen as a sequence of model building and specification assumptions, and their associated transformations, as well as algebraic transformations applied to a process model. This way of assumption-driven model building offers a systematic, incremental way of constructing a process model in its canonical form.

Model simplification assumptions. The model simplification assumptions, which can either be elementary (atomic) or composite, composed of a conjunction of elementary assumptions, can be formally described as model transformations. These transformations are projections in a mathematical sense and are often performed in two substeps:
1. adding the equality describing the assumption to the already existing set of model equations and performing algebraic transformations (for example substitutions) to obtain a simpler form;
2. adjusting the set of differential, algebraic and design variables to satisfy the degree of freedom requirement.

The effect of a simplification assumption on a given set of equations is computed by following all of the implications of the assumption through syntactical and semantic rules. Formally this is performed by substituting the assignment equations describing the assumptions into all of the original model equations and then performing rearrangements using algebraic transformations. It is important to note that not every simplification transformation is applicable to a particular process model. Moreover, a transformation may influence only part of a process model, and this effect then propagates through the relationships between the model elements. Forward reasoning can be applied to find all of the implications of a simplification transformation, and the effect of a composite transformation is computed by generating a sequence of simplified process models. It is important to note, however, that the resultant model may be different if the order of the assumptions is changed, because model simplification transformations may be related and noncommutative (Hangos and Cameron 2001a). In conclusion, we can say that algebraic manipulations can be regarded as equivalence transformations, and model simplification and enrichment assumptions as general nonequivalence modeling transformations acting on process models that bring the process model out of its original model class.
5.3.4 Model Discrimination and Model Comparison
Model discrimination aims to find an exact relationship between two given process models of the same process system. First, we have to determine whether these models were developed for the same modeling goal, and if so, whether they are algebraically equivalent. Unfortunately, there are no standard formal ways of performing these two basic tasks, mainly because of the lack of knowledge on the formal description and utilization of the modeling goal itself (see Lakner, Cameron and Hangos 2003). In the case of functionally equivalent models one should only perform model comparison to investigate whether they are algebraically equivalent and, if not, give their relationship in terms of the model simplification transformations that lead from the more detailed model to the simpler one (see Section 5.4.3).

Canonical form. We have already seen in Section 5.3.1 that both functionally and
algebraically equivalent process models form a model class. It is then useful to define and use a canonical form of a process model class, which is a member of the class having each model element in a form that has a clear engineering meaning (Hangos and Cameron 2001). The differential equations of a process model in canonical form are the conservation balance equations for the overall mass, the total energy (or enthalpy) and all but one of the component masses in their extensive form, containing terms for the convective, transfer and source transport. These are supplemented by constitutive algebraic equations of standard categories such as intensive-extensive relationships, reaction rate equations, thermodynamic state equations (such as the ideal gas law), etc. The simple process model shown in the left-hand side of Fig. 5.1 is in its canonical form.

Comparison of process models. In the case of algebraically different but functionally equivalent process models, we aim at finding out whether these models are from the same model class or not. The general procedure for comparing such models is to first bring them into their canonical form and then compare them element-wise, following the hierarchy of the model elements from the top (balance volumes) down (model variables and parameters). This approach requires that the models being compared are given with all their model elements hierarchically arranged.
5.4 Model Tuning
Process models generated for a given modeling goal may be over-simplified or over-complicated for another use, which is why there is usually a need to extend or simplify the model, or, in the worst case, to generate a new one. In addition, process models usually contain unknown parameters to be estimated using measured data, when we need to calibrate the model to tune it to meet the modeling goal.
5.4.1 Model Simplification
In the model equation constructing step of the seven-step modeling procedure we may need to perform a model simplification phase to refine an already defined process model by additional simplifying modeling assumptions. These simplifying assumptions can be described by triplets and are usually translated into additional mathematical relationships. The simplification procedure itself consists of two main steps: the implication of the modeling assumptions on the model equations with the aid of syntactical and semantic rules, and the rearrangement of the resulting equations using formal algebraic transformations. Because the process model elements are related to each other by a well-defined syntax and semantics, a model element cannot be simplified independently of the others. In addition, the implications of a simplification assumption depend both on the assumption and on the structure of the model. For example, when a modeling assumption is related to a balance volume, its implications can refer not only to the equations of that balance volume, but can also modify related balance equations in other balance volumes. In the same way, a modeling assumption related to a term in a balance equation can imply modifications of other terms in other balance equations. Figure 5.3 shows how a modeling assumption on the mass convective term of a balance volume changes the energy and component mass convective terms when a simplification assumption is applied to the model of the jacketed tank reactor depicted in Fig. 5.1. The implications of an assumption on the model equations can be determined by forward reasoning, where all of the implications of the assumption are computed by respecting syntactical and semantic rules. The set of resulting model equations at the end of the implication stage is rearranged into an easily solvable form by using algebraic transformations (Lakner et al. 1999).
[Figure 5.3 A simplified process model: the balances and constitutive equations of the jacketed tank reactor after applying the simplification assumption that the mass convective outlet of the tank is negligible.]
5.4.2 Model Extension
Model extension procedures are widely applied in process modeling in quite different contexts:
0
0
At the end of a modeling cycle, in the model validation step, it may turn out that the developed model fails to fulfill the modeling goal. Then one has to extend the model by including additional model elements that were originally neglected, such as balance volumes, balances, mechanisms, etc. The incremental assumption-driven model building (Williams et al. 2002) uses model extension procedures, too. The incremental building of balance equations in the model equation constructing procedure can also be seen as model extension (see Section 5.2.2).
There are two questions of critical importance in model extension procedures: the selection of default values and the methods of ensuring incremental consistence. Default values set the value of model elements belonging to a model element (the “children elements” in the model element hierarchy) that is just being created. Incremental consistency is ensured by allowing one to add only such new model elements to an already existing consistent model that are not in conflict or contradiction to any already existing model element. Details about these questions can be found in the literature on computer-aided process modeling (CAPM)tools (see, e.g., Jensen-Krogh 1998; Modkit 2000). Empirical model building. This is a special method of model extension using empirical data and grey box models (Hangos and Cameron 2001). It is a top-down approach of model extension where the model element(s)to be changed or extended is (are)determined in a heuristic black box way by using sensitivity analysis. The submodel of the new element is also constructed in a heuristic black box way from some general approximating model class with its parameters estimated using measured data (see also model calibration in Section 5.4.4).
5.4.3 Model Comparison by Assumption Retrieval
In order to avoid any inconsistency during model simplification and extension, it is extremely useful to register explicitly all modeling assumptions applied in the construction and modification of the process model. In the ideal case the documentation of the model contains these modeling assumptions (Hangos and Cameron 2001a), but this documentation can often be incomplete or even missing. In order to complete the model documentation with all modeling assumptions, an assumption retrieval procedure can be used. The retrieval of modeling assumptions from a pair of process models for model analysis and comparison is an important but unusual problem, where not only
efficient algorithms but also the engineering understanding is lacking. The reason for this is that assumption retrieval can be regarded as the inverse task of model transformation, where modeling assumptions are determined from two related (one detailed and one simplified) process models of the same process system. As model transformations are projections in the mathematical sense, it is not possible in general to fully retrieve the original model from the transformed one and from the transformations. Because of this, the result of assumption retrieval from the original and the transformed models may not be, and in general will not be, unique. Because of the nonuniqueness of the assumption retrieval task, an intelligent exhaustive search algorithm (Lakner et al. 2002) is needed for its solution.

5.4.4 Model Calibration (Model Parameter Estimation)
Process models developed from first engineering principles almost always contain model elements, model parameters and/or other elements, like reaction rate expressions, the values of which are unknown. While the modeling approach fixes the structure of the model, these unknown elements make the model "grey," that is, partially unknown. Measured data from the real process system to be modeled are used, along with model parameter and/or structure estimation methods, to fine-tune the model so that it meets the modeling goal. This fine-tuning of process models is called model calibration and is a standard step in the seven-step modeling procedure (Hangos and Cameron 2001). There are several key points to take special care of when performing model calibration:
- Selection of model parameters to be estimated. Besides the truly unknown model parameters, one often has parameters or model elements with large uncertainty associated with their values. If the model is sensitive with respect to the values of these uncertain parameters, then it is advisable to consider them as unknown and estimate their values using measured data (Németh et al. 2003).
- Nonlinear parameter estimation. In most cases the parameters to be estimated enter the model in a nonlinear way and the model itself is dynamic. This makes the parameter estimation problem especially difficult; it can only be solved by numerical optimization techniques (Hangos and Cameron 2001; Ailer et al. 2002). A small sketch follows this list.
- Quality of the data and of the estimated parameters. The statistical nature of model parameter estimation requires one to check carefully the following key ingredients and properties of the parameter estimation method:
  - quality of the measured data (steady state, no outliers and gross errors, etc.) and the presence of sufficient excitation;
  - quality of the prediction error sequence (whether it is a realization of a white noise stochastic process);
  - quality of the estimated parameters, considering their unbiasedness, variance, and covariance matrix.
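A compact sketch of such a calibration is given below. It assumes a simple first-order reaction model, synthetic isothermal "measurements", and hypothetical parameter values (none of which come from the chapter); it combines an ODE solver with a nonlinear least-squares optimizer, which is the typical numerical setting mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R = 8.314
t_meas = np.linspace(0.0, 600.0, 13)              # s
temps = (320.0, 340.0)                            # two isothermal experiments, K

def simulate(theta, T):
    # First-order decay dcA/dt = -k0*exp(-E/(R*T))*cA with cA(0) = 1 (illustrative)
    k0, E = theta
    rate = lambda t, c: -k0 * np.exp(-E / (R * T)) * c
    sol = solve_ivp(rate, (t_meas[0], t_meas[-1]), [1.0], t_eval=t_meas)
    return sol.y[0]

def residuals(theta, data):
    return np.concatenate([simulate(theta, T) - y for T, y in zip(temps, data)])

# Synthetic "measurements" generated with assumed true parameters plus noise
theta_true = (5.0e4, 5.0e4)                       # k0 [1/s], E [J/mol]
rng = np.random.default_rng(1)
data = [simulate(theta_true, T) + rng.normal(0.0, 0.01, t_meas.size) for T in temps]

fit = least_squares(residuals, x0=(1.0e4, 4.0e4), args=(data,),
                    bounds=([1.0, 1.0e3], [1.0e9, 2.0e5]))
print("estimated k0, E:", fit.x)
```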
5.5 Model Verification
Having completed the model equation constructing step of the seven-step modeling procedure (see Section 5.2.2), one needs to perform model verification, that is, to check the model against engineering insight and expectations before attempting its solution. Model verification includes checks of syntax and semantics, the well-posedness of the model in the mathematical sense, and the analysis of computational and dynamic properties.
5.5.1 Formal Methods for Checking Syntax and Semantics
Before a mathematical model is used for solvability analysis or is solved, it is useful to check and ensure its consistency. There are several methods for consistency checking that are applicable both in computer-aided modeling tools and in process systems engineering practice:
- Dimension analysis is a useful, simple check of the consistency of the model equations in terms of units of measure (a small sketch is given after this list). This very useful but not widespread method is used, for example, in the ASCEND (Evans et al. 1979) and VeDa (Bogusch and Marquardt 1997) modeling languages.
- Syntactical verification methods (checking bracketing, vector operations, etc.) are especially important for computer-aided modeling tools in which the model equations can be defined directly by the users. An example is the ICAS/ModDev modeling system (Jensen-Krogh 1998).
- Logical checking (hierarchical consistency, material characterization consistency, chemical reaction rate equation derivation, etc.) is used for examining the consistency of the modeling assumptions. This very important verification method, accomplished before generating the model equations, is used in the majority of computer-aided modeling tools.
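A dimension check can be mimicked in a few lines. The sketch below is a toy illustration with assumed SI exponents (it is not a real CAPE tool); it verifies that each term of an energy balance carries the units of dU/dt.

```python
# A toy dimensional-consistency check (an illustration only; real CAPE tools attach
# units to every variable). Units are exponent maps over the SI base dimensions.
from collections import Counter

def unit(**exps):                        # e.g. unit(kg=1, s=-1) is a mass flow
    return dict(exps)

def mul(*units):                         # multiply quantities = add unit exponents
    total = Counter()
    for u in units:
        total.update(u)
    return {dim: e for dim, e in total.items() if e != 0}

units = {
    "dU/dt": unit(kg=1, m=2, s=-3),      # J/s
    "v":     unit(m=3, s=-1),
    "rho":   unit(kg=1, m=-3),
    "cp":    unit(m=2, s=-2, K=-1),      # J/(kg K)
    "T":     unit(K=1),
    "Q":     unit(kg=1, m=2, s=-3),      # J/s
}

terms = {
    "convective inlet term v*rho*cp*T": mul(units["v"], units["rho"], units["cp"], units["T"]),
    "heat removal term Q":              units["Q"],
}
for name, u in terms.items():
    ok = (u == units["dU/dt"])
    print(f"{name}: {'consistent' if ok else 'INCONSISTENT'} with dU/dt")
```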
It is important to note that the above model verification methods are applicable only for partial consistency checking; they cannot ensure full model consistency.
5.5.2 Structural Analysis of Computational Properties of Process Models
The structural analysis of dynamic lumped process models forms an important step in the seven-step modeling procedure and is used for the determination of the solvability and computational properties of the model. This analysis includes the determination of the degrees of freedom (DOF), the differential index and the structural components of the model.
Analysis of DOF and differential index. In order to solve a mathematical model, a sufficient number of variables have to be specified so that the number of unknown variables exactly equals the number of equations. The DOF, i.e., the difference between the number of unknown variables and the number of equations in the mathematical representation, is equal to the number of variables that must be specified to obtain a solvable equation system. There are three possible values of the DOF (Hangos and Cameron 2001):
- DOF = 0. This implies that the number of independent unknowns and independent equations is the same and a unique solution may exist.
- DOF > 0. This implies that the number of independent variables is greater than the number of independent equations and the problem is underspecified. In this case some of the independent variables have to be specified by external considerations in order for the DOF to be reduced to zero.
- DOF < 0. This implies that the number of independent equations is greater than the number of independent unknowns and the problem is overspecified; some equations or specifications have to be removed or reconsidered to obtain a solvable system.
It is important that the DOF analysis can be applied both to the entire equation system and to subsets (mass-related equations, energy-related equations, etc.) of the model equations separately. The differential index of a DAE is defined as the minimum number of differentiations with respect to time that the algebraic system of equations has to undergo to convert the system into a set of ordinary differential equations (ODEs) (Hangos and Cameron 2001). The index of a pure ODE system is zero by definition. When the Jacobian of the algebraic equation set of the DAE is of full rank, then the index of the DAE is one. In this case the initial values of the differential variables can be selected arbitrarily, and the DAE can easily be solved by conventional methods such as Runge-Kutta or backward differentiation methods. If, however, the index is higher than one, special care should be taken in assigning the initial values of the variables, since some "hidden" constraints lie behind the problem specifications.

Structural decompositions. Effective graph-theoretical methods have been proposed in the literature, based on the analysis tools developed by Murota et al. (1987), for the determination of the most important solvability properties of lumped dynamic process models (Leitold and Hangos 2001): the differential index and the structural components. The analysis is based on constructing the structural representation graph of the DAE model equations, where the variables are represented as vertices and the equations as edges (dependencies) between vertices. Labels are associated with the vertices of the graph, indicating the computational property of the associated variable. The reduced representation graph, together with the L- and M-components and their hierarchy, is determined by the analysis, which can effectively be used to select a suitable numerical solution method and to determine the computational path. In addition, one can artificially structure a DAE model by using algebraic transformations to be able to solve it more efficiently (Robertson and Cameron 1997).
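The DOF count and the structural-rank part of such an analysis are easy to illustrate. The sketch below uses a made-up incidence structure (the equation names and variable occurrences are assumptions of the example) and a simple augmenting-path matching; it is only a toy for the idea, not the graph-theoretical machinery of Murota or Leitold and Hangos.

```python
# A toy structural solvability check (illustration only, hypothetical incidence data):
# count the degrees of freedom and look for a complete equation-variable matching,
# i.e., a full structural rank of the incidence structure.
incidence = {                                   # equation -> variables occurring in it
    "mass balance":    {"M", "v_out"},
    "energy balance":  {"U", "T", "T_0", "v_out", "Q"},
    "holdup":          {"M"},
    "internal energy": {"U", "M", "T"},
    "heat transfer":   {"Q", "T"},
}
variables = sorted(set().union(*incidence.values()))
dof = len(variables) - len(incidence)
print("variables:", len(variables), " equations:", len(incidence), " DOF:", dof)

def match(eq, assigned, visited):
    """Try to assign a variable to equation eq via augmenting paths."""
    for var in incidence[eq]:
        if var in visited:
            continue
        visited.add(var)
        if var not in assigned or match(assigned[var], assigned, visited):
            assigned[var] = eq
            return True
    return False

assigned = {}
rank = sum(match(eq, assigned, set()) for eq in incidence)
print("structural rank:", rank, "of", len(incidence), "equations")
# With DOF = 1, specifying one design variable (e.g. T_0) makes the system square.
```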
5.5.3 Analysis of Structural Dynamical Properties (Controllability, Observability, Stability)
One simple yet powerful method of model verification is to analyze the structural dynamical properties of the developed model and compare the results with engineering expectations. The first step of the analysis is to transform the lumped process model from its DAE form into a nonlinear state-space model form, which is only possible for index-1 models.
Structure graph. The so-called structure graph, a weighted (signed) directed graph (SDG) of the model, is then constructed. It contains the state, input and output variables as vertices, and the model equations determine its directed edges in such a way that a directed edge points towards variable vi from vj when vj appears on the right-hand side of the equation that determines vi. The weight of an edge is the sign of the effect (the sign of the partial derivative) the edge is associated with. Note that an SDG model corresponds to a class of process models with the same structure. We say that a structural dynamical property, such as structural controllability, observability, or stability, holds for a class of process models if almost every member of the class (with the exception of null-measure sets) possesses the property.
Figure 5.4 The SDG of the simple process example in Fig. 5.2 (input: cA0, T0, ...; output: cB, T).
Check of structural properties. Given an SDG model, there are simple-to-check combinatorial conditions for the underlying process model class to be structurally controllable or observable (Hangos and Cameron 2001; Hangos et al. 2001). For example, a process model is structurally controllable (observable) if its state structure matrix is of full structural rank and its SDG graph is input (output) connectable, that is, there exists at least one directed path to every state (output) variable vertex from an input (state) variable vertex. The check of structural stability is more computationally demanding and requires finer qualitative information on the values of the model parameters. A simple, though not exhaustive, general method for checking the structural stability of a process model class is the method of conservation matrices (Hangos and Cameron 2001). Here we use the state matrix of a locally linearized process model and check whether it is a conservation matrix. A real square matrix is a conservation matrix if its diagonal elements are negative, all other elements are nonnegative, and the diagonal elements are row (or column) dominant, i.e., the absolute value of each diagonal element is greater than the sum of the off-diagonal elements in its row (or column).
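These conditions are combinatorial and easy to automate. The sketch below is a minimal illustration, assuming a small hand-built SDG loosely patterned on the example of Fig. 5.4 (the edge list, variable names and state-matrix values are all illustrative): a breadth-first search checks input connectability, and a separate helper tests the conservation-matrix conditions on a locally linearized state matrix.

```python
import numpy as np
from collections import deque

# Hypothetical SDG: edges point from an influencing variable to the variable it affects.
edges = {"cA0": ["cA"], "T0": ["T"], "cA": ["cA", "T"], "T": ["cA", "T"]}
inputs, states = ["cA0", "T0"], ["cA", "T"]

def reachable(sources, edges):
    """All vertices reachable from the given source vertices (breadth-first search)."""
    seen, queue = set(sources), deque(sources)
    while queue:
        v = queue.popleft()
        for w in edges.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

# Input connectability: every state vertex must be reachable from some input vertex.
print("input connectable:", set(states) <= reachable(inputs, edges))

def is_conservation_matrix(A):
    """Negative diagonal, nonnegative off-diagonal, and row-dominant diagonal elements."""
    A = np.asarray(A, dtype=float)
    diag = np.diag(A)
    off = A - np.diag(diag)
    row_dominant = np.all(np.abs(diag) > np.sum(np.abs(off), axis=1))
    return bool(np.all(diag < 0) and np.all(off >= 0) and row_dominant)

# State matrix of a hypothetical locally linearized process model
A = [[-2.0, 0.5],
     [1.0, -1.5]]
print("conservation matrix:", is_conservation_matrix(A))
```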
Acknowledgments
This research was partially supported by the Hungarian National Research Fund through contract numbers TO42710 and TO47198 and is gratefully acknowledged.
References
1 Ailer, P., Szederkényi, G., Hangos, K. M. (2002) Parameter estimation and model validation of a low-power gas turbine. Proceedings of the IASTED International Conference on Modelling, Identification and Control, Innsbruck, Austria, pp. 604-609.
2 Bogusch, R., Marquardt, W. (1997) A formal representation of process model equations. Computers and Chemical Engineering 21, 1105-1115.
3 Evans, L. B., Boston, J. F., Britt, H. I., Gallier, P. W., Gupta, P. K., Joseph, B., Mahalec, V., Seider, W. D., Yagi, H. (1979) ASPEN: An advanced system for process engineering. Computers and Chemical Engineering 3, 319-327.
4 Hangos, K. M., Cameron, I. T. (2001) Process Modelling and Model Analysis. Academic Press, London, pp. 1-543.
5 Hangos, K. M., Cameron, I. T. (2001a) A formal representation of assumptions in process modelling. Computers and Chemical Engineering 25, 237-255.
6 Hangos, K. M., Lakner, R., Gerzson, M. (2001) Intelligent Control Systems: An Introduction with Examples. Kluwer Academic Publishers, pp. 1-301.
7 Jensen-Krogh, J. A. (1998) Generation of Problem Specific Simulation Models Within an Integrated Computer-Aided System. PhD thesis, Danish Technical University.
8 Lakner, R., Cameron, I., Hangos, K. M. (1999) An assumption-driven case-specific model editor. Computers and Chemical Engineering 23, 695-698.
9 Lakner, R., Hangos, K. M., Cameron, I. T. (2002) Assumption retrieval from process models. Computer-Aided Chemical Engineering 9, 195-200.
10 Lakner, R., Hangos, K. M., Cameron, I. (2003) Construction of minimal models for control purposes. Computer-Aided Chemical Engineering 14, 755-760.
11 Leitold, A., Hangos, K. M. (1998) On algebraic model transformations. In: CPM'98, 3rd IEEE European Workshop on Computer-Intensive Methods in Control and Data Processing, Prague, Czech Republic, pp. 133-138.
12 Leitold, A., Hangos, K. M. (2001) Structural solvability analysis of dynamic process models. Computers and Chemical Engineering 25, 1633-1646.
13 ModKit (2000) Computer-Aided Process Modeling (ModKit), http://www.lfpt.rwth-aachen.de/Research/Modeling/modkit.html.
14 Murota, K. (1987) Systems Analysis by Graphs and Matroids. Springer, Berlin.
15 Németh, H., Palkovics, L., Hangos, K. M. (2003) System identification of an electropneumatic protection valve. Research Report SCL-001/2003, Systems and Control Laboratory, Computer and Automation Research Institute, Budapest, Hungary, http://daedalus.scl.sztaki.hu/PCRG/.
16 Robertson, G. A., Cameron, I. T. (1997) Analysis of dynamic process models for structural insight and model reduction. Part I: Structural identification measures. Computers and Chemical Engineering 21, 455-473.
17 Williams, R. P. B., Keays, R., McGahey, S., Cameron, I. T., Hangos, K. M. (2002) SCHEMA: Describing models using a modeling object language. CHEMECA 2002, p. 922.
6 Multiscale Process Modeling
Ian T. Cameron, Gordon D. Ingram, and Katalin M. Hangos
6.1 Introduction
This chapter covers multiscale modeling by discussing the origins of such phenomena in process and product engineering as well as discussing the approaches to the modeling of systems that seek to capture multiscale phenomena. The chapter discusses the development of the partial models that make up the multiscale model, particularly focusing on the characteristics of those models. The issue of partial model integration is also developed through the use of integrating frameworks. Those frameworks are analyzed to understand the important implications of model coupling and computational behavior. Throughout this chapter, reference is made to granulation processing, which helps illustrate the concepts and challenges in this important area.
6.2 Multiscale Nature of Process and Product Engineering
6.2.1 The Origin and Nature of Multiscale Engineering Systems
Multiscale systems, and hence their models, exist due to the phenomena they contain or seek to represent. This is due to the fact that thermodynamic behavior and rate processes undergird our main view of the scientific and engineering world. The underlying phenomena come into focus depending on the granularity of our perspective, which is influenced by the history of scientific investigation and, indeed, our own backgrounds. This perspective ultimately deals with length and time scales that can vary from atomic to global scales, or beyond. We can investigate time scales of nanoseconds to millennia, or length scales from nanometers to light-years, as seen in Villermaux (1996), who illustrated the typical scales dealt with in chemistry,
physics, chemical engineering and astronomy. Hence, we are presented with a spectrum of scales depending on where we wish to view the system under study. Our modeling efforts are simply a mapping of our understanding of these phenomena into a convenient mathematical or physical representation. The amount of scale-related information we incorporate into our models determines the multiscale degree of that representation. In some cases we can work on a single scale of time and/or length, or incorporate two, three or more scales within our models. This latter case is the area of multiscale modeling, which we address here. As evidenced by the literature on the subject of multiscale engineering systems, there has been an explosion of interest since the mid 1990s (Li and Kwauk 2003). Papers on this topic at the start of the 1990s were very few; by 2000 a ten-fold increase in publications had occurred, and the number continues to grow at a phenomenal rate. It is an area of intense research driven mostly by applications in science and engineering, especially materials science, mathematics and physics. Li and Kwauk (2003) and Glimm and Sharp (1997) provide examples from many disciplines. In fact, multiscale models are often multidisciplinary. Within chemical engineering, the multiscale approach facilitates the discovery and manufacture of complex products. These may have multiscale product specifications, that is, desired properties specified at different scales. Biotechnology, nanotechnology and particulate technology, and in fact product engineering in general, are driving the interest in the multiscale approach (Charpentier 2002; Cussler and Wei 2003). In addition, despite the continuing increase in computing power, there are problems of practical interest that will remain intractable when tackled by direct, "brute force" methods. Multiscale techniques provide a way of making these problems feasible. There are a growing number of tools, methods and representations for engineering systems, yet little fundamental conceptual analysis leading to overall frameworks that help guide the modeling of multiscale systems.
6.2.2 Length, Time, Other Scales and their Representation
Because process engineering has its roots in physics and chemistry, the properties of models reflect the underlying time and length scales on which important phenomena occur. These can range from quantum mechanical length and time scales up to global scales of the order of 10^4 m and beyond. At one extreme we are concerned with subatomic behavior, while the other extreme represents global processes that might have characteristic times of years or decades. Small scales are of significant interest in determining product properties, whereas large-scale processes can be of interest to process engineers involved in areas such as climate change, environmental impact and supply chain management. Figure 6.1 shows a general scale map appropriate to process and product engineering. It is an adaptation of work by Grossmann and Westerberg (2000). It is noticeable that there is a general relationship between length and time scales, reflecting the time constants over which phenomena occur at different length scales.
Figure 6.1 A general scale map of length and time scales (labels range from molecules and thin films upward).
It is now widely appreciated that product quality is often determined at scales well below those applicable at the processing or macrolevel. Hence there is intense interest in micro- and nanoscale behavior in product design. For example, the development of granulation processes via drum or pan granulation is a multiscale operation, where final product quality is determined not only at the macroscale processing equipment level, but also at the microscale level of particle formation and interaction. A typical granulation circuit diagram is shown in Fig. 6.2, which highlights the principal operating equipment in the circuit. In this case, the circuit consists of the granulator, where fine feed or recycle granules are contacted with a binder or reaction slurry. Growth occurs depending on a number of operational and property factors. Drying, product separation and treatment of recycle material then follow. For this application, Fig. 6.3 shows a scale map from Ingram and Cameron (2004), which considers the key phenomena as represented by length and time scales within the processes. The scales represent individual particles through to agglomerates and then on to processing equipment and finally the complete circuit. Besides the length and time scales, a detail scale could also be considered, which seeks to develop models with varying degrees of fidelity in relation to the real-world phenomena. This form of scale can consider such issues as:
- The granularity of the system view in terms of the number and types of balance volumes and the degree of aggregation that takes place.
- The number of key mechanisms related to flows, reaction, heat, mass and momentum transfer within the model, and their inclusion or exclusion. Of particular interest are the complexity and fidelity of the constitutive relations. These issues can have a significant impact on the validity region of the resultant process
or product models, as different relations such as equations of state or property models are used.
- Species identification and representation using, for example, "lumped" representations common in pseudo-component approaches to petroleum fractions.
The detail scale complements the traditional time and length scales that are common in multiscale modeling.
Figure 6.2 Typical continuous granulation plant (binder and powder feeds, granulator, dryer, screens and crusher, with recycle of undersize and crushed oversize material).
Figure 6.3 Scale map for granulation processes (characteristic length versus characteristic time, from individual granules up to the complete circuit).
6.2.3 Key Issues in Multiscale Process Modeling
We mention briefly some of the key issues in multiscale modeling before addressing some of those issues related to modeling and integration in Sections 6.3 and 6.4. The principal issues relate to:
- Scale identification and selection for a specific modeling goal: For any modeling task, identifying which scales need to be represented in the final model is an important consideration. As a first step, the literature contains a variety of scale maps and diagrams that show the hierarchical organization in specific application areas (for example, Alkire and Verhoff 1994; Maroudas 2000). Several approaches are elaborated more fully in Section 6.3. However, an understanding of the time scales of interest can often dictate the final scales of length and time that are needed in the model.
- Model representation: In what form does the model exist? This question is often answered by our understanding of the system under study and the phenomena we can identify. It is most often the case that grey-box models are used at several scales, because we often have some mechanistic understanding of the system phenomena. At the same time there are system parameters that are calculated via data fitting or, in some cases, are averaged values from another calculation at a lower scale. A spectrum of models exists from completely black box to mechanistic descriptions.
- Model integration: Model integration refers to linking the partial models that apply at single scales into a composite, multiscale model. It remains a challenging area and one where much is yet to be done to resolve this important issue. Section 6.4 discusses a number of these issues, with reference to several application areas. A number of integration frameworks exist, which possess distinct characteristics of information flow between scales and hence computational and other properties.
- Model solution: Solution of multiscale systems remains another major challenge, especially where distinct model forms are present within the composite or multiscale model. This is a huge area and beyond the scope of the present chapter. Some aspects are briefly discussed in Section 6.4.4.2.
6.3 Modeling in Multiscale Systems
The general approach for developing a process model for multiscale systems is an extension of that for conventional, nonmultiscale process systems (Hangos and Cameron 2001a). This approach, called the "seven-step modeling procedure", includes the following stages:
1. model goal set definition (modeling problem specification);
2. model conceptualization (identifying controlling factors);
3. modeling data: needs and sources;
4. model building and model analysis;
5. model verification;
6. model solution;
7. model calibration and validation.
Thus, this section focuses on the special elements of the extended general approach that make it applicable for modeling multiscale systems as well. Similar to the conventional case, the modeling problem specification consists of the definition of the process system with its boundaries, subsystems, components, mechanisms together with the description of the modeling goal. In the multiscale case the process system description as well as the modeling goal may call for a multiscale model when there are order of magnitude differences in either length or time behavior between the system elements. A separate subsection below deals with the specialties of the modeling goal in modeling multiscale systems. An extended seven-step modeling procedure (Hangos and Cameron 2001a) can also be followed as a general approach in the multiscale case when some of the steps need special care and procedures that are described below.
6.3.1 Multiscale Modeling Strategies
Most often the need for developing a multiscale model arises in step 1 (Problem definition) or step 2 (Identify controlling factors) of the seven-step modeling procedure. Here, one usually identifies the necessary scales that become part of the problem definition. Thus, the first two steps should be repeated for each of the scales to develop individual modeling problem definitions and to identify the relevant controlling factors. In this way a set of scale-driven, related submodels is created. Interest in system behavior over a long period of time (steady-state properties) often excludes phenomena, and hence scales, operating on a very fast timeframe. This eliminates the fast components of the system. In other cases, such as modeling the startup and shutdown performance of processes, the intermediate time scales are of main interest. Here, some exclusion of slow and very fast components can be made (Robertson and Cameron 1997a,b). In addition, modeling decisions should be made on how to organize the information flow between the partial models, that is, to determine the multiscale modeling framework (Ingram, Cameron and Hangos 2004), as discussed in Section 6.4. Having formulated the modeling problems for each of the partial models, identified the controlling factors and reviewed the data available, we can turn to constructing the model in step 4 of the seven-step modeling procedure. There are two fundamentally different approaches to doing this: the bottom-up and the top-down approaches. As their names suggest, bottom-up approaches start with constructing the partial model at the finest resolution scale and proceed towards the coarser scales. Alternatively, top-down approaches start with the coarsest scale partial model
and refine its elements using finer scale submodels if necessary. Two other approaches have also been suggested in the literature. A simultaneous approach, which has been used industrially in the context of new product design (Lerou and Ng 1996), involves developing models at each scale of interest at the same time, and then linking them together. Middle-out modeling is the method of choice in some multiscale biological applications (Noble 2002). It refers to building up a model by starting with the scale that is best understood and has the most data, and then working "outwards" (to finer and coarser scales) from that. In the following, the key elements of the extended modeling procedure, the development and role of the multiscale modeling goal set, and the specialties of the top-down and bottom-up strategies of model construction will be described in more detail.
6.3.1.1 The Role of the Modeling Goal
Any process model is developed for a specific use or possibly multiple uses. These uses influence the goals that the model must fulfill. It is, however, important to recognize that modeling goals normally change, and are refined, deleted or added as the modeling cycle proceeds. This is clearly seen in the modeling process of multiscale systems, where the original modeling goal might indicate the use of a multiscale approach, and then the modeling goal set of any partial model is established and refined incrementally as the modeling proceeds.
Overall Modeling Goal
The modeling goal is typically a statement with three major components:
- the need to develop a model in some relevant form;
- an application of the model for a given purpose;
- a reality that is being modeled.
These three aspects can in turn be decomposed into lower level goals that are applicable to the partial models. This can be seen in the overall modeling goals such as: “Developa model for evaluating control options for chemical vapor deposition (CVD) of...”. The overall modeling goal will determine the number and hierarchy of the scales and their integration framework. Multiscale models are needed if the process system has controlling factors or mechanisms with very different scales covering several orders of magnitude. The modeling goal or the requested modeling accuracy might require partial models of finer granularity. A goal might include inputs and outputs at different scales, e.g., in CVD, where reactor operating conditions (macro) might be achieved such that film microstructure has acceptable smoothness (micro) or simply feasibility. Alternatively,the modeling feasibility can dictate partial model inclusion. An example of such a case can be a dynamic modeling problem of an industrial granulator drum for fault detection and diagnosis purposes. Figure 6.4 shows some multiscale aspects of the granulation system and indicates some of the key informa-
Figure 6.4 A multiscale view of linking granulation phenomena (information flows between circuit- and product-level quantities such as production rate and mean granule size, equipment-scale quantities such as drum design, drum speed and granule size distribution, and particle-scale quantities such as binder film thickness, granule porosity, nucleation rate and coalescence kernel).
tion flows that exist. Because the malfunctions and faults in this equipment can be consequences of the granulation mechanisms, transport phenomena, fluid dynamics and operation procedures affecting the whole system, one needs to have several scales, a granulation particle scale and equipment scales integrated in a multiscale modeling framework. Modeling Goals for the Partial Models
The systematic decomposition of the overall modeling goal to the individual scales is difficult and has not yet been studied well in the literature. If we consider the modeling goal to be a multifaceted statement, then some of the facets originate in the original overall modeling goal, i.e., those goal elements that are relevant to the partial model related to a particular scale are simply inherited from higher level goals. The other, integrating facets in the modeling goal set of a partial model ensure its consistency and purposefulness from the viewpoint of the multiscale integrating framework applied to the modeling problem. This integrating part may contain variables in other partial models to be computed with a given accuracy, and may determine the data or other model ingredients to be used that are delivered by other partial models. Continuing the granulator drum example, the granulation particle submodel will inherit the goal facets related to the malfunctions caused by the granulation process
itself. That is, it should be dynamic and should describe the formation, growth, and breakage of the particles. The integrating facets in the modeling goal set ensure that the model to be developed should produce the source and kernel functions in the granule population balance in every time instance with a given accuracy that is needed to complete the conservation balance equations on the equipment scale. 6.3.1.2 Gradual Model Enrichment or Iterative Deepening
The philosophy of the top-down approach for multiscale model development is very simple: start from the overall system model on the coarsest resolution scale (largest time or space scale) and develop a new partial model on a finer resolution scale if any facet in the overall goal set requires it. This approach can be regarded as iterative model deepening, which is directed by the modeling goal and its sensitivity with respect to the elements of the process model being developed. The top-down approach has been viewed as the best method for process engineering because time and cost pressures favor quick application of the results, with a minimum of detailed modeling. Later on in the lifecycle of the process, model refinement can be applied as required. The iterative model deepening technique is applied in step 4 (Model building and model analysis) of the seven-stepmodeling procedure when the number of scales and the model integration framework have already been selected. We then start from the overall system model on the lowest resolution scale with the overall modeling goal and determine which facets of the modeling goal are not satisfied. By using sensitivity analysis, it is possible to determine which model elements influence the missing goal facets. The model is then enriched by a partial model on a finer resolution scale for the necessary model elements. Repeating the above deepening steps until the entire modeling goal set is satisfied constructs the final multiscale model. It is worth mentioning that the iterative model deepening procedure is similar to the approach applied for empirical model building (Hangos and Cameron 2001a). If we again consider the granulator drum example, we start constructing the overall multiscale model by developing the material and energy balances of the drum and find out that we need finer models for the convective and diffusive flows, together with the source terms describing the particle birth, growth, and breakage processes in the granule population balance. 6.3.1.3 Model Composition
The bottom-up approach of constructing multiscale process models is also applied in step 4 of the modeling procedure as an alternative method to the iterative modeldeepening procedure. The overall model construction starts by building the partial models on the highest resolution scale. These models are then integrated according to the selected multiscale integration framework to prepare the modeling problem statements for the submodels on the next, coarser resolution scale. Bottom-upmodel composition is a common way to build multiscale models, when the submodels originate from different sources and/or are based on different princi-
ples. The advantage of this approach is that substantially different models may be integrated if a suitably selected framework is found. The drawback is that the resulting model is often not homogeneous in its approach, purpose, and accuracy or it may happen that a partial model cannot be suitably integrated. Model composition ensures that the upper level models have a sound fundamental basis and may consequently be more reliable than those developed by model enrichment (Section 6.3.1.2). However there is the risk of starting the modeling process at a level that is too fundamental, which may lead to accurate but inefficient modeling. 6.3.2 Partial Models: Approaches and Classification
In this section we focus on the partial models or submodels of a multiscale process model. For this purpose, we assume a well-posed modeling problem for each partial model, i.e., a system description and modeling goal specificationfor any partial model. This implies that the mechanisms and data available can be determined individually. The approaches of model building of the partial models, that is the development of the model equations, are essentially the same as in the classical case: we may apply mechanistic approaches based on first principles or a black- or grey-box model. Similarly, the classification of the resulting partial models goes along the same lines as in the general, nonmultiscale case. 6.3.2.1 Mechanistic Approaches for Process Models
The mechanistic approach of developing the model equations of a process model uses first principles to construct the ingredients of a partial model. The model-building subprocedure (Hangos and Cameron 2001a) is followed in this case with the following substeps:
1. system and subsystem boundary and balance volume definitions;
2. define the characterizing variables (inputs, outputs, and system states);
3. establish the balance equations for conserved quantities: mass, energy, momentum, number, etc.;
4. transfer rate specifications;
5. property relation specifications;
6. balance volume relation specifications;
7. equipment and control constraint specifications;
8. modeling assumptions.
In the multiscale case, however, steps 1 and 2 need special care, because these are partially determined by the other partial model(s) and by the integrating framework of the overall multiscale model. As the result of a mechanistic approach to partial model construction, we obtain a process model with standard ingredients (see Section 6.3.3.2). This makes it relatively easy to integrate the resulting partial model into a multiscale framework (see Section 6.4).
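A minimal sketch of the outcome of such a mechanistic model-building exercise is given below for a hypothetical lumped, non-isothermal stirred tank: the balance equations of substep 3 form the differential part, and an Arrhenius rate expression plays the role of a constitutive relation (substeps 4 and 5). All parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values for a hypothetical continuous stirred tank
q, V = 0.1, 1.0              # volumetric flow rate [m3/s], balance volume [m3]
cA_in, T_in = 1.0, 300.0     # feed concentration [mol/m3], feed temperature [K]
k0, Ea, R = 1.0e6, 5.0e4, 8.314
dHr, rho_cp = -5.0e4, 4.0e6  # reaction enthalpy [J/mol], rho*cp [J/(m3 K)]

def balances(t, y):
    cA, T = y                                    # characterizing state variables (substep 2)
    r = k0 * np.exp(-Ea / (R * T)) * cA          # constitutive relation: reaction rate
    dcA = q / V * (cA_in - cA) - r               # component mass balance (substep 3)
    dT = q / V * (T_in - T) - dHr * r / rho_cp   # energy balance (substep 3)
    return [dcA, dT]

sol = solve_ivp(balances, (0.0, 200.0), [cA_in, T_in])
print(f"final state: cA = {sol.y[0, -1]:.3f} mol/m3, T = {sol.y[1, -1]:.1f} K")
```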
Table 6.1 Classification of partial models (type of model: criterion of classification).
Mechanistic: based on mechanisms/underlying phenomena.
Empirical: based on input-output data, trials or experiments.
Stochastic: contains model elements that are probabilistic in nature.
Deterministic: based on cause-effect analysis.
Lumped parameter: dependent variables are not a function of spatial position.
Distributed parameter: dependent variables are functions of spatial position.
Linear: superposition principle applies.
Nonlinear: superposition principle does not apply.
Continuous: dependent variables defined over continuous space-time.
Discrete: only defined for discrete values of time and/or space.
Hybrid: containing both continuous and discrete behavior.
6.3.2.2 Black- and Grey-Box Modeling
Usually, either the engineering knowledge or the data needed to construct a fully mechanistic, so-called "white-box" model are not available, and we obtain a partial model with unknown model parameters and/or structural elements. One can then use measured data from the real process to estimate the unknown model parameters or to construct an empirical, so-called "black-box" model for the unknown model element. In this way a fully determined model can be obtained in the model calibration and validation step of the modeling procedure. It is important to note, however, that in most cases it is rather difficult, if not impossible, to calibrate a grey-box partial model, because one only has measured data from the overall process system and not from the subsystems corresponding to the partial models (see Section 6.4.4.3 for more details). In some cases a fully black-box model should be developed by using empirical model building (Hangos and Cameron 2001a), which is similar in its approach to gradual model enrichment. Typical examples of such models are Box-Jenkins models and neural networks.
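The calibration idea can be sketched on a deliberately simple grey-box case (all data below are synthetic and the first-order structure is assumed purely for illustration): the model form is mechanistic, but the unknown rate constant is estimated from measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Grey-box calibration sketch: mechanistic structure, unknown parameter k fitted to data.
def model(t, k, c0=1.0):
    return c0 * np.exp(-k * t)       # first-order decay, assumed model structure

rng = np.random.default_rng(0)
t_meas = np.linspace(0.0, 10.0, 15)
c_meas = model(t_meas, 0.35) + rng.normal(0.0, 0.01, t_meas.size)   # synthetic "plant" data

(k_hat,), cov = curve_fit(model, t_meas, c_meas, p0=[0.1])
print(f"estimated k = {k_hat:.3f}  (value used to generate the data: 0.35)")
```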
6.3.2.3 Model Classification
The classification of the resulting partial models is done similarly to the general case. As the characteristic of the different classes of models have a great impact on the solution techniques and on the application area we briefly recall the criterion of classification and the type of models in Table 6.1 (Hangos and Cameron 2001a).
It is important to observe that a classification criterion generates a pair or triplet of model types and that all classification criteria can be applied to a particular partial model.
6.3.2.4 Particular Modeling Techniques for Different Scales
We make a brief digression here to list some of the techniques that have evolved to describe specific scales. In approximate order of increasing scale, they include:
- computational quantum chemistry and molecular mechanics to deduce basic chemical properties on the electronic/atomic scale;
- molecular dynamics, Monte Carlo and hybrid methods that predict the ensemble behavior of many molecules;
- assorted techniques for front tracking, interface modeling, particle interactions and so on, grouped roughly as "mesoscale" models;
- computational fluid dynamics for detailed flow prediction;
- unit operation modeling and process flowsheet simulation, most familiar to chemical engineers, for vessel and plant scale studies;
- environmental simulation and business enterprise modeling on the "megascale".
Each technique has many variations, both broad and subtle, and will likely require contribution from specialists in the field. This reinforces the cross-disciplinary nature of much multiscale modeling work.
6.3.3 Characteristics of Partial Models
This section deals with the characteristic properties and model elements of partial models in a multiscale process model, with an emphasis on those model properties and ingredients that are important from the viewpoint of multiscale modeling.
6.3.3.1 Model Types
The classification of partial models and the resulting model types are already described in Table 6.1 in Section 6.3.2.3. There are some model types that often arise as partial models in multiscale systems and are therefore of special importance. Lumped and distributed parameter models. Most often both of these types of partial model can be found together in a multiscale model. The models in the finest scale are typically distributed parameter dynamic models, while the models on the coarsest scale are often lumped parameter models. The integration of such mixed-type partial models into a multiscale integration framework needs special care (see Section 6.4). Deterministic and stochastic models. There are some characteristic phenomena often found in multiscale systems, such as fluid dynamics, diffusion, heterogeneous
kinetics and the like, that are often described by using stochastic models on a fine scale. The integration of such partial models into a framework of deterministic models is usually performed by using averages of different types (mean values for stochastic variables, and time and/or space averages).
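A minimal sketch of this averaging step is shown below, assuming a hypothetical particle-scale random-walk model whose ensemble statistics are condensed into a single effective diffusivity for use in a deterministic coarse-scale estimate; all values are illustrative.

```python
import numpy as np

# Stochastic fine-scale model: 1-D random walk of many particles (illustrative values).
rng = np.random.default_rng(1)
dt, step, n_steps, n_particles = 1e-3, 1e-6, 10_000, 500

walks = rng.choice([-step, step], size=(n_particles, n_steps)).cumsum(axis=1)
msd = np.mean(walks[:, -1] ** 2)            # ensemble mean-squared displacement
D_eff = msd / (2.0 * n_steps * dt)          # Einstein relation in one dimension

# Deterministic coarse-scale use of the averaged parameter
t_process = 60.0                            # coarse-scale time horizon [s]
penetration = np.sqrt(2.0 * D_eff * t_process)
print(f"D_eff = {D_eff:.3e} m2/s, penetration depth ~ {penetration:.3e} m")
```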
6.3.3.2 Standard Ingredients of Partial Models
Partial models in a multiscale model have the same standard ingredients as usual process models. In this case, however, it is of crucial importance to have all the ingredients clearly specified in order to be able to integrate the partial models into a multiscale framework. The seven-step modeling procedure ensures that the resulting model possesses all ingredients in a consistent way; therefore mechanistic partial models need no further effort to ensure this.
Balance volumes. Homogeneous or quasihomogeneous parts of process systems over which conservation balances are constructed are called balance volumes. They are fundamental elements of a partial model in a multiscale system. The union of all balance volumes in a partial model spans the entire domain of the partial model, while the balance volumes in different partial models are related in a way determined by the multiscale integration framework.
Model equations (differential and algebraic). The differential equations in a dynamic partial model usually originate from conservation balances, and they are supplemented by constitutive algebraic equations that make the model complete from both the engineering and mathematical viewpoints (Hangos and Cameron 2001b). The algebraic equations are of mixed origin: they describe extensive-intensive relationships, transfer and reaction rate equations, equations of state, physicochemical property equations, balance volume relations, and equipment and control relations. In addition, conservation balance equations are also algebraic equations in static partial models.
Model variables and parameters. Variables in a dynamic partial model are time- and possibly space-dependent quantities, which are either differential variables, if their time derivative appears in the model equations, or algebraic variables otherwise. There are also model parameters present in a partial model; their values are regarded as constant. Some of the variables are given a value by using specification equations in order to make a model with zero degrees of freedom; specification equations are also part of the model. In dynamic partial models, some of the variables, the potential input variables, are also assigned a given "value," namely a time-dependent function.
Initial and boundary conditions. Similarly to any process model, partial models are sets of ordinary and/or partial differential (or integro-differential) and algebraic equations. In order to make the model well-posed in the mathematical sense, we need to give suitable initial and boundary conditions, which are also part of the model.
Modeling assumptions. Although not always stated explicitly, modeling assumptions are key ingredients of any process model, because they document the decisions the modeler has taken while developing the model. In this way modeling assumptions are artifacts of process modeling and serve as key elements in model documentation, verification and consistency checking (Hangos and Cameron 2001a).
6.3.3.3 Particular Considerations for Integrating Multiscale Process Models
Some of the above standard ingredients of partial models have particular significance when integrating partial models into a multiscale framework. These are briefly listed in this subsection, while the way they are used is described in Section 6.4. Variables and parameters. Some of the characterizing variables of the partial models
in a multiscale process model, such as the conserved extensive quantities (mass, component masses, and energy), the thermodynamic state variables (temperature, pressure, and concentrations), and the rate variables, are usually related in a way determined by the integration framework. In addition, some of the model parameters in a partial model of coarser scale may be determined by the modeling output of partial models of finer scales. Such interscale relationships appear in a partial model in the form of additional specifications or equalities for the model parameters. A simple example of such an interscale relation can be the value of the porosity of an equipment scale model determined by another partial model on a particle scale. Initial and boundary conditions. Phenomena occurring at an interphase boundary often call for a partial model describing them in a finer scale. The result of such small scale modeling enters into the equipment-scale model as an expression for the boundary condition of that particular interphase boundary. Constitutive equations. Certain algebraic variables in partial models of coarser
scales, such as reaction or transfer rates, serve typically as connection points between partial models of different scales. Their determining, "connecting" constitutive equations contain variables of partial models at different scales, where the variables of the finer scales are usually averaged to obtain the connecting algebraic variable on the coarser scale. A simple example of such a connection is the reaction rate of a heterogeneous catalytic reaction, where the reaction rate equation serves as a connecting point between the equipment-scale and the catalytic particle-scale variables. As a result of the particle-scale model, an algebraic relationship must somehow be determined that describes the average reaction rate at a point of the equipment as a function of the average concentrations and temperature at the same point at a given time instant.
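The sketch below illustrates such a connecting constitutive equation for the heterogeneous catalysis case, assuming the classical first-order, slab-geometry effectiveness-factor expression as a stand-in for the particle-scale model; all numerical values are illustrative.

```python
import numpy as np

# Particle-scale information condensed into an effectiveness factor (slab geometry,
# first-order reaction), then used as the connecting constitutive equation at a point
# of the equipment-scale model.
def effectiveness_factor(k, D_eff, L):
    phi = L * np.sqrt(k / D_eff)          # Thiele modulus from particle-scale quantities
    return np.tanh(phi) / phi

def equipment_scale_rate(c_bulk, k=5.0, D_eff=1e-6, L=2e-3):
    eta = effectiveness_factor(k, D_eff, L)
    return eta * k * c_bulk               # average (observed) rate at the equipment scale

print(f"effectiveness factor: {effectiveness_factor(5.0, 1e-6, 2e-3):.3f}")
print(f"observed rate at c = 100 mol/m3: {equipment_scale_rate(100.0):.1f} mol/(m3 s)")
```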
6.4 Multiscale Model Integration and Solution
Model integration is the process of linking partial models that exist at different time and length scales into a coherent, composite multiscale model. Integration is the essence of multiscale modeling. If information did not flow between the scales of interest, the model would not be multiscale! Two models sharing information at the same scale does not constitute a multiscale system. In this section we will explore the challenges and status of model integration, compare alternative integration schemes, and address some implementation matters.
6.4.1 The Challenges and Status of Multiscale Integration
The multiscale modeler potentially faces several challenges in performing model integration because it may be necessary to link partial models that: 0
0
0
Span a vast range oflength scales. For example, in electrodepositing of metal onto printed circuit boards, electrical function is influenced by lattice defects of O(lo-'') m, local hydrodynamic and mass transfer processes affecting the deposition rate occur over 0(104) m, and current distribution must be controlled over the entire job, which is 0(1) m in size (Alkire and Verhoff 1994). Operate on very dcrerent time scales. In semiconductor fabrication using chemical vapor deposition (CVD) for instance, individual diffusion and reaction events occur at atomic time scales of 0(10-13) s and thin film growth takes place over minutes 0(102)s, while the total processing time for a multilayer film may be hours or days 0(104)s (Jensen 1998). Have disparate natures. The models may have different dominant phenomena, be continuous or discrete, have different dimensionality, be deterministic or stochastic, exist in concept only or already be implemented in commercial software, and be drawn from different scientific disciplines.
These challenges raise several types of issues that need to be tackled for successful integration (Pantelides 2001): 0
0 0
0
conceptual issues, for example, deciding what information should flow between the scales and how, or investigating the benefits of reformulating the partial models to allow better integration; mathematical issues, regarding how well-posed the problem is, for instance; numerical issues, such as the choice of numerical method for efficient and robust solution of the composite model; application issues, including the software engineering work needed to link diverse software across different platforms.
Despite the difficulties, there are many successful examples of multiscale model integration from a wide range of disciplines. It is partly the diversity of the applica-
I
203
204
I
6 Multiscale Process Modeling
tion areas and the complexity of the implementation details that obscures the common features in these examples. There are the beginnings of a classification scheme for model integration methods, and there is some knowledge of their characteristics. However, we are not yet at the point of understanding, in a quantitative and general way, how the choice of integration method affects the resulting multiscale model nor how to select the best integration method for a particular modeling problem.
6.4.2 The Classification of Model Integration Methods
Multiscale models can be classified by the way the partial models at each scale are linked. The very act of classification is enlightening. It draws out the similarities and differences in the broad range of existing multiscale models. It also makes comparing alternatives easier. Several authors have proposed classification schemes, often just in passing. A widely accepted scheme has not emerged and there is a confusion of terms. We will summarize a few of these classification ideas then recommend a particular one in the next section. Maroudas (2000) divides multiscale models in materials science into two categories: serial and parallel. Serial multiscale models result from sequentially scaling up the partial models: the outputs of the finer scale models become the inputs of the coarser scale ones. By way of contrast, in parallel multiscale models, the partial models exist side by side and are solved together. Stefanovit and Pantelides (2000) identified three ways in which molecular dynamics (MD) for property estimation could be integrated with traditional process models. First, the MD model is run to generate “pseudo-experimentaldata,” which are then fitted to find macroscopic material parameters. The second possibility is sequential like the first, but instead the macroscopic parameters are found directly from the microscale simulations. In the third approach, the macroscopic model calls the microscopic one on demand to find a relationship between macroscopic variables. This technique eliminates the need for macroscopic parameters. Phillips (2001),whose interest is materials science, distinguishes two broad classes of multiscale models: those that work in information passage mode, and models with internal structure. Information passage models are of two subtypes. In the first, the fine-scalemodel is used to generate macroscopic parameters, which are then used by the coarse scale model. In the second, the microscale model is transformed into a macroscale model, that is, a new effective macroscale theory is derived from a microscale one. Models with internal structure, which in Phillips’ area are typically “mixed atomistic-continuum models,” are where the microscopic model is adjacent to or embedded within the macroscopic model and they are solved together. Guo and Li (2001) and Li and Kwauk (2003) are concerned with predicting the structure of dynamic, multiscale systems with competing mechanisms. They describe three modeling approaches of increasing complexity and illustrate them by referring to gas-solid fluidization. The first approach is descriptive: each scale of interest is modeled separately without attempting to link them together. The second is
correlative, where fine scale information is scaled up for use in coarser scale models. Volume-averaging of parameters is a possible technique, for example. In the third approach, variational, the partial models are solved together subject to the minimization of some quantity that reflects the strengths of the competing mechanisms that influence the system structure. The last approach can handle regime transformations. These schemes represent different views of multiscale integration, which reflect their originators’ backgrounds and modeling objectives. They all distinguish between sequential and simultaneous application of the partial models, and most discriminate between variations on the sequential method. Pantelides (2001) presented a classification scheme for multiscale process modeling with four categories, serial, simultaneous, hierarchical and parallel, that encompass most of the ideas above. Ingram and Cameron (2002) refined and extended this scheme with an additional class in order to distinguish between different methods of simultaneously combining partial models.
6.4.3 A Framework Classification for Process Modeling
We introduce the term framework to describe the way partial models, which apply at different scales, are linked, or integrated, to form a composite multiscale model. Figure 6.5 shows the extended version of Pantelides' classification scheme for multiscale frameworks that is proposed for process modeling. The main division is between decoupling and interactive frameworks. The serial and simultaneous frameworks decouple the solution of the partial models, so that one partial model is solved (in some sense) and then the others are solved in turn. In contrast, the interactive frameworks (embedded, multidomain, and parallel) essentially involve simultaneous solution of the constituent partial models.
Figure 6.5 Classification scheme for multiscale models based on the framework used to link the partial models (decoupling versus interactive frameworks).
Figure 6.6 Domain relationships and information flows in models with microscale and macroscale parts: (a) simultaneous; (b) serial, simplification of the microscale model; (c) serial, simplification of the macroscale model; (d) serial, transformation; (e) serial, one-way (micro to macro); (f) serial, one-way (macro to micro); (g) embedded; (h) multidomain; (i) parallel.
Another view of the classification scheme, expressed in terms of the model domains and the information flows between them, appears in Fig. 6.6. The framework classification attempts to define the broad conceptual options for linking partial models. It is not intended to discriminate between the very specific techniques that are used in particular applications. The following sections explore the frameworks in more detail, giving a short description, common application situations, some advantages and disadvantages, and the main challenges associated with each. The integration of two partial models at a time is discussed. The fine and coarse scale models are referred to as microscale and macroscale models, respectively.
6.4.3.1 Simultaneous Integration Framework
Description
In simultaneous integration, the microscale model is used to describe the system in its entirety (Fig. 6.6a). This approach corresponds, for example, to using discrete element modeling to predict the trajectory of every particle (microscale level) in a fluidized bed (macroscale level), or the use of computational fluid dynamics (CFD) (microscale) to model the detailed fluid flow in a complex vessel (macroscale). The macroscale "model" simply summarizes or interprets the detailed microscale results, usually by totalizing, averaging or otherwise correlating the microscale data. This is why the macroscale model in simultaneous integration could better be called the "macroscale function." In the fluid bed example above, the macroscale function might estimate the average bed expansion or gross solids circulation rate, while in the CFD case, the macroscale function might calculate the residence time distribution. The microscale and macroscale models are decoupled in the sense that information is transferred in one direction only: from the microscale to the macroscale.
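The one-way, micro-to-macro character of this framework can be illustrated with the following sketch, in which the "microscale state" is simply a set of randomly generated particle positions and velocities and the macroscale function merely totalizes and averages them; all quantities are purely illustrative.

```python
import numpy as np

# Hypothetical microscale state: positions and velocities of many particles.
rng = np.random.default_rng(2)
n_particles = 10_000
position = rng.uniform(0.0, 1.0, size=(n_particles, 3))
velocity = rng.normal(0.0, 0.05, size=(n_particles, 3))

def macroscale_function(position, velocity, n_bins=10):
    """One-way micro-to-macro information flow: summarize the microscale data only."""
    holdup, _ = np.histogram(position[:, 2], bins=n_bins, range=(0.0, 1.0))
    return {"mean_speed": float(np.linalg.norm(velocity, axis=1).mean()),
            "axial_holdup_profile": holdup / n_particles}

summary = macroscale_function(position, velocity)
print(summary["mean_speed"])
print(summary["axial_holdup_profile"])
```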
Simultaneous integration is used when: 0
0
0
it is not possible to model any part of the system with sufficient accuracy at the macroscale level; some part of the region can be successfully modeled at the macrolevel, but the micro- and macroregions cannot be meshed together well enough (this is multidomain integration; see section 6.4.3.4); it is desired to view the system entirely at the microlevel.
It also serves as a baseline integration method against which other frameworks can be tested because it has “zero integration error” (Solomon 1995; Werner 1999). Advantages and Disadvantages
The advantages of simultaneous integration include the potentially high levels of detail, flexibility and accuracy. The main disadvantage is the very high computational burden, the highest of all the frameworks. This limits the size of the system and the length of time that can be simulated. Artificially reducing the system size risks failing to capture large-scale/long-timeeffects (McCarthyand Ottino 1998). Despite continuing growth in computer power, at least for the intermediate future, there will be problems of practical interest where microscale modeling of the entire system is impossible (Chan and Dill 1993). Simultaneous integration will also likely generate a large amount of detailed microscale data that is largely irrelevant to the modeling objective.
I
207
208
I
G Multiscale Process Modeling
Challenges
These include: 0
0
0
improving the accuracy of the microscale model, because any microscale error will flow to the macroscale; increasing solution efficiency via improved numerical algorithms and distributed computing; recognising when simultaneous integration is truly necessary.
6.4.3.2 Serial Integration Framework
Description
There are three broad, partially overlapping possibilities in serial integration: simplification, transformation and one-way coupling. 1. Simplification The microscale model covers a small part of the system domain. Usually, it is this model that we “simplify” (Fig.6.6b). In this case, the microscale model is simplified by just “fitting a curve” to computed input-output data, by systematic order reduction methods, or by analytical solution if possible. The simplified or solved microscale model creates a relationship between macroscopic variables that is easier to evaluate than the complete solution of the original microscale model. We should perhaps refer to the microscale model as the “microscale function.” The macroscale model spans the system domain and calls the microscale function on demand. This kind of integration is used, for example, to link local and global scales in climate modeling. Mechanistic city-scale calculations relating atmospheric and urban variables to pollutant fluxes can be approximated by polynomials, which are then used in a global atmospheric chemistry model to predict climate change under different development scenarios. A more familiar example may be the analytical solution of the reaction-diffusionequations in a porous catalyst to yield a Thiele modulus-effectivenessfactor relationship that is then used in reactor scale modeling. In serial integration by simplification, information flows in both directions between the micro- and macroscales. However, the framework is decoupling in the sense that the solution process has two stages. First, the microscale model is first simplified to a “microscale function,” and then the macroscale model is solved, which involves calling the microscale function. Often the microscale model is “simplified”,but sometimes it is the macroscale one (Fig.G.6c). For example, if the focus of a modeling exercise were a particular unit in a process, then a model of the process excluding that unit could be built and then simplified, to provide a computationally cheap approximation to the operational environment of the unit. In this case, there is a “macroscalefunction” (the rest of the plant) that is called as required by the microscale model (the particular unit). 2. Transformation Another possibility in the serial integration framework is transformation. The microscale model describes a small part of the system domain. It is “formally
6.4 Multiscale Model Integration and Solution
transformed” into a macroscale model (Fig.G.Gd). This process is also called upscaling, coarse-graining, degree of freedom thinning, and constructing new effective theories or laws. Now, the microscale model is no longer needed and the system domain is described entirely by the new macroscale model. Many techniques are used for upscaling: volume averaging, renormalization and homogenization, among others. The three named methods have been used to upscale the equations for diffusion in porous media. No flow of information occurs between the microscale and macroscale models during solution because, in effect, the microscale model has been eliminated. Only the macroscale model must be solved. It is a decoupling framework in this sense. 3. One-way Coupling The third variation on the serial integration framework occurs when, because of the nature of the system, information flows naturally between the scales in one direction only (Fig.G.Ge,f).Or, that the approximation is close enough to satisfy the modeling goal. Physical vapor deposition (PVD) is an example. A vessel scale (macro) model predicts the average spatial distribution of metal sputtered onto a substrate in a PVD chamber. A “feature scale” (micro) model can then track the build up of the deposit layer on features, such as holes and trenches, on the substrate surface. Another example is the use of (microscale)molecular dynamics to calculate a diffusion coefficient, which is later used at the vessel (macro) scale via Fick‘s law. The information may flow either from the microscale to the macroscale, or vice versa. The independent model is solved first, and then the dependent one is solved. In this framework the solution of the models is decoupled in one direction. Application
The use of serial integration depends on the broad strategy chosen:
• Simplification. Virtually any microscale (or macroscale) model can be simplified by order reduction or approximation techniques; some models may be solved analytically under appropriate simplifying assumptions.
• Transformation. The ability to use transformation depends on the upscaling method chosen and the nature of the system. For example, sufficient "scale separation" is needed for homogenization (Auriault 1991).
• One-way coupling. This method can be used when the nature of the system is such that one scale is dependent and the other is independent, or at least we can treat them as such. There is no feedback between the scales.
Many well-known relationships in science and engineering can be viewed as applications of serial multiscale integration by simplification or transformation. Examples include: equations of state derived from the kinetic theory of gases, the rheological equation for Newtonian fluids, Thiele modulus-effectiveness factor expressions for reaction-diffusion problems, and even, in some sense, Newton’s law of universal gravitation (Phillips 2001).
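To make the "microscale function" idea concrete, the following is a minimal sketch of serial integration by simplification, assuming an isothermal first-order reaction in a slab catalyst pellet, for which the pellet-scale solution collapses to the classical effectiveness-factor relation eta = tanh(phi)/phi. The pellet model stands in for an expensive microscale calculation: it is sampled once, a cheap surrogate is fitted to the input-output data, and the surrogate is then called on demand by a macroscale plug-flow reactor march. All function names and parameter values are illustrative assumptions, not taken from this chapter.

```python
import numpy as np

def pellet_model(phi):
    """Stand-in for an expensive microscale model: analytical effectiveness
    factor for a first-order reaction in an isothermal slab pellet."""
    return np.tanh(phi) / phi

# Step 1 (simplification): run the microscale model off-line and fit a cheap
# surrogate ("microscale function") to the computed input-output data.
phi_data = np.linspace(0.1, 10.0, 50)
coeffs = np.polyfit(np.log(phi_data), np.log(pellet_model(phi_data)), deg=4)

def microscale_function(phi):
    """Surrogate effectiveness factor, valid on the sampled range of phi."""
    return np.exp(np.polyval(coeffs, np.log(phi)))

# Step 2: the macroscale model (an isothermal plug-flow reactor, solved by a
# crude explicit march) calls the microscale function on demand.
k, D_eff, half_thickness, u = 2.0, 1.0e-6, 1.0e-3, 0.5   # assumed kinetic/transport data
phi = half_thickness * np.sqrt(k / D_eff)                  # Thiele modulus (constant here)
c, dz = 1.0, 0.01                                          # inlet concentration, axial step
for _ in range(200):                                       # 2 m of reactor length
    rate = microscale_function(phi) * k * c                # effective volumetric reaction rate
    c -= rate * dz / u
print(f"Thiele modulus = {phi:.2f}, outlet concentration = {c:.4f}")
```

In the serial framework the fitting happens once, before the macroscale solve; in the embedded framework of Section 6.4.3.3 the pellet model itself would instead be called inside the reactor loop.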
Advantages and Disadvantages
The principal advantage of the serial approach is the elimination of the expensive, detailed microscale model. Consequently, serial integration can potentially produce the most efficient models among the five frameworks. Many powerful mathematical techniques for order reduction, analytical solution and upscaling can be applied to the microscale model. These techniques not only enhance calculation efficiency, but also, and perhaps more importantly, highlight the essential nature of the microscale model. They strip away unnecessary detail to reveal how (a few) key variables influence macroscale behavior. The disadvantages include the restricted accuracy and flexibility of the approach. Considerable effort may be needed to apply the mathematical techniques referred to above.
Challenges
The basic challenge is to find an appropriate balance between fidelity and computational cost through manipulation of the microscale model. This includes learning which mathematical techniques are best used to simplify or transform the microscale model, and knowing when one-way coupling is acceptable. A further challenge is to provide mechanisms for revising the microscale representation as required (Oran and Boris 2001, p. 439).
6.4.3.3 Embedded Integration Framework
Description
The microscale model is "formally embedded" within the macroscale model in this framework (Fig. 6.6g), which was termed hierarchical by Pantelides (2001). The macroscale model spans the system domain, while the microscale model is local, restricted to a relatively small part of the domain. The microscale model calculates, on demand, a relationship between macroscale quantities. Hence, while its domain is small, the microscale model may be called (instantiated) at many points through the system. Ab initio molecular dynamics is an example of the embedded framework. In this application, the macroscale model is an MD simulation that tracks the motion of each molecule based on the forces that act upon it. The microscale model is an electron-atom scale computational chemistry method, such as density functional theory, that calculates the intermolecular force (potential) function on the fly as the MD simulation proceeds. The embedded approach is a true, interactive multiscale method because information is passed between two models that are actively being solved.
Application
This framework is used when a suitable macroscale model exists but needs to be "informed" by localized microscale simulation, and the microscale model cannot be acceptably simplified. If a suitable simplification of the microscale model were available, serial integration via simplification (Section 6.4.3.2) should be used instead to reduce computing demands.
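The structure of an embedded calculation can be sketched as follows: a macroscale transport model calls a local microscale model on demand in every grid cell and at every time step. Here the "microscale model" is only a cheap stand-in (a local surface balance solved by fixed-point iteration); in a real application it could be a kinetic Monte Carlo or quantum chemistry calculation. All names and numbers are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def microscale_model(c_bulk, k=5.0, k_film=20.0, tol=1e-10):
    """Stand-in microscale model: solve the local surface balance
    k_film*(c_bulk - c_s) = k*c_s**2 for c_s by fixed-point iteration
    and return the observed local reaction rate k*c_s**2."""
    c_s = c_bulk
    for _ in range(200):
        c_new = c_bulk - (k / k_film) * c_s**2
        if abs(c_new - c_s) < tol:
            break
        c_s = c_new
    return k * c_s**2

# Macroscale model: 1-D advection-reaction, explicit upwind march towards steady state.
n, dx, dt, u, c_in = 50, 0.02, 0.05, 0.1, 1.0
c = np.zeros(n)
for _ in range(400):                                        # roughly two residence times
    rate = np.array([microscale_model(ci) for ci in c])     # embedded call in every cell
    upstream = np.concatenate(([c_in], c[:-1]))
    c = np.clip(c + dt * (u * (upstream - c) / dx - rate), 0.0, None)
print(f"Outlet concentration near steady state: {c[-1]:.4f}")
```

The cost implication of the framework is visible here: the local solve runs once per cell per step, whereas the serial variant sketched earlier would replace it by a pre-fitted function evaluated at negligible cost.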
Advantages and Disadvantages
Embedded integration has a natural appeal because of its orderly, hierarchical nature. It potentially has the flexibility and accuracy of simultaneous integration, but with a much lower computational load. The detailed, expensive microscale calculations are performed only where and when they are required. A disadvantage is the need to run the microscale model at all, because it may still consume the bulk of the computing resources. Because the micro-macro interface in embedded and serial (simplification) integration may be similar, it should be possible to swap these methods with little change needed in the macroscale model.
Challenges
The challenges include:
• finding the smallest domain and shortest time needed to simulate the microscale model to provide accurate results with a minimum amount of calculation;
• enhancing the micro-macro interface beyond that used in traditional serial (simplification) modeling. For example, Stefanović and Pantelides (2000) show a new approach to linking molecular dynamics information with unit operation modeling.
6.4.3.4 Multidomain Integration Framework
Description
In the multidomain framework, the microscale and macroscale models describe separate but adjoining parts of the whole system (Fig. 6.6h). It is sometimes called "hybrid modeling" because it is often used to create hybrid, discrete-continuous models. There are many multidomain models in materials science. In investigating the fracture of brittle solids, for example, the region around the growing crack could be modeled with a discrete, atomistic, microscale method: molecular dynamics. Relatively far from the crack, the solid could be modeled at the macroscale level with a continuum technique, such as the finite element method. The interface between the micro- and macroscale domains may be either a point, line or surface, or it may be a buffer zone with a volume that is nonzero, but small compared to the size of the micro- and macrosimulated regions. The models in the two regions interact across an interface. This multidomain framework is a true, interactive multiscale method, with a two-way flow of information between the partial models via the interface.
Application
Multidomain integration is used where some parts of the system can be adequately described at the macrolevel, while in other regions, only a microscale model will suffice. It is often used in models with heterogeneous media, for example, the gas and solid phases in chemical vapor deposition, or the catalyst and bulk phases in a packed bed reactor. In these applications, the microscale region is usually fixed in space. The other main application for multidomain models is materials science,
particularly the field of fracture mechanics. There, the microscale model is applied when the macroscale model fails some error criterion; the microscale region may change as the simulation proceeds.
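A toy flavor of the interface handling is sketched below: two adjoining heat-conduction subdomains are advanced by separate explicit solvers on different grids and exchange the interface temperature every time step. This is only an illustration under strong simplifying assumptions (both regions are continuum models and only the grid resolution differs, so the "microscale" region is merely a finer-resolved stand-in, and the crude value exchange does not strictly enforce flux continuity during the transient, which real couplings must); grid sizes and material data are invented.

```python
import numpy as np

# "Macro" region on a coarse grid and "micro" region on a fine grid, joined at x = 0.5.
alpha, dx_M, dx_m = 1.0, 0.05, 0.0125
T_M = np.zeros(11)            # nodes at 0.0, 0.05, ..., 0.5 (interface is T_M[-1])
T_m = np.zeros(41)            # nodes at 0.5, 0.5125, ..., 1.0 (interface is T_m[0])
T_M[0], T_m[-1] = 0.0, 1.0    # fixed end temperatures
dt = 0.4 * dx_m**2 / alpha    # explicit stability limit set by the fine grid

for _ in range(20000):
    # interior updates, each region advanced with its own resolution
    T_M[1:-1] += dt * alpha * (T_M[2:] - 2 * T_M[1:-1] + T_M[:-2]) / dx_M**2
    T_m[1:-1] += dt * alpha * (T_m[2:] - 2 * T_m[1:-1] + T_m[:-2]) / dx_m**2
    # interface node: non-uniform-grid Laplacian using one neighbor from each region
    Ti = T_M[-1]
    Ti += dt * alpha * 2.0 / (dx_M + dx_m) * ((T_m[1] - Ti) / dx_m - (Ti - T_M[-2]) / dx_M)
    T_M[-1] = T_m[0] = Ti     # two-way exchange across the interface
print(f"Interface temperature: {T_M[-1]:.3f} (approaches 0.5 at steady state)")
```

At steady state the interface update enforces equality of the one-sided fluxes, which is the discrete analogue of the flux-continuity requirement discussed below.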
Advantages and Disadvantages
Like embedded integration, the multidomain method couples micro- and macroscale models to reduce the computational burden compared to simultaneous integration, while maintaining microscale realism where needed. The greatest disadvantage is the potential complexity of the micro-macro interface. It is important to guarantee the continuity of thermodynamic properties and transport fluxes across the interface, and to avoid unphysical wave reflections (Brenner and Ganesan 2000a,b; Curtin and Miller 2003).
Challenges
The main challenge is to formulate a seamless interface between the microscale and macroscale regions. In some applications, techniques are also needed to move, grow and shrink the microscale region as the simulation proceeds to minimize computational requirements.
6.4.3.5 Parallel Integration Framework
Description
Both microscale and macroscale models cover the entire system domain in parallel integration (Fig. 6.6i). However, the models are complementary in the detail with which they describe the important phenomena. There are currently few examples of parallel integration in chemical engineering. All combine a CFD model with a traditional unit operation model. For example, in a bubble column reactor, two phenomena are important: hydrodynamics and process chemistry. In a parallel framework, the microscale model might treat the fluid mechanics in detail via CFD, while the process chemistry could be represented in an abbreviated manner by a simple gas source (or sink) term. The macroscale model, on the other hand, could contain a comprehensive reaction scheme that is assumed to take place in a simple fluid flow regime, such as a well-mixed or plug-flow region, or some combination of these. Current parallel models are solved by successive substitution. The macroscale model predicts some quantities that are inputs to the microscale model, while the micromodel outputs some variables that the macromodel requires. The models are run alternately until convergence. Parallel integration is an interactive multiscale method because both models are active and must be solved in concert.
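The successive-substitution strategy can be written down in a few lines. The sketch below alternates between a stand-in "hydrodynamic" model (which returns a volumetric mass-transfer coefficient that depends on how much gas the reaction consumes) and a stand-in "process chemistry" model (a steady well-mixed oxygen balance), iterating until the exchanged quantities stop changing. The two constitutive relations and all numerical values are invented for illustration and are not the models cited in the text.

```python
def hydrodynamic_model(gas_uptake):
    """Stand-in for the detailed (e.g., CFD) model: effective volumetric
    mass-transfer coefficient kLa [1/s] for a given total gas uptake
    rate [mol/s]; a purely illustrative, assumed correlation."""
    return 0.10 / (1.0 + 100.0 * gas_uptake)

def chemistry_model(kLa, c_sat=8.0e-3, k_rxn=0.05, volume=10.0):
    """Stand-in unit-operation model: steady well-mixed oxygen balance
    kLa*(c_sat - c) = k_rxn*c; returns the liquid concentration [mol/m3]
    and the total uptake rate [mol/s]."""
    c = kLa * c_sat / (kLa + k_rxn)
    return c, k_rxn * c * volume

# Successive substitution: run the two models alternately until convergence.
uptake = 0.0
for iteration in range(1, 101):
    kLa = hydrodynamic_model(uptake)
    c, new_uptake = chemistry_model(kLa)
    if abs(new_uptake - uptake) < 1e-12:
        break
    uptake = new_uptake
print(f"Converged in {iteration} iterations: kLa = {kLa:.4f} 1/s, c = {c:.5f} mol/m3")
```

In practice the exchanged quantities are fields rather than scalars, and, as noted under the challenges below, successive substitution may have to be replaced by a more robust scheme when the coupling between the two models is strong.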
Application
To date, parallel CFD-unit operation models have been developed for a bubble column reactor (Bauer and Eigenberger 1999, 2001), a batch stirred tank reactor (Bezzo, Macchietto and Pantelides 2000), an industrial crystallizer (Urban and Liberis 1999) and a high temperature electrochemical reactor (Gerogiorgis and Ydstie 2003). The
parallel method is suitable where the important mechanisms in the system are weakly coupled (Pantelides 2001; Urban 2001). However, it is likely that the strength of the coupling that can be successfully accommodated by the parallel method would increase as the macroscale model is more finely discretized. Very strongly coupled systems may require embedded integration.
Advantages and Disadvantages
The key advantage of parallel integration is the division of the system into two simpler problems. It is also a way to form a multiscale model from an existing software package that has limited interface options. The main disadvantage is the inherent approximation of the method and its consequent limitation to systems with weakly coupled active mechanisms.
Challenges
One challenge for parallel CFD-unit operation models is to determine efficiently, perhaps automatically, an acceptable combination of ideal flow regions needed to approximate the CFD flow pattern. Another is to replace successive substitution with a more efficient and stable solution method. However, the most open challenge is to expand the range of applications beyond the CFD-unit operation examples reported so far. A general question is how best to partition the controlling phenomena between the parallel models.
6.4.3.6 Discussion of the Scheme
The extended classification scheme proposed for process modeling groups multiscale models according to how the microscale and macroscale models are linked. It helps in understanding the structure of multiscale models, but there are some open issues:
• The classification scheme does not provide formal definitions for the frameworks. A given multiscale model could potentially fall into more than one category. Conversely, applying a framework to given partial models does not guarantee a unique multiscale model; many variations are possible.
• While some qualitative properties of the frameworks are outlined, we lack comprehensive comparative information. There is little guidance and there are no quantitative tools available to help select the integration method. These would be especially useful early in the modeling process.
• It is unclear how, or indeed if, all frameworks could be applied to a given modeling problem. Some problems do not naturally seem to suit certain integration methods and the underlying reasons need to be understood.
• The way the integration framework depends on the partial models to be integrated, and conversely, how the partial models affect the properties of the integrating framework, also needs careful further investigation.
• The classification scheme considers the integration of two scales at a time. However, it can be applied to multiscale models with more than two scales. For example, in a three-scale system, the micro- and mesomodels could be linked with one
framework, while the meso- and macroscales could be linked with another framework. This "pairwise" scheme may not be adequate to describe all multiscale models with more than two scales. Some of these unresolved issues can be tackled through structural analysis techniques. For systems that may exist in more than one state or regime, there is an additional element needed in model integration. It is to define the criteria used to identify the operating regime or prevailing physical structure of the system. Li and Kwauk (2003) have developed a method for multiscale models.
6.4.4 Application of the Frameworks
So far, we have looked at some issues of conceptual modeling for multiscale process systems. There are also mathematical and software issues involved in implementing the models: the use of existing software, solution methods, and model testing.
6.4.4.1 The Use of Existing Software
The chemical engineering community has already addressed many of the challenges in linking existing, previously incompatible process simulation software through the CAPE-OPEN project and its successors (Braunschweig, Pantelides, Britt, et al. 2000). CAPE-OPEN defines a set of open interface standards that allows parts of process simulators from different vendors to work together in a "plug-and-play" manner. The standard is maintained by the CAPE-OPEN Laboratories Network, CO-LaN (http://www.colan.org). Their Web site contains a list of current projects and discusses issues in linking process engineering software. Work at the University of Aachen has been centered on the component-based hierarchical explorative open process simulator (CHEOPS) (http://www.lfpt.rwth-aachen.de/Research/cheops.html) as an integrative modeling and solution environment. There are commercial examples of multiscale software integration. Some of these include linking a CFD package with another software type, for example, process modeling software (gPROMS with FLUENT and STAR-CD), a process simulator (ASPEN Plus with FLUENT), and a gas-phase/surface chemistry simulator (CHEMKIN with STAR-CD). A related development is underway in the field of medical research as part of the IUPS Physiome Project (Hunter and Borg 2003; Hunter, Robbins and Noble 2002). A series of XML-based markup languages is being developed to capture and exchange information on human physiology - from the scale of cells to the whole body.
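The plug-and-play idea behind such interface standards can be illustrated, in a deliberately simplified form, by giving every partial model the same narrow calling interface so that a driver can couple components without knowing their internals. The sketch below is a generic illustration of the concept only; it does not reproduce the actual CAPE-OPEN interface definitions, and both component models and their parameters are invented.

```python
from abc import ABC, abstractmethod

class PartialModel(ABC):
    """A deliberately minimal common interface for partial models of any scale."""
    @abstractmethod
    def set_inputs(self, **inputs): ...
    @abstractmethod
    def solve(self) -> None: ...
    @abstractmethod
    def get_outputs(self) -> dict: ...

class SurrogateKinetics(PartialModel):
    """Fictitious microscale component: returns a rate for a given concentration."""
    def set_inputs(self, **inputs):
        self.c = inputs["c"]
    def solve(self):
        self.rate = 2.0 * self.c / (1.0 + self.c)
    def get_outputs(self):
        return {"rate": self.rate}

class StirredTank(PartialModel):
    """Fictitious macroscale component: steady tank balance using a supplied rate."""
    def set_inputs(self, **inputs):
        self.c_in, self.rate, self.tau = inputs["c_in"], inputs["rate"], inputs["tau"]
    def solve(self):
        self.c = max(self.c_in - self.tau * self.rate, 0.0)
    def get_outputs(self):
        return {"c": self.c}

# A driver can couple any two components that honor the interface.
micro, macro = SurrogateKinetics(), StirredTank()
c = 1.0
for _ in range(50):   # simple successive substitution between the two components
    micro.set_inputs(c=c); micro.solve()
    macro.set_inputs(c_in=1.0, rate=micro.get_outputs()["rate"], tau=0.5); macro.solve()
    c = macro.get_outputs()["c"]
print(f"Converged concentration: {c:.4f}")
```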
6.4.4.2 Solution Methods for Multiscale Models
Solution efficiency is important in multiscale modeling because the partial models may use computationally intensive techniques, such as quantum chemistry, molecular simulation and so on. A key feature of multiscale modeling is transforming a problem that is intractable when viewed at the finest scale into a manageable one when considered at multiple scales. The modeling techniques used at each scale have established specialized solution methods. The challenge is to meld the scale-specific solution techniques into an efficient strategy for solving the multiscale problem. There is a broad and powerful class of numerical methods known as multiscale (multigrid, multilevel, multiresolution, etc.) schemes (Brandt 2002). They seek to solve problems that are defined by equations at one scale, but have a multiscale character. This numerical approach complements the concept we have advanced in this work, namely defining separate models at each scale and linking them through a framework. The multiscale numerical method consists of recursively constructing a sequence of solutions to a fine-scale problem at increasingly coarse scales. Large-scale behavior is effectively calculated on coarse grids, but it is based on information drawn from finer scales. Multigrid techniques have been applied to a vast range of problems. A few relevant to process engineering are: high efficiency methods for fluid dynamics, solution of partial differential equations in general, data assimilation and other inverse problems, optimal feedback control, efficient molecular dynamics and molecular modeling, and global optimization. See Brandt (2002) for a comprehensive review. Not included in that review is the recent "equation-free" (or "gap-tooth/projective integration") method (Kevrekidis, Gear and Hummer 2004) that has been developed in the context of process systems. The solution of multiscale models constructed from partial models linked by a framework involves several issues, including:
• The effect of the framework - In some frameworks, for example, the simultaneous and some serial ones, the solution process is decoupled. The best method at each scale can then be employed in isolation. The nature of the mathematical problem is different for each of the interactive frameworks. In multidomain integration, the models only interact through boundary conditions, while in embedded integration, information may also flow through the state variables and their derivatives at any location, through transport coefficients and fluxes, and source terms, for example. Thus far, only successive substitution has been used to solve parallel multiscale models. There are also fundamental questions, such as how the properties - differential index, stability, computational complexity, and so forth - of a multiscale model are determined by the properties of the partial models and the chosen framework.
• Time stepping - The partial models will usually have quite different time scales. In interactive frameworks, this results in stiff problems. Different microscale and macroscale time steps are often used (a minimal sub-cycling sketch is given after this list). The microscale time step is usually much smaller than the macroscale one, but not always. When the microscale model is
stochastic, temporal averaging is helpful in damping out microscale "noise," which would otherwise propagate into the macroscale model (Raimondeau, Aghalayam, Katsoulakis, et al. 2001).
• Updating strategies - Partial models that have been simplified may need to be updated by referring to the original model, either periodically or when their estimated error becomes too high (Oran and Boris 2001). The accuracy required for the simplified model can be judged through error propagation methods, similar to those used in design sensitivity studies as presented, for example, in Xin and Whiting (2000).
• State transitions - For systems that may exist in different regimes, identification of the prevailing state is required. Li and Kwauk (2003) solved the multiscale model together with an optimization problem: minimization of a function (stability condition) that reflects the competing mechanisms that determine the system structure.
• Parallelization - Parallel processing fits naturally into some multiscale codes (Laso, Picasso and Öttinger 1997; Broughton, Abraham, Bernstein, et al. 1999).
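The time-stepping point above can be illustrated with a small sub-cycling loop: the macroscale state is advanced with a large step, and within each macro step a noisy (stochastic) microscale estimator of the rate is evaluated many times and time-averaged before being passed up, which damps the microscale noise. Everything in the sketch is a contrived illustration, assuming a simple logistic macroscale balance and Gaussian microscale noise; names and values are not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def microscale_rate(x, n_micro, noise=0.5):
    """Stand-in stochastic microscale model: each inner (micro) step returns a
    noisy sample of the macroscopic rate k*x*(1-x); the n_micro samples taken
    within one macro step are time-averaged before being passed upward."""
    k = 1.0
    samples = k * x * (1.0 - x) + noise * rng.standard_normal(n_micro)
    return samples.mean()          # temporal averaging damps the noise

# Macroscale integration with a step 1000 times larger than the microscale one.
dt_macro, n_micro = 0.1, 1000
x = 0.05
for _ in range(100):               # macro time t = 0 ... 10
    r = microscale_rate(x, n_micro)
    x = min(max(x + dt_macro * r, 0.0), 1.0)   # crude explicit macro step, clipped to [0, 1]
print(f"Final macroscale state: {x:.3f} (the noise-free logistic model tends to 1.0)")
```

Without the averaging (n_micro = 1), the full microscale noise would be injected into every macroscale step and the trajectory would wander far more.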
6.4.4.3 Model Verification and Validation
The verification and validation (V&V) of models in science and engineering is a discipline in its own right (Oberkampf, Trucano and Hirsch 2003). The goal is to lend confidence (Kister 2002; Best 2003) to the use of the model. Pantelides (2001) identified the construction of validated models as a major challenge in chemical engineering. Here we touch on some aspects related to multiscale models. Verification is “the process of determining that a model implementation accurately represents the developer’s conceptual description of the model and the solution to the model,” that is, solving the equations correctly. It is a matter of software engineering and mathematics. There are two aspects: code verification and solution verification (Oberkampf,Trucano and Hirsch 2003). The former can be split into the verification of the numerical algorithm and software quality assurance. Solution verification involves checking for gross consistency (such as overall mass conservation), spatial and temporal convergence, and consistency with trusted solutions. Essentially, it is being confident that the numerical error in the predictions is acceptable and the qualitative behavior of the solution, for example the stability, corresponds to the developer’s expectations. In multiscale modeling, each partial model can be verified in isolation, using dummy functions or typical values for any variables that are shared between scales. The composite multiscale model should then be verified. This tests the integrating framework. Code and solution verification can be applied as before, but additional “trusted solutions” are potentially available. For each pair of linked scales, the chosen framework could be checked against simultaneous integration of those scales, since that has “zero integration error” (Solomon 1995; Werner 1999). The process is to compare predictions for the coarse scale variables using the chosen framework against the predictions of the same coarse variables derived from a complete simulation of the system at a finer scale. In practice, complete simulation of the system at
the microscale may not be feasible, but selected microscale simulation of regions with typical or extreme behavior may be. Another possibility is to check the integration of three scales at a time against a two-scale model for the special case where the intermediate scale should not alter the solution (Gobbert, Merchant, Borucki, et al. 1997).Of course, this is most convenient when a suitable existing two-scale model is available, which might occur as part of the iterative model building process when it is decided that another scale is needed. Model validation is “the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model”. It is a question of getting the actual physics of the system right. Ideally, validation consists of testing the model against experimental data drawn from a set of carefully targeted experiments (Pantelides 2001 ; Oberkampf, Trucano and Hirsch 2003). Both the data and the model will contain uncertainties. Two issues that are important for multiscale model validation are measuring data over a range of scales and estimating model parameters at different scales. If a model contains parameters that are unknown, that is, too uncertain, experiments can be used to estimate them. For hierarchical models, which include multiscale models, parameter estimation can be applied simultaneously at all scales, sequentially to the scales in some order, or independently at each scale, or a mixture of these (Robinson and Ek 2000). The ideal situation is to determine each parameter independently. Conversely, the simultaneous approach is seen as “bad empiricism” (Randall and Wielicki 1997), to be used only for parameters with mechanisms that are poorly understood. In climate science and meteorology, there are complex multiscale models with many poorly known parameters. Brandt (2002) discusses the use of (multiple)multiscale computational approaches to assimilating data on the fly into dynamic models of the atmosphere. Good data helps in both parameter estimation and model validation. For multiscale models, we would like data at each scale of interest. Different measurement techniques are used at different scales; see, for example, Balazs, Maxwell, deTeresa, et al. (2002); Gates and Hinkley (2003) for some techniques used in materials science. Like the various modeling techniques that describe the phenomena at different scales, we can locate the tools of measurement on the logarithmic time and length axes of a scale map (see Figs. 5 and 6 of Gates and Hinkley (2003)).However, there is a fundamental difference in the multiscale nature of models and measurements, at least at small scales. For models there is a rough positive correlation between time and space scales: small processes operate quickly and large processes slowly, in general. The opposite is true for measurement. There is an approximate negative correlation: it takes a long time to measure small things, and a short time to measure large ones. The consequence for multiscale modeling is that it is not possible to gather data to allow the direct validation or parameter estimation of some partial models. We have no choice but to view their effects through the filter of intermediate scales.
6.4.5 Summary of Multiscale Model Integration
Multiscale model integration refers to linking models at different scales together into a coherent whole. The constituent models may operate on vastly different characteristic time and length scales, and may be of disparate kinds. There are many reported examples of model integration and we are beginning to understand its principles. Classification of the types of integration is helpful. One classification scheme proposed for process modeling identifies five broad frameworks for multiscale integration: simultaneous, serial, multidomain, embedded and parallel. There is some qualitative information available on their properties and when they can be applied. Aside from conceptual modeling concerns, there are issues of integrating existing software, solution of the model, and model validation. There is a good opportunity now, through using the large number of published models as examples, to improve our understanding of model integration. Key elements will be a clearer classification of integration methods and a suite of modeling tools to estimate the performance of a multiscale model - in terms of both dynamic and computational properties - at an early stage in the modeling process. As computing power increases and becomes more accessible, previously infeasible modeling techniques will become ripe for integration into multiscale models. New ideas will be needed to achieve tighter integration of different kinds of models.
6.5 Future Challenges
There is strong and increasing interest in the multiscale approach. Many examples of multiscale modeling are now scattered widely throughout the literature and there are the beginnings of a general "theory of multiscale modeling." However, like any rapidly evolving field, the development is uneven. In specific areas, we can choose between several sophisticated multiscale techniques, while in other fields multiscale thinking has barely made an impact. We highlight here some future challenges in the multiscale modeling of process systems.
Overall Strategy for Multiscale Modeling
In Section 6.3 we outlined a general model building strategy and discussed its extension to multiscale modeling. Solomon (1995), Li and Kwauk (2003) and others provide alternative viewpoints. Are these strategies sufficient for the efficient development of parsimonious multiscale models? One item that deserves more attention is the role of the modeling goal in multiscale systems. How can we use a statement of the modeling goal to drive model building towards:
• Identifying and selecting the scales to include in the model?
• Developing or choosing among alternative partial models at each scale?
• Guiding the integration of the partial models into a multiscale model?
We need a better understanding of the decomposition of goal sets in multiscale problems: relating goals to scales and appreciating the filtering effect of the integrating framework.
Rethinking the Partial Models
Pantelides (2001) expresses the point well: "it should not be taken for granted that techniques (e.g. for CFD or molecular modeling) that have evolved over long periods of time for 'stand-alone' use automatically possess the properties that are necessary for their integration within wider computational schemes. Indeed, our experience has been that the contrary is often true." We may need to reformulate the partial models to assist with mathematical aspects of "tighter" model integration: ensuring well-posedness, continuity, efficient Jacobian calculation, and so on. Stefanović and Pantelides (2000) provide an example.
Model Integration
Classification is the first step in generalizing our understanding of the options for linking multiscale models. Several classification schemes have been proposed (Sections 6.4.2 and 6.4.3), and there is some anecdotal information available on the properties of the classes. Are there more useful classification schemes? Attempting a formal definition of the classes, for example, Ingram, Cameron and Hangos (2004), may help improve upon current ideas. We also need to develop techniques to answer the question: how do the properties of the partial models and the nature of the integration method contribute to the properties of the resulting multiscale model? A suite of characterizing model metrics that can be applied at different stages of the modeling process will be of assistance here.
Numerical Methods
Solving the partial models from some scales may entail a very high computational load. Indeed, the possibility of the repeated solution of such models in a multiscale simulation highlights the need for efficient numerical methods. Multiscale models might run across different platforms and processors. The partial models may be of different types (Section 6.3.2.3) and will almost certainly have very different time scales. Specialist techniques, refined over time, are usually available for the models from different scales. We need to understand how the most suitable numerical methods for the partial models and the chosen integration scheme interact in order to develop efficient and robust solution methods for multiscale models.
Multiscale Modeling Tools
A multiscale perspective would enhance existing CAPE tools. To be effective aids for multiscale modeling such tools should:
• permit partial model development;
• allow various integration schemes to be applied;
• generate metrics that characterize the partial and multiscale models;
• provide specialist solvers for different scales;
• store validation data over the range of scales;
• archive the underlying assumptions of the partial models.
Extending the current work on open interface standards and heterogeneous simulation would facilitate these efforts. On a final note, we need to maintain a watch on how other disciplines are approaching the challenge of multiscale modeling and its application. There are interesting developments in materials science, ecology, climate studies, medicine and many other fields (Glimm and Sharp 1997; Li and Kwauk 2003).
References
1 Villermaux, J. (1996) In: Fifth World Congress
of Chemical Engineering. San Diego, CA, pp. 16-23. 2 Li, J., Kwauk, M. (2003) Chemical Engineering Science 58, 521-535. 3 Glimm,]., Sharp, D. H. (1997) SIAM News 30, 4,17 and 19. 4 Charpentier,J. C. (2002) Chemical Engineering Science 57, 4667-4690. 5 Cussler, E. L., Wei, J. (2003) AIChEJoumal49, 1072-1075. 6 Grossmann, I. E., Westerberg, A. W. (2000) AIChEJournal46,1700-1703. 7 Ingram, G. D., Cameron, I. T. (2002) In: APCChE 2002/Chemeca 2002, Christchurch, New Zealand, Proceedings CD-ROM, Paper #554. 8 Alkire, R., Verhofl M. (1994) Chemical Engineering Science 49, 4085-4093. 9 Maroudas, D. (2000) AIChEJournal46, 878-882. 10 Hangos, K. M., Cameron I. T. (2001a) Process modelling and model analysis. Academic Press, London. 11 Hangos, K. M., Cameron I. T. (2001b) Computers and Chemical Engineering 25, 237-255. 12 Robertson, G.A,, Cameron I. T. (1997a) Computers and Chemical Engineering 21,455-473. 13 Robertson, G.A,, Cameron I . T. (199713) Computers and Chemical Engineering 21,475-488. 14 Ingram, G. D. Cameron I. T., Hangos K. M. (2004) Chemical Engineering Science, in press. 15 Lerou, J. J., Ng, K. M. (1996) Chemical Engineering Science 51, 1595-1614. 16 Noble, D. (2002) Science 295, 1678-1682. 17Jensen, K. F., Rodgers, S. T., Venkataramani, R. (1998) Current Opinion in Solid State @ Materials Science 3, 562-569. 18 Pantelides, C. C. (2001) In: ESCAPE-11 (Eds: Gani, R., Jsrgensen, S. B.).Kolding, Denmark, pp. 15-26.
19 Stefanovit, J., Pantelides, C. C. (2000) In:
AIChE Symposium Series 96(323) FiJh International Conference on Foundations of Computer-Aided Process Design (Eds: Malone, M . F., Trainham, J. A,, Carnahan, B.). American Institute of Chemical Engineers, New York, pp. 236-249. 20 Phillips, R. (2001) Crystals, defects and microstructures: Modeling across scales. Cambridge University Press, New York. 21 Guo, M . , Li, J. (2001) Progress in Natural Science 11, 81-86. 22 Sobrnon, S. (1995) In: Annual Reviews ofCompukztional Physics 11 (Ed: Staufer, D.). World Scientific, pp. 243-294. 23 Werner, B. T. (1999) Science 284, 102-104. 24 McCarthy, J . J., Ottino, J. M. (1998) Powder Technology 97, 91-99. 25 Chan, H.S., Dill, K. A. (1993) Physics Today 46, 24-32. 26 Auriault, J. L. (1991) International Journal of Engineering Science 29, 785-795. 27 Oran, E. S. and Boris, J. P. (2001) Numerical simulation of reactivepow. Cambridge University Press, New York. 28 Brenner, H., Ganesan, V. (2000b) Physical Review E 62, 7544-7547. 29 Brenner, H., Ganesan, V. (2000a) Physical Review E 61, 6879-6897. 30 Curtin, W. A,, Miller, R. E. (2003) Modelling and Simulation in Materials Science and Engineering 11. R33-R68. 31 Bauer, M., Eigenberger, G. (1999) Chemical Engineering Science 54, 5109-5117. 32 Bauer, M., Eigenberger, G. (2001) Chemical Engineering Science 56, 1067-1074. 33 Bezzo, F., Macchietto, S., Pantelides, C. C. (2000) Computers @ Chemical Engineering 24, 653-658.
34 Urban, Z., Liberis, L. (1999) Computers 99,
Dusseldorf, Germany. 35 Gerogiorgis, D. I., Ydstie, 8. E. (2003) In: Proceedings of Foundations of Computer-Aided Process Operations (FOCAPO 2003): A view to the fiture integration of RCD, manufacturing and the global supply chain, pp. 581-584. 36 Urban, Z. (2001) PSE User Group Meeting 2001. 37 Braunschweig, B. L., Pantelides, C. C., Britt, H. I., Sama, S. (2000) In: AIChE Symposium
Series 96(323) Fijh International Conference on Foundations of Computer-Aided Process Design (Eds: Malone, M. F., Trainham, /. A,, Carnahan, B.). American Institute of Chemical Engineers, New York, pp. 220-235. 38 Hunter, P. J., Borg, T. K. (2003) Nature 4, 237-243.
39 Hunter, P., Robbins, P., Noble, D. (2002) PFugers Archiv, European journal of Physiology 445, 1-9. 40 Brandt, A. (2002) In: Multiscale and multire-
solution methods: Theory and applications (Eds: Barth, T. J., Chan, T. F., Haimes, R.). Springer, Berlin, pp. 3-95. 41 Kevrekidis, I. G., Gear, C. W., Hummer, G. (2004) AICHE Journal 50, 1346-1355. 42 Raimondeau, S., Aghalayam, P., Katsoufukis, M. A,, Vlachos, D. G. (2001) In: Foundations of molecular modeling and simulation: Proceedings of thefirst International Conference on Molecular Modeling and Simulation, Keystone, Colo-
rado Cummings, P. T., Westmorland, P. R., Carnahan, B.). American Institute of Chemical Engineers, New York, pp. 155-158. 43 Xin, Y., Whiting, W. B. (2000) Industrial @ Engineering Chemistry Research 39,2998-3006. 44 Laso, M., Picasso, M., &linger, H. C. (1997) AIChEJournal 43,877-892. 45 Broughton,J. Q., Abraham, F. F., Bernstein, N., Kaxiras, E. (1999) Physical Review B 60, 2391-2403. 46 Oberkampf; W.L., Trucano, T. G., Hirsch, C. (2003) Technical Report SAND 2003-3769,
Sandia National Laboratories.
47 Kister, H. 2. (2002) Chemical Engineering Progress 98, 52-58. 48 Best, R. (2003) TCE 40-41. 49 Gobbert, M. K., Merchant, T. P.. Borucki, L. j., Cafe, T. S. (1997) Journal ofthe Electrochemical Society 144, 3945-3951. 50 Robinson, A. P., Ek, A. R. (2000) Canadian journal ofForest Research 30, 1837-1846. 51 Randall, D. A,, Wielicki, B. A. (1997) Bulletin ofthe American Meteorological Society 78, 399-406. 52 Balazs, B., Maxwell, R., deTeresa, S., Dinh, L.,
Gee, R. (2002) In: Materials Research Society Symposium Proceedings 731, 3-7. 53 Gates, T. S., Hinkley, J . A. (2003) In: Collection $Technical Papers A I M ASME ASCE A H S ASC Structures, Structural Dynamics and Materials Conference 2, 1233-1246.
7 Towards Understanding the Role and Function of Regulatory Networks in Microorganisms
Krist V. Gernaey, Morten Lind, and Sten Bay Jørgensen
7.1 Introduction
Microbial function is carefully controlled through an intricate network of proteins and other signaling molecules, which enables microorganisms to react to changes in their environment. Thus microorganisms constitute examples of entire autonomous chemical plants, which are able to produce and reproduce despite a shortage of raw materials and energy supplies. Understanding the intracellular regulatory networks of microorganisms is important to process systems engineering for several reasons. One reason is that the microbial systems still constitute relatively simple biological systems, the study and understanding of which may provide a better understanding of higher biological systems such as human beings. Furthermore microbial systems are used, often following genetic manipulation, to produce relatively complex organic molecules in an energy-efficient manner. Understanding the regulatory networks in microorganisms, and especially understanding how to couple the microbial regulatory functions and higher level process and production control functions, is a prerequisite for process engineering. The focus of this chapter is discussing basic modeling problems when describing regulatory networks in microorganisms. In this introduction, we first present arguments to explain why researchers from so many different disciplines, but especially from the systems engineering field, are interested in gaining an increased understanding of the functioning and design principles of these regulatory networks. Second, fundamental modeling problems are highlighted. The introduction finishes with a statement of the purpose of this chapter and an overview of the remainder of the chapter.
7.1.1 Why Gain an Understanding of Regulatory Network Function?
From an industrial point of view, a microorganism can be considered an autonomous plant suited for the production of complex biomolecules. Industrial production of chemicals such as food and cosmetics ingredients is, for example, increasingly based on biotransformation processes (Cheetham 2004), where the conversions of raw materials to useful products are catalyzed either by microorganisms or by enzymes obtained from microorganisms. On a macroscale, for example, in a bioreactor where millions of microorganisms reside, the conversion of raw materials to valuable products by the microorganisms has traditionally been monitored using probes for pH, dissolved oxygen, gas phase composition, and biomass concentration measurements. In recent years, however, interest in system-level understanding of regulatory networks in biological systems, including microorganisms such as Escherichia coli (a prokaryotic organism) and Saccharomyces cerevisiae (a eukaryotic organism), has been an important research theme. This increasing interest in the microscale is, to a large extent, boosted by the fact that biology has evolved from being a data-poor science to a data-rich science, an evolution driven by progress in molecular biology, particularly in genome sequencing and high-throughput measurements (Kitano 2002; Vukmirovic and Tilghman 2000). Indeed, contrary to earlier efforts in developing system-level understanding of biological systems, it is now possible to collect informative system-wide data sets on protein-DNA interactions, protein-protein interactions, and increasingly small-molecule interactions as well (Ideker and Lauffenburger 2003). An ever-increasing number of advanced analytical methodologies allow detailed monitoring of the dynamics of intracellular processes (e.g., Chassagnole et al. 2002). In the postgenomic era, the availability of genome sequence data of several organisms, including E. coli and S. cerevisiae, has already led to a focus shift from molecular characterization and sequence analysis to developing an understanding of functional activity and the interaction of genes and proteins in pathways (Salgado et al. 2004; Vukmirovic and Tilghman 2000; Wolkenhauer et al. 2003), a research area called functional genomics. In fact, microorganisms are networks of genes, which make networks of proteins, which regulate genes, and so on ad infinitum (Vukmirovic and Tilghman 2000). Gene expression and regulation, i.e., understanding the organization and dynamics of genetic, signaling, and metabolic pathways, is considered to be one of the main research challenges of the next 50 years (Wolkenhauer et al. 2003). A system-level understanding of biological systems can be derived from insight into four key properties (Kitano 2002): (1) System structure, for example, the network of gene interactions; (2) System dynamics, for example, the dynamic response of a biological system to a change in the substrate concentration; (3) The control method, that is, understanding of the mechanisms that control the state of the cell; (4) The design principles of the cell, for example, simulations can support strategies to construct biological systems. Reaching a system-level understanding of biological systems necessitates multidisciplinary research efforts to unravel the complexity of biological systems.
One could, of course, wonder why not only biologists, but people coming from very different research fields, are involved and interested in developing an increased understanding of biological function. First of all, involving other research disciplines can be considered a necessity. Secondly, the versatility of microorganisms to produce industrially relevant chemicals, by expression of the appropriate gene(s), is an important factor promoting research aimed at gaining an increased understanding of biological function. Thirdly, the similarities between microorganisms and chemical plants, combined with increased data availability, almost naturally lead to an interest of systems engineering in understanding biological function. Each of these points will be presented in a little more detail below. Biology has grown into a scientific area that generates far more data than biologists are used to handling. The amount of complex data that are and will be generated with the technologies now available, and the need for modeling to understand the way networks function, requires - for efficiency reasons - that disciplines outside of traditional biology collaborate on the problem of understanding biological function (Vukmirovic and Tilghman 2000). The most obvious collaborators for this endeavor are systems theoreticians and engineers. The industrial interest in the understanding of biological function is illustrated by the tremendous and steadily growing list of products resulting from biotransformation processes mentioned in the review paper of Cheetham (2004). Clearly, improved understanding of the regulatory mechanisms responsible for the expression of the gene encoding a product of interest might lead to higher production rates (more product can be produced within an existing industrial facility), increased production yields (raw materials can be utilized more efficiently), and shorter time to market. Thus, for an industrial biotransformation process, the results of improved understanding of biological function are directly related to increased profit. The bacterium E. coli, to name one popular example, was called a "workhorse microorganism" for recombinant protein production, and a fundamental understanding of intracellular processes, such as transcription, translation, and protein folding, makes this microorganism even more valuable for the expression of recombinant proteins (Baneyx 1999). Knowledge of the mechanisms of complex regulatory networks involved in the transformation of extracellular signals into intracellular responses is important to improve the productivity of microorganisms. The E. coli lactose utilization (lac) operon, which will be used later in this chapter to illustrate the complexity of regulatory networks, has served as one of the paradigms of prokaryotic regulation, and therefore a considerable number of the promoters used to drive the transcription of heterologous genes (genes carrying the genetic code for a product of interest) have been constructed from lac-derived regulatory elements (Baneyx 1999; Makrides 1996). The interest of systems engineering groups in contributing to an improved understanding of microbial function becomes clear by considering the number of review and position papers that were published in recent years (e.g., de Jong 2002; Doyle 2004; Ferber 2004; Hasty et al. 2001; Ideker and Lauffenburger 2003; Kitano 2002; Smolen et al. 2000; Wolkenhauer et al. 2003).
Microbial function is carefully controlled through an intricate network of proteins and other signaling molecules. Free-
living bacteria have to maintain a constant monitoring of extracellular physicochemical conditions in order to respond and modify their gene expression patterns accordingly (Lengeler et al. 1999; Salgado et al. 2004). Microorganisms by themselves thus constitute examples of entire autonomous chemical plants, which are able to produce and reproduce despite a shortage of raw materials and energy supplies. Microorganisms can sense changes in the surrounding environment, and subsequently control the expression of genes in reaction to these changes. Such adaptation of the cell to changes in the environment is crucial for the survival of the cell, since it allows economical use of cellular resources (Lengeler et al. 1999), as a result of regulating the expression of all genes to produce the optimal amount of gene product at any given point in time. The energy consumption for protein synthesis and the relatively short half-life of the mRNA molecules are reasons for a cell to control both the types and amounts of each protein (Wolkenhauer et al. 2003). Making a link to chemical production plants, cell behavior can be compared with adjusting the production capacity and the operation strategy of a chemical plant to the availability of limiting amounts of raw materials, aiming at minimizing plant operating costs. In view of the similarities between the functioning of a microorganism and a chemical plant, it is not overly surprising that systems-engineering thinking is increasingly applied to these biological systems. Systems engineering has different applications. Reverse engineering, aimed at unraveling the functionality of regulatory networks, is one of the major goals in systems biology. However, more and more effort is also directed into forward engineering, aiming at the design of regulatory networks with a desired functionality (Elowitz and Leibler 2000; Ferber 2004; Hasty et al. 2001). This research area is also called synthetic biology, to distinguish it more clearly from the reverse engineering efforts in systems biology. Contrary to systems biologists, who analyze data on the activity of thousands of genes and proteins, synthetic biologists simplify and build. They create models of genetic circuits, build the circuits, see if they work, and adjust them if they don’t (Ferber 2004). In the synthetic biology field, one of the future visions is the construction of cells as small factories for complex chemical compounds such as pharmaceuticals.
7.1.2 Levels of Abstraction, Function, and Behavior
It is important to realize that models of biological systems play a central role in both reverse and forward engineering. However, a model of biological systems represents different types of knowledge and assumptions about the system depending on the problem to be solved. Thus, the aim of reverse engineering is to interpret the biological system in order to explain how its structure and behavior originate from interactions of its subsystems. The interpretation is based on a model of the expected structure and behavior. The model can be based on either previous experience or represent a purpose or design intention. In both cases, the aim is to test whether the model (the hypothesis)
is an adequate representation of empirical data. In contrast, the aim in forward engineering is to predict structure and behavior of a biological system from knowledge of the structure and behavior of its parts, and to test if the predictions match subsequent empirical observations or design intentions. In prediction, the model is assumed to be adequate and used to produce hypotheses about unobservable structure or behavior. Models have accordingly different roles in reverse and forward engineering of biological systems. A general problem in the modeling of dynamic systems is to determine a proper level of abstraction. Most natural and artificial systems can be modeled on a variety of levels but the choice of level is of particular importance for biological systems due to their extreme complexity. Unfortunately, levels can be defined relative to several dimensions in the modeling problem. For example, we can describe the spatial structure (the anatomy) on many part-whole levels, and we can also describe the behavior (dynamics) at several part-whole levels of temporal resolution. Another way to define levels in biological systems is to consider their functional organization. The idea here is to describe the biological system as a goal-directedsystem and to decompose the system into subsystems so that each subsystem serves the needs or provides the means for its superordinate system. The analysis that brings about this type of system information is usually called means-end analysis or functional modeling, and has been developed within cognitive science and artificial intelligence research. The use of means-end analysis to define levels of abstraction is a very powerful approach to handle the modeling of complex dynamic systems (Lind 1994). It is of particular importance for modeling systems with embedded control systems, such as biological systems. Control systems play a direct role in the constitution of functional levels (Lind 2004b) and their function can therefore not be described properly without means-end concepts. Note that when using concepts of means-end analysis we must distinguish carefully between the concepts of behavior and function. The two notions are often confused such that function is thought to be more or less synonymous with behavior. We stress the teleological meaning of function; it represents the role the system has in the fulfillment of a purpose or goal. Behavior refers to what happens when a system reacts to an intervention or a disturbance. Descriptions of behavior have accordingly no connotations to purposes or goals and are therefore distinct from functional descriptions. We will later return to a discussion of means-end analysis and functional concepts in modeling complex dynamic systems.
7.1.3 Overview of the Chapter
The main purpose of this chapter is to discuss basic modeling problems that arise when attempting to describe regulatory networks and their function in microorganisms. The focus is on the representation of the regulatory mechanisms in micro-
organisms applied for production purposes. First, the central dogma of biology will be introduced briefly. The E. coli lac operon is subsequently used as an example of a regulatory network structure in microorganisms, illustrating the complexity of such networks. The lac operon example is followed by a discussion of the essential steps in the central dogma, identifying possible sites for control actions. Formalisms to model the regulatory networks are then introduced briefly, and strategies developed to deal with the complexity of regulatory networks in microorganisms are highlighted. Finally, means-end analysis and functional modeling are presented as suitable methods to represent the complex interactions in regulatory networks, and their use is illustrated by means of the lac operon example. The chapter ends with a discussion and conclusions.
7.2 Central Dogma of Biology
According to the central dogma of biology, a term coined by Sir Francis Crick, three processes, illustrated in Fig. 7.1, are responsible for the conversion of genetic information. (1) DNA replication: a process involving several enzymes and duplicating a double stranded nucleic acid to give identical copies; (2) Transcription: a DNA segment constituting a gene or an operon is read and transcribed into a single stranded sequence of RNA, the messenger RNA (mRNA), by the RNA polymerase enzyme; (3) Translation: the mRNA sequence is translated into a sequence of amino acids, where the ribosome reads three bases (a codon) at one time from the mRNA, translates them into one amino acid, and subsequently joins the amino acids together in an amino acid chain (protein formation). The resulting proteins, depending on their structure, may function as transcription factors (or regulatory proteins) binding to regulatory sites of other genes, as enzymes catalyzing metabolic reactions, or as components of signal transduction pathways. In prokaryotic cells, transcription and translation take place simultaneously. In eukaryotic cells, the mRNA is formed in the cell nucleus, which is separated from the rest of the cell. The mRNA undergoes further processing and modifications before it is transported out of the nucleus, where the ribosomes take care of the translation.
Figure 7.1 Schematic illustration of the central dogma of molecular biology.
New research results appearing in the early 1970s meant that the basic principle of the central dogma, that information flows uniquely from DNA to RNA to protein,
needed adjustment. Indeed, with the discovery of reverse transcriptase in retroviruses, the central dogma (Fig.7.1) was extended to include the ability to convert RNA into DNA. Prions also form an exception to the original formulation of the central dogma, since these proteins can induce misfolding of other proteins.
7.3 Complexity of Regulatory Networks
Of course, Fig. 7.1 is a simplified representation of the complex processes taking place in, for example, a prokaryotic microorganism. In reality, the central dogma of molecular biology includes quite a number of possibilities for regulation of DNA replication and protein production processes, which in this chapter will be first illustrated with an example of the production of enzymes (proteins) in a prokaryotic organism.
7.3.1 An Example of Transcriptional Regulation: the lac Operon
The example focuses on transcriptional regulation, the intracellular control mechanisms that influence the rate of the process responsible for converting the genetic information contained in the DNA into mRNA. In prokaryotes, genes are grouped into operons (Fig. 7.2). An operon can thus consist of several structural genes, where each structural gene encodes a protein. The genetic information contained in the genes in one operon together provides the cell with the capability to perform a coordinated function, for example, the execution of one metabolic pathway to produce a specific amino acid, or the (partial) conversion or degradation of one specific substrate to a metabolic intermediate. All genes in an operon are transcribed at once, resulting in polycistronic messenger RNA (mRNA), that is, mRNA encoding for several proteins. An example of an operon, in this case the well-known E. coli lac operon, is provided in Fig. 7.2A. The regulation of the transcription of the lac operon is the result of a combination of different mechanisms.
7.3.1.1 Absence of Extracellular Glucose: Induction of the lac Operon
Mechanism
The first mechanism that will play a role in the transcription of the lac operon is induction, which is schematically represented in Fig. 7.2. The foundation for the current level of understanding of this regulatory mechanism is the operon model formulated by Jacob and Monod (1961),where the clear distinction between structural and regulatory genes was introduced. The lac operon consists of three structural genes (Fig. 7.2), containing the genetic code for enzymes that will be responsible for the uptake and conversion of the sub-
Figure 7.2 Induction of the lac operon in the absence of glucose in the growth medium (based on the model of Yildirim and Mackey (2003)). A: Repressed lac operon; B: Induced lac operon (A, B, P, L, Le, and M, in italics in the figure, indicate the variables considered in the model of Yildirim and Mackey 2003). LacI = gene encoding for repressor protein, Pi = promoter region for repressor protein, P = promoter region for structural genes, O = operator region for structural genes, LacZ = β-galactosidase gene, LacY = β-galactoside permease gene, LacA = β-galactoside transacetylase gene, β-gal = β-galactosidase, per = β-galactoside permease, transac = β-galactoside transacetylase.
strate lactose into its building blocks glucose and galactose. In the simplified representation in Fig. 7.2, the structural genes are preceded by one operator and one promoter. In the absence of extracellular glucose and lactose, the lac operon is repressed. The repression of the lac operon originates from the presence of a fourth gene, containing the genetic code for a repressor protein. This lac repressor gene, or regulatory gene, provides one of the keys for understanding the regulatory mechanism that allows E. coli bacteria to grow on lactose in the absence of glucose. The lac repressor gene has its own promoter (Pi in Fig. 7.2), allowing RNA polymerase to bind to Pi and to transcribe the lac repressor gene. The ribosomes translate the lac repressor mRNA to form the lac repressor protein. In the absence of lactose, the lac operon is repressed, meaning that the lac repressor protein is bound to the operator region of the lac operon, preventing the RNA polymerase from binding to the promoter of the structural genes, and thus repressing the transcription of the structural genes (see Fig. 7.2A). Allolactose is the inducer of the lac operon and results from the intracellular conversion of lactose following uptake through the cell membrane (Lengeler et al. 1999; Wong et al. 1997; Yildirim and Mackey 2003). Indeed, in the absence of extracellular glucose, and when lactose is present in the growth medium, lactose is transported into the cell by the β-galactoside permease (Fig. 7.2B). Intracellular lactose is subsequently converted into glucose, galactose, and allolactose. The lac repressor protein undergoes a conformational change after binding the inducer allolactose, and is then no longer capable of binding to the operator region of the structural genes (see Fig. 7.2B). RNA polymerase can now bind to the promoter of the structural genes and produce mRNA, which is subsequently converted into proteins (β-galactosidase, β-galactoside permease, and β-galactoside transacetylase) by the ribosomes. This induction mechanism of the lac operon is a positive feedback loop: increasing intracellular lactose concentrations will lead to an increase in the expression of the lac operon, and thus result in an increased production of, for example, permease enzyme molecules, which will again lead to increased intracellular lactose concentrations, until the maximum protein production rate is reached. Depletion of extracellular lactose will result in repression of the lac operon.
A First Principles Model Example: Model of the lac Operon Induction
Modeling plays an important role in unraveling regulatory mechanisms. A model for the induction of the lac operon was proposed by Yildirim and Mackey (2003) and will be used here as an example. Since this model only considers the induction mechanism, the model is only valid in the absence of extracellular glucose. The model consists of five states (see Fig. 7.2B): intracellular lactose (L), allolactose (A), mRNA resulting from the transcription of the structural genes (M), β-galactosidase (B), and β-galactoside permease (P). The system is modeled with five nonlinear delay differential equations (DDEs) provided in Eqs. (1)-(5), and has two external inputs, the extracellular lactose concentration (Le), which is assumed to be constant, and the growth rate (μ). Note also that spontaneous mRNA generation has been omitted in Eq. (1), since its contribution could be neglected.
dM/dt = αM · [1 + K1 (e^(-μ τM) A(t - τM))²] / [K + K1 (e^(-μ τM) A(t - τM))²] - γM M - μ M    (1)

dB/dt = αB · e^(-μ τB) · M(t - τB) - γB B - μ B    (2)

(The corresponding balances for A, L, and P, Eqs. (3)-(5), are given in Yildirim and Mackey (2003).)
Such a model is, of course, based on a number of model assumptions. In the case of this model, the delay times in the DDEs are assumed to be related to different biological phenomena. The delay in Eq. (1) represents the fact that there is a delay τM between the start of transcription and the production of a complete mRNA. The delay τB in Eq. (2) represents the delay between the start of mRNA translation and the appearance of β-galactosidase, and τB thus corresponds to the time needed for translation. The delay τB + τP in Eq. (5) includes the assumption that β-galactosidase production needs to be finished before the production of the β-galactoside permease can start (delay τB), whereas τP represents the time needed to produce β-galactoside permease. The selection of DDEs with constant delays to model this regulatory mechanism actually includes the assumption that translational regulation does not influence the protein production rate. Indeed, translational regulation would lead to variations in the delays τB and τP. Furthermore, transcriptional control is only modeled as influencing transcription initiation. The constant delay τM in Eq. (1) includes the assumption that no regulatory mechanism influences transcriptional elongation and transcription termination. Besides the assumptions underlying the choice of the delay times in the model example, it is of utmost importance to have a proper understanding of the assumptions that were made when describing the transcriptional regulation mechanism of the lac operon. Actually, the model example in Eqs. (1)-(5) does not provide a detailed description of this regulatory mechanism (Santillán and Mackey 2004). Instead, the model example lumps the regulatory mechanism into one Hill-type equation, describing the production of mRNA as a function of the inducer, the allolactose concentration (Eq. (1)). The dynamics of the lac repressor protein, the lac repressor protein-allolactose complex, and the RNA polymerase enzyme are not considered explicitly. The original paper by Yildirim and Mackey (2003) can be consulted for further detail on the kinetic expressions. A set of parameters, suitable initial conditions, and steady-state values obtained with these parameters can be found in Yildirim and Mackey (2003), as well as a demonstration of the capabilities of the model to describe the dynamics observed experimentally.
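To make the structure of Eqs. (1) and (2) concrete, the following sketch integrates delay differential equations of this form with a simple fixed-step Euler scheme and an explicit history buffer for the delayed terms. It is an illustration only: the parameter values, the step size, and the prescribed allolactose input A(t) are placeholders chosen for readability, not the fitted values or the complete five-state model of Yildirim and Mackey (2003).

```python
import numpy as np

# Placeholder parameters for illustration -- NOT the fitted values of
# Yildirim and Mackey (2003); consult the original paper for those.
alpha_M, K, K1 = 1.0, 10.0, 25.0      # transcription rate and Hill constants in Eq. (1)
gamma_M, gamma_B = 0.4, 0.03          # mRNA and protein degradation rates (1/min)
alpha_B = 0.02                        # translation rate constant in Eq. (2)
mu = 0.0226                           # specific growth rate (1/min)
tau_M, tau_B = 0.1, 2.0               # transcription and translation delays (min)

def A_input(t):
    """Prescribed allolactose signal: a step at t = 0 (illustration only)."""
    return 0.04 if t >= 0.0 else 0.0

dt, t_end = 0.01, 200.0
n_steps = int(t_end / dt)
t = np.linspace(0.0, t_end, n_steps + 1)
M = np.zeros(n_steps + 1)   # mRNA concentration
B = np.zeros(n_steps + 1)   # beta-galactosidase concentration

def delayed(series, i, tau):
    """Look up a state value tau time units in the past (zero before t = 0)."""
    j = i - int(round(tau / dt))
    return series[j] if j >= 0 else 0.0

for i in range(n_steps):
    # Eq. (1): Hill-type transcription driven by the delayed allolactose level
    A_del = np.exp(-mu * tau_M) * A_input(t[i] - tau_M)
    dM = alpha_M * (1 + K1 * A_del**2) / (K + K1 * A_del**2) - (gamma_M + mu) * M[i]
    # Eq. (2): translation driven by the delayed mRNA level
    M_del = np.exp(-mu * tau_B) * delayed(M, i, tau_B)
    dB = alpha_B * M_del - (gamma_B + mu) * B[i]
    M[i + 1] = M[i] + dt * dM
    B[i + 1] = B[i] + dt * dB
```

The same history-buffer idea extends to the remaining states (L, A, and P) and to the combined delay τB + τP appearing in Eq. (5).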
Figure 7.3 Relative concentration dynamics of intracellular lactose (L), allolactose (A), mRNA (M), β-galactosidase (B), and β-galactoside permease (P) predicted by the model of Yildirim and Mackey (2003) for a step change of glucose to lactose feeding at t = 0 (Le = 0.08 mM, μ = 0.0226 min⁻¹). The initial values provided by Yildirim and Mackey (2003) were used.
Figure 7.3 provides the relative concentration dynamics predicted by the model for a step change in the feed from glucose to lactose at t = 0. For the figure, the data were scaled by dividing each concentration time series by its maximum value. After the appearance of lactose, the model predicts an increase of the allolactose concentration, resulting in induction of the lac operon and subsequent production of β-galactosidase.
7.3.1.2 Presence of Extracellular Glucose: Inducer Exclusion and Carbon Catabolite Repression
Glucose, not lactose, is the preferred carbon source of E. coli bacteria. In the presence of both glucose and lactose, E. coli bacteria will first grow on glucose, and the enzymes encoded in the lac operon will not be produced. However, when all glucose is consumed, the presence of lactose will induce the production of the enzymes encoded in the lac operon, and thus provide E. coli bacteria with the capability of growing on lactose as an alternative substrate. When all lactose is consumed, the production of the enzymes encoded in the lac operon will be turned off again, thereby economizing on the cellular resources.
The lac operon inducer mechanism (Fig. 7.2) alone cannot explain why the lac operon is repressed when both extracellular glucose and lactose are present. Clearly, there must be additional regulatory mechanisms beside the lac operon inducer mechanism. Figure 7.4 provides a diagram of the main regulatory mechanisms of the lac operon included in the model of Wong et al. (1997), and indicates that the lac operon is indeed controlled by glucose at two levels (Lengeler et al. 1999; Wong et al. 1997): (1) inducer exclusion; and (2) catabolite repression. Extracellular glucose, while being transported into the cell by the phosphoenolpyruvate:sugar phosphotransferase system (PTS), an important transport system of E. coli bacteria, is converted to glucose 6-phosphate (G6P). The cell growth rate is assumed to depend on the G6P concentration. At this point, the model of Wong et al. (1997) assumes that the glucose uptake rate via the PTS is related to the external glucose concentration via Monod kinetics. It is important to realize, though, that the PTS itself is a protein complex consisting of several enzymes that will transfer a phosphate group from phosphoenolpyruvate (PEP) to glucose during glucose uptake, resulting in G6P (Postma et al. 1993). In the absence of extracellular glucose, cyclic adenosine monophosphate (cAMP) is synthesized by the adenylate cyclase (AC) enzyme, accumulates in the cell, and binds to the cAMP receptor protein (CRP). cAMP is considered an alarmone for carbon starvation of E. coli (Lengeler et al. 1999). Alarmones are molecules that signal stress conditions. The phosphorylated form of one of the enzymes of the PTS (IIAGlc) is an activator of AC (Postma et al. 1993). The cAMP:CRP complex binds to the CRP binding region, which is located near the lac promoter, and will enhance transcription initiation and also transcription of the structural genes by the RNA polymerase:σ factor complex (RNAP:σ in Fig. 7.4). In the presence of extracellular glucose, no cAMP will be generated since IIAGlc is in its non-phosphorylated form, and no cAMP:CRP complex will be formed, resulting in catabolite repression, or the repression (inactivation) of certain sugar-metabolizing operons (such as the lac operon in this example) in favor of the utilization of an energetically more favorable carbon source (glucose in this example). In Fig. 7.4, the presence of extracellular glucose results in inhibition of the transport of lactose by the lac permease, a phenomenon known as inducer exclusion. It has been demonstrated that it is not extracellular glucose itself, but the non-phosphorylated form of the PTS enzyme IIAGlc that inhibits the uptake of lactose by the lac permease (Postma et al. 1993). Several first-principles models have been formulated to describe the combined effects of inducer exclusion, carbon catabolite repression, and induction on the lac operon (e.g., Kremling et al. 2001; Santillán and Mackey 2004; Wong et al. 1997). An accurate description of the phenomena observed when E. coli bacteria grow on a mixture of glucose and lactose necessitates inclusion of the glucose effects in mathematical models of the lac operon, which results in rather complex models. In the model of Santillán and Mackey (2004), an additional layer of complexity in the regulation of the lac operon is considered explicitly by taking into account in the model that the lac operon has three different operators, two different cAMP:CRP binding sites, and two different promoters.
Also, the model takes into account that DNA can fold in such a way that a single repressor molecule can bind to two different operators. Considering all possible interactions between the lac operon, the repressor, the cAMP:CRP com-
Figure 7.4 A diagram of the lac operon, schematically representing mechanisms for inducer exclusion, catabolite repression, and induction of the lac operon (Wong et al. 1997). See the main text for an explanation of the symbols.
plex, and the RNAP:σ complex results in 50 different binding states for the lac operon.
7.3.2 Potential Sites for Control Actions
The lac operon example is entirely focused on transcriptional regulation, more specifically transcription initiation, meaning that only part of the mechanisms that control the transcription of the structural genes, and thus the production of mRNA, are
considered. In fact, the emphasis is on transcriptional regulation in the majority of the studies on regulatory networks in microorganisms. In a review on modeling transcriptional regulation, Smolen et al. (2000)explained this by the fact that two key approximations have historically been used to model genetic regulatory systems: (1) control is exercised at the transcriptional level, (2) the production of a protein product is a continuous process, with the rate determined by the balance between gene activation and gene repression. As a consequence, there are few or no studies that model both translational and transcriptional control in any specific genetic system (Smolen et al. 2000). However, prokaryotic cells are capable of rapidly adjusting to a wide range of environmental conditions (Lengeler et al. 1999), and this adjustment is achieved in two ways: (1) Instant responses involving a change in the activity of critical metabolic enzymes; and (2) Delayed, more long-term responses, involving positive or negative regulation of gene activity in a coordinated fashion. Transcriptional regulation is, of course, very important. However, Lengeler et al. (1999)provided examples that illustrated that regulation of protein synthesis not only takes place at the level of transcription initiation, that is, regulation of the binding of the RNA polymerase to the promoter, but also occurs at the levels of transcriptional elongation (i.e., during the formation of the mRNA chains) and termination (i.e., during the final stages of mRNA formation). Moreover, the mRNA is not a stable intermediate, and mRNA degradation provides a major control point of gene expression in virtually all organisms (Makrides 1996). Furthermore, Lengeler et al. (1999) indicated the importance of regulation during translation and mention protein stability as an additional factor that can be influenced by regulatory mechanisms. Finally, posttranslational modification of proteins is considered as a fine-tuning mechanism to adjust the activity of enzymes. Summarizing, transcriptional regulation alone provides only part of a more complicated picture. Information on mRNA-levels in the cell provides an indication of gene expression and transcriptional regulation, but should also be combined with protein measurements to track the final gene expression product.
7.4 Methods for Mapping the Complexity of Regulatory Networks
Models are ideally suited for the representation of complex regulatory networks. The lac operon example is first compared to the size of the genome to further illustrate the complexity. As mentioned above, most systems can be modeled on a variety of levels of abstraction. The role of modeling is discussed and illustrated with a design example. Current developments in the construction of high-level models will be illustrated with the search for network motifs in regulatory networks, which is a high-level modeling example. A signal-oriented detailed first principles modeling methodology will subsequently be introduced as an attractive example of a low-level modeling approach. Finally, high- and low-level modeling approaches will be contrasted, and the link between high- and low-level models will be explained.
7.4.1 Complexity of Regulatory Networks
Regulatory networks are complex, which was illustrated using the lac operon example. There are two distinct characterizations of complexity that both apply to regulatory networks (Doyle 2004): (1) the classical notion of behavior associated with the mathematical properties of chaos and bifurcations (behavioral complexity); and (2) the descriptive or topological notion of a large number of constitutive elements with nontrivial connectivity (organizational complexity). Chaos, bifurcations, and the occurrence of multiple static or dynamic states in biological systems are beyond the scope of this chapter. Instead, we are more interested in the organizational complexity of regulatory networks, more specifically in methodologies that allow representation of the complex regulatory networks and their many elements in a systematic way. The lac operon example illustrates the degree of organizational complexity involved in the transcriptional regulation of a single prokaryotic operon, and gives an indication that proteins are the main catalysts, structural elements, signaling messengers, and molecular machines of living cells. The classical view of protein function focused on the local action of a single protein molecule, for example, the catalysis of one specific reaction in the metabolism of an organism. However, today there is a more expanded view of protein function, where a protein is defined as an element in the network of its interactions (Eisenberg et al. 2000). Each gene in the genome of an organism encodes for a protein. Thus, a first indicator of the overall organizational complexity of the regulatory networks is the number of genes in the genome. The genome of the well-studied prokaryote E. coli consists of 4408 genes with 179 transcriptional regulators (Salgado et al. 2004), whereas the genome of a typical eukaryote, S. cerevisiae, consists of 6270 genes (Lee et al. 2002). The absolute numbers of genes might already provide an indication that the organizational complexity of eukaryotic organisms is higher compared to prokaryotic organisms. Most proteins interact with several other proteins, resulting in complicated protein-protein interaction networks. It is exactly these multiple simultaneous interactions of many proteins in the network that need to be understood and represented to understand the functioning of a living cell. As a reaction to sensing a change in the extracellular environment, the gene expression pattern will be modified. Contrary to prokaryotic cells, eukaryotic cells have a nucleus. For eukaryotes, provoking a change in the gene expression usually requires the movement of a protein from the body of the cell to the nucleus in response to the changes in the extracellular environment (Downward 2001). Thus, the cell compartmentalization will also necessitate the mapping and representation of transport processes between different cell compartments for eukaryotic cells, whereas such intracellular transport processes usually do not need to be considered for prokaryotic cells.
7.4.2 The Essential Role of Modeling
A conceptual problem arises of how to understand the operation of these complex systems. Positive and negative feedback within signaling pathways, crosstalk between pathways, time delays that may result from mRNA or protein transport, and nonlinear interactions all need to be considered to understand the operation of genetic regulatory systems (Smolen et al. 2000). Mathematical modeling of the dynamics of regulatory networks in microorganisms is therefore assumed to take on an essential role for a number of reasons (Mackey et al. 2004; Smolen et al. 2000): (1) mathematical models can integrate biological facts and insights, that is, process knowledge on regulatory networks can be represented and summarized in a mathematical model; (2) models can be helpful in identifying design principles for the regulatory networks; (3) modeling can contribute to developing an understanding of the responses of both normal and mutant organisms to stimuli; (4) model analysis can reveal potentially new dynamical behaviors that can then be searched for experimentally; and (5) models can be used to verify the consistency and completeness of reaction sets hypothesized to describe specific systems. Failure of realistic mathematical models to explain experimentally observed behavior often points to the existence of unknown biological details, and can thereby also act as a guide for experimentalists. Many modeling formalisms have been applied to the description of regulatory networks and were reviewed in detail by de Jong (2002), including directed graphs, Bayesian networks, Boolean networks, nonlinear ordinary differential equations (ODEs), piecewise-linear differential equations, qualitative differential equations (QDEs), partial differential equations (PDEs), and stochastic equations and rule-based formalisms. Discussing the advantages and drawbacks of each modeling formalism is beyond the scope of this chapter. Instead, we will limit ourselves to highlighting positive and negative aspects related to applying the most widespread modeling formalism for the detailed representation of regulatory networks, nonlinear ODEs. Representing regulatory network dynamics with differential equations has certain advantages (Smolen et al. 2000; Hasty et al. 2001): (1) the model yields a continuous description allowing, in principle, for a more accurate physical representation of the system; (2) the models are supported by dynamical systems theory or, in other words, a large body of theory and methodology is available to characterize the dynamics produced by these models; (3) despite being computationally expensive, simulations with detailed models are still rapid compared to in vivo experimental work, allowing researchers to examine many hypotheses and concentrate experimental effort on the most promising of them. Using differential equations also has disadvantages (Alur et al. 2002; de Jong et al. 2002; Smolen et al. 2000; Stelling et al. 2002): (1) the approach is computationally more intensive than, for example, the Boolean approach, where discrete updating of model states is applied. (2) Differential equation models require the assumption of a specific kinetic scheme, whereas the necessary mechanistic detail is in many cases not (yet) available. (3) There is often a lack of in vivo or in vitro measurements of
kinetic parameters in the models. Parameter values are indeed only available for a limited number of well-studied systems such as the E. coli lac and trp operons. Application of system identification methods combined with the increasing availability of data might alleviate this problem. (4) Cell compartments modeled with differential equations are assumed to be spatially homogeneous. In some situations this assumption is not appropriate. (5) Differential equations do not yield a good description of systems where only a limited number of molecules are involved. For identical initial conditions, two regulatory systems may reach different steady states as a consequence of stochastic processes resulting from the low number of molecules involved.
7.4.2.1 A Differential Equation Modeling Example: the Repressilator
Simulations with detailed mathematical models are important tools to analyze or to predict the behavior of regulatory networks and to subsequently draw conclusions regarding their design principles (Hasty et al. 2001). The repressilator (Elowitz and Leibler 2000) is an example of a rather simple synthetic network consisting of three transcriptional repressor systems, each consisting of a repressor gene encoding for a repressor protein. The names of the specific proteins are not important in the frame of this paper, and will therefore be omitted. When the genes (e.g., gA) are transcribed to mRNA, which is subsequently translated, the result is the production of a repressor protein (e.g., pA). The repressilator is a synthetic network, and was designed such that a negative feedback loop was obtained: the first repressor protein (pA) inhibits the transcription of the second repressor gene (gB). The second repressor protein (pB) inhibits the transcription of the third repressor gene (gC). And finally, the third repressor protein (pC) inhibits the transcription of the first repressor gene (gA). This is schematically presented in Fig. 7.5. The repressilator example can be represented by a system of six coupled ODEs (Elowitz and Leibler 2000), where mA, mB, and mC represent the mRNA concentrations, and pA, pB, and pC represent the protein concentrations.
dmA/dt = -mA + α/(1 + pC^n) + α0    (6)
dpA/dt = -β (pA - mA)    (7)
dmB/dt = -mB + α/(1 + pA^n) + α0    (8)
dpB/dt = -β (pB - mB)    (9)
dmC/dt = -mC + α/(1 + pB^n) + α0    (10)
dpC/dt = -β (pC - mC)    (11)
Figure 7.5 Scheme of the repressilator (Elowitz and Leibler 2000).
The parameters α + α0, α0, n, and β in Eqs. (6)-(11) represent the mRNA production rate of the derepressed promoters, the mRNA production rate of the repressed promoters (due to the "leakiness" of the promoter), a Hill coefficient, and the ratio of the protein decay rate to the mRNA decay rate, respectively. Elowitz and Leibler (2000) demonstrated that, depending on the selection of the model parameters, the system has a stable or unstable steady state. Both cases are illustrated in Fig. 7.6, and were obtained by simply varying the parameter α in the model. Note that both the protein concentrations and the time axis were normalized
Figure 7.6 Evolution of the repressor protein concentration pA vs. time (both in relative units) for a stable and unstable steady state of the repressilator (Elowitz and Leibler 2000). The only parameter that was varied between both simulations is α.
in Fig. 7.6 by dividing by the maximum protein concentration and the end time of the simulation, respectively. Based on the modeling work, it was concluded that oscillations are favored by the following cellular design principles: strong promoters coupled to efficient ribosome-binding sites, tight transcriptional regulation (low α0), cooperative repression characteristics, and comparable protein and mRNA decay rates. These model-based design principles were subsequently used to construct an E. coli mutant showing oscillatory behavior in vitro (Elowitz and Leibler 2000).
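As a complementary illustration of Eqs. (6)-(11), the sketch below integrates the six repressilator ODEs with SciPy and normalizes one protein trajectory as in Fig. 7.6. The parameter values and initial conditions are placeholders, not the settings used by Elowitz and Leibler (2000); varying α (and the other parameters) moves the simulated system between a stable steady state and sustained oscillations.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (placeholders, not those of Elowitz and Leibler 2000)
alpha, alpha0, n_hill, beta = 200.0, 0.2, 2.0, 5.0

def repressilator(t, y):
    mA, pA, mB, pB, mC, pC = y
    dmA = -mA + alpha / (1.0 + pC**n_hill) + alpha0   # gA is repressed by pC, Eq. (6)
    dpA = -beta * (pA - mA)                           # Eq. (7)
    dmB = -mB + alpha / (1.0 + pA**n_hill) + alpha0   # gB is repressed by pA, Eq. (8)
    dpB = -beta * (pB - mB)                           # Eq. (9)
    dmC = -mC + alpha / (1.0 + pB**n_hill) + alpha0   # gC is repressed by pB, Eq. (10)
    dpC = -beta * (pC - mC)                           # Eq. (11)
    return [dmA, dpA, dmB, dpB, dmC, dpC]

y0 = [1.0, 2.0, 0.5, 1.0, 1.5, 0.5]                   # small asymmetric initial state
sol = solve_ivp(repressilator, (0.0, 100.0), y0, max_step=0.1)
pA_rel = sol.y[1] / sol.y[1].max()                    # normalized as in Fig. 7.6
```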
7.4.3 Modularizing Complex Regulatory Networks
In the following, existing methods to decompose the complex network interactions into smaller elementary units will be highlighted. However, it should be mentioned explicitly that we did not attempt to make a complete overview of all available methodologies. Rather, we have chosen to present two approaches so that we could illustrate the difference between high-level and low-level modeling. There seems to be general agreement that suitable methodologies to represent the organizational complexity of regulatory networks should rely on hierarchical structures consisting of multiple modular elementary blocks. A module can generally be considered as a component or a subsystem of a larger system, and generally has some or all of the following properties (Csete and Doyle 2002): (1) identifiable interfaces; (2) can be modified and evolved somewhat independently; (3) facilitates simplified or abstract modeling; (4) maintains some identity when isolated or rearranged; and (5) derives additional identity from the rest of the system. However, what kind of modular structure should be selected for this purpose remains an open question, and recent research has provided a number of attractive suggestions. The development of methodologies that allow a modular representation and simulation of large-scale dynamic systems is considered as one of the most important research topics in systems biology (Wolkenhauer et al. 2003). However, Csete and Doyle (2002) point out that the protocols (the rules that prescribe allowed interfaces between modules, permitting system functions that could not otherwise be achieved by isolated modules) are far more important to biological complexity than the modules.
7.4.3.1 Network Motifs
One way to deal with the organizational complexity of regulatory networks in microorganisms is the recognition of elementary modules, called network motifs. Such network motifs seem to be present in all kinds of complex networks (Milo et al. 2002) and can serve as elementary building blocks to reconstruct the connectivity in a regulatory network. For the prokaryote E. coli, Shen-Orr et al. (2002) extracted data from a database (Salgado et al. 2004) on direct transcriptional interactions between transcription factors and the operons they regulate, and augmented these data with a literature search, resulting in 141 transcription factors. A transcription factor, or a tran-
scriptional regulator, is a protein that binds to regulatory regions of the DNA and helps control gene expression. The LacI gene encoding for the lac repressor protein is an example of a transcription factor (Fig. 7.2). Shen-Orr et al. (2002) found that a considerable part of the regulatory network of E. coli was composed of repeated appearances of only three elementary network motifs (Fig. 7.7). In the feedforward loop network motif, a general transcription factor X regulates the expression of a second specific transcription factor Y, whereas both transcription factors jointly regulate the expression of a structural gene Z. Coherent and incoherent feedforward loops are distinguished. In a coherent feedforward loop the direct effect of the general transcription factor on the expression of the structural gene has the same sign as the net indirect effect through the specific transcription factor. In an incoherent feedforward loop the direct and indirect effect have opposite signs. The coherent feedforward loop, the most frequently occurring feedforward loop motif in E. coli (Mangan and Alon 2003), was originally thought to be designed to be sensitive to persistent, rather than short and fast, transient inputs (Shen-Orr et al. 2002), that is, as a circuit that can reject transient activation signals from the general transcription factor (X in Fig. 7.7). A more detailed mathematical analysis of the feedforward loop motif (Mangan and Alon 2003) indicated that coherent feedforward loops act as a sign-sensitive delay element, meaning that the coherent feedforward loop responds rapidly to step changes in the general regulator concentration X in one direction (e.g., OFF to ON), and with a considerable delay to step changes in the general regulator concentration X in the other direction (e.g., ON to OFF). The practical functioning of this coherent feedforward loop regulatory mechanism was demonstrated with the L-arabinose (ara) utilization system in E. coli (Mangan et al. 2003). The influence of step changes in the global regulator cAMP on the expression of the L-arabinose system was investigated, and it was demonstrated that the ON response following a step increase of the cAMP concentration was indeed much slower compared to the OFF response (provoked with the addition of glucose in the growth medium). It was concluded that E. coli might have an advantage in a rapidly varying environment with this type of asymmetric response. When glucose is suddenly present (corresponding to a cAMP OFF step) it is utilized immediately. However, when glucose is depleted from the growth medium (corresponding to a cAMP ON step), the cell can save on the energy spent for protein production by only responding to persistent cAMP ON stimuli. In a single input module network motif, a number of structural genes Z1, Z2, ..., ZN are controlled by a single transcription factor X. The single input module can be compared to a single-input multiple-output (SIMO) block architecture in control (Doyle 2004), and is typically found in systems of genes that encode for a complete metabolic pathway. Shen-Orr et al. (2002) further indicate, based on mathematical analysis, that single input modules can show a detailed execution sequence of expression of the structural genes, resulting from differences in the activation thresholds of the different structural genes.
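Identifying such motifs in a transcriptional interaction dataset is essentially a combinatorial search over a directed graph. The sketch below enumerates feedforward loops (X regulates Y, and X and Y jointly regulate Z) in a small toy edge list; the regulator and gene names are hypothetical placeholders, and a genome-scale analysis such as that of Shen-Orr et al. (2002) would in addition compare motif counts against randomized networks to assess their statistical significance.

```python
# Toy regulator -> target interaction list (hypothetical names, for illustration only)
edges = {
    ("crp", "araC"), ("crp", "araBAD"), ("araC", "araBAD"),   # one feedforward loop
    ("tf1", "gene1"), ("tf1", "gene2"), ("tf1", "gene3"),     # one single input module
}

regulators = {src for src, _ in edges}
targets_of = {}
for src, dst in edges:
    targets_of.setdefault(src, set()).add(dst)

# Enumerate feedforward loops: X -> Y, X -> Z, and Y -> Z with X, Y, Z distinct
ffls = []
for x in regulators:
    for y in targets_of.get(x, set()) & regulators:
        if y == x:
            continue
        for z in targets_of.get(x, set()) & targets_of.get(y, set()):
            if z not in (x, y):
                ffls.append((x, y, z))

print(ffls)   # [('crp', 'araC', 'araBAD')]
```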
In the dense overlapping regulons network motif there is a layer of overlapping interactions between a group of transcription factors X1, X2, ..., XN and a group of structural genes Z1, Z2, ..., ZN. In control terminology, dense overlapping regulons can be compared to a multiple-input multi-
Figure 7.7 Elementary network motifs found in the E. coli transcriptional regulation network (Shen-Orr et al. 2002): the feedforward loop, the single input module (SIM), and the dense overlapping regulons (DOR).
ple-output (MIMO) block architecture (Doyle 2004). The dense overlapping regulons seem to group operons that share a common biological function (Shen-Orr et al. 2002). Shen-Orr et al. (2002) indeed illustrate that the motifs allow a representation of the E. coli transcriptional network in a compact, modular form. However, reality is more complex: the transcriptional network can be thought of as the "slow" part of the cellular regulation network (with a time scale of minutes). An additional layer of faster interactions, including interactions between proteins (often on a subsecond timescale), contributes to the full regulatory behavior and will probably introduce additional network motifs. This was confirmed by Yeger-Lotem et al. (2004), who extended the search algorithms for network motifs from genome-wide transcriptional regulatory network data to also include protein-protein interaction data (Yeger-Lotem and Margalit 2003) and applied the methodology to S. cerevisiae. For a more complete review of recently developed methods to search for network motifs in high-throughput data see Wei et al. (2004). Again, eukaryotic organisms are more complex than prokaryotic organisms. In a study with S. cerevisiae on regulator-gene interactions, Lee et al. (2002) identified six frequently occurring network motifs, compared to only three for E. coli. Besides the
Figure 7.8 Network motifs in the regulatory network of S. cerevisiae (Lee et al. 2002): autoregulation, multi-component loop, regulator chain, and multi-input motif. Solid arrows indicate binding of a regulator to a promoter. Dotted arrows indicate links between regulators and genes encoding for a regulator. Capitals indicate genes (e.g., A), whereas normal font (e.g., a) indicates proteins.
feedforward loop and the single input module (single-input motif in Lee et al. 2002) found for E. coli (Shen-Orr et al. 2002), autoregulation, multicomponent loop, regulator chain, and multiple-input motif network motifs were identified (Fig. 7.8). In an autoregulation motif, the regulator binds to the promoter region of its own gene. This mechanism was found for about 10% of the yeast genes encoding transcription factors.
7.4.3.2 A Signal-Oriented Modeling Approach to Modeling Regulatory Networks
Models are ideally suited to represent the knowledge about complex systems. Hierarchical modular modeling approaches are needed, since they lead to high model transparency at different levels of abstraction. Such model transparency is beneficial for engineers, but certainly also for biologists. Moreover, a modular structure contributes to allowing easy modification of the model by the model user. One can just modify one model module and subsequently plug the updated module into the overall model. A two-level hierarchical approach for modeling cell signaling mechanisms was proposed (Asthagiri and Lauffenburger 2000), where signaling modules would be defined as units whose underlying mechanisms can be studied first in isolation, and then integrated into a larger flow diagram of networked modules. Modules may be networked in a manner similar to the assembly of unit operations. Signaling outputs would be directed between different modules providing the interconnectivity, and optimization and network performance can be assessed from a process systems perspective. According to Lengeler et al. (1999), cellular control is hierarchical, meaning that there are global control networks that are superimposed on the specific control systems, and that can overrule the specific control systems. In prokaryotes, operons and regulons are at the lowest level of the control hierarchy as specific control systems. A regulon is a group of operons that are regulated by a common, but specific, regulator. The global control networks are coupled to complex signal transduction systems, which sense changes in the extracellular environment that require more drastic cellular adaptations than simply the expression or the repression of a few operons. Groups of operons and/or regulons controlled by such a global regulator are called a modulon. Finally, a stimulon represents groups of genes that will respond to the same stimulus. In the example of the lac operon, the repressor protein can be considered a local regulator, whereas cAMP can be considered a global regulator. In this cellular control hierarchy, functional units naturally appear by applying a set of three biological criteria (Kremling et al. 2000; Lengeler et al. 1999): (1) the presence of an enzymatic network with a common physiological task; (2) its control at the genetic level by a common regulatory network, corresponding to operons, regulons, and modulons; and (3) the coupling of this regulatory network to the environment through a signal transduction network. The prokaryotic cellular control hierarchy of Lengeler et al. (1999) is applied in the signal-oriented first principles modeling approach of Kremling et al. (2000), where complex metabolic and regulatory networks are decomposed into physiologically meaningful smaller functional units. Each functional unit is built up by combining
Figure 7.9 Elementary modeling objects for the signal-oriented modeling approach (Kremling et al. 2000): substance storages (with and without genetic information), substance transformers (enzymatic reaction, degradation), and signal transformers (signal transduction and processing).
a number of elementary modeling objects (Fig. 7.9). When building a mathematical model of a unit, each elementary model block in the representation of a regulatory network structure (see Fig. 7.10 for an example) gets a mathematical equation assigned to it. As a result, functional units in complex networks are represented as mathematical modeling objects. The method was first applied to the modeling of the lac operon (Kremling et al. 2001). The development of the systems biology markup language (SBML), an XML-based language for representing models of systems of biochemical reactions, and for
Figure 7.10 Representation of transcription (TC) and translation (TL) processes using the elementary modeling symbols of Kremling et al. (2000). The transcription and translation efficiencies are indicated in the figure.
exchanging these models between simulation and model analysis tools (Hucka et al. 2003), is an important joint effort of a number of research teams. The use of SBML should facilitate the exchange of models between users of different software platforms. Indeed, instead of writing and validating model code for each software platform, a validated model in one software platform can be exported as an SBML model, which can subsequently be loaded by another software platform. The mere existence of the current SBML definition already contributes to modular representation of models, since a model for part of the processes in a cell can now be exchanged easily between researchers interested in cell modeling, and incorporated into other models and simulation software packages. Moreover, in the long term, it is envisaged that SBML will include the possibility of building large models by reusing a number of previously defined submodels (Finney and Hucka 2003). Clearly, this future SBML development is ideally suited for building large cell models with a modular structure.
7.4.3.3 Bridging the Gap between Network Motifs and the Signal-Oriented Modeling Approach
The network motifs (see Section 7.4.3.1) can be linked to the low-level modeling of regulatory networks (Doyle 2004), where the motifs represent modular components that recur across and within given organisms. One hierarchical modeling classification is proposed (Doyle 2004), where the top level corresponds to a network, which is comprised of interacting regulatory motifs. A module is at the lowest level in the hierarchy and describes transcriptional regulation. It is important to realize here that the network motifs are extracted from systemwide (genome-wide) molecular interaction datasets by applying statistical methods. They provide a general indication of the connectivity and the structure of the regulatory network, however, without any indication of the exact kinetics of each interaction. Network motifs might point in the direction of a model structure that can be applied to describe the connectivity in part of the network, but there are many model candidates that can correspond to each motif (Mangan and Alon 2003). However, development of a detailed (low-level) simulation model necessitates experimental data that can be used to discriminate between model candidates and to estimate kinetic model parameters (Mangan et al. 2003). The signal-oriented modeling approach (see Section 7.4.3.2), on the other hand, is based on detailed experimental work aimed at generating dynamic data for the key metabolites participating in the interactions related to a very small part of the genome. The signal-oriented modeling approach includes detailed mechanistic information on the kinetics of each interaction between model states, resulting in a detailed nonlinear ODE-based model. Both approaches consider the regulatory network at a different abstraction level (Ideker and Lauffenburger 2003). The network motifs can be considered as high-level pathway models, whereas the signal-oriented modeling approach belongs to a class of extremely detailed low-level models. High-level and low-level models are of course connected. In fact, there are relatively few well-documented systems where detailed low-level modeling can be applied (de Jong 2002; Ideker and Lauffenburger 2003), whereas high-level informa-
Figure 7.11 Illustration of the use of different modeling formalisms to move from abstracted high-level models to specific low-level models (Ideker and Lauffenburger 2003). The formalisms range from statistics and data mining, over Bayesian networks, Boolean networks, and Markov chains, to differential equations, while the details considered range from network connectivity, over influences and information flow, to mechanisms including structure.
tion on protein-DNA interactions and protein-protein interactions is available for an increasing number of microorganisms. Bridging the gap between the high-level and the low-level models or, in other words, increasing the throughput with which interesting and important biological problems can be brought from the high-level to the low-level modeling state is a major challenge for systems biology (Ideker and Lauffenburger 2003). Bridging the gap between high-level and low-level modeling might necessitate the sequential use of a hierarchy of modeling formalisms (see Fig. 7.11), where each formalism corresponds to an adequate description of a certain level of abstraction of the regulatory network (de Jong 2002; Ideker and Lauffenburger 2003). An example of a procedure to evolve from high-level to low-level models is provided in Ronen et al. (2002).
7.5 Towards Understanding the Complexity of Microbial Systems
The models presented above represent selected but important aspects of microbial regulatory function as they can be expressed using dynamic systems concepts and theories. These theories have proven to be very powerful in dealing with analysis and design problems in control engineering and it is therefore natural to expect that similar successes can be obtained when they are applied to microbial systems. However, this expectation is based on the assumption that the complexity of engineering systems and microbial systems is comparable and measurable on the same scale. In that respect, Csete and Doyle (2002) indeed concluded that the complexity of engineering systems, taking a Boeing 777 with its more than 150,000 subsystem modules as an example, is almost comparable to the complexity of biological systems. The modeling of microbial systems should therefore not represent fundamental new challenges, except maybe for the problem that the number of ODEs required to
describe their behavior will be significantly higher than for engineering systems, and that more nonlinear phenomena might be involved. This assumption, however, ignores basic interpretation problems in model building. In most systems engineering problems these interpretation problems are tractable, and their importance therefore remains unrecognized; in the modeling of microbial systems, however, they become a major problem.
7.5.1 The Interpretation Problem
The interpretation problem originates in the multifunctional nature of microbial systems. Where a subsystem in most engineering systems only serves one or a few functions, it may serve many interdependent functions in microbial systems. A function is not an inherent property of a subsystem, but is defined relative to other subsystems and by the purpose of the system of which it is a component. A protein may thus serve at least three different functions. It can serve as a substance (material or product, e.g., in protein degradation reactions) in a metabolic process, it can serve as an enzyme promoting another reaction, and it can act on the DNA for promoting or blocking the expression of genetic information (transcription factor). The complexity of microbial systems originates in this unique ability of proteins to enter into a multitude of functional relations. The identification of functions requires knowledge of how a subsystem contributes to the whole. This knowledge about the functional organization of the system is a prerequisite for the formulation of a set of ODEs describing the system, because it determines the level of abstraction adopted and the system features to be included in the equations (Lind 2004b). As mentioned before, a distinction must be made between organizational (functional) complexity and behavioral complexity (Doyle 2004). Behavioral complexity can be expressed by ODEs, but we need other concepts to model the organizational complexity. The purpose of a model of the organizational complexity is to define, in formalized language, the functional relations between subsystems and the biological system as a whole. Such a model comprises an abstract qualitative representation which can be used to communicate the understanding obtained for the biological system. Often, informal sketches or graphics are used to communicate functional knowledge. However, more formal concepts are required in order to ensure clear semantics and consistency of the models. A formalized model of the functional organization is therefore a complement to, and not merely a mediocre or less accurate version of, an ODE model. In the following we will discuss the interpretation problem in more detail in order to further motivate the application of functional concepts in the modeling of microbial systems, and to introduce and explain the basic concepts. We will subsequently present formalized generic concepts to model control (regulatory) functions. A key advantage of generic concepts is that they can be applied on an arbitrary level of abstraction and thus facilitate the modeling of complex control functions in microbial systems. Another advantage of the formalized concepts is their completeness.
7.5.1.1 Frameworks of Interpretation
In order to develop a deeper understanding of the problems in modeling complex systems, it is important to realize that modeling activity, in addition to mathematical aspects, involves a process of interpretation where the modeler makes sense of the events and phenomena in the problem under investigation. The interpretation problem is fundamental to humanities and social sciences but has thus far not been considered particularly relevant for the natural or the engineering sciences because interpretation is often considered in conflict with objectivity. However, when considering complex systems we must apply perspectives or make abstractions in order to handle the modeling problem at hand, and thus interpretations are unavoidable. However, interpretation of a phenomenon is always relative to a conceptual framework. According to Goffman (1974) we can distinguish between two so-called primary frameworks of interpretation. A framework of interpretation serves as a frame of reference and is seen as rendering what would otherwise be a meaningless and chaotic situation into something that is meaningful and with structure. The two primary frameworks are called the natural and the social frameworks, respectively, and are defined as follows:
• ... Natural frameworks identify occurrences seen as undirected, unoriented, unanimated, unguided, "purely physical." Such unguided events are ones understood to be due totally, from start to finish, to "natural" determinants. It is seen that no willful agency causally and intentionally interferes and that no actor continuously guides the outcome. Success or failure with regard to these events is not imaginable...
• ... Social frameworks, on the other hand, provide background understanding for events that incorporate the will, aim, and controlling effort of an intelligence, a live agency, the chief one being the human being. What the agent... does can be described as "guided doings." These doings subject the doer to "standards," to social appraisal of his action based on its... efficiency, economy, safety, etc.
Events and occurrences in engineering systems can clearly be interpreted within a natural framework. Engineering systems are, however, designed to exploit physical phenomena such that human purposes and aims can be fulfilled, and can therefore be understood within a social framework of interpretation. The natural and social frameworks are both broad categories. The natural frameworks include, for example, physics and chemistry, and similarly the social frameworks include several subcategories. Note that concepts of function and purpose belong to the social framework of interpretation. Habermas (1989) compared different approaches to functionalism within social science. His analysis identifies three approaches. Two of these are in Goffman's sense of social frameworks for understanding the plan or intention of a system or an activity:
• We can understand the plan teleologically, in which case it is based on the artisan model of instrumental action through which an end is reached through appropriate means.
• We can also conceive the plan dialectically, in which case it is based on the dramaturgic model of communicative action, in which an author makes an experience transparent through the role playing of actors.
Habermas' analysis also indicates that we could define an additional framework of interpretation that could be called biological and was not considered by Goffman. This framework is characterized as follows:
• We can also use a model borrowed from biology. According to this model, systems can be understood as organized unities that under changing circumstances maintain themselves in a specific state through self-regulation and reproduction.
Accordingly, four different frameworks may be applied in the interpretation of an event or a phenomenon:
1. the natural framework,
2. the framework of instrumental action,
3. the framework of communicative action,
4. the biological framework.
Note that the four frameworks of interpretation should be seen as different ways to assign meaning to an observed event or phenomenon. Each framework defines a context for understanding the system according to a particular point of view. As mentioned below, the frameworks are often combined in the interpretation of complex systems.
7.5.1.2 Interpretation of Complex Biotechnological Systems
Interpreting complex biological systems will often require the application of more than one of the frameworks. For example, the behavior of a "dancing" bee can be described within a communicative framework by its communicative function. But it may also be described within a biological framework (at a higher abstraction level) by its function for the survival of the species. In order to understand the organization of bio(techno)logical systems it is necessary to apply the instrumental action, the biological, and possibly also the communicative framework, for example, when considering quorum sensing, where a population of microorganisms is informed about a certain event. Since there is no blueprint (i.e., no designer of the cell), its regulatory function must be explained in evolutionary terms, where it must be seen as emerging from a selection process, leading to a competitive advantage for the cell. When the behavior of a cell population in a bioreactor is controlled from the outside it must be seen as an object of instrumental action. A major challenge in the interpretation of complex microbial systems is therefore to understand how to combine different interpretations of the same subsystem or how to combine the interpretations of subsystems that belong to different frameworks. As an example, the central dogma, which includes the transcription, translation, and expression of information in the DNA and RNA (communicative action), should be combined with the metabolic reactions (biological framework) and the control of the cell population in a reactor (instrumental action).
7.5.2 Functional Analysis
The instrumental, communicative, and biological frameworks support functional explanations, i.e., answers to a "why" question having the general form "in order to" (Achinstein 1983). The explanations are different, however. Within the framework of instrumental action the explanation of an event or object relates it to the intention of the actor. Within the communicative framework an occurrence is ascribed a communicative function (e.g., a message), and the occurrence is explained by its effect or role in an act of communication. Within the biological framework, observed events are seen as contributing to survival and adaptation of the system to its environment, e.g., an organ is ascribed a function in view of its contribution to the survival of the organism of which it is a part. Functional explanations express the reasons (not the causes) for the occurrence of an event and are therefore an integral part of means-end analysis. Means-end analysis is an old topic in philosophical logic with ancient roots in the works of Aristotle, but has more recently been developed within artificial intelligence (Simon 1981) and cognitive science research (Bratman 1987). Means-end analysis is the basis for multilevel flow modeling (MFM), which is a methodology for modeling complex industrial systems (Lind 1994) by integrating different frameworks. MFM is not intended to generate detailed dynamic models. Instead, it allows one to represent systems at different levels of abstraction and as such supports the building of detailed dynamic models in the conceptual phase (Gofuku and Lind 1994). MFM has an inherent logic that allows formal analysis of the organizational complexity, and is therefore also attractive for application to regulatory networks in microorganisms.
7.5.2.1 Formalization of the Concept of Function
One of the key research problems in means-end analysis is the formalization of the concept of function. Formalization is necessary in order to be able to build models of means-end relations in systems that are logically consistent and in order to be able to use the models for computational purposes. The formalization involves the solution of two problems. The first problem is to define a logic that can be used to make inferences about means-end relations. The other problem is to identify a basic set of so-called elementary functions, which can be used as generic modeling concepts. The question of means-end logic was addressed by Larsson (1996) for applications of MFM in fault diagnosis and by Larsen (1993) for problems of start-up planning. We will not consider these logics here, but instead focus on the problem of elementary functions, which is of particular interest in the present context of modeling regulatory networks in microbial systems.

7.5.2.2 Elementary Functions
Before we address the problem of elementary functions it should be mentioned that the concept of function actually has two core meanings. One meaning is related to
the concept of action and the other is related to the concept of role. The first meaning is used when we define a function by what a system or actor is doing, and the second when we refer to the entities involved in the action. Consider the following example: the function of the pump is to “move the water.” Here the function of the pump in the first meaning is the intended result of the pump’s action, whereas the function in the second meaning is the role played by the pump in its interaction with the water, i.e., that it is the agent of the action. The function (role) of the water is similarly the object of action (the patient). By distinguishing between actions and roles we are accordingly able to define functions of systems more precisely. This clarification of the concept of function relies on linguistic analyses of verb semantics (see, e.g., Lyons 1977). The solution to the problem of elementary functions depends therefore on whether we mean function in the sense of action or role.

Elementary Roles and Embedding Relations
Elementary roles (such as agent, patient, instrument, etc.) have been defined by linguists, but some disagreements of minor importance for the discussion in this chapter still remain. Role relations are important for understanding system complexity. Thus, the same object or system could have several roles at the same time or different roles at different times. In this way system processes can be embedded into each other. With the roles mentioned above we have the following possibilities for role shifting:

- An item may be the patient (product) of an action (transformation) and then become the agent (e.g., catalyst) of another action.
- An item may be the patient (product) of an action (transformation) and then become the patient (material) of another transformation.
- An item may be the agent of an action (transformation) and then become the patient (material) of another action (transformation).

An item may participate in this way in several processes at the same time provided it can play the roles simultaneously.

Elementary Actions
The possibility of defining a set of elementary action types has been addressed by Von Wright (1963), and has been explored further for application in means-end analysis of complex dynamic systems by Lind (1994, 2002, 2004a,b). The elementary action types are actually derived from a set of corresponding elementary change types. The idea is that an action results in a change of state. Conceptually, the change caused by the action would not appear if the action was not done. The definition of an action therefore contains a reference to a hypothetical situation that is not realized because the action was done. Now, by defining a change as a transition between two states, we can define the four so-called elementary changes shown in Table 7.1. Each change in the table is defined by both a linguistic description and a logic formula, which is composed of a proposition p representing the world state, a temporal operator T (Then) and one of the four change verbs “happens,” “remains,” “disappears,” and “remains absent.” In this way the formula ¬pTp (¬p Then p) is a logic representation of the change described by “p happens.”
We shall not go into details about the logic definitions here. However, it is notable that the list of elementary changes is a logically complete list, so that all changes in the world can be defined provided we define the state in question by a proposition p. We will actually also need to combine elementary changes. Each elementary change has a corresponding elementary action type as indicated in Table 7.1. The action formula contains the temporal operator T (Then) and an additional operator I (Instead) used to indicate the hypothetical state. The logical formula ¬pTpI¬p represents the action “produce p.” It is seen that if the action was not done the state of the world would be ¬p instead of p. The list of elementary actions can actually be expanded with four additional action types not shown in the table. These actions would correspond to actions where the agent refrains from intervening with the world. The total number of elementary actions is accordingly eight. The four (eight) elementary action types define a generic set of actions that have the great advantage of being defined on a logical basis. This means that the completeness of the action types is ensured. The elementary action types (Table 7.1) therefore form a very attractive basis for the definition of concepts for modeling system functions. Note that the action types are generic because they are defined without specifying the proposition p. The action types can therefore be specialized to specific problem domains by proper specification of p. Another remarkable aspect of the action types is that they have a direct correspondence with the types of control functions used in control engineering. The correspondence is shown in Table 7.2.

Table 7.1 The elementary action types of Von Wright (1963). p denotes a state, T denotes “Then,” and I denotes “Instead.”

| Type of elementary change | Formula | Type of elementary action | Formula |
| p happens | ¬pTp | Produce p | ¬pTpI¬p |
| p remains | pTp | Maintain p | pTpI¬p |
| p disappears | pT¬p | Destroy p | pT¬pIp |
| p remains absent | ¬pT¬p | Suppress p | ¬pT¬pIp |
Table 7.2 Correspondence between the elementary action types and control actions.

| Elementary action | Control action |
| Produce | Steer |
| Destroy | Trip |
| Suppress | Interlock |

The completeness of the action types implies
accordingly that any control function can be described by proper combinations of these four functions. Note that the descriptions of the controls do not represent the implementation of the controls. The descriptions only define the control purpose.
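Read as a sketch, the action types in Table 7.1 can be mirrored almost literally in code. The short Python fragment below is our own illustration only: the Transition class, its counterfactual field, and the use of Boolean truth values for the proposition p are assumptions made for the example, not part of Von Wright's or Lind's formalisms. It simply classifies a state transition into one of the four elementary action types from the table.

```python
from dataclasses import dataclass

# A "change" is a transition of the truth value of a proposition p
# between two consecutive instants (before, after), cf. Table 7.1.
ELEMENTARY_CHANGES = {
    (False, True):  "p happens",         # ¬pTp
    (True,  True):  "p remains",         # pTp
    (True,  False): "p disappears",      # pT¬p
    (False, False): "p remains absent",  # ¬pT¬p
}

# An "action" additionally refers to the hypothetical state that would
# have obtained had the agent not intervened (the I operator in Table 7.1).
ELEMENTARY_ACTIONS = {
    ("p happens",        False): "produce p",   # ¬pTpI¬p
    ("p remains",        False): "maintain p",  # pTpI¬p
    ("p disappears",     True):  "destroy p",   # pT¬pIp
    ("p remains absent", True):  "suppress p",  # ¬pT¬pIp
}

@dataclass
class Transition:
    before: bool          # truth value of p before the action
    after: bool           # truth value of p after the action
    counterfactual: bool  # value p would have had without the action

def classify(t: Transition) -> str:
    """Return the elementary action type realized by a transition."""
    change = ELEMENTARY_CHANGES[(t.before, t.after)]
    return ELEMENTARY_ACTIONS.get(
        (change, t.counterfactual),
        f"no action ({change} would have occurred anyway)",
    )

if __name__ == "__main__":
    print(classify(Transition(before=False, after=True, counterfactual=False)))  # produce p
    print(classify(Transition(before=True,  after=True, counterfactual=False)))  # maintain p
```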
7.5.3 A Language for Modeling Functions of Microbial Systems
The elementary action types can be used as a systematic basis for the derivation of modeling concepts for a particular problem domain. As an example, MFM (Lind 1994) can be mentioned. A basic set of modeling concepts adapted to the domain of microbial systems is proposed in Fig. 7.12. Each of the actions shown can be defined formally as specific interpretations of the elementary action types or as compositions of two elementary actions (Lind 2004a). We will not provide all details here. Instead, we prefer to demonstrate with examples (see Section 7.5.4) how the modeling concepts can be used to represent the functional organization of microbial systems. The MFM modeling language (Fig. 7.12) comprises three types of concepts. It contains a set of concepts for representing actions (functions), concepts to represent goal states, and a set of concepts for representing means-end relations between actions, sets of actions, and goals. It should be stressed that MFM represents the actions or transformations done to material, energy, or information flows (fluxes) in a complex system. However, it does not represent the flows or fluxes themselves. This may seem disturbing, but the abstractions provided by MFM describe how the systems of transformation of the various substances (energy, material, or information) are organized into means-end networks. The levels of abstraction can therefore not be defined without implicitly thinking in concepts of flows or fluxes. It should be noted that MFM also includes concepts to model part-whole relations, as well as concepts to model relations between functions and physical structures, but these relations are not used here (see Lind (1999) for more detail on these relations). The concepts in Fig. 7.12 will be explained briefly in the following. A deeper understanding can be obtained by studying the application examples presented in Section 7.5.4.

Figure 7.12 MFM modeling concepts adapted to the microbial domain: symbols for actions (source, sink, transport, conversion, separation, storage, transcription, translation), for means-end relations (condition, negated condition, achievement, producer-product, mediation, steering), and for goals/states.

7.5.3.1 The Means-End Relations
Goals and functions can be connected by condition, achievement, producer-product, mediation, and steering relations. Each of the relations will be discussed separately.

- The condition: a goal can define a condition that is necessary for the enablement of a function. This conditioning is expressed by a relation (C) between the objective and the function.
- The negated condition: a goal can define a condition that is necessary for the disablement of a function. This negated conditioning is expressed by a relation (-C) between the goal and the function.
- The achieve relation: goals are achieved by system functions. This relation is defined by the achieve relation (A). The (A) relation is a means-end relation where the goal is the end and the function or systems of functions are the means.
- The producer-product relation: functions can be related through a means-end relation called a producer-product (PP) relation. This relation is used when the interactions between a set of functions (an activity or process) result in a product that again serves another function in the system.
- The mediation relation: functions can also be related through a mediate (M) relation. This relation is used when a system has the role of being an intermediate between a system and another system that serves as an object of action. An example of such mediation could be the transportation of energy by the pumping of water. Here, there is a mediate relation between the pumping function and the transportation of energy.
- The steering relation: functions can also be related through a steering relation (S). This relation is used when the interactions between a set of functions (an activity or process modeled by a so-called flow structure) determine the state of another function.
- The connection relation: MFM also includes a so-called connection relation, which is not really a means-end relation, and is also not shown in Fig. 7.12. A connection is used to relate the functions (actions) into functional (flow) structures. A connection is symbolically represented as a line linking two functions and represents a contextual linkage of two functions. This means that they relate to the same goal perspective or that they share substances (change properties that belong to the same substance). The connection relation can be further specialized by taking into account causal directions in the interaction between functions.

7.5.3.2 The Flow Functions
MFM also defines a set of so-called flow functions (the actions), which are used in building models together with the relations described above. The symbols used for functions are shown in Fig. 7.12. Each of the functions represents an action on a substance that may be mass, energy, or momentum. The different substances are indicated by the symbology. In action terms a source provides a substance, i.e., makes it available. Similarly, a sink removes substances. A transport changes the spatial location of a substance, a conversion changes the composition of a material flow, and a separation separates a flow of material into its components. It is clear that some of the functions apply to modeling both material and energy flows. However, there are also functions that are dedicated to modeling the transformation of, e.g., information flows. Two such functions are defined here for modeling the processing of genetic information in microbial networks, namely the transcription and the translation functions.
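To make the vocabulary of Fig. 7.12 more tangible for readers used to software models, the following minimal Python sketch collects the flow functions, goals, and means-end relations into a small graph-like data structure. The class and enum names, and the MFMModel container itself, are hypothetical choices of ours for illustration only; MFM fixes the modeling vocabulary, not any particular implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional

class FunctionType(Enum):
    """Flow functions (actions) named in Fig. 7.12."""
    SOURCE = "source"
    SINK = "sink"
    TRANSPORT = "transport"
    CONVERSION = "conversion"
    SEPARATION = "separation"
    STORAGE = "storage"
    TRANSCRIPTION = "transcription"
    TRANSLATION = "translation"

class RelationType(Enum):
    """Means-end and connection relations named in Fig. 7.12."""
    CONDITION = "C"
    NEGATED_CONDITION = "-C"
    ACHIEVE = "A"
    PRODUCER_PRODUCT = "PP"
    MEDIATE = "M"
    STEER = "S"
    CONNECTION = "conn"   # contextual linkage within a flow structure

@dataclass
class Node:
    name: str
    kind: Optional[FunctionType] = None   # None marks a goal node

@dataclass
class Relation:
    kind: RelationType
    source: str   # name of the function/goal the relation starts from
    target: str   # name of the function/goal it points to

@dataclass
class MFMModel:
    nodes: Dict[str, Node] = field(default_factory=dict)
    relations: List[Relation] = field(default_factory=list)

    def add_function(self, name: str, kind: FunctionType) -> None:
        self.nodes[name] = Node(name, kind)

    def add_goal(self, name: str) -> None:
        self.nodes[name] = Node(name)

    def relate(self, kind: RelationType, source: str, target: str) -> None:
        if source not in self.nodes or target not in self.nodes:
            raise KeyError("both endpoints must be added before relating them")
        self.relations.append(Relation(kind, source, target))
```

A model built from these pieces is nothing more than a typed graph; the examples in Section 7.5.4 can be read as instructions for populating such a graph.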
7.5.4 Modeling Examples
In the following, the application of MFM to regulatory networks will be illustrated with examples that demonstrate the capabilities of the methodology in decomposing the regulatory network into its elementary modules. Again, the lac operon will be used as an example since its regulation has already been described in detail (see Section 7.3.1).

7.5.4.1 Induction of the lac Operon
Figure 7.13 represents the induction mechanism of the lac operon (see also Fig. 7.2) using the symbols introduced in Fig. 7.12. The logic of the model can be explained by starting with the bottom part of Fig. 7.13, which represents the uptake of lactose and the conversion of lactose to allolactose, the inducer of the lac operon. The model shows in box I that the transport of lactose over the cell wall is carried out (“mediated”) by the β-galactoside permease, which is produced as the result of the translation of the mRNA (top part of the model). The subsequent conversion of lactose to allolactose is catalyzed (“enabled”) by the β-galactosidase. The allolactose is afterwards assumed to be distributed in the cellular cytoplasm by diffusion (“transport function”). The functions described comprise the means for achieving (A in Fig. 7.13) the conditions for allolactose to be present in the cell. By changing perspective and thus moving upwards in the model we now consider the set of functions in boxes II and III in Fig. 7.13, describing how the state of the repressor (R) is influenced by the various functions of the microbial system.
Figure 7.13 MFM representation of the lac operon induction mechanism (see also Fig. 7.2).
The principles to describe the production of repressor protein in detail (box II) are similar to the flow model in box IV, representing the transcription and translation of the structural genes of the lac operon. The lacI gene is transcribed and the resulting mRNA is translated into the repressor protein. Box II thus depicts the flow model for the transcription and translation part of the central dogma (Fig. 7.1). The function in box II provides the presence of R. In other words, it provides the means for producing R. Box II represents the producer, whereas the fact that R is becoming present in box III is the product. The repressor protein can follow three possible paths, represented in box III: (1) it can follow the bottom path, where it binds with the operator of the lac operon, and will thus block transcription; (2) it can follow the middle path, where it binds with the inducer allolactose and undergoes a conformational change; and (3) it can follow the upper path, where the repressor protein is degraded (modeled as a “sink”). The presence of allolactose conditions two of the transports in box III. When allolactose is present the repressor protein binds to the allolactose, whereas in the absence of allolactose the repressor protein binds to the lac operon operator. In the first case, the functions described in box III comprise the means for achieving a de-repressed lac operon and thus transcription of the lac operon structural genes occurs. Again, we can now move upwards in the model. The transcription and translation of the lac operon is represented in box IV. The lac operon structural genes are transcribed into a polycistronic mRNA, and during the subsequent translation process the mRNA results in the different proteins (β-galactoside permease and β-galactosidase). Conversion of the polycistronic mRNA to several proteins is modeled by combining the translation process with the subsequent separation function in Fig. 7.13. It is important to mention that the production of the third protein encoded in the lac operon, β-galactoside transacetylase, is not shown in Fig. 7.13, since that enzyme is assumed not to play any significant role in the induction mechanism. The diffusion of both proteins into the cytoplasm is finally illustrated in boxes V and VI. Note that β-galactoside permease is a mediator (transport enzyme), whereas the role of β-galactosidase is an enabler (a catalyst). Now the loops to box I are closed.
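As a rough illustration of how this verbal walkthrough maps onto a structured description, the fragment below records a few of the functions, goals, and relations of boxes I and IV using the hypothetical MFMModel classes sketched in Section 7.5.3. The node names are shorthand of our own choosing, and only a subset of Fig. 7.13 is captured.

```python
# Assumes the hypothetical MFMModel, FunctionType, and RelationType classes
# from the sketch in Section 7.5.3; node names are illustrative shorthand.
model = MFMModel()

# Box I: lactose uptake and conversion to allolactose.
model.add_function("transport_lactose", FunctionType.TRANSPORT)
model.add_function("lactose_to_allolactose", FunctionType.CONVERSION)
model.add_function("diffuse_allolactose", FunctionType.TRANSPORT)

# Box IV: transcription/translation of the lac(Z,Y,A) structural genes.
model.add_function("transcribe_lacZYA", FunctionType.TRANSCRIPTION)
model.add_function("translate_lac_mRNA", FunctionType.TRANSLATION)
model.add_function("separate_gene_products", FunctionType.SEPARATION)

# Goals that appear in the walkthrough above.
model.add_goal("permease_present")
model.add_goal("beta_galactosidase_present")
model.add_goal("allolactose_present")
model.add_goal("no_repressor_at_lacZYA_promoter")

# Permease mediates lactose transport; beta-galactosidase enables (conditions)
# the conversion of lactose to allolactose.
model.relate(RelationType.MEDIATE, "permease_present", "transport_lactose")
model.relate(RelationType.CONDITION, "beta_galactosidase_present", "lactose_to_allolactose")

# The box I functions achieve the goal "allolactose present in the cell",
# and a de-repressed operator conditions transcription of the operon (box IV).
model.relate(RelationType.ACHIEVE, "diffuse_allolactose", "allolactose_present")
model.relate(RelationType.CONDITION, "no_repressor_at_lacZYA_promoter", "transcribe_lacZYA")

# Translation and separation produce the permease, closing one loop to box I.
model.relate(RelationType.PRODUCER_PRODUCT, "separate_gene_products", "permease_present")

print(len(model.nodes), "nodes,", len(model.relations), "relations")
```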
7.5.5 Inducer Exclusion and Carbon Catabolite Repression
Figure 7.14 is an extension of Fig. 7.13, including the regulatory effects of glucose on the lac operon, catabolite repression, and inducer exclusion (see also Fig. 7.4). Again, we start at the bottom of the figure to explain the modeled flows. The PTS system (boxes VII and VIII) is responsible for the uptake and the phosphorylation of glucose, resulting in G6P (for an explanation of the abbreviations, see Section 7.3.1.2). Note that both the energy level (box VIII: transfer of a phosphate group from PEP to G6P) and the component level (box VII) are represented, indicating the strength of MFM in representing a system at different abstraction levels. In the simplified schematic representation of the PTS for this purpose, it is assumed that a phosphate group is transferred from the phosphorylated PTS enzyme P-IIA^Glc to glucose during uptake
of glucose. The resulting PTS enzyme IIA^Glc again receives a phosphate group from PEP, i.e., it is involved in a loop. In fact, phosphorylation of IIA^Glc takes place through a series of conversion steps, involving several of these loops. These conversions are not shown in detail since they do not contribute to the regulatory mechanisms, and are instead represented by one extra catalysis step between PEP and the IIA^Glc to P-IIA^Glc conversion. For the detailed mechanism of the PTS system, Lengeler et al. (1999) and Postma et al. (1993) should be consulted. With respect to the glucose effects on the regulation of the lac operon, the formation of IIA^Glc and P-IIA^Glc has been described in box VII in all the detail needed. Again, we can change perspective and consider the functions where the presence of IIA^Glc or P-IIA^Glc will have an influence on the state of the system. P-IIA^Glc, the species that will be abundant in the absence of extracellular glucose, activates the conversion of ATP to cAMP by the adenylate cyclase (AC) in box X. Similarly to β-galactosidase, AC is modeled as an enabler (catalyst), but the transcription and translation processes leading to AC formation are not presented since these mechanisms were considered as not contributing substantially to the regulation of the lac operon. We have thus assumed that the enzyme AC is present (“source” in box IX) and undergoes diffusion in the cytoplasm, and subsequently catalyzes the ATP to cAMP conversion in box X. The cAMP forms a complex with CRP, and this complex subsequently boosts the transcription of the structural genes of the lac operon (box IV), thereby releasing the catabolite repression of the lac operon. Thus, the catabolite repression mechanism has been described. Finally, in the presence of glucose IIA^Glc will be abundant and will inhibit the uptake of lactose by the permease. Inducer exclusion is thus modeled as a negated condition, i.e., the absence of IIA^Glc will be the condition to reach full activity of the permease enzyme.
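Stripped of all kinetics, the two glucose effects just described reduce to simple qualitative rules: unphosphorylated IIA^Glc (abundant during glucose uptake) blocks the permease, while P-IIA^Glc (abundant without glucose) activates adenylate cyclase and hence cAMP-CRP-assisted transcription. The Boolean sketch below is our own simplification of these rules (the predicate names and the all-or-nothing logic are assumptions); it merely reproduces the qualitative diauxic pattern discussed in Section 7.3.1.

```python
def lac_operon_state(glucose_present: bool, lactose_present: bool) -> dict:
    """Qualitative (Boolean) summary of lac operon regulation.

    All kinetics, intermediate pools, and partial phosphorylation states
    are ignored; only the regulatory logic described in the text is kept.
    """
    # PTS: glucose uptake drains the phosphate from IIA^Glc.
    iiaglc_unphosphorylated = glucose_present
    p_iiaglc = not glucose_present

    # Inducer exclusion: unphosphorylated IIA^Glc inhibits the permease,
    # so allolactose only accumulates when glucose is absent.
    lactose_uptake = lactose_present and not iiaglc_unphosphorylated
    allolactose_present = lactose_uptake

    # Catabolite repression relief: P-IIA^Glc activates adenylate cyclase,
    # cAMP rises, and the cAMP-CRP complex boosts transcription.
    camp_crp_active = p_iiaglc

    # The repressor leaves the operator only when allolactose is present.
    operator_free = allolactose_present

    return {
        "inducer_exclusion": iiaglc_unphosphorylated and lactose_present,
        "camp_crp_active": camp_crp_active,
        "operator_free": operator_free,
        "high_lac_expression": operator_free and camp_crp_active,
    }

# Diauxic pattern: strong expression only with lactose and without glucose.
assert lac_operon_state(False, True)["high_lac_expression"] is True
assert lac_operon_state(True,  True)["high_lac_expression"] is False
```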
7.6 Discussion and Conclusions
To reveal and understand regulatory network mechanisms constitutes one of the most significant scientific challenges in the post-genomic era. Many researchers are devoted to uncovering these networks and utilize many different (novel) techniques to enable gene annotation, transcription factor identification, as well as characterization and representation of protein-DNA and protein-protein interactions. Behind these attempts remains a fundamental question of how to combine data from these many different sources for revealing the function of regulatory networks in microorganisms. The intuitive approach for a systems engineer is to generate a model of the system under study. However, it is not always clear which modeling methods and what abstractions to apply. Therefore, this chapter first highlighted fundamental modeling problems in describing regulatory networks in microorganisms, and subsequently illustrated the potential of means-end analysis (also called functional modeling) to represent the functionality of complex regulatory networks at different levels of abstraction.
Figure 7.14 MFM representation of the glucose effects on the lac operon (see also Fig. 7.4).
As mentioned above, microbial function is carefully controlled through an intricate network of proteins and other signaling molecules, which was demonstrated in a couple of examples mainly drawn from the lac operon. From a production process perspective, it is definitely an interesting question how systems engineers should couple the detailed description and understanding of the functioning of microorganisms (the microscale) to the higher process level descriptions (the macroscale). The proposed MFM-modeling-based methodology is especially suitable to support this coupling of the microbial regulatory functions and the higher level process and production control functions, since the same set of symbols might be used to represent the flows at the process as well as at the detailed (micro) level. This ability clearly distinguishes the proposed methodology from existing methods to represent regulatory network mechanisms: the network motifs (Lee et al. 2002; Shen-Orr et al. 2002) only represent the connectivity between system states, and do not allow a representation of the connectivity with higher process levels. The elementary modeling objects developed by Kremling et al. (2000) allow the representation of the regulatory networks at a very detailed level, and would probably also be suited to connect the regulatory network with higher process level functions. However, the representation with elementary modeling objects does not contain the degree of information available in the MFM models of regulatory networks, where the actions and means-end relation symbols (see Fig. 7.12) provide a high degree of transparency on the way system states interact with each other. Thus, a first conclusion of this chapter is that the proposed representation of regulatory network systems, based on MFM, is ideally suited for supporting systems engineers in detailed model building in bioreactor systems. The applied modeling concept has been demonstrated to enable modeling the changes in qualitative behavior of microorganisms, and is as such able to summarize available process knowledge. If quantitative dynamic models were desired, then these could be developed within each region of qualitative behavior using the logic in the MFM model as a support in the generation of detailed mathematical descriptions. By providing a methodology to represent the regulatory networks at several abstraction levels, this chapter is of relevance to process systems engineering for several reasons. One reason is that microorganisms constitute relatively simple biological systems, and the study and understanding of these relatively simple biological systems may, with suitable extensions, enable better understanding of multicellular biological systems. Furthermore, microbial systems are increasingly used, often following genetic manipulation, to produce relatively complex organic molecules in an energy-efficient manner. Understanding the details of intracellular regulatory networks is a prerequisite for efficient coupling of microbial regulatory functions with higher-level process and production control functions. In other words, the final result of applying process engineering might be improved considerably when process-relevant parts of the intracellular regulatory networks are better understood, and the methodology proposed in this chapter can significantly contribute to representing and subsequently developing that understanding.
Finally, and maybe most importantly, applying the MFM modeling method to regulatory networks in microorganisms almost naturally leads to modularizing the network into elemental
building blocks that are understandable for systems engineers as well as biologists. Thus, the proposed modeling method could contribute substantially by providing a formalism that allows biologists and systems engineers to communicate efficiently about regulatory network functions. To reach a basic level of understanding of the function of these autonomous plants, which is what microorganisms are from an industrial point of view, a systematic description of fundamental regulatory and metabolic functions is proposed in this chapter. The proposed description, which is based on MFM (Lind 1994), might eventually lead to combining the basic understanding of microbial behavior with the semiotics of control. This combination leads to simple schematics for describing fundamental roles of molecules in cells, and their reactions for control and coordination of microbial behavior. In this respect, the flexibility of the MFM modeling formalism is especially noteworthy. In fact, in the lac operon example the lacI gene is expressed constitutively. This means that transcription and translation of the gene to the resulting repressor protein does not necessarily have to be modeled in detail in box II of Fig. 7.13. Indeed, since we assume that no regulatory mechanisms are involved in this process, the presence of the repressor protein could have been modeled by only including a “source” for the repressor protein in box III, thereby omitting box II from the model. Thus, MFM models are flexible and can be extended easily. This is, for example, also illustrated by the straightforward extension of the lac operon induction mechanism (Fig. 7.13) to also include the glucose effects (Fig. 7.14). Clearly, when further building on existing MFM models, the “source” symbols are obvious candidates for extending these models, aimed at including more detail. An MFM representation of the DNA replication process could for example be coupled to the presence of the lacI and lac(Z,Y,A) genes in Figs. 7.13 and 7.14. One could also argue that this chapter has mainly addressed prokaryotic organisms, and that this will limit the applicability of the MFM modeling methodology severely. We claim that application of the proposed methodology to other organisms, for example, eukaryotic unicellular organisms, should be no problem except for obtaining the necessary fundamental knowledge. Again, an example will illustrate this. In eukaryotes, the mRNA might undergo several processing steps before it is transported out of the nucleus, after which the ribosomes will finally take care of the translation of the mRNA to a protein. Applying the proposed MFM methodology to such eukaryotic systems, box IV in Fig. 7.13 (describing transcription and translation of the prokaryotic lac operon) would definitely need several extensions to allow the detailed representation of similar eukaryotic mechanisms. This extension could be obtained by splitting up the prokaryotic version of box IV (Fig. 7.13) into several boxes for the eukaryotic case, where each box represents a separate perspective: one box for transcription and its regulation, one box for the subsequent eukaryotic mRNA processing steps, one box for the transport of the processed mRNA out of the nucleus, and finally one box for the translation process. However, it is also evident from the examples in Figs. 7.13 and 7.14 that these extensions can be made in a straightforward way by using the symbols and conventions provided in Fig. 7.12.
Thus, a second conclusion of this chapter is that MFM modeling is highly flexible, allowing systems engineers to easily extend existing models (e.g., by adding flow
models related to the means for production of some “sources” in an existing model), and to transfer MFM modeling concepts to other (more complex) biological systems. Finally, it should also be pointed out that the proposed modeling methodology is not only useful in reverse engineering, where it could be applied to represent hypotheses on the operation of complex regulatory network systems. In our opinion MFM models could also be used in forward engineering to design regulatory network building blocks such as the repressilator (Elowitz and Leibler 2000) before developing a detailed mathematical description.

References

1 Achinstein P. (1983) The nature of explanation. Oxford University Press, Oxford, UK.
2 Alur R. et al. (2002) Modeling and analyzing biomolecular networks. Comput. Sci. Eng. 4 (Jan/Feb), 20-31.
3 Asthagiri A. R., Lauffenburger D. A. (2000) Bioengineering models of cell signaling. Ann. Rev. Biomed. Eng. 2, 31-53.
4 Baneyx F. (1999) Recombinant protein expression in Escherichia coli. Curr. Opin. Biotechnol. 10, 411-421.
5 Bratman M. E. (1987) Intention, plans and practical reason. Harvard University Press, Cambridge, MA.
6 Chassagnole C., Noisommit-Rizzi N., Schmid J. W., Mauch K., Reuss M. (2002) Dynamic modeling of the central carbon metabolism of Escherichia coli. Biotechnol. Bioeng. 79, 53-73.
7 Cheetham P. S. J. (2004) Bioprocesses for the manufacture of ingredients for foods and cosmetics. Adv. Biochem. Engin. Biotechnol. 86, 83-158.
8 Csete M. E., Doyle J. C. (2002) Reverse engineering of biological complexity. Science 295, 1664-1669.
9 de Jong H. (2002) Modeling and simulation of genetic regulatory systems: a literature review. J. Comput. Biol. 9, 67-103.
10 Doyle F. J. III (2004) A systems approach to modeling and analyzing biological regulation. Proceedings of the International Symposium on Advanced Control of Chemical Processes (ADCHEM 2003), 11-14 January 2004, Hong Kong.
11 Downward J. (2001) The ins and outs of signalling. Nature 411, 759-762.
12 Eisenberg D., Marcotte E. M., Xenarios I., Yeates T. O. (2000) Protein function in the post-genomic era. Nature 405, 823-826.
13 Elowitz M. B., Leibler S. (2000) A synthetic oscillatory network of transcriptional regulators. Nature 403, 335-338.
14 Ferber D. (2004) Microbes made to order. Science 303, 158-161.
15 Finney A., Hucka M. (2003) Systems biology markup language (SBML) level 2: structures and facilities for model definitions. http://www.sbml.org/.
16 Goffman E. (1974) Frame analysis. Penguin Books, London.
17 Gofuku A., Lind M. (1994) Combining multilevel flow modeling and hybrid phenomena theory for efficient design of engineering systems. Proc. 2nd IFAC Workshop on Computer Software Structures Integrating AI/KBS Systems in Process Control.
18 Habermas J. (1989) On the logic of the social sciences. MIT Press, Cambridge, MA.
19 Hasty J., McMillen D., Isaacs F., Collins J. J. (2001) Computational studies of gene regulatory networks: in numero molecular biology. Nature Rev. Gen. 2, 268-279.
20 Hucka M., Finney A., Sauro H. M., Bolouri H. et al. (2003) The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics 19, 524-531.
21 Ideker T., Lauffenburger D. (2003) Building with a scaffold: emerging strategies for high- to low-level cellular modelling. Trends Biotechnol. 21, 255-262.
22 Jacob F., Monod J. (1961) Genetic regulatory mechanisms in the synthesis of proteins. J. Molec. Biol. 3, 318-356.
23 Kitano H. (2002) Systems biology: a brief overview. Science 295, 1662-1664.
24 Kremling A., Bettenbrock K., Laube B., Jahreis K., Lengeler J. W., Gilles E. D. (2001) The organization of metabolic reaction networks. III. Application for diauxic growth on glucose and lactose. Metabol. Eng. 3, 362-379.
25 Kremling A., Jahreis K., Lengeler J. W., Gilles E. D. (2000) The organization of metabolic reaction networks: a signal-oriented approach to cellular models. Metabol. Eng. 2, 190-200.
26 Larsen M. N. (1993) Deriving action sequences for start-up using multilevel flow models. PhD Thesis, Department of Automation, Technical University of Denmark.
27 Larsson J. E. (1996) Diagnosis based on explicit means-end models. Artif. Intell. 80, 29-93.
28 Lee T. I. et al. (2002) Transcriptional regulatory networks in Saccharomyces cerevisiae. Science 298, 799-804.
29 Lengeler J. W., Drews G., Schlegel H. G. (1999) Biology of the prokaryotes. Thieme Verlag, Stuttgart, Germany/Blackwell Science, Oxford, UK.
30 Lind M. (1994) Modeling goals and functions of complex industrial plant. Appl. Artif. Intell. 8, 259-283.
31 Lind M. (1999) Plant modeling for human supervisory control. Trans. Inst. Measure. Control 21, 171-180.
32 Lind M. (2002) Promoting and opposing. NKS-R-07 Project Report, Ørsted DTU, Technical University of Denmark, Kongens Lyngby, Denmark.
33 Lind M. (2004a) Description of composite actions - towards a formalization of safety functions. NKS-R-07 Project Report, Ørsted DTU, Technical University of Denmark, Kongens Lyngby, Denmark.
34 Lind M. (2004b) Means and ends of control. Proc. IEEE Conf. Systems Man and Cybernetics, 10-13 October 2004, The Hague, Holland.
35 Lyons J. (1977) Semantics 2. Cambridge University Press, Cambridge, MA.
36 Mackey M. C., Santillán M., Yildirim N. (2004) Modeling operon dynamics: the tryptophan and lactose operons as paradigms. C. R. Biologies 327, 211-224.
37 Makrides S. C. (1996) Strategies for achieving high-level expression of genes in Escherichia coli. Microbiol. Rev. 60, 512-538.
38 Mangan S., Alon U. (2003) Structure and function of the feed-forward loop network motif. Proc. Natl. Acad. Sci. 100, 11980-11985.
39 Mangan S., Zaslaver A., Alon U. (2003) The coherent feedforward loop serves as a sign-sensitive delay element in transcription networks. J. Mol. Biol. 334, 197-204.
40 Milo R., Shen-Orr S., Itzkovitz S., Kashtan N., Chklovskii D., Alon U. (2002) Network motifs: simple building blocks of complex networks. Science 298, 824-827.
41 Postma P. W., Lengeler J. W., Jacobson G. R. (1993) Phosphoenolpyruvate:carbohydrate phosphotransferase systems of bacteria. Microbiol. Rev. 53, 543-594.
42 Ronen M., Rosenberg R., Shraiman B. I., Alon U. (2002) Assigning numbers to the arrows: parameterizing a gene regulation network by using accurate expression kinetics. Proc. Natl. Acad. Sci. 99, 10555-10560.
43 Salgado H. et al. (2004) RegulonDB (version 4.0): transcriptional regulation, operon organization and growth conditions in Escherichia coli K-12. Nucleic Acids Res. 32, D303-D306.
44 Santillán M., Mackey M. C. (2004) Influence of catabolite repression and inducer exclusion on the bistable behavior of the lac operon. Biophys. J. 86, 1282-1292.
45 Shen-Orr S. S., Milo R., Mangan S., Alon U. (2002) Network motifs in the transcriptional regulation network of Escherichia coli. Nature Genetics 31, 64-68.
46 Simon H. A. (1981) The sciences of the artificial. MIT Press, Cambridge, MA.
47 Smolen P., Baxter D. A., Byrne J. H. (2000) Modeling transcriptional control in gene networks - methods, recent results, and future directions. Bull. Math. Biol. 62, 247-292.
48 Stelling J., Klamt S., Bettenbrock K., Schuster S., Gilles E. D. (2002) Metabolic network structure determines key aspects of functionality and regulation. Nature 420, 190-193.
49 Von Wright G. H. (1963) Norm and action - a logical enquiry. Routledge and Kegan Paul, London.
50 Vukmirovic O. G., Tilghman S. M. (2000) Exploring genome space. Nature 405, 820-822.
51 Wei G.-H., Liu D.-P., Liang C.-C. (2004) Charting gene regulatory networks: strategies, challenges and perspectives. Biochem. J. 381, 1-12.
52 Wolkenhauer O., Kitano H., Cho K. H. (2003) Systems biology. IEEE Control Syst. Mag. 23, 38-48.
53 Wong P., Gladney S., Keasling J. D. (1997) Mathematical model of the lac operon: inducer exclusion, catabolite repression, and diauxic growth on glucose and lactose. Biotechnol. Prog. 13, 132-143.
54 Yeger-Lotem E. et al. (2004) Network motifs in integrated cellular networks of transcription regulation and protein-protein interaction. Proc. Natl. Acad. Sci. 101, 5934-5939.
55 Yeger-Lotem E., Margalit H. (2003) Detection of regulatory circuits by integrating the cellular networks of protein-protein interactions and transcription regulation. Nucleic Acids Res. 31, 6053-6061.
56 Yildirim N., Mackey M. C. (2003) Feedback regulation in the lactose operon: a mathematical modeling study and comparison with experimental data. Biophys. J. 84, 2841-2851.
Section 2 Computer-aided Process and Product Design
Section 2 presents a state-of-the-art review on methods and computer tools currently available to support engineering design activities. The material in this section is organized in five chapters covering the use of models in the development and improvement of processes and products. This activity relies on tools that integrate knowledge from many disciplines, since it has to take care of all the constraints of process and product design, such as equipment and plant flexibility and operability, raw materials and energy usage, economy, health and safety. Feedstock and product purification are among the critical components of a chemical process. This is why Chapter 1 addresses the synthesis of separation systems, with emphasis on distillation, one of the most energy-intensive unit operations in the chemical process industry. Besides the optimization of a single column, improved sequencing in the case of multiple separations, heat integration and thermal coupling offer perspectives for significant energy savings and are reviewed. All potential solutions of the synthesis problem are considered by generating a superstructure to be optimized using mathematical programming techniques. Chapter 2 covers process intensification. The design of more efficient and compact processing equipment, usually combining several functions, has long been realized by drawing on intuition and expertise. Now systematic design procedures based on modeling the fundamental principles underlying the process intensification technologies are being developed. Significant achievements are reported in the design of single and multiphase reactors, of reaction-separation systems and of hybrid separation processes by using computer-aided methods for process intensification. The performance of a process is not only related to the proper design of the main equipment. All processes require utilities: water, solvents, waste treatment, and most of all, energy. Chapter 3 presents computer-aided methods for solving the optimal integration of processes with utility networks; it compares several formulations proposed to optimize the integration of different types of utility subsystems (e.g., combined heat and power, heat pumps and refrigeration, water circuit). CAPE tools now allow the development, evaluation and optimization of new units and processes. Chapter 4 documents five industrial case studies that illustrate how a combination of rigorous models has been used to produce innovative designs meeting multiobjective targets besides economy: environmental conservation, safety, operational flexibility, controllability. They required the use of commercial tools but were enhanced with detailed models of the units that were not routinely available. An efficient design relies on accurate bench and pilot plant data combined with rigorous models based on thermodynamics, conservation laws, and accurate models of transport and fluid flow, with particular emphasis on dynamic behavior and uncertainty in market conditions.
The process industries undergo a move from process-oriented to product-centered businesses. A consequence of this switch is a growing interest in product development. Chapter 5 addresses this issue, with special emphasis on the product definition phase: how to translate customers’ needs into product properties and how to foster the generation of novel product concepts. Applications of computer-aided techniques, such as data mining or case-based reasoning, are also illustrated by practical examples.
1 Synthesis of Separation Processes

Petros Proios, Michael C. Georgiadis, and Efstratios N. Pistikopoulos
1.1 Introduction
Process synthesis has received considerable attention over the last 30 years and it continues to be an active research area in chemical engineering, with significant advances achieved in terms of developing synthesis methods for subsystems (reactor networks, separation systems, heat exchange networks, and mass exchange networks) and for total flow sheets. By definition, process synthesis is the determination of the topology of process units, as well as the type and design of the units within the flow sheet that will convert specified inputs into desired products. The synthesis task is usually driven by the optimization of a specific objective, typically based on economics. Systematic techniques for the synthesis and design of flow sheets are represented by two different approaches: hierarchical decomposition and mathematical programming. Reviews on early developments can be found in Hendry et al. (1973), Westerberg (1980), and Nishida et al. (1981). An overview of synthesis techniques is also available in the book by Biegler et al. (1997) and recent advances are presented in the excellent review article by Westerberg (2004). Early work on process synthesis focused on developing insights into finding better separation sequences (using distillation) for separating mixtures of n components. The emphasis on investigating techniques for the synthesis of complex distillation networks can be mainly attributed to the role that distillation plays in the economy of the overall plant. Distillation is a highly utilized and at the same time one of the most energy intensive unit operations in the chemical process industry. Mix indicated in 1978 that distillation accounted for about 3% of the total US energy consumption and that a 10% savings in distillation energy could amount to a savings of about $500 million in national energy costs (Mix et al. 1978). Today, the expense of distillation-related energy consumption has reached even higher levels, considering the expansion of the use of distillation in industry and the higher cost of utilities. These economic reasons have imposed energy efficiency as the main design target in distillation.
Due to its importance, distillation has received particular attention in the field of chemical engineering, with publications about the operation and design of countercurrent separation cascades dating as early as 1889 (Sorel 1889). Moreover, since multicomponent separations require the use of sequences of distillation columns, significant research efforts have concentrated on the synthesis of these systems aiming at energy efficiency. Research on this subject has been further powered by the fact that extensive energy savings can be achieved through the selection of the most energy efficient sequence, amongst a large number of available candidates. This is an explicit consequence of the dependence of the distillation systems’ energy consumption on the feed mixture and on the order in which its components are separated. Technological breakthroughs are constantly called in to propose new techniques for energy efficiency that would compensate for the ever increasing energy-related distillation expenses. Two of the most promising techniques are heat integration and the thermal coupling of distillation columns. The former is based on the energy savings that can be achieved by heat integrating two distillation columns, that is, by using the heat generated in a column’s condenser for the heat required in another column’s reboiler, while satisfying appropriate temperature difference conditions. This technique can lead to substantial energy savings that can reach the order of 50% when compared to non-heat-integrated arrangements. Similar energy savings have been reported through the use of thermal coupling techniques in distillation, where heat units and their associated utilities are eliminated through the use of two-way liquid and vapor interconnections between columns, the latter being characterized as complex columns. These energy savings are the direct result of the elimination of heat units and the increase of thermodynamic efficiency, due to the minimization of remixing effects, which are generally associated with non-thermally-coupled arrangements. However, in order to apply the aforementioned synthesis techniques for energy efficiency, certain complicating issues need to be addressed, which are mainly of a structural and physical nature. The structural complications are related to the large number of alternative arrangements that need to be considered. Even in the simplest case from a structural perspective, where sequences of simple columns are examined (columns with a single feed and two products), the extensive connectivity possibilities between columns lead to the generation of a large number of alternative column sequences, which increase with the number of components to be separated. Moreover, these structural complications become even more intense through the incorporation of structural possibilities associated with heat integration and thermal coupling. The physical complications are related to the complexity of the underlying physical phenomena, which involve simultaneous mass and heat exchange between liquid and vapor streams at the tray cascades. Furthermore, the physics of the problem are such that the choice of the optimal configuration is largely dependent on the feed mixture to be separated (its components’ relative volatilities and composition).
It has been reported (Tedder and Rudd 1978; Agrawal and Fidkowski 1998) that for a particular separation, column configurations that are generally regarded as highly energy efficient (for instance, fully thermally coupled columns), can, in fact, have
larger energy consumptions than sequences of more conventional columns. Consequently, in order to evaluate efficiently the energy consumption of a particular column sequence and the energy savings that can potentially be achieved through its heat integration or thermal coupling, the aforementioned physical phenomena need to be accurately captured. Summarizing, it is not an exaggeration to state that the economic importance and associated complications have made the distillation column sequencing problem for energy efficiency one of the most challenging synthesis problems in chemical engineering, with numerous approaches proposed for its solution. One of the earliest attempts was based on total enumeration. This approach is, however, limited to problems with only a few alternatives. Other main approaches are the heuristic and physical insight ones. The former relies on rules of thumb derived by engineering knowledge and/or by the use of shortcut models, while the latter is based on the exploitation of basic physical principles, which are also based, to a certain extent, on simplified models and on graphical representations of the problem. These approaches generally enable quick and inexpensive calculations for the alternatives’ physical evaluation. However, the fact that they are derived based on simplifying assumptions, which are valid only for certain cases, places a major limitation on their accuracy, validity, and applicability. Furthermore, more complications arise when the developed heuristics are conflicting, suggest more than one possible solution, or do not cover the details of the examined problem. The most recent approach addressing this problem is the mathematical programming (algorithmic) approach, where the synthesis of column sequences is formulated as an optimization problem. Based on mathematical programming, one of the most important systematic approaches that has been receiving increased attention over recent years is superstructure optimization. Superstructures are, in general, superset flow sheets incorporating every feasible realization of the process in question. The generation and evaluation of each alternative realization takes place with the solution of an optimization problem, which usually involves the use of continuous and binary (0-1) variables, rendering the problem a mixed-integer programming (MIP) problem. However, most of these methods either use simplifying assumptions, limiting the validity and accuracy of the results, or treat the problem rigorously, but at the expense of computational effort. Section 1.2 will provide an overview of techniques for the synthesis of simple column sequences. The subsequent section will describe in a comprehensive way the synthesis problem of heat-integrated distillation trains. State-of-the-art methodologies and algorithmic frameworks for the synthesis of complex distillation sequencing are critically discussed in Section 1.4. Finally, concluding remarks will be made in Section 1.5.
1.2 Synthesis of Simple Distillation Column Sequences

1.2.1 Simple Distillation Column Sequencing
As already mentioned, in order to separate multicomponent mixtures into pure or
multicomponent product streams using distillation, more than one column needs to be employed, generating sequences of distillation columns. From chemical engineering knowledge it is known that for the same mixture separation, different distillation column sequences have different energy consumption levels, which can, in fact, be quite different from each other. Since distillation is an energy-intensive process widely used in the chemical industry, there is a substantial economic incentive in selecting the appropriate distillation column sequence for a particular separation. However, as already mentioned, two main complications make the distillation column sequencing problem one of the most challenging synthesis problems in chemical engineering, namely the increasing number of structural alternatives and the complexity of the underlying physical phenomena. From a structural point of view, developing a method that could incorporate all the alternatives of interest, without simplifying assumptions, such as sharp splits or product streams enriched in a particular component produced once in a column sequence, is not a trivial task after the first member of multicomponent separations. For the latter only three possible simple column sequences exist (Fig. 1.1). However, an illustration of the difficulty of the problem is realized when the next problem is considered, namely the quaternary separation. For this problem, if sharp split assumptions are used, only five different structures are possible (Thompson and King 1972). However, if the simplifying assumptions are removed then 22 possible alternative sequences have been identified (as will be shown later). It must be noted that these alternatives may include more sections than the minimum number required, however, this is desirable since when general design targets are optimized, such as the total annualized cost (TAC), these structures may potentially exhibit an optimal tradeoff between operating and capital cost in their additional sections. Therefore, a proposed sequencing method must also be able to incorporate and generate the structural alternatives systematically and automatically. From a physical representation point of view, the proposed method must be accompanied by a physical model that can capture the underlying phenomena accurately. Due to the complexity of the physical phenomena taking place within each distillation column, rigorous physical models are required for an accurate physical representation. However, the incorporation of rigorous models leads, by default, to the generation of nonconvex mathematical problems whose solution is quite involved and usually computationally expensive. In order to obtain easy and fast solutions, numerous methods have been proposed approximating the physical behavior of distillation columns with models based on a number of simplifying assumptions, thus trading accuracy and applicability for ease of calculation.
Figure 1.1 Ternary simple column sequencing alternatives: (a) direct sequence; (b) indirect sequence; (c) a third alternative.
Finally, due to the aforementioned complexities, an efficient sequencing method must also be accompanied by an appropriate solution procedure. For the case of three or five structural alternatives, an implicit enumeration solution procedure is acceptable. However, even for the quaternary case including nonsharp separations, implicit enumeration is not the most efficient solution procedure, especially if rigorous models are used, due to the computational effort required. Due to the aforementioned importance and associated difficulties of the simple column sequencing problem, a number of methods have been proposed over the years for its solution, based on different approaches for process synthesis. These approaches involved heuristic methods (Seader and Westerberg 1977), evolutionary techniques (Stephanopoulos and Westerberg 1976), hierarchical decomposition (Douglas 1988), implicit enumeration (Fraga and McKinnon 1995) and thermodynamic insights (Bek-Pedersen 2003; Bek-Pedersen and Gani 2004), to name but a few. However, one approach that has received a lot of attention recently is the superstructure optimization approach. In this approach a superstructure of the problem is generated that, according to Westerberg (Andrecovich and Westerberg 1985a,b), “should contain all feasible distillation sequences and all feasible operating conditions for any column within the superstructure.” Due to the importance of the superstructure optimization approach, particular attention will be given to the various superstructure optimization methods proposed for the synthesis of simple distillation column sequences.
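To give a flavor of what superstructure optimization with binary variables means in its very simplest form, the toy model below selects between the direct and indirect ternary sequences by minimizing an invented total annualized cost. It is a sketch only: the cost numbers are made up, the open-source PuLP package is our own choice of tool (the studies reviewed below used other modeling systems and solvers), and a realistic formulation would contain column models, mass balances, and cost correlations rather than fixed per-sequence costs.

```python
# Toy superstructure selection: pick one of two candidate ternary sequences.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

tac = {"direct": 1.00, "indirect": 1.12}   # hypothetical relative TACs

prob = LpProblem("ternary_column_sequencing", LpMinimize)
y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in tac}   # 1 if sequence s is built

prob += lpSum(tac[s] * y[s] for s in tac)    # objective: minimize total annualized cost
prob += lpSum(y.values()) == 1               # exactly one sequence is selected

prob.solve()
chosen = [s for s in tac if value(y[s]) > 0.5]
print("selected sequence:", chosen[0])
```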
1.2.2 Superstructure Methods for Simple Column Sequencing
An early systematic superstructure method for the simple column sequencing problem was proposed by Andrecovich and Westerberg (1985a,b). The proposed method addressed the separation problem of a multicomponent feed into pure product streams, generating a superstructure according to the separation tasks taking place in each column and to the column connectivities. Using the assumption of sharp splits in each distillation column, the generation of the superstructure was based on
the list splitting technique of Hendry and Hughes (1972), ranking components in decreasing order of relative volatilities. The authors proposed two algorithms for systematically generating the column sequences. The synthesis problem was formulated as an MILP, with binary variables introduced for the presence of columns and with simple models based on total mass balances and feed split fractions. This work was further extended in the work of Wehe and Westerberg (1987), for the incorporation of bypass possibilities, addressing the more general problem, namely of distillation separation of multicomponent feed streams into multicomponent product streams. Simple sharp split column models were employed, which were generally linear, except for those of the superstructure splitters, where bilinearities were introduced (due to the absence of information about the composition of the splitter feed and the actual values of the splits obtained). In order to overcome this complication, the resulting nonlinear problem for each selected structure was relaxed through linearizations, generating a lower bound to the solution. For the sequence with the best lower bound, the nonlinear problem was solved to obtain an upper bound on the global optimum. Optimal column sequences were considered those that had similar upper and lower bounds, while the ones with lower bounds above the best upper bound were discarded. The general separation problem using superstructure optimization was also addressed by Floudas (1987). A superstructure method was developed for the sequencing of separators, without explicitly specifying the actual type of separators used. Distillation columns were also included as potential separator types. The objective was the minimization of a function related to the separation difficulty, which was calculated based on relative volatilities, assuming distillation as the separation task. The method generated the superstructures based on the assumption of sharp (perfect) splits obtained in each separator. The number of separators in the superstructure was assumed fixed, thus eliminating the need for the introduction of binary variables. The optimal column sequences were derived by the selection of the column interconnections. In this method each separator was modeled using simple models of total and component mass balances, resulting in an NLP problem. Floudas and Anastasiadis (1988) addressed explicitly the synthesis of simple distillation column sequences for the general multicomponent feed and product problem. Their method, like that of Floudas (1987), generated a superstructure containing series and/or parallel arrangements of distillation columns, along with stream splitting, mixing and bypassing. In order to construct the superstructure, systematic stream mixing and splitting rules were provided, based on the assumption of sharp split separations performed in each distillation column. Simple distillation column physical models were employed based on assumptions of isothermal and isobaric columns with equal condenser and reboiler heat duties and general total and component mass balances, where the component compositions did not participate as variables but as parameters derived a priori over a number of shortcut simulations. The objective to be minimized was the system’s total annualized cost (TAC), which was modeled as a function of the column feed flow rate. This was derived based on a linear approximation of the six-tenths factor rule used to scale equipment cost.
The necessary coefficients for the cost functions were also based on simulations of shortcut distillation models at different feed flow rates. The problem was posed as an MILP with binary variables denoting the existence or not of distillation columns, to be determined by the optimization along with their continuous interconnections, thus generating the optimal sequence.

In Aggarwal and Floudas (1990) the simple distillation column sequencing problem for general separations was revisited, relaxing the assumption of sharp separators. The latter were modeled based on the assumption of nondistributed nonkey components. The construction of the superstructures, and therefore the generation of the alternatives, was based on the assumption that in each column there exist two adjacent (with respect to their volatility) key components, which are allowed to distribute between the column distillate and bottoms (whose recoveries were considered optimization variables). All other nonkey components were only allowed to appear in the distillate or bottoms. Based on the fact that each column performs a separation of adjacent key components, a maximum of n-1 columns was set up for an n-component feed. Binary variables were introduced for the presence of each potential distillation column. In the structural model of the method, possibilities of mixing, splitting, and bypassing were explicitly incorporated. The objective function for the optimization problem was the minimization of the system's TAC. Cost models were generated through regression analysis, based on the solution of numerous shortcut models over a range of flow rates, compositions, and key component recoveries, and the problem was modeled as an MINLP. Finally, the method was extended to incorporate cases of nonadjacent key components, with intermediate components allowed to distribute between the column product streams.

The simple column sequencing problem of a single feed separated into pure product streams was addressed in Novak et al. (1996), based on the generation of a network and a compact superstructure, which were viewed as combinations of interconnection nodes (mixers and splitters) and process units (distillation columns). The former were approximated using special linear constraints and the latter were based on the assumption of sharp splits. The compact superstructure was viewed as a simplification of the tree superstructure of Hendry and Hughes (1972) and of the network superstructure of Andrecovich and Westerberg (1985a,b) for sharp separations, and was a modification of the superstructure provided by Floudas and Anastasiadis (1988), using fewer distillation columns. In the compact superstructure the number of columns was assumed fixed and binary variables were assigned only to the stream connectors. For an n-component feed, the number of distinct sharp distillation sequences S (for all superstructure types employed) was equal to (Thompson and King 1972):
S = [2(n - 1)]! / [n! (n - 1)!]
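A quick numerical check of this count, as a minimal sketch (not part of any cited formulation):

```python
# Number of distinct sharp-split simple column sequences for an n-component feed
# (Thompson and King 1972): S = [2(n-1)]! / (n! (n-1)!).
from math import factorial

def n_sequences(n: int) -> int:
    return factorial(2 * (n - 1)) // (factorial(n) * factorial(n - 1))

print([(n, n_sequences(n)) for n in range(2, 8)])
# -> [(2, 1), (3, 2), (4, 5), (5, 14), (6, 42), (7, 132)]
```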
The numbers of columns in the network and compact superstructures, S_N and S_C respectively, were also expressed as functions of n.
The distillation columns were modeled based on the Gilliland/Fenske/Underwood method, using assumptions of uniform relative volatilities per column and specifying minimum recoveries of the light and heavy key components. The objective to be minimized was set as the system's TAC, employing the fixed charge model of Grossmann (1985) for the capital cost calculations. The structural (pure integer) constraints were based on the topology of the superstructure. The generated MINLP problem was solved using the modified OA/ER algorithm (Kravanja and Grossmann 1994), implemented in the computerized synthesizer PROSYN-MINLP (Kravanja and Grossmann 1993).

Smith (1996) addressed the simple column sequencing problem by generating a superstructure for the structural alternatives using the state operator network (SON) representation. In this representation the separation tasks and equipment were defined, while the assignment of tasks to the equipment had to be determined. Stream mixers and splitters were included before and after each distillation column to arrange all possible interconnections. The distillation columns were modeled using a slightly modified version of the rigorous tray-by-tray MESH distillation column model of Viswanathan and Grossmann (1993). The method did not employ any simplifying assumptions such as sharp splits or equimolar flow rates. Binary variables were used in order to determine the number of trays within each column (reflux tray location). In the SON method the number of columns was rationally fixed according to the number of components in the feed. The feed tray location and the column interconnections, defining the column sequences, were continuous decisions, determined by their flow rate values. The generated MINLP problems were solved in GAMS using the DICOPT MINLP solver (Kocis and Grossmann 1989). The solution procedure included the solution of the relaxed MINLP problem and then a branch-and-bound search over the nonzero relaxed binary variables. This procedure was followed since, as reported, the full MINLP problem failed to converge in DICOPT, due to scaling and domain errors in the set-up of the NLP subproblems.

To overcome the implicit assignment of processing tasks to processing units, Papalexandri and Pistikopoulos (1996) developed a general synthesis framework based on a mass and heat transfer representation. Utilizing fundamental mass/heat transfer principles within a superstructure environment, this building block synthesis method exploits modeling concepts from the well-defined heat exchanger network (HEN) and mass exchange network (MEN) problems. Synthesis alternatives are not prepostulated, but instead embedded within a network of mass and heat exchanging modules that allow nonconventional hybrid systems to be revealed. In terms of total process flow sheet alternatives, the potential of this approach has been illustrated through application to ideal systems including separation, reaction, and heat integration operations. The approach has also been applied to the synthesis of flexible heat and mass exchange networks (Papalexandri and Pistikopoulos 1994). Ismail et al. (1999) modified and extended the mass/heat transfer module to general multicomponent nonideal systems and to reactive distillation systems (Ismail et al. 2001).

Yeomans and Grossmann (1999a) proposed a systematic modeling framework based on superstructure optimization in which the sequencing of simple distillation
columns was also addressed. The superstructures were obtained using the state task network (Sargent 1998) and the state equipment (operator) network representations (Smith 1996; Smith and Pantelides 1995). These two representations can be regarded as complementary to each other. In the STN, tasks and states are defined while the equipment assignment is unknown, whereas in the SEN, as indicated above, the tasks and equipment are defined and the assignment of tasks to equipment must be determined. The synthesis problem was modeled with generalized disjunctive programming (GDP) (Raman and Grossmann 1994). In order to use GDP, conditional constraints had to be distinguished from permanent ones, that is, constraints that hold in all structural alternatives. Having identified the conditional constraints, these were then represented with disjunctions and assigned to Boolean (logical) variables representing their existence. For a Boolean variable equal to true, the conditional constraints (corresponding to a column physical model) became activated; otherwise all variables participating in these constraints were set to zero. The GDP models were then transformed to MILP problems using the convex hull formulation of the disjunctions (Balas 1985). The generation of the superstructures for the examined problem was based on sharp separations performed in the distillation columns, which were modeled using simple mass balances and recoveries, generating linear physical models.

The above modeling framework was also implemented in Yeomans and Grossmann (1999b) for the synthesis of simple distillation column sequences employing nonlinear shortcut distillation column physical models. The latter were based on assumptions of sharp splits, where only the distribution of adjacent key components was allowed along with high recovery of the key components. The Fenske/Underwood/Gilliland method was used for the calculation of the minimum number of trays and of the minimum reflux.

The simple column sequencing problem using GDP was also addressed in Caballero and Grossmann (1999). The column superstructures were generated using the STN, the SON, and an intermediate representation. The latter was proposed based on the STN representation, however allowing the columns the possibility of performing multiple tasks. The distillation column models were aggregated using mass transfer feasibility constraints at the section boundaries. These models bore many similarities with the ones used for the representation of simple distillation columns in the generalized modular framework (GMF) (Papalexandri and Pistikopoulos 1996). However, the two models have many structural and physical modeling differences. Structurally, the building blocks of the superstructure are different. In Caballero and Grossmann (1999) each building block consisted of a set of two aggregated column sections with their heat units and a feed position mixer (with predefined interconnections between them). In the GMF each mass/heat (M/H) module and each heat exchange (HE) module constitutes a different building block. Moreover, as shown in Papalexandri and Pistikopoulos (1996), the GMF building blocks by definition have more extensive interconnection possibilities. From a physical modeling point of view, in Caballero and Grossmann (1999) simplifying assumptions of equimolar flow rates, isothermal operation and sharp splits were employed.
Moreover, in order to provide valid mass transfer feasibility constraints, the mass transfer direction had to be known and postulated, which depended on the separation task taking place in
a particular column section. The GMF mass transfer is arranged through driving force constraints, which are formulated in such a way that the mass transfer direction does not need to be prepostulated or known a priori. However, both models provide a lower bound on the energy consumption of distillation columns. In the proposed method the generated GDP problems were transformed into MINLP problems by assigning a binary variable to each Boolean variable and by transforming the disjunctions into big-M constraints.

The most recent superstructure method for the simple column sequencing problem was proposed in Yeomans and Grossmann (2000a), using rigorous MESH models for the physical representation of distillation columns. The superstructure for the problem was generated using the SON representation, as implemented in Yeomans and Grossmann (1999a). The computational difficulties of the rigorous MESH models, associated with equations becoming singular, the solution of redundant equations, the need for good initialization procedures, and the convergence difficulties due to nonexisting columns or flows, were overcome using a GDP adaptation of the Viswanathan and Grossmann (1993) distillation column model. In the GDP model the feed, condenser, and reboiler stages were modeled as permanent and all the other stages as conditional, each assigned to a Boolean variable. The disjunctions were modeled so that for existing stages (whose Boolean variables had a value of true), phase equilibrium constraints were enforced along with the other MESH constraints, thus allowing mass and heat exchange between the contacting phases. In the opposite case of nonexisting stages, the MESH constraints were applied without the phase equilibrium constraints and the input stream properties of the stage were set equal to those of the output stream, thus making the nonexisting stage a simple input-output stage. The translation of the GDP problems into MINLP problems was also done by transforming the disjunctions into big-M constraints. Moreover, appropriate initialization schemes were provided based on relaxed purity and recovery constraints.

Table 1.1 presents an overview of superstructure methods for simple column sequencing. The synthesis of azeotropic separation processes has also received significant attention in the literature. Table 1.2 summarizes most of the developed approaches for the analysis, design, and synthesis of homogeneous azeotropic separation systems.
Table 1.1 Overview of superstructure methods for simple column sequencing.

Authors | Features

Simplified methods (sharp splits)
Andrecovich and Westerberg (1985) | Network superstructure, simple mass balances
Wehe and Westerberg (1987) | General separation, nondistributed nonkeys, mass balances
Floudas (1987) | General separation, mass balances
Floudas and Anastasiadis (1988) | General separation, mass balances
Aggarwal and Floudas (1990) | General separation, nondistributed nonkeys, mass balances
Novak et al. (1996) | Network and compact superstructures, mass balances
Yeomans and Grossmann (1999a) | STN and SON superstructures, GDP, mass balances
Yeomans and Grossmann (1999b) | STN and SON superstructures, GDP, nonlinear shortcut models
Caballero and Grossmann (1999) | STN, SON and intermediate superstructures, aggregated models

Rigorous methods (MESH model, Viswanathan and Grossmann 1993)
Smith (1996) | SON superstructure, modified MESH model
Yeomans and Grossmann (2000a) | SON superstructure, GDP
Recently, Proios (2004) addressed the simple distillation column sequencing problem through the GMF synthesis model. From a structural point of view, a GMF structural model was developed that can generate the feasible structural alternatives for the examined problems, aiming at addressing efficiently the first complication of the column sequencing problem. In the proposed GMF sequencing method the accompanying physical model is based on the GMF physical model originally introduced by Papalexandri and Pistikopoulos (1996) and extended by Ismail et al. (1999, 2001). This model, which is based on aggregation, can be viewed as intermediate between the simplified and the rigorous models. On the one hand it avoids potentially limiting simplifying assumptions, such as equimolar flow rates, isothermal operation, sharp splits, etc., thus increasing the accuracy of the physical representation. On the other hand, due to aggregation it generates problems of smaller size, which are easier to solve. Finally, the GMF was accompanied by a solution procedure using formal MINLP solution techniques that find the optimal sequence without having to evaluate all structural alternatives. The overall synthesis framework and solution approach was demonstrated in several case studies (Proios 2004).
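To make the GDP-based formulations referred to above more concrete, a generic column-existence disjunction and its two common mixed-integer reformulations can be sketched as follows. The linear constraint set A_k x_k <= b_k stands in for a simplified column model; this is an illustrative sketch and not the exact model of any of the cited papers.

```latex
% Disjunction for the existence of column k, with Boolean variable Y_k:
\left[\begin{array}{c} Y_k \\ A_k x_k \le b_k \\ c_k = \gamma_k \end{array}\right]
\;\vee\;
\left[\begin{array}{c} \neg Y_k \\ x_k = 0 \\ c_k = 0 \end{array}\right]
% Big-M reformulation (binary y_k, variable bounds 0 <= x_k <= U):
%   A_k x_k \le b_k + M(1-y_k), \quad x_k \le U y_k, \quad c_k = \gamma_k y_k
% Convex-hull reformulation (Balas 1985); since the second disjunct forces x_k = 0,
% the disaggregated variables collapse to:
%   A_k x_k \le b_k\, y_k, \quad 0 \le x_k \le U y_k, \quad c_k = \gamma_k y_k
```

For linear column models the convex-hull form is what turns a GDP such as that of Yeomans and Grossmann (1999a) into an MILP, while the big-M form is the route taken in the MINLP transformations mentioned above.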
1.3 Synthesis of Heat-integrated Distillation Column Sequences
1.3.1 Heat-integrated Distillation Column Sequencing
As shown in the previous section, there is a substantial economic incentive for the development of methods for the simple distillation column sequencing problem, since significant energy savings can be achieved by finding the most energy efficient column sequence for a particular separation. However, even higher energy savings
can be achieved in distillation sequences by incorporating techniques such as heat integration (HI) and thermal coupling. In this section the former technique is employed, while the latter is examined in the following section. In general, distillation heat integration makes use of the heat generated in a column's condenser in order to provide the heat required in another column's reboiler (an illustration for the direct column sequence is shown in Fig. 1.2). The heat released in a column's condenser comes from the condensation of the vapor effluent of the column's top end, through heat exchange with a cold utility (cooling water) across a temperature difference. Conversely, in a column's reboiler heat is required for the vaporization of the liquid effluent of the column's bottom end, through heat exchange with a hot utility (steam). Structurally, two process streams exist: a vapor (hot) stream from a column's top end that needs to be cooled and a liquid (cold) stream from a column's bottom end that needs to be heated. Provided there is a sufficient temperature difference between those two streams, they can be made to exchange heat with each other. In order to achieve the necessary temperature differences the column pressures are shifted appropriately, since the column pressure has a direct effect on the column temperature levels (for nonazeotropic and nonreacting mixtures, raising the column pressure raises the column temperatures). The operating cost savings achieved through HI have been reported to reach 50% compared with non-HI column sequences (Hostrup et al. 2001). The obtained energy savings compensate for a potential increase in the capital expenditure due to the use of pumps, which is usually considered negligible when compared to the overall column investment cost (Novak et al. 1996).
Figure 1.2 Illustration of a ternary direct column sequence: (a) forward HI match; (b) backward HI match.
Table 1.2 Approaches for the analysis, design, and synthesis of homogeneous azeotropic separation systems.

Entrainer selection
Doherty and Caldarola (1985) | RCM, total reflux, ternary mixtures. Assumes linear distillation boundaries and no boundary crossing.
Stichlmair, J., J. R. Fair, J. L. Bravo, Chem. Eng. Prog. 85 (1989), p. 63 | DLM, total reflux, linear distillation boundaries.
Foucher et al. (1991) | RCM, total reflux, ternary. Automatic procedure.
Laroche, L., N. Bekiaris, H. W. Andersen, M. Morari, AIChE J. 38 (1992), p. 1309 | RCM, ternary. Analysis based on "equivolatility curves."

Column sequencing and bounding strategies
Laroche et al. (1992b) | RCM, all reflux, ternary. Separability in single-feed columns.
Wahnschafft et al. (1992) | RCM, all reflux, ternary mixtures, single-feed columns. Bounds by feed pinch point trajectories.
Fidkowski et al. (1993) | DLM, all reflux, ternary, single feed. Introduces "distillation limit," accounts for boundary crossing, algebraic-based.
Stichlmair and Herguijuela (1992) | DLM, all reflux, ternary, single feed. Accounts for curved distillation boundaries.
Jobson et al. (1995) | Attainable product region based on simple distillation and mixing for ternary mixtures.
Safrit and Westerberg (1997) | RCM. Algorithm determining boundaries and distillation regions for n-component systems.
Rooks et al. (1998) | RCM. Equation-based approach to determine distillation region structures of multicomponent mixtures.

Boundary value design procedure
Levy et al. (1985) | Ternary mixtures, single-feed columns, CMO.
Knight and Doherty (1986) | Ternary, single feed with heat effects.
Julka and Doherty (1990) | Multicomponent, single feed, CMO.
Knapp and Doherty (1994) | Tracks fixed points; ternary, double-feed column, CMO. Calculates minimum and maximum reflux.
Fidkowski et al. (1991) | Up to four components, single feed, CMO. Algebraic continuation arc method to locate tangent pinch.
Stichlmair et al. (1993) | Ternary mixture, "pinch point" geometry.
Bauer and Stichlmair (1995) | Combines pinch point analysis with process MINLP optimization for ternary mixtures in single-feed columns.
Castillo et al. (1998a) | Introduces the concept of staged leaves for ternary mixtures in single-feed columns.
Thong et al. (2004) | Two-stage synthesis procedure for separating multicomponent azeotropic mixtures.
Due to its economic importance, a large number of methods have been proposed for the synthesis of HI column sequences. Characteristic methods proposed over the years include the dynamic programming algorithm of Rathore et al. (1974), the branch-and-bound techniques of Sophos et al. (1978) and Morari and Faith (1980), and the thermodynamic insights (pinch) technique of Linnhoff et al. (1983), to mention but a few. However, focus will be, as elsewhere, on the superstructure optimization methods proposed for the solution of this problem. A distinct characteristic of the majority of these methods is that they are based on column sequencing methods, incorporating elements of heat exchanger network (HEN) synthesis. An overview of the superstructure HI column sequencing methods is provided next.
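Before turning to those methods, the condenser-reboiler matching idea described in Section 1.3.1 can be made concrete with a minimal sketch: a condenser (hot stream) can supply heat to a reboiler (cold stream) only if its temperature exceeds the reboiler temperature by at least a minimum approach. The temperatures, duties, and the 10 K approach below are hypothetical values, not data from any of the cited studies.

```python
# Enumerate feasible condenser-reboiler heat integration matches under a
# minimum temperature approach dT_min (all numbers hypothetical).
condensers = {"C1": {"T": 95.0, "Q": 3.2}, "C2": {"T": 68.0, "Q": 2.1}}   # degC, MW
reboilers  = {"R1": {"T": 88.0, "Q": 2.8}, "R2": {"T": 60.0, "Q": 1.5}}   # degC, MW
dT_min = 10.0

def feasible_matches(cond, reb, dT_min):
    """Return (condenser, reboiler, duty) tuples that satisfy the driving-force
    requirement; the duty is a pairwise upper bound on the integrated heat."""
    matches = []
    for ci, c in cond.items():
        for rj, r in reb.items():
            if c["T"] >= r["T"] + dT_min:          # temperature driving force exists
                matches.append((ci, rj, min(c["Q"], r["Q"])))
    return matches

for ci, rj, q in feasible_matches(condensers, reboilers, dT_min):
    print(f"{ci} can drive {rj}: up to {q:.1f} MW integrated")
# With these hypothetical values, only C1 -> R2 satisfies the approach temperature.
```

Shifting a column's pressure moves both its condenser and reboiler temperatures, which is precisely how additional matches are made feasible in the methods reviewed below.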
1.3.2 Superstructure Methods for HI Simple Column Sequencing
A number of superstructure methods have been reported in the open literature for the treatment of various facets of the HI distillation sequencing problem. Andrecovich and Westerberg (1985a) incorporated HI possibilities in their MILP simple column sequencing method. This was realized by assuming instances of columns at different prespecified pressure levels. The pressure ranges were determined by the available hot and cold utility temperatures. The method constructed the HI sequencing schemes using two proposed algorithms. The hot and cold streams were the condenser, reboiler, and utility streams. Having specified the column pressures, the temperatures of the condensers and reboilers were also considered fixed. The heat duties of each condenser and reboiler at each column instance were assumed proportional to the feed flow rate. The HI possibilities were incorporated using the HEN transportation (Cerda et al. 1983) and transshipment (Papoulias and Grossmann 1983) problem formulations. However, in the above approach an invalid assumption was made, namely that the product of the column heat duty and the reboiler-condenser temperature difference was constant in each column. This essentially meant that the reboiler and condenser heat duties and their temperature difference were assumed independent of the column temperature (or pressure) levels. In order to rectify this, Andrecovich and Westerberg (1985b) removed this assumption. The results derived generated the same HI sequences as previously, but the calculated utility targets were improved.

The HI column sequencing superstructure method of Andrecovich and Westerberg (1985b) was also used in Kakhu and Flower (1988), incorporating three types of complex columns (Petlyuk, side stripper, and side rectifier) to be heat integrated with other simple column sequences. The HI problem was addressed through the transshipment/transportation formulations. The distillation columns were modeled assuming sharp separations and total mass balances, generating an MILP problem, solved for the minimization of the TAC. The capital cost functions were based on a fixed-charge part and on a feed-flow-rate-dependent part, and the column heat duties were also given as simple functions of the feed flow rate.

Paules and Floudas (1988) addressed the HI simple column sequencing problem, generating a superstructure based on sharp split columns. The presence of each column was assigned to a binary variable, along with the HI possibilities between potential condenser and reboiler matches. In order to provide more realistic HI
schemes, the effects of pressure were taken into account implicitly, by relating the column pressure to the condenser temperature. The latter was included as a variable in the optimization problem. Through shortcut simulations at different pressure ranges and using regression analysis, the reboiler temperatures and the TAC expressions were formulated as functions of the condenser temperatures. The pressure ranges were determined by the available hot and cold utilities. The overall problem was posed as an MINLP solved for the minimization of the TAC using the APROS methodology.

Raman and Grossmann (1993) developed a model for the synthesis of HI distillation column sequences, where the sequencing problem without HI was based on the MILP method of Andrecovich and Westerberg (1985a). In the HI sequencing problem, columns with sharp splits were incorporated, which were represented in the HI problem through the temperatures of their reboilers and condensers. In contrast with the HI method of Andrecovich and Westerberg (1985a), the reboiler and condenser temperatures were not considered constant, but were incorporated as variables. The columns were modeled as operating at arbitrary pressures, since the authors did not incorporate pressure explicitly as a variable in their HI method. The columns' capital cost was expressed in the fixed charge-variable charge (feed-flow-rate-dependent) model. In the operating cost model the heat loads of the reboilers and condensers were calculated by simple functions of the feed flow rate. The generated problem was solved using a proposed incorporation of logic in the branch-and-bound scheme.

The simple column sequencing method of Novak et al. (1996) was also extended for the incorporation of HI possibilities, combining the compact and network column sequencing superstructures with the multistage HEN superstructure of Yee et al. (1990) (NLP) or Yee and Grossmann (1990) (MINLP, in which binary variables were used for every potential heat match). The columns were modeled assuming sharp splits and using the Fenske-Gilliland-Underwood shortcut model. The investment cost of pumps was found to be small compared to that of the columns and was therefore neglected. In the case studies, the MINLP form of the HEN problem was employed, along with a simple initialization and linearization scheme for overcoming convergence problems. In Grossmann et al. (1998) the above column sequencing superstructure method was coupled to the Duran and Grossmann (1986) HEN model and its disjunctive MILP reformulation. The latter was proposed in order to overcome difficulties experienced with the above model in handling isothermal streams (due to its convergence to suboptimal solutions). In a revisited HI column sequencing problem from Novak et al. (1996), the Duran and Grossmann (1986) model and its disjunctive reformulation generated HI structures characterized by the same column sequencing, but with different HI schemes. The Duran and Grossmann (1986) model produced the worst solution, while its disjunctive reformulation and the Novak et al. (1996) solution produced the best results, which were quite similar.
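The essence of such condenser-reboiler matching models can be written, in a generic and deliberately simplified form (not the exact constraints of Paules and Floudas (1988) or Raman and Grossmann (1993)), with one binary y_ij per candidate match between condenser i and reboiler j:

```latex
% Driving-force feasibility and integrated duty for a candidate match (i,j):
T^{\mathrm{cond}}_{i} - T^{\mathrm{reb}}_{j} \;\ge\; \Delta T_{\min} - M\,(1-y_{ij}),
\qquad
Q^{\mathrm{HI}}_{ij} \;\le\; Q^{\mathrm{cond}}_{i}\,y_{ij},
\qquad
Q^{\mathrm{HI}}_{ij} \;\le\; Q^{\mathrm{reb}}_{j}\,y_{ij}
% Utility demands are then reduced by the integrated duties:
Q^{\mathrm{hot}}_{j} = Q^{\mathrm{reb}}_{j} - \sum_{i} Q^{\mathrm{HI}}_{ij},
\qquad
Q^{\mathrm{cold}}_{i} = Q^{\mathrm{cond}}_{i} - \sum_{j} Q^{\mathrm{HI}}_{ij}
```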
The HI column sequencing problem was also addressed as an extension of the simple column sequencing problem of Yeomans and Grossmann (1999a). However, in order to produce more realistic HI results, energy balances were added to the simple mass balance models employed for the physical representation of the distillation columns. The HI synthesis procedure was based on the Raman and Grossmann (1993) model. The convex hull formulation was applied to the generated GDP problem for its translation to an MILP problem. For the incorporation of HI possibilities, the Raman and Grossmann (1993) HI model was also implemented in Yeomans and Grossmann (1999b), coupled to the proposed nonlinear GDP simple column sequencing model (involving nonlinear shortcut distillation column physical models).

In the distillation column sequencing method of Caballero and Grossmann (1999), HI possibilities were incorporated based on the heat integration models of Paules and Floudas (1988) and Raman and Grossmann (1993). The heat loads of the isothermal reboilers and condensers were calculated as a function of their vapor flow rates and heats of vaporization, while simplified energy balances were used for the incorporation of HI possibilities. Heat exchange feasibility constraints were enforced between condenser and reboiler temperatures for their potential HI matches, each of which was assigned to a binary variable. The authors did not consider the incorporation of pressure in the proposed method. An objective function was used for the minimization of the sequences' operating cost, which enabled the method to derive valid lower bounds on the represented systems' energy consumption.

Finally, a superstructure optimization method was proposed by Hostrup et al. (2001), incorporating thermodynamic insight principles. The main principle of this method was the use of thermodynamic insight techniques (Jaksland et al. 1995) to generate a superstructure of alternatives, to be further optimized using formal superstructure MINLP techniques. Three main steps were defined in this method: the problem formulation (identification of tasks and techniques), the flow sheet optimization, and the validation/analysis. A general separation problem posed in Aggarwal and Floudas (1990) was revisited, also incorporating HI possibilities. In the first step of the method the feed mixture was provided to the ICAS synthesis toolbox (CAPEC 1999) and, based on thermodynamic insights, two feasible separation techniques were identified (flash and distillation), which were eventually represented as a single technique, thus reducing the size of the generated MINLP problem. In the structural optimization step the compact superstructure of Novak et al. (1996) was employed and extended for the general separation case. HI was implemented in this model using the Yee and Grossmann (1990) HEN model. The columns were modeled using the Fenske/Gilliland/Underwood shortcut model and the HI results indicated 50% lower operating costs than those of the non-HI case.

Recently, Proios (2004) extended the GMF model for the simple column sequencing problem to account for heat integration opportunities. To this end, an HI block has been introduced, along with its structural and physical model components. A summary of superstructure-based methods for HI simple column sequencing is presented in Table 1.3.
Table 1.3 Summary of superstructure methods for HI simple column sequencing.

Authors | Features

Simplified methods (sharp splits)
Andrecovich and Westerberg (1985) | HEN transportation/transshipment, fixed-pressure columns
Kakhu and Flower (1988) | Andrecovich and Westerberg (1985b) plus three complex columns
Paules and Floudas (1988) | Pressure via condenser temperature, regression analysis
Raman and Grossmann (1993) | Andrecovich and Westerberg/Paules and Floudas, logic-based branch and bound
Novak et al. (1996) | HEN of Yee and Grossmann (1990)
Grossmann et al. (1998) | HEN of Duran and Grossmann (1986) and its GDP MILP reformulation
Yeomans and Grossmann (1999a,b) | HI of Raman and Grossmann (1993), GDP
Caballero and Grossmann (1999) | HI of Raman and Grossmann (1993), GDP
Hostrup et al. (2001) | Thermodynamic insights and Novak et al. (1996)
Proios (2004) | Extension of the GMF to account for heat integration opportunities
1.4 Synthesis of Complex Distillation Column Sequences
1.4.1 Complex Distillation Column Sequencing
In Section 1.2 it was illustrated that choosing the most appropriate simple column sequence out of a set of possible alternatives can lead to substantial energy savings. In Section 1.3 these energy savings were extended by applying heat integration techniques to these simple column sequences. However, it has been reported, as will be shown below, that high energy savings can also be achieved through the incorporation of thermal coupling in complex columns (columns with multiple feeds and side streams). In this section, an overview of techniques for the complex distillation column sequencing problem (that is, the sequencing problem where both simple and complex columns are considered) is critically presented. An illustration of the complex distillation sequencing problem is given here for the ternary separation problem (the simplest multicomponent separation). The alternative sequences for ternary distillation are shown in Fig. 1.3. Of those alternatives, the most common in both the literature and industrial practice are those consisting of simple distillation columns, which are shown in Figs. 1.3a-c. However, the number of alternatives increases considerably when complex columns are considered (Figs. 1.3d-k).
Figure 1.3 Alternative configurations for the distillation of a nonazeotropic ternary mixture: (a) direct sequence; (b) indirect sequence; (c) three-column sequence; (d) prefractionator; (e) Petlyuk column; (f) dividing wall column; (g) side Petlyuk; (h) side stripper; (i) side rectifier; (j) RV column; (k) SL column.
All these complex columns are either partially or fully thermally coupled. Of these, the fully thermally coupled (Petlyuk) column (Fig. 1.3e) has been receiving increasing attention, as it has been found to exhibit energy consumption reductions of up to 40% compared to conventional configurations
(Fidkowski and Krolikowski 1986; Glinos and Malone 1988; Triantafyllou and Smith 1992; Annakou and Mizsey 1996; Dunnebier and Pantelides 1999), which is due to its thermodynamic efficiency (minimization of losses due to unnecessary remixing). The dividing wall column (Fig. 1.3f) is considered thermally equivalent to the Petlyuk column under the assumption of no heat transfer through the vertical wall. It also provides capital cost savings by including the main column and the prefractionation column of the Petlyuk arrangement in the same shell. The two final configurations (RV and SL) are essentially a side rectifier column with a direct vapor connection and a side stripper column with a direct liquid connection, respectively (Agrawal and Fidkowski 1999a).

However, some further column configurations have been reported in the literature (Fig. 1.4) (Agrawal 2000b). As indicated in Fig. 1.4, these configurations are modifications of some of the complex columns, aiming at improving the latter's controllability. According to Agrawal (2000b), control difficulties can be encountered in complex columns, the source of which the author traced to the columns' vapor interconnections. In the alternatives of Fig. 1.4, one or more "problematic" vapor interconnection streams have been replaced with additional rectifying sections and condensers or with stripping sections and reboilers. These configurations, which are assumed to be thermally equivalent to their "parent" configurations, increase the number of structural alternatives for ternary distillation to 18, versus the eight reported by Tedder and Rudd (1978).

Therefore, the issue that needs to be formally addressed is to provide a method that systematically incorporates the complex structural possibilities of thermally coupled columns in a unified way with simple column sequences, while capturing accurately the underlying physical phenomena, in order to obtain the most energy efficient distillation sequences.
1.4.2 Superstructure Methods for Complex Distillation Sequencing
The first systematic superstructure method for generating distillation column sequences, including complex arrangements, was the sequential column superstructure presented initially by Sargent and Gaminibandara (1976) and extended in Eliceche and Sargent (1981) (see also Fig. 1.5). The authors proposed a rigorous tray-by-tray model for the design of distillation columns with multiple feeds and side streams. The number of stages and the optimal feed and side stream locations in each column required the introduction of a large number of discrete variables, rendering the problem a large-scale MINLP. The combinatorial complications were handled by treating the integers as continuous variables, rounded off each time to the nearest integer value, eventually reducing the problem to a nonlinear programming (NLP) problem.

The work on the sequential column superstructure was continued in Mei (1995), which investigated the possibilities of aggregation and size reduction by using as superstructure building blocks column segments (aggregations of trays) and connecting units, whose existence was denoted by binary variables.
Figure 1.4 Modified configurations for operability targets: (a) modified side stripper; (b) modified side rectifier; (c) modified Petlyuk 1; (d) modified Petlyuk 2; (e) modified Petlyuk 3; (f) modified Petlyuk 4; (g) modified Petlyuk 5.
Using the fact that trays do not operate at equilibrium, their number in each column segment was not specified as an integer variable. Moreover, each building block was modeled as a modular simulator, using as inputs a guess of the number of trays, the flow rates, and the intensive variables characterizing each input stream, and generating as outputs the calculated flow rates and intensive variables of each output stream to be included in the optimization. The case studies demonstrated the effect of the modular treatment of the problem, which led to mathematical models of fewer than 100 variables and equations.

Agrawal (1996) proposed a satellite column superstructure method for the separation of near-ideal mixtures of four and more components (n) into pure (or near pure) products, generating more than one fully thermally coupled (FC) distillation column sequence with a single reboiler and condenser. This superstructure involved n-2 columns as satellites around a central column, while generating an overall smaller minimum number of column sections than the sequential column superstructure (Sargent and Gaminibandara 1976) for n ≥ 4. The method was also extended to the generation of all the other feasible partially or nonthermally coupled column sequences for the examined separations through the addition of reboilers and condensers at the appropriate end of each column section of the satellite superstructure. A step-wise procedure was provided for the generation of all the FC structural alternatives. It was shown that although for n = 3 the satellite and the sequential column superstructures are equivalent, for n ≥ 4 the former is more complete than the latter. However, no formal structural optimization model was provided by the authors for the automated generation of all the alternatives. The proposed satellite superstructure representation was extended in Agrawal (2000a), providing a systematic method to draw more alternative FC column sequences, which are topologically different and allegedly more operable than those presented in the previous publication. From a thermodynamic point of view, the additional structures were equivalent to those of the 1996 publication.

Another method proposed for the generation of feasible distillation column sequences was the state task network (Sargent 1998), which was analyzed more thoroughly in Doherty and Malone (2001). The method initially identified the network states (mixtures created by all the feasible separations) and then the separation tasks, which were assigned to a distillation column or, in the case of complex distillation columns, to a section of a distillation column. Having identified the possible states and tasks, a superstructure could be generated by linking a distillation task between two consecutive states and by using logical constraints for the feasible sequencing of tasks. Since the STN is a one-to-one approach (one unit carries out only one task), this method generated a problem with a large number of units (contrary to the SON, which considered only a small number of multi-task units).

The complex distillation column sequencing problem was also examined in Dunnebier and Pantelides (1999), based on the state operator network (SON) method (Smith 1996). In the proposed method the distillation column operators were physically modeled based on the rigorous tray-by-tray model of Viswanathan and Grossmann (1993). Binary variables denoted the existence of column operators, the number of trays (reflux return tray location), and the side stream locations (see Fig. 1.6).
The authors included some very useful comments regarding the convergence complexities of the examined problem, while stressing the importance of complex arrangements for energy efficiency and the use of accurate models for their representation.
Yeomans and Grossmann extended their disjunctive model for the rigorous design of simple distillation columns (Yeomans and Grossmann 2000a) to the incorporation of complex arrangements (Yeomans and Grossmann 2000b), based on the sequential column superstructure representation of Sargent and Gaminibandara (1976). In both studies, once the superstructure for the column sequence had been postulated, the representation was translated into a GDP mathematical problem, which was then transformed into an MINLP problem. The column trays were arranged as permanent and conditional ones. The former were fixed in the superstructure and corresponded to the feed, reflux, and boil-up trays. The conditional trays were assigned to a Boolean variable which, when false, turned these trays into simple input-output trays with no mass transfer. The authors concluded that their method avoided the numerical difficulties associated with including redundant equations and with singularities due to liquid and vapor flows taking a value of zero, and improved the model's robustness.

Caballero and Grossmann (2001) provided a superstructure method based on the STN formalism of Yeomans and Grossmann (1999a) for the generation of all FC column sequences for n components from the satellite superstructure (Agrawal 1996). In conjunction with the guidelines provided in the latter publication, the superstructure was modified for the generation of all the partially and nonthermally coupled column sequences. The superstructure was modeled structurally using propositional logic expressions (Raman and Grossmann 1991).
Figure 1.5 Sequential column superstructure (Eliceche and Sargent 1981).
Figure 1.6 Superstructure for ternary distillation (Dunnebier and Pantelides 1999).
Each column was modeled physically using simple shortcut distillation column models based on a modified version of Underwood's equations (Carlberg and Westerberg 1989a,b), with fixed costs for the heat units and column sections. The complete synthesis problem was modeled using GDP and solved using a modified version of the logic-based outer approximation algorithm.

In a recent paper, Agrawal (2003) proposed an improved systematic procedure for the generation of basic and thermally coupled column sequences. In both types of column sequences it was imposed that a product stream enriched in a particular component was produced only once. A systematic procedure including useful guidelines was provided initially for generating all the basic column sequences, followed by a procedure for the thermally coupled column case. The latter was based on the generation of thermally coupled columns by replacing reboilers and condensers of basic sequences with two-way interconnections between columns. The method, which could be readily modified to incorporate other structural possibilities regarding the transfer of streams between columns, can be viewed as a useful tool for the construction of a systematic superstructure optimization method for the automatic generation of the reported and of other "unknown" column sequences. Coupled to an efficient and appropriately designed physical model and solution procedure, such a method could provide optimal designs from an energy and/or operability point of view.
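As a concrete illustration of the kind of shortcut column model referred to above, the classical Fenske-Underwood-Gilliland estimate for one sharp split can be sketched as follows. This is a minimal, generic sketch: the relative volatilities, feed composition, and recoveries are hypothetical, and the formulation is the textbook FUG method rather than the modified Underwood equations of Carlberg and Westerberg (1989a,b).

```python
# Textbook Fenske-Underwood-Gilliland (FUG) shortcut estimate for a single sharp
# B/C split of a near-ideal ternary mixture; all numbers are hypothetical.
import math
from scipy.optimize import brentq        # root finder for the Underwood equation

alpha = {"A": 4.0, "B": 2.0, "C": 1.0}   # volatilities relative to the heavy key C
z = {"A": 0.3, "B": 0.3, "C": 0.4}       # feed mole fractions
q = 1.0                                  # saturated-liquid feed
LK, HK = "B", "C"                        # adjacent key components
rec_LK, rec_HK = 0.99, 0.99              # key recoveries (LK to distillate, HK to bottoms)

# Distillate flows per mole of feed (light nonkey A assumed nondistributing).
d = {"A": z["A"], "B": rec_LK * z["B"], "C": (1.0 - rec_HK) * z["C"]}
D = sum(d.values())
x_D = {i: d[i] / D for i in d}

# Fenske equation: minimum number of stages at total reflux.
N_min = math.log((d[LK] / d[HK]) * ((z[HK] - d[HK]) / (z[LK] - d[LK]))) \
        / math.log(alpha[LK] / alpha[HK])

# Underwood equations: root theta between alpha_HK and alpha_LK, then minimum reflux.
f = lambda th: sum(alpha[i] * z[i] / (alpha[i] - th) for i in z) - (1.0 - q)
theta = brentq(f, alpha[HK] + 1e-6, alpha[LK] - 1e-6)
R_min = sum(alpha[i] * x_D[i] / (alpha[i] - theta) for i in x_D) - 1.0

# Gilliland correlation (Molokanov form) at an operating reflux of 1.3 * R_min.
R = 1.3 * R_min
X = (R - R_min) / (R + 1.0)
Y = 1.0 - math.exp((1.0 + 54.4 * X) / (11.0 + 117.2 * X) * (X - 1.0) / math.sqrt(X))
N = (N_min + Y) / (1.0 - Y)

print(f"N_min = {N_min:.1f}, R_min = {R_min:.2f}, N = {N:.1f} at R = 1.3*R_min")
```

Shortcut estimates of this kind are what aggregated and superstructure formulations typically embed, or replace with rigorous tray-by-tray MESH models, when ranking candidate sequences.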
Table 1.4 Summary of superstructure methods for complex column sequencing.

Authors | Features
Sargent and Gaminibandara (1976) | Sequential column superstructure, MESH model
Eliceche and Sargent (1981) | Sequential column superstructure, MESH model
Mei (1995) | Sequential column superstructure, pseudo-aggregated model
Agrawal (1996, 2000, 2003) | Satellite column superstructure
Sargent (1998) | STN (Doherty and Malone 2001)
Dunnebier and Pantelides (1999) | SON superstructure, MESH (Smith 1996)
Yeomans and Grossmann (2000b) | Sequential column superstructure, MESH
Caballero and Grossmann (2001) | Satellite column superstructure, GDP, shortcut model
Shah and Kokossis (2002) | Synthesis of complex distillation sequences
Proios (2004) | Extension of the GMF for synthesis of complex column sequences
Shah and Kokossis (2002) proposed a new representation for the synthesis and optimization of complex separation systems. This representation is based on tasks instead of units. The problem is formulated as an MILP, and its efficiency was illustrated on several problems from the literature as well as on industrial problems.
1.5 Conclusions
A comprehensive review of state-of-the-art techniques for the synthesis of energy efficient simple, heat-integrated, and complex distillation column sequences has been presented in this chapter. Emphasis was placed on systematic and rigorous approaches focusing on the generation and evaluation of the alternative designs in a compact and unified way. This problem, of significant economic and scientific interest, required the tackling of underlying structural and physical complicating issues. These were inherently related to the increasing number of structural alternatives and the complexity of the underlying mass and heat exchange physical phenomena. The economic potential in conjunction with these complications has "compelled" virtually all major research groups, both in academia and industry, to propose methods for the distillation column sequencing problem, addressing it in its entirety or just certain components of it.

The presented superstructure methods have contributed significantly to the efficient treatment of the distillation sequencing problem. It has been shown that the nonconvexities and discontinuities of the generated large-scale MINLP problems, which have imposed a limitation on many rigorous approaches, have been overcome
through the use of techniques such as disjunctive modeling and its associated solution procedures. These limitations can also be treated through the use of aggregated models, like the GMF, which ease the computational effort by reducing the size of the generated MINLP problems without compromising the accuracy of the results. It is important to note that lately there has been a growing discussion of constructing the process at a more fundamental level, that is, by thinking of the process as combinations of mass and heat exchanges (Westerberg 2004). The original work of Papalexandri and Pistikopoulos (1996), as extended by Ismail et al. (1999, 2001) and recently by Proios (2004), is a key step in this direction. If these approaches develop successfully and are further applied, they will lead to designing and synthesizing not only the process, but also the unit operations themselves that should form the basis of these processes. Applications in the synthesis of absorption, reactive absorption, and crystallization networks are clearly sought.
References

1 Aggarwal, A., C. A. Floudas (1990) Synthesis of General Distillation Sequences - Nonsharp Separations. Comput. Chem. Eng. 14(6), 631-653.
2 Agrawal, R. (1996) Synthesis of Distillation Column Configurations for a Multicomponent Separation. Ind. Eng. Chem. Res. 35, 1059-1071.
3 Agrawal, R. (2000a) A Method to Draw Fully Thermally Coupled Distillation Column Configurations for Multicomponent Distillation. Trans IChemE Part A 78, 454-464.
4 Agrawal, R. (2000b) Thermally Coupled Distillation with Reduced Number of Intercolumn Vapor Transfers. AIChE J. 46(11), 2198-2210.
5 Agrawal, R. (2003) Synthesis of Multicomponent Distillation Column Configurations. AIChE J. 49(2), 379-401.
6 Agrawal, R., Z. T. Fidkowski (1998) Are Thermally Coupled Distillation Columns Always Thermodynamically More Efficient for Ternary Distillations? Ind. Eng. Chem. Res. 37(8), 3444-3454.
7 Agrawal, R., Z. T. Fidkowski (1999a) New Thermally Coupled Schemes for Ternary Distillation. AIChE J. 45, 485-496.
8 Andrecovich, M. J., A. W. Westerberg (1985a) An MILP Formulation for Heat-Integrated Distillation Sequence Synthesis. AIChE J. 31(9), 1461-1474.
9 Andrecovich, M. J., A. W. Westerberg (1985b) A Simple Synthesis Method Based on Utility Bounding for Heat-Integrated Distillation Sequences. AIChE J. 31(9), 363-375.
10 Annakou, O., P. Mizsey (1996) Rigorous Comparative Study of Energy-Integrated Distillation Schemes. AIChE J. 35(6), 1877-1885.
11 Balas, E. (1985) Disjunctive Programming
and a Hierarchy of Relaxations for Discrete Optimization Problems. SIAM J. Alg. Discrete Methods 6, 466-486.
12 Bauer, M. H., J. Stichlmair (1995) Synthesis and Optimization of Distillation Sequences for the Separation of Azeotropic Mixtures. Comput. Chem. Eng. 19, S15-S20.
13 Bek-Pedersen, E. (2003) Synthesis and Design of Distillation Based Separation Schemes. Dissertation, Technical University of Denmark.
14 Bek-Pedersen, E., R. Gani (2004) Design and Synthesis of Distillation Systems Using a Driving-Force-Based Approach. Chem. Eng. Process. 43, 251-262.
15 Biegler, L. T., I. E. Grossmann, A. W. Westerberg (1997) Systematic Methods of Chemical Process Design. Prentice-Hall, New Jersey.
16 Caballero, J. A., I. E. Grossmann (1999) Aggregated Models for Integrated Distillation Systems. Ind. Eng. Chem. Res. 38, 2330-2344.
17 Caballero, J. A., I. E. Grossmann (2001) Generalized Disjunctive Programming Model for the Optimal Synthesis of Thermally Linked Distillation Columns. Ind. Eng. Chem. Res. 40, 2260-2274.
18 CAPEC, Department of Chemical Engineering, DTU (1999) ICAS Users Manual. Lyngby, Denmark.
19 Carlberg, N. A., A. W. Westerberg (1989a) Temperature-Heat Diagrams for Complex Columns. 2. Underwood's Method for Side Strippers and Enrichers. Ind. Eng. Chem. Res. 28, 1379-1386.
20 Carlberg, N. A., A. W. Westerberg (1989b) Temperature-Heat Diagrams for Complex Columns. 3. Underwood's Method for the Petlyuk Configuration. Ind. Eng. Chem. Res. 28, 1386-1397.
21 Castillo, F. J. L., D. Y. C. Thong, G. P. Towler (1998a) Homogeneous Azeotropic Distillation. 1. Design Procedure for Single-Feed Columns at Nontotal Reflux. Ind. Eng. Chem. Res. 37, 987-997.
22 Cerda, J., A. W. Westerberg, D. Mason, B. Linnhoff (1983) Minimum Utility Usage in Heat Exchanger Network Synthesis. A Transportation Problem. Chem. Eng. Sci. 38(3), 373-383.
23 Doherty, M. F., G. A. Caldarola (1985) Design and Synthesis of Homogeneous Azeotropic Distillations. 3. Sequencing of Columns for Azeotropic and Extractive Distillations. Ind. Eng. Chem. Fundam. 24, 474-485.
24 Doherty, M. F., M. F. Malone (2001) Conceptual Design of Distillation Systems. McGraw-Hill, New York.
25 Douglas, J. M. (1988) Conceptual Design of Chemical Processes. McGraw-Hill, New York.
26 Dunnebier, G., C. C. Pantelides (1999) Optimal Design of Thermally Coupled Distillation Columns. Ind. Eng. Chem. Res. 38, 162-176.
27 Duran, M. A., I. E. Grossmann (1986) Simultaneous Optimization and Heat Integration of Chemical Processes. AIChE J. 32, 123-138.
28 Eliceche, A. M., R. W. H. Sargent (1981) Synthesis and Design of Distillation Systems. IChemE Symp. Ser. 61, 1-22.
29 Fidkowski, Z. T., L. Krolikowski (1986) Thermally Coupled System of Distillation Columns: Optimisation Procedure. AIChE J. 32, 537-546.
30 Fidkowski, Z. T., M. Malone, M. F. Doherty (1991) Nonideal Multicomponent Distillation: Use of Bifurcation Theory for Design. AIChE J. 37(12), 1761-1779.
31 Fidkowski, Z. T., M. F. Doherty, M. Malone (1993a) Computing Azeotropes in Multicomponent Mixtures. Comput. Chem. Eng. 17(12), 1141-1155.
32 Floudas, C. A. (1987) Separation Synthesis of Multicomponent Feed Streams into Multicomponent Product Streams. AIChE J. 33(4), 540-550.
33 Floudas, C. A., S. H. Anastasiadis (1988) Synthesis of Distillation Sequences with Several Multicomponent Feed and Product Streams. Chem. Eng. Sci. 43(9), 2407-2419.
34 Fraga, E. S., K. I. M. McKinnon (1995) Portable Code for Process Synthesis Using Workstation Clusters and Distributed Memory Multicomputers. Comput. Chem. Eng. 19, 759-773.
35 Glinos, K., F. Malone (1988) Optimality Regions for Complex Column Alternatives in Distillation Systems. Chem. Eng. Res. Des. 66, 229-240.
36 Grossmann, I. E. (1985) Mixed-Integer Programming Approach for the Synthesis of Integrated Process Flowsheets. Comput. Chem. Eng. 9, 463-482.
37 Grossmann, I. E., H. Yeomans, Z. Kravanja (1998) A Rigorous Disjunctive Optimization Model for Simultaneous Flowsheet Optimization and Heat Integration. Comput. Chem. Eng. 22, S157-S164.
38 Hendry, J. E., R. R. Hughes (1972) Generating Separation Process Flowsheets. Chem. Eng. Prog. 68, 71-76.
39 Hendry, J. E., D. F. Rudd, J. D. Seader (1973) Synthesis in the Design of Chemical Processes. AIChE J. 19, 1.
40 Hostrup, M., R. Gani, Z. Kravanja, A. Sorsak, I. E. Grossmann (2001) Integration of Thermodynamic Insights and MINLP Optimization for the Synthesis, Design and Analysis of Process Flowsheets. Comput. Chem. Eng. 25, 73-83.
41 Ismail, S. R., E. N. Pistikopoulos, K. P. Papalexandri (1999) Modular Representation Synthesis Framework for Homogeneous Azeotropic Distillation. AIChE J. 45, 1701-1720.
42 Ismail, S. R., P. Proios, E. N. Pistikopoulos (2001) Modular Synthesis Framework for Combined Separation/Reactive Systems. AIChE J. 47, 629-650.
43 Jaksland, C., R. Gani, K. Lien (1995) Separation Process Design and Synthesis Based on Thermodynamic Insights. Chem. Eng. Sci. 50, 511-530.
44 Jobson, M., D. Hildebrandt, D. Glasser (1995) Attainable Regions for the Vapour-Liquid Separation of Homogeneous Ternary Mixtures. Chem. Eng. J. 59(1), 51-70.
45 Julka, V., M. F. Doherty (1990) Geometric Behaviour and Minimum Flows for Nonideal Multicomponent Distillation. Chem. Eng. Sci. 45, 1801-1822.
46 Kakhu, A. I., J. R. Flower (1988) Synthesising Heat-Integrated Distillation Sequences Using Mixed Integer Programming. Chem. Eng. Res. Des. 66, 241-254.
47 Knight, J. R., M. F. Doherty (1986) Design and Synthesis of Homogeneous Azeotropic Distillations. 5. Columns with Nonnegligible Heat Effects. Ind. Eng. Chem. Fundam. 25, 279-289.
48 Knapp, J. P., M. F. Doherty (1994) Minimum Entrainer Flows for Extractive Distillation: a Bifurcation Theoretic Approach. AIChE J. 40(2), 243-268.
49 Kocis, G. R., I. E. Grossmann (1989) Computational Experience with DICOPT Solving MINLP Problems in Process Systems Engineering. Comput. Chem. Eng. 13, 307-315.
50 Kravanja, Z., I. E. Grossmann (1993) PROSYN - An Automated Topology and Parameter Process Synthesizer. Comput. Chem. Eng. 17, S87-S94.
51 Kravanja, Z., I. E. Grossmann (1994) New Developments and Capabilities in PROSYN - An Automated Topology and Parameter Process Synthesizer. Comput. Chem. Eng. 18, 1097-1114.
52 Levy, S. G., D. B. Van Dongen, M. F. Doherty (1985) Design and Synthesis of Homogeneous Azeotropic Distillations. 2. Minimum Reflux Calculations for Nonideal and Azeotropic Columns. Ind. Eng. Chem. Fundam. 24, 463-474.
53 Linnhoff, B., H. Dunford, R. Smith (1983) Heat Integration of Distillation Columns into Overall Processes. Chem. Eng. Sci. 35, 1175-1188.
54 Mei, D. (1995) An SQP Based MINLP Algorithm and its Application to Synthesis of Distillation Systems. Dissertation, Imperial College of Science, Technology and Medicine, London.
55 Mix, T. W., J. S. Dweck, M. Weiberg, R. C. Armstrong (1978) Energy Conservation in Distillation. Chem. Eng. Prog. 74(4), 49-55.
56 Morari, M., C. Faith (1980) The Synthesis of Distillation Trains with Heat Integration. AIChE J. 26, 916-928.
57 Nishida, N., G. Stephanopoulos, A. W. Westerberg (1981) A Review of Process Synthesis. AIChE J. 27, 321-351.
58 Novak, Z., Z. Kravanja, I. E. Grossmann (1996) Simultaneous Synthesis of Distillation Sequences in Overall Process Schemes Using an Improved MINLP Approach. Comput. Chem. Eng. 20, 1425-1440.
59 Papalexandri, K. P., E. N. Pistikopoulos (1994) A Multiperiod MINLP Model for the Synthesis of Flexible Heat and Mass Exchange Networks. Comput. Chem. Eng. 18, 1125-1139.
60 Papalexandri, K. P., E. N. Pistikopoulos (1996) Generalized Modular Representation Framework for Process Synthesis. AIChE J. 42(4), 1010-1032.
61 Papoulias, S., I. E. Grossmann (1983) A Structural Optimization Approach in Process Synthesis. I: Utility Systems, II: Heat Recovery Networks, III: Total Processing Systems. Comput. Chem. Eng. 7(6), 695-734.
62 Paules, G. E., C. A. Floudas (1988) A Mixed Integer Nonlinear Programming Formulation for the Synthesis of Heat Integrated Distillation Sequences. Comput. Chem. Eng. 12(6), 531-546.
63 Proios, P. (2004) Generalized Modular Framework for Distillation Column Synthesis. Dissertation, Imperial College London, University of London.
64 Raman, R., I. E. Grossmann (1991) Relation Between MILP Modeling and Logical Inference for Chemical Process Synthesis. Comput. Chem. Eng. 15(2), 73-84.
65 Raman, R., I. E. Grossmann (1993) Symbolic Integration of Logic in Mixed Integer Linear Programming Techniques for Process Synthesis. Comput. Chem. Eng. 17, 909-927.
66 Raman, R., I. E. Grossmann (1994) Modelling and Computational Techniques for Logic Based Integer Programming. Comput. Chem. Eng. 18(7), 563-578.
67 Rathore, R., K. van Wormer, G. Powers (1974) Synthesis Strategies for Multicomponent Separation Systems with Energy Integration. AIChE J. 20, 491-502.
68 Rooks, R. E., V. Julka, M. F. Doherty, M. F. Malone (1998) Structure of Distillation Regions for Multicomponent Azeotropic Mixtures. AIChE J. 44(6), 1382-1391.
69 Safrit, B. T., A. W. Westerberg (1997) Algorithm for Generating the Distillation Regions for Azeotropic Multicomponent Mixtures. Ind. Eng. Chem. Res. 36(5), 1827-1840.
70 Sargent, R. W. H. (1998) A Functional Approach to Process Synthesis and its Application to Distillation Systems. Comput. Chem. Eng. 22(1-2), 31-45.
71 Sargent, R. W. H., K. Gaminibandara (1976) Optimal Design of Plate Distillation Columns. In: Optimization in Action. Academic Press, London.
72 Seader, J. D., A. W. Westerberg (1977) A Combined Heuristic and Evolutionary Strategy for Synthesis of Simple Separation Sequences. AIChE J. 23, 951-954.
73 Shah, P. B., A. C. Kokossis (2002) Synthesis Framework for the Optimization of Complex Distillation Systems. AIChE J. 48, 527-550.
74 Smith, E. M. (1996) On the Optimal Design of Continuous Processes. Dissertation, Imperial College of Science, Technology and Medicine, London.
75 Smith, E. M. B., C. C. Pantelides (1995) Design of Reaction/Separation Networks Using Detailed Models. Comput. Chem. Eng. 19, S83-S88.
76 Sophos, A., G. Stephanopoulos, M. Morari (1978) Synthesis of Optimum Distillation Sequences with Heat Integration Schemes. Paper 42d, 71st Annual AIChE Meeting, Miami, FL.
77 Sorel, E. (1889) Sur la Rectification de l'Alcool. Comptes Rendus CVIII, 1128.
78 Stephanopoulos, G., A. W. Westerberg (1976) Studies in Process Synthesis, II. Evolutionary Synthesis of Optimal Process Flowsheets. Chem. Eng. Sci. 31, 195-204.
79 Stichlmair, J., J. Herguijuela (1992) Separation Regions and Processes of Zeotropic and Azeotropic Ternary Distillation. AIChE J. 38(10), 1523-1535.
80 Stichlmair, J., H. Offers, R. W. Potthoff (1993) Minimum Reflux and Minimum Reboil in Ternary Distillation. Ind. Eng. Chem. Res. 32(10), 2438-2445.
81 Tedder, D. W., D. F. Rudd (1978) Parametric Studies in Industrial Distillation: Part 1. Design Comparisons. AIChE J. 24(2), 303-315.
82 Thompson, R. W., C. J. King (1972) Systematic Synthesis of Separation Schemes. AIChE J. 18, 941-948.
83 Thong, D. Y.-C., G. Liu, M. Jobson, R. Smith (2004) Synthesis of Distillation Sequences for Separating Multicomponent Azeotropic Mixtures. Chem. Eng. Process. 43, 239-250.
84 Triantafyllou, C., R. Smith (1992) The Design and Optimisation of Fully Thermally Coupled Distillation Columns. Trans. Inst. Chem. Eng. 70, 118-132.
85 Viswanathan, J., I. E. Grossmann (1993) Optimal Feed Locations and Number of Trays for Distillation Columns with Multiple Feeds. Ind. Eng. Chem. Res. 32, 2942-2949.
86 Wahnschafft, O. M., J. W. Koehler, E. Blass, A. W. Westerberg (1992) The Product Composition Regions of Single-Feed Azeotropic Distillation Columns. Ind. Eng. Chem. Res. 31, 2345-2362.
87 Wehe, R. R., A. W. Westerberg (1987) An Algorithmic Procedure for the Synthesis of Distillation Sequences with Bypass. Comput. Chem. Eng. 11(6), 619-627.
88 Westerberg, A. W. (1980) Review of Process Synthesis. In: R. G. Squires, G. V. Reklaitis (eds.) Computer Applications to Chemical Engineering, pp. 53-87, ACS Symposium Series, No. 124.
89 Westerberg, A. W. (2004) A Retrospective on Design and Process Synthesis. Comput. Chem. Eng. 28, 2192-2208.
90 Yee, T. F., I. E. Grossmann (1990) Simultaneous Optimization Models for Heat Integration. III. Heat Exchanger Network Synthesis. Comput. Chem. Eng. 14, 1165-1184.
91 Yee, T. F., I. E. Grossmann, Z. Kravanja (1990) Simultaneous Optimization Models for Heat Integration. I. Area and Energy Targeting and Modelling of Multistream Exchangers. Comput. Chem. Eng. 14, 1151-1164.
92 Yeomans, H., I. E. Grossmann (1999a) A Systematic Modelling Framework of Superstructure Optimization in Process Synthesis. Comput. Chem. Eng. 23, 709-731.
93 Yeomans, H., I. E. Grossmann (1999b) Nonlinear Disjunctive Programming Models for the Synthesis of Heat Integrated Distillation Sequences. Comput. Chem. Eng. 23, 1135-1151.
94 Yeomans, H., I. E. Grossmann (2000a) Disjunctive Programming Models for the Optimal Design of Distillation Columns and Separation Sequences. Ind. Eng. Chem. Res. 39, 1637-1648.
95 Yeomans, H., I. E. Grossmann (2000b) Optimal Design of Complex Distillation Columns Using Rigorous Tray-by-Tray Disjunctive Programming Models. Ind. Eng. Chem. Res. 39, 4326-4335.
2 Process Intensification
Patrick Linke, Antonis Kokossis, and Alberto Alva-Argaez
2.1 Introduction
Since its emergence in the early 1980s, process intensification (PI) has received significant attention from the chemical and process engineering research community. PI promises novel, more efficient technologies with the potential to revolutionize the process industries. The term process intensification is associated mainly with more efficient and compact processing equipment that can potentially replace large and inefficient units commonly used in chemical processing, but it also includes methodologies, such as process synthesis methods, that enable the systematic development of efficient processing units. To date, numerous technologies have emerged from PI research activities and a number of commercial-scale applications have been implemented successfully. In a recent survey of industrial and academic experts, PI was highlighted as one of the key enabling technologies required for the sustainable development of the chemical industry (Tsoka et al. 2004). One of the earliest and most prominent applications linked to process intensification has been the redesign of the Eastman Chemicals methyl acetate process (Siirola 1995). The original process was highly complex and consisted of more than twenty pieces of equipment. Through integration of reaction and separation processing tasks into multifunctional reaction/reactive separation equipment it was possible to devise an intensified process consisting of only three pieces of equipment. Since the success of the Eastman process, numerous other PI technologies have been developed and applied. Prominent examples (Stankiewicz and Drinkenburg 2004) include efficient reaction equipment such as spinning disk reactors and monolithic reactors, equipment for nonreactive operations such as compact heat exchangers, multifunctional reactor concepts such as reactive separations, hybrid separations such as membrane distillation, alternative energy sources such as ultrasound, and many other technologies. The numerous technological breakthroughs have led to comprehensive PI dissemination efforts for academic research as well as commercial applications, including
international conferences, symposia, and networks. Most recently, a textbook by leading academic and industrial PI scientists and engineers has been published to provide a good overview of the technology developments to date (Stankiewicz and Moulijn 2004). PI developments have their origins in commodity chemicals due to the strength of that industry sector in the 1980s as well as the large economic impact of even small efficiency gains. As a result of the evolution of the chemical industries, PI has found applications in other sectors including fine chemicals and bioprocessing. This has resulted in a new definition of PI, which is now widely understood as the development of novel equipment, processing techniques, and process development methods for chemical and biochemical systems (Stankiewicz and Drinkenburg 2004). Process intensification plays an important role in the chemical industry's attempt to develop into a sustainable industry that is both economically and ecologically viable. The following benefits associated with process intensification are important factors for sustainable development (Siirola 1995):
- The development of more profitable processes that are cheaper to run, require less energy, and produce less waste and by-products. Process intensification is linked to reduced costs for land, equipment, raw materials, utilities, and waste processing.
- The acceleration of the process development cycle. This is particularly important in industry sectors where time to market is a crucial factor, such as fine chemicals or pharmaceuticals.
- The development of greener and safer processes, which is closely linked with a good company image. Such a positive image is crucial for enterprises in the chemical process industries in order to remain viable businesses.
Most of the reported developments have been realized on an ad hoc basis, drawing on design engineers' intuition and expertise. Only more recently have the fundamental principles underlying process intensification technologies been called upon in the form of systematic design procedures. The development of reactive distillation processes, such as the Eastman process mentioned above, is a typical example of an area where initial success stories resulted from intuition and where systematic design methods have subsequently been developed that can guide the designer in the development of such a system (Ciric and Gu 1994; Cardoso, Salcedo and Feyo de Azevedo 2000; Hauan, Westerberg and Lien 2000; Hauan, Ciric and Westerberg 2000; Ismail, Pistikopoulos and Papalexandri 1999; Okasinski and Doherty 1998; Linke and Kokossis 2003a). Such methods are generally computer-aided design techniques that screen numerous novel and existing design options and assist the design engineer in the identification of promising process intensification routes. However, such methods are still emerging and very few tools are currently commercially available. Computer-aided tools have great potential to accelerate process intensification technology development. They enable the systematic screening and scoping of large numbers of alternative processing options and can identify novel options for phenomena exploitation that may lead to higher efficiencies. Such tools provide the basis for systematic approaches to novelty in process intensification and have the potential to
identify processing options that can easily be missed in design activities that rely on intuition and past experience. The next section provides an overview of current process intensification technologies. In the remainder of this chapter we will present a number of recently developed systematic computer-aided process intensification methods.
2.2 Process Intensification Technologies
A large number of process intensification technologies have been developed. According to Stankiewicz and Drinkenburg (2004), the existing technologies can be broadly classified into process-intensifying equipment (hardware) and process-intensifying methods (software). PI applications often result from a combination of equipment and methods. For instance, the application of methods often leads to the development of novel equipment. In the same vein, novel apparatuses often make use of new processing methods. The following sections provide a brief overview of important process intensification methods and equipment and discuss industrial applications. A more detailed description of PI technologies can be found elsewhere (Stankiewicz and Moulijn 2004).
2.2.1 Process Intensification Equipment
Process-intensifying equipment (hardware) can be broadly classified into equipment for reaction systems and equipment for nonreactive systems (Stankiewicz and Drinkenburg 2004). Examples of reaction equipment include spinning disk reactors, static mixer reactors, monolithic reactors, and microreactors. Static mixers, compact heat exchangers, rotating packed beds, and centrifugal adsorbers are examples of equipment for nonreactive operations. Process-intensifying equipment generally aims at improving crucial processing characteristics in terms of mixing, heat transfer, and mass transfer over those realizable in conventional equipment. Such equipment is generally smaller than its conventional counterparts, albeit offering improved processing performance. A number of PI equipment developments are illustrated below.

Spinning Disk Reactors
The fluid dynamics of multiphase contact are dominated by surface forces, so that only very small interfacial areas are developed. This results in a lack of countercurrent interfacial motion in conventional multiphase contacting equipment, which in turn causes low-intensity operation in terms of reaction, mass, and heat transfer. The processing characteristics can be significantly enhanced through the application of a high-acceleration field. In the spinning disk reactor (SDR) (Ramshaw 2004), such an
acceleration field is established within a rotor that receives and discharges the working fluid. In general, the spinning disk device achieves highly efficient gas-liquid contacting from which numerous applications benefit, such as evaporators, aerators/desorbers, and reactors. The device is very compact and has been demonstrated to allow very good control over multiphase reactions, even in highly viscous systems.

Multifunctional Heat Exchangers
Multifunctional heat exchangers combine heat transfer with other phenomena such as reaction, separation, or mixing in a single piece of equipment. A combination of heat transfer with one or more of these phenomena allows one to achieve better process performance or control for many systems. A reactor heat exchanger is a typical example of such a multifunctional heat exchanger (Thonon and Tochon 2004). Integrating the reactor within the heat exchanger enables better heat management, which can significantly improve process yields and selectivities as well as reduce process energy requirements.

Microreaction Technology
Microreaction technology (Ehrfeld 2004) achieves intense mixing and very high heat transfer rates by significantly decreasing the characteristic dimensions of a processing system to the scale of micrometers. The availability of high transfer rates for heat and mass in conjunction with very small material holdups allows good control over reaction systems, i.e., over reaction yields, selectivities, and energy management. For instance, it is possible to operate highly exothermic reactions at isothermal conditions using microreactors. Apart from more intense processing, microreaction technology offers advantages in process control, because the starting and boundary conditions for reactors and unit operations can be adjusted precisely and are easily scaled up using parallelization.
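The intensification effect of shrinking the characteristic dimension can be made concrete with a rough back-of-the-envelope calculation. The sketch below is illustrative only: it assumes a circular channel, fully developed laminar flow with a constant Nusselt number, and a water-like fluid; none of these numbers are data from this chapter.

```python
# Illustrative scaling only (assumed numbers): for fully developed laminar
# flow the Nusselt number is roughly constant, so the heat transfer
# coefficient h = Nu*k/d grows as the channel diameter d shrinks, while the
# surface-to-volume ratio of a circular channel is 4/d.

NU_LAMINAR = 3.66   # Nusselt number, laminar flow, constant wall temperature
K_FLUID = 0.6       # W/(m K), thermal conductivity assumed for a water-like fluid

def channel_metrics(d_m):
    """Return (surface-to-volume ratio in 1/m, heat transfer coefficient in W/(m2 K))."""
    area_to_volume = 4.0 / d_m          # (pi*d*L) / (pi*d**2/4 * L)
    h = NU_LAMINAR * K_FLUID / d_m
    return area_to_volume, h

for d_mm in (10.0, 1.0, 0.1):           # conventional tube -> millichannel -> microchannel
    a_v, h = channel_metrics(d_mm * 1e-3)
    print(f"d = {d_mm:5.1f} mm   A/V = {a_v:8.0f} 1/m   h = {h:8.0f} W/(m2 K)")
```

Reducing the channel diameter by two orders of magnitude thus raises both the specific transfer area and the local heat transfer coefficient by roughly the same factor, which is the basis of the tight temperature control mentioned above.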
Structured Catalysts

Structured catalysts, such as monoliths, offer a number of advantages over nonstructured catalysts in terms of high rates, high selectivities, low energy consumption, and easy scale-up (Moulijn, Kapteijn and Stankiewicz 2004). Monolithic catalysts are metallic or nonmetallic bodies that contain large numbers of channels of defined cross-sectional shapes and sizes (Stankiewicz and Drinkenburg 2004). They cause very low pressure drops, offer high surface areas per reactor volume, allow the use of very small catalyst particles, such as zeolites, and exhibit very high catalytic activities due to short diffusion paths in the thin washcoat layer. The process-intensifying benefits of a structured reactor result mainly from the possibility to decouple reaction kinetics, transport phenomena, and hydrodynamics so that each can be optimized independently in order to achieve very good reactor performance (Moulijn, Kapteijn and Stankiewicz 2004).
2.2.2 Process Intensification Methods
According to Stankiewicz and Drinkenburg (2004), process-intensifying methods (software) include multifunctional reactors, hybrid separations, the use of alternative energy sources, and other computer-aided methods. A number of multifunctional reactor concepts have been developed, including reactive separations, heat-integrated reactors, and fuel cells. Hybrid separations generally combine two or more types of unit operations into a single system and include processes such as membrane distillation and membrane adsorption, amongst others. Alternative energy sources are used in process intensification to allow a better exploitation of chemical and physical phenomena. Such methods include the application of centrifugal fields, ultrasound, microwaves, and electric fields. Other process-intensifying methods that do not fall into the above categories have been classed as "other methods" by Stankiewicz and Drinkenburg (2004). This class contains dynamic strategies for reactor operation, but also computer-aided design methods, such as process synthesis tools, that allow the systematic identification of promising design options for efficient processing. We explain a number of important process intensification methods below.

Reactive Separations
Reactive separations are classes of reactors that facilitate component separations in the reaction zones with the aim of improving reaction yields and selectivities or facilitating difficult separations. Examples of reactive separation processes include reactive extraction, where a solvent is used to transfer components in and out of the reaction zone, reactive distillation, where vapor is used as the separating agent, and membrane reactors, where the permeability and selectivity of a membrane are exploited to selectively add and remove components from the reaction zone. The most prominent success of reactive separations is the Eastman methyl acetate reactive distillation process mentioned in the introduction.

Hybrid Separations
Hybrid separations are processing methods that exploit the synergies between different separation techniques in a single operation. Prominent examples of hybrid separations include extractive distillation, adsorptive distillation, membrane distillation, and membrane extraction. Hybrid separations are generally developed for systems where the performance of a unit operation is inefficient or problematic. For instance, distillation is an inefficient process for the separation of close-boiling systems. The introduction of a suitable solvent into the column, i.e., extractive distillation, allows one to increase the driving forces in the column and thus improve the efficiency of the operation. The viability of a hybrid separation process depends strongly on the system under consideration. More information on the selection of hybrid separation processes can be found in Stankiewicz (2004).
Process Synthesis
Process synthesis refers to the systematic development of process flow sheets through "the automatic generation of design alternatives and the selection of the better ones based on incomplete information" (Westerberg 1989). Process synthesis methods support the engineer in finding novel, improved solutions to process design problems. The synthesis objectives aim at finding those processing options that enable the production of desired chemicals in the most cost-effective and environmentally benign manner possible. A number of process synthesis tools have been developed that can systematically determine the most promising process designs for a number of systems including reaction systems, reactive separations, hybrid separations, as well as water and energy management. Such methods will be described in detail later.
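As a minimal illustration of this "generate alternatives, then select" idea, the sketch below enumerates a tiny, invented library of candidate units, applies toy structural feasibility rules, and picks the cheapest feasible structure. All unit names and cost figures are hypothetical; a real synthesis tool would embed the alternatives in a single superstructure model and use an MINLP or stochastic optimizer rather than brute-force enumeration.

```python
# Toy process synthesis by enumeration: generate all structures from a small
# hypothetical unit library, keep the feasible ones, and select the cheapest.

from itertools import product

# hypothetical unit library: name -> (annualized capital cost, operating cost)
UNITS = {"reactor_A": (120.0, 40.0), "reactor_B": (90.0, 65.0),
         "column":    (150.0, 80.0), "membrane":  (60.0, 30.0)}

def is_feasible(selection):
    """Toy structural constraints: exactly one reactor and at least one separator."""
    reactors = selection["reactor_A"] + selection["reactor_B"]
    separators = selection["column"] + selection["membrane"]
    return reactors == 1 and separators >= 1

def total_cost(selection):
    return sum(cap + op for unit, (cap, op) in UNITS.items() if selection[unit])

best = None
for choice in product([0, 1], repeat=len(UNITS)):      # enumerate 2**4 candidate structures
    selection = dict(zip(UNITS, choice))
    if is_feasible(selection):
        cost = total_cost(selection)
        if best is None or cost < best[0]:
            best = (cost, selection)

print("cheapest feasible structure:", best)
```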
2.2.3 Process Intensification in Industrial Practice
There are a number of incentives for enterprises in the chemical industries to adopt process intensification technologies. Through the intensification of processes, step-by-step improvements can be realized that offer a strong possibility of fulfilling business requirements that are becoming more demanding in the current economic climate (Bakker 2004). A strong driver for the introduction of process intensification technologies is the need to remain competitive in expanding markets by achieving and maintaining the best low-cost position. Other strong drivers include the need to meet tightening legislative and environmental requirements. Process intensification technologies can positively respond to these drivers (Bakker 2004) as they offer numerous related advantages such as energy savings, reductions in space requirements, reductions in the number of required processing steps, reduced emissions and waste, and more flexible feedstock specifications. Bakker (2004) describes the possibilities for introducing process-intensifying technologies into an industrial process taking into account six largely sequential process development phases:
- New ideas for a process. The development of a chemical path leading to a desired product defines the core of the process. There are typically a number of possible pathways that can be followed and it is important to consider PI technologies in order to determine the most intensified process option. This phase has great potential for introducing process intensification options because conceptual process changes are relatively cheap. At the end of this development phase, the chemical pathway is identified and a number of process parameters, such as concentrations and temperatures, have been roughly specified.
- Determining the process chemistry. This phase is concerned with identifying the optimal design and conditions for the reaction system so as to identify the intrinsically optimal route that produces the least by-products and waste. In this phase, the process design is also decided and care needs to be taken to ensure that the overall process design achieves optimal performance and can easily be implemented. Generally, the consideration of process changes in this phase is still relatively cheap.
- Pilot plant studies. This phase is typically concerned with optimizing performance on an equipment level in order to overcome heat and mass transfer limitations, for instance, by making use of process intensification equipment. The testing of PI options is still relatively cheap in this phase.
- Plant design. At this point, the process design is fixed and it is difficult to introduce major changes as this would require revisiting the previous phases. It is important that the benefits of process intensification technologies have been identified before this phase.
- Start-up. At start-up, the process system is fully specified and there is no more room to introduce process intensification options.
- Debottlenecking and trouble-shooting. Once the process is operating, it is difficult to make significant changes as this would incur downtimes and loss of production. This is only viable if step changes in efficiencies can be gained from a process change. The application of process intensification technologies may offer such savings and their economic benefits should be identified in order to decide on high-impact retrofit projects.
Clearly, the quick evaluation of various processing scenarios is very important in order to identify the most intensified process for a given product. Without systematic methods, such a screening is likely to be incomplete and promising process options are likely to be missed. Computer-aided methods offer great potential to realize quick and systematic process screening so as to minimize the risk of choosing underperforming processing options.
2.3 Computer-Aided Methods for Process Intensification
The ability to process large amounts of data in modern computers has opened up the possibility of developing practical and user-friendly software tools to assist process engineers and process designers in their decision-making tasks. In the past, the lack of powerful computers and other data processing tools resulted in design tools that drew heavily on past design experience. Experience from previously successful designs, encapsulated in heuristics, was the basis for new process developments. However, the application of heuristics very often created contradictions and confusion, and good, novel design options were often missed. With today's vast computing power, it is possible to do away with heuristic rules and develop systematic methods that are not limited to the reproduction of past designs. For widespread use of such computer-aided process intensification methods to be reached, the research community needs to deliver design methodologies that can be translated into usable software. In this section, we will describe a number of available methodologies and models that could become part of such a software toolbox.
Systematic computer-aided process intensification methods are required to generate a set of feasible process design alternatives and to select the most efficient configurations from the set. Ideally, such decision support tools should allow a systematic determination of the most promising process designs, which closely approach the performance limits of the system and meet the business constraints, out of the set of all feasible alternative structural and operational process design options. Due to the complexity of the overall problem, applications of systematic decision-making technologies have addressed closely defined design subproblems, such as reaction, heat integration, and separation systems. These various subproblems exhibit a variety of challenging features. For instance, in reactor optimization these arise mainly from the highly complex chemical and physical models that need to be processed. In energy systems such as heat exchanger networks, on the other hand, the challenge is of a combinatorial nature, as vast numbers of different feasible design options exist. Separation system design presents examples of intermediate complexity in terms of model complexity. Despite the challenges in solving the individual design subproblems, the decomposition of the overall design problem into such subproblems makes numerous limiting assumptions that restrict the process choices that can be considered. The conventional approach to process design has similarities with a bow tie model: one starts broad and collects all available process information. A structure, e.g., a tree diagram, is used to create a format for the required information. In this stage, all relevant factors that are anticipated to have an influence on the process are collected. Boundaries of the process are generally set early to reduce rework later on. Clearly, such an approach limits the degree of design novelty that can be achieved. It must be recognized that the system boundary is mostly an arbitrary construct defined by the interest and the level of ability and/or authority of the participating actors. From the above discussion, it is clear that process alternatives are commonly generated based on intuition and case-based reasoning. This provides a strong chance that promising design candidates are not arrived at and that novelty is not automatically accounted for in the design process. This risk of selecting underperforming designs can be reduced through systematic approaches that allow one to capture all possible design alternatives in a process representation and screen for the design that delivers the best possible performance for the specified performance measure. The following sections describe such methodologies for reaction system design, integrated reaction-separation systems, separation systems, solvent-process systems, and integrated water and wastewater systems.
2.3.1 Reaction Systems
The reactor is the part of the process where value is added by converting raw materials into valuable products. The intensification of this unit is crucial for profitability. Reactor design is particularly difficult since reaction, heat, and mass transfer phenomena need to be exploited simultaneously. Reaction and mass transfer models
tend to be highly complex and difficult to process numerically. There are a number of decisions that have to be made in reactor design, such as the selection of appropriate mixing and contacting patterns, the choice of optimal reaction volumes, decisions on temperature policies, and the identification of the best feeding, recycling, and bypassing strategies. In order to identify the reactor design options that achieve the best possible performance for a given system, this information needs to be analyzed simultaneously, as one decision tends to impact the others. Such an activity is generally beyond the capabilities of a design engineer without the assistance of systematic design methods. As a result, the designer commonly employs textbook knowledge, heuristics, empiricism, past experience and qualitative reasoning on the basis of analogies with similar systems and case studies, which tend to be insufficient to guide the development of high-performance reactor designs for systems of industrial complexity. This design practice results in a lack of innovation, quality, and efficiency in many industrial designs. The complexities of the reactor design problem make the development of systematic computer-aided design methods a challenging task. From a practical viewpoint, such methods need to be applicable to general reaction systems and capable of supporting decision-making to enable the designer to quickly identify high-performance reactor designs. Over the past two decades, a number of such methods have evolved that can provide performance targets for the reaction system under consideration as well as guide design suggestions in terms of the best mixing, feeding, recycling, and operational policies at a conceptual level. In this section, we describe recent developments in reactor network synthesis that provide systematic decision support for various reactor design aspects such that the system's performance is maximized with respect to the design objectives employed. Initial efforts in reactor network synthesis focused on the development of systematic design methods for single-phase systems. More recent efforts have addressed multiphase systems. We will review the single-phase developments first, in order to give an overview of the different approaches followed. Approaches to conceptual reactor design can be broadly divided into optimization-based and graphical methods. Optimization-based approaches make use of superstructure formulations. Reactor network superstructures include different possible structural design options in a single model, which is then searched in order to determine the optimal candidates. Achenie and Biegler (1986; 1988; 1990) were the first to propose the optimization of comprehensive reactor network superstructures combining options arising from combinations of axial dispersion models and recycle-PFR representations. The structures were searched using deterministic optimization techniques in the form of nonlinear programming (NLP) methods to identify the most promising design candidates embedded in the superstructures. Later, Kokossis and Floudas (1990; 1994) introduced the idea of a reactor network superstructure modeled as a mixed integer nonlinear programming (MINLP) formulation. They proposed superstructures to account for all possible interconnections amongst the generic structural units of ideal CSTRs and PFRs (represented by CSTR cascades) with the aim to screen for design options and estimate the limiting performance of the reaction system. Schweiger and Floudas (1999) later replaced the PFR represen-
tation by rigorous DSR models in order to avoid the inaccuracies introduced by the use of CSTR cascades. The above methods all made use of deterministic optimization approaches to identify locally optimal designs. Marcoulaki and Kokossis (1999) have recently presented the application of a global search strategy in the form of simulated annealing. Their approach allows one to establish solid performance limits for the reaction system and to systematically develop designs that can achieve the targets. We will explain this approach in detail later. In a parallel effort to the development of optimization-based approaches, Glasser, Hildebrandt and Crowe (1987) presented a graphical procedure, the attainable region (AR) method, which has its roots in the insightful methods originally proposed by Horn (1964). This method allows one to identify the reactor network with maximum performance in terms of yield, selectivity, or conversion, which can be located on the boundary of the AR in the form of DSR and CSTR cascades with bypasses. A number of extensions that generalize the AR method have been published (Hildebrandt, Glasser and Crowe 1990; Hildebrandt and Glasser 1990; Glasser, Hildebrandt and Glasser 1992; Glasser, Hildebrandt and Godorr 1994; Glasser and Hildebrandt 1997; Feinberg and Hildebrandt 1997). Though very useful for single-phase systems in lower dimensions, in higher dimensions the developments face both graphical and implementation problems and often lead to complex designs with multiple DSR and CSTR units and complex feeding and bypassing strategies. In an attempt to exploit insights gained from the AR concept to enhance superstructure optimization, Biegler and coworkers established rules for an efficient formulation and optimal search of superstructures. Balakrishna and Biegler (1992a,b) and Lakshmanan and Biegler (1996, 1997) presented NLP and MINLP formulations of this approach as well as a number of applications. From a practical viewpoint, a useful approach to reactor network synthesis should be able to identify the performance limits of the reaction system and to identify designs that can approach these limits. Targets can guide design evaluation in light of the ultimate performance possible for a given system and are also very useful to identify retrofit projects as they provide the incentives associated with such a project. Optimization-based methods can fulfill both of these requirements, provided confidence can be established in the optimization results. If local search strategies are deployed, there is no reason to be confident that the obtained solution cannot be substantially improved. It is therefore important that global search strategies are adopted, as in the method proposed by Marcoulaki and Kokossis (1999). Their application of stochastic optimization enables confidence in the optimization results, can afford particularly nonlinear reactor models, and is restricted neither by the dimensionality nor the size of the problem. Marcoulaki and Kokossis (1999) optimize the rich and inclusive superstructures formulated by Kokossis and Floudas (1990, 1994) to identify performance targets and to extract numerous design candidates that approach the targets. A reactor network superstructure containing three reactor units, each of which allows choices between CSTRs and PFRs with side feeding, is shown in Fig. 2.1. The superstructures generally feature complete connectivities between the reactor units, between the feed streams and the reactor units, and between the reactor units and the product.
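The geometric idea behind the attainable region method discussed above can be illustrated with a small, self-contained calculation: trace the PFR trajectory and the CSTR locus in (cA, cB) concentration space and compare the best intermediate yield each ideal reactor can reach. The van de Vusse-type kinetics and rate constants below are assumed for illustration only and are not taken from the studies cited above.

```python
# Sketch of AR-style screening for A -> B -> C with a parallel 2A -> D side
# reaction: compare the maximum concentration of the intermediate B reachable
# in an ideal PFR and in an ideal CSTR. Kinetic constants are assumed.

import math

K1, K2, K3 = 10.0, 1.0, 0.5      # 1/s, 1/s, L/(mol s)  (illustrative values)
CA0 = 1.0                        # mol/L, feed concentration of A

def pfr_trajectory(d_tau=1e-4, tau_max=2.0):
    """Explicit Euler integration of the PFR species balances along residence time."""
    ca, cb, points = CA0, 0.0, []
    for _ in range(int(tau_max / d_tau)):
        ra = -K1 * ca - 2.0 * K3 * ca * ca
        rb = K1 * ca - K2 * cb
        ca, cb = ca + d_tau * ra, cb + d_tau * rb
        points.append((ca, cb))
    return points

def cstr_locus(n=2000, tau_max=2.0):
    """Steady-state CSTR compositions over a range of residence times."""
    points = []
    for i in range(1, n + 1):
        tau = tau_max * i / n
        a, b = 2.0 * K3 * tau, 1.0 + K1 * tau          # quadratic in cA from the A balance
        ca = (-b + math.sqrt(b * b + 4.0 * a * CA0)) / (2.0 * a)
        cb = K1 * tau * ca / (1.0 + K2 * tau)          # B balance
        points.append((ca, cb))
    return points

best_pfr = max(cb for _, cb in pfr_trajectory())
best_cstr = max(cb for _, cb in cstr_locus())
print(f"max cB: PFR {best_pfr:.3f} mol/L, CSTR {best_cstr:.3f} mol/L")
```

Plotting the two point sets would reproduce, in two dimensions, the kind of boundary from which AR constructions identify the best reactor structure; the optimization-based methods discussed next search for the same limits without relying on such geometric constructions.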
Structural restrictions that may result from practical consider-
ations can be introduced as constraints for the optimal search. The stochastic search of the superstructures using the simulated annealing algorithm has been observed to systematically converge to the globally optimal domain, i.e., the performance targets of the system, and to produce numerous design alternatives with performances close to these targets for the numerous systems studied (Marcoulaki and Kokossis 1999). The stochastic optimization-based reactor synthesis approach has been extended to multiphase systems to enhance its industrial relevance. Multiphase reactors are, besides fixed-bed reactors, the most frequently used reactors in chemical processes. A multiphase reactor network synthesis method needs to handle additional degrees of freedom that arise from the presence of multiple fluid phases in the system. When formulated as a superstructure optimization problem, the problem poses additional challenges to capture a significantly larger number of possible network configurations and to handle models for mass transfer and hydrodynamic effects. Mehta and Kokossis (1997) have proposed a compact representation of design options for multiphase reactors in the form of generic multiphase reactor units comprised of compartments and shadow compartments. A reactor module contains a single reactor compartment in each phase, which contains a single reactor unit and can exist as either a CSTR or a PFR with side feeding. Each reactor compartment features diffusional mass transfer links with its shadow compartments in the different phases of the reactor unit. By combining different mixing and contacting patterns in compartments of different phases, a single multiphase reactor unit can represent all the conventional industrial reactors such as bubble columns, cocurrent and countercurrent beds, and agitated reactors. By combining a number of such multiphase reactor units and their mixing and contacting options, it is possible to derive novel reactor configurations that can improve or enhance the performance of a multiphase reaction process. When embedded into a superstructure, a model can be formulated that contains all possible novel and conventional multiphase reactor design options from which the optimal candidates can be extracted using optimization techniques. Mehta and Kokossis (1997) adopted the stochastic optimization scheme proposed for single-phase systems (Marcoulaki and Kokossis 1999). The resulting methodology enables the development of performance targets and design options for multiphase systems.
Figure 2.1 Single-phase reactor network superstructure representation.
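The following sketch outlines, in simplified form, how a superstructure such as the one in Fig. 2.1 can be searched with simulated annealing: a state carries the structural choices (active units, CSTR or PFR type) and a continuous operating variable (relative volume), random moves perturb one decision at a time, and the Metropolis rule with a cooling schedule decides acceptance. The evaluate() function is only a placeholder surrogate; in the methods of Kokossis, Floudas, Marcoulaki and coworkers it would be a full simulation of the candidate network.

```python
# Structural sketch of simulated annealing over a reactor network
# superstructure. The objective function is a stand-in, not a reactor model.

import math
import random

random.seed(0)
N_UNITS = 3
UNIT_TYPES = ("CSTR", "PFR")

def random_state():
    return [{"active": True, "type": random.choice(UNIT_TYPES),
             "volume": random.uniform(0.1, 1.0)} for _ in range(N_UNITS)]

def move(state):
    """Perturb one structural or operating decision of a copied state."""
    new = [dict(u) for u in state]
    unit = random.choice(new)
    kind = random.choice(("toggle", "retype", "resize"))
    if kind == "toggle":
        unit["active"] = not unit["active"]
    elif kind == "retype":
        unit["type"] = "PFR" if unit["type"] == "CSTR" else "CSTR"
    else:
        unit["volume"] = min(1.0, max(0.1, unit["volume"] + random.gauss(0.0, 0.1)))
    return new

def evaluate(state):
    """Placeholder performance surrogate (to be maximized)."""
    active = [u for u in state if u["active"]]
    if not active:
        return 0.0
    yield_proxy = sum(u["volume"] * (1.2 if u["type"] == "PFR" else 1.0) for u in active)
    return yield_proxy / (1.0 + 0.5 * len(active))

state = random_state()
obj = evaluate(state)
best = (obj, state)
temperature = 1.0
for _ in range(5000):
    cand = move(state)
    cand_obj = evaluate(cand)
    # Metropolis acceptance: always accept improvements, sometimes accept worse states
    if cand_obj >= obj or random.random() < math.exp((cand_obj - obj) / temperature):
        state, obj = cand, cand_obj
    if obj > best[0]:
        best = (obj, state)
    temperature *= 0.999                     # geometric cooling schedule

print(f"best surrogate objective {best[0]:.3f} for network {best[1]}")
```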
Mehta and Kokossis (2000) have extended the approach to account for nonisothermal systems. Two approaches have been developed to handle temperature effects. Both approaches introduce elements associated with the manipulation of temperature changes. A conceptual approach (profile-based approach) imposes temperature profiles onto the reactor units, whereas a detailed approach (unit-based approach) introduces heaters and coolers into the reactor unit representations. The profile-based approach allows one to determine the optimum temperature policies without considering the details of heat transfer mechanisms. As the profiles are imposed rather than simulated, the approach adds no further computational difficulties compared to the isothermal reactor network synthesis. The solutions obtained are easy to interpret and thus the approach helps in understanding the dominant tradeoffs in the problem. Results from the unit-based approach provide the target that can be obtained from a network of adiabatic reactors with consideration of direct and indirect intermediate heat transfer options. The multiphase reactor network synthesis approach has been successfully applied to a number of industrial problems (Mehta 1998). For illustration purposes, consider the chlorination of butanoic acid as an example of a gas-liquid system. The chlorination of butanoic acid (BA) to monochlorobutanoic acid (MBA) and dichlorobutanoic acid (DBA) involves two reactions in the liquid phase:
BA + Cl2 → MBA + HCl    (1)
BA + 2 Cl2 → DBA + 2 HCl    (2)
The reactions occur in the liquid phase and the gas phase contains the chlorine feed and the hydrogen chloride product. The modeling assumptions and problem data for this study are given in Mehta (1998). The study aimed at identifying reactor designs that can achieve the highest possible yield of MBA. The best conventional reactor design was observed to achieve a maximum yield of 74.4% (mechanically agitated vessel). Reactor network optimization of a superstructure identified a maximum yield of 99.6%. A number of designs were obtained from the optimization of a superstructure of three generic multiphase reactor units that achieve performances close to the target value. Designs range from simple designs employing only one reactor unit to designs featuring three units. Two simple designs are illustrated in Fig. 2.2. Despite their successes, current reactor network design methods still suffer from a number of shortcomings. As mentioned above, the attainable region is difficult to construct for problems containing a large number of reactions and components, local deterministic optimization-based techniques suffer from initialization and convergence problems, whereas stochastic optimization techniques such as the ones proposed by Marcoulaki and Kokossis (1999) and Mehta and Kokossis (1997) have proven to be fairly reliable but require long computational times to converge. Moreover, the optimization results are often incomprehensibly complex and impractical to implement, and the methods offer no insights into the key features that cause good performance in the designs identified. Research is in progress to address these issues. Most recently, a novel approach has been proposed that combines knowledge
Figure 2.2 Reactor designs for the chlorination of butanoic acid (Mehta 1998).
acquisition tools with stochastic optimization methods to robustly and quickly address complex reactor design problems (Ashley and Linke 2004). The method uses knowledge derived from reaction pathway information to gain an understanding of the system and to devise a set of rules that are used to guide the optimal search. The approach was observed to consistently outperform the stochastic optimization-based methods in terms of search speed. Moreover, the knowledge gained during the search can be processed to provide insights into important features that determine a design with performances close to the targets of the system.

2.3.2 Reactive and Reaction-Separation Systems
As indicated in Section 2.2, the combination of separation and reaction has given rise to a number of process intensification methods such as reactive distillation and reactive extraction. The appropriate integration of reaction and separation may lead to significant reductions in capital requirements, raw material waste and other operating costs. However, we would like to stress that the integration of these phenomena is by no means beneficial for all types of systems. There are strong and complex interactions between the various individual subsystems in a process flow sheet, in particular between the reaction and the separation systems that need to be considered in order to develop the most efficient processing options. The development of systematic synthesis tools for integrated reaction and separation systems is considerably more challenging than for the case where only one of these systems is considered as is the case in reactor network synthesis. As a result of these challenges, the
area has received considerably less attention and the developments have mainly been confined to subproblems of integrated reaction and separation problems, such as reactor-separator-recycle systems, and to specific reactive separation systems, such as reactive distillation column design. No general method capable of determining the most appropriate way of integrating reaction and separation is available, be it in a reactive separation process or in a process with separate reaction and separation units. We will review the existing approaches below before we describe in detail a method for integrated reactive/reaction-separation systems design. A first approach to reactor-separator-recycle process network synthesis was proposed by Kokossis and Floudas (1991), who extended their reactor superstructure formulations to account for the design of a separation train, in terms of determining separation task sequences, and the determination of recycle policies. Smith and Pantelides (1995) reformulated the problem to use more detailed separation process models in conjunction with process units that do not perform predefined tasks. An alternative approach was presented by Fraga (1996), who proposed a discretized reactor-separator synthesis problem that can be optimized using dynamic programming techniques. All presented approaches represent only systems with a single fluid phase and cannot account for reactive separation options. For the synthesis of specific reactive separation systems, a number of graphical techniques have been presented. Such methods aim at producing feasible rather than optimal design candidates and are available for reactive distillation, reactive extraction and reactive crystallization problems (Hauan, Westerberg and Lien 2000; Hauan, Ciric and Westerberg 2000; Okasinski and Doherty 1998; Barbosa and Doherty 1988; Ng and Berry 1997; Ng and Samant 1998; Nisoli, Malone and Doherty 1997). Ciric and Gu (1994) were the first to propose an optimization-based approach for reactive distillation column design with the number of stages and the feed locations being the structural decision variables. The problem was formulated as an MINLP and solved using a local deterministic optimization algorithm. Cardoso et al. (2000) later proposed the optimization of the same representation using stochastic techniques in the form of simulated annealing. A more general approach to reactive/reaction-separation system synthesis has been presented by Papalexandri and Pistikopoulos (1994), who propose the optimization of superstructures of postulated units for reaction and separation using local deterministic optimization techniques. Additional applications in reactive and reactor/separation system synthesis using this method were reported in separate publications (Ismail, Papalexandri and Pistikopoulos 1999; Ismail, Papalexandri and Pistikopoulos 2001). The approach is limited by the multiphase reaction options it can handle and must cope with large MINLP formulations that include nonlinearities of the most general type. A general framework for the synthesis and optimization of processes involving reaction and separation was presented by Linke and Kokossis (2003a). The scheme employs rich and inclusive superstructure formulations of two different types of synthesis units and stream networks that allow for a conceptual as well as a rigorous representation of the chemical and physical phenomena encountered in general reaction and separation systems. Stochastic search techniques in the form of simulated
Figure 2.3 Strategy for reactive/reaction separation systems design (Linke 2001).
annealing and Tabu search can be applied to identify robust performance targets and a set of processing options from the superstructures that closely achieve the targets (Linke and Kokossis 2003b). The synthesis method is applied as part of an overall strategy (Linke 2001) as illustrated in Fig. 2.3. In the first instance, the available process design information for candidate reactive separation and mass exchange operations is incorporated into generic synthesis unit models. The proposed methodology utilizes two types of generic units, the reactor/mass exchanger (RMX) and the separation task (ST) unit, which are described below. Superstructures of these generic units are then formulated and the performance targets as well as a set of design candidates are obtained subsequently via robust stochastic optimization techniques. The overall synthesis strategy as well as the flexible process representation allows for iterations to incorporate the insight gained during the synthesis process. The two different types of synthesis units employed in the superstructures allow one to handle detailed information on reaction, mass and heat exchange (RMX unit) as well as conceptual information in the form of separation tasks (ST units). The generic RMX unit is a flexible and compact representation of the possibilities for phenomena exploitation in processing equipment. The RMX unit follows the shadow compartment concept described in Section 2.3.1 and consists of compartments in each phase present in the system. The streams processed in the different compartments of the generic unit can exchange mass across a physical boundary, which can either be a phase boundary or a diffusion barrier. Each compartment features a set of mutually exclusive mixing patterns through which a compact representation of all possible contacting and mixing pattern combinations between streams of different phases can be realized. With different combinations of mixing patterns in the compartments, the existence of mass transfer links between compartments, and decisions on the presence of catalysts, a variety of processing alternatives can be represented by a single generic RMX unit. For a vapor-liquid-liquid system featuring reactions in one heterogeneously catalyzed liquid phase, the six design alternatives in terms of phase interactions can be represented, including a homogeneous reactor, a vapor-liquid and a liquid-liquid mass exchanger, a rectifier, a stripper, and gas-liquid-
liquid reactors. Each of these instances represents a number of design alternatives resulting from different possible combinations of mixing patterns and flow directions in the compartments of the different phases. In contrast to the rigorous representation of reaction and mass transfer phenomena by RMX units, the ST units represent venues for composition manipulations of streams without the need for detailed physical models. In accordance with the purpose of any separation system, the separation task units generate a number of outlet streams of different compositions by distributing the components present in the inlet stream amongst the outlet streams. The ST units can accommodate the different synthesis aims in the screening and design stages outlined in the previous section through different levels of component distribution and separation task constraints. Nonsharp separations arising from operational constraints on the separation tasks can easily be accommodated. The ST units can perform a set of feasible separation tasks according to the separation order to define a set of outlet streams. Depending upon the order in which the tasks are performed, a variety of processing alternatives exist for a single ST unit, such as different distillation sequences performing the same separation tasks. The synthesis units are combined in network superstructures to provide a framework that enables the simultaneous exploration of the functionalities of the different synthesis units along with all possible interactions amongst them. A number of synthesis units and a complete stream network are required to capture all different design options that exist for a process. Novelty is accounted for in the superstructures as the representations are not constrained to conventional process configurations but instead include all possible novel combinations of the synthesis units. The superstructure generation and the design instances that can be extracted from such superstructures are described in detail in Linke and Kokossis (2003a). The synthesis method has been applied to a number of case studies including reactive/reaction-distillation systems, reactive/reaction-extraction systems and bioreaction-separation systems (Linke 2001). For illustration purposes, we will discuss below an application in reactive/reaction-distillation and an application in the development of optimal wastewater treatment processes. The method was applied to synthesize process designs for ethylene glycol (C2H6O2) production from ethylene oxide and water (Linke and Kokossis 2003a). The model consists of two reactions as follows:
C2H4O + H2O → C2H6O2    (3)
C2H4O + C2H6O2 → C4H10O3    (4)
The process goal is the production of 25 kmol/h of ethylene glycol with a minimum purity of 95 mol%. Ideal vapor and liquid phases are assumed (Ciric and Gu 1994). The case study aimed at identifying the performance target and designs that minimize raw material and utility costs. Network superstructures of four multiphase RMX units were generated accounting for vapor and liquid phases. The stream network includes complete intraphase as well as interphase connectivity amongst all compartments of the generic units present. Network optimization reveals a perform-
ance target of around 2180 k$/yr for the system, an improvement of 7% over the optimal reactive distillation column reported earlier (Ciric and Gu 1994). A number of designs were identified that exhibit performances close to the target. Figure 2.4 shows two such design alternatives of similar performance. In a different case study, the method was applied to design activated sludge processes (Rigopoulos and Linke 2002) using the comprehensive kinetic model available for activated sludge systems (Henze, Grady, Gujer et al. 1987). The study aimed at identifying optimal schemes for combined oxidation/denitrification of wastewater that minimize both the effluent COD and the total nitrogen content (N). Superstructures were generated using RMX units to represent the possible single-phase and gas-liquid reactors and ST units to facilitate sludge separation. The study identified
Figure 2.4 Two design alternatives of similar performance for the ethylene glycol process.
Figure 2.5 Activated sludge process design.
designs that would achieve reductions by 97.4% in COD and by 84.9% in N. There is significant scope to improve the total nitrogen content reduction performances by adopting the novel designs delivered by this study. The new designs were observed to achieve improvements in reduction rates of total nitrogen content between 33% (Bardenpho process) and 80% (Ludzack-Ettinger process) over their conventional counterparts. A high performance design identified in this study is shown in Fig. 2.5. Details on the exploitation of the chemical and physical design insights are given in Rigopoulos and Linke (2002).
2.3.3 Membrane and Hybrid-Separation Systems
Membrane separation processes such as gas permeation, pervaporation, reverse osmosis, ultrafiltration, microfiltration, dialysis, and electrodialysis are frequently used in the chemical process industry. In addition, various membrane hybrid processes are used that couple membrane separation processes with other separation processes including adsorption or evaporation. Systematic design procedures for membrane separation systems have only emerged recently and it is common practice to compare a few known design alternatives in simulation studies. A large amount of work has been published in this area, a few examples of which are listed here. Stern, Perrin and Naimon (1984) and Stookey, Graham and Pope (1984) have analyzed the performance of single-stage membrane permeators with recycle options. Mazur and Chan (1982) have studied multistage systems for natural gas processing and Kao (1988) has investigated recycle strippers and enrichers. Rautenbach and Dahm (1987) and Bhide and Stern (1991a,b) have performed economic feasibility studies for various membrane network configurations with and without recycle streams. Agrawal and
Xu (1995, 1996) have provided broad guidelines based on process economics for two compressor cascades. Clearly, by their very nature, all of these approaches do not allow one to systematically identify the most efficient process option that may exist for a given membrane separation problem. Few approaches have considered the use of optimization technology to guide membrane network design selection. Tessendorf, Gani and Michelsen (1998) have presented various aspects of modeling, simulation, design and optimization of membrane-based separation systems modeled using orthogonal collocation. The proposed model can handle multicomponent mixtures and considers the effects of pressure drop and energy balances. Qi and Henson (1997, 2000) proposed an optimization-based approach that makes use of local deterministic optimization techniques to solve NLP and MINLP formulations for the design of gas permeation membrane networks using spiral-wound permeators. Most recently, Marriott and Sorenson (2003a,b) have developed a general approach to modeling membrane modules considering rigorous mass, momentum and energy balances. Their approach constitutes a feed-side flow model coupled with a permeate-side flow model and a local transport model for the membrane system. They employ the detailed models in superstructure formulations, which are optimized using genetic algorithms. Their application focuses on pervaporation systems and, due to the complex models employed, the optimizations have proved to be extremely computationally demanding. A further membrane network synthesis approach has been proposed recently (Uppaluri, Linke and Kokossis 2004) that capitalizes on the developments in integrated reaction and separation process synthesis presented in the previous section (Linke and Kokossis 2003a) to develop a comprehensive gas permeation membrane network representation in the form of superstructures that make use of variants of the RMX unit presented previously. The superstructures capture all possible conventional and novel combinations of cocurrent, countercurrent and cross flow gas permeation membrane units. The networks can be optimized using robust stochastic optimization techniques in the form of simulated annealing to extract those designs that exhibit the best economic performances. It overcomes major limitations of existing approaches as it allows one to quickly screen amongst all possible structural and operational process alternatives that may exist for gas permeation networks and to identify the performance limits of the system, and it can accommodate user preferences and problem-specific modeling aspects. The approach can also be extended to account for hybrid separations by incorporating additional RMX and ST units in the superstructure formulations. The approach has been applied in a number of case studies in gas permeation network design as well as membrane hybrid network design. We will present two such applications below. In one case study, membrane networks are developed that exhibit minimum total cost for the recovery of hydrogen from synthesis gas using polysulfone membranes (Uppaluri, Linke and Kokossis 2004). High-purity hydrogen (99% H2) is required at high recovery. A number of optimal designs obtained from superstructure optimization for RMX units with different flow patterns are shown in Fig. 2.6. The designs using countercurrent membrane units exhibit the lowest network cost compared to
networks with other flow patterns; however, the performance variations between the various flow patterns are fairly small, so that designs with cocurrent and cross flow patterns may be viable options if they are seen to offer operational advantages for this particular problem. In another case study, the superstructure optimization approach was applied to synthesize optimal hybrid adsorption/membrane process configurations for the
Figure 2.6 Optimal membrane network designs obtained from superstructure optimization for RMX units with different flow patterns (cocurrent flow design: TAC = $1,724,000).
sweetening of crude natural gas to meet pipeline specifications (Linke and Kokossis 2003a). Superstructures featuring three RMX units to represent the poly(ether-urethane-urea) membrane units and three ST units to represent the irreversible fixed-bed adsorption process are optimized. The study sought to identify the most economic designs in terms of the total annualized network cost as well as to establish the influence of the hydrogen sulfide content in the feed gas on the performance targets of the membrane-adsorption hybrid processes. The network optimization revealed a number of different processing options for different hydrogen sulfide feed concentrations. The optimal designs for the cases of lower hydrogen sulfide feed concentrations feature only networks of membrane units, whereas adsorption-membrane hybrid systems were observed at higher concentrations. The dependence of the optimal total annualized process cost on the hydrogen sulfide feed concentration is shown in Fig. 2.7. Details on the case study can be found in Linke and Kokossis (2003a).
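The stochastic search that drives these superstructure optimizations can be summarized in a few lines of code. The sketch below is a generic simulated annealing loop of the kind referred to above; the functions evaluate_cost and propose_move, the cooling parameters, and all numerical values are hypothetical placeholders rather than the models used in the cited studies.

```python
import math
import random

def simulated_annealing(initial_design, evaluate_cost, propose_move,
                        t_start=1000.0, t_end=1.0, alpha=0.95, moves_per_level=50):
    """Generic simulated annealing loop for superstructure optimization.

    evaluate_cost(design) -> total annualized cost (to be minimized)
    propose_move(design)  -> a neighboring design, e.g., add/remove a membrane
                             stage, change a flow pattern or a split fraction
    """
    current = best = initial_design
    f_current = f_best = evaluate_cost(initial_design)
    temperature = t_start
    while temperature > t_end:
        for _ in range(moves_per_level):
            candidate = propose_move(current)
            f_candidate = evaluate_cost(candidate)
            delta = f_candidate - f_current
            # accept improvements always; accept deteriorations with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                current, f_current = candidate, f_candidate
                if f_current < f_best:
                    best, f_best = current, f_current
        temperature *= alpha  # geometric cooling schedule
    return best, f_best
```

Because the move generator can mix discrete (structural) and continuous (operational) changes, a loop of this type is well suited to the combined structural and operational searches described above.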
2.3.4 Process-Solvent Systems
So far, we have presented a number of computer-aided methods for process intensification. These methods allow for the determination of the best processing schemes for a defined process system. Many processes make use of additional materials such as solvents to perform processing tasks. The process performance is strongly dependent on solvent properties and solvent selection is crucial to achieve optimal process intensification. Due to the strong interactions between the solvent and the process design, it is important to select the solvent and the process simultaneously. However, as process performance information is generally unavailable in the molecular design stage, available computer-aided solvent design methods (e.g., Marcoulaki and Kokossis 2000; Buxton, Livingston and Pistikopoulos 1999; Gani and Brignole 1983) design molecules for simultaneous optimality in a number of thermodynamic properties that are anticipated to have a significant effect on the process performance. A
processing scheme is then designed in a subsequent stage for the solvent identified in computer-aided molecular design. Clearly, the success of such a design philosophy is highly sensitive to the formulation of the objective function and thermodynamic property constraints employed in solvent design and selection. The appropriate levels of importance of the thermodynamic properties used to judge the molecular performance are easily misrepresented without process performance feedback. If any important thermodynamic property effects are accidentally excluded, overestimated, or underestimated, the resulting process designs will inevitably underperform. As this is the case most of the time, there is a high risk of failure to provide optimal solutions to the overall design problem with such a sequential design strategy. Despite the potential for identification of improved designs, very few approaches have been reported that account for process performance feedback in molecular design. A possible design strategy has been proposed by Hostrup, Harper and Gani (1999). However, their method does not account for structural design interactions between computer-aided solvent and process design. This is attributable to the combinatorial explosion problems faced when attempting to integrate process synthesis approaches with computer-aided solvent design approaches in a unified optimization model. A first approach to the simultaneous design of process and solvent structures was proposed by Linke and Kokossis (2002). They developed a representation and optimization framework that exploits all possible molecular and process design options for solvent-processing reaction-separation systems. The proposed system representation takes the form of a process-molecule synthesis supermodel, which combines the superstructure-based process design method (Linke and Kokossis 2003a) as presented in Section 2.3.2 to capture all possible processing options with a computer-aided molecular design representation (Marcoulaki and Kokossis 2000) to capture all possible molecular structures of the solvent. The supermodels are then searched using simulated annealing to identify the optimal process-solvent system. Apart from the degrees of freedom associated with the process design options, the type and number of functional groups of the GC method that comprise the solvent molecules are treated as optimization variables during the search. The method was applied to the synthesis of reaction-extraction bioprocesses together with the corresponding solvents for (reactive) liquid-liquid extraction. The case study aimed at identifying optimal processes and solvents for the production of ethanol by fermentation as described by Fournier (1986). The objective for the study involved the identification of processing options that achieve maximum ethanol yields and glucose conversion while minimizing solvent flow rates. The objective function and problem data are given in Linke and Kokossis (2002). The classical fermentor design followed by a countercurrent liquid-liquid extractor using dodecanol as the solvent was taken as the reference design for the study. The performance target for this system was identified at an objective function value of 3.66, and the design achieved a glucose conversion into ethanol of 60.5 %. By allowing simultaneous process and solvent optimization, designs could be identified that achieved performance targets of 3135, a performance gain of three orders of magnitude.
A number of process configurations together with different solvent molecules were revealed
Figure 2.8 Synthesis strategy for process-solvent systems.
that allowed one to achieve near complete conversion of glucose into ethanol at moderately increased solvent flows. The optimal process design options feature combinations of fermentors, reactive fermentors and extractors with recycles in the solvent as well as the aqueous phases. More detail on the results of this study can be found in Linke and Kokossis (2002). Even though the results were impressive, serious numerical problems during the optimization were reported. The possible performance gains have stimulated additional research into a more numerically friendly method for the integrated design of processes and solvents. Such an approach was proposed recently by Papadopoulos and Linke (2004). They decomposed the design problem into a computer-aided solvent design part and a process synthesis part. At the solvent design level, the design problem is reformulated as a multiobjective optimization problem in order to capture the relationships between physical properties and performance/environmental indices expected to impact on process performance, and to extract solvent design candidates across the Pareto-optimal front without having to make limiting assumptions prior to process design. The identified set of Pareto-optimal solvents is then introduced in the process synthesis task. A number of process synthesis tools can be employed in this approach, including the generic reaction-separation process synthesis scheme described in Section 2.3.1. The decomposition-based approach is illustrated in Fig. 2.8. Overall, this approach has been observed to be robust and quick for all cases studied, including liquid-liquid extraction, extractive distillation and reactive/reaction-extraction systems.
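The first stage of this decomposition amounts to extracting the non-dominated (Pareto-optimal) members of a set of solvent candidates scored against several objectives. The following sketch shows a generic Pareto filter under the assumption that all objectives are to be minimized; the candidate names and objective values are purely illustrative.

```python
def pareto_front(candidates):
    """Return the non-dominated subset of a list of (name, objectives) pairs.

    Each entry is (identifier, tuple_of_objectives); all objectives are minimized.
    """
    front = []
    for name_i, obj_i in candidates:
        dominated = False
        for name_j, obj_j in candidates:
            if obj_j != obj_i and all(a <= b for a, b in zip(obj_j, obj_i)) \
               and any(a < b for a, b in zip(obj_j, obj_i)):
                dominated = True
                break
        if not dominated:
            front.append((name_i, obj_i))
    return front

# hypothetical solvent candidates scored on two minimized objectives
solvents = [("S1", (0.8, 2.1)), ("S2", (0.5, 2.6)), ("S3", (0.9, 2.0)), ("S4", (0.7, 2.8))]
print(pareto_front(solvents))   # S1, S2 and S3 survive; S4 is dominated by S2
```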
2.3.5 Water and Wastewater Systems
The previous sections have presented a number of computer-aided methods for the identification of process intensification options in reaction systems, reactive and reaction-separation systems, hybrid separation systems, and solvent-based process systems. Another area where computer-aided methods are useful for the identification of optimal processing schemes is the design of industrial water-using systems.
The design of such a system for the efficient use of the water resources is a complex problem involving a number of different tradeoffs. Apart from major process changes, e.g., replacing wet cooling towers by air coolers, the efficiency of the water system can generally be significantly improved through a variety of practices that include water reuse, regeneration at different process stages, and recycling. The distinctive features of the wastewater minimization problem dictate the need for focused rather than general-purpose approaches. A major feature of the water system design problem in a typical industrial site is that the largest water users, e.g., cooling towers and the steam system, are generally not mass transfer operations. Another advantage is the possibility that a limiting water profile can be constructed (Wang and Smith 1994), which allows the analysis of all water-using operations of the water system on a common basis. This allows the combination of such insights gained from a graphical analysis of the problem with powerful optimization-based methods to systematically generate highly efficient water networks. In this section, we describe such an approach, which combines graphical water-pinch concepts with mathematical programming techniques in the form of superstructure models formulated as an MINLP problem (Alva-Argaez 1999). The optimization strategy employs a decomposition scheme and associates binary variables with the existence of different connections. The use of binary variables enables the approach to address complexity issues of the water network, as it allows one to incorporate many practical constraints into the analysis, such as geographical constraints, flow rate constraints, and forbidden/compulsory matches. The formulation extends the domain of water pinch analysis with elements of capital cost to study the tradeoffs between freshwater costs, mass exchanger costs, and the cost of the required pipework. Consequently, the optimization results in cost-efficient networks rather than networks featuring minimum freshwater consumption. Former limitations of the water pinch method (Wang and Smith 1994) to address multiple contaminants are thus overcome since there are, in principle, no limits on the number of components or the number of freshwater sources. The design problem for a water-using system involves water-related elements in the form of:
• a number of freshwater sources available to satisfy the demand;
• water-using operations described by loads of contaminants and concentration levels.
The water system design problem with its sources and operations is sketched in Fig. 2.9. The design task is to find the network configuration to minimize the overall demand for freshwater (and consequently reduce wastewater volume) compatible with minimum total annual cost. The synthesis objective thus combines terms for low freshwater consumption, suitable network topology for water reuse, and low investment cost. The investment cost of the network includes piping costs and the approximate length of the pipe can be specified for each possible connection together with the materials of construction. The cost of mass exchange units assumes thermodynamic parameters and equilibrium relationships between the process streams and the water streams for the key contaminants, as well as the corresponding design equations and cost functions. The water-using system synthesis problem is initially
Figure 2.9 Water systems design problem.
presented based on the optimization of a superstructure representation. A natural decomposition of the problem is then developed based on the physical nature of the design problem and engineering insights gained through water pinch analysis. This allows the projection of the bilinear terms in the problem formulation following a recursive procedure, which results in an efficient solution of the MINLP formulation. The method relies on the optimization of a superstructure model that facilitates all possible connections between freshwater sources and operations and between different water-using operations. The process streams in the mass exchange operations are considered implicitly in the superstructure model through the construction of the limiting water profiles. The number of units in the water network determines existing units and discrete options account for connections between sources and sinks of water. The superstructure is developed so that each freshwater stream entering the network is split amongst the water-using operations, each operation is preceded by a mixer fed by streams from the freshwater splitters and reuse streams emanating from the outlets of all other operations, and each operation is followed by a splitter that feeds the final mixer and the other operations in the system. The nonlinearities in the superstructure formulation are due to bilinear terms that appear in the mass balances (superstructure mixers and splitters), the nonlinear terms of the sizing equations, and the cost functions for the water-using operations. An example of a superstructure with a single freshwater source and two water-using operations is shown in Fig. 2.10. Unlike some of the previous Mass Exchanger Network (MEN) developments (Papalexandri, Pistikopoulos and Floudas 1994), the superstructure provides options for water streams to merge. The minimum allowable composition difference used by mass exchanger network synthesis developments (Papalexandri, Pistikopoulos and Floudas 1994; El-Halwagi and Manousiouthakis 1989, 1990a,b; Papalexandri and Pistikopoulos 1994) to avoid infinite-sized mass exchangers is built into the limiting water profile data. In this approach, the given value is only a lower bound and it does not imply any preoptimized cost tradeoff between capital and operating costs. A customized solution strategy for the superstructure optimization problem was developed by making use of insights gained from water pinch analysis. From water pinch analysis it is known that for all water-using operations, at least one of the contaminants will reach its maximum permissible value in the optimal solution. If this were not the case, the flow rate through a particular unit could be decreased further
Figure 2.10 Water network superstructure.
to decrease the water consumption, and disprove optimality. Since the mass load of an operation is fixed, the outlet concentrations must be maximized to reduce the water flow. Motivated by this observation, and provided all outlet concentrations are set to their maximum levels in the water-using operations, an upper bound can systematically be obtained for the performance of the system. The observation is further exploited with the development of a "natural" decomposition strategy that is explained next. The original MINLP problem is decomposed into two related subproblems that are solved sequentially within an iterative procedure. A primal problem (P1) is developed in a first instance by projecting the nonlinear constraints onto the concentration space according to the observations in the previous section. The projection is consistent with the objective of minimum freshwater demands. Given that the "limiting" contaminants define the water demand in the operations, the fact of having a set of nonlimiting contaminants far away from the initial assumption does not detract from the quality of the solution obtained. Experience with the procedure shows that in numerous multicontaminant problems the minimum freshwater demand is identified under these conditions and the calculation of the outlet concentrations of nonlimiting contaminants is not significant. However, the projection is likely to correspond to an infeasible solution, as not all contaminants will be able to reach their maximum level for a given mass load and water flow rate. Thus, there are limiting contaminants, which achieve their maximum level and determine the water demand for the operation, and nonlimiting contaminants, which exit the operation at a lower concentration and do not affect the water consumption in the particular operation. The cost functions are linearized in problem (P1), which then becomes a mixed-integer linear programming (MILP) problem that can be solved to global optimality using any reliable branch and bound algorithm. Once the flow pattern corresponding to the optimal solution of problem (P1) is identified, a second problem (P2) can be solved to find the set of concentrations corresponding to the design obtained. Problem (P2) is formulated as an LP model that includes mass balance equations projected onto the flowrate space with respect to all flows in the network. The exit concentrations of all contaminants are the variables of the problem.
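The control flow of the decomposition can be outlined as follows. The sketch is only a skeleton: solve_p1_milp and solve_p2_lp are hypothetical stand-ins for calls to an MILP and an LP solver carrying the projected models described above, and the data structure is assumed rather than prescribed by the cited work.

```python
def design_water_network(problem_data, solve_p1_milp, solve_p2_lp,
                         max_iterations=20, tolerance=1e-3):
    """Iterative decomposition into a projected MILP (P1) and a concentration LP (P2).

    solve_p1_milp(problem_data, concentrations) -> (design, estimated_cost)
        solves the flow/structure problem with outlet concentrations fixed
    solve_p2_lp(problem_data, design) -> (concentrations, true_cost)
        recomputes the contaminant outlet concentrations for the fixed design
    """
    concentrations = problem_data["max_outlet_concentrations"]  # initial projection
    best_design, best_cost = None, float("inf")
    for _ in range(max_iterations):
        design, _ = solve_p1_milp(problem_data, concentrations)
        concentrations, true_cost = solve_p2_lp(problem_data, design)
        if true_cost < best_cost - tolerance:
            best_design, best_cost = design, true_cost
        else:
            break  # no further improvement between successive P1/P2 passes
    return best_design, best_cost
```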
The overall procedure constitutes a robust methodology, as will be illustrated below on the basis of an example of the design of a water system for an oil refinery. The mixed-integer models allow one to consider the system-wide interactions between design decisions. In particular, the models can consider tradeoffs between various design alternatives, striking a balance between increased design expenditures and resulting improvements in the system's operation, such as decreases in operating cost. The petroleum and petrochemical industries are heavy users of water. Oil refineries not only face the challenge of reducing the amount of waste generated, but also a fiercely competitive market with other water users. Water reuse and recycling are widely practiced in petroleum refining operations; however, typically no systematic methods for exploring new design alternatives to make the most of water reuse and recycling are used. The presented methodology was applied in an oil refinery case study that considered a number of water users, namely an atmospheric distillation unit (CDU), a vacuum distillation operation (VDU), hydrotreating (HDS), crude desalting (desalter), a cooling tower (c.tower), the boiler house, a delayed coker, and other operations grouped together (others). The study considered three contaminants: hydrocarbons (HC), hydrogen sulfide (H2S) and salts. The problem data, equipment cost correlations, and modeling assumptions are given elsewhere (Alva-Argaez 1999). The total annual cost of the design proposed without applying optimization techniques is 4,430 k$/yr. This base case already accounts for some water reuse, as the effluent from the CDU steam stripping feeds the desalter. The design corresponding to the base case is illustrated in Fig. 2.11. The problem was solved using the presented methodology considering simultaneously the freshwater cost, the cost
Figure 2.11 Base case water network design for an oil refinery.
Figure 2.12 Optimal water network design for an oil refinery.
associated with the mass exchangers, and the piping cost. The optimal design (Fig. 2.12) was found after about 30 s of CPU time on an antiquated Pentium III processor. The optimized design incurs a total annualized cost of 4,040 k$/yr, corresponding to cost savings of about 10 % as compared to the base case.
2.4 Concluding Remarks
There is constant pressure on the chemical process industry to improve processes and to create new process routes in order to remain competitive in a global market place and to comply with environmental regulations. This translates into a constant need to develop new, more efficient process technologies. Process intensification technologies have proven their value in improving process efficiencies. Most process intensification technologies, in terms of equipment and process concepts, have been invented through the ingenuity of a small number of individuals. Systematic computer-aided approaches have the capability of enhancing the innovation process significantly by offering valuable decision-support to the process designer. This chapter has presented important process intensification technologies as well as a number of the available computer-based methods that allow the systematic generation of intensified processes for reaction systems, reactive/reaction-separation systems, membrane (hybrid) separation systems, process-solvent systems and water and wastewater systems. A major advantage of all the presented methods is their ability to determine robust performance targets and to identify a variety of design
options with similar close-to-target performances. By providing this information, the design tools offer invaluable decision support to the design engineer, as they allow the inspection of similarities and differences between high-performance candidates in order to select the most practical and novel process designs. Most of these methods have only recently been developed in academia, and a close collaboration between academia and industry is essential to mature the technologies so that they can be used routinely by design engineers throughout the world.

References

1 Achenie, L. E. K., Biegler, L. T. (1986) Ind. Eng. Chem. Fund. 25, 621.
2 Achenie, L. E. K., Biegler, L. T. (1988) Ind. Eng. Chem. Res. 27, 1811.
3 Achenie, L. E. K., Biegler, L. T. (1990) Comp. Chem. Eng. 14, 23.
4 Agarwal, R., Xu, J. (1995) Chem. Eng. Sci. 51, 365.
5 Agarwal, R., Xu, J. (1996) AIChE J. 42, 2141.
6 Alva-Argaez, A. (1999) Dissertation, UMIST, UK.
7 Ashley, V. M., Linke, P. (2004) Chem. Eng. Res. Des. 82(A8), 1.
8 Bakker, R. A. (2004) In: Stankiewicz, A., Moulijn, J. A. (eds.) Re-engineering the Chemical Processing Plant: Process Intensification. New York: Marcel Dekker, 447-470.
9 Balakrishna, S., Biegler, L. T. (1992a) Ind. Eng. Chem. Res. 31, 300.
10 Balakrishna, S., Biegler, L. T. (1992b) Ind. Eng. Chem. Res. 31, 2152.
11 Barbosa, D., Doherty, M. F. (1988) Chem. Eng. Sci. 43(3), 541.
12 Bhide, B. D., Stern, S. A. (1991a) J. Membr. Sci. 62, 13.
13 Bhide, B. D., Stern, S. A. (1991b) J. Membr. Sci. 62, 37.
14 Buxton, A., Livingston, A. G., Pistikopoulos, E. N. (1999) AIChE J. 45, 817.
15 Cardoso, R., Salcedo, L., Feyo de Azevedo, S., Barbosa, D. (2000) Chem. Eng. Sci. 55, 5059.
16 Ciric, A. R., Gu, D. (1994) AIChE J. 40, 1479.
17 Ehrfeld, W. (2004) In: Stankiewicz, A., Moulijn, J. A. (eds.) Re-engineering the Chemical Processing Plant: Process Intensification. New York: Marcel Dekker, 167-190.
18 El-Halwagi, M. M., Manousiouthakis, V. (1989) AIChE J. 35, 1233.
19 El-Halwagi, M. M., Manousiouthakis, V. (1990a) Chem. Eng. Sci. 9, 2813.
20 El-Halwagi, M. M., Manousiouthakis, V. (1990b) AIChE J. 36, 1209.
21 Feinberg, M., Hildebrandt, D. (1997) Chem. Eng. Sci. 52, 1637.
22 Fournier, R. L. (1986) Biotech. Bioeng. 28, 1206.
23 Fraga, E. S. (1996) Chem. Eng. Res. Des. 74, 249.
24 Gani, R., Brignole, R. A. (1983) Fluid Phase Equilibria 13, 331.
25 Glasser, D., Hildebrandt, D. (1987) Comp. Chem. Eng. 21, 775.
26 Glasser, D., Hildebrandt, D., Crowe, C. M. (1987) Ind. Eng. Chem. Res. 26, 1803.
27 Glasser, B., Hildebrandt, D., Glasser, D. (1992) Ind. Eng. Chem. Res. 31, 1541.
28 Glasser, D., Hildebrandt, D., Godor, S. (1994) Ind. Eng. Chem. Res. 33, 1136.
29 Hauan, S., Westerberg, A. W., Lien, K. M. (2000) Chem. Eng. Sci. 55, 1053.
30 Hauan, S., Ciric, A. R., Westerberg, A. W., Lien, K. M. (2000) Chem. Eng. Sci. 55, 3145.
31 Henze, M., Grady, C. P. L. Jr, Gujer, W., Marais, G. v. R., Matsuo, T. (1987) Water Res. 21(5), 505.
32 Hildebrandt, D., Glasser, D., Crowe, C. M. (1990) Ind. Eng. Chem. Res. 29, 49.
33 Hildebrandt, D., Glasser, D. (1990) Chem. Eng. Sci. 45, 2161.
34 Horn, F. (1964) Attainable and non-attainable regions in chemical reaction technique. Proceedings of the 3rd European Symposium on Chemical Reaction Engineering. London: Pergamon Press.
35 Hostrup, M., Harper, P. M., Gani, R. (1999) Comp. Chem. Eng. 23, 1395.
36 Ismail, R. S., Pistikopoulos, E. N., Papalexandri, K. P. (1999) Chem. Eng. Sci. 54, 2721.
37 Ismail, R. S., Pistikopoulos, E. N., Papalexandri, K. P. (2001) AIChE J. 47, 629.
38 Kao, Y. K. (1988) J. Membr. Sci. 39, 143.
39 Kokossis, A. C., Floudas, C. A. (1990) Chem. Eng. Sci. 45, 595.
40 Kokossis, A. C., Floudas, C. A. (1991) Chem. Eng. Sci. 46, 1361.
41 Kokossis, A. C., Floudas, C. A. (1994) Chem. Eng. Sci. 49, 1037.
42 Lakshmanan, L., Biegler, L. T. (1996) Ind. Eng. Chem. Res. 35, 1344.
43 Lakshmanan, L., Biegler, L. T. (1997) Comp. Chem. Eng. 21, 785.
44 Linke, P. (2001) Dissertation, UMIST, UK.
45 Linke, P., Kokossis, A. C. (2002) Comput. Aided Chem. Eng. 10, 115.
46 Linke, P., Kokossis, A. C. (2003a) AIChE J. 49(6), 1451.
47 Linke, P., Kokossis, A. C. (2003b) Comp. Chem. Eng. 27(5), 733.
48 Marcoulaki, E. C., Kokossis, A. C. (1999) AIChE J. 45, 1977.
49 Marcoulaki, E. C., Kokossis, A. C. (2000) Chem. Eng. Sci. 55, 2547.
50 Marriott, J., Sørensen, E. (2003a) Chem. Eng. Sci. 58, 4975.
51 Marriott, J., Sørensen, E. (2003b) Chem. Eng. Sci. 58, 4991.
52 Mazur, W. H., Chan, M. C. (1982) Chem. Eng. Prog. 78, 38.
53 Mehta, V. L., Kokossis, A. C. (1997) Comp. Chem. Eng. 21, S325.
54 Mehta, V. L., Kokossis, A. C. (2000) AIChE J. 46, 2256.
55 Mehta, V. L. (1998) Dissertation, UMIST, UK.
56 Moulijn, J. A., Kapteijn, F., Stankiewicz, A. (2004) In: Stankiewicz, A., Moulijn, J. A. (eds.) Re-engineering the Chemical Processing Plant: Process Intensification. New York: Marcel Dekker, 191-226.
57 Ng, K. M., Berry, D. A. (1997) AIChE J. 43(7), 1737.
58 Ng, K. M., Samant, K. D. (1998) AIChE J. 44(6), 1363.
59 Nisoli, A., Malone, M. F., Doherty, M. F. (1997) AIChE J. 44, 1363.
60 Okasinski, M., Doherty, M. F. (1998) Ind. Eng. Chem. Res. 37, 2821.
61 Papadopoulos, A., Linke, P. (2004) Comput. Aided Chem. Eng. 18, 259.
62 Papalexandri, K. P., Pistikopoulos, E. N. (1994) Comp. Chem. Eng. 18, 1125.
63 Papalexandri, K. P., Pistikopoulos, E. N., Floudas, C. A. (1994) Chem. Eng. Res. Des. 72, 279.
64 Qi, R., Henson, M. A. (1997) Sep. Pur. Tech. 13, 209.
65 Qi, R., Henson, M. A. (2000) Comp. Chem. Eng. 24, 2719.
66 Ramshaw, C. (2004) In: Stankiewicz, A., Moulijn, J. A. (eds.) Re-engineering the Chemical Processing Plant: Process Intensification. New York: Marcel Dekker, 69-119.
67 Rautenbach, R., Dahm, W. (1987) Chem. Eng. Proc. 21, 141.
68 Rigopoulos, S., Linke, P. (2002) Comp. Chem. Eng. 26, 585.
69 Schweiger, C. A., Floudas, C. A. (1999) Ind. Eng. Chem. Res. 38, 744.
70 Siirola, J. J. (1995) AIChE Symp. Ser. 91(304), 222-234.
71 Smith, E. M. B., Pantelides, C. C. (1995) Comp. Chem. Eng. 19, 83.
72 Stankiewicz, A. (2004) In: Stankiewicz, A., Moulijn, J. A. (eds.) Re-engineering the Chemical Processing Plant: Process Intensification. New York: Marcel Dekker, 261-308.
73 Stankiewicz, A., Drinkenburg, A. A. H. (2004) In: Stankiewicz, A., Moulijn, J. A. (eds.) Re-engineering the Chemical Processing Plant: Process Intensification. New York: Marcel Dekker, 1-32.
74 Stern, S. A., Perrin, J. E., Naimon, E. J. (1984) J. Membr. Sci. 20, 25.
75 Stookey, D. J., Graham, T. E., Pope, W. M. (1984) Env. Prog. 3, 212.
76 Tessendorf, S., Gani, R., Michelsen, M. L. (1998) Chem. Eng. Sci. 54, 943.
77 Thonon, B., Tochon, P. (2004) In: Stankiewicz, A., Moulijn, J. A. (eds.) Re-engineering the Chemical Processing Plant: Process Intensification. New York: Marcel Dekker, 121-165.
78 Tsoka, C., Johns, W. R., Linke, P., Kokossis, A. (2004) Green Chemistry 8, 401.
79 Uppaluri, R. V. S., Linke, P., Kokossis, A. C. (2004) Ind. Eng. Chem. Res. 43, 4305.
80 Wang, Y. P., Smith, R. (1994) Chem. Eng. Sci. 49, 981.
81 Westerberg, A. W. (1989) Comp. Chem. Eng. 13, 365.
3 Computer-aided Integration of Utility Systems
François Maréchal and Boris Kalitventzeff
3.1 Introduction
The integration of utility systems concerns the way energy entering a production plant will be transformed into useful energy for satisfying the needs of the production processes. In order to identify the best options for the utility systems, one has to consider the chemical industrial plant as a system (Fig. 3.1) that converts raw materials into valuable products and by-products. The production is realized by a list of interconnected physical unit operations that together form the chemical process. These transformations are made possible by the use of energy and support media, like solvents, water, and catalysts. The transformations are not perfect and
Figure 3.1 The process as a system that converts raw materials into products and by-products.
therefore the process results in the production of wastes. Wastes are emitted in several forms:
• waste heat in the cooling water or the air;
• emissions as combustion or other gases (e.g., humid air, steam vents, etc.);
• liquid streams such as polluted water or solvent;
• solids.
Emissions are regulated or taxed, so the waste emissions should reach levels that meet waste processing strategies. When performed on site, waste treatment integration offers opportunities for reducing the treatment cost by material recycling, waste energy recovery or by using process waste heat to improve the treatment efficiency. In this chapter, the utility system will be considered as being composed of different subsystems that provide the following services to the production processes:
• energy conversion;
• compressed gases: air, O2, N2, H2, etc.;
• cleaning in place;
• air conditioning and space heating;
• catalyst recovery;
• solvent recovery;
• water treatment;
• effluent treatment;
• etc.
We will, however, focus our presentation on energy conversion and water integration. Referring to Fig. 3.1, the goal of the engineer is to maximize the efficiency of the horizontal transformations from raw materials to products by minimizing the use of resources in the vertical transformations, leading by balance to the reduction of the emissions. The process integration techniques based on pinch analysis (e.g., [1]) have mainly focused on the definition of the minimum energy requirement (MER), expressed as the minimum (useful) heat requirement, and the definition of the optimal heat exchanger network (HEN) design that will realize the energy recovery between the hot and cold streams of the production processes at a minimum cost, achieving in this way the best tradeoff between the heat exchanger investment and the energy savings. In this approach, utilities are not really considered since they only appear as a way of supplying the minimum energy requirement. Using the analogy between temperature and concentration and between heat and flow rate, water usage may be tackled using a similar approach (e.g., [2]). We will therefore first present the methods for integrating the energy conversion and consider the water usage aspects afterwards. The role of the energy conversion subsystem is to supply the energy requirement of the process at a minimum cost or with a maximum efficiency. This means converting the available energy resources into useful energy (energy services) and distributing it to the process operations. From the process synthesis perspective, the problem has five aspects:
• the definition of the most appropriate energy resources;
• the selection of the conversion technologies;
• the definition of the most appropriate size of the equipment considered for the system (i.e., the investment);
• the definition of the most appropriate way of operating the conversion system;
• the definition of the utility-process heat exchanger network (energy distribution and interface between the utility system and the process).
In order to increase the efficiency of the energy conversion system, the rational use of the energy resources will be obtained by the combined production of different services (polygeneration), for example, combined heat and power, combined production of hydrogen, steam or refrigeration. In a process, the utilities are considered as a service provided to the production units. The control and the reliability issues are also part of the problem. For example, steam condensers are placed in such a way that these will control the target temperatures of the process streams. When the investment is made, the optimal management of the utility system will cover the aspects of exploiting market opportunities (e.g., electricity prices) and using at best the energy conversion equipment. The scheduling and the optimal management will then be an issue for the process. The role of the utilities becomes even more important when considering multiprocess plants being served by one centralized utility system. In this case, the utility system will transfer heat between processes. Considering the variations of demand over time, the definition of the optimal system becomes a multimodal and multiperiod problem where the best sizes of the equipment will have to be defined considering annualized value on a yearly production basis. In this perspective, another dimension to consider is the influence of the ambient conditions that will influence both the process demands and the conversion process efficiency (e.g., the influence of the ambient temperature on the refrigeration cycle coefficient of performance (COP)).
3.1.1 Defining the Process Requirements Using the Utility System
In an ideal situation, the definition of the hot and cold streams of a process is obtained by combining the use of data validation (for existing processes) or simulation tools (for new processes). The data of an existing process are, however, not always available to the engineers doing the process integration study. This is particularly the case when the chemical production site utility system is managed by an energy service company that is different from the production companies on the site. Furthermore, while utility systems are easy to instrument, instrumenting processes is not always so easy, particularly when dealing with processes in the food industry. Taking advantage of the utility system instrumentation, it is possible to deduce the definition of the hot and cold streams of the process from the data collected on the utility system. Data validation and reconciliation tools (see Chapter 2 in Section 5 of this book) will be used to combine online measurements with other information
Figure 3.2 Dual representation of a heat requirement: stream heating by steam injection.
from the specification sheets of the processes and obtain a coherent picture of the process. Correctly defining the temperatures and the heat loads of the hot and cold streams is essential for a proper process integration study. For this reason, the first step of the analysis is defining the operations required to transform raw materials into the desired products. The heating or cooling requirements are inferred from the operating conditions. In this respect, the MER may be computed in two different ways. The first (thermodynamic requirement) consists of determining the temperature profiles of the process streams that maximize the exergy supplied by the hot streams and minimize the exergy required by the cold streams. The second (technological requirement) is to consider the equipment used to convert utility streams into useful process heat. Those two approaches produce the same overall energy balance but with different temperature profiles. The shape of the composite curve may differ from one representation to another. An example of this dual representation is shown in Fig. 3.2 for the case of water preheating by steam injection. The thermodynamic requirement corresponds to water preheating from its initial to its target state, while the technological requirement corresponds to the production of the injected steam. When using the Carnot factor (1 - T0/T) as the Y axis, the area between the two exergy composite curves corresponds to the "thermal" exergy losses due to the technological implementation of the operation. Following a systematic analysis [3], most of the process requirements may be defined from the knowledge of the process-utility interface.
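The Carnot factor weighting mentioned above is straightforward to evaluate. The lines below simply convert a heat load at a given temperature into its thermal exergy content; the numerical values are arbitrary and the temperatures must be absolute.

```python
def carnot_factor(t_kelvin, t0_kelvin=298.15):
    """Carnot factor (1 - T0/T) used as the y-axis of the exergy composite curves."""
    return 1.0 - t0_kelvin / t_kelvin

def heat_exergy(q_kw, t_kelvin, t0_kelvin=298.15):
    """Exergy content [kW] of a heat load q_kw delivered at constant temperature T."""
    return q_kw * carnot_factor(t_kelvin, t0_kelvin)

# e.g., 1000 kW of heat at 200 degC versus the same load at 120 degC
print(heat_exergy(1000.0, 473.15), heat_exergy(1000.0, 393.15))
```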
3.2 Methodology for Designing Integrated Utility Systems
The process integration techniques aim at identifying the maximum energy recovery that could be obtained by counter-current heat exchange between the hot and cold
Figure 3.3 Hot and cold composite curves of the process.
streams of the process. This technique, based on the assumption of a minimum temperature difference (ΔTmin) between the hot and the cold streams, allows the calculation of the so-called MER target for the system. The identification of the pinch point, the point where the hot and cold composite curves of the process are the closest, is further used to design the heat recovery heat exchanger network structure. Using the concept of the hot and cold composite curves (Fig. 3.3), it is possible to compute the MER graphically. Mathematically, the minimum energy requirement is computed by solving the heat cascade model (3.1). This model is based on the definition of the corrected temperature list. The corrected temperatures are obtained by reducing the inlet and outlet temperatures of the hot streams by ΔTmin/2 and increasing the temperatures of the cold streams by ΔTmin/2. By assuming that the streams have constant cp, fluid phase-changing streams are divided into stream segments with constant cp. In a more detailed study, the value of ΔTmin/2 will be related to the heat film transfer coefficient, allowing one to account for a heat transfer resistance that depends on the fluid type considered. The heat cascade model (3.1) is a one-degree-of-freedom linear programming problem that computes the energy required to balance the needs of the cold streams when recovering the maximum energy from the hot streams by counter-current heat exchange and cascading heat from the higher temperatures. The energy balance is written for each temperature interval. The grand composite curve (Fig. 3.4) is the plot of the heat cascaded as a function of the temperature:
Figure 3.4 Grand composite curves of the process (cold utility: 6948 kW).
$$\min_{R_r} \; R_{n_r+1} \qquad (3.1)$$

subject to the heat balance of the temperature intervals:

$$\sum_{i=1}^{n} Q_{i,r} + R_{r+1} - R_r = 0 \qquad \forall r = 1, \ldots, n_r \qquad (3.2)$$

$$R_r \geq 0 \qquad \forall r = 1, \ldots, n_r + 1 \qquad (3.3)$$
where n is the number of specified process streams; n_r is the number of temperature intervals; R_r is the energy cascaded from the temperature interval r to the lower temperature intervals; Q_{i,r} is the heat load of the reference level of process stream i in the temperature interval r, with Q_{i,r} > 0 for hot streams and ≤ 0 for cold streams. The heat cascade constraints (3.2) are the equation system that is solved by the problem table method. An alternative set of equations (3.4) may be used to compute the heat cascade. This formulation has the advantage of involving only one R_r per equation, with each equation being related to one temperature in the temperature list. From the analysis of the pinch point location, it may be demonstrated that the list of temperatures (and therefore of equations) may be reduced in this case to the list of inlet temperature conditions of all the streams:
$$\sum_{k=r}^{n_r} \left( \sum_{i=1}^{n} Q_{i,k} \right) + R_{n_r+1} - R_r = 0 \qquad \forall r = 1, \ldots, n_r \qquad (3.4)$$
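A minimal transcription of the heat cascade model into code is given below, using scipy.optimize.linprog as the LP solver. The interval construction follows the corrected-temperature shift described above; the stream data and the ΔTmin value are illustrative, and the indexing of the cascade variables is chosen for convenience rather than to match the numbering of Eqs. (3.1)-(3.4) exactly.

```python
import numpy as np
from scipy.optimize import linprog

def heat_cascade(streams, dt_min=10.0):
    """Minimum energy requirement via the heat cascade LP.

    streams: list of (T_supply, T_target, CP) in degC and kW/K; a stream is hot
             if T_supply > T_target. Temperatures are shifted by +/- dt_min/2.
    Returns (hot_utility_kW, cold_utility_kW).
    """
    shifted = []
    for t_in, t_out, cp in streams:
        shift = -dt_min / 2 if t_in > t_out else dt_min / 2
        shifted.append((t_in + shift, t_out + shift, cp))

    # corrected temperature grid, hottest first
    temps = sorted({t for t_in, t_out, _ in shifted for t in (t_in, t_out)}, reverse=True)
    n_int = len(temps) - 1

    # net heat load of each interval (hot streams positive, cold streams negative)
    q = np.zeros(n_int)
    for t_in, t_out, cp in shifted:
        sign = 1.0 if t_in > t_out else -1.0
        lo, hi = min(t_in, t_out), max(t_in, t_out)
        for r in range(n_int):
            overlap = max(0.0, min(hi, temps[r]) - max(lo, temps[r + 1]))
            q[r] += sign * cp * overlap

    # variables: R[0] is the heat supplied above the hottest interval (hot utility),
    # R[r] the heat cascaded into interval r, R[n_int] the heat rejected (cold utility)
    a_eq = np.zeros((n_int, n_int + 1))
    for r in range(n_int):
        a_eq[r, r] = 1.0       # cascade entering interval r
        a_eq[r, r + 1] = -1.0  # cascade leaving interval r
    c = np.zeros(n_int + 1)
    c[0] = 1.0                 # minimize the hot utility
    res = linprog(c, A_eq=a_eq, b_eq=-q, bounds=[(0, None)] * (n_int + 1))
    return res.x[0], res.x[-1]

# small illustrative example (arbitrary stream data)
streams = [(180, 60, 3.0), (150, 30, 2.0), (20, 135, 4.0), (80, 140, 1.5)]
print(heat_cascade(streams, dt_min=10.0))
```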
When considering the overall system (including the utility streams), it is necessary to define the complete list of streams to be considered in the system, including the hot
and cold streams of the utility subsystem, prior to any heat exchanger network design. Compared to the conventional pinch analysis, where hot and cold streams of the process have constant temperatures and flow rates, the utility system integration has a much larger number of degrees of freedom, since it requires the definition of the temperatures and flow rates of the utility streams that will minimize the cost of the energy conversion. This will be dealt with using modeling and optimization techniques. The modeling of energy conversion units will be used to determine the operating temperatures and compositions, allowing the definition of the hot and cold streams of the utility subsystem. The flow rates will be determined by optimization in order to minimize the cost of the energy delivered. The constraints of the heat cascade will be considered in the problem and the solution will be characterized by a list of pinch points, one of these being the process pinch point, representing the maximum energy recovery between the process streams, the others corresponding to the maximum use of the cheapest utility. If, in the simplest cases, the calculation of the utility streams may be done graphically, it is more convenient to use optimization techniques to solve the problem, especially when cycles like steam networks or refrigeration systems are considered. There exist several ways of solving the optimal integration of the energy conversion system. All strategies are based on the definition of a utility system superstructure, which includes the possible conversion technologies that are envisaged. Although it is possible to set up a generic problem that would state and solve the problem in an automatic manner, it is more convenient to proceed by successive iterations, keeping in mind that learning from one step will result in new problem definitions and perhaps new ideas for the integration of alternative energy conversion technologies. This is particularly true because the problem definition is usually not known from the beginning and because the utility system integration may influence the process operating conditions. The philosophy behind computer-aided utility system integration is to have a method that supports the engineer's creativity, helping him or her to identify the most promising options. Three major aspects have to be considered:
• Technology databases including thermoeconomic models of the different conversion technologies available. These models will be used to constitute the energy conversion system superstructure consistent with the technologies available on the market and the process requirement.
• An optimization framework for targeting the optimal integration of the utility system prior to any heat exchanger network design. The optimal utility system integration is by definition a mixed-integer nonlinear programming (MINLP) problem, the integer variables being used to select in the superstructure the equipment to be used in the final configuration, while the continuous variables will be the operating conditions and the flow rates in the utility system.
• Graphical representations applying thermodynamic-based principles to assess, analyze, and understand the solutions obtained by optimization and to help in a
possibly new definition of the superstructure. Graphical representations are used to support the engineers when stating the problems and analyzing the results.
3.3 The Energy Conversion Technologies Database
When considering energy conversion technologies, we switch from the energy dimension to the thermoeconomic dimension, where the cost of energy and the investments are considered simultaneously. It is therefore necessary to represent the market state by introducing market-related relationships between cost, sizes, and efficiencies. There is therefore a need to develop a technology database for the different conversion technologies. Today, Web-based techniques give access to the needed information [4]. The required data are as follows:
• Investment costs, which refer to the installed cost, computed by:
$$C_{I,e} = C_{P,e} + C_{C,e} + C_{E,e} + C_{G,e} + C_{O,e} \qquad (3.5)$$
where
C_{I,e} is the installed cost of the equipment e;
C_{P,e} is the purchased cost of equipment e;
C_{C,e} is the cost of connections and piping;
C_{E,e} is the cost of engineering for equipment e;
C_{G,e} is the cost of civil engineering for equipment e;
C_{O,e} is the other costs like taxes, land, etc.
• the maintenance cost required to operate the technology on a yearly basis;
• the operating costs that refer to the manpower and the consumables related to the use of the technology;
• the fuel consumption and the type of fuel concerned. This usually refers to the thermal efficiency computed by η_th = useful heat (kJ) / LHV (kJ). Ideally, efficiency data should include partial load information;
• the electricity consumed or produced;
• the hot and cold streams that define the energy service delivery (process/utility heat exchange interface);
• the standard prescriptions for the technology specification;
• any information concerning the technology implementation;
• time-to-market information for emergent technologies;
• the list of suppliers.
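For implementation purposes, a database entry of the kind listed above can be held in a small record structure. The class below is only a hypothetical container mirroring those data items; the field names and the example values are not taken from an actual database.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ConversionTechnology:
    """One entry of an energy conversion technology database (illustrative)."""
    name: str
    size_range: tuple                          # (S_min, S_max) in the relevant sizing unit
    efficiency: Callable[[float], float]       # e.g., thermal efficiency as a function of size
    installed_cost: Callable[[float], float]   # cost correlation as a function of size
    maintenance_cost: Callable[[float], float]
    emissions: Dict[str, float] = field(default_factory=dict)  # e.g., {"NOx": 120.0}
    lifetime_h: float = 48_000.0
    suppliers: List[str] = field(default_factory=list)

# hypothetical boiler entry with placeholder correlations
boiler = ConversionTechnology(
    name="steam boiler (hypothetical data)",
    size_range=(500.0, 20_000.0),              # kW thermal
    efficiency=lambda q: 0.90,                 # constant thermal efficiency placeholder
    installed_cost=lambda q: 120_000.0 * (q / 1000.0) ** 0.75,
    maintenance_cost=lambda q: 0.002 * q,
    emissions={"NOx": 120.0, "CO": 80.0},
)
```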
From data collection using a technology market study, correlation equations are obtained. These give, as a function of the size parameters, the required values for the optimal integration models. For the efficiency correlation, it is important to analyze the degrees of freedom in order to avoid developing a correlation that would be inconsistent with the rules of thermodynamics. For this reason it is usually preferred to use simulation models for the technologies in which the model parameters are defined as correlation functions, e.g., the isentropic efficiency as a function of the
turbine size. Two types of approach may be used. The first aims at representing the technology market by functions that correlate the model parameters with technology design parameters like temperature, pressure, or size. The model has the following generic form:
Heat and mass balances: B(X, S, P) = 0
Efficiency equations: F(X, S, P) = 0
Cost correlations: C_I = a(X, S, P)
Correlation limits: S_min ≤ S ≤ S_max
where
X is the list of state variables characterizing the streams of the technology;
S is the set of sizing variables of the technology;
P is the set of characterizing parameters of the technology identified from the market database correlations.
When detailed thermoeconomic models are available, the efficiency equations will become more complex. Examples of such relationships may be found in [5], [6] or [7]. With this approach, it is assumed that the technology may be custom designed. It applies well, therefore, to steam turbines, heat pumps, or heat exchangers. The second approach considers that the technologies are available on the market with fixed sizes and operating design conditions. This is the case, for example, for the gas turbine market, in which a limited number of models are available. In both situations, the database data are used to calibrate the thermoeconomic models by computing the model parameters in the standard conditions in which the reference
Figure 3.5 Comparison of different cost correlations for diesel engines in cogeneration applications.
data have been collected. The parameters are then used to compute the system performances, including part load efficiencies, in the operating conditions of the plant under study. Cost correlations have to be used with caution, considering the conditions, the area, the date, and the ranges for which they have been established. Consider for example the results of Pelet [8] shown in Fig. 3.5. The different correlations obtained from the literature and other market surveys are compared and show big differences. Tables 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7 give the values and correlations for the major energy conversion technologies. These data are representative of the European market in 2000. They have been gathered mainly by the partners of the European project EXSYS ([9], [4]). In order to update the cost given by a correlation, a plant or equipment cost index is used. The most important are the Marshall Swift index tables and the chemical engineering plant cost index (CEPCI), whose values are regularly updated in the Chemical Engineering journal. In the tables, all data are given with a CEPCI index of 382. For other equipment, useful references like [10] may be used. When data are available from other sources, e.g., from quotations, the effect of size is represented by the relation (3.7); the exponent a_e may be obtained from different sources like [10] or [11]:
$$C_{I,e}(S_e, y) = C_{I,e}^{ref} \cdot \left( \frac{S_e}{S_{e,ref}} \right)^{a_e} \cdot \frac{CEPCI(y)}{CEPCI(y_{ref})} \qquad (3.7)$$
where
C_{I,e}(S_e, y) is the installed cost of the equipment e of size S_e in the year y;
S_e is the size of the equipment e. The size of the equipment is the relevant sizing parameter that mainly influences the cost of the technology. For this reason, when estimating the cost of a heat exchanger, the heat exchange area is preferred over its heat load;
S_{e,ref} is the size of the known equipment similar to e;
C_{I,e}^{ref} is the cost of the known equipment;
CEPCI(y) is the chemical engineering plant cost index for the year y;
y_{ref} is the year of the reference cost C_{I,e}^{ref}.
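Relation (3.7) translates directly into a short helper function. In the sketch below only the functional form follows the relation above; the reference cost, sizes, exponent and index values used in the example are placeholders.

```python
def installed_cost(size, size_ref, cost_ref, exponent, cepci_now, cepci_ref):
    """Installed cost of an equipment item scaled from a known reference (cf. Eq. 3.7).

    size, size_ref : relevant sizing parameter (e.g., heat exchange area in m2)
    cost_ref       : installed cost of the reference equipment
    exponent       : size exponent a_e (often in the 0.6-0.8 range)
    cepci_now/ref  : chemical engineering plant cost index for the target and reference years
    """
    return cost_ref * (size / size_ref) ** exponent * (cepci_now / cepci_ref)

# hypothetical example: scale a 100 m2 exchanger quoted at 80 000 EUR (CEPCI 382)
# to 250 m2 at a later index of 460, with a_e = 0.7
print(installed_cost(250, 100, 80_000, 0.7, 460, 382))
```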
Table 3.4 Gas engines, lean burn configurations. We: electrical power in kW.
Generator eff. [-]: η_gen = 0.015 · ln(We) + 0.8687, We ≤ 845 kW; η_gen = 0.0013 · ln(We) + 0.9611, We > 845 kW
Mechanical eff. [-]: η_mec = 0.2419 · We^...
Engine cooling eff. @ 90 °C [-]: η_th,@90 = 0.7875 · We^-0.1682
Heat eff. from combustion gases [-]: η_th,chp = 0.1556 · We^...
NOx emissions [mg/m³N]: 250
CO emissions [mg/m³N]: 650
Installed cost [€]: C_inst = 0.0266 · We² + 578.84 · We + 208174
Catalyst cost [€]: 85 · We
Maintenance cost [€/kWh]: C_maint = 0.0407 · We^-0.2058 + 0.0034
Lifetime [h]: 48 000
Table 3.5 Diesel engines. We: electrical power in kW.
Generator eff. [-]: η_gen = 0.015 · ln(We) + 0.8687, We ≤ 845 kW; η_gen = 0.0013 · ln(We) + 0.9611, We > 845 kW
Mechanical eff. [-]: η_mec = 0.0131 · ln(We) + 0.3452
Engine cooling @ 90 °C [-]: η_th,@90 = 0.2875 · We^...
Heat eff. from combustion gases [-]: η_th,chp = 0.5433 · We^...
NOx [mg/m³N]: 100
CO [mg/m³N]: 400
Installed cost [€]: C_inst = -0.0266 · (3.4 · We)² + 578.84 · 3.4 · We + 208174, We ≤ 1000 kW; C_inst = 1147.62 · We^..., We > 1000 kW
Catalyst [€]: 136 · We
Maintenance [€/kWh]: C_maint = 0.0407 · We^...
Lifetime [h]: 48 000

Table 3.6 Aeroderivative and heavy duty gas turbines.
Aeroderivative gas turbines (We: electrical power in kW):
Generator eff. [-]: η_gen = 0.015 · ln(We) + 0.8687, We ≤ 845 kW; η_gen = 0.0013 · ln(We) + 0.9611, We > 845 kW
Mechanical eff. [-]: η_mec = 0.0439 · ln(We) - 0.0684
Heat eff. (chp) [-]: η_th,chp = 0.838 · We^...
NOx [mg/m³N]: 80
CO [mg/m³N]: 50
Installed cost [€]: C_turbine = 1564 · We^..., We < 50 000 kW; C_turbine = 2977 · We^..., We > 50 000 kW
Lifetime [h]: 48 000

Heavy duty gas turbines (We: electrical power in kW):
Generator eff. [-]: η_gen = 0.015 · ln(We) + 0.8687, We ≤ 845 kW; η_gen = 0.0013 · ln(We) + 0.9611, We > 845 kW
Mechanical eff. [-]: η_mec = 0.0187 · ln(We) + 0.1317
Heat eff. (chp) [-]: η_th,chp = 0.7058 · We^...
NOx [mg/m³N]: 50
CO [mg/m³N]: 50
Installed cost [€]: C_turbine = 4786 · We^..., We < 50 000 kW; C_turbine = 2977 · We^..., We > 50 000 kW
Lifetime [h]: 55 000

Auxiliaries (We: electrical power in kW, Qth: heat recovery boiler heat load in kW):
Reference cost [€]: C_ref = (0.0503 · ln(We/1000) + 0.3208) · C_turbine
Recovery boiler [€]: C_boiler = (125.5 - 0.4 · ...) · (0.2436 · Qth/1000)^... · C_ref
Connection charges [€]: C_con = (0.0494 - 0.0047 · ln(We/1000)) · C_ref
Instrumentation [€]: C_instr = (0.09318 - 0.00011 · We/1000) · C_ref
Civil engineering [€]: C_G,c = (0.1232 - 0.0005 · We/1000) · C_ref
Engineering [€]: C_E = (0.1211 - 0.000735 · We/1000) · C_ref
Other costs [€]: C_div = (0.1403 - 0.0142 · ln(We/1000)) · C_ref
Maintenance [€cts/kWh]: C_maint = 8.15 · We^-0.3...
Table 3.7 Thermoeconomic characteristics of industrial heat pump systems. The sizing parameter is the heat delivered Qh in kWth; the cost, expressed in €/kWth, is computed by Investment = a · (Qh)^b. Source: contribution of TNO in [9].
Electric compression heat pump: temperatures 45-110 °C, size 10-3000 kWth, a = 814, b = -0.327, efficiency 45 % Carnot
Mechanical vapor recompression: temperatures 30-200 °C, size 250-50 000 kWth, a = 663.5, b = -0.3925, efficiency 45 % Carnot
Absorption heat pump NH3/H2O: temperatures 50-150 °C, size 5-60 000 kWth, a = 810.2, b = -0.3154, COP 1.4
Absorption heat pump LiBr/H2O: temperatures 50-150 °C, size 5-60 000 kWth, a = 810.2, b = -0.3154, COP 1.6
Absorption heat transformer LiBr/H2O: temperatures 50-150 °C, size 250-4000 kWth, a = 1164.8, b = -0.288, COP 0.45
Thermal vapor recompression: temperatures 20-180 °C, size 15-50 000 kWth, a = 268.56, b = -0.4832
Table 3.8 Phosphoric acid fuel cells (PAFC).
η_el [-]: 0.35-0.4
η_th [-]: 0.25
NOx [ppm]: 0
CO [ppm]: 0
Installed cost [€/kW]: 4000
Maintenance [€cts/kWh]: 1
Lifetime [h]: 40 000
Sources: [61], [62], [63].
Table 3.9 Solid oxide fuel cells (SOFC).
η_el [-]: 0.5
η_th [-]: 0.35
NOx [ppm]: < 0.2
CO [ppm]: 0
Installed cost [€/kW]: 450 (long term) - 1500
Maintenance [€cts/kWh]: 1
Lifetime [h]: > 20 000
Sources: [61], [64], [65], [66].
Table 3.10 Proton exchange membrane fuel cells (PEMFC).
η_el [-]: 0.3-0.4
η_th [-]: 0.5-0.45
NOx [ppm]: -
CO [ppm]: -
Installed cost [€/kW]: 500 (long term) - 1000
Maintenance [€cts/kWh]: 1
Lifetime [h]: 87 600
Sources: [61], [67].
The total cost of the energy conversion system is given by (3.8):
$$C_{tot} = \sum_{p=1}^{n_p} \left[ \sum_{f=1}^{n_f} \dot{m}_f(p)\, c_f(p) + \dot{W}_{el}(p)\, c_{el}(p) - \dot{W}_{el+}(p)\, c_{el+}(p) \right] d(p) + \sum_{e=1}^{n_e} \left[ C_{m,e} + \frac{i\,(1+i)^{n_{y,e}}}{(1+i)^{n_{y,e}} - 1}\; C_{I,e} \right] \qquad (3.8)$$
where
ṁ_f(p) is the flow rate of fuel f during the period p [kg/s];
c_f(p) is the cost of fuel f during the period p [€/kg];
Ẇ_{el}(p), Ẇ_{el+}(p) are the electrical power imported (exported) during period p [kW];
c_{el}(p), c_{el+}(p) are the costs of electricity for import (export) during the period p [€/kJ];
d(p) is the duration of period p [s/year];
C_{m,e} is the maintenance cost of equipment e [€/year];
n_e is the number of equipment pieces;
C_{I,e} is the installed cost of equipment e [€];
i is the interest rate for annualization;
n_{y,e} is the expected lifetime of the installed equipment e [year].
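The structure of the total cost (3.8) can be mirrored in a small costing routine. The sketch below assumes the usual annuity factor i(1+i)^n/((1+i)^n - 1) for the investment and treats the period and equipment data as generic dictionaries; it is a template consistent with the cost elements listed above rather than the exact model of this chapter.

```python
def total_annual_cost(periods, equipment, interest_rate):
    """Annual cost of a utility system: operating cost over the periods plus
    maintenance and annualized investment of the installed equipment.

    periods:   list of dicts with keys 'duration_s', 'fuel_cost_per_s',
               'el_import_kW', 'el_import_cost', 'el_export_kW', 'el_export_price'
               (electricity costs per kJ, so kW * cost gives a rate per second)
    equipment: list of dicts with keys 'installed_cost', 'maintenance', 'lifetime_yr'
    """
    operating = 0.0
    for p in periods:
        operating += (p["fuel_cost_per_s"]
                      + p["el_import_kW"] * p["el_import_cost"]
                      - p["el_export_kW"] * p["el_export_price"]) * p["duration_s"]

    capital = 0.0
    for e in equipment:
        i, n = interest_rate, e["lifetime_yr"]
        annuity = i * (1 + i) ** n / ((1 + i) ** n - 1)  # annualization factor
        capital += e["maintenance"] + annuity * e["installed_cost"]

    return operating + capital
```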
3.4 Graphical Representations

3.4.1 Hot and Cold Composite Curves
From the beginning, composite curves have been used to explain integration and identify energy savings opportunities. In most representations, temperature is a topological indicator in the sense that it allows one to pinpoint the process operations concerned with the pinch points or the pseudo-pinch points. The hot and cold composite curves mainly concern the process streams. They are used to quantify the possible energy recovery by exchanging heat between the hot and cold process streams. Four zones are of importance in the hot and cold composite
curves (Fig. 3.3). On the right, we visualize the hot utility requirement. The heat recovery zone represents the possible heat recovery by an exchange between the hot and cold streams of the process. The remaining heat of the hot streams has to be evacuated using a cold utility. The left part of the graph therefore defines the cold utility requirement. The latter is divided between cooling requirements above the ambient temperature and refrigeration requirements below the ambient temperature.
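The construction of a composite curve is mechanical: the temperature axis is cut at every supply and target temperature, the heat capacity flow rates active in each interval are summed, and the enthalpy is accumulated. The helper below returns the points of one composite curve; the stream data are illustrative and no ΔTmin correction is applied at this stage.

```python
def composite_curve(streams):
    """Composite curve of a set of streams given as (T_supply, T_target, CP).

    Returns a list of (cumulative_enthalpy_kW, temperature_degC) points ordered
    from the lowest to the highest temperature.
    """
    temps = sorted({t for t_s, t_t, _ in streams for t in (t_s, t_t)})
    points = [(0.0, temps[0])]
    h = 0.0
    for t_lo, t_hi in zip(temps[:-1], temps[1:]):
        # total heat capacity flow rate active in this temperature interval
        cp_sum = sum(cp for t_s, t_t, cp in streams
                     if min(t_s, t_t) <= t_lo and max(t_s, t_t) >= t_hi)
        h += cp_sum * (t_hi - t_lo)
        points.append((h, t_hi))
    return points

# hot composite of two illustrative hot streams
hot = [(180.0, 60.0, 3.0), (150.0, 30.0, 2.0)]
print(composite_curve(hot))
```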
3.4.2 Grand Composite Curve
For analyzing the energy conversion system integration, the grand composite curve (Fig. 3.4) will be used. It describes, as a function of the temperature, the way energy has to be supplied to or removed from the system. The process grand composite curve is divided into three zones. Above the process pinch point, the system is a heat sink to be supplied by a hot utility. Below the process pinch point and above the ambient temperature, the process is a heat source. The heat has to be evacuated from the process by a cold utility or used in another energy-consuming subsystem like another process or a district heating system. Below the ambient temperature, the process requires refrigeration. The feasibility rule of the utility integration is that the grand composite curve of the utility system should envelop the process grand composite curve. Resulting from the linear nature of the composite curve calculation, the optimal integration of the utilities will result in the definition of utility pinch points (the intersections between the utility composite and the process composite). Each of them will correspond to the optimal use (maximum feasible flow rate) of the cheapest utility stream. Above the pinch point, the grand composite curve represents, as a function of the temperature, the heat that has to be supplied to the process by the hot utility. When ignoring the pockets in this curve, the process appears as a cold stream to be heated up by the hot utility. Knowing the temperature-enthalpy profile of the
Figure 3.6 Computing the flow rate of the hot utility
3.4 Graphical Representations
utility, it is possible to determine the flow rate of the utility stream by activating the appropriate pinch point, as illustrated in Fig. 3.6, for one hot utility whose inlet temperature is higher than the maximum temperature of the process streams. In this situation, the flow rate of the utility stream is computed by: (3.9)
where
ḟ_w is the flow rate of utility w [kg s⁻¹];
T_in,w, T_out,w are, respectively, the inlet and the outlet temperature of the utility stream w [°C];
cp_w is the specific heat of the utility w [kJ kg⁻¹ °C⁻¹];
ΔT_min/2,w is the contribution to the minimum approach temperature of the utility stream w;
R_k, for k = 1, ..., n_k + 1, are the values of the heat cascade, and R_{n_k+1} is the MER of the process.

Figure 3.6 Computing the flow rate of the hot utility
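As an illustration of this targeting step, the sketch below (illustrative only; variable names and data are not from the chapter) computes the flow rate of a hot utility of given inlet temperature, specific heat, and ΔT_min contribution that closes the process heat cascade, taking the largest requirement over all candidate pinch temperatures.

```python
def hot_utility_flowrate(T_in, cp, dTmin2, cascade):
    """cascade: list of (T_k, deficit_k) pairs, where deficit_k is the heat [kW]
    that must be supplied by the utility above the corrected temperature T_k
    (equal to R_{n_k+1} - R_k in the notation of Eq. 3.9).
    Returns the utility flow rate [kg/s] satisfying every level; the binding
    level is the activated (utility) pinch point."""
    f = 0.0
    for T_k, deficit in cascade:
        dT = T_in - dTmin2 - T_k
        if dT > 0 and deficit > 0:
            f = max(f, deficit / (cp * dT))
    return f

# Hypothetical cascade: heat still to be supplied above each corrected temperature
cascade = [(450.0, 0.0), (400.0, 1200.0), (350.0, 2500.0)]
print(hot_utility_flowrate(T_in=800.0, cp=1.1, dTmin2=25.0, cascade=cascade))
```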
It should be mentioned that, in the example, the heat delivered by the hot utility is higher than the MER. The temperature T_k defining the intersection between the utility curve and the grand composite curve is called a utility pinch point. In the example, it differs from the process pinch point. Additional heat is therefore available from the hot utility and should be put to use. This situation often occurs in high-temperature processes, like steam methane reforming, where the high-temperature pinch point is activated by the integration of the combustion flue gases at the high-temperature reforming reactor and where the heat excess available in the flue gases at lower temperatures is used to produce high-pressure steam; this steam is expanded in a condensing turbine to transform the excess heat into useful mechanical power by combined heat and power production.
The same graphical representation applies for the cold utility and refrigeration requirements. The graphical definition of the utility flow rate has been widely used to define the flow rates in the steam system. The steam condensation defines a horizontal segment (constant temperature) whose length defines the steam flow rate. This approach cannot, however, be applied when utility streams interact, as is the case when steam is produced in a boiler and then expanded to produce mechanical power before being condensed to heat up process streams. The utility system is then made of more than one stream, and the flow rate of one stream (e.g., the steam flow rate) will define the flow rate of another (e.g., the fuel consumption in the boiler). Furthermore, since the heat of the flue gases is available for the process, nothing restricts its use for temperatures above the steam condensation temperature. As will be demonstrated later on, the problem has to be solved by optimization in this case. Nevertheless, the grand composite curve representation is of prime importance to define the possible utilities that may be envisaged for the process under study.
Another important aspect of the utility system integration is the combined production or consumption of mechanical power. Townsend and Linnhoff [12] have made
a complete analysis on the use of the grand composite curves to analyze the optimal placement of the energy conversion systems and the combined heat and power production. Following the prescribed rules, the engineer is able to define a list of possible energy conversion technologies that may be envisaged for the process under study. The creativity of the trained engineers will unfortunately create a problem at this stage since they will usually be able to identify more than one technology per requirement.
3.4.3 Exergy Composite Curves
The exergy analysis is a useful thermodynamics-based concept that helps the understanding and analysis of efficiency in energy conversion systems. Exergy measures the thermodynamic value of energy. It defines the maximum work that could ideally be obtained from each thermal energy unit being transferred or stored, using reversible cycles with the atmosphere being either the hot or the cold energy source. The exergy approach (e.g., Kotas [13]) is used to represent in a coherent way both the quantity and the quality of the different forms of energy considered. The concept of exergy presents the major advantage of an efficiency definition compatible with all kinds of conversion of energy resources into useful energy services (heating and electricity; heating, cooling and electricity; refrigeration; heat pumping; etc.) and for all domains of energy use. While energy efficiencies are higher than 100% for heat pumping systems (because ambient energy is not accounted for), exergy efficiencies are lower than 100%. This gives an indication of how well the potential of an energy resource is exploited in the different technical concepts in competition.
In the context of process integration analysis, the exergy concept is combined with the pinch analysis for reducing the energy requirement of the process [14, 15], for optimizing the energy conversion system integration, and for optimal combined heat and power production. The exergy composite curve concept has been introduced by Dhole [16] for this purpose. The exergy Ė delivered by a stream exchanging a heat load Q̇ with a constant cp from T_in to T_out is computed by:

$$\dot{E} = \dot{Q}\left(1 - \frac{T_0}{T_{lm}}\right)$$

where $T_{lm} = \dfrac{T_{in} - T_{out}}{\ln\left(T_{in}/T_{out}\right)}$ is the logarithmic mean of the temperatures and $T_0$ is the ambient temperature (all temperatures are expressed in K). When representing the heat exchange in the temperature-enthalpy diagram, the exergy delivered may be represented by exchanging the temperature axis with the Carnot efficiency $\left(1 - \frac{T_0}{T}\right)$.
The exergy then corresponds to the area between the exchange curve and the enthalpy axis (Fig. 3.7). When applied to the concept of the composite curves, the exergy composite curves (Fig. 3.8) represent the exergy lost in the heat exchange between the hot streams and the cold streams of the system.

Figure 3.7 Exergy received by a cold stream heated from 350 to 500 K

Figure 3.8 Exergy composite curves defining the process requirements

The exergy delivered by the hot streams (shaded area between the hot composite curve and the X axis) is deduced from the exergy required by the cold streams (shaded area between the cold composite curve and the X axis). As for the hot and cold composite curves, there are four zones to be considered in the representation. The hot utility requirement is defined by an area (below the cold composite curve) representing the corresponding exergy requirement. The area between the two curves represents the amount of exergy that will be lost in the heat recovery heat exchange network. This exergy loss may be recovered partly by properly
integrating combined heat and power devices. Therefore, the area between the curves and above the pinch point should be deducted from the hot utility exergy requirement to define the minimum exergy requirement of the process. After the heat recovery exchange, the remaining heat to be evacuated from the system is divided into two parts. Above the X axis (ambient temperature), the area between the hot composite curve and the X axis represents the exergy that could be obtained by integrating low-temperature heat recovery devices like organic Rankine cycles. Below the X axis, the area represents the exergy required by the refrigeration system.
Favrat and Staine [14] have added to this representation the exergy losses related to the compression work (as a function of the pressure drop) and the grey exergy. The grey exergy is the exergy required to construct the heat exchangers. It includes the raw materials exergy content as well as the exergy consumed in the construction process. This concept could be used to determine the threshold of the heat recovery effort: the exergy expenditure for the recovery equipment should not exceed the exergy loss avoided thanks to the heat recovery. This tradeoff is, of course, considered together with the economical tradeoff and should be used to define the appropriate value of the ΔT_min. The grey exergy may become important when the process and the heat recovery exchangers are operated only part-time.
When considering the exergy grand composite curve, the diagram represents the exergy required by the process. In this diagram, the special role of the self-sufficient pockets should be noted: their area represents the possible mechanical power recovery by combined heat and power production (Fig. 3.9). From this analysis, it is possible to identify the possible characteristics of the steam cycle, as demonstrated by Marechal et al. [17]. This is done by integrating rectangles in the grand composite curve, the basis of any inserted rectangle being the vaporization and the condensation temperatures of the Rankine cycle. The use of the exergy concept allows one to
quantify the exergy required and therefore allows one to set a target for the energy conversion system. Because the grand composite curve is computed in the corrected temperature domain, we consider that there will be an exergy loss that is a priori accepted for limiting the heat exchangers investment and that is related to the definition of the ΔT_min. The exergy analysis has been extended to account for the chemical reactions or the physical separations of the process operations [18]. In this representation, the temperature axis is replaced by $\frac{\Delta H - T_0\,\Delta S}{\Delta H}$, which is an extension of the Carnot factor $\left(1 - \frac{T_0}{T}\right)$.
One heuristic rule resulting from the exergy analysis is to try to minimize the area between the hot and cold composite curves of the integrated systems.
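To illustrate the change of axis, the short sketch below (illustrative, not from the chapter) converts a temperature-enthalpy profile into Carnot-factor coordinates and estimates the exergy as the area under the transformed curve by trapezoidal integration.

```python
T0 = 298.15  # ambient temperature [K]

def carnot_factor(T):
    """Carnot efficiency (1 - T0/T) used as the vertical axis of the exergy composite curves."""
    return 1.0 - T0 / T

def exergy_from_profile(profile):
    """profile: list of (Q_kW, T_K) points of a temperature-enthalpy curve.
    Returns the exergy [kW] as the area between the transformed curve and the enthalpy axis."""
    ex = 0.0
    for (q1, t1), (q2, t2) in zip(profile[:-1], profile[1:]):
        ex += 0.5 * (carnot_factor(t1) + carnot_factor(t2)) * (q2 - q1)
    return ex

# Cold stream heated from 350 to 500 K with a 1000 kW heat load (cf. Fig. 3.7)
profile = [(0.0, 350.0), (1000.0, 500.0)]
# trapezoidal estimate (about 276 kW); the exact value Q(1 - T0/Tlm) is about 291 kW
print(exergy_from_profile(profile))
```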
3.4.4 Balanced Composite Curves
The optimization models presented hereafter allow one to overcome such difficulties by computing the optimal flow rates in the utility system in order to minimize the cost of supplying the energy requirement. The composite curves of the process, together with the utility system, can then be represented. They are known as the balanced composite curves. An example of such curves is given in Fig. 3.10. This representation is characterized by a number of pinch points, one being the process pinch point, the others corresponding to the maximum use of the cheapest utilities to satisfy the process requirement. In particular, we have a pinch point at the lowest and highest temperatures of the system, indicating that the energy requirement of the process is satisfied by the utility subsystem. These curves are, however, difficult to analyze and do not really help in improving the system efficiency.

3.4.5 Integrated Composite Curves
The integrated composite curves [19] help in analyzing the results of the optimization and in understanding the integration of subsystems. This representation is based on the decomposition of the system into subsystems: processes, boiler house, refrigeration cycle, steam network, heat pump, utility system, etc.; even very detailed subsystems may be considered, like one or several existing heat exchangers. The integrated composite curves of a subsystem are obtained by subtracting the grand composite of the subsystem under study from the grand composite curve of the overall system. The next step is to mirror the subsystem curve. The two curves intersect at the pinch points of the balanced composite curves. In order to locate the Y zero axis, it is convenient to consider the process pinch point location as a reference. From a mathematical point of view, the integrated composite curves are computed using the formulas below (Eqs. 3.10 and 3.11).
Figure 3.10 Balanced composite curves of the process with boiler, steam, and cooling system (panels: grand composite curves of the system; composite curves of the system)
The set of streams is divided into two subsets. The first (set A) defines the subsystem whose integration should be checked; the second (set B) is formed by all the other streams. Set B will be referred to as the reference set and will be represented by a curve (RB_k, T_k), where T_k is the kth corrected temperature of the heat cascade and RB_k is computed by:

$$RB_k = \sum_{r=k}^{n_r}\sum_{s=1}^{N_B} Q_{s,r} - R_0 \qquad \forall k = 1, \ldots, n_r + 1 \qquad (3.10)$$
where
N_B is the number of streams in subsystem B;
Q_{s,r} is the heat load of stream s in the temperature interval r;
R_0 is the enthalpy reference that defines the position of the temperature axis (see below).
The opposite curve (RA_k, T_k), corresponding to the subset A, is computed to make the balance with the first one. It defines the integration of the streams of set A with the others (reference set B):

$$RA_k = -\left(\sum_{r=k}^{n_r}\sum_{s=1}^{N_A} Q_{s,r} + R_{n_r+1}\right) - R_0 \qquad \forall k = 1, \ldots, n_r + 1 \qquad (3.11)$$
where
N_A is the number of streams in subset A;
R_{n_r+1} is the additional energy that cannot be provided by the proposed utility set. R_{n_r+1} has been introduced to obtain a general definition; when the utilities are well integrated, R_{n_r+1} = 0.

The value R_0 that appears in the definition of the two curves defines the position of the temperature axis on the energy axis. The value of R_0 is computed by considering that the curve of set B (the reference set) will intercept the temperature axis at the process pinch point temperature (T_{k_p}), the latter being identified by computing the heat cascade where only the process streams are considered. If k_p refers to the process pinch point, R_0 is obtained by fixing RB_{k_p} = 0 in Eq. 3.10, giving:

$$R_0 = \sum_{r=k_p}^{n_r}\sum_{s=1}^{N_B} Q_{s,r} \qquad (3.12)$$
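A minimal sketch of this construction (illustrative only; it assumes the per-interval net heat loads, with hot loads counted positive, and the process pinch index are already known) could look as follows.

```python
def integrated_composite_curves(Q_A, Q_B, k_pinch, R_top=0.0):
    """Q_A, Q_B: per-interval net heat loads [kW] of subsets A and B,
    ordered from the coldest (index 0) to the hottest interval.
    k_pinch: index of the process pinch point in the heat cascade.
    R_top: additional energy R_{nr+1} not covered by the utilities (0 if well integrated).
    Returns the two curves (RB_k, RA_k) for k = 0..n_r (cf. Eqs. 3.10-3.12)."""
    n = len(Q_B)
    sum_above = lambda Q, k: sum(Q[k:])           # heat cascaded from interval k upwards
    R0 = sum_above(Q_B, k_pinch)                  # Eq. 3.12: RB = 0 at the process pinch
    RB = [sum_above(Q_B, k) - R0 for k in range(n + 1)]             # Eq. 3.10
    RA = [-(sum_above(Q_A, k) + R_top) - R0 for k in range(n + 1)]  # Eq. 3.11 (mirrored curve)
    return RB, RA

# Hypothetical 4-interval example: A = utility subsystem, B = process (reference set)
Q_B = [300.0, -200.0, -400.0, -100.0]
Q_A = [-350.0, 250.0, 380.0, 120.0]
RB, RA = integrated_composite_curves(Q_A, Q_B, k_pinch=1)
print(RB)   # reference (process) curve, zero at the process pinch
print(RA)   # mirrored subsystem curve; it meets RB at the balanced pinch points
```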
Using this definition, the temperature axis divides the energy range into two parts: the positive values correspond to the energy concerned with the set B integration, while the negative values refer to the energy involved in the set A integration. The application of the integrated curves representation is described in more detail in Marechal and Kalitventzeff [19]. Some applications of the integrated composite curves are given in the example below. Such a representation is really useful when analyzing the integration of cycles. In this case, the difference between the hot and cold streams corresponds to the mechanical power that closes the energy balance. The representation will therefore be used to verify the integration of steam networks or refrigeration cycles and to confirm the appropriate placement of the cycles in the process integration.
When the subsystem considered is a heat exchanger, the integrated composite curves of the heat exchanger are the graphical representation of the remaining problem analysis. When the two curves are separated by the temperature axis, no heat exchange is needed between the streams of the heat exchanger and the rest of the process; when this is not the case, the size of the heat exchange between the two curves represents the energy penalty that would be associated with the heat exchanger under study. The graphical representation in this case also gives an indication of the temperature profile of the heat exchange required to reduce the penalty. An example is given in Fig. 3.12.

Figure 3.12 Integrated composite curves and remaining problem analysis
3.5 Solving the Energy Conversion Problem Using Mathematical Programming
The analysis of the process requirement using the grand composite curve, the rules of optimal placement of Townsend and Linnhoff [12], and other rules for optimal CHP schemes allows one to propose a list of energy conversion technologies able to supply the energy requirement of the process with a maximum of efficiency. The mathematical formulation of the targeting problem exploits the concept of effect modeling and optimization (EMO) [20, 21]. It will be used to select the equipment in a superstructure and determine their optimal operating flow rates in the integrated system. This approach assumes that the temperature and pressure levels are fixed, resulting from the analysis of the grand composite curve and the application of rules for the appropriate placement of utility streams [12]. The problem is then a mixed integer linear programming (MILP) formulation, where each technology in the utility system is defined for a nominal size and an unknown level of utilization to be determined in order to satisfy the heat cascade constraints, the mechanical power production balance, and additional modeling constraints. Two variables are associated with any utility technology w: the integer variable y_w represents the presence of the technology w in the optimal configuration and f_w its level of utilization. The objective function is the total cost, including the operating costs and the annualized investment cost, both expressed in monetary units (MU) per year. Other objective functions like minimum operating cost or minimum emissions may also be used. The annualizing factor is computed from the
annualizing rate and the lifetime of the investment. In the equation system, the model is completed by an additional set of constraints (3.17) written in a generic form.

$$\min_{f_w,\, y_w,\, \dot{W}_{el,i},\, \dot{W}_{el,s}} \sum_{w=1}^{n_w}\left[c1_w\, y_w + c2_w\, f_w + \tau\left(IF_w\, y_w + IP_w\, f_w\right)\right] + \left(c_{el,i}\,\dot{W}_{el,i} - c_{el,s}\,\dot{W}_{el,s}\right) t \qquad (3.13)$$

subject to:

Heat balance of the temperature intervals

$$\sum_{w=1}^{n_w} f_w\, q_{w,r} + \sum_{i=1}^{n_s} Q_{i,r} + R_{r+1} - R_r = 0 \qquad \forall r = 1, \ldots, n_r \qquad (3.14)$$

Electricity consumption

$$\sum_{w=1}^{n_w} f_w\, \dot{w}_w + \dot{W}_{el,i} - \dot{W}_c \geq 0 \qquad (3.15)$$

Electricity exportation

$$\sum_{w=1}^{n_w} f_w\, \dot{w}_w + \dot{W}_{el,i} - \dot{W}_{el,s} - \dot{W}_c = 0 \qquad (3.16)$$

Other additional constraints

$$\sum_{w=1}^{n_w}\left(a_{i,w}\, f_w + b_{i,w}\, y_w\right) + \sum_{k=1}^{n_x} d_{i,k}\, x_k + e_i \leq 0 \qquad \forall i \qquad (3.17)$$

$$x_{min,k} \leq x_k \leq x_{max,k} \qquad \forall k = 1, \ldots, n_x \qquad (3.18)$$

Existence of operation w during the time period p:

$$f_{min,w}\, y_w \leq f_w \leq f_{max,w}\, y_w \qquad \forall w = 1, \ldots, n_w, \quad y_w \in \{0, 1\} \qquad (3.19)$$

Thermodynamic feasibility of the heat recovery and utility systems

$$\dot{W}_{el,i} \geq 0, \quad \dot{W}_{el,s} \geq 0 \qquad (3.20)$$

$$R_1 = 0, \quad R_{n_r+1} = 0, \quad R_r \geq 0 \qquad \forall r = 1, \ldots, n_r + 1 \qquad (3.21)$$

where
y_w is the integer variable associated with the use of the technology w;
c1_w is the fixed cost of using the technology w [€ year⁻¹];
c2_w is the proportional cost of using the technology w;
IF_w is the fixed cost related to the investment of using technology w; IF_w is expressed in monetary units [€] and refers to the investment cost of the combustion and cogeneration equipment as defined above, as well as to the other equipment considered in the utility system (turbines, heat pumps, refrigeration systems, etc.);
IP_w is the proportional investment cost of the technology w; IP_w [€] allows one to account for the size effect in the investment;
τ is the annualizing factor of the investment, used to express the investment of the energy conversion units in MU per year; τ = i(1 + i)^{n_years} / ((1 + i)^{n_years} − 1) [year⁻¹] is the annualization factor of the investment for an annualization interest i and an expected equipment life of n_years;
n_w is the number of technologies proposed in the superconfiguration of the utility system;
q_{w,r} is the heat load of the technology w in the temperature interval r for a given reference flow rate, q_{w,r} > 0 for a hot stream [kW];
Q_{i,r} is the heat load of the process stream i (out of n_s process streams) in the temperature interval r [kW];
f_w is the multiplication factor of the reference flow rate of the technology w in the optimal situation;
ẇ_w is the mechanical power produced by the reference flow rate of technology w; ẇ_w < 0 for a mechanical power consumer and ẇ_w > 0 for a producer [kW];
c_el,s is the selling price of electricity [€ kJ⁻¹];
Ẇ_el,s is the net production (export) of electricity [kW];
c_el,i is the electricity cost at import [€ kJ⁻¹];
Ẇ_el,i is the net import of electricity [kW];
t is the total annual operation time [s year⁻¹];
Ẇ_c is the overall mechanical power need of the process; Ẇ_c < 0 if the overall balance corresponds to a mechanical power production [kW];
x_k are the (n_x) additional variables used in the additional equations of the technology models;
a_{i,w}, b_{i,w} are, respectively, the coefficients of the multiplication factor and of the integer variable of technology w in the constraint i of the effect models;
d_{i,k}, e_i are, respectively, the coefficients of the additional variables and the independent term of the constraint i of the effect models;
x_{min,k}, x_{max,k} are, respectively, the minimum and maximum bounds of x_k;
f_{min,w}, f_{max,w} are the minimum and maximum values accepted for f_w.
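As an illustration of how such a targeting model can be assembled, the following sketch builds a small instance of Eqs. (3.13), (3.14), (3.19), and (3.21) with the PuLP modeling library; the two technologies, their heat loads, and the cost data are hypothetical, and the investment and electricity terms are omitted for brevity.

```python
import pulp

# Hypothetical data: 2 technologies x 3 temperature intervals (index 0 = coldest)
techs = ["boiler", "gas_turbine"]
q = {"boiler": [0.0, 500.0, 800.0],          # heat delivered per unit f_w in each interval [kW]
     "gas_turbine": [200.0, 400.0, 300.0]}
c1 = {"boiler": 5.0, "gas_turbine": 20.0}    # fixed operating cost when the unit is selected
c2 = {"boiler": 30.0, "gas_turbine": 55.0}   # proportional operating cost per unit f_w
Q_proc = [-300.0, -600.0, -400.0]            # net process heat loads per interval (deficit < 0)
n_r = len(Q_proc)

prob = pulp.LpProblem("utility_targeting", pulp.LpMinimize)
f = {w: pulp.LpVariable(f"f_{w}", lowBound=0) for w in techs}
y = {w: pulp.LpVariable(f"y_{w}", cat="Binary") for w in techs}
R = [pulp.LpVariable(f"R_{r}", lowBound=0) for r in range(n_r + 1)]

# Objective (operating cost only, cf. Eq. 3.13 without investment and electricity terms)
prob += pulp.lpSum(c1[w] * y[w] + c2[w] * f[w] for w in techs)

# Heat cascade balance (Eq. 3.14) and boundary conditions (Eq. 3.21)
for r in range(n_r):
    prob += pulp.lpSum(f[w] * q[w][r] for w in techs) + Q_proc[r] + R[r + 1] - R[r] == 0
prob += R[0] == 0
prob += R[n_r] == 0

# Existence constraints (Eq. 3.19); f_min = 0.1 and f_max = 2 are assumed values
for w in techs:
    prob += f[w] >= 0.1 * y[w]
    prob += f[w] <= 2.0 * y[w]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({w: (int(y[w].value()), round(f[w].value(), 3)) for w in techs})
```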
The method presented may be applied to any kind of energy conversion technology. It is based on the assumption that the operating conditions have been defined for each piece of equipment concerned and that only the flow rates are unknown. This is a limiting assumption, but it allows one to solve most of the problems of energy conversion integration, mainly because nonlinearities may usually be handled by discretizing the search space. The method has been further adapted to compute the optimal integration of steam networks [22], to incorporate restricted matches constraints [23], and to integrate refrigeration cycles [24, 25] and organic Rankine cycles [26] as well as heat pumps [27]. It has been applied to integrate new technologies like the partial oxidation gas turbine [28], or to design new types of power plants by introducing the concept of isothermal gas turbines [29].
3.5.1 Gas Turbine and Combustion System
In order to demonstrate the ability of the formulation to tackle complex problems, the model for computing the integration of gas turbines and combustion will be given in more detail. The purpose is to explain how to formulate the problem as a linear problem even if the models appear to be nonlinear. The model represents the integration of the gas turbine including its partial load operation, the possible postcombustion of the gas turbine flue gas, the use of different fuels in the gas turbine and in the postcombustion, and of course the integration of conventional combustion in a radiative furnace with possible air enrichment or air preheating. The postcombustion and the partial load models are required because there is no possibility of identifying a gas turbine model whose heat load will perfectly match the heat requirement of the process. The principle of the integration is illustrated in Fig. 3.21. The following integration constraints are added to the aforementioned problem.

The hot stream corresponding to the flue gas of a gas turbine g is defined by:

$$\dot{Q}_g = f_g\, \dot{m}_g\, cp_{fg}\,\left(TOT_g - T_{stack,g}\right) \qquad \forall g = 1, \ldots, n_g \qquad (3.22)$$

where
ṁ_g is the flue gas flow rate at the outlet of the gas turbine g in nominal conditions; these values result from the simulation of the gas turbine g;
Q̇_g is the total heat load of the flue gas from the gas turbine g;
cp_fg is the mean cp of the flue gas at the outlet of the gas turbine g;
TOT_g is the temperature of the flue gas at the outlet of the gas turbine g;
T_stack,g is the stack target temperature accepted for the outlet of the gas turbine g after heat recovery;
f_g is the level of utilization of gas turbine g, with f_g^min · y_g ≤ f_g ≤ f_g^max · y_g;
y_g is the integer variable representing the use or not (1, 0) of the gas turbine g;
f_g^min (f_g^max) is the minimum (maximum) level of utilization of the gas turbine g;
n_g is the number of gas turbines proposed in the utility system superconfiguration.
The hot stream corresponding to the postcombustion (heat available for convective heat exchange) is given by:

$$\dot{Q}_g^{pc} = f_g^{pc}\, \dot{m}_g\, cp_{fg}\,\left(T_{rad} - TOT_g\right) \qquad \forall g = 1, \ldots, n_g \qquad (3.23)$$
is an arbitrary temperature used in the combustion model and representing the limit of the radiative exchange; is the fraction of the nominal gas turbine flue gas flow rate used for postcombustion; is the heat load supplied by the flow rate fraction of the flue gas flow rate of gas turbine g used in the post combustion device.
where Trad
J7 €T
Fuel consumption in the gas turbine g is as follows:

$$\sum_{c=1}^{n_c^{gt}} \dot{f}_{c,g}\, LHV_c - \left(y_g\, FCI_g + f_g\, FCP_g\right) = 0 \qquad \forall g = 1, \ldots, n_g \qquad (3.24)$$

where
n_c^gt is the number of fuels available for combustion in the gas turbines;
LHV_c is the lower heating value of the fuel c;
ḟ_{c,g} is the flow rate of the fuel c in the gas turbine g;
y_g · FCI_g + f_g · FCP_g is the linearized fuel consumption of gas turbine g as a function of its level of utilization.
Electricity production with the gas turbines Ẇ_gt is given by:

$$\dot{W}_{gt} - \sum_{g=1}^{n_g}\left(y_g\, WI_g + f_g\, WP_g\right) = 0 \qquad (3.25)$$

where
y_g · WI_g + f_g · WP_g is the linearized mechanical power production of the gas turbine g as a function of its level of utilization.
The parameters for the linearization are computed by simulation, considering the partial load operation of the gas turbine. For each gas turbine g, the unknowns are f_g, y_g, and f_g^pc, while the other parameters are obtained from the thermoeconomic models. The quality of the linearization will mainly depend on the range in which the partial load operation is expected to happen in the optimal situation. The operating costs OC_gt and the investment costs IC_gt of the selected gas turbines are computed by:
$$\sum_{g=1}^{n_g}\left(y_g\, OCI_g + f_g\, OCP_g\right) - OC_{gt} = 0 \qquad (3.26)$$

$$\sum_{g=1}^{n_g} y_g\, ICI_g - IC_{gt} = 0 \qquad (3.27)$$

where
y_g · OCI_g + f_g · OCP_g is the linearized maintenance cost of gas turbine g as a function of its level of utilization;
y_g · ICI_g is the investment cost of gas turbine g from the database catalog.
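The linearization coefficients can be obtained, for instance, by fitting a straight line through two simulated operating points of the turbine (full load and a representative part load). The sketch below is purely illustrative, with made-up performance data.

```python
def linearize_two_points(load_lo, val_lo, load_hi, val_hi):
    """Return (intercept, slope) so that value ~= intercept * y + slope * f,
    where f is the level of utilization and y = 1 when the unit is selected."""
    slope = (val_hi - val_lo) / (load_hi - load_lo)
    intercept = val_lo - slope * load_lo
    return intercept, slope

# Hypothetical simulation results of one gas turbine at 50% and 100% load:
# fuel consumption [kW LHV] and net electrical power [kW]
FCI, FCP = linearize_two_points(0.5, 16500.0, 1.0, 30000.0)  # Eq. 3.24 coefficients
WI, WP = linearize_two_points(0.5, 4200.0, 1.0, 10000.0)     # Eq. 3.25 coefficients

f, y = 0.8, 1  # level of utilization of the selected turbine
print("fuel:", y * FCI + f * FCP, "kW; power:", y * WI + f * WP, "kW")
```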
The fraction of the flue gas of the gas turbine used in the post combustion is limited to the level of utilization of the gas turbine g.
$$f_g^{pc} \leq f_g \qquad \forall g = 1, \ldots, n_g \qquad (3.28)$$
The combustion model is made up of different equations. Equation (3.29) includes the different terms representing the oxygen required by the combustion of the fuels and the oxygen supplied by the air, the enriched air, and the postcombustion flue gas:

$$\dot{f}_{air}\, O_{2,air} + \sum_{a=1}^{n_a} f_a\, \dot{m}_a\, O_{2,a} + \sum_{g=1}^{n_g} f_g^{pc}\, \dot{m}_g\, O_{2,fg_g} - \sum_{c=1}^{n_c} \dot{f}_c^{cb}\, O_{2,c} \geq 0 \qquad (3.29)$$

where
O_{2,fg_g} is the oxygen content of the flue gas at the outlet of the gas turbine g;
O_{2,air} is the oxygen content of the ambient air;
ḟ_air is the amount of air used by the combustion in the system;
ḟ_c^cb is the flow rate of fuel c used in the combustion; its specific cost is c_c;
ṁ_fg,c is the fumes flow rate resulting from the combustion of one unit of fuel c;
cp_fg is the mean specific heat of the fumes resulting from combustion; this cp is considered between T_rad and T_stack;
n_c is the number of fuels that can be used in the system, including those for firing the gas turbines (n_c^gt);
O_{2,c} is the oxygen requirement per unit of fuel c; for practical reasons, the oxygen requirement includes the minimum oxygen excess for this fuel;
O_{2,a} is the oxygen content of the enriched air stream leaving the air separation unit a;
ṁ_a is the flow rate of enriched air leaving the air separation unit a under nominal conditions;
f_a is the level of utilization of the air separation unit a, with f_a^min · y_a ≤ f_a ≤ f_a^max · y_a;
y_a is the integer variable representing the use or not (1, 0) of the air separation unit a;
f_a^min (f_a^max) is the minimum (maximum) level of utilization of the air separation unit a;
n_a is the number of air separation units considered in the system.
The fuel consumption balance of any fuel c that might be used either in a gas turbine or in standard combustion follows:

$$\dot{f}_c = \dot{f}_c^{cb} + \sum_{g=1}^{n_g} \dot{f}_{c,g} \qquad \forall c = 1, \ldots, n_c \qquad (3.30)$$

where ḟ_c is the overall consumption of fuel c.
High-temperature balance: radiative exchange model above T_rad (3.31).

Low-temperature balance: convective exchange below T_rad:

$$\sum_{c=1}^{n_c} \dot{f}_c^{cb}\, \dot{m}_{fg,c}\, cp_{fg}\,\left(T_{rad} - T_{stack}\right) + \sum_{a=1}^{n_a} f_a\, \dot{m}_a\, cp_a\,\left(T_{rad} - T_{stack}\right) - \dot{Q}_{cnv} = 0 \qquad (3.32)$$
where
Q̇_rad is the total amount of heat available above T_rad;
Q̇_cnv is the total amount of heat available from T_rad to T_stack;
LHV_c is the lower heating value of the fuel c; this value is computed by simulation of the combustion using the minimum accepted value of the oxygen content in the fumes;
T_0 is the reference temperature used for computing the LHV;
Q̇_prh is the heat load of air preheating; the existence of the air preheating equipment is defined by an integer variable y_prh and the following equation: y_prh · Q̇_prh^min ≤ Q̇_prh ≤ y_prh · Q̇_prh^max. The investment cost of the air preheating device is computed by linearizing the air preheater cost: IC_prh = ICF_prh · y_prh + ICP_prh · Q̇_prh;
cp_a is the mean specific heat of the enriched air leaving unit a at its outlet temperature.
Table 3.11 gives the values needed to compute the integration of some typical fuels used in industry, including some renewable fuels (wood and biogas).

Table 3.11 Values for some typical fuels.

Fuel             LHV (kJ/kg)   T_ad (K)   T_min,stack (K)   Air (kg/kg)   Cost    CO2
CH4              50 000        2646.5     374               17.1          7.67    55
Natural gas      39 680        2270       374               13.9          6.9     55
Light fuel oil   45 316        2425       440               14.4          4.67    70
Heavy fuel oil   44 500        2423       441               14.3          2.53    71
Coal (lignite)   25 450        2111       438               7.29          5       81
Wood             18 900        2185.43    374               7.9           -       0
Biogas           13 358        2077       374               4.63          -       0

Natural gas composition: 87% methane, 13% N2. Light fuel oil: C 86.2% mass, H 12.4% mass, S 1.4% mass. Heavy fuel oil: C 86.1% mass, H 11.8% mass, S 2.1% mass. Lignite: C 56.52%, H 5.72%, O 31.89%, N 0.81%, S 0.81%, ash 4.25%. Wood (wt) composition: C 49.5%, H 6%, O 44.6% (approximately CH1.4O0.7), CO2 neutral. Biogas: 50% CO2 and 50% CH4, CO2 neutral. Cost 2004, European market.
Air Preheating: Outlet Temperature Calculation
When combustion is considered, air preheating plays the role of a chemical heat pump. It is used to pump waste heat available below the pinch point and make it available above the pinch point (by an increase of the adiabatic temperature of combustion). The effect of air preheating is, however, limited to the preheating of the stoichiometric air flow: when the flow rate is higher, the adiabatic temperature of combustion decreases and the benefit of air preheating is lost for the part corresponding to the excess air. When combined heat and power production using the steam network is considered, the process pinch point no longer defines the maximum preheating temperature. In this case, the combustion air may be preheated up to the temperature corresponding to the highest condensation pressure of steam, because this steam will produce mechanical power before being used as a preheating stream. The preheating heat load will become available at a higher temperature to produce an additional amount of steam at the highest pressure or to increase the superheating temperature. The air preheating temperature is therefore unknown and its optimal value has to be computed.
When a heat cascade is considered, computing the optimal preheating temperature is a nontrivial task, mainly because the temperature is used to generate the list of the heat cascade constraints. This makes the problem nonlinear and discontinuous (i.e., according to the temperature, the stream will appear or not in a given heat cascade constraint). Some techniques have been proposed to solve this problem as a nonlinear programming (in our case mixed-integer) problem using smooth approximation techniques (e.g., [30]). This approach is explained in more detail in another chapter. A further approach consists of keeping the linear programming formulation by discretizing the temperature range in which the air preheating will take place into n_i intervals of size ΔT. The air preheating stream is therefore defined by a list of cold streams from T_i to T_{i+1} = T_i + ΔT and by adding the following constraints:

$$\dot{f}_{air} \geq f_{a,i} \qquad \forall i = 1, \ldots, n_i \qquad (3.33a)$$

$$\dot{Q}_{prh} = \sum_{i=1}^{n_i} f_{a,i}\, cp_{air,i}\,\left(T_{i+1} - T_i\right) \qquad (3.33b)$$

where
f_{a,i} is the flow rate of air preheated from T_i to T_{i+1};
cp_{air,i} is the specific heat capacity of the air flow rate from T_i to T_{i+1}.
In the combustion model, the optimal temperature calculation model is also used to compute the outlet temperature of the air and enriched air preheating and of the fuel preheating, as well as to compute the outlet temperature at the stack. This calculation is done in two steps:
1. Solve the model and compute the optimal flow rates in each interval (f_{a,i}).
2. Compute the resulting temperature T_out by solving, from i = 1 to n_i:

$$T_{out,i} = \frac{T_{out,i-1}\left(f_{a,i-1} - f_{a,i}\right) + f_{a,i}\, T_{i+1}}{f_{a,i-1}}$$

with T_{out,0} = T_{a,in}, which is the inlet temperature of the stream a.
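A small numeric sketch of this two-step calculation (illustrative variable names and data, not from the chapter; f_{a,0} is assumed equal to the total air flow):

```python
def outlet_temperature(T_a_in, f_air, temps, f_a):
    """Apply the outlet temperature recursion above: temps = [T_1, ..., T_{ni+1}]
    are the interval boundaries and f_a[i] is the optimal air flow rate preheated
    from temps[i] to temps[i+1], as returned by the MILP (step 1).
    f_{a,0} is taken equal to the total air flow f_air (an assumption of this sketch)."""
    T_out, f_prev = T_a_in, f_air
    for i, f_i in enumerate(f_a):
        if f_prev <= 0.0:          # no air preheated beyond this level
            break
        T_out = (T_out * (f_prev - f_i) + f_i * temps[i + 1]) / f_prev
        f_prev = f_i
    return T_out

# Hypothetical result of step 1: three 50 K intervals with decreasing preheated flows
print(outlet_temperature(T_a_in=298.0, f_air=10.0,
                         temps=[298.0, 348.0, 398.0, 448.0],
                         f_a=[10.0, 6.0, 2.0]))
```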
The precision of the model is related to the size of the discretizing temperature intervals. A compromise between the precision required for the equipment sizing and the number of variables is therefore required. A similar formulation is also used to compute the optimal temperature of the gas turbine flue gas after heat recovery. This systematic choice has been made to keep the robustness advantage of the MILP formulation.

3.5.2 Steam Network
The steam networks play a very important role in most industrial process plants. They are the major interface between the utilities and the process streams, while allowing the combined production of heat and power. Furthermore, by transferring heat from one process stream to another, the steam network may be used to reduce the energy penalty resulting from restricted matches. Steam networks are also important for site-scale process integration, since the steam network will be the means of transferring heat from one process to another. Targeting the integration of the steam network is an important part of the integration of the energy conversion technologies.
In the first attempts to study the integration of energy conversion technologies, the idea was to consider steam as a constant temperature stream that supplies or extracts heat from the process. This provided an easy way of understanding the multiple utilities integration from the grand composite curve analysis. When designing site-scale steam networks, Dhole and Linnhoff [16] introduced the total site integration concept. They defined the total site composite curves to represent the integration of chemical plants that are composed of several processes and may be integrated. The purpose of these curves is to identify the way energy has to be transferred from one plant section to another using the steam network. This method assumes that heat recovery inside each process has already been performed before allowing for exchanges between processes. The construction of the total site composite curve is explained in Fig. 3.15 (top) for the integration of two processes whose grand composite curves are given on the left. After eliminating the pockets (self-sufficient zones), the hot utility and cold utility profiles are composed to build the hot and cold site profiles. The exchange between the processes is then realized using the steam network as a heat transfer medium.

Figure 3.15 Total site integration and steam network

If this approach is convenient from the graphical point of view, it cannot be applied when considering the integration of steam networks in practice. Two major drawbacks should be removed. First, the pockets cannot be ignored, because they may hide heat exchange potentials. This is demonstrated in the center of the figure, where the integrated composite curves of process 1 versus process 2 show the energy saving that could be obtained when exchanging heat between the two processes. In this example, the energy saving mainly results from the integration inside the pockets. In reality, the hot and cold composite curves of the whole system should be considered. The second drawback of the total site approach is that it ignores the combined production of mechanical power in the steam network. This is shown in the bottom of the figure, where steam is produced at high pressure in process 2 to be used at high pressure in process 1, and low-pressure steam produced in process 1 is used at a lower pressure in process 2. In between, the steam is expanded in a back-pressure turbine to produce mechanical power. From the exergy analysis, the potential for combined production of mechanical power is proportional to the size of the pockets
in the exergy grand composite curve. They cannot therefore be ignored from the heat integration and CHP perspective.
In real steam networks, the steam production and condensation cannot be considered at a constant temperature. One should consider the preheating and the superheating for the steam production, and the desuperheating, condensation, and liquid undercooling for the steam condensation. Furthermore, the maximization of the mechanical power production will be obtained by optimizing the heat exchanges within the steam network, e.g., by condensing low-pressure steam to preheat the high-pressure water of the steam network. The MILP formulation presented above may be extended to define a more precise model of the steam network. The formulation has been given in [22]. It is based on the definition of a steam network superstructure (Fig. 3.13). This superstructure was first proposed by Papoulias and Grossmann [31]. It has been adapted to account for the temperature-enthalpy profiles of the steam production (i.e., preheating, vaporization, and superheating) and consumption (desuperheating, condensation, and undercooling). One of the difficulties of this formulation has been to guarantee a coherency between the heat and the mass balances while using linear equations. This has been obtained by a special formulation of the mass balances of the steam network headers. Hot and cold streams of the steam network are considered in the heat cascade model, and the contributions of the steam expansion in turbines are added in the mechanical power balances.

Figure 3.13 Superstructure of a steam network including three production/usage levels and one condensing level (deaerator)

It should be mentioned that, using this model, the optimal flow rate in the steam network will be determined also by considering the possibility of exchanging heat between the streams of the steam network. This leads to an optimized steam network configuration where steam draw-offs are used to preheat the high-pressure water. The model may therefore be used as a tool for rapid prototyping of complex steam cycles in conventional power plants [29]. The integrated composite curves of the steam network (e.g., Fig. 3.20) are used to represent the results of its integration. In the figure, the overall mechanical power production corresponds to the balance between the hot and cold streams of the system. It is made of the contributions of the expansions between the different pressure levels in the superstructure. A post-processing analysis will be required in order to decide the best configuration of the steam turbine(s): a single turbine with multiple draw-offs or multiple back-pressure turbines. The use of the process pinch point as a reference to locate the zero heat (the temperature axis) is used to verify the appropriate placement of the steam headers and the combined production of mechanical power. Combined heat and power production is well located when it takes heat above the pinch point and sends it above, or when it takes heat below and sends it to the cold utility [12]. When the combined heat and power production satisfies the rules for appropriate placement above the pinch point, the part of the integrated composite curve of the steam network that appears on the left side of the temperature axis should correspond to the mechanical power production. This would indicate that the additional energy required by the system is equal to the mechanical power produced. If not, the rules for appropriate placement are not satisfied and the reason for this penalty should be investigated.
3.5.3 Refrigeration Cycles
Refrigeration cycles are used as a cold utility below the ambient temperature. The major principle of the refrigeration cycle is to use compression power to change the temperature level of the streams. A simple refrigeration cycle is presented in Fig. 3.14; it is composed of one compressor, one evaporator (at low temperature), one condenser (at higher temperature), and a valve. From the process integration point of view, the refrigeration cycle is defined by one hot and one cold stream and by the corresponding mechanical power consumption. The optimal flow rate will be determined by the MILP formulation presented above. The temperature levels obtained from the grand composite curve analysis will usually define the type of fluid to be used, but other considerations have to be taken into account, like environmental aspects (CFC refrigerants) or safety (flammability). The use of fluids already in use in the process plant is also an important criterion.

Figure 3.14 Integrated composite curve of a single stage refrigeration cycle

The efficiency of the integration of one refrigeration cycle depends on its compression ratio and on the flow rate. It also depends on the structure of the cycle and on the possibility of combining cycles with different refrigerants or with different pressure levels. The problem is therefore highly combinatorial, since refrigerants, structures, pressure levels, and flow rates have to be optimized. A graphical approach based on the exergy analysis has been proposed in [32]. This approach illustrates the methodology of the integration of complex refrigeration systems. A nonlinear programming model has been proposed in [33]. The method presented by Marechal and Kalitventzeff [24] shows the extension of the MILP formulation to tackle this complex problem. The method first identifies the most important temperature levels in the grand composite curve using an MILP formulation. The systematic integration of the cycles with the possible refrigerants is made by applying heuristic rules. From this first selection, the remaining cycles, for which the refrigerants, the configuration, the temperature levels, and the mechanical power are known, are added in the energy conversion system superstructure, and the best configurations are sorted out by solving the MILP problem. When several cycles compete, integer cut constraints are added to the problem to systematically generate an ordered set of solutions. The integer cut constraint is used to avoid the generation of an already known solution when solving the MILP problem. The restriction of the kth solution is obtained by adding the following constraint:
$$\sum_{w \in \{w \mid y_w^k = 1\}} y_w \;-\; \sum_{w \in \{w \mid y_w^k = 0\}} y_w \;\leq\; \sum_{w=1}^{n_w} y_w^k - 1 \qquad \forall k = 1, \ldots, n_{sol} \qquad (3.34)$$

where
y_w^k is the value of y_w in the solution k;
n_sol is the number of solutions obtained so far.
The use of integer cut constraints is an important tool when solving utility system integration problems. The systematic generation of multiple solutions allows the comparison of the proposed utility system configurations using different criteria (not accounted for in the definition of the objective function) and allows a sensitivity analysis with respect to uncertain problem parameters like the cost of energy or the investment.
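In a solver loop, the cut can be added after each solution, as in the hedged PuLP-style sketch below (continuing the earlier illustrative model; the problem `prob` and the binary variables `y` are assumed to exist, and `record_solution` is a hypothetical bookkeeping function).

```python
import pulp

def add_integer_cut(prob, y):
    """Exclude the current binary selection from further solutions (cf. Eq. 3.34).
    prob: a solved pulp.LpProblem; y: dict of binary LpVariables indexed by technology."""
    ones = [w for w in y if y[w].value() > 0.5]
    zeros = [w for w in y if y[w].value() <= 0.5]
    prob += (pulp.lpSum(y[w] for w in ones) - pulp.lpSum(y[w] for w in zeros)
             <= len(ones) - 1)

# Typical usage: solve, record the solution, cut it out, and re-solve to rank alternatives
# for k in range(n_solutions):
#     prob.solve(pulp.PULP_CBC_CMD(msg=False))
#     record_solution(prob, y)
#     add_integer_cut(prob, y)
```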
3.5 Solving the Energy Conversion Problem Using Mathematical Programming
3.5.4 Heat Pumps
When appropriately placed, heat pumping systems should drive heat from below to above the pinch point. Several types of heat pumping systems may be considered (mechanical vapor recompression, mechanically driven heat pumps, or absorption heat pumps). Table 3.7 gives some useful thermoeconomic values for the evaluation of industrial heat pumping systems. The optimal integration is determined by first identifying the streams or the temperature levels for which heat pumping is envisaged. The simulation of the heat pump system using process modeling tools defines the hot and cold stream characteristics that are added to the system integration model, and the optimal flow rate for each of the levels will be computed by solving the MILP problem. When mechanical vapor recompression (MVR) is used, a hot stream at lower temperature (mvr_low) is replaced by another hot stream (mvr_high) with a higher temperature and a mechanical power consumption Ẇ_mvr. From the grand composite curve analysis, it may happen that only part of the stream will be recompressed. This situation will be represented by considering as variables the fractions used in the two streams (respectively f_mvr,low and f_mvr,high) and by adding the following constraints (3.35):

(3.35)
The inequality represents the situation presented in Fig. 3.17, where the stream is partly recompressed and the remaining part is cooled to a lower temperature. It should be noted that the heat load of the high-temperature stream is higher than that of the colder stream, because it includes the mechanical power of compression. All the streams have the same target temperature, the adiabatic valve being considered after liquid subcooling. The useful heat of the recompressed stream is therefore lower than the total heat load, since part of the heat remains available below the pinch point temperature. When the recompressed stream may be vented (e.g., in the case of an evaporation stream), the constraint becomes an inequality. In this case, there are in fact three options: (1) venting the stream, so that the heat is lost to the atmosphere; (2) condensing the stream and using it, because the heat is needed below the pinch point; and (3) recompressing it and using the recompressed stream as a hot utility. The optimal flow rate in each of the options will be obtained from the optimization and will take into account the constraints of the heat cascade. As for the refrigeration cycles, multiple options corresponding to different technologies with different levels of pressure and temperature may be considered and will be handled by the optimization. The results of the heat pump integration correspond to the activation of multiple pinch points. The integrated composite curve of the heat pumping system (Fig. 3.23) is a useful tool to verify the appropriate integration of the system.

3.5.5 Handling Nonlinear Cost Functions
In the problem formulation, linear costs are needed. When the whole range of sizes is covered by the model, a piecewise linearization technique may be used to represent a nonlinear cost function. The generic investment cost function $C(S) = C_{ref}\left(S/S_{ref}\right)^{q}$ will be approximated by a set of segments (Fig. 3.16), defined by the following set of constraints (3.36) in the linear optimization problem definition:

(3.36)

Figure 3.16 Piecewise linearization of the cost function (exponent 0.75)
where
C(S) is the installed cost of the equipment of size S;
S_max,i is the maximum size in segment i;
S_i is the size of the equipment in segment i;
y_i is the integer variable used to select the segment i.

The linearization by segments is also applicable to performance indicators of technologies, like the power and efficiency of a gas turbine. The linear formulation may in this case be extended to account for piecewise linearizations of nonlinear functions.
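One common way to express such a piecewise approximation in a MILP (a generic sketch, not necessarily the exact constraint set used in the chapter) is to precompute the intercept and slope of each segment and let a binary variable select the active segment:

```python
import pulp

def piecewise_cost(prob, name, breakpoints, cost):
    """Add variables and constraints approximating cost(S) by linear segments.
    breakpoints: increasing sizes [S_0, ..., S_n]; cost: function S -> installed cost.
    Returns (S, C) PuLP expressions for the equipment size and its linearized cost."""
    n = len(breakpoints) - 1
    y = [pulp.LpVariable(f"{name}_y{i}", cat="Binary") for i in range(n)]
    s = [pulp.LpVariable(f"{name}_s{i}", lowBound=0) for i in range(n)]
    prob += pulp.lpSum(y) <= 1                      # at most one segment active
    C_terms = []
    for i in range(n):
        s_lo, s_hi = breakpoints[i], breakpoints[i + 1]
        slope = (cost(s_hi) - cost(s_lo)) / (s_hi - s_lo)
        intercept = cost(s_lo) - slope * s_lo
        prob += s[i] >= s_lo * y[i]                 # segment size bounds
        prob += s[i] <= s_hi * y[i]
        C_terms.append(intercept * y[i] + slope * s[i])
    return pulp.lpSum(s), pulp.lpSum(C_terms)

# Example: exponent-0.75 cost law approximated over three segments
cost_law = lambda S: 1000.0 * S ** 0.75
prob = pulp.LpProblem("sizing", pulp.LpMinimize)
S, C = piecewise_cost(prob, "hx", [10.0, 50.0, 200.0, 800.0], cost_law)
```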
3.5.6 Using the Exergy Losses as an Objective Function
Due to the linear nature of the problem, the use of the energy cost as an objective function may reveal some difficulties [27]. When the cost of fuel and electricity is such that the electrical efficiency of a cogeneration unit is attractive without the use of heat (i.e., when the electrical efficiency of the unit $\eta_{el} = \frac{\dot{W}_{el}}{\dot{m}_{fuel}\, LHV}$ is greater than the ratio of the fuel cost $c_{LHV}$ [€ kJ⁻¹] to the electricity cost $c_{el}$ [€ kJ⁻¹]), there is an economical interest to produce electricity even without cogeneration. In this case, the linear programming procedure leads to a situation where the cogeneration unit is used at its maximum. This situation usually does not occur when the investment costs are properly considered or when the costs of the different forms of energy are coherent with respect to the electrical efficiency. Nevertheless, the relative prices of the different forms of energy will influence the technology selection and their level of usage in the integrated solution. When the target is the maximization of the system efficiency, alternative formulations that take into account the value of energy in the objective function have to be considered. The minimization of the exergy losses (3.37) is an attractive way of formulating the problem:
(3.37)

where
ΔEx_w is the exergy consumed to produce the hot and cold streams and the electricity of equipment w;
Δex_{w,k} is the exergy supplied by the conversion unit w in the temperature interval k;
ΔT_min/2,s is the contribution to the ΔT_min of the stream s; ΔT_min/2,s ≥ 0 for hot streams and ≤ 0 for cold streams;
q_{s,k} is the heat load of the stream s in the temperature interval k, computed for the nominal conditions of the related equipment;
n_s,w is the number of streams of conversion unit w.

Using this formulation, it is possible to define the set of energy conversion technologies that minimize the exergy losses of the system. It is even possible to introduce the aspects related to the investment by adding the grey exergy into the exergy consumption of the conversion technologies.

3.5.7 Handling Restricted Matches
The use of mixed-integer linear programming in process integration was first proposed to solve the problem of the heat exchanger network design [34]. The design
problem is presented as a transportation problem by Cerda et al. [35] or as a transshipment problem [36], with a significant reduction of the number of variables. Such methods can easily be extended to account for restricted matches by adding constraints to the MILP formulation. The drawback of such approaches is that they are designed to find a feasible heat load distribution that satisfies the MER target. In this case, the restricted matches are an advantage because they reduce the search space of the possible connections. When there is no feasible solution, these methods give the energy penalty of the restricted matches. Unfortunately, none of these methods search for ways of reducing the penalty.
The integration of energy conversion technologies allows one to go one step further in the analysis, because the utility can be used as an intermediate heat transfer fluid to reduce the restricted matches penalty. Steam or hot water may be produced in one section and condensed or cooled down in another section of the plant. Another fluid (e.g., Dowtherm) can be used as a "heat belt". The computer-aided design methodology will help in defining the characteristics of the streams to be considered as an intermediate heat transfer fluid to indirectly exchange heat between the streams of the penalizing restricted matches. The formulation proposed by Marechal and Kalitventzeff [23] uses a MILP formulation of the restricted matches to target the energy penalty of the restricted matches. The penalty is divided into two parts according to the position of the process pinch point. From the optimization results, hot and cold restricted matches penalty composite curves are computed (Fig. 3.22). The hot curve represents, as a function of the temperature, the heat that cannot be removed from hot streams involved in restricted matches by the allowed process streams. This heat has to be received by the cold streams of the heat transfer fluid system. The cold composite represents the temperature-enthalpy profile of the heat that has to be sent back to the process by the heat transfer fluid in order to avoid the restricted matches energy penalty. The two composite curves have a pinch point temperature that is identical to the one from the MER calculation, since no heat can be exchanged through the pinch point in the MER situation. Any combination of hot and cold streams of the utility system that is framed by the two curves will allow one to eliminate the penalty, provided that the temperature difference is sufficient. The use of intermediate streams imposes the use of two heat exchangers instead of one, thus doubling the ΔT_min value. One should note that the restricted matches composite curves are designed in order to preserve the possible combined heat and power production (i.e., producing high-pressure steam by the hot streams and sending it back to the process at a lower pressure after expansion in a turbine). Once the utility stream characteristics are identified, restricted matches constraints are added to the optimal energy conversion system integration problem. The constraint formulation is an extension of the heat load distribution formulation [37] that complements the MILP problem definition. As a result, the complete list of hot and cold streams to be considered for the heat exchanger network design will be obtained, and we know that there exists at least one feasible heat exchanger network configuration that satisfies the restricted matches constraints and the minimum cost of the energy target.
3.5.8 Nonlinear Optimization Strategies
The analysis of the integrated composite curve helps in interpreting the results of the optimization and in verifying (or optimizing) the choice of the utility system operating conditions (temperature or pressure levels) that were supposed to be "well chosen" when stating the MILP problem. In order to solve the nonlinear problems with a linear programming formulation, three strategies may be used: (1) the appropriate formulation of the problem constraints, as explained for the gas turbine integration; (2) the piecewise linearization; and (3) the discretization of the continuous search space. The latter consists of defining different operating conditions of a given technology as options among which the optimization will select, using integer variables. The method presented here has tackled the problem of nonlinearities by discretization of the continuous variables search space. Other authors have directly tackled the MINLP problem. These formulations use the alternative heat cascade formulation (3.4) because, in this formulation, each constraint is associated with one key temperature (stream inlet temperature) of the system. In this case, when the inlet temperature changes, the definition of the equation is not changed, but it may happen that some of the streams that were above the key temperature will now be below. This creates a discontinuity that is usually not acceptable for most nonlinear programming solvers. In order to smooth the discontinuities, Duran and Grossmann [30] have defined a temperature-enthalpy diagram of the streams that is continuous for the whole temperature range. For a cold stream, the enthalpy profile (Fig. 3.11) of a stream is defined by (3.39).
Figure 3.11 Temperature-enthalpy profile of a stream in the process integration
Figure 3.17 Mechanical vapor recompression (heat supplement from Ẇ; heat below the pinch)
$$q_s(T) = 0 \qquad T \leq T_{in,s} \qquad (3.39a)$$
$$q_s(T) = \dot{m}cp_s\left(T - T_{in,s}\right) \qquad T_{in,s} \leq T \leq T_{out,s} \qquad (3.39b)$$
$$q_s(T) = \dot{m}cp_s\left(T_{out,s} - T_{in,s}\right) \qquad T \geq T_{out,s} \qquad (3.39c)$$

A smoothing technique is then used to round the corners of the enthalpy profile, the difficulty being to tune the smoothing parameters so that they are compatible with the nonlinear solver criteria without introducing infeasibilities in the heat integration results. This technique was first proposed by Duran and Grossmann. It has been used and extended by other authors [38], [39] for simultaneously optimizing process performances and heat integration problems. When isothermal streams are considered in the problem, the smoothing technique is not valid anymore from the mathematical (numerical) point of view. In this case, it is necessary to use integer variables that represent the contribution of the isothermal stream i in the heat balance above the temperature T_j. The expressions y_ij = 0 if T_j < T_i and y_ij = 1 if T_j ≥ T_i are represented by the following equations:

(3.40a)
(3.40b)

This formulation is generic and may be used to represent all situations, even the situations solved by the smoothing approximation technique. The resulting MINLP problem will have a huge number of integer variables and will be solved using an outer-approximation algorithm or using disjunctive optimization techniques.
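The idea of the smooth approximation can be illustrated with a classical softplus-type replacement of max(0, x), as in the sketch below (a generic illustration of the technique, not the specific functional form used in [30]).

```python
import math

def smooth_max0(x, beta=50.0):
    """Smooth, differentiable approximation of max(0, x); larger beta -> sharper corner."""
    # log-sum-exp form, written to avoid overflow for large beta*x
    return max(x, 0.0) + math.log1p(math.exp(-abs(beta * x))) / beta

def cold_stream_profile(T, T_in, T_out, mcp, beta=50.0):
    """Continuous (smoothed) enthalpy profile of a cold stream heated from T_in to T_out,
    replacing the piecewise-linear corners of Eq. (3.39) by smooth ones."""
    return mcp * (smooth_max0(T - T_in, beta) - smooth_max0(T - T_out, beta))

for T in (300.0, 350.0, 420.0, 500.0, 560.0):
    print(T, round(cold_stream_profile(T, T_in=350.0, T_out=500.0, mcp=10.0), 2))
```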
3.6 Solving Multiperiod Problems
Even when the processes may be considered as being stationary, it is often necessary to consider multiperiod operations where the requirements of the processes are considered to be constant during a given period. The utility system serves the different processes of the plant. It therefore has to answer the varying demands of the different processes. Two different problems have to be addressed in this case: the optimal design of the system and the optimal operating strategy.
3.6.1 Optimal Design
In multiperiod problems, the goal of the optimal design task is to determine the best investment to be made in terms of energy conversion equipment considering the varying demands of the processes. This implies taking into account the partial load efficiencies of the equipment, but also determining in each period the best way to operate the system. The use of MI(N)LP methods for solving multiperiod process synthesis problems has been reviewed by Grossmann et al. [40] and [41]. Multiperiod optimization is a well known problem that has been considered mainly in heat exchanger network design, e.g., [42], [43], [37]. The generic problem is formulated as follows:

min over x_t, y_t, y, s of   Σ_{t=1}^{nt} t_t · C(x_t, s) + I(y, s)        (3.41)

subject to

h_t(x_t, s) = 0                      ∀ t = 1, ..., nt        (3.42a)
g_t(x_t, y_t, s) ≤ 0                 ∀ t = 1, ..., nt        (3.42b)
y_t ≤ y                              ∀ t = 1, ..., nt        (3.42c)
y_t and y integer (0/1)              ∀ t = 1, ..., nt        (3.42d)

where
h_t        is the set of modeling constraints during the period t;
g_t        is the set of inequality constraints during the period t;
x_t        is an array representing the operating conditions of the equipment during time period t;
y_t        is an integer variable representing the use or not of an equipment during time period t;
s          is the array of the sizing parameters of the equipment sets; once an equipment is selected (see the value of y), it is used throughout all operating periods;
y          is the array of the integer variables representing the global selection of the equipment sets (i.e., the decision to invest or not);
C(x_t, s)  is the operating cost during the operating period t;
I(y, s)    is the total annualized cost of the equipment of size s;
t_t        is the operation time of period t.
In the general formulation, the set of constraints may be linear or nonlinear, and it is assumed that the operating scenarios in each period are independent and without heat exchange between periods. If this is not the case, the problem becomes an even more complex batch process synthesis problem. By generating the heat cascade constraints in each period, the model for integrating the energy conversion units presented above may be adapted for solving multiperiod problems [27]. The partial load operation of some of the units has been introduced in the model. Shang et al. [44] have demonstrated that a transshipment model can account for the part-load efficiency of expansion turbines and boilers. When using heat cascade constraints, it is assumed that it will be possible to make the heat exchanges required by the heat cascade representation in each time period. If this appears to be a restriction with respect to the heat exchange network structure, its impact may be limited by considering the utility system, which offers a greater flexibility in terms of the process-utility interface. The multiperiod design formulation implicitly accounts for the optimal operating scenario. Iyer and Grossmann [45] proposed a MILP formulation for multiperiod problems where the design problem includes operational planning of the system, in which the limited availability of the units is accounted for. To overcome the difficulties related to the size of the problem, the authors propose a bilevel decomposition that reduces the size of the MILP problems and considerably reduces the computing time required.
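To make the structure of (3.41)-(3.42) concrete, the sketch below builds a toy two-unit, three-period design model with the PuLP library: the investment binaries play the role of y, the period binaries play the role of y_t, and the linking constraint y_t ≤ y appears explicitly. All capacities, costs, demands, and the 10% annualization factor are invented, and the model is far simpler than the formulations cited above.

```python
# Minimal multiperiod design/operation MILP sketch (hypothetical data).
import pulp

periods = [1, 2, 3]
hours = {1: 3000, 2: 3000, 3: 2000}          # operation time t_t (h)
demand = {1: 5000, 2: 8000, 3: 3000}         # heat demand per period (kW)
units = {"boiler": dict(cap=10000, op_cost=0.030, inv=250000),
         "gas_turbine_hrsg": dict(cap=6000, op_cost=0.022, inv=900000)}

prob = pulp.LpProblem("multiperiod_utility_design", pulp.LpMinimize)
Y = pulp.LpVariable.dicts("invest", units, cat="Binary")           # design: y
y = pulp.LpVariable.dicts("on", [(u, t) for u in units for t in periods],
                          cat="Binary")                            # period use: y_t
q = pulp.LpVariable.dicts("load", [(u, t) for u in units for t in periods],
                          lowBound=0)                              # operation: x_t

# Objective: operating cost over all periods plus (crudely) annualized investment.
prob += (pulp.lpSum(hours[t] * units[u]["op_cost"] * q[(u, t)]
                    for u in units for t in periods)
         + pulp.lpSum(0.1 * units[u]["inv"] * Y[u] for u in units))

for t in periods:
    prob += pulp.lpSum(q[(u, t)] for u in units) >= demand[t]      # meet demand
    for u in units:
        prob += q[(u, t)] <= units[u]["cap"] * y[(u, t)]           # capacity if on
        prob += y[(u, t)] <= Y[u]                                  # use only if built

prob.solve()
print({u: int(Y[u].value()) for u in units})
```

Bilevel decompositions such as the one of [45] essentially solve a relaxed design problem over Y first and then the operational subproblems per period, instead of handing the full model to the MILP solver as done here.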
3.6.2 Optimal Operating Strategy
Once the utility system structure is defined, an optimal operating strategy has to be established. Some authors ([46], [47]) propose a linear programming approach to solve this multiperiod problem in order to calculate the optimal scheduling of the system, considering the start-up and shutdown times as well as the unavailability of the equipment during maintenance periods. The optimization of the operating conditions of a utility system has been solved by Kalitventzeff et al. [48], [49] as a mixed integer nonlinear programming problem. The nonlinear models allow one to account for the effects of pressure, temperature, and flow rates, and to consider the available heat exchanger areas. To solve the MINLP problem, the outer-approximation algorithm was applied [50]. Combined with data reconciliation techniques for process performance follow-up, this method has been applied with success for the optimal operation of complex industrial utility systems. The MINLP problem formulation assumed that each operating scenario may be optimized independently of the history. It has been demonstrated [49] that this MINLP formulation may be used to perform the simultaneous optimization of a heat exchanger
network integrated with a complex utility system. The method has also been applied to solve utility system retrofit problems where available equipment (e.g., an exchanger) is attributed to a specific operation (e.g., a heat exchange). Papalexandri et al. [51] proposed a MINLP strategy that solves the multiperiod optimization problem and accounts for uncertainties in certain parameters.
3.7 Example
Let us consider the system requirements defined in Fig. 3.4 that result from the hot and cold composite curves of Fig. 3.3. For the calculations, we assumed that all possible process improvements were already implemented before analyzing the grand composite curve for the integration of the energy conversion technologies. In terms of energy, the requirements are given in Table 3.12. From the grand composite curve, several utilities may be proposed. The simplest solution is to integrate a boiler house using natural gas (with a LHV of 44495 kJ kg⁻¹) and to cool the process with cooling water. The refrigeration needs will be supplied with a refrigeration cycle using ammonia (R717). The operating conditions of the refrigeration cycle (Table 3.13) have been obtained by simulation, considering the temperature levels in the composite curve and the ΔT_min to be reached in the heat exchangers. The results are presented in Table 3.15, and the integrated composite curves presenting the results of the optimization are presented in Fig. 3.18. The refrigeration cycle consumption is 314 kW. It should be noted that the energy consumption is higher than the MER due to the losses at the boiler house stack (398 K). The solution accounts for the possibility of air preheating to valorize the energy excess available in the process. The heat load of air preheating is 131 kW.

Table 3.12  Minimum energy requirements for the process.

Heating requirement         6854 kW    Above 365 K
Cooling requirement         6948 kW    Between 365 and 298 K
Refrigeration requirement   1709 kW    Below 298 K (lowest T = 267 K)
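As a quick back-of-envelope check of the figures quoted above (assuming, for the purpose of the estimate only, a 100% boiler efficiency, which is why the optimized fuel consumption reported later is higher than the MER, and using the 314 kW compressor power from the text):

```python
# Order-of-magnitude check of the boiler and refrigeration figures (assumed
# 100 % boiler efficiency; stack losses at 398 K are ignored in this estimate).
heating_MER_kW = 6854.0          # minimum heating requirement (Table 3.12)
refrigeration_kW = 1709.0        # refrigeration requirement (Table 3.12)
lhv_kJ_per_kg = 44495.0          # natural gas LHV quoted in the text

fuel_mass_flow = heating_MER_kW / lhv_kJ_per_kg     # kg/s of natural gas
cop_refrigeration = refrigeration_kW / 314.0        # 314 kW of compressor power

print(f"fuel flow ~ {fuel_mass_flow:.3f} kg/s, refrigeration COP ~ {cop_refrigeration:.1f}")
```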
Table 3.13  Refrigeration cycle characteristics.

Refrigerant: R717 (ammonia); reference flow rate: 0.1 kmol/s; mechanical power: 394 kW.

               P (bar)   T_in (K)   T_out (K)   Q (kW)   ΔT_min/2 (K)
Hot stream     12        340        304         2274     2
Cold stream    3         264        264         1880     2
Table 3.14  Steam cycle characteristics.

Level   P (bar)   T (K)   Comment
HP2     92        793     superheated
HP1     39        707     superheated
HPU     32        510     condensation
MPU     7.66      442     condensation
LPU     4.28      419     condensation
LPU2    2.59      402     condensation
LPU3    1.29      380     condensation
DEA     1.15      377     deaeration
Table 3.15  Results of the energy conversion system integration for the different options: boiler; boiler + steam; GT + steam; boiler + heat pump; boiler + steam + heat pump.
Figure 3.18  Integrated composite curves: process composite curve and utility composite curve
In order to valorize the exergy potential (Fig. 3.8), a steam network has been integrated. The steam network characteristics are given in Table 3.14. The grand composite curve obtained by the integration of the steam network is given in Fig. 3.10. This figure is not readable and even the integrated composite curves of the system
Figure 3.19  Integrated composite curve of the utility system: boiler, steam, refrigeration, and cooling water
(Fig. 3.19) become difficult to read. Based on the same results, the integrated composite curves of the steam network (Fig. 3.20) offer a better visualization of the steam network integration. It should be mentioned that the choice of the process pinch point location as a reference for locating the temperature axis allows one to verify the appropriate choice of the steam pressure levels.
Figure 3.20  Integrated composite curve of the steam network: boiler, steam network, refrigeration, and cooling system
The energy balance of the hot and cold streams of the steam network is the net mechanical power production. When the steam levels are appropriately placed, the mechanical power production corresponds to a supplement of energy to be supplied to the process. The fact that the mechanical power production appears on the left of the temperature axis proves that the steam network characteristics are appropriate for the optimal production of mechanical power. The area between the two curves gives an indication of the quality of the exergy valorization. Applying the rules of the appropriate placement of heat pumping devices, three heat pumping cycles have been proposed and simulated (Table 3.17). The high values of the coefficient of performance (COP) are explained by the very small temperature rise to be obtained from the heat pump when considering small ΔT_min/2 values for the heat exchangers. Using the optimization tool, the optimal flow rates in the three cycles have been computed together with the new value of the fuel in the boiler house. In the example considered, this leads to a situation where the whole heat requirement may be provided by the heat pumps. The integrated curves of the heat pump system integration are given in Fig. 3.23. When the steam network is considered, the results are slightly different since in this case an additional amount of energy is supplied to the system to produce mechanical power by expansion in the steam network. The solution of heat pumping is then compared with a combined heat and power production using a gas turbine. The summary of the energy conversion integration target is given in Table 3.16. It is shown that a MER of 6854 kW for the heating requirement and of 1709 kW for the refrigeration requirement is finally supplied with an equivalent of 893 kW of fuel when considering the possibility of heat pumping and when valorizing the exergy content of the process streams. Compared to the boiler house solution, the new situation corresponds to a fuel consumption reduction by a factor of eight.
Table 3.16  Overall energy consumption of the different options, based on 55% fuel equivalent for electricity.

Option                        Fuel (kW LHV)   Net electricity (kWe)   Total consumption (kW LHV)
Boiler                        7071             371                    7746
Boiler + steam                10 086          -2481                   5575
GT + steam                    16 961          -7195                   3879
Boiler + heat pump            -                832                    1513
Boiler + steam + heat pump    666              125                    893
Table 3.17  Characteristics of the heat pump system, based on R123 as working fluid.

Cycle 3    5    354    7.5    371    15    130
Cycle 2    6    361    10     384    12    323
Cycle 0    6    361    7.5    371    28    34
Figure 3.21 Integration of a gas turbine with postcombustion process.
These data have been computed by considering a fuel equivalence of 55% for electricity production. The method applied here allows one to quickly evaluate energy conversion alternatives and to quickly assess the impact of process modifications on the processes. Using the targeting method based on optimization, the major advantage of the approach lies in the accounting of the energy savings in terms of cost of energy or in terms of exergy losses rather than in terms of energy. The method allows one to make a first selection of energy conversion options to be further analyzed in more detail using thermoeconomic optimization tools. In this analysis, the cost of the energy conversion system should then be considered together with the cost of the heat exchanger network in order to assess the profitability of the solutions.
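The "total consumption" column of Table 3.16 can be recovered from the fuel and net electricity columns with this fuel-equivalence rule. The snippet below is only a consistency check, and the "-" entry for the boiler + heat pump option is interpreted here as zero fuel, which is an assumption.

```python
# Reproducing the total-consumption column of Table 3.16 with the 55 % fuel
# equivalence for electricity stated in the text (imports add fuel equivalent,
# exports credit it).
options = {
    "Boiler":                     (7071.0,   371.0),
    "Boiler + steam":             (10086.0, -2481.0),
    "GT + steam":                 (16961.0, -7195.0),
    "Boiler + heat pump":         (0.0,       832.0),   # "-" read as zero fuel
    "Boiler + steam + heat pump": (666.0,     125.0),
}
for name, (fuel_kw, net_el_kw) in options.items():
    total = fuel_kw + net_el_kw / 0.55
    print(f"{name:28s} {total:7.0f} kW LHV equivalent")
```

Running this reproduces 7746, 5575, 3879, 1513, and 893 kW, including the factor-of-eight reduction between the boiler-only and the heat pump solutions mentioned above.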
Figure 3.22  Restricted matches penalty composite curves
Figure 3.23  Integrated composite curve of the heat pump system: boiler, refrigeration, and cooling system
3.7.1 Rational Use of Water in Industry
In chemical process systems, water is often used as a processing support. The optimization of the water usage concerns two aspects: water savings (i.e., resource usage minimization) and the emissions (i.e., the level of contaminant in the waste water). Resulting from the mass balance and the process requirements, both objectives are
Figure 3.24  Process composite curve and utility composite curve
antagonistic, since the mass of contaminant to be extracted from the process is constant: if the flow rate decreases, the concentration must increase. El-Halwagi and Manousiouthakis [52] presented the analogy between energy pinch analysis and mass exchange networks: the concentration replaces the temperature and the water flow replaces the heat load. The method uses mathematical programming techniques where a set of constraints defines the water cascade. Wang and Smith [2] proposed a graphical method based on the same analogy. In their approach, each unit utilizing water is represented by a limiting water profile defined as a concentration/mass-of-contaminant profile (Fig. 3.25). This profile represents the worst conditions (in terms of flow rate and concentration) that the supporting water may have when entering and leaving the unit while still realizing the required mass transfer. This allows one to draw a composite curve that may be assimilated to the utility profile in the energy domain and to which the fresh water curve (utility) will be integrated. The utility curve starts with a mass of contaminant of zero. Its slope corresponds to the fresh water flow rate required for the system. The minimum flow rate is computed by activating the pinch point between the utility curve and the limiting profile. By balance, the maximum concentration of the water is determined. From this targeting procedure, it is then possible to design a water exchange network using a procedure similar to the pinch design method for heat exchangers. The proposed method suffers, however, from the drawbacks of being based on the limiting profiles of the mass transfer, which do not allow for a generic representation of the water usage in the chemical industry, and of the difficulty of handling multiple contaminants, even if the authors have proposed an adaptation of the method [53], [54]. Dhole et al. [55] proposed another representation of the water cascade, considering the water units as being sources and demands of water with a given level of purity (Fig. 3.26). The advantage of this representation is the generalization of the approach to represent all types of water usage units and not only those based on mass transfer profiles. This allows one to draw the hot (source) and cold (demand) composite
Figure 3.25  Water limiting composite curve (mass load of contaminant, kg/h)
Figure 3.26  Water source-demand profiles (demand composite and source composite)
curves, and the corresponding grand composite curve, and to identify the pinch of the system. This approach, however, suffers from the problem of not being able to identify the purity changes that will result from the mixing of a high purity stream (above the pinch point) with a low purity stream (below the pinch point) to produce water with a medium purity above the pinch point. This has some energy analogy with heat pumping by absorption heat pumps. Based on this representation, Hallale [56] proposed an algorithm that computes the possible recovery by mixing and that is based on the water surplus profiles (Fig. 3.27). By analogy with the process integration techniques, the methods based on the graphical representations help to identify the possible integration of water regeneration equipment. The rules are similar to those for heat pump integration.
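A minimal sketch of the targeting idea behind the limiting-profile method described above is given below. It assumes that fresh water is contaminant-free, handles a single contaminant only, and uses invented limiting data; it is not the algorithm of [2] itself.

```python
# Fresh-water targeting from single-contaminant limiting data (illustrative only).
def min_fresh_water(units):
    """units: list of (c_in, c_out, mass_load) with concentrations in ppm
    and contaminant mass loads in kg/h. Returns (flow in t/h, pinch in ppm)."""
    levels = sorted({c for u in units for c in (u[0], u[1])} - {0.0})
    best_flow, pinch = 0.0, None
    for c in levels:
        load = 0.0
        for c_in, c_out, m in units:
            if c > c_in:
                # fraction of the unit's mass load picked up below level c
                load += m * (min(c, c_out) - c_in) / (c_out - c_in)
        flow = load * 1000.0 / c          # t/h needed to carry `load` at c ppm
        if flow > best_flow:
            best_flow, pinch = flow, c
    return best_flow, pinch

units = [(0.0, 100.0, 2.0), (50.0, 100.0, 5.0),
         (50.0, 800.0, 30.0), (400.0, 800.0, 4.0)]
print(min_fresh_water(units))   # about 90 t/h with a pinch at 100 ppm
```

The candidate level that maximizes the required flow plays the same role as the pinch activated between the fresh water line and the limiting composite in the graphical construction.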
Figure 3.27  Water surplus composite (water flow, t/h)
The graphical representations are attractive for solving and explaining water usage targeting. These methods become difficult to apply, however, when multiple contaminants are concerned. In this case, the use of mathematical programming appears to be more convenient. Several formulations have been used: following the formulation of El-Halwagi [52], Alva-Argaez et al. [57] proposed a transshipment formulation of the problem that allows one to introduce multicontaminant constraints and balances in the system. Such methods simultaneously solve the water target and the mass exchange network. As in the case of energy conversion integration, the water usage minimization will not only concern the minimization of the water usage but also the integration of water treatment or purification units that will concentrate the effluent and perhaps transfer the contaminant to another phase (e.g., solid). In this case there is an interest in considering the combined integration of the water usage and the energy. This becomes especially true in situations where waste water will be treated by concentration (cold stream) followed by biomethanation to produce biogas (fuel), or in the case of thermal water desalting systems. Furthermore, when analyzing the grand composite curve of the water requirement profile (water surplus composite), one may suggest introducing water regeneration units (e.g., filtration) that will change the shape of the water profile and will allow one to further reduce water consumption. This approach is therefore similar to the one used in the energy conversion technology integration. The mathematical programming approach [58] presented below is based on a multiple-contaminant transshipment model. It may be combined with heat integration by combining the corresponding MILP problems. The link between the two models will be the value of the level of utilization fw, which is related to water usage and to energy effects:
min  Σ_{w=1}^{nw} (c1_w + c2_w · ṁ_w) · fw(w) + Σ_{s=1}^{ns} Σ_{d=1}^{nd} c_{s,d} · m_{s,d}        (3.43)

subject to:

Σ_{s=1}^{ns} m_{s,d} = fw(d) · ṁ_d                              ∀ d = 1, ..., nd
Σ_{d=1}^{nd} m_{s,d} ≤ fw(s) · ṁ_s                              ∀ s = 1, ..., ns
Σ_{s=1}^{ns} m_{s,d} · x_{j,s} ≤ fw(d) · ṁ_d · x_{j,d}^max      ∀ d = 1, ..., nd, ∀ j = 1, ..., nc        (3.44)
where
m_{s,d}        is the flow rate exchanged from source s to demand d;
P_d            is the purity required by demand d;
P_s            is the purity of source s;
fw(d) (fw(s))  is the level of usage of unit w corresponding to demand d (respectively, source s);
ṁ_d            is the flow rate in the nominal conditions of demand d;
x_{j,s}        is the fraction of impurity j in source s;
x_{j,d}^max    is the maximum allowable fraction of impurity j in demand d;
c1_w, c2_w     are, respectively, the fixed and the proportional cost of the unit w, expressed as a function of its nominal flow rate;
c_{s,d}        is the cost of exchanging one unit of flow rate from source s to demand d;
nw             is the number of utility units in the system;
np             is the number of process units in the system;
nd (ns)        is the number of demands (sources);
nc             is the total number of impurities to be considered in the problem.
By analogy with the energy integration technique, the use of graphical representations gives a view of the quality of the integration and suggests further process modifications. It should be mentioned that the approach suggested for water usage may be applied, by analogy, to the design and retrofit of hydrogen networks in refineries. Both graphical techniques [59] and mathematical programming techniques [60] are transposed from water minimization to the refinery hydrogen management domain. In this context, it will be important to consider simultaneously the energy conversion integration techniques, since hydrogen recovery from the hydrogen network will have to be accounted for by its energy content value. The energy consumption used to produce pure hydrogen and the possible combined production of mechanical power must be considered, with the latter being used to drive the recycling hydrogen compressors.

3.8 Conclusions
In process design, the optimal integration of the utility system allows one to transform energy minimization problems into energy cost or exergy loss minimization. It offers a way of considering the energetic problem as a whole, adopting a system vision for the use of energy in the process. The optimal integration of the utility system defines the complete list of streams to be considered in the heat exchanger network design. It is therefore an important step in any process integration study. The computer-aided methodology for integrating the energy conversion system (utility) in chemical production sites combines the use of graphical techniques and mixed integer linear optimization. Considering that the problem formulation is not always known from the beginning, it is important to use methods that support an
engineer's creativity rather than a push-button method. The MILP technique is a robust problem formulation and solving method for process engineers. Many complex problems in process design and operation can be formulated, and efficient solutions may be identified. Obviously, the utility integration study, as presented in this chapter, represents only one step of the problem, but it allows one to capture, in a quick and easy way, the major aspects of the integration and to identify the most important options while eliminating the less attractive ones. An important aspect of the methodology presented is that it does not require the utility system structure to be defined a priori; this is done only once the best options have been identified, together with the heat exchanger network design and optimization.
1 Boland P. Hewitt G. C. Thomas B. E. A. Guy
2
3
4
5
6
7
8
A. R. Marsland R. H . LinnhofB. Townsend D. W.A user guide on process integration for the efficient use of energy. The Institution of Chemical Engineers, 1982 Smith R. Wang Y. P. Wastewater minimisation. Chem. Eng. Sci. 49(7) (1994) p. 1981 - 1006 Paris J. Brown D. Markchal F. A dual representation for targeting process retrofit, application to a pulp and paper process. Appl. Therm. Eng. 25(7) (2005) p. 1067-1082 Markcha1 F. KalitventzefB. Optimal insertion of energy saving technologies in industrial processes: a web based tool helps in develop ments and coordination of a European r&d project. Appl. Therm. Eng. 20 (2000) p. 1347-1364 Pelster S. Environomic modelling and optimisation of advanced combined cycle cogeneration power plants including C02 separation options. Dissertation, LENI, EPFL, 1998 Von Spakovsky M. R. Pelster S. Favrat D. The thermoeconomic and environomic modeling and optimization of the synthesis, design, and operation of combined cycles with advanced options. J. Eng. Gas Turbines Power 123(4) (2001) p. 717-726 Frangopoulos C. A. Comparison of thermoeconomic and thermodynamic optimal designs of a combined-cycle plant. In: Rakopoulos C. D., Kouremenos D. A., Tsatsaronis G. (eds.) International Conference on the Analysis of thermal and energy systems, pp. 305-318, Athens, Greece, 1991 Pelet X. Optimisation de systPmes energetiques integrks pour des sites isoles en con-
9
10
11
12
13 14
15
siderant les parametres economiques, d’kmissions gazeuses, de bruit et de cycles de vie. Dissertation, Laboratory for Industrial Energy Systems, Swiss Federal Institute of Technology Lausanne, 2004 Arpentinier et al. Exsys ii: an expert system for optimal insertion of intensified energy saving technologies (iest) in the industrial processes. Publishable final report of project joe3-ct97-0070. Technical report, EU commission, 2001 Whiting W. Shaeiwitz]. Turton R. Bailie R. Analysis, Synthesis and Design of chemical processes. Prentice-Hall, Englewood Cliffs, NJ, 1998 Barthel Y.Rairnbault C. ArlieJ. P. Chauvel A. Leprince P. Manuel d’kvaluation economique des procCdes, avant-projets en rafinage et petrochimie, nouvelle edition revue et augmentee. Ed. Technip. 2001 LinnhoflB. Townsend D. W. Heat and power networks in process design. Part 1: Criteria for placement of heat engines and heat pumps in process networks. AIChE J. 29(5) (1983) p. 742-748 Kotas T. J . The Exergy Method of Thermal Plant Analysis. Krieger, Melbourne, FL, 1995 Favrat D. Staine F. Energy integration of industrial processes based on the pinch analysis method extended to include exergy factors. Appl. Therm. Eng. 16 (1996) p. 497-507 Lavric V. Baetens D. Plesu V. De Ruyck]. Broadening the capabilities of pinch analysis through virtual heat exchanger networks. Energy Convers. Manage. 44(14) (2003) p. 2321-2329
16 Linnhoff B. Dhole V. R. Total site targets for
17
18
19
20
21
22
23
24
25
26
fuel, co-generation emissions, and cooling. Comput. Chem. Eng. 17(161)(1992)p. s l 014 0 9 KalitventzefB. Marbchal F. Identify the optimal pressure levels in steam networks using integrated combined heat and power method. Chem. Eng. Sci. 52(17) (1996) p. 2977-2989 Ohba T. Ishida M. A new approach to energy-utilization diagrams for evaluation of energy of chemical systems. In: Pulido R. Tsatsaronis G. Rivero R., Monroy L. (eds.) Energy-Efficient. Cost Effective, and Environmentally-SustainableSystems and Processes, pp. 845-852. Instituto Mexican0 del Petroleo, 2004 KalitventzefB. Marbchal F. Targeting the minimum cost of energy requirements: a new graphical technique for evaluating the integration of utility systems. Comput. Chem. Eng. 20 (1996)p. S225-S230 KalitventzefB. Marbchal F. Heat and power integration: a milp approach for optimal integration of utility systems. Proceedings of the 22nd Symposium of the working party on use of computers in chemical engineering, COPE’91 (Barcelona) 1991 Kalituentzef B. Marbchal F. Process integration: Selection of the optimal utility system. Comput. Chem. Eng. 22(200) (1998)p. S149-S156 KalitventzefB. Marbchal F. Targeting the optimal integration of steam networks. Comput. Chem. Eng. 23: (1999)p. s133-sl36 Kalitventzef B. Marbchal F. Restricted matches and minimum cost of energy echal requirements: tools and methodology for producing practical solutions. PRES’99: 2nd conference on process integration, modelling and optimisation for energy saving and pollution reduction (1999)p. 433-438 KalitventzefB., Marbchal F. A tool for optimal synthesis of industrial refrigeration systems. Comput. Aided Chem. Eng. 90 (2001) p. 457-463 KalitventzefB. Pierucci S. Closon H . Marbchal F. Energy integration: Application to an olefins plant. ICHEAP 4: Fourth Italian Conference on chemical and process engineering 277 (1999) p. 131-134 KalitventzefB. Marbchal F. Favrat D. A methodology for the optimal insertion of organic rankine cycles in industrial processes. 2nd International Symposium on Process Integration, Halifax (2001) 2001.
27
28
29
30
31
32
33
34
35
36
37
38
39
Kalitventzef B. Marbchal F. Targeting the integration of multi-period utility echal F. systems for site scale process integration. Appl. Therm. Eng. 23 (2003)p. 1763-1784 Kalituentzef B.,Marbchal F. Study of the insertion of the partial oxidation gas turbine to satisfy high temperature requirements of industrial processes using energy integration techniques. Proceedings of ESCAPE 10, Elsevier Science, Amsterdam (2000) pp. 679-684 Marbchal F. KnlitventzefB. Dumont M . N. Process integration techniques in the development of new energy technologies: application to the isothermal gas turbine. Keynote lecture, paper F3.1, Proceedings of CHISA 98, 13th International Congress of Chemical and Process Engineering, Prague, 23-28 August 1998, p. 205, 1998 Duran M. A. Grossmann I. E. Simultaneous optimization and heat integration of chemical processes. AIChE J. 32(1) (1986a)p. 55 Grossmann I. E. Papoulias S. A. A structural optimization approach in process synthesis: i. utility systems. Comput. Chem. Eng. 7(6) (1983a)p. 695-706 Dhole V. R. L i n n h o f B . Shaft work targeting for subambient plants. Aiche Spring Meeting, Houston, Paper 34d, April 1989 Seider W. D. Colmenares T. R. Synthesis of cascade refrigeration systems integrated with chemical processes. Comput. Chem. Eng. 13(3) (1989) p. 247 Grossmann I. E. Sargent R. W. H. Optimum design of heat exchanger networks. Comput. Chem. Eng. 2 (1978)p. 1-7 Mason D. Linnhof B. Cerda /. Westerberg A. W . Minimum utility usage in heat exchanger network synthesis. A transportation problem. Chem. Eng. Sci. 38(3) (1983) p. 378-387 Grossmann I. E. Papoulias S. A. A structural optimization approach in process synthesis -ii. Heat recovery networks. Comput. Chem. Eng. 7(6) 1983b) p. 707-721 Kalitventzef B. Marbchal F. Synepl: a methodology for energy integration and optimal heat exchanger network synthesis. Comput. Chem. Eng. 13(4/5) (1989) Glavic P. Kravanja 2. Cost targeting for hen through simultaneous optimization approach: a unified pinch technology and mathematical programming design of large hen. Comput. Chem. Eng. 21 (1997) p. 833-853 Kravanja Z. Sorsak A. Grossmann 1. Hostrup M . Gani R. Integration of thermodynamic
40
41
42
43
44
45
46 47
48
49
50
51
52
insights and minlp optimization for the synthesis, design and analysis of process flow sheets. Comput. Chem. Eng. 25 (2001) p. 73-8 3. Santibanezj. Grossmann I. E. Applications of mixed-integer linear programming in process synthesis. Comput. Chem. Eng. 4(49) (1980) p. 205-214 Grossmann I . E. Mixed integer programming approach for the synthesis of integrated process flowsheets. Comput. Chem. Eng. 9(5) (1985) p. 463-482 Grossmann I. E. Floudas C. A. Ciric A. R. Automatic synthesis of optimum heat exchanger network configuration. AIChE J. 32(2) (1986) p. 276-290 Grossmunn 1. E. Floudas C. A. Automatic generation of multiperiod heat exchanger network configuration. Comput. Chem. Eng. 11(2) (1987) p. 123-142 Kokossis A. Shang 2. A transhipment model for the optimisation of steam levels of total site utility systems for multiperiod operation. Comput. Chem. Eng. 28 (2004) p. 1673-1688 Iyer R. R. Grossmann I . E. Synthesis and operational planning of utility systems for multiperiod operation. Comput. Chem. Eng. 22(7-8) (1998) p. 979-993 Grossmann I. E. Iyer R. R. Optimal multiperiod operational planning for utility systems. Comput. Chem. Eng (8) (1997) p. 787-800 Hui C.-W.Cheung K.-Y. Total-site scheduling for better energy utilisation. J. Cleaner Prod. 12(2) (2004) p. 171-184 KalituentzefB. Mixed integer non linear programming and its application to the management of utility networks. Eng. Optim. 18 (1991) p. 183-207 Markchal F. Methode danalyse et de synthese energetiques des procedes industriels. PhD thesis, Laboratoire danalyse et de synthese des systemes chimiques, Facult6 des Sciences appliquees, Collection des publications no. 164, Universite de Liege, 1995 Grossman 1. E. Duran M. A. A mixed-integer nonlinear programming algorithm for process systems synthesis. AIChE J. 32(4) (1986b) p. 592-606 KalitventzefB. Dumont M . N . Papalexandri K. P. Pistikopoulos E. N. Operation of a steam production network with variable demands, modelling and optimisation under uncertainty. Comput. Chem. Eng. 20(207) (1996) p. S763-S768 Manousiouthakis V. I!-halwagi M . A. Automatic synthesis of mass exchange networks with single component targets. Chem. Eng. Sci. 45(9) (1990) p. 2813-2831
53
54
55
56
57
58
59
60
61
62
63
64 65
66 67
Smith R. Kuo W.-C./. Effluent treatment system design. Chem. Eng. Sci. 52(230) (1997) p. 4273-4290 Smith R. Doyle S. /. Targeting water reuse with multiple contaminants. Process Safty and Environmental Protection: Transactions of the Institution of Chemical Engineers 75(3) (1997) p . 181-189 Tainsh R. A. Wasilewski M. Dhole V. R. Ramchandani N. Make your process water pay for itself. Chem. Eng. 103(1) (1996) p. 100-103 Hallale N. A new graphical targeting method for water minimisation. Adv. Environ. Res. 6 (2002) p. 377-390 Kokossis A. Alua-ArgaezA. Vallianatos A. A multi-contaminant transhipment model for mass exchange networks and wastewater minimisation problems. Comput. Chem. Eng. 23(10) (1999) p. 1439-1453 Pans /. Brown D. Marechal F. Marechal F. Mass integration of a deinking mill. pretires 90ieme conf. ann. PAPTAC, 2004 Liu F. Hallale N. Refinery hydrogen management for clean fuels production. Adu. Enuiron. Res. 6 (2001) p. 81-98 Towler G. P. Zhangj. Zhu X.X . A simultaneous optimisation strategy for overall integration in refinery planning. Ind. Eng. Chem. Res. 40(12) (2001) pp. 2640-2653 California Energy Commission: California Distributed energy resource guide (visited 2005) www.energy.ca.gov. Taylor W . R. Holcomb F. H . Binder M . /. Reduced emissions from cogeneration, applications of dod fuel cell power plant fleet. DoD fuel cell, www,dodfuelcell.com, cited 29 June 2004, 2004 Cropper M. Why is interest in phosphoric acid fuel cells falling? Fuel Cell Today, 8 October 2003 Crawley G. Operational experience with Siemens-Westinghouse cogeneration experience. Fuel Cell Today, 17 October 2001 Teagan P. Yang Y. Carlson E. /. Srimamulu S. Cost modelling of SOFC technology. 1st International conference on fuel cell development and deployment, www.~~kell.uconn.edu/f~c/pdf/fcic-pro~amoral-4a.2.pdf; 2004 Colson-lnam S. Solid oxide fuel cells - ready to market? Fuel Cell Today,7/anuary 2004 n o m a s C. E. Cost analysis of stationary fuel cell systems including hydrogen cogeneration. Technical report, Directed Technologies, December 1999, www.directedtechnologies.com, 1999
4 Equipment and Process Design
I. David L. Bogle and B. Eric Ydstie
Abstract
The chapter introduces the computational basis for equipment and process design in the chemical manufacturing industries. Problems, discussed through the use of case studies, span the modeling, simulation, and optimization of existing and proposed processes. These case studies include minimization of exergy losses in distillation through design and control, conceptual design of a complex multiphase system with fluid flow from pilot plant scale-up data, handling uncertainty in the design of a multiphase reactor, a biochemical process design, and control system design for a fluid bed reactor modeled using a population balance approach. In all of these studies we focus on exploiting a very rich mathematical structure defined by conservation laws and the second law of thermodynamics. The basic objectives will be briefly outlined, dealing with the fundamentals of modeling systems of considerable complexity. The hierarchical modeling approach will lead to classes of systems that are suitable for the use of optimization, control design, and the study of the interaction between these for both unit operation and flowsheet design. Design objectives continue to include profitability, but are increasingly directed towards flexibility and robustness to allow for greater attention to responsiveness to market demands, and for improved environmental performance. Controllability is also currently a focus of attention, since flexibility implies dynamic performance objectives that include change of product and feedstock as markets change and environmental constraints become more important. The work we describe in the case studies represents recent developments and trends in the industry in the area of computer-aided process engineering (CAPE). The applications focus on the use of optimization techniques for obtaining optimal designs and better approaches for controlling the processes close to or at the optimal point of operation. Designs use rigorous models based on thermodynamics, conservation laws, and accurate models of transport and fluid flow, with particular emphasis on dynamic behaviour and market condition uncertainties.
4.1 Introduction
Engineering design combines the need to specify a unit or system that can manufacture a product to meet output specifications while creatively developing new approaches with the potential to improve the performance of existing units. In many parts of the process industries this is now done routinely using computer-aided process engineering (CAPE) tools. Such tools mean that many alternative designs can be developed and evaluated and that designs can be optimized to obtain the best performance under a wide range of market conditions. Only a few years ago it was impossible to carry out such studies for all but the simplest processes. However, very significant advances have been made in optimization theory, modeling complex systems, and nonlinear control over the last decade. These new methods take advantage of the rapid increase in computational speed, distributed and Web-based computing, flexibility in software development, and graphical interfaces to a degree not imagined a few years ago. This conflux of trends and ideas will necessarily lead to opportunities to better design, optimize, and operate chemical processes, as we are able to analyze and integrate a much broader range of physical scales and physical phenomena with CAPE tools. In this chapter we will outline some recent trends in equipment and process design through a series of case studies. We have chosen to focus on cases where the use of simulation tools was significantly challenged and where opportunities for new areas of application, as well as new research, are present. When new processes are being developed there is an important requirement to obtain accurate property data on which to base the design. This means that for process development there is a strong need to tie the design process to bench and pilot plant data to attempt to reduce uncertainty. Of course, CAPE tools allow us to do this more efficiently. In several case studies this has been achieved. A second trend is the need to obtain more accuracy on spatial distributions as the production of by-products, for example toxic wastes or unsafe local transients, needs to be reduced or eliminated. This can now be done with Computational Fluid Dynamics (CFD) calculations, and one of the case studies uses this capability. One of the key aspects of design is the design of objective specifications. These are needed, in the first instance, to make a product to a minimum specification and quantity at least cost. Increasingly, other objectives are becoming more important, such as: environmental (least waste, least water usage, least CO2 generation), safety (least toxic by-products), flexibility (maximizing the window of operation), controllability (metrics based on closed loop responses to disturbances), and uncertainty (based on expected distributions of key variables).

4.2 The Structure of Process Models
In this section we will very briefly review the basis for process modeling, a field whose aim is to develop the physical relationships that will allow us to make pre-
dictions, using mathematical models, of how the choice of design and control variables impact the quality and quantity of a product we make. The field of modeling, of course, is extremely broad. However, a fact that is often overlooked is that thermodynamics, most notably the second law, provides us with a very rich framework for analysis of the topological structure of vector fields that define the static and dynamic behaviour of that class of systems we are interested in discussing. For example, classical thermodynamics defines the relationships amongst the extensive variables used to define conservation laws (conservation of energy, mass, charge, and momentum) and the intensive variables (temperature, pressure, chemical potential, voltage, and stress) that need to be controlled in order to maintain quality. Process design and control need to be concerned with both types of variables. The topological structures of these vector spaces are very different, however, and it is important to keep this fact in mind. The space of intensive variables is usually quite smooth since the intensive variables provide driving forces for flow and sharp gradients in temperature, pressure, electric field, stress, and chemical potential are difficult to maintain. The space of extensive variables can be quite irregular since the energy, composition, charge, density, etc., are discontinuous across phase boundaries. This has consequences for how we solve the conservation laws since these are stated in terms of the extensive variables. A thorough understanding of the underlying topology of the problem at hand can provide insights that can be used to obtain better solutions more rapidly, and solution structures may be obtained that are the result of discontinuous jumps in logic that are normally difficult to encapsulate in stand-alone computer programs. All systems we deal with satisfy to very high degree of accuracy the basic axioms of equilibrium thermodynamics : 0
• The state of a fluid/solid element is represented by the (n+2)-dimensional vector Z = (U, V, M_1, ..., M_n)^T of extensive variables. U is the internal energy, V the volume, and M_i is the amount (mass or moles) of chemical component i. This state can be augmented to include charge, momentum, and indeed any other extensive property.
• There exists a C² function S(Z), called the entropy, which is first-order homogeneous so that for any positive constant λ we have S(λZ) = λS(Z).
• The entropy of an isolated system increases (second law of thermodynamics).
• The energy of the system is conserved (first law of thermodynamics).
• ∂S/∂U > 0 (the temperature is positive).
I
385
386
I
4 Equipment and Process Design
S(Z3) z S @ l )
+ S(Z2)
(1)
In other words, the entropy of combined systems is never smaller than the sum of the entropies of its subsystems. It follows that for any positive constant A:
+
S(hZl+ (1 - h)Z2) L S(hZ1) S((1 - h P 2 )
Using the homogeneity gives the relationship:
This expression shows that the entropy is concave. Concavity of the entropy function and dissipation by entropy production lies at the heart of the beautiful structures that emerge as we simulate the static and dynamic behavior of process systems. Consider for example phase equilibrium. When Gibbs developed the tangent plane criteria for phase stability he used concavity of the entropy as a basis for defining the condition for phase equilibrium as shown in Fig.4.1. The slope of the supporting hyper-plane indicated satisfies: C * t W ) = sup ( S ( 2 ) - WTZ) ZE
t l
c
(4)
/’_
Figure4.1 Illustration (a) shows a projection o f an entropy function and a unique point of stability indicated at the point Zl. The slope ofthe tangent line, w1 defines the intensive variables at that point. Illustration (b) shows a projection, as it might arise in a cubic equation o f state like the van der Waals equation. In this case the EOS
gives three points where the slope, and hence the intensive variables, are the same. The actual entropy is defined to be the smallest concave envelope as indicated by the straight line segment. Along such segments the entropy is concave but not strictly concave. Entropy functions below the line are in violation of the second law.
4 2 The Structure of Process Models
Intensive Variables
Extensive Variables
Figure 4.2 The mapping w = PZ from the space o f extensive to the space of intensive variables is not one to one The line Z , , corresponding to a uniform scaling o f all extensive variables maps to the same point w, in Z’ indicating that pressure, temperature and chemical potential remain invariant Subregions with positive measure (regions in Z with phase or reaction equilibriumZ) map to points with zero measure (in 2”).
The vector w is called the vector of intensive variables in the entropy formulation. These variables belong to the convex set: C * ( W ) = sup (S(Z) - W T Z ) ZE c
The sets Z and Z” are subsets of Rn+2 and they are dual in sense of LegendreFenchel. It is critical to note that the mapping between these spaces is not one to one. The lack of invertibility between extensive and intensive parameters occurs on the subsets of Z where the entropy, although concave, is not strictly concave as illustrated in Fig. 4.2. If the entropy is linear it follows that it is merely concave in some directions. Along these directions we can scale the extensive variables without changing the intensive variables. We may also have discontinuities in the space of extensive variables (Callen 1985). The noninvertibility between the space of extensive and intensive variables is critical for process design. It allows the scaling of the size of the system along lines in the space of extensive variables without changing the intensive variables. We can therefore independently control the “size of the systems” (amount present in each phase) without changing the quality (pressure, temperature, chemical potential). In more practical terms, we can also change the hold-up without changing the properties of phase or reaction equilibrium. These scale invariant properties are so common that we use them always without thinking in thermodynamics. In transport we find similar scaling laws centered on the ideas of dimensionless numbers like the Reynolds, Rayleigh, Nusselt, and Damkohler numbers. These numbers include transport properties and estimates of the size of the domains. The relationship between size and shape, being the extensive properties that define the domain of interest, and the intensive properties is much more complex. It is especially difficult to unravel in systems that integrate fluid flow, chemical reactions and multiple phases. In chemical engineering we need to develop models to deal with systems that are not at equilibrium and have variations in space and in time. By far the most common approach is to develop a lumped description (a network model) of the process. In this
1
387
388
I
4 Equipment and Process Design
approach we use nodes distributed in space that are interconnected by steady or unsteady flow of material and energy. These nodes may represent a single phase in a separation system, a unit process, or an entire plant. Irrespective of size we have additivity satisfied so that we can write conservation equations: $inventory = inlets - outlets + generation dt Inventory is a macroscopic measure of the total internal energy, volume, number of moles of each species respectively so that
-
where the subscript t refers to total. The balances can be written on the form: dv dt
- = @ ( x 9m, 4
Nedlow to system
+ p\-.--/( x , m, 4
Net Generation
The variables x, rn, d represent the state, manipulated and disturbance variables needed to define the flow and generation variables. We make a distinction between the extensive variables Z and the inventories v. The extensive variables were assumed to represent the state of the system with some degree of accuracy. The inventory v does not represent the state unless the system under consideration is spatially uniform in each phase. For example, consider a distillation column with benzene as one component. The macroscopic inventory balance can be applied to the entire system with generation equal to 0. But the inventory of benzene does not represent the state of the system since it can be distributed in an infinite number of different ways throughout the column. In order to have a unique (state)description we need to look at a more refined system. In a stage to stage model of the system we find that the inventory in fact represents the state to a good degree of approximation. Thus we find that uniformity can be approached asymptoticallyby tessellating the physical space finer and finer. In the limit we define the point densities:
M
Z and z = lim -
p = lim -, v-0
v
where M =
M-0
n
M
(9)
M~ is the total mass or moles. These limits exist if the spatial varia-
i=l
tions of Z(x) are sufficiently regular. Local pressure, temperature and chemical potential is defined as before.’ The inventory vi is related to the density of the extensive variables z according to:
¹ This assumption is called "the principle of local action" or the hypothesis of local equilibrium. The latter term can be misleading, since it does not at all mean that the system is at equilibrium: there may be variations in space and time.
And for a system with one spatial dimension we obtain partial differential equations of the form

∂(ρz)/∂t + ∂f/∂x = σ

where t is the time, x is the position, ρz defines the local state vector, f = (f_1, ..., f_{n+2})^T is the transport flux, and σ refers to the production rate of property z. These equations are easy in principle to extend to include momentum transfer, and the equations then include the Navier-Stokes equations as a special case. By applying the ideas above to the entropy we find that we can write the entropy balance in the form

∂s/∂t + ∂f_s/∂x = σ_s,    where σ_s ≥ 0
where s is the entropy density, f_s represents the entropy flux through a point, and σ_s is the entropy production. The inequality states that the production of entropy is always positive. In a nonequilibrium system the rate of entropy production represents the dissipated energy, and an important design objective therefore is the minimization of entropy production. In irreversible thermodynamics the entropy production is given by the expression

σ_s = X^T L X
where the vector X represents the thermodynamic driving forces (these are gradients in pressure, temperature and chemical potential) and L is a positive definite matrix. It is quite straightforward to show that the entropy production is minimized when the forces are kept constant and equal in certain staged systems (Farschman et al. 1998). This leads to the important idea of equipartition of forces as a criterion for energy efficient process design (Sauar et al. 1996). The assumptions made up to this point are well tested and hold at very small scales and sharp gradients. Only at extremely small scales and extreme gradients is it necessary to resort to other techniques like lattice Boltzmann methods to give better approximations to physical behaviors. The field of process control has typically concerned itself with how to adjust the flows in a network with fixed network topology. The field of process design has typically been concerned with how to design the network topology. The distinction between the two fields is becoming more and more artificial however since the topology and flows are very closely interlinked. In a high performance system it is not possible to separate the two and equipment and process design should be considered in conjunction with the process control.
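A small numerical check of the equipartition statement (the values of L, the number of stages, and the total driving force below are invented): distributing a fixed total driving force equally over the stages gives the smallest value of Σ L·X_i².

```python
# Equipartition of forces: equal split of a fixed total force minimizes X^T L X
# summed over stages (illustrative numbers only, scalar L per stage).
import numpy as np

L, n, total_force = 2.0, 5, 1.0
rng = np.random.default_rng(0)

def entropy_production(forces):
    return float(np.sum(L * forces ** 2))

equal = np.full(n, total_force / n)
uneven = rng.dirichlet(np.ones(n)) * total_force    # same total, uneven split
print(entropy_production(equal), entropy_production(uneven))
# The first number is always the smaller one (a convexity argument).
```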
4.3 Model Development
The book by Hangos and Cameron (2001) sets out systematic approaches to model development and analysis breaking down the steps as shown in Fig. 4.3. If undertaking a first principles modeling task, it is important first to establish the need for the model which define the “goalset definition” in their methodology. These should be specific and quantifiable and a list of common objectives is listed in the first section of this chapter. Models for controllability for example will be different from those required for capital cost minimization. As outlined in the previous section conservation relations can be developed for mass, component mass, energy, momentum, or population of a species. The choice of which relations are required will depend on the nature of the system. For example the population balance relations are only required if there are discrete elements whose properties affect the purpose of the system being analyzed. These may be solid particles, or micro-organisms, or bubbles in liquid or vapour phase. Further constitutive relations will be required to model transfer rates, reaction rates, property relations, and equipment and control constraints, as appropriate. Increasingly there is a need in process development and design to develop more detailed models of parts of the process. This is done for a variety of reasons. More confidence may be required in a new design of unit operation, particularly if the unit has only been used in the prototype. Dynamic simulation requires a more detailed model of complex units so that time-dependent elements, especially those where the time delays are related to transport effects, can be predicted with some degree of accuracy. Contractors are increasingly being required to provide dynamic simulations with the final delivery of a process design. Safety and environmental expectations of customers and local residents now can require more detailed models of plant items. All of these require sophisticated models.
4.4 Computer-aided Process Modeling and Design Tools
Most modeling and design is now done using CAPE tools. Bogle and Cameron (2002) recently reviewed the tools available and how they are used throughout the design and development lifecycle. Sequential modular packages commonly used in industry are based on calculating the sequence of unit operations in the order in which they appear in the flowsheet, using modules for each unit of the flowsheet. If the flowsheet contains one or more recycles a stream must be guessed (“torn”)and iterative methods are used to converge the calculated values to the guessed values. Most modules are written in FORTRAN, a high-level computer language widely used in the engineering and technical world. Examples are AspenPlus and ProII. The calculation procedure is at the heart of the computational procedure used by these packages to solve the problem. However, there are many other parts which make the whole package a convenient tool for the design engineer. The following are
Figure 4.3  The systematic approach to model building (Hangos and Cameron 2001): analysis, model solution, and model calibration and validation.
the most common elements: input interpreter, unit operation subroutines, physical property prediction subroutines, algorithms to select torn stream, recycle convergence block, costing databank, output post-processor. Equation-oriented packages use a matrix representation of material and energy balance problems. The set of modeling equations is assembled and can be solved numerically. From the equation set an occurrence matrix is assembled. This dictates the number of degrees of freedom, which in turn dictates the number of specifications that can be made. Once a square system is obtained the design equations can be solved giving the solution to the design problem. gPROMS (Oh and Pantelides 1994) and Aspen Custom Modeler are the most prominent examples currently. These systems have advantages over modular systems in that it is easier to set up and solve other types of problems such as dynamic simulation, optimization, and parameter estimation. In both cases models of each unit operation are required. In the design tools simple models are usually used so that overall designs for process flowsheets can be assembled quickly to be used as targets for more detailed design of individual units. There are many software programs for doing detailed design calculations obtaining internal flows and configurations such as the number of trays in a column or the size of a reactor’s packed bed. These calculations are often done using specialist programs which require a more detailed level of modeling. Eggersmann et al. (2002) reviewed the current state of such programs. Network representations use diagrams and pictures and show in a graphical manner the topological properties of a system as they are distributed in space and/or time. Examples of systems that can be effectively represented as networks include a flowsheet of a chemical process, chemical reaction networks, metabolic pathways, decision trees and transportation systems. An example of such a network is shown in Fig. 4.4 on the left. Each node represents an activity where information/material is processed, whereas each vertex can represent flow of information, material or energy. We note that several activities can be lumped together as indicated by activity A+ We can therefore decompose a process into a multiscale hierarchy ranging from the molecular to the macrolayer where economic decision making takes place (Grossman and Westerberg 2000; Ingram et al. 2004). These decompositions have motivated very active research which very recently has led to significant breakthroughs in our ability to model very complex systems with integrated fluid flow, multiple phases and population balances to express the dependency of physical properties on particulate matter. It is important to note that the additivity of the extensive variables allows us to combine such nodes very easily and we therefore have computational architectures with integrating scaling (Farschman et al. 1998; Mangold et al. 2002). These architectures are flexible and amenable to parallel computing using highly distributed Web-based systems and or cluster computer. Such systems are becoming increasingly easy to define and maintain due to interface standards like TCP/IP and CAPE-OPEN (www.colan.org). In this paper a series of case studies is presented where a sophisticated level of modeling was required. This required the use of commercial tools but enhanced with detailed models of the units that were not routinely available in the design sys-
Figure 4.4 The topology of the process system can be mapped onto a distributed cluster of fast computers interconnected by high-bandwidth communication links for parallel processing (left: the physical process network; right: the distributed computer representation with login machine and batch server).
This trend seems set to expand: customers will require high-fidelity models of critical units, often integrated into the design systems that they currently use, so that those systems can be used for troubleshooting, startup and shutdown, and plant enhancements. A closer relationship is required with existing plant, pilot plant, and bench-scale experimental data to improve the fidelity of the models. A rigorous thermodynamic basis is also necessary. This constitutes a development from the use of design systems as balance tools setting targets for detailed design, towards a staged use of the tools with increasingly sophisticated models used to achieve different objectives. These case studies highlight these more challenging objectives.
4.5 Introduction to the Case Studies
In the following sections we give an overview of five case studies that describe how CAPE tools have been used to improve process design and control for existing and conceptual processes. The first two case studies, of distillation and of a complex
multiphase reactor, involve detailed and thermodynamically rigorous models, which are used to ensure that the design will meet the productivity targets and reduce or minimize the energy consumption, and hence improve the environmental performance. The third case study also considers a multiphase reactor and shows how uncertainty can be systematically considered in the design of the unit. The final two case studies, of a biochemical reactor and separation system and of a fluidized bed reactor, both involve particulates, which require the use of population balance techniques to model the fluid-particle systems. Each case represents a unique challenge and opportunity for modeling and optimization.
4.5.1 Case Study 1: A Binary Separation Case Study
Distillation column design is a well established activity where the use of CAPE tools is routine. However, design is usually based on well established procedures that are now being challenged by the increasing need to design for environmental performance or for better closed-loop performance. CAPE tools give engineers much more scope for developing designs to meet these new objectives. In this case study a column for the separation of vinyl chloride monomer was designed to meet environmental targets while determining the effect on closed-loop dynamic behavior. While the conventional wisdom is that optimizing environmental performance results in a degradation of performance by other measures, this case study shows that this is not necessarily the case. For this design a column with total condenser and reboiler was required for the purification of vinyl chloride monomer (VCM). The product quality is fixed at both ends of the column, which in turn fixes the product withdrawal at both ends of the column. The feed location is an independent variable and was to be optimized. For binary mixtures there will be one feed plate where the feed stream and the internal column flow rates match. For an optimized column the feed always enters the column on a single plate and is not split over a range of plates. The pressure drop between the top plate and the reboiler is approximately 0.165 bar and depends on the number of plates and the size of the internal flow rates. The objective was to develop a column design which utilized the energy most efficiently by trying to avoid irreversible losses. A diabatic distillation column is a column with intermediate side-heating and side-cooling on individual plates. Parts of the heating and cooling utilities are redirected from the bottom reboiler and the top condenser of the adiabatic column to individual plates, generally around the feed plate. The advantage of such a configuration is that heating and cooling can be supplied at alternative temperature levels and the column profiles can be moved towards the reversible column profiles, as discussed by Kaibel et al. (1989) and Kaibel (1990). An operating curve close and parallel to the equilibrium line represents a thermally optimal column configuration.
4.5.1.1 An Environmental Design Problem: Minimizing Exergy Loss
The concept of equipartition of the driving forces as an optimization criterion to minimize the exergy losses was introduced by Ratkje et al. (1995). Earlier, Tondeur and Kvaalen (1987) had proposed equipartition of the entropy production as an optimality criterion, while Sauar et al. (1996) showed that designing for uniform driving forces (EDF) along the different paths over a given transfer area yields minimum entropy production. The criterion has been further discussed by Kjelstrup and Hafskjold (1996) and Sauar et al. (1997). Similar techniques based on thermodynamics have also been used to find the optimal temperature profile in equilibrium chemical reactors (Sauar and Ydstie 1998). The method used here is to directly optimize the exergy losses in order to minimize the production of entropy, the ExL criterion. Entropy is produced because of the existence of driving forces. To make these driving forces uniform, the separation exergy has to be redistributed, and this should be an automatic result of minimizing exergy loss. The advantage of the ExL criterion over the EDF criterion is that it considers all possible sources of entropy production without the need for additional constraints. Kaibel (1990) pointed out that the main exergy losses occur on the feed plate, due to (1) the feed condition and the adiabatic column design, and (2) heat transfer and pressure losses, which are unavoidable because of the finite number of plates. The latter (2) can only be reduced by substantial capital investment. The first (1) can be improved with minimal capital investment, i.e., by specifying the feed condition correctly and re-distributing the heating and cooling utilities. Hence, the degrees of freedom are the heating/cooling utilities on each plate, the feed location, and the feed condition. The feed location and composition have been determined by traditional means. Heating and cooling requirements were calculated for exergy-optimized 23, 25, and 35-plate VCM columns (Table 4.1). For the 25-plate column, seven side-reboilers and two side-coolers (Hagemann et al. 2001) would be required to minimize exergy loss.
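For orientation, the plate-by-plate exergy loss minimized by an ExL-type criterion is commonly evaluated through the Gouy-Stodola relation; the generic form below uses our own notation and is an assumption about, not a reproduction of, the exact expressions used in the studies cited above.

$$
\dot{E}x_{\mathrm{loss},j} \;=\; T_0\,\dot{S}_{\mathrm{prod},j}
\;=\; T_0\left(\sum_{\mathrm{out}}\dot{n}\,s \;-\; \sum_{\mathrm{in}}\dot{n}\,s \;-\; \frac{\dot{Q}_j}{T_j}\right)
$$

Here $T_0$ is the ambient (dead-state) temperature, $\dot{S}_{\mathrm{prod},j}$ the entropy production on plate $j$, and $\dot{Q}_j$ any heat supplied to the plate at temperature $T_j$; the quantity minimized is the sum of these losses over all plates, the condenser, and the reboiler.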
Figure 4.5 Exergy loss profiles for the VCM column (ExL criterion), from the total condenser across the plates to the reboiler.
Table 4.1 Exergy comparison of the diabatic with the adiabatic column [GJ h-1].

Adiabatic - Diabatic                                          22 Plates          25 Plates          35 Plates
Difference in total cooling duty                              -0.285 (+31.6%)    -0.182 (+20.7%)    -0.123 (+14.9%)
Difference in total heating duty                              -0.341 (-9.8%)     -0.385 (-11.5%)    -0.34 (-10.8%)
Difference in total exergy losses, including
losses due to intermediate heating or cooling                 -0.552 (-25.1%)    -0.684 (-37.3%)    -0.592 (-35.4%)
Seven side-reboilers are not a practical option, but we will see later how much of this benefit can be achieved by a more practical solution. The exergy loss profiles of the VCM column are shown in Fig. 4.5. The profiles for the nonoptimized adiabatic column show very large exergy losses within the stripping section, caused by large vapor-phase gradients. In contrast, the optimized column shows an almost uniform profile over the mid-section of the column. This is directly related to the re-location of heating and cooling utilities. The overall exergy losses are substantially lower for the optimized column and decrease further with an increase in the number of plates. A dip in the exergy losses at the feed plate can be explained by the composition of the entering feed: the feed flows (vapor + liquid) closely match the feed plate outlet compositions and, hence, do not create exergy losses.

4.5.1.2 A Practical Interpretation
For the 25-plate column in Table 4.1 the thermally optimal design requires side-heating utilities on plates 14 to 20 and side-cooling on plates 12 and 13. For industrial application an additional seven side-heating units and two cooling units are clearly not a practical solution. However, the intermediate cooling utilities can be neglected, as these represent only 5.7% of the total required cooling utilities, and the seven side-heating units can be replaced by one side-reboiler with very little effect on the results. This single side-reboiler, located on plate 15, provides 74% of the side-reboiler utilities of the diabatic thermally optimal design, with an increase in exergy losses of only 11%. This side-reboiler is located in an area of low driving forces. Figure 4.6 shows the temperature profiles for the adiabatic, diabatic thermally optimal, and diabatic one-side-reboiler VCM column designs. For the diabatic thermally optimal column design, the temperature and concentration (not shown) profiles are almost linear between the pinch points at both ends of the column. For the diabatic column with one side-reboiler the profiles are also closer to linear than for the adiabatic design. The critical factors for obtaining such "near linear" profiles are the location and size of the side-reboiler as well as the condition and location of the feed stream. Shifting the side-reboiler into a region of already high driving forces, or using the side-reboiler to dump available heating utilities, will increase the exergy losses but, perhaps more importantly for the control system design, will also shift the column profiles away from the near-linear profiles.
Figure 4.6 Temperature profiles for the adiabatic and optimized 25-plate column.
This optimized column design with one side-reboiler has been used to study a possible control system design and to compare it with an adiabatic 25-plate VCM column configuration.

4.5.1.3 Effects on the Controllability of the Process
The new process design has a different dynamic response to feed disturbances, which in turn can provide equivalent process stability and/or less capital investment. The design for both columns is based on the widely used LV column control configuration (Luyben 1990). The operating objectives for the VCM column are high product recovery and product purity. The LV configuration has the advantage that it acts directly on the product compositions and is only weakly dependent on the level control scheme (Skogestad 1997). The controllers were tuned using the Ziegler-Nichols tuning method. Results for 3% feed flow rate and 3% feed composition disturbances and for a 1°C feed temperature disturbance are tabulated in Table 4.2. The response time, defined as the time taken for the output variable to return to within a fixed distance of the new or old stationary value, has been used as a measure of the quality and stability of the process and control system design. Table 4.2 gives a summary of the response times, in hours, after feed disturbances applied at time 0.05 h. It is assumed that the product compositions have reached a new stationary operating condition if the differential values are within (and remain within) ±10% of their final values. The results show that, especially for the bottom composition, the response times of the diabatic column are much shorter. In all of the feed disturbances reported here, except one, the diabatic column shows better disturbance rejection.
Table 4.2 Summary of response times after the feed step disturbances.

Distillate temperature (composition) control

Disturbance at time 0.05 [h]        Response time [h], adiabatic     Response time [h], diabatic
Flow rate: [+3.0%]                  0.68                             0.53
VCM composition: [+3.0%]            0.72                             0.81
Temperature: [+1.0°C]               0.69                             0.54

Bottom temperature (composition) control

Disturbance at time 0.05 [h]        Response time [h], adiabatic     Response time [h], diabatic
Flow rate: [+3.0%]                  >0.95                            0.37
VCM composition: [+3.0%]            >0.95                            0.58
Temperature: [+1.0°C]               >0.95                            0.56
The exception is the VCM composition of the distillate after a 3.0% VCM feed composition increase. The reason for this result is the fixed side-reboiler duty. Most of the additional VCM entering the column is vapor. Hence, after the disturbance the side-reboiler evaporates a higher fraction of ethylene dichloride (EDC) until the main reboiler is adjusted to the new operating condition. These results are perhaps surprising, because the side-reboiler in the diabatic design introduces strong cross-coupling effects causing considerable oscillations within the column. However, the improved controlled response is due to the removal of the sharp nonlinearity in the temperature profile (Fig. 4.6). Shifts in the internal temperature, used as a measured variable in the control scheme, are less drastic, producing a more even closed-loop response. This design study was undertaken using a tray-to-tray column model with rigorous thermodynamics in SPEEDUP, now a part of the Aspen Custom Modeler system. The ability to develop a rigorous model was essential here in order to achieve the two goals of improving environmental performance and analyzing the controllability. Ideally the ability to make structural changes to the process, in this case by adding and removing side-heaters and coolers, could be handled automatically, but this challenge still remains. The use of integer programming techniques would permit this, but they are not yet available in modeling systems.
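The response-time metric quoted in Table 4.2 can be extracted from any simulated closed-loop trajectory. The sketch below shows one way to do this in Python; the synthetic second-order response stands in for the column model and is purely an assumption for illustration.

```python
import numpy as np

def response_time(t, y, band=0.10):
    """Time after which y stays within +/- band of its final (new stationary) value."""
    y_final = y[-1]
    tol = band * abs(y_final) if y_final != 0 else band
    outside = np.abs(y - y_final) > tol
    idx = np.nonzero(outside)[0]
    if idx.size == 0:
        return t[0]
    return t[idx[-1] + 1]          # first time after the last excursion outside the band

# Synthetic underdamped second-order step response standing in for the column output
t = np.linspace(0.0, 2.0, 2001)    # time in hours
zeta, tau = 0.4, 0.1
wd = np.sqrt(1.0 - zeta**2) / tau
y = 1.0 - np.exp(-zeta * t / tau) * (np.cos(wd * t)
      + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t))

print(f"response time = {response_time(t, y):.2f} h")
```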
4.5.2 Case Study 2: Conceptual Design of Multiphase Reactor
In this case study we review very briefly a challenging program for developing a new process for producing primary aluminum (Johansen et al. 2000; Johansen and Aune 2002). The aim of this joint research program between ALCOA, ELKEM, and Carnegie Mellon is to develop a process for the production of primary aluminum which reduces capital, energy, and environmental costs by a significant amount relative to the Hall process (Motzfeldt et al. 1989). The process under consideration, carbothermic reduction, also uses electric heating, but it is more energy efficient and has very high volumetric productivity, leading to better economies of scale.

4.5.2.1 Carbothermic Reactor Modeling
Carbothermic reduction can be realized in a two-stage process with additional steps needed to purify the product, recover aluminum from off-gases, and recover heat. A diagram of the process is shown in Fig. 4.7; it is based on a reversible, multiphase and multispecies chemical reaction scheme. The kinetics of the reaction mechanism have not been completely finalized, but ionic species identification has been conducted and a reaction rate model has been proposed by Frank et al. (1989). The reactants are aluminum oxide (Al2O3) and carbon (C), the key reaction intermediates are aluminum suboxide (Al2O) and aluminum carbide (Al4C3), and the end products are molten aluminum (Al) and carbon monoxide (CO). The complexity is illustrated by the simultaneous coexistence of solids and gases in the multicomponent reactive slag.
Figure 4.7 Diagram of the carbothermic aluminum process.
1. The first stage is the pre-reduction smelting zone. Carbon and aluminum oxide pellets are continuously fed to a submerged arc smelter, where they melt, react, and form a molten slag contained in an inert-atmosphere, oil-cooled reactor. The reaction of aluminum oxide with an excess of carbon to form the Al4C3-rich slag of the first stage occurs at T > 1900°C.
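The stoichiometry usually quoted for this first-stage carbide-forming reaction (added here for clarity; it is standard carbothermic-reduction chemistry rather than an equation reproduced from the cited program documents) is:

$$2\,\mathrm{Al_2O_3} + 9\,\mathrm{C} \;\longrightarrow\; \mathrm{Al_4C_3} + 6\,\mathrm{CO}$$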
2. The second stage is the high-temperature reaction zone: the first-stage molten slag flows into this core stage (a multielectrode, high-temperature submerged arc reactor); the slag is heated to a higher temperature, avoiding the severe local surface superheating caused in open arc reactors. Liquid Al droplets and CO bubbles are rapidly generated, with concurrent Al4C3 injection to avoid carbon depletion. The decomposition of the first-stage, Al4C3-rich slag to form the second-stage, Al-rich phase occurs at T > 2000°C.
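The second-stage decomposition is commonly written with the standard stoichiometry below (again our addition, not an equation taken from the cited sources):

$$\mathrm{Al_4C_3} + \mathrm{Al_2O_3} \;\longrightarrow\; 6\,\mathrm{Al} + 3\,\mathrm{CO}$$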
3. The third stage is a vapor recovery reactor (VRR), where Al and Al2O vapors react with C to form Al4C3. Vaporization occurs as CO vapors sweep the second stage: unless the Al vapor is recovered counter-currently against the incoming solid feed, metal losses are catastrophic for the process economics. Undesirable vaporization is thus reduced by staging and by feeding the first- and second-stage gas streams to the VRR. The recovered Al4C3 (recycle stream) is re-injected into the reactor, minimizing Al emissions and maximizing yield. The counter-current flow exceeds the preheating needs of the incoming reactants, thus allowing for energy recovery via cogeneration.
4. The fourth stage of the process is the purification zone: liquid aluminum flows through an overflow weir to a flotation and skimming unit, where entrained Al4C3 and dissolved C can be removed via proprietary technology.
The goal of our research is to develop models for preliminary design, pilot plant scale-up, optimization of the process, and process control. To make best use of energy, aluminum vapors and their energy content must be recovered, preferably as aluminum carbide in the VRR. The approach we investigate solves the problem by feeding a carbon material at the top of the counter-current VRR. The carbon reacts with the aluminum compounds in a series of heterogeneous noncatalytic reactions forming solid and gas products. Liquid products, like molten slag, can be avoided by running the VRR at high temperature. The most important products are aluminum carbide, which is needed in the smelting stage of the aluminum process, and hot carbon monoxide gas, which can be used for cogeneration of electricity. Garcia-Osorio and Ydstie (2004) developed an unsteady-state reactor model which incorporates a shrinking-core, mass-transfer-limited reaction, separate mass and energy balances for the vapor, liquid, and solid phases, and dynamics distributed in space and time. The model equations take the form of conservation equations for the Al2O and Al vapor species (i = Al2O, Al).
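A generic sketch of a distributed species balance of this kind, written in our own notation and therefore only an assumption about the detailed form used by Garcia-Osorio and Ydstie (2004), is:

$$\frac{\partial c_i}{\partial t} + \frac{\partial\left(u_g\,c_i\right)}{\partial z} \;=\; r_i, \qquad i = \mathrm{Al_2O},\ \mathrm{Al}$$

where $c_i$ is the molar concentration of species $i$, $u_g$ the gas velocity along the reactor axis $z$, and $r_i$ the net rate of consumption by the heterogeneous reactions.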
Similar expressions were developed for the product species. Mass-transfer-limited diffusion was assumed for the solid-gas reactions. All liquid-phase reactions were assumed to be in equilibrium. The final model was solved using the method of lines, and a stiff (index 1) DAE solver was used to solve the resulting large-scale system of differential-algebraic equations. The model was matched to small-scale experiments and interfaced with proprietary thermodynamics describing the Al-O-C system for temperatures in the range 800 to 2050°C. It was found that the model predicted pilot plant results to within 10% and that it could be effectively used for sensitivity studies and scale-up. An example of a sensitivity study is reported in Table 4.3, where FC* is the dimensionless feed rate of carbon over the range [50-100%].
Table 4.3

FC*     Al2O (%)     Al (%)     Xc (%)
0.5     96           16         59
0.7     98           33         43
1       98           47         32
Gerogiorgis et al. (2002) and Gerogiorgis and Ydstie (2003a,b) developed a number of different CFD models to study the design of the aluminum-producing stage (stage 2). The objective was to solve the steady state PDE problems for the potential (V), field intensity (E), temperature (T), velocity (Ux, Uy), and pressure (P) in order to obtain reliable starting points for solving the molar balances for the species concentration profiles in a complete model suitable for performance evaluation. Constant thermophysical properties were assumed in many studies, although a temperature-dependent electric conductivity has been used to illustrate the strong coupling between the charge balance and the Joule heat generation term. The standard k-ε model of turbulence was used in the momentum balance in order to analyze the turbulent slag flow in the reactor. We also developed a two-phase flow model to study how the gas generation impacts the fluid flow and mixing characteristics. The finite element CFD model of the reactor has been solved for different geometries with quadratic finite element basis functions on a fine unstructured triangular grid (12,124 elements), using commercial simulation software (FEMLAB v. 2.3). Different FEMLAB multiphysics modules have been used and integrated for the simulations (Conductive Media DC, Convection and Conduction, the k-ε turbulence model, and a two-phase model based on a constant relative slip velocity). For the present case, nominal Reynolds numbers indicate slag flow well within the turbulent regime (Re ≈ 30,000). The steady state CFD problem with turbulence considers three PDE balances and the corresponding partial differential equations on a two-dimensional domain. The first part is the steady state electric charge balance:
$$\nabla^2 V = V_{xx} + V_{yy} = 0 \qquad (17)$$

The second part is the steady state heat balance:

$$\nabla \cdot \left(k\,\nabla T - \rho\,C_p\,T\,\mathbf{U}\right) + \sigma\,(\nabla V)^2 - k_0\,\exp\!\left(-\frac{E_a}{R\,T}\right)\Delta H = 0 \qquad (18)$$

The third part is the steady state momentum balance, which comprises the continuity and velocity PDE system:

$$\nabla \cdot \mathbf{U} = 0, \qquad
\rho\,(\mathbf{U}\cdot\nabla\mathbf{U}) - \nabla \cdot \left[\left(\mu + \rho\,C_\mu\,\frac{k^2}{\varepsilon}\right)\left(\nabla\mathbf{U} + (\nabla\mathbf{U})^{\mathrm{T}}\right)\right] = -\nabla P \qquad (19)$$

complemented with the two standard k-ε turbulence model transport equations, of which the k equation reads

$$\rho\,(\mathbf{U}\cdot\nabla k) - \nabla \cdot \left[\left(\mu + \rho\,\frac{C_\mu}{\sigma_k}\,\frac{k^2}{\varepsilon}\right)\nabla k\right] = \rho\,C_\mu\,\frac{k^2}{\varepsilon}\left(\nabla\mathbf{U} + (\nabla\mathbf{U})^{\mathrm{T}}\right)^2 - \rho\,\varepsilon \qquad (20)$$

with an analogous transport equation for ε.
The imposed voltage on all electrode tips (Vi, i = 1-6) is set, zero voltage is used on both long horizontal domain sides to approximate the potential in the third (lateral) dimension, and a zero-gradient condition (∇V = 0) is used on all other wall sides to account for the insulating behavior of solidified slag. The inlet slag (2173 K) and wall (473 K) temperatures are set, and insulation (∇T = 0) is assumed at all six electrode tips. An inlet vertical slag velocity is assumed (U0 = 0.01 m s-1), with a suitable wall function (Gerogiorgis and Ydstie 2003a) on the walls and all six tips. A slip boundary condition is used for the free slag surface and zero pressure has been assumed at the reactor outlet. A typical result for a two-dimensional simulation showing constant velocity contours is shown in Figs. 4.8 and 4.9. The simulation revealed several important aspects of the process that play a role in understanding the underlying physics and that impact design and scale-up. The simulation showed that short, submerged electrodes were preferable to long ones in order to obtain a better energy distribution and flexibility for controlling the process by moving the electrodes.
Figure 4.8 Single-phase flow model with constant velocity contours for steady state multiphysics CFD model. The slag moves under the electrodes and avoids the primary reaction zone. This problem can be solved by raising the floor of the reactor.
Figure 4.9 Two-phase flow model with major flows superimposed on small-scale movements leading to coarser scale representation of the system, with each zone being represented by a uniform mixing tank with inter-stage flow.
Our current studies focus on three-dimensional multiphase simulation to study the issues of back-mixing between stages 1 and 2 and the feasibility of controlling the flow between the stages.

4.5.2.2 MINLP Model for Electrode Heating System Design
One design challenge is to optimize the electrode positions and the imposed voltage profile so as to achieve the desired reaction advance without unnecessary reactor space or energy use. Obviously, dense electrode placement and high voltage result in excessive superheating, causing aluminum evaporation, major yield reduction and losses, while sparse electrode placement and low voltage fail to achieve adequate slag heating, resulting in limited conversion and low productivity. In order to address some of these issues, Gerogiorgis and Ydstie (2003b) developed a mixed-integer nonlinear programming (MINLP) finite volume model of stage 2. The goal is to perform electrode placement as well as imposed voltage profile optimization for maximization of Al production under mass, heat, and molar species balance constraints. The mathematical formulation is based on a steady state CSTR-series process model; each finite volume is assumed to be a CSTR with perfect separation of reactants and products. The advance of the reversible reduction towards Al (liquid) and CO (gas) is considered to be governed by the overall reaction proposed in the kinetic study of Frank et al. (1989). A typical result from the optimization procedure is shown in Fig. 4.10. The long-term objective of the research is to develop a multiscale hierarchy of models of the aluminum process, which can be used for pilot plant design, scale-up and optimization of the final process, including control studies. Towards this end we have developed a new approach for Web-based distributed simulation (Garcia-Osorio and Ydstie 2003), which will allow the solution of modules in a network representation of the process over the Web.
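A compact sketch of how an electrode-placement/voltage MINLP of this kind can be posed with Pyomo is given below. The slot count, bounds, surrogate energy balance, and cost coefficients are all invented for illustration; this is not the finite volume model of Gerogiorgis and Ydstie (2003b), and solving it requires an MINLP solver such as Bonmin to be installed.

```python
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.I = pyo.RangeSet(1, 6)                                # candidate electrode slots (finite volumes)
m.y = pyo.Var(m.I, domain=pyo.Binary)                   # 1 if an electrode is placed in slot i
m.V = pyo.Var(m.I, bounds=(0.0, 120.0))                 # imposed voltage in slot i [V], toy bound
m.T = pyo.Var(m.I, bounds=(1900.0, 2300.0))             # slag temperature in volume i [K]

T_feed, UA, sigma = 2173.0, 0.8, 0.05                   # invented surrogate parameters

# Voltage can only be imposed where an electrode is actually placed
m.link = pyo.Constraint(m.I, rule=lambda m, i: m.V[i] <= 120.0 * m.y[i])

# Toy steady state energy balance per volume: Joule heating balances losses to the walls
m.energy = pyo.Constraint(m.I, rule=lambda m, i: sigma * m.V[i] ** 2 == UA * (m.T[i] - T_feed))

# No more than four electrodes may be installed
m.count = pyo.Constraint(expr=sum(m.y[i] for i in m.I) <= 4)

# Toy objective: production grows with superheat, each electrode carries a capital penalty
m.obj = pyo.Objective(
    expr=sum(0.1 * (m.T[i] - T_feed) - 5.0 * m.y[i] for i in m.I),
    sense=pyo.maximize)

solver = pyo.SolverFactory("bonmin")                    # any MINLP solver can be substituted
if solver.available(exception_flag=False):
    solver.solve(m)
    print([round(pyo.value(m.V[i]), 1) for i in m.I], pyo.value(m.obj))
```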
Figure 4.10 MINLP model comparing a reactor with three electrodes (gray) and one with six electrodes. The optimal imposed voltage is shown on the left and the resulting absolute temperature profile on the right, both plotted against the finite volume number (i).
4.5.3 Case Study 3: Uncertainty Analysis in a Multiphase Reactor
Risk analysis methods for the quantification of uncertainty and for the identification and ranking of its major contributors can be integrated into the design process using CAPE tools. The risk analysis problem is a stochastic optimization of process performance under uncertainty and of tolerance to errors in the operating policy variables. This case study shows how this is applied to a multiphase reaction process where there is considerable uncertainty in the parameters. Stochastic optimization using complex models is a very computationally demanding problem, but it is now achievable on the desktop.
The general problem of design under uncertainty was stated by Swaney and Grossmann (1985) and has been solved by determining the largest possible operating window based on conservative assumptions or by using scenario-based approaches. Flexibility analysis assumes a "wait and see" approach, where it is assumed that the operating variables can be adjusted to cope with any possible scenario; this requires a two-stage optimization. The "here and now" approach takes design and operating variables together in a single optimization. A design was required for the production of a pharmaceutical intermediate, formed by the amination of a bromopropyl compound produced in an exothermic multiphase reactor. Sano et al. (1998) developed a kinetic model based on reaction calorimetry data obtained under laboratory conditions in order to determine the optimum feasible and safe operating policy. Solid particles of the active pharmaceutical ingredient (API) bromopropyl feed compound (A) reside in an organic solvent (methanol) inside the reaction vessel. A fixed volume of a 50 wt% aqueous dimethylamine reagent (B) is added to the vessel at a constant flow rate under continuous agitation. The solids gradually dissolve and react with the dimethylamine. The reaction is a parallel-series scheme in which the dimethylamine reacts with the dissolved API feed to form the desired intermediate (C), which in turn reacts with the active feed (A) to form a dimeric by-product (D) in parallel:

A + B → C   (main reaction)
A + C → D   (side reaction)
D is very difficult to remove in the downstream purification stages. The model assumes intrinsic first-order reaction kinetics. An initial rate-limiting period due to the dissolution of the solids (A) was observed to be independent of solvent concentration and agitation speed within the range of conditions examined. A crude approximation of first-order kinetics is assumed in the model for this dissolution-controlled period. This period was observed to last until approximately 55% conversion of A for all the conditions considered, at which point the reaction appeared to be limited by the intrinsic reaction kinetics. The kinetic model was integrated with a standard semibatch reactor model with constant volume addition (of reagent B). Consideration of the cooling capacity of the reactor resulted in a limiting relationship between the operating policy variables of feed B addition time, tadd, and isothermal temperature, Tiso. For the purposes of this study, this relationship is well approximated with Tiso as a quadratic function of tadd, since data regarding the energy balance are unavailable. The model equations used to describe this process are given in Johnson (2002).

4.5.3.1 Nominal Optimization
Sano et al. (1998) state that one of the objectives for the development of the model was to help determine the best operating conditions for maximum product yield, Yc.
A reaction time, tf, of less than 8 h (the reaction is terminated when the rate of conversion of A falls below 0.1%) and a final yield of the impurity, YD, of below 2% must be maintained. The optimization problem is:
maximize   YC   (with respect to Tiso and tadd)
subject to:
    the semibatch reactor model equations
    tf ≤ 8 h
    YD ≤ 2%
    288 K ≤ Tiso ≤ 313 K
    0.5 h ≤ tadd ≤ 3.0 h
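A minimal sketch of how a nominal optimization of this type can be set up with an SQP-type solver (SLSQP from SciPy) is shown below. The Arrhenius constants and initial charges are invented placeholders, the dissolution-limited period and the cooling-capacity relation between Tiso and tadd are omitted, and the batch is simply integrated to the 8 h limit; none of this reproduces the model of Sano et al. (1998) or Johnson (2002).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def run_batch(T_iso, t_add, t_end=8.0):
    """Toy semibatch model: reagent B fed at constant rate over t_add hours;
    A + B -> C (k1) and A + C -> D (k2). Parameters are illustrative only."""
    k1 = 5.0e8 * np.exp(-6000.0 / T_iso)          # invented Arrhenius expressions [L mol-1 h-1]
    k2 = 1.0e10 * np.exp(-8000.0 / T_iso)
    nA0, nB_tot, V = 1.0, 1.2, 1.0                # initial A, total B to be added, liquid volume

    def rhs(t, x):
        nA, nB, nC, nD = x
        feed = nB_tot / t_add if t <= t_add else 0.0
        r1 = k1 * (nA / V) * (nB / V) * V
        r2 = k2 * (nA / V) * (nC / V) * V
        return [-r1 - r2, feed - r1, r1 - r2, r2]

    sol = solve_ivp(rhs, (0.0, t_end), [nA0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    nC, nD = sol.y[2, -1], sol.y[3, -1]
    return 100.0 * nC / nA0, 100.0 * nD / nA0     # percentage yields of C and D

def neg_yield_C(z):                               # z = [T_iso, t_add]; SLSQP minimizes
    return -run_batch(*z)[0]

def impurity_margin(z):                           # YD <= 2 % expressed as 2 - YD >= 0
    return 2.0 - run_batch(*z)[1]

res = minimize(neg_yield_C, x0=[298.0, 1.5], method="SLSQP",
               bounds=[(288.0, 313.0), (0.5, 3.0)],
               constraints=[{"type": "ineq", "fun": impurity_margin}])
print("T_iso, t_add =", res.x, " max yield of C [%] =", -res.fun)
```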
The optimal results the authors determined through repeated simulation of their model are given in Table 4.4. The results of optimizing an equivalent model for maximum product yield subject to the stated constraints, using a nonlinear constrained sequential quadratic programming (SQP) optimization code, are shown to compare reasonably well (Table 4.4).

4.5.3.2 Consideration of Uncertainty in the Stochastic Model
Uncertainty in the model parameters could have a large effect on any results predicted by the model. This may be of particular importance for the optimal operating policy determined subject to the safety constraint and the desired limits on process performance. The uncertain parameters which have a nonnegligible influence on the yield of C, Yc, the yield of D, YD, and the final time, tf, are the kinetic rate law parameters, the transition point from dissolution-controlled to intrinsically controlled kinetics, and the isothermal temperature, Tiso. These parameters are assumed to be uncorrelated. The main results from the sensitivity analysis of the sample generated under the SQP nominal optimum conditions are shown in Table 4.5.
Table 4.4 Comparison between the optimal literature results (determined through repeated simulation) and the optimal results obtained with SQP optimization.

Sano et al. (1998): Yc = 97.1%, YD = 1.4%, tf = 6.7 h, Tiso = 298.0 K, tadd not given
SQP optimization: Tiso = 296.8 K, tadd = 1.79 h
Table 4.5 Main results of the sensitivity analysis under the SQP nominal optimum conditions.

             Ea1      A1       Ea2      A2
Yield C      -0.62    0.02     0.74     -0.03
Yield D      0.60     -0.02    -0.76    0.03
Final time   0.99     -0.03    0.03     -0.00
The results indicate that the activation energy parameters in the intrinsic reaction rate laws (Ea1 and Ea2) are the most strongly related to the observed uncertainty in the output criteria. The optimization problem under uncertainty aims to maximize the expected product yield (Yc) while limiting the expected violations of the constraints on the impurity yield (YD) and the final time (tf). The stochastic optimization problem is formulated as follows.
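A generic mean-variance form of such an objective, consistent with the weight x quoted below but written in our own notation (an assumption rather than the authors' exact expression), is:

$$\max_{t_{\mathrm{add}},\,T_{\mathrm{iso}}}\;\; x\,E\{Y_C\} \;-\; (1-x)\,\mathrm{Var}\{Y_C\}$$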
subject to the process model equations with stochastic parameters. The operating policy decisions, tadd and Tiso, are scenario independent, assuming the a priori "here and now" mode of control in which knowledge of particular realizations of the uncertainties is not assumed in the optimal solution. One-sided stochastic constraints are incorporated to maintain the desired limits on the impurity yield (YD less than 2.0%) and the final time (tf less than 8 h). Since certain realizations within the uncertainty space may result in values of the endpoint impurity yield and final time exceeding the desired limits, expected violations of these limits of 1.5% and 1 h, respectively, are permitted. This allows some tolerance with respect to the desired limits (Eviol, the summed extent of violation over the failing observations divided by the total number of observations), which permits the determination of optimal solutions that are not overly conservative. A stochastic optimization algorithm is used to solve the problem, in which a Hammersley sequence sampling scheme is used to place observations in the uncertainty space. A relaxed convergence criterion of ±2% deviation in the output distribution parameters is permitted to reduce the number of observations per objective function evaluation. The results of the optimizations under uncertainty in the key parameters are given in Table 4.6, where the value of the product yield mean-variance weight (x) is 0.5. It is clear that under the nominal optimal operating policy decisions the expected violation of the final time, Eviol{tf}, is significantly greater (4.49 h) than the desired limit (1 h). In addition, the expected yield of impurity, E{YD}, at 2.75% with an expected violation, Eviol{YD}, of 1.39% is not satisfactory. Comparing the results obtained when uncertainty was considered, shown in Table 4.6, it can be immediately seen that an improvement in the Yc mean-variance objective is achieved (largely due to the 47% reduction in the variance) while both the stochastic constraints on Eviol{tf} and Eviol{YD} are maintained. Large reductions in both the expected final time (2.35 h) and its 5-95% fractile width (5.17 h) are observed, with no significant loss in the expected product yield or increase in the impurity yield.
Table 4.6 Results of the optimizations under uncertainty.

Criteria                      Nominal optimal operation        Uncertain optimal operation
Scenarios                     456                              418
                              24.97                            0.234  [94.30, 26.9]  2.77  2.35  15.92  7.55  5.17
                              [0.59, 0.59]  [1.39, 4.49]       [0.53, 0.98]  [1.25, 0.05]
tadd (h)                      1.79                             1.12
Tiso (K)                      296.8                            312.4
Full statistical information can be obtained from the results of the stochastic optimization (Johnson and Bogle 2006). The use of this approach allows designers to explore the effects of uncertain parameters on the design of unit operations and of whole processes (Johnson 2002). In spite of the considerable computational time required, this can be achieved with relatively simple models. The approach can also highlight which parameters the design is most sensitive to, and which therefore may require better prediction or measurement before one can have confidence in the design.
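The sampling-based evaluation of expected values and expected violations described above can be sketched as follows. SciPy does not ship a Hammersley sampler, so a Halton sequence is used here as a stand-in, and the algebraic "process model" is an invented surrogate rather than the semibatch reactor model.

```python
import numpy as np
from scipy.stats import qmc

def surrogate_model(theta, t_add, T_iso):
    """Invented stand-in returning (Yc, Yd, tf) for one realization of two
    uncertain parameters theta scaled to [-1, 1]."""
    yC = 95.0 + 2.0 * theta[0] - 5.0 * (t_add - 1.5) ** 2
    yD = 1.5 + 0.8 * theta[1] + 0.03 * (T_iso - 300.0)
    tf = 6.0 + 1.5 * theta[1]
    return yC, yD, tf

def stochastic_objective(t_add, T_iso, n=128, x=0.5):
    sampler = qmc.Halton(d=2, seed=0)                   # low-discrepancy points in [0, 1)^2
    theta = qmc.scale(sampler.random(n), [-1.0, -1.0], [1.0, 1.0])
    out = np.array([surrogate_model(th, t_add, T_iso) for th in theta])
    yC, yD, tf = out[:, 0], out[:, 1], out[:, 2]
    objective = x * yC.mean() - (1.0 - x) * yC.var()    # mean-variance objective in Yc
    e_viol_yD = np.clip(yD - 2.0, 0.0, None).mean()     # expected violation of YD <= 2 %
    e_viol_tf = np.clip(tf - 8.0, 0.0, None).mean()     # expected violation of tf <= 8 h
    return objective, e_viol_yD, e_viol_tf

print(stochastic_objective(t_add=1.5, T_iso=300.0))
```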
4.5.4 Case Study 4: Biochemical Process Design
Much biochemical process development is done using simple tools such as BioPro Designer (Petrides 1994). This tool accounts for material and energy flows and is particularly useful for tracking the large amounts of water that are used. However, the incorporation of detailed models of fermentation is difficult, particularly if structured models are used, in which the organism that produces the product is modeled with different compartments inside the cell. This is necessary if there are separate phases in the cell, as is the situation in this case study, where the product forms as a solid granule or inclusion body (IB) inside the cell. In order to optimize the process it is necessary to consider this solid phase, since it will significantly affect the primary separation. Biochemical processes and some batch chemical plants consist of cascades of complex unit operations which process a physical stream to produce valuable products.
The product is produced in a fermenter, which is usually designed to maximize the biological yield of product. However, it is not always true that maximal biochemical yield will produce the most efficient yield of purified product, since effective separation also depends on the by-products that must be separated and sometimes on the physical form of the product. To consider these aspects it is again necessary to consider the fermenter design in tandem with the rest of the process. This can only be done effectively with CAPE tools. In this case study the product forms as an inclusion body within the cells, so the physical form of the product affects the efficiency of the process. The models must describe the size of the inclusion bodies as they grow and are separated, and so this study includes population balance techniques for modeling particle size. The process produces a protein which forms as inclusion bodies in recombinant E. coli cells in a fermenter operated in fed-batch mode and is purified by a chain of downstream processing units (Bogle, Hounslow and Middelberg 1991). This process is typical of the production of animal hormones such as bovine somatotropin (BST) and porcine somatotropin (pST), for which much of this work was done, and of synthetic insulin. A culture of genetically engineered cells is fully grown in a fermenter in a solution of various salts using glucose as a carbon source. After the initial growth phase the E. coli cells are induced either chemically or thermally, after which the protein is produced and grows as inclusion bodies within the cells. The inclusion bodies continue to grow in size until the cell population is killed by raising the temperature, yielding a slurry of cells each containing an inclusion body. Once the fermentation has been stopped, the cells within the broth must be concentrated in a filtration stage, in which some of the water is removed, and the cells disrupted in a high-pressure homogenizer to release the intracellular material, which includes the solid inclusion bodies. The homogenate is then diluted with water to improve its handling characteristics prior to centrifugation. In the centrifuge the dense inclusion bodies are collected as the concentrate, while the less dense cell debris is lost in the supernatant. Naturally, some inclusion bodies are lost and some debris will appear in the concentrate. Further purification stages are required prior to final formulation. The optimal operating conditions for a plant will depend on the characteristics of the product, in particular the number and size of the inclusion bodies. These characteristics can be measured after a fermentation and this information used to decide the optimal conditions. Simulations can be used to determine the timings of key events for optimal operation (Bogle et al. 1995). Here we investigate the optimization of the process given a known set of fermenter outlet conditions.

4.5.4.1 The Optimization Problem
A key tradeoff occurs in this process: inclusion body recovery and product purity vary in a conflicting manner when the operation of the centrifuge is used as a manipulated variable to improve performance (see Fig. 4.11). This results in an optimization problem: only one of the objectives can be optimized at a time by manipulating the centrifuge variables, yielding low values for the other objectives. This indicates that no overall process optimum with acceptable values for each objective can be attained with these specified variables, although both are of practical importance.
Figure 4.11 Tradeoffs between product purity, overall IB recovery, and productivity as a function of one operating variable, the centrifuge throughput (L h-1).
However, it is important that process interactions are not neglected when an overall process optimum is intended, and it is of particular interest to ensure that acceptable levels of recovery, purity, and productivity can be achieved when the entire process is considered simultaneously. The objectives can be optimized separately, or the whole process can be considered and all values of the objectives examined. The objective functions for the optimization of the BST production process are IB recovery, product purity, and process productivity. The time-invariant parameters and control variables and their limits are described in the next section; more details can be found in Graf (1996). In this work we chose to focus on the following operational case: a fermentation has taken place and an engineer wishes to determine the most economic operating conditions. The fermentation time was fixed at 14 h. The boundaries of the optimization variables are due either to practical limitations, e.g., for the settling areas and the throughputs, or to the lack of further experimental data, e.g., for the number of homogenizer passes.

4.5.4.2 Maximization of the Inclusion Body Recovery
The results of the optimization of the overall inclusion body recovery are given in Table 4.7. The results show that a maximum of 95% of all inclusion bodies can be recovered by manipulating the specified variables. The results from the simulation (Fig. 4.11) show the contrasting behavior of IB recovery and purity when variables influence the separator-centrifuge efficiency. This feature is confirmed by the optimization results. The purity is low, at 73.8%, when the IB recovery is optimized. A productivity of 1.021 kg BST h-1 can be achieved due to the high recovered mass of product. Each optimization variable has taken the value of one of its limits, except the separator-centrifuge settling area.
Table 4.7 Results for the maximization of the overall IB recovery. The table shows the values of the objectives and the optimization variables determined for the maximum.

Objectives: Overall IB recovery 95%; Purity 73.8%; Productivity 1.021 kg BST h-1; Overall process time 37.3 h; Final process volume 1304 L; Recovered mass of BST 38.1 kg.

Optimization variables: Harvester settling area 250000 m2; Harvester throughput 1200 L h-1; Number of passes 3; Dilution rate 4; Separator settling area 105837 m2; Separator throughput 1200 L h-1.
It was found during the simulations that the IB recovery increases with the number of homogenizer passes and with a higher dilution rate; the upper limits of these variables are thus the expected values. The IB recovery is maximized when the losses in the centrifuges are minimized, suggesting that the settling areas should take the values of their upper limits and the throughputs those of their lower limits. In the case of the separator-centrifuge settling area, the critical IB separation diameter is already so small that an increase of the settling area cannot influence the IB recovery any more, an effect which follows from the discretization of the IB size distribution: the critical IB separation diameter is already smaller than the smallest diameter of the discretized IB size distribution. Hence the rate of change of the IB recovery with a further increase of the settling area has fallen below the sensitivity of the gOPT optimization procedure, and the optimization has stopped, reporting values for the separator-centrifuge settling area which are not at the bounds. However, an IB recovery of 100% could not be achieved, due to losses in the first centrifuge and the nonideal separation efficiencies.
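The "critical separation diameter" referred to above is, in sigma-theory models of disc-stack centrifuges, the smallest particle size that is fully recovered at a given throughput. One common form of this relation, stated here as an assumption about the kind of centrifuge model used rather than as the exact model of this study, is:

$$d_{\mathrm{crit}} \;=\; \sqrt{\frac{18\,\mu\,Q}{\Delta\rho\,g\,\Sigma}}$$

where $Q$ is the throughput, $\Sigma$ the equivalent settling area, $\mu$ the liquid viscosity, and $\Delta\rho$ the density difference between the particles (inclusion bodies or cell debris) and the liquid. Increasing $\Sigma$ or decreasing $Q$ lowers $d_{\mathrm{crit}}$, which is why the optimizer drives these variables towards their bounds until $d_{\mathrm{crit}}$ falls below the smallest discretized size class.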
4.5.4.3 Maximization of the Product Purity
The values of the objectives and the optimization variables for maximizing the product purity are listed in Table 4.8. A product purity of 92.1% in the separator-centrifuge sediment can be achieved, with a concomitantly low overall IB recovery of 12.7% and a productivity of 0.163 kg BST h-1. These results demonstrate again the main feature of this type of process: the higher the purity, the lower the IB recovery and productivity. The values of the optimization variables show that the loss of cells in the harvester-centrifuge is minimized by the highest possible settling area and the lowest throughput. Three homogenizer passes and a dilution rate of 1.1 are the values expected from the simulation, whereas the values for the separator-centrifuge are not at the bounds. As found earlier with the discretized IB size distribution, any further change of the separator-centrifuge settling area and throughput would not increase the purity, since the critical diameter for cell debris is already bigger than the biggest discretized diameter of the cell debris size distribution.
Table 4.8 Results for the maximization of the purity. The table shows the values of the objectives and the optimization variables determined for the maximum.

Objectives: Overall IB recovery 12.7%; Purity 92.1%; Productivity 0.163 kg BST h-1; Overall process time 30.6 h; Final process volume 366 L; Recovered mass of BST 5.1 kg.

Optimization variables: Harvester settling area 250000 m2; Harvester throughput 1200 L h-1; Number of passes 3; Dilution rate 1.1; Separator settling area 92062 m2; Separator throughput 3203 L h-1.
4.5.4.4 Maximization of the Process Productivity

Table 4.9 shows the results for maximizing the productivity.
In this optimization an overall productivity of 1.16 kg BST h-1 and a recovery of 93.5% are obtained, whereas the purity is low at 55.7%. Again, the discrepancy between the IB recovery and productivity on the one hand and the purity on the other is demonstrated: a high productivity and recovery together with a low purity. These results clearly show that only one objective can be maximized at a time using the specified variables. The settling areas of both centrifuges are pushed to their upper bounds. This is expected, since the best recovery of IBs will increase the productivity, and the settling areas do not influence the overall processing time. Furthermore, the throughput of the first centrifuge is 1200 L h-1, giving the best cell and thus IB recovery. The values for the harvester-centrifuge settling area and its throughput have been the same for all three maximizations, suggesting that the influence of the harvester-centrifuge on the objectives is negligible as long as its efficiency is maximized.

Table 4.9 Results for the maximization of the productivity. The table shows the values of the objectives and the optimization variables determined for the maximum.
Objectives: Overall IB recovery 93.5%; Purity 55.7%; Productivity 1.16 kg BST h-1; Overall process time 32.7 h; Final process volume 815 L; Recovered mass of BST 37.9 kg.

Optimization variables: Harvester settling area 250000 m2; Harvester throughput 1200 L h-1; Number of passes 2; Dilution rate 2.5; Separator settling area 250000 m2; Separator throughput 1811 L h-1.
The values for the number of homogenizer passes, the dilution rate, and the separator-centrifuge throughput are the values previously found to be suboptimal for the productivity and were thus expected.

The objectives IB recovery, purity, and productivity have been maximized separately. A maximum IB recovery of 95%, a purity of 92.1%, and a productivity of 1.16 kg BST per hour of processing time can be achieved in separate optimizations over the specified variables. The optimization results confirm the previously identified feature of this type of biochemical process: the contrasting behavior of IB recovery and purity. It has been demonstrated that only one of these objectives can be optimized, with concomitantly low values of the other objectives, when operational and design variables are used. We have found that higher levels of each objective can be reached if structural changes are made, in particular by adding a recycle around the centrifuge. An acceptable compromise can thus be found with high values of the objectives.

These results have been obtained assuming a set of experimental data from one of the many fermentations undertaken. To achieve the best process design, the ideal would be to operate the fermenter in such a way as to allow the inclusion bodies to grow as much as possible and then to obtain the cleanest possible split between inclusion bodies and cell debris. Models have been developed for the nucleation and growth of the inclusion bodies using population balance models (Graf 1996), as well as for the growth of the host organism. These have also been implemented, but a formal optimization is of limited value: it will inevitably push the fermentation time to its arbitrary upper limit, since the stopping of inclusion body growth has not been modeled and is not understood.

The unit operation and process design can be achieved by the use of CAPE tools, but this case study shows how the development of designs in many new areas needs to be done in close conjunction with laboratory and pilot plant work. This is especially true when designs are being optimized, since optimization very often pushes designs to the boundary that is least understood physically. This should then lead to a planned and directed new experimental program to help in understanding the key process bottlenecks.
4.5.5 Case Study 5: Design of Fluid Bed Reactor
The microelectronic and photovoltaic (PV) industries require high-purity silicon feedstock for the production of integrated circuits and solar cells. Unfortunately, the cost of producing high-purity silicon, around $40-65 kg-1, is too high for the photovoltaic industry. Since their silicon purity requirements are not as restrictive, PV manufacturers have so far been able to use silicon that does not meet quality specifications as feedstock for solar cell production. Yet this source of feedstock is diminishing. The annual rate of growth of the total installed capacity of PV systems varied between 20% and 40% from 1992 to 2001 (Ydstie 2004), whereas the growth in the electronics industry is 10% or less.
Continued growth in the PV market is therefore contingent on new sources of solar-grade silicon. A promising method of affordable solar-grade silicon production involves the thermal decomposition of SiH4 (silane) inside a fluid bed reactor (Lord and Milligan 1998). A diagram of the reactor is shown in Fig. 4.12. The silane feed is preheated and delivered to the reactor in a single jet or multiple jets through the primary injector at the bottom of the reactor. A secondary injector is used to stabilize the jet. Resistance wall heaters are used to maintain the reactor temperature. Hydrogen, a product of the reaction, and the fluidizing gas are vented out of the top. Silicon particles can be removed as product from an outlet close to the bottom of the reactor. The objective of our study is to develop a model suitable for control system design, scale-up, and process optimization.

4.5.5.1 Model Development
The fluid bed population balance model (White and Ydstie 2004) assumes that silane decomposition in the fluid bed reactor occurs according to the overall reaction

SiH4 → Si + 2 H2    (25)
The analysis of solid-phase behavior is presented first, and the new kinetic expressions are highlighted. To predict changes in silicon particle size, the model tracks particle movement to and from different intervals. This is accomplished by using mass and number balances over each size interval with constitutive equations to define particle growth. A representation of particle movement between size intervals is shown in Fig. 4.13.
Figure 4.13 Progression of silicon particles through size intervals.
Changes in the particle size distribution can be described with a mass balance over each size interval. Ignoring the potential for particle aggregation initially allows the change in number of moles of silicon in an interval (Mi) to be defined as: d Mi dt
-= j - 1
-$
+ rxn;.
So to predict the size distribution, it is necessary to define molar flow due to particle growth from one interval to the next If;) as well as the number of moles added to each size interval due to silane decomposition (rxni).This equation is then combined with a number balance and combined with rate expressions for chemical reaction. Since reaction occurs through heterogeneous and homogeneous pathways, the overall reaction rate is considered to be the sum of the rate expressions for both pathways. r = rhet - rhom
(27)
Rate expressions for both the heterogeneous and homogeneous reactions can be obtained from Furusawa et al. (1988). The reaction rate expressions require knowledge of silane and hydrogen concentrations, which can be determined from a gas phase model of the bed zone of the reactor which behaves as a CSTR so that we can write:
where V, is the volume of gas in reactor (L); C,,is the concentration of silane leaving reactor (mol L-'); cSi is the concentration of silane into reactor (mol L-'); C h o is the concentration of hydrogen leaving reactor (mol L-I); Fin is the flow into reactor (L s 8 ) ; and F,,, is the flow leaving reactor (L s-l). The model has been further modified to account for particle aggregation using binary interactions so that:
The value of ki,ican be estimated by comparing model predictions with experimental data. A typical model result is shown in Fig. 4.14.
I
415
416
4,
I
4 Equipment and Process Design
experiment
1i
i
i
-0- model -c
2-:.........
0 ...........
0
4
~
3
'7
; . wi , I i j
5
..-- 1
2
~
.
.....
0.5 ~
1
i
0
1
0.5
10'~
Radius, x ~ o - 3 ...............,
0
0.5
0.5
1
4 !-.
1
0
1
0
0
0.5
lo9 4
....
.......................
0.5
0.5
1
.
,
1
0
lo9 Figure 4.14 Experimental ( x ) and modeled no aggregation or loss of powder.
~~
............
j
0
0.5
0.5
x 10'~ ( 0 ) size
distribution with
4.6 Conclusions
The five case studies have shown how CAPE tools can be used to design thermodynamically and physically complex industrial units and flowsheets. Increasingly, this level of detail is being demanded by designers, especially where tight quality, safety, and environmental requirements must be met. In all cases there has been a clear need for sophisticated modeling tied closely to bench-scale or pilot plant experimental systems, to ensure that the right data are obtained to the right degree of accuracy. Some of this work has been done with commercial tools and some with bespoke programs. But it is clear that more sophisticated systems are required to facilitate the rapid development of such models involving complex thermodynamics, irregular geometries, and particulates. These challenges remain. Systems which guarantee thermodynamic consistency are required as well, and systems incorporating the systematic and efficient assessment and minimization of uncertainty, and its use to guide experiments, remain a challenge to the developers of CAPE tools.
References 1 Bogle I. D. L., Cockshott A. R., Bulmer M.,
2
3
4
5
6
7
8
9
10
Thornhill N., Gregory M., and Deghani M. (1995)A Process Systems Engineering View of Biochemical Process Operations. Comput. Chem. Eng. 20(6/7),943-949. Bogle I. D. L., Hounslow M. I. and ,Middelberg A. P. J . (1991) Modelling of inclusion body formation for optimisation and recovery in a biochemical process. In Proceedings of 4th International Conference on Process Systems Engineering, Montebello, Canada. Bogle I. D. L. and Cameron D. (2002) CAPE tools for off-line simulation, design and analysis. In B. L. Braunschweig and R. A. Gani (eds.) Software Architectures and Toolsfor Computer-aided Process Engineering, Elsevier, Amsterdam. Callen H. B. (1985) Thermodynamics and an Introduction to Thermostatics, Wiley, New York. Eggersmann M., Hackenberg]., Marquardt W., and Cameron I. T. (2002)Applications of modelling - a case study from process design. In B. L. Braunschweig and R. A. Gani (eds.) Software Architectures and Toolsfor Computer-Aided Process Engineering, Elsevier, Amsterdam. Farschman C. A,, Viwanath, K. P., and Ydstie B. E. (1998) Process Systems and Inventory Control. AIChEj. 44(8). 184-1855. Frank R. A,, Finn C. W. and ElliottJ. F. (1989) Physical Chemistry of the Carbothermic Reduction of Alumina in the Presence of a Metallic Solvent: 2. Measurements of Kinetics of Reaction. Metall. Mater. Trans. B 20(2), 161. Furusawa T., Kojima T. and Hiroha H. (1988) Chemical Vapor Deposition and Homogeneous Nucleation in Monosilane Pyrolysis within Interparticle Spaces: Application of Fines Formation Analysis to Fluidized Bed CVD. Chem. Eng. Sci. 43(8), 2037-2042. Garcia-Osorio V. and Ydstie B. E. (2003) Distributed, Asynchronous and Hybrid Simulation of Process Networks Using Recording Controllers. rat. J . Robust Nonlinear Control 14, 227-248. Garcia-Osorio V. and Ydstie B. E. (2004) Vapor Recovery Reactor in Carbothermic
11
12
13
14
15
16
17
18
19
Aluminum Production. Chem. Eng. Sci 59(10), 2053-2064. Gerogiorgis D. I., Ydstie B. E. and Seetharaman S. S. (2002)A steady state electrothermic simulation analysis of a carbothermic reduction reactor for the production of aluminium. In Cross M., Evans ]. W. and Bailey C. (eds.) Computational Modeling of Materials, Minerals, and Metals Processing: 273; Warrendale, PA (TMS). Gerogiorgis D. I. and Ydstie B. E. (2003a) A finite element CFD sensitivity analysis for the conceptual design of a carbothermic aluminum reactor. In Crepeau P. (ed.), Light Metals 2003: 407; Warrendale, PA (TMS). Gerogiorgis D. I., et al. (2003) Process Systems Tools for Design and Optimization of Carbothermic Reduction Processes. In Das S . K. (ed.) Aluminum 2003, Warrendale, PA (TMS) (in press). Gerogiorgis D. I. and Ydstie B. E. (2003b) An MINLP model for conceptual design of a carbothermic aluminum reactor. In Proceedings of European Symposium on ComputerAided Process Engineering (ESCAPE-13), Lappeeranta, Finland, pp. 131-136. GrafH. (1996) Modelling and Optimisation of an Inclusion Body Type Biochemical Process. Externe Diplomarbeit (Universitat Dortmund) Project report, Dept of Chemical Engineering, University College, London. GrafH. and Bogle I. D. L. (1997) Simulation as a tool in at-line prediction and control of biochemical processes. In Proceedings ofthe 1st European Conference on Chemical Engineering, ECCEl Florence, Italy. Grossmann 1. E., Westerberg A. W. (2000), Research challenges in process systems engineering. AIChEJ. 46, 1700-1703. HagemannJ., Fraga E. S. and Bogk 1. D. L. (2001) Distillation column design for improved exergy utilisation and its impact on closed loop performance. In Proceedings of the World Congress of Chemical Engineering, Melbourne, Australia, Sept 2001. Hangos R and Cameron I . T. (2001) Process Modelling and Model Analysis. Academic Press, New York.
I
417
418
I
4 Equipment and Process Design 20 Ingram G. D., Cameron I. T., Hangos K. M.
21
22 23
24
25
26
27
28 29
30
31
32
(2004) Classification and Analysis of Integrating Frameworks in Multiscale Modelling. Chem. Eng. Sci 59(11), 2171-2187. Johansen K., et al. (2000) Carbothermic aluminum. In Proceedings ofthe Sixth Intemational Conference on Molten Slags, Fluxes and Salts (paper #192), Stockholm, Sweden (CD edition). /ohansen K. and Aune J. A. (2002) US Patent 6,440,193 to Alcoa Inc. & Ekem ASA. Johnson D. B. (2002) Integrated design under uncertainty for pharmaceutical processes. PhD Dissertation, University of London. Johnson D. B. and Bogle I . D. L. (2006) An approach for integrated design under uncertainty of pharmaceutical processes. Reliable Computing (in press). Kaibal G. (1990) Energieintegration in der thermische verfahrenstechnik. Chem. Ing. Tech. 62(2), 99-106. Kaibel G., Blass E., and Kohler]. (1989) Gestaltung destillativer Trennungen unter Einbeziehung thermodynamischer Gesichtspunkte. Chem. Ing. Tech. 61(1), 16-251. Kjelstrup S. and Hafskjold B. (1996) Nonequilibrium molecular dynamic simulations of steady state heat and mass transport in distillation. I d . Eng. Chem. Res. 35, 4203-4213. Lord S. M. and Milligan R. J. (1998) Method for Silicon Deposition, US Patent 5,798,137. Luyben W. (1990) Process Modelling Simulation and Controlfor Chemical Engineers. McGraw-Hill, Boston. Mangold M.,Motz S., Gilles E. D. (2002) Network theory for the structured modelling of chemical processes. Chem. Eng. Sci. 57, 4099-4116. Motzfeldt K., Kvande H., Schei A., Grjotheim K. (1989) Carbothermal Production of Aluminum - Chemistry and Technology. A1 Verlag, Dusseldorf, Germany. O h M. and Pantelides C. C. (1994) A Modelling and Simulation Language for Combined Lumped and Distributed Parameter Systems. In Proceedings ofthe 5th International Conference on Process Systems Engineering, Kyongju, Korea 1. 37-44.
33 Petrides D. P. (1994) Biopro designer - an
34
35
36
37
38
39
40
41
42
43
advanced computing environment for modelling and design of integrated biochemical processes. Comput. Chem. Eng. 18, S621-S625. Ratkje S. K. E., Hansen E. M., Lien K. M . and Hafskjold B. (1995) Analysis of entropy production rates for design of distillation columns. Ind. Eng. Chem. Res. 34(9). 3001-3007. Sano T., Sugaya T. and Kasai M . (1998) Process improvement in the production of a pharmaceutical intermediate using a reaction calorimeter for studies of the fraction kinetics of amination of a bromopropyl compound. Organic Res. Develop. 2, 169-174. Sauar E., Ratkje S. K. and Lien K. M. (1996) Equipartition of forces - a new principle for process design and optimization. Ind. Eng. Chem. Res. 35(11), 4147-4153. Sauar E., Rivero R., Kjelstrup S. and Lien K. M.(1997) Diabatic column optimization compared to isoforce columns. Energy Consew. Mgmt. 38(15), 1777-1783. Sauar E. and Ydstie B. E. (1998) The temperatures of the maximum reaction rate and their relation to the equilibrium temperatures. /. Phys. Chem. A 102,8860-8864. Skogestad S. (1997) Dynamics and control of distillation columns. Trans. IChemE Part A 75, 539-562. Swaney R. E. and Grossmann I. E. (1985) An index for operational flexibility in chemical process design. Part 1 Formulation and theO T ~ AIChEJ. . 31(4), 621-630. Tondeur D. and Kuaalen E. (1987) Equipartition of entropy production - an optimality criterion for transfer and separation processes. Ind. Eng. Chem. Res. 26(1), 50-56. White C. and Ydstie B. E. (2004) Modeling the Fluid Decomposition of Silane to form Photovoltaic Grade Silicon in a Fluid Bed Reactor, Technical Report, Carnegie Mellon, Chemical Engineering. Ydstie B. E. (2004) Decision making in complex organizations: the Adaptive Enterprise. Comput. Chem. Eng. (in press).
5 Product Development
Andrzej Kraslawski
5.1 Background
Over the past 20 years, the changing political situation and the advent of new technologies have had a profound impact on societies in the vast majority of countries all over the world. This influence has been manifested, among other things, by the growing demand for totally new products and services. This trend has resulted in mergers and acquisitions of many businesses at a scale not previously observed. The changes in the organization of businesses have very deeply influenced the manufacturing and processing industries. The main consequences have been the lowering of profit margins of commodities, growing pressure on the environmental aspects of production and the conservation of raw materials, very high costs of research and development (R&D), the significance of low-tonnage and high-added-value products, the growing importance of customer-oriented products, the reduction of the time between development and bringing the product to the market, the shortening of product life cycles, and issues related to intellectual property rights. All of these factors have caused a visible change in the processing industries. It was manifested by the move from process-oriented to product-centered businesses. The consequence of this switch is the emergence of interest in product development. The activity has been known to industry for many years. However, its importance and the need for a more systematic approach have only been realized in the last five to eight years. It is obvious that the driving force for the generation of new chemical products is higher satisfaction of existing needs or fulfillment of new ones. However, it is hard to imagine even a fraction of those needs when realizing that the largest substance database, the Chemical Abstracts Service (CAS), contains around seven million commercially available chemicals, and around 4000 new substances are added each day. The generalization of the product concept is captured by the multilevel approach proposed by Kotler (1989). The product is composed of core, tangible, and augmented parts, as presented in Fig. 5.1.
Figure 5.1 The concept of product (Kotler 1989).
The core product assures the minimal fulfillment of the customer's needs. It is a physical component of the product, e.g., paint or adhesive. The tangible and augmented parts of the product are composed of physical as well as immaterial elements, e.g., services. The activity of CAPE specialists related to product development concentrates on the analysis of the function-property-composition relations (Villadsen 1997; Rowe and Roberts 1998a; Cussler and Moggridge 2001; Gani and O'Connell 2001; Wesseling 2001; Charpentier 2002). However, when looking at Kotler's concept of the product, it is clearly visible that such an approach corresponds exclusively to the core product, and there is still a lot to be done by the CAPE community with respect to the tangible and augmented elements of the product. The consequence for the process engineer is a need to look not exclusively at the composition or form of the core product but also at its function in a more general context, e.g., chemical sensors in the package that inform about the state of the core product, or degradability facilitating the elimination of the product. The behavior of the product on the market is described by the concept of the product life cycle. The concept, introduced by Buzzell (1966) and later much criticized (Dhalla and Yuspeh 1976), is still a good tool for determining where the product is in its development path and the consequences for its engineering. The generalized product life cycle is presented in Fig. 5.2. The analysis of a product life cycle shows that the involvement of process engineers changes during its duration. The most intensive design activities of process engineers in the product development phase are later replaced by involvement in the production phase and in incremental changes of the product.
Figure 5.2 Product life cycle (Baker and Hurt 1999).
The last phase, elimination, is also a phase where process engineers have to be involved to ensure the required environmental, legal, and technological compromise. The first phase of the product life cycle, new product development (NPD), is a type of activity where process engineers have always been active. However, the structural changes of many industries, mentioned at the beginning of this chapter, have imposed a need for more comprehensive participation of process engineers in NPD. The involvement of process engineers is easier to structure by analyzing the theories of new product development. Saren (1984) has introduced four types of NPD models: the departmental stage model, the activity-based approach, the decision stage model, and the conversion process. The activity-based approach developed by Booz, Allen and Hamilton (1982) seems to be the most adequate for illustrating the involvement of process engineers in product development. The modification of the activity-based model presented by Ulrich and Eppinger (2000) is the starting point for the analysis of the methods and tools to be used by process engineers in NPD. The generic activity-based model of product development is composed of the phases presented in Fig. 5.3. Chemical product development is a process composed of the definition phase and product design, as presented in Fig. 5.3. The very essence of product development is the identification of how needs can be satisfied by the chemical and physical interaction of substances and by the chemical composition and structure of those substances. It means the determination of the needed (expected) function of the product, the translation of the required function (adhesiveness, chemical impact, smell, taste, elasticity, etc.) into properties such as viscosity, density, color, smell, microstructure, etc., and finally the identification of the single chemical component (its molecular structure) or mixture possessing those properties. In the definition phase, specialized business strategy and market research tools are used. Business strategy tools are applied in the planning phase to outline what should be produced and for whom.
Figure 5.3 Chemical product development (Ulrich and Eppinger 2000).
Market research tools are used to identify the needs and requirements concerning the predefined class of products and the market segment. In the CAPE community, product development was traditionally reduced to product design and, due to the character of the chemical industry, product design was equivalent to formulation. Formulation is defined as the blending/mixing and processing of ingredients in order to obtain a product characterized by precisely determined properties (specifications). The main design problem in formulation is finding the relation between the composition and structure of the mixture, as well as the type and conditions of its processing, and the required final properties of the product. The great variety of products is a source of fundamental difficulties in generalizing product formulation. Usually the know-how in formulation is restricted to a narrow class of products (e.g., specialized soaps or rubbers). Knowledge of formulation for one type of product is not easily transferable to other types of products. The high specialization and the very complicated physical and chemical phenomena related to the processing of the ingredients, as well as their interactions, have resulted in a characteristic development of methods and tools used in formulation and, more generally, in product design. There are three main classes of methods applicable to product design:
- experimental design
- knowledge-based methods
- computer-aided molecular design.
Historically, the oldest method of product design was experimental design, which evolved into sophisticated statistical methods capable of handling very complicated mixing and processing problems. The interest of the CAPE community has recently been attracted to the statistical analysis of processes that can lead to the identification of new products. The attractiveness of experimental design has been strongly limited by the considerable costs and time needed for carrying out the experiments and for the subsequent analysis of the data. The trend towards limiting the cost and time of development, and the existence of a huge amount of data and information related to the development of new products and stored by companies, were the main factors promoting the application of knowledge-based methods. The common feature of these methods is the use of historic data and information and the generation, on this basis, of new knowledge applicable to the actual product design problems. The proposed methods and tools are: rule-based systems, neural networks, genetic algorithms, case-based reasoning systems, TRIZ, and data mining. The last group of methods is related to computer-aided molecular design. This approach has been developing dynamically over recent years. An important contribution of the CAPE community in this field was recently reported by Achenie, Gani, and Venkatasubramanian (2003). There are two approaches to product design (Fig. 5.4). Product design can be formulated as a forward problem if the design starts from the given structure of a molecule or composition of a material and aims at determining the properties of this material. The second approach, considering design as a reverse problem, starts with the given properties of a material and finds the molecular composition fulfilling the requirements. Experimental methods are an example of the forward problem formulation. The knowledge-based methods as well as computer-aided design can be used in the forward or reverse problem formulation. This chapter is limited to the description of experimental and knowledge-based methods. It covers the last 10 years and does not include a review of computer-aided molecular design. Subjects relatively new to the CAPE community, such as quality function deployment, case-based reasoning, and the TRIZ method, are described in more detail. The application of other methods and tools is reviewed in the context of the designed product type, i.e., catalysts, dyes, rubbers, etc.
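To make the distinction between the two formulations concrete, the following minimal sketch contrasts them. The property model and the candidate mixtures are illustrative assumptions only, not taken from the chapter; in practice the forward model would be replaced by experiments, regression models, or the neural networks discussed later.

```python
# Minimal sketch of the forward vs. reverse product design formulations.
# The property model and candidate compositions are illustrative assumptions.

def predict_properties(composition):
    """Forward problem: map a mixture composition to product properties."""
    solvent, polymer, additive = composition
    viscosity = 0.5 * solvent + 3.0 * polymer + 1.2 * additive   # toy linear model
    density = 0.9 + 0.2 * polymer
    return {"viscosity": viscosity, "density": density}

def reverse_design(target, candidates, tolerance=0.1):
    """Reverse problem: search candidate compositions whose predicted
    properties fall within a tolerance of the target specification."""
    feasible = []
    for comp in candidates:
        props = predict_properties(comp)
        if all(abs(props[k] - v) <= tolerance * abs(v) for k, v in target.items()):
            feasible.append(comp)
    return feasible

if __name__ == "__main__":
    # Candidate compositions: fractions of (solvent, polymer, additive).
    grid = [(s / 10, p / 10, 1 - s / 10 - p / 10)
            for s in range(11) for p in range(11 - s)]
    print(reverse_design({"viscosity": 1.5, "density": 0.95}, grid))
```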
Figure 5.4 Forward and reverse formulation of the product design problem.
5.2 Definition Phase
Identification of consumer needs is a field where market research has the last word. The comments often found in the process engineering literature about the "identification of the consumer voice by the use of questionnaires" are misleading; in fact quite different questions should be asked. Although direct contact with the customer is not a core speciality of process engineers, a basic knowledge of the methods and the applicable tools is very important. Attention should be focused on two types of tools. The first group is composed of the tools enabling not only identification of the explicitly expressed requirements but also their translation into engineering characteristics of the materials. The second group focuses on the identification of future needs, which are not yet foreseen by the great majority of consumers. Both activities are aimed at positioning the future product with respect to customers, competitors, and regulators.
5.2.1 Translation of the Requirements into the Parameters
The capturing of the requirements for a new product is composed of information gathering, information transformation, and finally the generation of the requirements. The specialized techniques are the domain of market research, and many notable books deal with the subject, e.g., Bruce and Cooper (2000). There are several methods and tools related to the quantification of the product attributes expressed by the customers and their translation into the engineering variables of the product. The most common are conjoint analysis and quality function deployment. Conjoint analysis is used to determine the desired level of the product attributes and their influence on various business decisions. It enables fixing of the tradeoffs between the requirements. A detailed description is presented in Baker and Hurt (1999). Quality function deployment is a more popular method enabling determination of the engineering variable values of the product, identification of the attributes and variables to be improved, and positioning of the product with respect to the competitors.
Quality Function Deployment
The quality function deployment (QFD) method enables the conversion of the needs of the customers into the product design variables (ReVelle et al. 1998). QFD is composed of four steps:
- identification of the attributes essential for the consumer when evaluating the product, and determination of the relative importance of the attributes;
- determination of connections between the attributes identified by the consumer and the design variables;
- estimation of the target values of the design variables satisfying the needs of the consumers;
- assessment of the degree of satisfaction with the existing products (designs).
QFD is realized by the tool called the House of Quality (Hauser and Clausing 1988), presented in Fig. 5.5. The basic form of the House of Quality is used not only for the translation of the customer needs into product design requirements but is also applied to the systematic identification of product features. This evaluation can be based on the assessment of the competitiveness of the given product with regard to the other products on the market, or on the identification of tradeoffs using a correlation matrix. An application of the House of Quality to chocolate couverture is given by Viaene and Januszewska (1999) (Fig. 5.6). The analysis, conducted by the use of questionnaires, revealed that the most important characteristics applied by the customers in the evaluation of the chocolate couverture are flavor, appearance, and texture. Those attributes were introduced into the "what" part of the House of Quality (Fig. 5.6). There are parameters of the couverture that are measured by instrumental analysis (acidity level, sugar content, fat content, hardness, particle size, adhesiveness). Moreover, there is a group of sensory variables: color intensity, color brightness, texture on surface, texture on snap, melting in hand, aroma, acidity, bitterness, cocoa body, fruitiness, sweetness, smokiness, first bite, oily mouth coating, aftertaste, adhesiveness, smoothness, melting in mouth.
Figure 5.5 The House of Quality.
Figure 5.6 House of Quality for the chocolate couverture (Viaene and Januszewska 1999).
Both the variables determined by the instrumental analysis and the sensory variables are introduced into the "how" part. Several samples were evaluated by the customers. Every sample had different values of the variables related to the instrumental and sensory analyses. In the next step, the relations between the attributes proposed by the customers and the variables from the "how" part are determined using statistical analysis of the results of the questionnaire. The relations between the attributes and the variables are expressed as strong positive, positive, neutral, negative, or strong negative. They are symbolically introduced into the central part of the House of Quality. Next, based on the importance of the attributes given by the customers and the technical knowledge obtained from the analysis of the relations between the attributes and variables, the target values for the variables are determined and introduced at the bottom of the House. The roof of the House represents the interrelations between the design variables. It allows the study of the tradeoffs between them.
On the right side of the House, the positioning of the given product with respect to the products of the competitors is given. It allows for the identification of the weakest attributes of "our" product with respect to the competitors. Detailed information on the application of QFD to product development is given in ReVelle et al. (1998).
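The central House of Quality calculation can be sketched as follows: customer attributes are weighted by importance and propagated through the relationship matrix to rank the engineering variables. All attribute names, weights, and relationship strengths below are illustrative assumptions, not data from Viaene and Januszewska (1999).

```python
# Minimal House of Quality sketch: rank engineering variables by how strongly
# they relate to the customer attributes, weighted by attribute importance.
# Attributes, variables, weights, and relationship scores are illustrative only.

attributes = {"flavor": 5, "appearance": 3, "texture": 4}        # importance 1-5

# Relationship matrix: attribute -> {engineering variable: strength 0/1/3/9}.
relationships = {
    "flavor":     {"sugar content": 9, "fat content": 3, "particle size": 1},
    "appearance": {"sugar content": 1, "fat content": 3, "particle size": 3},
    "texture":    {"sugar content": 1, "fat content": 9, "particle size": 9},
}

def variable_priorities(attributes, relationships):
    """Return engineering variables ranked by importance-weighted relationship score."""
    scores = {}
    for attr, weight in attributes.items():
        for var, strength in relationships[attr].items():
            scores[var] = scores.get(var, 0) + weight * strength
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

print(variable_priorities(attributes, relationships))
# -> [('fat content', 60), ('sugar content', 52), ('particle size', 50)]
```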
5.2.2 Forecasting Methods for New Products
The forecasting of new products is an activity where various methods are applied, e.g., expert judgment, brainstorming, one-to-one interviews with customers and salespeople, surveys of customers, idea generation, etc. A complete review is given by the members of the TFAM working group (2004). The methods of new product forecasting can be divided into market-oriented and product-oriented. The market-oriented methods deal with the information collected from the analysis of the consumers' behavior, their needs, and economic factors related to the existing products. They use complicated statistical tools, mostly penetration models; however, new approaches emerge, e.g., chaos theory (Phillips and Kim 1996). The market-oriented methods of new product forecasting are mainly the domain of market analysts. However, the obtained results are not encouraging, as the estimates of the sales of new products have errors reaching 50-60% (Kahn 2002).
5.2.2.1 Idea Generation
The phase of idea generation is a key point for engineers interested in the enhancement of their creativity. There are two major groups of methods supporting idea generation: intuitive and analytical. The most common unstructured intuitive methods applied to engineering problems are explained below.
Brainstorming
Brainstorming is the most popular creativity enhancement method. Originally introduced by Osborn (1953), it is based on four rules: (1) evaluation of ideas must be done later; (2) the quantity of the generated ideas is most important; (3) strange and "wild" proposals are encouraged; (4) improvement and combination of the generated ideas is welcomed. There are many types of brainstorming: stop and go, the Gordon-Little variation, the trigger method, etc. A detailed description is given in Proctor (2002).
Synectics
The objective of the method developed by Gordon (1961) is to look at the problem from a different perspective (to make the familiar strange and vice versa). To achieve this, a set of four analogies (personal, direct, symbolic, and fantastic) and metaphors is used. A detailed description of the method is presented in Proctor (2002).
Lateral Thinking
Lateral thinking is a group of methods introduced by De Bono in 1970, based on an unconventional perception of the problem. The main factors enabling lateral thinking are: identification of the dominant ideas, searching for new ways of looking at the problem, relaxation of the rigid thinking process, and the use of chance to encourage the emergence of other ideas. The most common analytical methods applied to engineering problems are as follows.
Morphological Analysis
This method was introduced by Zwicky in 1948. It is based on the combination of the attributes of the product or process (like properties, functions, etc.) with the elements of the product or the process. It can generate all possible combinations of the attributes; however, its applicability is practically limited to a three-dimensional analysis of the attributes. There exist many variations of the method, such as attribute listing and SCIMTAR (Proctor 2002).
Analogies
Analogies form a group of methods, with case-based reasoning being the most useful for engineering applications. They exploit the similarities between the features of the existing problems and the features of problems or designs known from the past. A good survey of case-based methods is given by Leake (1996). A special class of analogy-based methods is the approach exploiting biomimetics, i.e., analogies with living systems (French 1994).
TRIZ
TRIZ is a popular method of systematic creativity support introduced by Altshuller in 1984. There are many books that describe TRIZ (the theory of inventive problem solving), e.g., Savransky (2000) and Salamatov (1999). However, the book by Mann (2002) seems to be the best introduction. The main findings of TRIZ are:
- All innovations start with the application of a small number of inventive principles.
- Trends exist in the evolution of technologies.
- The best solutions transform harmful effects into useful ones.
- The best solutions remove the conflicts existing in the system.
Those findings are translated into the basic principles of TRIZ.
Contradictions
There are two types of contradictions. A technical contradiction takes place when two parameters of the system are in conflict, i.e., improving the value of one parameter lessens the value of the other one. Technical contradictions are solved by applying a contradiction matrix. A physical contradiction takes place when
the parameter should simultaneously have two different values. Physical contradictions are removed by applying the principles of separation in time and space.
Ideality
There is a tendency in the evolution of systems that they always change towards the state where all benefits of their function are realized at no cost or harm.
Functionality
Every system has its main useful function and all its elements have to fulfill this function. Otherwise they are under-used and can be a source of conflict. The notion of functionality allows for the generalization of various aspects of the functioning of a system, resulting in the possibility of transferring know-how between various fields (technical, medical, biological, etc.).
Use of Resources
Any physical element of the system, or phenomenon accompanying its functioning, has to be used to maximize the system functionality.
The application of the basic principles is realized as shown in Fig. 5.7. It is based on the translation of the actual problem into a general one (identified by Altshuller), finding the general solution to this problem (also identified by Altshuller), and then transforming it into the solution of the actual problem. The basic principles are implemented in several tools (inventive principles, S-fields, the contradiction matrix, separation principles, knowledge effects, trends, etc.) (Mann 2002). An example of the application of a modified TRIZ to product design is given by Nakagawa (1999). The problem was the insufficient quality of a porous polymer sheet. The reason was a foam ratio in the process of forming the polymer sheet that was too low. The gas dissolved in the molten polymer was escaping during the process through the surface of the sheet, which resulted in the creation of small, unequally distributed bubbles. The application of USIT resulted in several alternatives concerning the material itself as well as its manufacturing. The proposed solution dealing directly with the polymer composition was the addition of a solid powder to the molten polymer to act as seeds for the bubbles.
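The contradiction-matrix step can be illustrated with a toy lookup. The parameter pairs and the suggested inventive-principle numbers below are placeholders for illustration only; they are not Altshuller's actual 39 x 39 matrix.

```python
# Toy sketch of a TRIZ-style contradiction matrix lookup.
# Parameters and suggested inventive-principle numbers are placeholders,
# not the actual Altshuller matrix.

CONTRADICTION_MATRIX = {
    # (improving parameter, worsening parameter): suggested inventive principles
    ("strength", "weight"):        [1, 8, 40, 15],
    ("productivity", "accuracy"):  [10, 18, 28, 32],
    ("reliability", "complexity"): [1, 13, 26, 27],
}

def suggest_principles(improving, worsening):
    """Return candidate inventive principles for a technical contradiction."""
    return CONTRADICTION_MATRIX.get((improving, worsening), [])

print(suggest_principles("strength", "weight"))   # -> [1, 8, 40, 15]
```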
Figure 5.7 The general principle of application of TRIZ (Mann 2002).
5.2.2.2 Creativity Templates
The analysis of the market by interviewing customers and salespeople can lead to incrementally innovative products. However, as was shown by Goldenberg and Mazursky (2002), it will not introduce a breakthrough product earlier than the competition. The reason is that the saturation of the market with new-needs-aware customers is so low that their identification by market research is practically impossible early on in the process. There are very few customers able to express the new breakthrough need in terms of the existing products. With time the awareness of the consumers grows and it becomes easier to reach new-needs-aware persons, but the competitors have the same chance as well, and the competitive advantage of our product diminishes (Fig. 5.8). Goldenberg and Mazursky (2002) have suggested the introduction of creativity templates to generate new products. This idea was inspired by the TRIZ method (Orloff 2003).
Figure 5.8 The change of the new-needs awareness of customers with time. (a) Early period of the need recognition: very few aware consumers. (b) Mature period of the need recognition: aware consumers are numerous. (c) Saturation period of the awareness: the need is obvious to the considerable majority of the customers.
The first step consists of listing the attributes of the existing product. Next the attributes are manipulated using one or more templates (subtraction of attributes, their multiplication, division, task unification, or attribute dependency change). Examples of the application of the method are given in Goldenberg et al. (2003).
5.2.2.3 Product-oriented Methods
The product-oriented methods could be of special interest for the CAPE community. They concentrate mainly on the technical aspects of the product and try to forecast future trends of technology development. The most common method is technology/product roadmapping (Petrick and Echols 2004). An example of a roadmap for technology/product planning is presented by Phaal et al. (2004). The forecasting of technological trends and products based on the use of TRIZ is presented by Mann (2003). He cited the generic trends of technology evolution:
- systems with increasing benefits and decreasing cost and harm;
- increased dynamization within systems;
- increased space segmentation;
- increased surface segmentation;
- increased controllability;
- increased complexity followed by reduced complexity;
- use of all available physical dimensions within a system;
- decreased number of energy conversions;
- increased rhythm coordination;
- increased action coordination.
Mann (2003) also presented the application of these trends to the forecasting of product development.
5.3 Product Design
The concept of design is very difficult to define, because design activity can lead to extremely diverse products and consequently a broad spectrum of activities is possible. A survey of chemical product design can be performed from the following perspectives:
- classes of the products (e.g., basic chemicals, specialty chemicals, pharmaceuticals, crop protection, consumer products);
- development phases of the project:
  - development of the product concept (determination of its form, features, and draft specifications);
  - detailed design (identification of the complete specification and the corresponding composition and structure);
  - product testing;
  - final design;
  - the specificity of the product development has to be taken into account; e.g., for
pharmaceuticals the steps involved are discovery, initial tests, clinical trials, and legal approval;
- degree of innovativeness: a new class of products, derivatives of an existing group of products, incrementally improved products, and breakthrough new products;
- tools used, etc.
The next subchapter focuses on the methods and tools used in chemical product design.
5.3.1 Experimental Design
Experimental design is one of the oldest engineering product design methods. Traditionally, it can be divided into studies of the influence of process variables (temperature, pressure, etc.) and analysis of the properties of materials by changing their composition. The first type of experiment is called experimental process design and the second mixture design. There is also a third type, called combined experimental design, in which the characteristics of the materials are studied as a function of mixture composition and process conditions.
5.3.1.1 Planned Experimental Design
Mixture design experiments are carried out to precisely determine the composition of the components. The usual experimental mixture design deals with three to ten components. The existence of various types of component constraints (relational constraints, constraints resulting from the interaction of components) is typical for mixture design. The studied attribute (taste, viscosity, stability, etc.) is recorded for every experiment. Depending on the number of components and the specific composition constraints, a precise number of experiments is conducted according to the selected experimentation plan. A priori determined compositions and the experimentally obtained values of the attributes are used for obtaining linear, cubic, or higher-order regression equations (a small sketch of such a fit follows the examples below). A good introduction to mixture design is given by Eriksson et al. (1998). The most popular fields of application of mixture design are pharmaceuticals, food products, polymers, and paints. A few of the most typical are:
- Blending: Olive oils were blended to ensure the best sensory quality. Eighteen experiments were carried out to find the optimal composition, according to eight sensory criteria, of four olive oil mixtures (Vojnovic et al. 1995).
- Formulation: A mixture of dispersants was designed for use in oil spill cleanup. In order to evaluate a mixture of three different dispersants, ten experiments were performed, consisting of the measurement of the effectiveness of oil dispersion (percentage of oil removed from the water surface) as a function of the dispersant mixture composition (Brandvik 1998).
- Pharmaceutical formulation: The objective was to find the optimal composition of a tablet coating ensuring the required release rate of the active ingredient from inside the tablet. The coating was composed of three components and the release rate was determined by measurements of the diffusion coefficient in 13 experiments with different mixture compositions (Bodea and Leucuta 1997).
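The regression step mentioned above can be sketched as a least-squares fit of a first-order (Scheffe-type) mixture model. The compositions and measured responses below are invented for illustration; real mixture designs would use a planned set of runs and usually higher-order terms.

```python
# Sketch: fitting a first-order mixture model y = b1*x1 + b2*x2 + b3*x3
# by ordinary least squares. Compositions and responses are invented data.
import numpy as np

# Each row is a mixture composition (fractions summing to 1); y is the measured attribute.
X = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
y = np.array([2.1, 3.4, 1.8, 2.9, 2.0, 2.5, 2.6])

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)      # fitted blending coefficients
print("coefficients:", coeffs)
print("predicted response for (0.2, 0.5, 0.3):", coeffs @ np.array([0.2, 0.5, 0.3]))
```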
Combined experimental design is an active field of research allowing for the prediction of the product properties not only as a function of the composition but also of the processing mode. The application fields are the same as for mixture design:
- Blending: In a process consisting of the blending of three different flours, the product attribute was baking quality. Sixty-six experiments were performed to identify the polynomial that correlates the composition of the flour mixture and the mixing time with the baking quality of bread (Naes et al. 1999).
- Pharmaceutical formulation: The studied property of the material was its viscosity as a function of its composition (active substance: tolfenamic acid, block copolymer, ethanol, and buffer) and one process variable (Cafaggi et al. 2003).
Mixture design is a very well-established method within product design. However, as mentioned at the beginning of this chapter, changes in the business environment have made the long experimentation times and relatively high costs the main factors limiting the attractiveness of experimental mixture design.
5.3.1.2 High-Throughput Experimentation
The estimated number of inorganic compounds is around 10' (only for materials composed of five elements). The number of drug-like structures is projected to be as high as 10^63 (Cawse 2002). Consequently, new methods of research are needed to identify, with relatively high probability, new promising materials in such a huge space. High-throughput experimentation (HTE) is used to tackle problems where the parameter space is too large to be handled efficiently using conventional approaches. HTE consists of the use of miniaturized laboratory equipment, robotics, screening apparatus, and computers. The main field of HTE application is the determination of the composition of drugs, multifunctional materials, coatings, and catalysts, as well as the determination of their formulation. A good introduction to the subject is given by Cawse (2002). The main problems related to the application of CAPE to HTE consist of handling the complexity resulting from the amount of data and the highly complicated phenomena under consideration. As a consequence, the research interest concentrates on the design of databases, data mining, integration and representation, conversion of data to knowledge, experimental control systems, and decision-supporting systems to facilitate the "hit-to-lead" process, aiming at the maximization of the number of successful designs. The successful applications of HTE to product design are numerous, e.g., homogeneous and heterogeneous catalysts (Murphy et al. 2003). A review of HTE applications to materials design is presented by McFarland and Weinberg (1999). A few examples given below focus on CAPE-related problems in HTE.
- Catalyst design: Caruthers et al. (2003) have outlined a procedure for the more comprehensive use of data from HTE in catalyst design. The final objective of the presented research is to achieve a balance between the speed of data generation and the ability to transform the data into knowledge. The proposed process of "knowledge extraction" consists of planning HTE in a way that allows for the discrimination of candidate catalytic reaction models, determination of the kinetic constants, and relating them to the catalyst microstructure. The proposed forward modeling is realized by the application of rules capturing human expertise, a neural network (NN), and a genetic algorithm (GA). The proposed system is not yet fully automatic.
- Pharmaceuticals design: The acute problem facing the pharmaceutical industry is the growing discrepancy between the R&D investment and the number of newly registered drugs (Anonymous 2004). The declining number of new drugs is due, among other factors, to the concentration of efforts in HTE technology on the generation of active pharmaceutical ingredients (API) - selectivity and potency at the target are the mainly studied features - and the neglect of studies of the API form (salt, polymorphic, hydrates, solvates, etc.). Those forms determine the properties of the API, such as solubility, stability, biocompatibility, etc. Those properties, in turn, determine the metabolism, toxicity, and formulation design of the pharmaceuticals. A comprehensive description of the problem is presented by Gardner et al. (2004).
An interesting application of CAPE tools to HTE is the use of case-based reasoning for identifying the required conditions for protein crystallization (Jurisica et al. 2001).
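The data-to-knowledge step that HTE demands can be pictured, in greatly simplified form, as screening a large candidate library with a fast scoring model and ranking the hits for follow-up. The scoring function and the library below are purely illustrative assumptions, not any system described in this chapter.

```python
# Minimal sketch of an HTE "hit-to-lead" data step: score a large candidate
# library, keep the hits above a threshold, and rank them for follow-up.
# The scoring function and library are illustrative assumptions only.
import random

def predicted_activity(candidate):
    """Placeholder for a model (or measurement database lookup) scoring a candidate."""
    random.seed(candidate)                 # deterministic toy score per candidate id
    return random.random()

def screen(library, threshold=0.95, top_n=10):
    hits = [(c, predicted_activity(c)) for c in library]
    hits = [h for h in hits if h[1] >= threshold]
    return sorted(hits, key=lambda h: h[1], reverse=True)[:top_n]

library = range(100_000)                   # stand-in for a combinatorial library
for candidate, score in screen(library):
    print(f"candidate {candidate}: score {score:.3f}")
```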
5.3.2 Knowledge-based Tools
5.3.2.1 Case-based Reasoning
Case-based reasoning (CBR) consists of solving new problems using the solutions of old problems. The central notion of CBR is a case. The main role of a case is to record a single past event where a problem was solved. A case is represented as a pair: problem and solution. Several cases are collected in a set to build a case library (case base). The library of cases must roughly cover the set of problems that may arise in the considered domain of application. The set of cases in the case library generates two different spaces: a problem space, involving only the problem descriptions of the individual cases, and a solution space built from the solutions of those problems. There are five steps in CBR: (1) introduction of a new problem, (2) retrieval of the most similar cases, (3) adaptation of the most similar solutions, (4) validation of the current solution, and (5) system learning by adding the verified solution to the database of cases. Retrieval is a basic operation of CBR. The current problem is defined by a list of parameters with their values, e.g., names of additives, their
5.3 Product Design
amount, material processing, etc. This description is positioned in the problem space. During the retrieval step, the current problem is matched against the problems stored in the case base. The matching is realized by the calculation of the similarity function. The similarity function can be visually defined as a distance between the current problem and the past one in the problem space. The most similar problem and its solution are retrieved to create a starting point for finding the solution of the current problem. In most cases, the solution of the retrieved problem could not be accepted as the solution to the current one as even small differences between problems may require significant modification of the solution. An adjustment of the parameters of the retrieved solution to be conformed to the current problem is called adaptation. The adaptation often requires additional knowledge, which can be represented by the rules or equations. The received solution and the current problem together form the new case that is incorporated in the case base during the learning step. In such a way, the CBR system evolves into a better reasoner as its capability is improved by extending stored experience. CBR is beneficial when the problems are not completely understood and a reliable model can not be built. Moreover, the problem may not be completely defined before starting to search possible solutions. The approach proposes solutions quickly so it can considerably accelerate the design process. Introductions to CBR are given in Watson (1997) and Aamodt and Plaza (1994). The method has been used in various applications: formulation of tablets, plastics, rubber and agrochemicals. The examples described in the literature that deal with the tablet formulation concentrate on the selection of the tablets ingredients and their composition. Typically the tablet is composed of: 0 0 0 0 0
0
filler (to ensure that the tablet is large enough to be handled); binder (to facilitate granulation); lubricant (to facilitate manufacturing); disintegrant (to ensure an easy intake of the drug after the swallowing); surfactant (to guarantee dissolution of the drug); active component (to perform the treatment).
The proposed system (Craw et al. 1998) is aimed at the retrieval as well as adaptation. The potentially interesting formulation is adapted by the application of the rulebased system. It allows one to determine the most adequate ingredients by the elimination of the various conflicts and constraints related to the simultaneous presence of some of them in a tablet. The adaptation is realized using the voting mechanism consisting of the selection of the most frequently used ingredients among the retrieved cases. The supplementary rules ensuring, for example, stability of the SYStem, are used. An additional feature is adaptation of ingredients quantity by the application of two methods (average quantity from the retrieved cases or the best match). The adaptation phase realized by the application of the hybrid algorithm combining induction and instance-based learning was introduced by Wiratunga et al. (2002). An interesting improvement of the cases retrieval has been proposed by Craw and Jarmulak (2001). The application of the genetic algorithm has been proposed for handling the growing case base as well as to capture the changing approach to the principles of formulation. The growing case base contains new
I
435
436
I
5 Product Development
knowledge that should be handled differently and the changes of the principles of formulation reflect the modifications in company politics. The application of CBR to the design of the closed rubber cells was presented by Herbeaux and Mille (1999). The paper deals with the determination of composition of the rubber as well as the operating parameters for the extrusion and vulcanization phase. The main function of the proposed CBR system is the retrieval of the cases and not their adaptation. The design of tires was presented by Bandini et al. (2004).The problem consisted of determining the recipe for the manufacturing tread (elastomers, silica, carbon black, accelerants, etc.) as well as determination of the conditions for compound mixing and vulcanization. The chemical formulation of the product depends on the desired properties of the tire. The desired properties are determined as a function of the car set-up, weather, type of road, etc. The problems dealing with the vulcanization and tuning of the product have been solved using the CBR system. The rest of the activities related to the tire design are obtained using rule-based systems. The design of lubricating oils was introduced by Shi et al. (1997). The main problem tackled in the paper was the formulation of an additive that was combined with the oil to create a lubricating agent. The applied CBR system starts with the input information such as the base oil type, viscosity, and constraints of use. The adaptation phase is realized using a rule-based system. 5.3.2.2 Neural Networks
Neural networks (NNs) are well-established techniques in CAPE (see Fausett 1994). The application of NNs to product development covers several types of products. Below a few of the most common applications have been presented. Fuel Additives
The use of N N for product design has been illustrated by Sundaram et al. (2001)for fuel additives. Fuel additives have been used as combustion modifiers, antioxidants, inhibitors of corrosion, deposit controllers, etc. The role of the deposit controllers is to limit the formation of the deposit on the intake valve and combustion chamber. Due to the high cost of the experiments, additive design has focused on use of the mathematical tools. The authors proposed a hybrid approach combining modeling and NNs. The results obtained from modeling have been used to train the neural net and then compared with the experimental results. Such an approach enabled the tuning of the model, allowing for the prediction of the build-up of the deposit as a function of the composition of the additives. Rubber Mixture
Borosy (1999) has proposed the application of NNs for the formulation of rubber mixtures. The Internet was used for the purpose of direct (what are the properties of the mixture when the composition is given) and indirect modeling (what should be the composition to ensure the required properties of the mixture). A maximum of 32 variables were needed to describe the mixture composition and processing condi-
5.3 Product Design
tions, and nine variables to capture the characteristics of the product. Adaptively learning NNs have been trained to map the relations between the input and output. Dyes
Chen et al. (1998) have proposed a combination of NNs and experimental design to formulate a pigment composed of six components. It was a case of solving the direct product design problem. The pigment quality was determined by comparison with three color indices. The proposed approach was limited exclusively to the determination of the pigment composition and did not consider the complicated heating policy in the processing phase. Greaves and Gasteiger (2001)have applied NNs to predict the molecular surface of acid, reactive, and direct dyes. The mapping of the three-dimensional molecular surfaces into a Kohonen network, enabling the prediction of substantivity,was an example of the direct formulation of the product design problem. Pharmaceuticals
NN applications concentrate on finding the relation between the composition of the tablet and the required release time, prediction of the physicochemical properties of the substances based on their molecular composition, and formulation of the special purpose tablets (e.g., fast disintegrating). Takayama et al. (2003)have presented an application of NNs to direct and reverse problems of product design. Two examples of NN application have been given. In the first case, the objective was determination of the optimal composition ensuring the required release of the active substance from the tablet. The tablet was composed of the active substance, two gel-forming materials and disintegrant. The second example dealt with the determination of the optimal composition of the mixture of ketoprofen hydrogels composed of two gel bases, two penetration enhancers, and two solvents. The objective function was the required rate of penetration of active substance and skin irritation. Turkoglu et al. (2004)have presented the application of N N to the formulation of the tablets composed of the coated pellets. The objective was to identify the formulation ensuring the required disintegration in the gastrointestinal fluid. Four properties of six-component tablets were studied to identify the required composition. Based on molecular structure, Taskinen and Yliruusi (2003) have presented a survey of NN applications to the direct prediction of the 24 physicochemical properties essential in drug design. Sunada and Bi (2002)reported their research on the formulation of the rapidly disintegrating tablets. The objective was to achieve fast disintegration in the oral cavity without drinking water. Four-component tablets that formed powder characterized by nine properties were studied. The applied processing method was wet compression. Refrigerants
Sozen et al. (2004) have studied the prediction of the specific volume of refrigerant/ absorbent couples as a function of the system composition, pressure, and tempera-
I
437
438
I
5 Product Development
ture using NN. The objective was to determine the properties of the couples ensuring zero ozone depletion, thermal resistance, high evaporation heat at low pressure, low specific volume of vapor, low solidification and high critical temperatures. A similar study was done by Chouai et al. (2002).They determined the thermodynamic properties of three refrigerants. Polymer Composites
The application of N N to the formulation of the polymer composites has been reviewed by Zhang and Fredrich (2003).They gave a comprehensive overview of the NN application to the estimation of fatigue life, design of unidirectional and laminate composites, assessment of wear of composites, and processing optimization. There are some examples of NN applications in the design of other products, e.g., food (Jimenez-Marquez et al. 2003), catalysts (Liu et al. 2001), ceramic materials (Sebastia et al. 2003), and lubricants (Konno et al. 2002). 5.3.2.3 Genetic Algorithms
Genetic algorithms (GAS)are gaining a lot of attention in CAPE. A classic book on genetic algorithms was written by Goldberg (1989).This very promising tool of artificial intelligence has been used to design polymers (Venkatasubramanian et al. 1996; Venkatasubramanian et al. 1995), catalysts (McLeod et al. 1997; Corma et al. 2003), and rubber (Lakshminarayananet al. 2000; Ghosh et al. 2000). Very often GAS are combined with first-principles modeling and other artificial intelligence methods. The most common hybrid system is NN-GA. They are used for catalyst design (Huang et al. 2003; Rodemerck et al. 2004; Caruthers 2003) and fuel additive design (Ghosh et al. 2000). 5.3.2.4 Rule-based Systems
Nowadays the “classical”rule-based systems are relatively seldom encountered as tools for the support of the product design. The review of such systems has been presented, e.g., by Rowe and Roberts (1998b). However, there are several examples of hybrid systems for design of: 0
0 0
materials: rule-based and case-based system (Netten and Vingerhoeds 1997),and rule-based and genetic algorithms (Kim et al. 1999); cosmetics: rule-based and first principles (Wibowo and Ng 2001); pharmaceuticals: rule-based and first principles (Fung and Ng 2003), and rulebased and neural networks (Guo et al. 2002).
5.3.2.5 Data Mining
Data mining is a term used for the group of techniques used in finding useful patterns in the datasets. A comprehensive survey of the methods is given by Hand et al. (2001).The approach consists of the analysis of huge data depositories in search for
useful correlations between material properties and composition. A simple example of the application of data mining to rubber design is given by Chen et al. (2000). Beliaev and Kraslawski (2003) have applied a semantic analysis of the scientific literature to identify new products that can be synthesized from substrates (sucrose, glucose, fructose, fatty acids) with Candida antarctica lipase acting as a catalyst.
5.3.2.6 Semantic Networks
The application of semantic networks to predict the qualitative properties of complex compounds as a function of their composition has been presented by Kiselyova (2000) for the design of inorganic materials. The results of predicting the crystal structure types, at normal pressure and room temperature, for compounds with a given composition are reported.
5.4 Summary
Analysis of the literature shows that solving pure forward problems in product design is still in the trial phase. Exclusively predictive methods for product design are not yet in industrial use. Therefore, it seems that the application of existing experience in product development is the main approach allowing the achievement, in a reasonable time, of a good product design. However, the reuse of knowledge is mainly useful for incremental improvements of existing products. It is estimated that around 100 ideas are needed to generate one new product (Cussler and Wei 2003). The discoveries leading to breakthrough innovations need new tools for the management of existing knowledge, enabling the generation of new ideas. Tools like data mining and TRIZ still await broader application in product design. A new subject will be the combination of CAPE tools and market analysis approaches in the estimation of the risk related to the rejection, by consumers, of new chemical products (e.g., fears related to nanomaterials). The development of techniques for assessing the impact of new chemical products on the environment, health, and safety, as well as the communication of those facts to society, is an important new role connected with product development. A fascinating field for CAPE applications in product design is the tangible and augmented elements of chemical products: not only their structure-property behavior, but also the design of their dynamic behavior in the context of product-related services or functions. Keeping in mind the transition of product design from purely experimental towards computer-aided product design, the following challenges facing CAPE in the field of product development seem to be crucial:
- development and implementation of approaches aimed at closing the gap between the phase of data collection and the generation of information;
- adaptation of the existing methods of process design to product development, and the possibility of reusing available software for new tasks;
- development of methods aimed at the generation of product-related knowledge from the available information, and consequently the creation of more generalized approaches for solving behavior-properties-composition (structure) problems at the level of projects and business units.
References
1 Aamodt A. and E. Plaza (1994) Case-based reasoning: foundational issues, methodological variations, and system approaches. Artificial Intell Commun 7, 39-59.
2 Achenie L. E. K., R. Gani, V. Venkatasubramanian (2003) Computer-Aided Molecular Design: Theory and Practice. Elsevier, Amsterdam.
3 Anonymous (2004) Fixing the drugs pipeline. The Economist, March 13-17th, 28-30.
4 Baker M. and S. Hurt (1999) Product Strategy and Management. Prentice Hall, Harlow.
5 Bandini S., E. Colombo, G. Colombo, F. Sartori, C. Simone (2004) The role of knowledge artifacts in innovation management: the case of a chemical compound designer CoP, available at: http://www.disco.unimib.it/simone/gestcon/C&T03-Bandini-et.al.doc, 2004.
6 Beliaev S. and A. Kraslawski (2004) Literature-based discovery in product design, in CHISA 2004, 22-26 August 2004, Prague, Czech Republic.
7 Bodea A. and S. E. Leucuta (1997) Optimization of hydrophilic matrix tablets using a D-optimal design. Int J Pharm 153, 247-255.
8 Borosy A. (1999) Quantitative composition-property modeling of rubber mixtures by utilizing artificial neural networks. Chemometrics Intell Lab Syst 47, 227-238.
9 Brandvik P. J. (1998) Statistical simulation as an effective tool to evaluate and illustrate the advantage of experimental designs and response surface methods. Chemometrics Intell Lab Syst 42, 51-61.
10 Bruce M. and R. Cooper (2000) Creative Product Design. John Wiley & Sons, Chichester.
11 Cafaggi S., R. Leardi, B. Parodi, G. Caviglioli, G. Bignardi (2003) An example of application of a mixture design with constraints to a pharmaceutical formulation. Chemometrics Intell Lab Syst 65, 139-147.
12 Caruthers J. M., J. A. Lauterbach, K. T. Thomson, V. Venkatasubramanian, C. M. Snively, A. Bhan, S. Katare, and G. Oskarsdottir (2003) Catalyst design: knowledge extraction from high-throughput experimentation. J Catalysis 216, 98-109.
13 Cawse J. N. (ed.) (2002) Experimental Design for Combinatorial and High Throughput Materials Development. John Wiley & Sons, New York.
14 Charpentier J. C. (2002) The triplet molecular processes-product-process engineering. Chem Eng Sci 57, 4667-4690.
15 Chen J., D. Wong, S. S. Jang, S. L. Yang (1998) Product and process development using artificial neural-network model and information analysis. AIChE J 44, 876-887.
16 Chen N., D. D. Zhu, W. Wang (2000) Intelligent materials processing by hyperspace data mining. Eng Appl Artificial Intell 13, 527-532.
17 Chouai A., S. Laugier, D. Richon (2002) Modeling of thermodynamic properties using neural networks: application to refrigerants. Fluid Phase Equilibria 199, 53-62.
18 Mann D. (2002) Hands-on Systematic Innovation. CREAX Press, Ieper.
19 Corma A., J. M. Serra, A. Chica (2003) Discovery of new paraffin isomerization catalysts based on SO42-/ZrO2 and WOx/ZrO2 applying combinatorial techniques. Catalysis Today 81, 495-506.
20 Craw S., N. Wiratunga, R. Rowe (1998) Case-based design for tablet formulation, in Proceedings of the 4th European Workshop on Case-Based Reasoning, pp. 358-369.
21 Craw S. and J. Jarmulak (2001) Maintaining retrieval knowledge in a case-based reasoning system. Comput Intell 17, 346-363.
22 Cussler E. L. and G. D. Moggridge (2001) Chemical Product Design. Cambridge University Press, Cambridge.
23 Cussler E. L. and J. Wei (2003) Chemical product design. AIChE J 49, 1072-1075.
24 Eriksson L., E. Johansson, C. Wikström (1998) Mixture design: design generation, PLS analysis, and model usage. Chemometrics Intell Lab Syst 43, 1-24.
25 Fausett L. V. (2004) Fundamentals of Neural Networks. Prentice Hall, New York.
26 Fung K. Y. and K. M. Ng (2003) Product-centered processing: pharmaceutical tablets and capsules. AIChE J 49, 1193-1215.
27 Gani R. and J. P. O'Connell (2001) Properties and CAPE: from present uses to future challenges. Comput Chem Eng 25, 3-14.
28 Gardner C. R., O. Almarsson, H. Chen, S. Morissette, M. Peterson, Z. Zhang, S. Wang, A. Lemmo, J. Gonzalez-Zugasti, J. Monagle, J. Marchionna, S. Ellis, C. McNulty, A. Johnson, D. Levinson, M. Cima (2004) Application of high-throughput technologies to drug substance and drug product development. Comput Chem Eng, available online 5 January 2004.
29 Goldenberg J., R. Horowitz, A. Levav, D. Mazursky (2003) Finding the sweet spot of innovation. Harvard Business Rev, March 2003, 120-129.
30 Goldenberg J. and D. Mazursky (2002) Creativity in Product Innovation. Cambridge University Press, Cambridge.
31 Ghosh P., A. Sundaram, V. Venkatasubramanian, J. Caruthers (2000) Integrated product engineering: a hybrid evolutionary framework. Comput Chem Eng 24, 685-691.
32 Goldberg D. E. (1989) Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Boston.
33 Greaves A. J. and J. Gasteiger (2001) The use of self-organising neural networks in dye design. Dyes Pigments 49, 51-63.
34 Guo M., G. Kalra, W. Wilson, Y. Peng, L. L. Augsburger (2002) A prototype intelligent hybrid system for hard gelatin capsule formulation development. Pharm Technol, September 2002, 44-60.
35 Hand D., H. Mannila, P. Smyth (2001) Principles of Data Mining (Adaptive Computation and Machine Learning). MIT Press, Cambridge.
36 Hauser J. R. and D. Clausing (1988) The House of Quality. Harvard Business Rev, May-June, 63-73.
37 Herbeaux O. and A. Mille (1999) ACCELERE: a case-based design assistant for closed cell rubber industry. Knowledge Based Syst 12, 231-238.
38 Huang K., X-L. Zhan, F-Q. Chen, D-W. Lu (2003) Catalyst design for methane oxidative coupling by using artificial neural network and hybrid genetic algorithm. Chem Eng Sci 58, 81-87.
39 Jimenez-Marquez S. A., C. Lacroix, J. Thibault (2003) Impact of modeling parameters on the prediction of cheese moisture using neural networks. Comput Chem Eng 27, 631-646.
40 Jurisica I., J. R. Wolfley, P. Rogers, M. A. Bianca, J. I. Glasgow, D. R. Weeks, S. Fortier, G. DeTitta, J. R. Luft (2001) Intelligent decision support for protein crystal growth. IBM Syst J 40, 394-409.
41 Kahn K. B. (2002) An exploratory investigation of new product forecasting practices. J Product Innov Manage 19, 133-143.
42 Kim J.-S., Ch.-G. Kim, Ch.-S. Hong (1999) Optimum design of composite structures with ply drop using genetic algorithm and expert system shell. Composite Struct 46, 171-187.
43 Kiselyova N. N. (2000) Databases and semantic networks for the inorganic materials computer design. Eng Appl Artificial Intell 13, 533-542.
44 Konno K., D. Kamei, T. Yokosuka, S. Takami, M. Kubo, A. Miyamoto (2003) The development of computational chemistry approach to predict the viscosity of lubricants. Tribology Int 36, 455-458.
45 Lakshminarayanan S., H. Fujii, B. Grosman, E. Dassau, D. R. Lewin (2000) New product design via analysis of historical databases. Comput Chem Eng 24, 671-676.
46 Liu Y., Y. Liu, D. Liu, T. Cao, S. Han, G. Xu (2001) Design of CO2 hydrogenation catalyst by an artificial neural network. Comput Chem Eng 25, 1711-1714.
47 Mann D. L. (2003) Better technology forecasting using systematic innovation methods. Technol Forecasting Social Change 70, 779-795.
48 McFarland E. W. and W. H. Weinberg (1999) Combinatorial approaches to materials discovery. TibTech 17, 107-115.
49 McLeod S., M. E. Johnston, L. F. Gladden (1997) Algorithm for molecular scale catalyst design. J Catalysis 167, 279-285.
50 Orloff M. A. (2003) Inventive Thinking Through TRIZ. Springer, Berlin.
51 Murphy V., A. F. Volpe, W. H. Weinberg (2003) High-throughput approaches to catalyst discovery. Curr Opinion Chem Biol 7, 427-433.
52 Naes T., F. Bjerke, E. M. Faergestad (1999) A comparison of design and analysis techniques for mixtures. Food Qual Pref 10, 209-217.
53 Nakagawa T. (2004) USIT Case Study (2): Increase the Foam Ratio in Forming a Porous Sheet from Gas Solved Molten Polymer (1999), available at: http://www.osakagu.ac.jp/php/nakagawa/TRIZ/eTRIZ/epapers/eUSITCases990826/eUSITC2Polymer990826.html.
54 Netten D. B. and R. A. Vingerhoeds (1997) EADOCS: conceptual design in three phases - an application to fibre reinforced composite panels. Eng Appl Artificial Intell 10, 129-138.
55 Petrick I. J. and A. E. Echols (2004) Technology roadmapping in review: a tool for making sustainable new product development decisions. Technol Forecasting Social Change 71, 81-100.
56 Phaal R., C. Farrukh, D. Probert (2004) Customizing roadmapping. Research-Technology Management, March-April 2004, 26-37.
57 Phillips F. and N. Kim (1996) Implications of chaos research for new product forecasting. Technol Forecasting Social Change 53, 239-261.
58 ReVelle J. B., J. W. Moran, C. A. Cox (1998) The QFD Handbook. John Wiley & Sons, New York.
59 Rodemerck U., M. Baerns, M. Holena, D. Wolf (2004) Application of a genetic algorithm and a neural network for the discovery and optimization of new solid catalytic materials. Appl Surface Sci 223, 168-174.
60 Rowe R. C. and R. J. Roberts (1998a) Intelligent Software for Product Formulation. Taylor & Francis, London.
61 Rowe R. C. and R. J. Roberts (1998b) Artificial intelligence in pharmaceutical product formulation: knowledge-based and expert systems. PSTT 4, 153-159.
62 Sebastia M., I. Fernandez Olmo, A. Irabien (2003) Neural network prediction of unconfined compressive strength of coal fly ash-cement mixtures. Cement Concrete Res 33, 1137-1146.
63 Shi Z., H. Zhou, J. Wang (1997) Applying case-based reasoning to engine oil design. Artificial Intell Eng 11, 167-172.
64 Sözen A., M. Özalp, E. Arcaklioglu (2004) Investigation of thermodynamic properties of refrigerant/absorbent couples using artificial neural networks. Chem Eng Process, available online 25 March 2004.
65 Sunada H., Y. Bi (2002) Preparation, evaluation and optimization of rapidly disintegrating tablets. Powder Technol 122, 188-198.
66 Sundaram A., P. Ghosh, J. Caruthers, V. Venkatasubramanian (2001) Design of fuel additives using neural networks and evolutionary algorithms. AIChE J 47(6), 1387-1406.
67 Takayama K., M. Fujikawa, Y. Obata, M. Morishita (2003) Neural network based optimization of drug formulations. Adv Drug Delivery Rev 55, 1217-1231.
68 Taskinen J. and J. Yliruusi (2003) Prediction of physicochemical properties based on neural network modeling. Adv Drug Delivery Rev 55, 1163-1183.
69 Technology Futures Analysis Methods Working Group (2004) Technology futures analysis: toward integration of the field and new methods. Technol Forecasting Social Change 71, 287-303.
70 Türkoglu M., H. Varol, M. Celikok (2004) Tableting and stability evaluation of enteric-coated omeprazole pellets. Eur J Pharm Biopharm 57, 279-286.
71 Venkatasubramanian V., A. Sundaram, K. Chan, J. M. Caruthers (1996) Computer-Aided Molecular Design Using Neural Networks and Genetic Algorithms: Polymers, in Devillers J. (ed.) Genetic Algorithms in Molecular Modeling. Academic Press, New York.
72 Venkatasubramanian V., K. Chan, and J. M. Caruthers (1995) Evolutionary design of molecules with desired properties using genetic algorithms. J Chem Info Comput Sci 35, 188-195.
73 Viaene J. and R. Januszewska (1999) Quality function deployment in the chocolate industry. Food Qual Pref 10, 377-385.
74 Villadsen J. (1997) Putting structure into chemical engineering. Chem Eng Sci 52, 2857-2864.
75 Vojnovic D., B. Campisi, A. Mattei, L. Favretto (1995) Experimental mixture design to ameliorate the sensory quality evaluation of extra virgin olive oils. Chemometrics Intell Lab Syst 27, 205-210.
76 Watson I. (1997) Applying Case-Based Reasoning: Techniques for Enterprise Systems. Morgan Kaufmann, San Francisco.
77 Wesselingh J. A. (2001) Structuring of products and education of product engineers. Powder Technol 119, 2-8.
78 Wibowo C. and K. M. Ng (2001) Product-oriented process synthesis and development: creams and pastes. AIChE J 47, 2746-2767.
79 Wiratunga N., S. Craw, R. Rowe (2002) Learning to adapt for case-based design, in Proceedings of the 6th European Conference on Case-Based Reasoning, Springer-Verlag, Berlin, pp. 421-435.
80 Zhang Z., K. Friedrich (2003) Artificial neural networks applied to polymer composites: a review. Composites Sci Technol 63, 2029-2044.
Section 3 Computer-aided Process Operation
Section 3 focuses on the application of computing technology to integrate and facilitate the key technical decision processes which arise in chemical manufacture. It comprises decision support systems at different levels of decision making. It discusses the problem of coordinated planning and scheduling of distributed plants at the top level; product sequencing and the precise, resource-constrained allocation over time of detailed process operations; coordination between units; and process monitoring and regulatory control in a real-time environment, extended to contemplate hybrid systems. An introduction to modeling the entire supply chain is also presented, which is further elaborated in Section 4. Section 3 consists of seven chapters introducing the current problems facing process operations, the state of relevant methods and technology, and the advances needed to combat the ever-increasing complexity of computer-aided process operations in a business-wide context. Chapter 1 presents a comprehensive review of state-of-the-art models, algorithms, methodologies, and tools for the resource planning problem, covering a wide range of manufacturing activities and including a detailed critical discussion on the effect of uncertainty. The emerging trend in the area of short-term scheduling is the development of efficient solution techniques to render ever larger problems tractable. Chapter 2 deals with issues that must be resolved, related mainly to problem scale. A sensible way forward is proposed by trying to capture the problem in all its complexity and then to explore rigorous or approximate solution procedures, rather than develop exact solutions to somewhat idealized problems. A final challenge relates to the seamless integration of the activities at different levels, including data and functional fragmentation, inconsistencies between activities and datasets, and different tools being used for different activities, which are the subjects of subsequent chapters and sections. The need for quality measurements to monitor process operations and to evaluate their efficiency and the equipment condition, thus avoiding equipment failure and any subsequent hazardous conditions, is treated in Chapter 3. Recent progress in automatic data collection and current developments aiming at combining online data acquisition with data reconciliation are presented in detail. The measurement information can also be used in the control scheme in various ways. The weakest form of feedback is to use the measurements for parameter adaptation only, which requires a structurally correct model. Chapter 4 introduces model-based control techniques and points out new trends. The techniques use the combination of first-principles-based and black-box models, the parameters of which are estimated from operational data, as a way to obtain sufficiently accurate models without excessive effort. Online optimization using measurement information is in many cases an attractive alternative to the tracking of pre-computed references because the process can be
operated much closer to its real optimum, while still meeting hard bounds on the specifications. A rigorous treatment of the real-time optimization problem is given in Chapter 5. The last two chapters expand the concept of computer-aided process operations to also consider hybrid systems (Chapter 6) and the whole network of material procurement, material transformation to intermediates and final products, and distribution of these products to customers (Chapter 7). The need for integrated solutions will be further explored in Section 4.
1 Resource Planning Michael C. Georgiadis and Panagiotis Tsiakis
1.1 Introduction
Until recently, resource planning exercises in many companies were based on qualitative, managerial judgements about the future directions of the firms and the markets in which they compete. Complex interactions between the different decision-making levels were often ignored. In the past few years, however, important planning decisions, such as those relating to capacity expansion, new product introduction, oil and chemical product distribution and energy planning, have been formally addressed based on recent developments in mixed-integer optimization. Today, most process and energy industries have turned to the use of optimization models in seeking efficient long-term planning and use of their resources (Shapiro 2004). In the last two decades, new techniques have been developed to analyze large-scale planning models, while research in aggregation and decomposition techniques has multiplied. In addition, many managers have begun to recognize the major drawbacks of most current planning systems and the necessity for more intelligent and quantitative decision tools instead of administrative routine procedures. This comes as a natural continuation of the pioneering work of F. W. Taylor and H. L. Gantt, who in the early 1900s identified the impact on productivity and other key performance indices of general production planning systems based on scientific approaches (Wilson 2003). Today's computing offers more powerful techniques for modeling and solving planning problems, while Gantt charts still provide an excellent display tool for understanding and acceptance of plans in any type of environment, in addition to other available interfaces. In recent years, companies have realized that in order to achieve significant competitive advantage within their sector they need to understand the operations hierarchy and solve their problems in a unified framework, a fact that is resulting in the development of corresponding tools. Towards that, the interest in planning and scheduling capabilities has given rise to the providers of solution systems designing, developing and implementing planning systems as part of general supply-chain
support systems or with open architecture to allow easy integration. Owing to the inherent complexity and the different scales of integration, this has been accepted by the research community as a topic that needs urgent answers, since the planning software industry is in its infancy and under pressure to respond to the demand. The objective of this chapter is to present a comprehensive review of state-of-the-art models, algorithms, methodologies and tools for the resource planning problem, covering a wide range of manufacturing activities. For reasons of presentation, the remainder of this chapter is organized as follows. The long-range planning problem in the process industries is considered in Section 1.2, including a detailed discussion on the effect of uncertainty, the planning of refinery operations and offshore oilfields, the campaign planning problem and the integration of scheduling and planning. Section 1.3 describes the planning problem for new product development with emphasis on pharmaceutical industries. Section 1.4 briefly presents the tactical planning problem, followed by a description of the resource planning problem in the power market and construction projects in Section 1.5. Recent computational solution approaches to the planning problem are reviewed in Section 1.6, while available software tools are outlined in Section 1.7. Finally, Section 1.8 draws some conclusions and proposes future challenges in this area.
1.2 Planning in the Process Industries
1.2.1 Introduction
New environmental regulations, new processing technologies, increasing competition and fluctuating prices and demands in the process industries have led to an increasing need for quantitative techniques for planning the selection of new processes, the expansion and shutdown of existing processes, and the production of new products. Further decisions also include the creation of production, distribution, sales and inventory plans (Kallrath 2002). It has recently been realized that in a competitive and changing environment the need to plan new output levels and production mixes is likely to arise much more frequently than the need to design new batch plants. Although the boundaries between planning and scheduling are not very clear, we can distinguish the following basic features of the process planning problem:
• multipurpose equipment
• sequence-dependent set-up times and cleaning costs
• combined divergent, convergent and cyclic material flows
• multistage, batch and campaign production using shared intermediates
• multicomponent flow and nonlinear blending for the refinery operations
• finite intermediate storage, dedicated and variable tanks.
Structurally, these features often lead to allocation and sequencing problems and knapsack structures, or to the pooling problem for the petrochemical industries. In
production planning we usually consider material flow and balance equations connecting sources and sinks of a production network. Time-indexed models using a relatively coarse discretization of time, e.g., a year, quarter, months or weeks, are usually accurate enough. Linear programming (LP), mixed-integer linear programming (MILP) and mixed-integer nonlinear programming (MINLP) technologies are often appropriate and successful for problems with a clear quantitative objective function, as will become clear in the following sections. Nowadays, it is possible to find the optimal way to meet business objectives and to fulfil all production, logistics, marketing, financial and customer constraints, and especially:
• to accurately model single-site and multisite manufacturing networks;
• to perform capital planning and acquisition analysis, i.e., to have the possibility to change the structure of a manufacturing network through investment and to determine the best investment type, size and location based on user-defined rules related to business objectives and available resources; the results of such analysis can lead to nonintuitive solutions that provide management with scenarios that could dramatically increase profits;
• to produce integrated enterprise solutions and to enable a cross-functional view of the planning process involving production, distribution and transport, sales, marketing and finance functions;
• to develop new product development and introduction strategies along with capacity planning and investment strategies.
The following sections provide a comprehensive review of the above areas.
1.2.2 Long-Range Planning in the Process Industries
Chemical process industries are increasingly concerned with the development of planning techniques for their process operations. The incentive for doing so derives from the interaction of several factors (Reklaitis 1991, 1992). Recognizing the potential benefits of new resources when these are used in conjunction with existing processes is the first. Another major factor is the dynamic nature of the economic environment. Companies must assess the potential impact on their business of important changes in the external environment. Included are changes in product demand, prices, technology, capital market and competition. Hence, due to technology obsolescence, increasing competition, and fluctuating prices of and demands for chemicals, there is an increasing need to develop quantitative techniques for planning the selection of new processes and the production of chemicals (Sahinidis et al. 1989). The long-range planning problem in process industries has received a lot of attention over the last 20 years and numerous sophisticated models exist in the literature. Sahinidis et al. (1989) consider the long-range planning problem for a chemical complex involving a network of chemical processes that are connected in a finite number of ways. The network also consists of chemicals: raw materials, intermediates and
products that may be purchased from and/or sold to different markets. The objective function to be maximized is the net present value (NPV) of the planning problem over a long-range horizon of interest consisting of a number of NT time periods during which prices and demands of chemicals, and investment and operating costs of the processes, can vary. The problem consists of determining the following items:
• capacity expansion and shutdown policy
• selection of new processes and their capacity expansion and shutdown policy
• production profiles
• sales and purchase of chemicals at each time period.
It is assumed that the material balance and the operating cost in each process can be expressed linearly in terms of the operating level of the plant. The investment costs of the processes and their expansions are considered to be linear expressions of the capacities with a fixed-charge cost to account for economies of scale. This is a multiproduct, multifacility, dynamic, location-allocation problem that has been formulated using MILP models. Sahinidis and Grossmann (1991) extended the above model to account for production facilities that are flexible manufacturing systems operating in a continuous or in a batch mode. The suggested model provides a unified representation for the different types of processes. Norton and Grossmann (1994) extended the original model of Sahinidis and Grossmann (1991) for dedicated and flexible processes by incorporating raw materials flexibility in addition to product flexibility. In their model, raw material flexibility is characterized by different chemicals as raw materials or different sources of the same raw material. The model was able to handle any combination of raw material and process flexibility, thus providing a truly unified representation for all types of process flexibility in the long-range planning problem. The industrial relevance of the chemical process planning problem motivated the need to develop more efficient solution techniques for large-scale problems. Liu and Sahinidis (1995) presented a comprehensive investigation of the effect of time discretization, data uncertainty and problem size on the quality of the solution and computational requirements of the above MILP planning models. The importance of detailed time discretization was demonstrated and the effect of uncertainty was critically assessed. An exact branch-and-bound algorithm was also presented along with several heuristic approaches for the solution of larger problems. Extending this work, Liu and Sahinidis (1996a) investigated separation algorithms and cutting plane approaches that were demonstrated to be more robust and faster than conventional solution approaches for large-scale problems with long time-horizons. Oxe (1997) considered a LP approach to choose an appropriate subset of existing production plants and lines and to optimize allocation, transportation paths and central stock profiles so that the overall costs are minimized while product delivery is ensured within some months (specified for each product) from the order. McDonald and Karimi (1997) developed production planning and scheduling models for the case of semicontinuous processes, which are assumed to comprise several facilities in distinct geographical locations, each potentially containing multi-
ple parallel lines. The models developed are deterministic in nature and are formulated as mixed-integer linear programs. Oh and Karimi (2001a) presented a new methodology for determining the optimal campaign numbers for producing multiple products on a single machine with sequence-dependent set-ups. Their methodology is intended mainly for the purpose of capacity planning. In the second part of this work (Oh and Karimi 2001b) they addressed the problem of determining the sequence of these given product campaigns to obtain a detailed schedule of operation. Heuristic algorithms based on a decomposition scheme were investigated for the efficient solution of the underlying optimization problem.
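Although the individual formulations reviewed above differ in scope and detail, most share a fixed-charge, multiperiod MILP core. A compact sketch, in generic illustrative notation (not that of any particular reference), is:

\[
\begin{aligned}
\max \quad & \sum_{t}\Big[\sum_{j}\big(p_{jt}\,S_{jt}-c_{jt}\,P_{jt}\big)-\sum_{i}\big(\alpha_{it}\,y_{it}+\beta_{it}\,QE_{it}\big)\Big]\\
\text{s.t.}\quad & Q_{it}=Q_{i,t-1}+QE_{it}, \qquad QE_{it}\le QE^{\max}_{i}\,y_{it},\\
& W_{it}\le Q_{it}, \qquad S_{jt}\le d_{jt},\\
& \text{linear material balances linking } W_{it},\,P_{jt},\,S_{jt},\\
& y_{it}\in\{0,1\},\quad Q_{it},\,QE_{it},\,W_{it},\,P_{jt},\,S_{jt}\ge 0,
\end{aligned}
\]

where \(y_{it}\) indicates whether process \(i\) is expanded in period \(t\), \(QE_{it}\) and \(Q_{it}\) are the capacity expansion and installed capacity, \(W_{it}\) is the operating level, and \(P_{jt}\) and \(S_{jt}\) are purchases and sales of chemical \(j\) against demand \(d_{jt}\). The fixed charge \(\alpha_{it}\) captures economies of scale; in the cited works the cost and price coefficients are discounted so that the objective is an NPV.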
1.2.3 Process Planning under Uncertainty
Decision making in the design and operation of industrial processes is frequently based on parameters of which the values are uncertain. Sources of uncertainty, which tend to imply the means for dealing with them, can be divided into:
• short-term uncertainties such as processing time variations, rush orders, failed batches, equipment breakdowns, etc.;
• long-term uncertainty such as market trends, technology changes, etc.
A detailed classification of different areas of uncertainty is suggested by Subrahmanyam et al. (1994), including uncertainty in prices and demand, equipment reliability and manufacturing uncertainty. An excellent review on the general subject of optimization under uncertainty has recently been presented by Sahinidis (2004). In the area of process planning, uncertainty is usually associated with product demand fluctuations, which may lead to either unsatisfied customer demands or loss of market share or excessive inventory costs. A number of approaches have been proposed in the process systems engineering literature for the quantitative treatment of uncertainty in the design, planning and scheduling of batch process plants, with an emphasis on the design. The most popular one so far has been the scenario-based approach, which attempts to forecast and account for all possible future outcomes through the use of scenarios. The scenario approach was suggested by Shah and Pantelides (1992) for the design of flexible multipurpose batch plants under uncertain production requirements, and was also used by Subrahmanyam et al. (1994). Scenario-based approaches provide a straightforward way to implicitly account for uncertainty (a comprehensive discussion is presented by Liu and Sahinidis (1996b)). Their main drawback is that they typically rely on either the a priori forecasting of all possible outcomes or the discretization of a continuous multivariable probability distribution, resulting in an exponential number of scenarios. Liu and Sahinidis (1996a,b) and Iyer and Grossmann (1998) extended the MILP process and capacity planning model of Sahinidis and Grossmann (1991) to include multiple product demands in each period. They then propose efficient algorithms for the solution of the resulting stochastic programming problems (formulated as large
deterministic equivalent models), either by projection (Liu and Sahinidis 1996a) or by decomposition and iteration. However, as pointed out by Shah (1998), a major assumption in their formulation is that product inventories are not carried from one period to the next. This has the advantage of ensuring that the problem size is of O(np x ns), where np is the number of periods and ns is the number of demand scenarios, rather than O(ns^(np+1)). However, if the periods are too short, this compromises the solution from two perspectives:
• All products must be produced in all periods if demand exists for them; this may be suboptimal.
• Plant capacity must be designed for a peak demand period.
1.2 Planning in the Process Industries
Rodera et al. (2002) presented a methodology for addressing investment planning in the process industry using a mixed-integer multiobjective optimization approach. Romero et al. (2003) proposed a modeling framework integrating cash flow and budgeting aspects with an advanced scheduling and planning model. It was illustrated that potential budget limitations can significantly affect scheduling and planning decisions. Recently, Barbaro and Bagajewicz (2004) proposed a new mathematical formulation for problems dealing with planning under design uncertainty that allows management of financial risk according to the decision-maker's preferences. Sanmarti et al. (1995) define a robust schedule as one which has a high probability of being performed, and is readily adaptable to plant variations. They define an index of reliability for a unit scheduled in a campaign through its intrinsic reliability, the probability that a standby unit is available during the campaign, and the speed with which it can be repaired. An overall schedule reliability is then the product of the reliabilities of units scheduled in it, and solutions to the planning problem can be driven to achieve a high value of this indicator. Ahmed and Sahinidis (1998) noted that the resulting two-stage stochastic optimization models in process planning under uncertainty minimize the sum of the costs of the first stage and the expected cost of the second stage. However, a limitation of this approach is that it does not account for the variability of the second-stage costs and might lead to solutions where the actual second-stage costs are unacceptably high. In order to resolve this difficulty they introduced a robustness measure that penalizes second-stage costs that are above the expected cost. Pistikopoulos et al. (2001) presented a systems effectiveness optimization framework for multipurpose plants that involves a novel preventive maintenance model coupled with a multiperiod planning model. This provides the basis for simultaneously identifying production and maintenance policies, a problem of significant industrial interest. This framework was then extended by Goel et al. (2003) to incorporate the reliability allocation problem at the design stage. Li et al. (2003) employed a probabilistic programming approach to plan operations under uncertainty and to identify the impact on profits based on reliability analysis. Recently, Suryadi and Papageorgiou (2004) presented an integrated framework for simultaneous maintenance planning and crew allocation in multipurpose plants.
1.2.4 Integration of Production Planning and Scheduling
The decisions made by planning, scheduling, and control functions have a large economic impact on process industry operations - estimated to be as high as US $10 increased margin per ton of feed for many plants. The current process industry environment places even more of a premium on effective execution of these functions. In spite of these incentives, or perhaps because of them, there exists significant disagreement about the proper organization and integration of these functions, indeed even which decisions are properly considered by the planning, scheduling or control business processes. It has long been recognized that maintaining consistency among
the decisions in most process companies continues to be difficult and the lack of consistency has real economic consequences. In their recent work Shobrys and White (2002) presented a critical and comprehensive analysis of several practical aspects that need to be carefully considered when challenges associated with improving these functions and achieving integration arise. The planning and scheduling levels of the operations hierarchy are natural candidates for integration because the structure of these two decision problems is very similar. However, the direct merging of these two levels requires embedding the details of the scheduling level into a super-scheduling problem defined over the entire planning horizon. The result is a problem that is extremely difficult to solve. Thus, in recent years research has become increasingly interested in the issues around the integration of production planning and scheduling, in order to provide greater consistency. The most common approach for the simultaneous treatment of production planning and scheduling is a hierarchical decomposition scheme, where the overall production planning problem is decomposed into two levels (Bitran and Hax 1977). At the upper level, the planning problem, which usually involves a simplistic representation of the scheduling problem, is solved as a multiperiod LP problem in order to maximize the profit and set production targets. At the lower level, the scheduling problem is concerned with the sequencing of tasks that should meet the goals. An alternative integration approach is through the rolling schedule strategy (Hax 1978). Production planning and scheduling are closely related activities. Ideally these two should be linked, in order that the production goals set at the production plan level should be implementable at the scheduling level. Birewar and Grossmann (1990), based on their initial LP flow-shop scheduling model, proposed aggregate methods that allow tackling longer time-horizons by reducing the combinatorial nature of the problem. The model accounts for inventory costs, sequence-dependent clean-up times and costs, and penalties for not meeting predefined product demands. Using a graph enumeration method, the production goals predicted by the planning model are applied to the actual schedule, with the key point that both problems are solved simultaneously, since the sequencing constraints can be accounted for at the planning level with very little error. Bassett et al. (1996a), working in the same direction of model-based integrated applications and focusing on integrating planning decisions with the actual schedule, proposed an aggregation/disaggregation technique that can be used to provide solutions to otherwise intractable mid-term planning models. The initiative is the exploitation of available enterprise information within the process operational hierarchy tree. A more formal approach to integration of production and scheduling is described based on the previous work of Subrahmanyam et al. (1996), where the planning model, based on an aggregate formulation, is modified to be consistent with detailed scheduling decisions. Hierarchical production planning algorithms often make use of rolling horizon algorithms as a suboptimal strategy to obtain feasible, but often good, solutions. The disadvantage of the method is reliance on the simplistic or rather poor representation of the scheduling problem within the aggregate part.
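The rolling-horizon idea mentioned above can be summarized in a few lines of Python. This is only a schematic sketch: the demand data, window length and the deliberately naive lot-for-lot sub-planner are hypothetical stand-ins for the multiperiod (MI)LP that would be solved at each roll in practice.

```python
"""Schematic rolling-horizon planning loop (illustrative only)."""

def plan_window(window_demand, capacity, stock):
    """Naive sub-planner: in each period of the lookahead window produce just
    enough (up to capacity) to cover that period's demand from current stock."""
    plan = []
    for demand in window_demand:
        produce = min(capacity, max(0.0, demand - stock))
        stock += produce - demand          # negative stock means backlog
        plan.append(produce)
    return plan

def rolling_horizon(demand, capacity, window=3, stock=0.0):
    """Re-plan over a finite window at every period, but commit only the first
    period's decision before rolling the window forward."""
    committed = []
    for t in range(len(demand)):
        plan = plan_window(demand[t:t + window], capacity, stock)
        first = plan[0]                    # implement only the first decision
        stock += first - demand[t]
        committed.append(first)
    return committed

if __name__ == "__main__":
    demand = [40.0, 60.0, 80.0, 50.0, 30.0, 70.0]   # hypothetical demand profile
    print(rolling_horizon(demand, capacity=65.0))
```

The loop structure, re-plan over a limited lookahead and implement only the imminent decisions, is what distinguishes rolling-horizon schemes from a single monolithic plan; the quality of the result then depends entirely on how faithfully the sub-problem represents the scheduling level, which is exactly the weakness noted above.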
Wilkinson (1996) derived an accurate aggregate formulation by applying formal aggregation operators to the resource-
task network (RTN) formulation, and dividing the horizon into aggregate time periods (ATPs). This allows creating single MILP models that have varying time resolution. The first ATP is modeled in fine detail (scheduling) and the subsequent ATPs are modeled using the aggregate formulation (planning). The problem can then be solved as a single MILP, maintaining consistency between plan and schedule. Rodrigues et al. (2000) presented a two-level decomposition procedure for integrating scheduling and planning decisions. At the planning level, demands are adjusted, a raw material plan is defined and a capacity analysis is performed. At the scheduling level an MILP model is proposed. Geddes and Kubera (2000) described a practical integration between planning and online optimization with application in olefins production. Das et al. (2000) developed a prototype system by integrating two higher-level hierarchical production planning application programs (aggregated production plan and master production schedule), using a common data model integration approach, into an existing planning system for short-term scheduling and supervisory management, which was originally developed by Rickard et al. (1999). Bose and Pekny (2000) presented a similar approach to model predictive control for integrated planning and scheduling problems. Van den Heever and Grossmann (2003) addressed the integration of production planning and reactive scheduling for the optimization of a hydrogen supply network consisting of 5 plants, 4 interconnected pipelines and 20 customers. A multiperiod MINLP model was proposed for both the planning and scheduling levels, along with heuristic solution methods based on Lagrangean decomposition. During the last Foundations of Computer-Aided Process Operations (FOCAPO 2003) event, several of the contributions presented were on the integration between planning and scheduling decisions. Harjunkoski et al. (2003) provided a comprehensive analysis of different aspects needed for the integration of the planning, scheduling and control levels in the light of ABB's industrial initiative. They presented a framework introducing an approach to integrating all aspects relevant to decision making in a supportive way. An industrial case study was used to illustrate the benefits of the integrated framework. Yin and Liu (2003) developed a problem formulation and solution procedure for production planning and inventory management of systems under uncertainty. The production system is modeled by finite-event continuous-time Markov chains. Kabore (2003) presented a model predictive control formulation for the planning and scheduling problem in process industries. The main idea is to use moving-horizon techniques as well as a feedback control concept to continuously update production schedules. Wu and Ierapetritou (2003) proposed a method for simultaneously solving a planning and scheduling problem. The mathematical formulation of the planning problem involves scheduling decisions and results in a large MINLP problem, intractable to solve directly within reasonable computational time. A nonoptimal solution strategy is selected to provide near-optimal solutions within reasonable computational times. Tsiakis et al. (2003) applied the algorithm of Wilkinson (1996) to obtain an integrated plan and schedule of the operations of a complex specialty oil refinery, focusing on the downstream products of the oil supply-chain.
Operating in an uncertain environment, the company needed to schedule the refinery operations in detail over the next month, while producing plans for the next year that were both reasonably accurate and consistent with the short-term schedule.
1.2.5 Planning of Refinery Operations and Offshore Oilfields
The refinery industry is currently facing a rather difficult situation, typically characterized by decreasing profit margins, due to surplus refinery capacity, and increasing oil prices. Simultaneously, market competition and stringent environmental regulations are forcing the industry to perform extensive modifications in its operations. As a result there is no refinery nowadays that does not use advanced process engineering tools to improve its business performance. Such tools range from advanced process control to long-range planning, passing through process optimization, scheduling and short-term planning. Despite their widespread use and the existence of quasi-standard technologies for these applications, their degree of commercial maturity varies greatly and there are many unresolved problems concerning their use. Moro (2003) presents a comprehensive discussion on current approaches to solving these problems and proposes directions for future development in this area. Traditionally, planning and scheduling decisions in refinery plants have been addressed using LP techniques and several tools exist, such as the Integrated System for Production Planning (SIPP) and the Refinery and Petrochemical Modelling System (RPMS). An excellent review has recently been presented by Pinto et al. (2000). These tools allow the development of general production plans of the whole refinery. As pointed out by Pelham and Pharris (1996), the planning technology in refinery operations can be considered well developed and the margins for further improvement are very tight. The major advances in this area should be expected in the form of more detailed and accurate modeling of the underlying processes, notably through the use of nonlinear programming (NLP), as illustrated by Moro et al. (1998) using a real-world application. Ballintjin (1993) compared continuous and mixed-integer linear formulations and emphasized the low applicability of models based solely on continuous variables. In the literature, the first mathematical programming (MP) approaches utilizing advances in mixed-integer optimization focused on specific applications such as gasoline blending (Rigby et al. 1995) and crude oil unloading. Shah (1996) presented a MP approach for scheduling the crude oil supply to a refinery, whereas Lee et al. (1996) developed a MILP model for short-term refinery scheduling of crude oil unloading with optimal inventory management decisions. Gothe-Lundgren (2002) proposed a planning and scheduling model which seems to be limited to the specific industrial problem to which it has been applied, whereas Jia and Ierapetritou (2004) addressed the optimal operation of gasoline blending and distribution, the transfer to product stock tanks and the delivery schedule to satisfy all of the orders. Recent work by Pinto et al. (2000) is a key contribution in this area. A nonlinear planning model for refinery production was developed that is able to represent a general refinery topology. The model relies on a general representation for refinery processing units in which nonlinear equations are considered. The unit models are composed of blending relations and process equations. Certain constraints are imposed to ensure product specifications, maximum and minimum unit feed flow rates, and limits on operating variables. Real-world industrial case studies for the planning of
diesel production were used to illustrate the applicability and usefulness of the overall approach. In the second part of their work, scheduling problems in oil refineries were studied in detail. Discrete time representations were employed to model scheduling decisions in important areas of the refinery such as crude oil inventory management and fuel oil, asphalt, and liquefied petroleum gas (LPG) production. Several real-world refinery problems were presented and solved using the developed models. Based on the above work, Neiro and Pinto (2004) proposed a general mathematical framework for modeling petroleum supply chains. A set of crude oil suppliers, refineries that can be interconnected by intermediate and final product streams, and a set of distribution centres form the basis for this work. The scheduling of well and facility operations is a very relevant problem in offshore oil field development and represents a key subsystem of the petroleum supply chain. The problem is characterized by long planning horizons (typically 10 years) and a large number of choices of platforms, wells, and fields and their interconnecting pipeline infrastructure. Resource constraints such as the availability of the drilling rigs make proper scheduling all the more imperative in order to utilize resources efficiently. The sequencing of installation of well and production platforms is essential to ensure their availability before drilling wells. The operational design of the well and production platforms and the time of installation are critical; as they involve significant investment costs, these decisions must be optimized to maximize the return on investment. Thus, oil field development represents a complex and expensive undertaking in the oil industry. The process systems engineering community has recently made several key contributions in this area based on advances in mixed-integer optimization. Iyer et al. (1998) developed a multiperiod MILP formulation for the planning and scheduling of investments and operations in offshore oil field facilities. For a given time-horizon, the decision variables in their model are the choice of reservoir to develop, selection from among candidate well sites, the well-drilling and platform installation planning, the capacities of well and production platforms, and the fluid production rates from wells for each time period. The nonlinear reservoir behavior is handled with piecewise linear approximation functions. Van den Heever and Grossmann (2000) presented a mixed-integer nonlinear model for oilfield infrastructure that involves design and planning decisions. The nonlinear reservoir behavior is directly incorporated into the formulation. For the solution of this model an iterative aggregation/disaggregation algorithm is proposed, according to which time periods are aggregated for the design problem and subsequently disaggregated for the planning subproblem. Van den Heever et al. (2000) addressed the design and planning of offshore oilfield infrastructure focusing on business rules and complex economic objectives. A specialized heuristic algorithm that relies on the concept of Lagrangean decomposition was proposed by Van den Heever et al. (2001) for the efficient solution of this problem. Ierapetritou et al. (1999) studied the optimal location of vertical wells for a given reservoir property map. The problem is formulated as a large-scale MILP and solved by a decomposition technique that relies on quality cut constraints. Kosmidis et al.
(2002) described a MILP formulation for the well allocation and operation of integrated gas-oil systems, whereas Barnes et al. (2002) focused on the production design of offshore plat-
forms. Kosmidis (2003) presented a MINLP model for daily well scheduling, where the nonlinear reservoir behavior, the multiphase flow in the well, and constraints from the surface facilities are simultaneously considered. An efficient solution strategy is also proposed. Lin and Floudas (2003) presented a continuous-time modeling and optimization approach for the long-term planning problem of integrated gas-field development. They proposed a two-level formulation and solution framework taking into account complicated economic calculations. Goel and Grossmann (Comput Chem Eng 28 (2004) 1409) considered the optimal investment and operational planning of gas-field development under uncertainty in gas reserves. A novel stochastic programming model that incorporates the decision-dependence of the scenario tree was presented. Aseeri et al. (2004) discussed financial risk management in the planning and scheduling of offshore oil infrastructures. They added budgeting constraints to the model of Iyer et al. (1998) by following the cash flow of the project, taking care of the distribution of proceeds and considering the possibility of taking loans.
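As an aside on the refinery planning models discussed at the beginning of this subsection, the blending relations that make them nonlinear typically take the following generic form (illustrative notation, not that of any particular reference): the quality \(q_{p}\) of a blended product \(p\) (e.g., octane number or sulfur content) is the flow-weighted combination of the qualities \(q_{s}\) of the streams \(s\) blended into it,

\[
F_{p}=\sum_{s}f_{s,p}, \qquad q_{p}\,F_{p}=\sum_{s}q_{s}\,f_{s,p}, \qquad q_{p}^{\min}\le q_{p}\le q_{p}^{\max},
\]

and the bilinear products \(q_{p}F_{p}\) (together with similar pooling terms) are what turn an otherwise linear planning model into an NLP or MINLP.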
1.2.6 The Campaign Planning Problem
The campaign planning problem has received rather limited attention in the past 20 years, yet it is considered a key problem in chemical batch production. If reliable long-term demand predictions are available, it is often preferable to partition the planning horizon into a smaller number of relatively long periods of time ("campaigns"), each dedicated to the production of a single product. The campaign mode of operations may result in important benefits such as minimizing the number and costs of changeovers when switching production from one product to another. The complexity of management and control of the plant operation is further reduced by operating the plant in a more regular fashion, such as in a cyclic mode within each campaign, with the same pattern of operations being repeated at a constant frequency. Typical campaign lengths are from weeks to several months, with cycle times ranging from a few hours to a few days. The campaign mode of operations is often used for the manufacture of "generic" materials (e.g., base pharmaceuticals) which are produced in relatively large amounts and are then used as feedstocks for downstream processes producing various more specialized final products (Papageorgiou 1994, Grunow et al. 2002). Mauderli and Rippin (1979) studied the combined production planning and scheduling problem, developing a hierarchical procedure suitable for serial processing networks operated in a zero-wait mode. First, they consider each product individually, generating alternative production lines of a single product by assembling the available processing equipment in groups in order to achieve maximum path capacity. An LP-based screening procedure is used to determine a set of dominant campaigns. Finally, the production plan is generated by solving a LP or MILP problem, allocating the available production time to the various dominant campaigns for a given set of production requirements.
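The final time-allocation step of such procedures can be pictured as a small LP; in illustrative notation (not that of Mauderli and Rippin), with \(T_{k}\) the time allocated to dominant campaign \(k\) over a horizon \(H\), \(r_{jk}\) the rate at which campaign \(k\) produces product \(j\), \(d_{j}\) the production requirement and \(v_{j}\) the product value:

\[
\max_{T_{k}\ge 0}\;\sum_{j}v_{j}\sum_{k}r_{jk}\,T_{k}
\quad\text{s.t.}\quad \sum_{k}T_{k}\le H, \qquad \sum_{k}r_{jk}\,T_{k}\ge d_{j}\;\;\forall j.
\]

Integer variables enter the allocation step when campaign set-ups or changeovers are charged explicitly, which is what turns the LP into the MILP mentioned above.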
The generation of alternative production lines in the Mauderli and Rippin (1979) algorithm is based on an exhaustive enumeration procedure. A more efficient generation procedure is described by Wellons and Reklaitis (1989a), who formulated the optimal scheduling of a single-product production line as a MINLP model. However, this approach has several limitations, including high degeneracy, as many path assignments result in equivalent schedules. The elimination of this degeneracy was considered by Wellons and Reklaitis (1989b), who identified a set of dominant unique path sequences and hence improved the solution efficiency of the original formulation. A further improvement from the single-product production line scheduling to the single-product campaign formulation problem has been presented by Wellons and Reklaitis (1991a), including the automatic assignment of different equipment items to groups, and also the assignment of these groups to production stages. This work was extended by Wellons and Reklaitis (1991b) to the multiproduct campaign formulation problem for multipurpose batch plants. Finally, a multiperiod planning model is proposed, allocating the production time among the dominant campaigns while simultaneously considering profit from sales, changeover and inventory costs, and campaign set-ups. Papageorgiou and Pantelides (1993) presented a hierarchical approach attempting to exploit the inherent flexibility of multipurpose plants by removing various restrictions regarding the intermediate storage policies between successive processing steps, the utilization of multiple equipment items in parallel and also the use of the same item of equipment for more than one task within the same campaign. A three-step procedure was proposed. First, a feasible solution to the campaign planning problem is obtained to determine the number of campaigns and the active parts of the original processing network involved within each campaign. Secondly, the production rate in each campaign is improved by removing some assumptions and applying the cyclic scheduling algorithm of Shah et al. (1993). Finally, the timing of the campaigns is revised to take advantage of the improved production rates. An interesting feature of this approach is that any existing campaign planning algorithm can be used for its first step. However, this approach relies on several restrictive assumptions, including limited flexibility in the utilization of processing equipment and limited operating modes, while multiple production routes or material recycles are not taken into account. The algorithms described above are hierarchical in nature, and therefore relatively easy to implement given the reduction in the size of the problem solved at each step. On the other hand, it is difficult to relate the exact objective for each individual step in the hierarchy to the overall campaign and planning objective function, and therefore it is very difficult to assess the quality of the final solution obtained. Shah and Pantelides (1991) proposed a single-level mathematical (MILP) formulation for the simultaneous campaign formation and planning problem. Their algorithm simultaneously determines the number and the length of the campaigns and the products and/or stable intermediates manufactured within each campaign. They consider serial processing networks operating in a mixed Zero-Wait/Unlimited Intermediate Storage (ZW/UIS) mode, and nonidentical parallel equipment items operating in phase.
Voudouris and Grossmann (1993) extended the work originally presented by Birewar and Grossmann (1989a,b, 1990) to campaign planning problems for multiproduct plants. They introduced cyclic scheduling, location and sizing of intermediate storage, and inventory considerations, along with novel linearization schemes transforming the resulting MINLP formulation. Tsiroukis et al. (1993) considered the optimal operation of multipurpose plants operating in campaign mode to fulfil outstanding orders. Resource constraints are explicitly taken into account, while the limited availability of resource levels affects the operation of the plant. To deal with the complexity, nonconvexity and nonlinearity of the MINLP formulation, more efficient formulations along with a problem-specific two-level decomposition strategy were proposed. Papageorgiou and Pantelides (1996a) presented a general MP formulation for multiple-campaign planning/scheduling of multipurpose batch/semicontinuous plants. In contrast to the hierarchical approaches presented above, a single-level formulation was developed, encompassing both overall planning considerations pertaining to the campaign structure and scheduling constraints describing the detailed operation of the plant during each campaign. The problem involves the simultaneous determination of the campaigns (i.e., duration and constituent products) and, for every campaign, the unit-task allocations, the tasks' timings and the flow of material through the plant. A cyclic operating schedule is repeated at a fixed frequency within each campaign, thus significantly simplifying the management and control of the plant operation. A rigorous decomposition approach to the solution of this problem is presented by Papageorgiou and Pantelides (1996b) and its effectiveness was demonstrated by applying it to a number of examples. Ways in which the special structure of the constituent mathematical models of the decomposition scheme can be exploited to reduce their size and the associated integrality gaps are also considered.
7.3 Planning for New Product Development
Pharmaceutical industries are undergoing major changes to cope with the new challenges of the modern economy. The internationalization of the business, the diversity and complexity of new drugs, and the diminishing protection provided by patents are some of the factors driving these challenges. Market pressures are also forcing pharmaceutical industries to take a more holistic view of their product portfolio. The typical life cycles of new drugs are becoming shorter, making it harder to recover the investments, especially with the expiry of short-lived patents and the later arrival of generic substitutes in the market, which reduce profitability. It becomes necessary for the industry to protect itself against these pressures while considering the limited physical and financial resources available. Several important issues and strategies for the solution of problems concerning pharmaceutical supply chains are critically reviewed by Shah (2004). A large number of candidate new products in the agricultural and pharmaceutical industry must undergo a set of steps related to safety, efficacy, and environmental
impact prior to commercialization. If a product fails any of the tests then all the remaining work on that product is halted and the investment in the previous tests is wasted. Depending on the nature of the products, testing may last up to 10 years, and the scheduling of tests should be decided with the goal of minimizing the time-to-market and the expected cost of the testing. Another important challenge that the pharmaceutical and agrochemical industry faces today is how to configure its product portfolio, including any capacity investments, in order to obtain the highest possible profit in a rapid and reliable way. These decisions have to be taken in the face of considerable uncertainty, as demands, sales prices, outcomes of clinical tests, etc. may not turn out as expected. These problems have recently received attention from the process systems engineering community, utilizing advances from the process planning and scheduling area. The first approach appeared in the literature by Schmidt and Grossmann (1996), who considered the problem of optimal sequencing of testing tasks for new product development, assuming that unlimited resources are available. For a product involving a set of testing tasks with given costs, durations and probabilities of success, these authors formulated a MILP model based on a continuous-time representation to determine the sequence of those tasks. The objective of the model is to maximize the expected net present value (NPV) associated with a product, while a special case considers the minimization of cost, subject to a time completion constraint. Even though there may be a number of new products under consideration, the assumption of unlimited resources allows the problem, with either of the two objectives, to be decomposed by product. Extending this work, Jain and Grossmann (1999) developed an MILP model that performs the sequencing and scheduling of testing tasks for new product development under resource constraints. It was shown that it is critical to incorporate resource constraints along with the sequencing of testing tasks to obtain a globally optimal solution. Blau et al. (2000) developed a simulation model for risk management in the new product development process and Subramanian et al. (2001) proposed a simulation-based framework for the management of the research and development (R&D) pipeline. The focus of these works, however, is the new product development process and not the planning and design of manufacturing facilities. In most of these references it is assumed that there are no capacity limitations or that the production level of a new product is not affected by the production levels of other products. Furthermore, investment costs are not explicitly included in the calculation of the NPV of the projects. The problem of simultaneous new product development and planning of manufacturing facilities has received rather limited attention. Papageorgiou et al. (2001) developed a novel optimization-based approach to selecting a product development and introduction strategy, as well as capacity planning and investment strategies. The overall problem is formulated as a MILP model that takes account of both the particular features of pharmaceutical active ingredient manufacturing and the global trading structures. Maravelias and Grossmann (2001) considered the simultaneous optimization of resource-constrained scheduling of testing tasks in new product development and design/planning of batch manufacturing facilities.
A multiperiod MILP model was proposed that takes into account multiple tradeoffs and predicts
which products should be tested, the detailed test schedule that satisfies design decisions for the process network, and production profiles for the different scenarios defined by the various testing outcomes. A heuristic algorithm based on Lagrangean decomposition was investigated for the solution of larger problem instances. Roger et al. (2002) have addressed a similar problem. In most of the above approaches it is assumed that the resources available for testing, such as laboratories and scientists, are constant throughout the testing horizon, and that all testing tasks have fixed costs, durations and resource requirements. Another common assumption in all the above approaches is that the cost of a test does not depend on the amount of resources allocated to it. However, as noted in the recent contribution by Maravelias and Grossmann (2004), a company may decide to hire more scientists or build more laboratories to handle more efficiently a large number of potential new products in the R&D pipeline. As another option, the company may have to outsource the tests, often at a high cost. All these issues have been addressed by proposing a MILP model that is efficiently solved with a heuristic decomposition algorithm. In most of the above approaches uncertainty aspects have been neglected, although clinical tests are highly uncertain in practice. The recent work by Gatica et al. (2003) explicitly considers uncertainty in clinical trial outcomes. A multistage, multiperiod stochastic problem was developed and reformulated as a multiscenario MILP model. For this model, a performance measure that takes appropriate account of risk and potential returns has also been formulated. Levis and Papageorgiou (2004) extended the work of Papageorgiou et al. (2001) and proposed a two-stage multiscenario MILP model determining both the product portfolio and the multisite capacity planning in the face of uncertain clinical outcomes, while taking into account the trading structure of the company. They proposed a novel hierarchical algorithm to reduce the computational effort needed for the solution of the resulting large-scale MILP models.
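To illustrate why the sequencing of testing tasks matters in the formulations discussed above, consider the deliberately simplified case of a single product whose tests are carried out one after another in a fixed sequence, where test $i$ has cost $c_i$ and probability of success $p_i$, and a test is performed only if all preceding tests have succeeded (ignoring discounting, durations and resource constraints):

$$ E[C] = \sum_{i=1}^{n} c_i \prod_{j=1}^{i-1} p_j . $$

Cheap tests with a high chance of failure therefore tend to be favored early in the sequence, since a failure terminates the project before the more expensive tests are incurred; the MILP models cited above embed this trade-off together with time-to-market, NPV and resource considerations.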
7.4 Tactical Planning
Planning and scheduling is usually part of a company-wide logistics and supply-chain management platform. However, to distinguish between these topics, or even to distinguish further between planning and scheduling, is often an artificial rather than a pragmatic exercise. In reality, the borderline between all these areas is diffuse, due to the strong overlaps between scheduling and planning in production, distribution or supply-chain management and strategic planning. Planning and scheduling considerations are very closely related and often confused. The most common distinction between the two concepts is based on the time horizon they consider. While scheduling considers problems that may span some hours to a few weeks, planning problems may consider time horizons of a few weeks up to a few months, and in many applications even years. Tactical
planning aims to set the targets for the scheduling applications that will follow, in order to determine the operational policy of the plant in the short term. Owing to the longer time horizons involved, planning decisions are often subject to uncertainty that might arise from many sources. The planning operation in the process industry is focused on analyzing the supply-chain operations as they are defined by strategic planning (see Fig. 7.1). The competitive environment and technological advances have resulted in enterprise resource planning (ERP) systems becoming widely used within the process sector; they are considered to be software suites that help organizations to integrate their information flow and business processes (Abdinour-Helm et al. 2003). The fundamental benefits of ERP systems do not in fact come from their planning capabilities but rather from their abilities to process transactions efficiently and to provide organized, structured databases (Jacobs 2003). Planning and decision support applications represent optional additions to this basic transactional, query and report capability. ERP has been designed to supersede the earlier concepts of material requirement planning (MRP) and manufacturing resource planning (MRP-II) that were designed to assist planners at a local level, by linking various pieces of information, especially in manufacturing. The advantage of a successful ERP implementation is the integration between different levels of the enterprise, such as financial, controlling, project management, human resources, plant maintenance and material flow logistics (Mandal and Gunasekaran 2003). The planning functions at a tactical level benefit from the existence of an ERP system in place; the two systems do not replace each other but their relationship can be described as complementary. ERP systems play the role of an information highway that connects all planning levels and links various decision support systems to the same data. MRP systems were designed to work backwards from the sales orders to determine the raw material required for production (Orlicky 1975). MRP-II was introduced as a follow-up to resolve obvious operational problems usually associated with the absence of capacity considerations from MRP that resulted in poor schedules (Wight 1984).
Fig. 7.1 Operations planning decision hierarchy: multisite planning (enterprise resource planning - MRP, VMI, demand management - and business transaction systems), site planning, and scheduling and execution systems.
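The backward-explosion logic underlying MRP - working from sales orders through the bill of materials to raw-material requirements - can be illustrated with the following minimal Python sketch; the product structure and demand figures are purely hypothetical.

```python
# Minimal sketch of an MRP-style bill-of-materials explosion.
# The product structure and the demand below are hypothetical.

# bill_of_materials[item] = list of (component, quantity per unit of item)
bill_of_materials = {
    "finished_pack":  [("blister_strip", 2), ("carton", 1)],
    "blister_strip":  [("tablet", 10), ("foil", 1)],
    "tablet":         [("api_kg", 0.0005), ("excipient_kg", 0.002)],
}

def explode(item, quantity, requirements):
    """Accumulate gross requirements for 'quantity' units of 'item'."""
    requirements[item] = requirements.get(item, 0.0) + quantity
    for component, per_unit in bill_of_materials.get(item, []):
        explode(component, quantity * per_unit, requirements)
    return requirements

# Sales orders drive the explosion backwards to raw materials.
gross = {}
for product, qty in {"finished_pack": 5000}.items():
    explode(product, qty, gross)

for item, qty in sorted(gross.items()):
    print(f"{item:15s} {qty:12.1f}")
```

A full MRP run would additionally net these gross requirements against existing inventory and offset them in time by the lead time of each item; MRP-II and ERP systems add capacity and financial considerations on top of this basic calculation.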
The weakness of both approaches is that they were targeted at, and developed for, the manufacturing environment, and very often ignored the complexities of the process world. ERP, on the other hand, is not limited to manufacturing companies, but is useful for any company with the need to integrate its information across many functional areas. Planning in the process industry is used to create production, distribution, order satisfaction and inventory plans, based on the information that can be extracted from ERP systems, while satisfying several constraints. In particular, operational plans have to be determined that are aimed at setting targets for future production, planning the distribution of materials and allocating other related activities according to the business expectations. Business expectations are the product of strategic resource planning. A successful strategic resource plan, which can be produced either by activity-based costing (ABC), MP, the resource-based view (RBV), or a combined approach, is passed to the ERP system (Shapiro 1999). It is common practice that, based on these tactical plans, detailed schedules may be produced that define the exact sequence of operations and determine the utilization of the available resources. Tactical planning is called on to address a number of decisions: the manufacturing policy (what shall we make?), the procurement policy (what do we need?), the inventory or stock policy (what stock already exists?), and the resource utilization policy (what do I need to make it?). Tactical planning supports different short- to medium-term objectives for the business by using different objective functions. By using different objective functions we can create several operational plans to support the various strategic supply-chain decisions. Its differentiation from other planning approaches is that it requires a more detailed representation of the resources in a system. These resources are tied to a number of constraints that might need to be satisfied. A common approach to tactical planning in the process industry is to describe the problem using a MP model, and then to optimize towards a desired objective. The objective can be maximization of profit, customer order satisfaction, minimization of cost, minimization of tardiness, minimization of common resource utilization, etc. The production environment is a rather complex network and most standard heuristic production planning tools fail to address this complexity. This situation gave rise to the idea of employing MP-based models to provide planning systems with a higher degree of flexibility by considering both product demands, as a function of the marketing and sales departments of an organization, and the plant capacity in terms of equipment, material, manpower and utility resources. The problem has been modeled using a number of approaches. Bassett et al. (1996b) proposed a higher-level planning model based on formal aggregation techniques and using uniform time discretization. The model contains aggregate material balance constraints and equipment allocation constraints similar to those of the state-task network (STN) description of a process. This planning model forms part of a decomposition strategy where production is allocated to different time zones, thus creating a set of scheduling problems that can then be solved independently. Wilkinson (1995) presented a generic mathematical technique to derive aggregate planning models of high accuracy based on the resource-task network (RTN) repre-
sentation. The proposed formulations allow a large number of the complicating features of multipurpose, multiproduct plant operation to be taken into account in a unified manner. Sequence-dependent changeovers, task utility requirements and limited intermediate storage are some of the additional features included. Also the use of linking variables allows the planning model to take into account inventory levels more accurately. These two formulations are fairly generic and include most of the important features regarding the planning in process industries where fixed recipes are employed. Prior to them, most of the planning models contained complicated sets of constraints which had been tailored to a specific problem type.

7.5 Resource Planning in the Power Market and Construction Projects
The area of resource planning in the energy and power market and in construction projects is worthy of a review in its own right; it will be considered somewhat briefly here, mainly due to its strong similarities with the process planning problem.

7.5.1 Resource Planning in the Power Market
In a traditional electric power system, a utility company is responsible for generating and delivering power to its industrial, commercial and residential customers in its service area. It owns generation facilities and transmission and distribution networks, and obtains the necessary information for the economical and reliable operation of its system. For instance, an important problem faced daily by a traditional utility company is to determine which generating units should be committed and when, and how they should be dispatched to meet the system-wide demand and reserve requirements. The centralized resource planning problem involves discrete states (e.g., on/off of units) and continuous variables (e.g., units' generation levels), with the objective being to minimize the total generation costs. A 1% reduction in costs can result in savings of more than US$10 million per year for a large utility company. Various methods have been presented in the literature and impressive results have been obtained (Wang et al. 1995, Guan et al. 1997, Li et al. 1997). Today, the deregulation and restructuring of the electric power industry worldwide have raised many challenging issues for the economic and reliable operation of electric power systems. Traditional unit commitment and hydrothermal scheduling/planning problems are integrated with resource bidding, and the development of optimization-based bidding strategies is at a preliminary stage. Ordinal optimization approaches seek "good enough" bidding strategies with high probability, and turn out to be effective in handling market uncertainties at much reduced computational cost. Under this new structure, resource planning is intertwined with bidding in the market, and power suppliers and system operators are facing a new spectrum of issues and challenges (Guan and Luh 1999).
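A generic sketch of the centralized unit-commitment problem just described (simplified, and not the exact formulation of any of the cited works) uses binary on/off variables $u_{i,t}$ and continuous generation levels $p_{i,t}$ for unit $i$ in period $t$:

$$ \min \sum_{t}\sum_{i}\left( c_i\,p_{i,t} + s_i\,u_{i,t} \right) \quad \text{s.t.} \quad \sum_{i} p_{i,t} = D_t, \qquad \sum_{i} p_i^{\max} u_{i,t} \ge D_t + R_t, \qquad p_i^{\min} u_{i,t} \le p_{i,t} \le p_i^{\max} u_{i,t}, \qquad u_{i,t} \in \{0,1\}, $$

where $c_i$ and $s_i$ are generation and commitment cost coefficients and $D_t$ and $R_t$ are the demand and reserve requirements; minimum up/down-time and ramp-rate constraints are typically added, which is what makes the problem a large mixed-integer program.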
Many approaches have been presented in the literature to address resource planning in deregulated power markets. In this context, modeling and solving the bid selection problem has recently received significant attention. In Hao et al. (1998), bids are selected to minimize the total system cost, and the energy clearing price is determined as the highest accepted price for each hour. In Alvey et al. (1998), a bid clearing system in New Zealand is presented. Detailed models are used, including network constraints, reserve constraints, and ramp-rate constraints, and LP is used to solve the problem. Another very popular way to model the bidding process is to treat the competitors' behavior as uncertainties. The bidding problem can therefore be converted to a stochastic optimization problem. One of the widely used approaches in stochastic optimization to address this problem is stochastic dynamic programming (Contaxis 1990, Li et al. 1990). The basic idea is to extend the backward dynamic programming procedure by using probabilistic inputs and probabilistic state transitions in place of deterministic inputs and transitions, and by using expected costs to calculate the costs-to-go. The direct consequence is an increased computational cost due to the significant increase in the input space and the number of possible transitions. For example, when stochastic dynamic programming is used to solve a hydroscheduling problem with uncertain inflows, one more dimension is needed to consider probable inflows in addition to reservoir levels, which significantly increases the dimensionality of the problem. Another approach is scenario analysis (Carpentier et al. 1998, Takriti et al. 1996). Each scenario (or possible realization of the random events) is associated with a weight representing the probability of its occurrence. The objective is to minimize the expected costs over all possible scenarios. Since the number of possible scenarios, and consequently the computational requirements, increase drastically as the number of uncertain factors and the number of possibilities per factor increase, this approach can only handle problems with a limited number of uncertainties. Recently, stochastic dynamic programming has been embedded within the Lagrangean relaxation framework for energy scheduling problems, where stochastic dynamic programming is used to solve uncertain subproblems after the system-wide coupling constraints are relaxed. Since the dynamic program for each subproblem can be solved effectively without encountering the curse of dimensionality, good schedules are obtained without a major increase in computational requirements (Luh et al. 1999). Among the alternatives being investigated for the generation of electricity are a number of unconventional sources, including solar energy and wind energy. In recent decades photovoltaic (PV) energy found its first commercial use in space. In many parts of the globe, PV systems are being considered as a viable alternative for generating electricity. Achieving this goal requires PV systems to enter the utility market, whereby electric utilities evaluate the potential of each PV system in terms of its impact on electric utility expansion planning and the requirements for backup generating capacity to ensure a reliable supply of electricity. Abdul-Rahman et al. (1996) presented a model for short-term resource scheduling in power systems.
An augmented Lagrangean relaxation was used to overcome difficulties with solution convergence as realistic constraints (i.e., transmission flows, fuel emissions, ramp-rate limits, etc.) were introduced in the formulation of unit commitment. Marwali et al. (1998) presented an efficient approach to short-term resource planning for integrated thermal and PV battery generation. The proposed model incorporates battery storage for peak loads. Several constraints, including battery capacity, minimum up/down times and ramp-rates for thermal units, as well as natural PV capacity, are considered in the proposed model.
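The scenario-analysis approach mentioned above can be sketched, in generic form, as a two-stage stochastic program in which first-stage (here-and-now) decisions $x$ are shared by all scenarios, while recourse decisions $y_s$ and data $(q_s, T_s, h_s)$ depend on the scenario $s$ with probability $\pi_s$:

$$ \min_{x,\,y_s} \; c^{\top}x + \sum_{s} \pi_s\, q_s^{\top} y_s \quad \text{s.t.} \quad Ax = b, \qquad T_s x + W y_s = h_s, \qquad x \ge 0, \; y_s \ge 0 \;\; \forall s, $$

so that the expected cost over all scenarios is minimized; since every additional uncertain factor multiplies the number of scenarios, the size of this program grows quickly, which is the limitation noted above.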
7.5.2 Resource Planning in Construction Projects
Traditionally, resource planning problems in construction projects have been solved either as resource-leveling or as resource-constrained scheduling problems. The resource-constrained scheduling problem constitutes one of the most challenging problems facing the construction industry, due to the limited availability of skilled labor and the increasing need for productivity and cost-effectiveness. These challenges have been discussed by many practitioners and have led researchers to investigate various avenues. One of the most promising solutions to the problem of the shortage of skilled labor has been to develop methods that optimize or better utilize the skilled workers already in the industry (Burleson et al. 1998). The resource-leveling problem arises when there are sufficient resources available and it is necessary to reduce the fluctuations in resource usage over the project duration. These resource fluctuations are undesirable because they often require a short-term hiring and firing policy. Short-term hiring and firing presents labor, utilization, and financial difficulties because (a) the clerical costs for employee processing are increased, (b) top-notch journeymen are reluctant to join a company with a reputation for doing this, and (c) new, less experienced employees require long periods of training. The scheduling objective of the resource-leveling problem is to make the resource requirements as uniform as possible, or to make them match a particular nonuniform resource distribution in order to meet the needs of a given project. Resource usage usually varies over the project duration because different types of resources are needed in varying amounts over the life of the project. In construction projects, for example, operators are needed at the beginning of the project to dig the foundations, but they are not needed at the end of the project for the interior finish work. In resource-leveling, the project duration of the original critical path remains unchanged. MILP models have been used to formulate the resource-constrained scheduling problem (Nutdtasomboon and Randhawa 1996). The efficiency of these models usually decreases due to the high combinatorial nature of the problem, and special algorithms have been developed in an attempt to reduce computational costs and improve the quality of the solution. Most of these algorithms rely on special branch-and-bound and implicit enumeration approaches (Sung and Lim 1996, Demeulemeester and Herroelen 1997). An alternative approach to improving the computational efficiency is the use of heuristic methods that produce feasible, but not necessarily optimal, solutions (Padilla and Carr 1991, Seibert and Evans 1991).
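The resource-leveling objective described above can be written in a generic form (not the specific formulations of the works cited) by taking activity start times as the decision variables over the fixed project duration $T$:

$$ \min \sum_{t=1}^{T} \left( r_t - \bar{r} \right)^2, \qquad r_t = \sum_{a \in A(t)} \rho_a, \qquad \bar{r} = \frac{1}{T} \sum_{t=1}^{T} r_t, $$

where $A(t)$ is the set of activities in progress in period $t$ and $\rho_a$ is the resource demand of activity $a$; the start times are constrained by precedence relationships and by the requirement that the critical-path duration remains unchanged. The resource-constrained problem, in contrast, imposes an availability limit $r_t \le R^{\max}$ and typically minimizes the project duration.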
Savin et al. (1998) presented a neural network application for construction resource-leveling using an augmented Lagrangian multiplier. The formulation objective is to make the resource requirements as uniform as possible. Thus, the formulation does not consider the case of nonuniform resource usage. Also, it only allows for one precedence relationship (finish-start) and one resource type, and does not perform cost optimization. Chan et al. (1996) proposed a resource scheduling method based on genetic algorithms (GAs). The method considers both resource-leveling and resource-constrained scheduling. It can minimize the project duration, but it does not consider the case of nonuniform resource usage, nor does it minimize the construction cost. Adeli and Karim (1997) presented a general mathematical formulation for project scheduling. Multiple crew strategies, work continuity considerations, and the effect of varying job conditions on crew performance could be modeled. They developed an optimization framework for minimizing the direct construction cost. However, the resource-leveling and resource-constrained scheduling problems were not addressed. Recently, Senousi and Adeli (2001) presented a new formulation including project scheduling characteristics such as precedence relationships, multiple crew strategies, and time-cost trade-off. The formulation can handle minimization of the total construction cost or duration while resource-leveling and resource-constrained scheduling are performed simultaneously. An important problem that has received rather limited attention in the literature is related to the optimal allocation of multiskilled labor resources in construction projects. This strategy is commonly found in the manufacturing and process industries, where some of the labor force is trained to be multiskilled. Various studies have demonstrated the benefits of multiskilled resources. Nilikari (1995) presented a study involving Finnish shipbuilding facilities, based on a multiskilled work team strategy, and found savings of up to 50% in production time. Burleson et al. (1998) explored several multiskill strategies such as a dual-skill strategy, a four-skill strategy and an unlimited-skill strategy. The study compared the economic benefits in a huge construction project to prove the benefits of multiskilling, but did not develop a mechanism for selecting the best strategy for a given project. The work of Brusco and Johns (1998) presented an integer goal-programming model for investigating cross-training multiskilled resource policies, to determine the number of employees in each skill category so as to satisfy the demand for labor while minimizing staff costs. The model was applied to the maintenance operations of a large paper mill in the USA. Hegazy et al. (2000) presented an approach for modifying existing resource scheduling heuristics that deal with limited resources, to incorporate the multiskills of the available labor and accordingly improve the schedule. The performance of the proposed approach was demonstrated using a case study, and the solution was compared with that of a high-end software system that considers multiskilled resources.
7.6 Solution Approaches to the Planning Problem
Most of the planning problems in the process industry result in an LP/MILP or NLP/MINLP model. Planning problems are usually NP-hard and data-driven; no standard solution techniques are therefore available, and in many cases we are actually searching for a feasible solution to the problem rather than an optimal one. The solution approaches found in the literature may be categorized as:
- exact and deterministic methods such as mathematical optimization including MILP and MINLP, graph theory (GT) or constraint programming (CP), or hybrid approaches in which MILP and CP are integrated;
- metaheuristics (evolutionary strategies, tabu search, simulated annealing (SA), various decomposition schemes, etc.).
In this section we focus on general solution approaches applied to planning problems, in addition to those that are mentioned in other sections and are problem-dependent. We are not going to describe extensively how these methods have been employed by a variety of authors, but rather the algorithms themselves and the classes of problem to which they have been applied. Despite the extensive research work that exists on the solution of long-term planning and short-term scheduling problems, interest in the medium-term planning problem has been limited. As the benefits of integrating tactical planning into strategic planning and production scheduling become clearer, interest in research into more effective methods has increased. Applequist et al. (1997) provide an excellent review of planning technology and the approaches available for solving planning and scheduling problems. Despite substantial efforts over the last 40 years, no algorithm, either exact or heuristic, has been found that can provide a solution to all planning problems.
7.6.1 Exact and Deterministic Methods
In real-life applications we rarely see any NLP/MINLP planning models, except in pooling or refinery planning. The remaining models proposed, despite their complexity in terms of the features they include and their mathematical terms, remain linear (or are transformed to be linear) in their variables and constraints. Therefore, using state-of-the-art commercial solvers, such as XPRESS-MP (Dash Optimization, http://www.dashoptimization.com), CPLEX (ILOG, http://www.ilog.com), or OSL (IBM, http://www.ibm.com), LP/MILP problems can be solved efficiently and at a reasonable computational cost. In the case of NLP/MINLP, the solution efficiency depends strongly on the individual problem and the model formulation. Thus, in many cases the structure of the problem is exploited in order to provide valid cuts, or to identify special structures, in order to reduce computational times and increase the quality of the solution.
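As a small illustration of how such LP/MILP planning models are posed and handed to a solver, the sketch below uses the open-source PuLP package purely as a stand-in for the commercial codes named above; the two-product, two-resource data are hypothetical.

```python
# Toy production-planning MILP: choose batch counts of two products to
# maximize profit subject to two shared resource capacities.
# Data are illustrative; PuLP's bundled CBC solver is used here in place
# of the commercial solvers mentioned in the text.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

products = ["A", "B"]
profit = {"A": 120.0, "B": 95.0}                         # profit per batch
usage = {("A", "reactor"): 3.0, ("A", "dryer"): 2.0,
         ("B", "reactor"): 2.0, ("B", "dryer"): 4.0}     # hours per batch
capacity = {"reactor": 240.0, "dryer": 300.0}            # hours available

model = LpProblem("tactical_plan", LpMaximize)
batches = {p: LpVariable(f"batches_{p}", lowBound=0, cat="Integer")
           for p in products}

# Objective: total profit over the planning period.
model += lpSum(profit[p] * batches[p] for p in products)

# Capacity constraint for each shared resource.
for r, cap in capacity.items():
    model += lpSum(usage[(p, r)] * batches[p] for p in products) <= cap

model.solve()
for p in products:
    print(p, batches[p].value())
print("profit:", value(model.objective))
```

The same structure - variables, a linear objective and linear capacity constraints - is what the commercial solvers receive, typically through algebraic modeling systems and at a far larger scale.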
However, as both MILP and MINLP are NP-hard problems, it is recommended that the full mathematical structure of a problem be exploited. Software packages may also differ with respect to their presolving techniques, default strategies for the branch-and-bound algorithm, cut generation within the branch-and-cut algorithm, and, last but not least, diagnosing and tracing infeasibilities, which is an important issue in practice. Kallrath (2000) provides an extensive review of mixed-integer optimization in the process industry, describing solution methods, algorithms, and applications. Taking advantage of the special structure of problems mathematically formulated either as MILPs or MINLPs, several decomposition methods have been proposed and implemented on various types of problems. Bassett et al. (1996a), focusing on the chemical process industries, examined a number of time-based decomposition approaches along with their associated strengths and weaknesses. It is shown that the most promising of the approaches utilizes a reverse rolling window in conjunction with a disaggregation heuristic, applied to an aggregate production plan, as part of their approach to integrating hierarchically related decisions. Resource- and task-based decompositions are also examined as possible approaches to reduce the problem to manageable proportions. A number of examples are presented to validate the proposed schemes. Gupta and Maranas (1999) utilized an efficient decomposition procedure, based on Lagrangean relaxation, to solve mid-term planning problems. Having tried commercial MILP solvers, they found the employed solution strategy to be more efficient. The basic idea of the proposed solution technique is the successive partitioning of the original problem into smaller, more computationally tractable subproblems by hierarchical relaxation of key complicating constraints. Alongside the hierarchical Lagrangean relaxation, they employ a heuristic algorithm to obtain valid upper bounds. Two examples are used to demonstrate the capabilities of the proposed algorithm. The size of the actual planning problem may be prohibitive for standard commercial solvers. Therefore, rigorous decomposition techniques that benefit from the special structure of MILP problems are exploited. Dimitriadis (2001) identified that block-diagonal MILP problems may be decomposed into simpler ones and introduced the concept of decomposable MILP (D-MILP). An algorithm based on the idea of "key variables", which break the problem down into a number of smaller partial MILPs that can be solved independently and in parallel, was implemented based on a standard branch-and-bound scheme. The decomposition branch-and-bound (dBB) algorithm, as it is called, achieves better performance by obtaining quick upper bounds to the problem and assisting the solver to find an optimal solution within reasonable computational time. One of the advantages of the approach is that it can guarantee the optimality of the solution. Tsiakis et al. (2000) improved the algorithm by providing an automated method to decompose the problem and implementing a more generic solution scheme applicable to all MILP problems that have a similar structure.
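The Lagrangean relaxation idea behind such decomposition schemes can be sketched in a generic form (not the exact models of the works cited): for a planning problem $\min \{ c^{\top}x : Ax \ge b, \; x \in X \}$ in which $Ax \ge b$ are the complicating (e.g., linking) constraints, these constraints are dualized with multipliers $\lambda \ge 0$:

$$ L(\lambda) = \min_{x \in X} \; c^{\top}x + \lambda^{\top}(b - Ax). $$

For any $\lambda \ge 0$, $L(\lambda)$ is a valid lower bound on the optimal cost and, because the remaining set $X$ usually separates by site, product or period, its evaluation decomposes into small independent subproblems; the best bound is obtained from $\max_{\lambda \ge 0} L(\lambda)$ (e.g., by subgradient updates), while a heuristic converts relaxed solutions into feasible plans, providing the bounds of opposite sign mentioned above.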
7.6.2 Metaheuristics
In addition to the so-called exact optimization methods, there are techniques described as heuristics. These techniques differ in the sense that they cannot guarantee an optimal solution; instead they aim to find reasonably good solutions in a relatively short time. Heuristics tend to be fairly generic and easily adaptable to a large variety of planning problems. There are a number of general-purpose heuristic approaches that can be applied to planning and scheduling problems (Pinedo 2003). SA and tabu search are described as improvement algorithms. Algorithms of the improvement type are conceptually completely different from constructive-type algorithms. The algorithm starts by obtaining a complete plan, which can be selected arbitrarily, and then tries to obtain a better plan by manipulating the current solution. The procedure is described as local search. A local search procedure does not guarantee an optimal solution, but aims to obtain a better solution in the neighborhood of the current one. Such procedures very often employ a probabilistic acceptance-rejection criterion in the hope that it will lead to a better solution. Reeves (1995) describes the methods and their applications in production planning systems extensively. GAs are more general than SA and tabu search and can be classified as a generalization of the previously mentioned techniques. In this case a number of feasible solutions are initially found. Then, local search based on an evolution criterion is employed to select the most promising solutions for further exploitation. The rest of the solutions are fathomed (Reeves 1995). Heuristics are widely employed in industry to provide solutions to production planning problems. Stockton and Quinn (1995) describe how a GA based on aggregate planning techniques is used to develop a production plan that allows a strategic business objective to be implemented in short- and mid-term operational plans. LeBlanc et al. (1999) utilize an extension of the multiresource generalized assignment problem (MRGAP) in order to provide an implementable solution to production planning problems. The model considers splitting of individual batches across multiple machines, while considering the effect of set-up times and set-up costs, features that the standard assignment problem (AP) fails to capture. The proposed formulations are solved using adaptations of a GA and SA. A multiobjective GA (MOGA) approach was employed by Morad and Zalzada (1999) for the planning of multiple machines, taking into account their processing capabilities and the process costs incurred. The formulation is based on multiobjective weighted-sums optimization, whose objectives are to minimize makespan, total rejects produced and the total cost of production. Tabu search is employed by Baykasoglu (2001) to solve multiobjective aggregate production planning (APP) problems based on a mathematically formulated problem. The model by Masud and Hwang was selected as the basis due to its extensibility characteristics.
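A minimal sketch of the improvement-type local search just described, using simulated annealing on a toy order-to-machine assignment problem (the data and cost function are hypothetical, not taken from the cited works):

```python
# Simulated annealing as an improvement-type algorithm: start from an
# arbitrary complete plan and repeatedly perturb it, accepting worse
# neighbours with a probability that decreases as the "temperature" falls.
import math
import random

orders = [12, 7, 15, 9, 4, 11]      # hypothetical order sizes
n_machines = 3

def cost(plan):
    """Makespan-like cost: load of the most heavily loaded machine."""
    loads = [0.0] * n_machines
    for order, machine in zip(orders, plan):
        loads[machine] += order
    return max(loads)

def neighbour(plan):
    """Move one randomly chosen order to another machine."""
    new_plan = list(plan)
    i = random.randrange(len(orders))
    new_plan[i] = random.randrange(n_machines)
    return new_plan

random.seed(0)
current = [random.randrange(n_machines) for _ in orders]   # arbitrary start
best = list(current)
temperature = 10.0

while temperature > 0.01:
    candidate = neighbour(current)
    delta = cost(candidate) - cost(current)
    # Always accept improvements; accept worse plans with Boltzmann probability.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
        if cost(current) < cost(best):
            best = list(current)
    temperature *= 0.99

print("best assignment:", best, "cost:", cost(best))
```

Tabu search replaces the probabilistic acceptance rule with a memory of recently visited moves, while a GA would instead maintain a population of such plans and recombine the most promising ones.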
7.7 Software Tools for the Resource Planning Problem
Enterprise resource planning (ERP) is a software-driven business management system which integrates all facets of the business, including planning, manufacturing, sales and marketing. Increasingly complex business environments require better and more accurate resource planning. Furthermore, management is under constant pressure to improve competitiveness by lowering operating costs and improving logistics, thus increasing the level of control within the operating environment. Organizations therefore have to be more responsive to the customer and competition. Resource planning as a business solution aims to help the management by setting up better business practices and equipping them with the right information to take timely decisions. Production planning as a later business function is considered to be part of the supply-chain planning and scheduling suite, alongside other functions such as demand forecasting, supply-chain planning, production scheduling, distribution and transportation planning. Tactical production planning includes those software modules responsible for production planning within a single manufacturing facility. These solutions normally address tactical activities, although they may also be used to support both strategic and operational decisions and are very often integrated with them.
7.7.1 Enterprise Resource Planning
A big share of the software and services provided worldwide targets the integration of ERP and supply-chain operations. Most of the information needed by production planning software tools resides within ERP systems. Most of the ERP software providers have already developed their own fully integrated planning applications, have acquired smaller companies with production planning software, or have partnered with such providers. The standard object-oriented approach to the implementation of ERP systems has contributed towards easy integration. The leading suppliers and systems integrators in the worldwide ERP market across all industry sectors are, alphabetically: Oracle (http://www.oracle.com), Manugistics (http://www.manugistics.com), Peoplesoft (http://www.peoplesoft.com), and SAP (http://www.sap.com), according to the latest market share studies. For small-medium enterprises (SMEs) the leading provider of ERP systems is Microsoft Business Solutions with its Navision system (http://www.microsoft.com/businessSolutions).

7.7.2 Production Planning
Production planning deals in medium-range time horizons, where decisions about incremental adjustments to the capacity or customer service levels are made.
Changes to supplier delivery dates, swings in raw materials purchases, and outsourcing agreements may require 3-5 months. Thus, production planning deals with what will be done, and when, in a factory over longer time frames. Tactical plans are updated frequently based on the operational plan and the actual schedule. This section provides profiles of production planning software suppliers with a main focus on the process industry, again in alphabetical order.

7.7.2.1 Advanced Process Combinatorics (http://www.combination.com)
The company's modular supply-chain product VirtECS contains a module, called Scheduler, with production planning capability. The package handles complex production planning models with multiple input/output bills of material, multiple routings, resource constraints and set-up times. The algorithms used for production planning are based on a MILP formulation, with a number of techniques applied for their solution. Additionally, a set of Gantt-chart-based interactive tools allows the user to manipulate the actual plan. A key strength of APC revolves around its research on optimization, since the company grew out of an industrial research consortium at Purdue University.

7.7.2.2 Aspen Technologies (http://www.aspentech.com)
Aspen Technology's supply-chain capabilities are based largely on the company's acquisition of the Process Industry Modelling Systems (PIMS) from Bechtel and of the Manager for Interactive Modelling Interfaces (MIMI) from Chesapeake Decision Sciences. Aspen PIMS is a tactical-level refinery planning package that is widely used in over 170 refineries worldwide. The Aspen MIMI production planner is focused on models that include material flows, set-up times, labor constraints and other resource restrictions. In addition to the standard heuristics and simulation employed for production planning and scheduling, the advanced planning offers LP-based optimization capabilities. Users can interact with the Gantt chart in order to develop "what-if" analysis cases and add constraints. Aspen has one of the larger installed bases of MIMI products, with over 300 customers around the globe.

7.7.2.3 i2 Technologies (http://www.i2.com)
As part of its supply-chain platform, i2’s Factory Planner manages material and capacity constraints to develop feasible operating plans for production plants. The tool aims to be a decision support system in the areas of production planning and scheduling, taking into account material and capacity requirements. It utilizes a number of heuristic algorithms and basic optimization to obtain feasible plans, and to answer capable-to-promise delivery-date quoting.
7.7.2.4 Manugistics (http://www.manugistics.com)
Manufacturing Planning and Scheduling, integrated within the Constraint-Based Master Planning supply-chain system of Manugistics, provides the detailed operational plan. It is based on a flow-oriented model, and uses the theory of constraints to solve production planning problems. It takes into account the throughput of equipment, determines the bill of materials, and allows what-if scenario analysis.

7.7.2.5 Process Systems Enterprise (PSE) (http://www.psenterprise.com)
PSE's ModelEnterprise has been designed as a modular supply-chain modeling platform that allows the construction and maintenance of complex enterprise models, and supports a wide range of tools applied to these models for solving different types of problem. The Optimal Single Site Planner and Scheduler (OSS Planner Scheduler) determines an optimal schedule for a plant producing multiple products. It is especially suited to multipurpose plants where products can be made on a selection of equipment units, via different routes and in different sizes. The plans produced are finite-capacity and rigorously optimal. The objective of the optimization problem can be configured according to the economic requirements of the operation - for example, to deliver maximum profit, maximum output or on-time in-full delivery. The OSS Planner Scheduler uses state-of-the-art MILP optimization algorithms that allow complex systems to be modeled. Utilizing comprehensive costing, all costs may be accounted for, such as processing, storage, utilities, cleaning, supplies and penalties for late delivery. PSE originated at Imperial College, London, in the 1990s, and ModelEnterprise has been developed based on knowledge and research found there.

7.7.2.6 SAP AG (http://www.sap.com)
The APO Production Planning and Detailed Scheduling (PP/DS) tool comes under the umbrella of SAP APO supply-chain solutions. The software can be used to generate production plans and sequence schedules. A variety of approaches is included in this solution, from the theory of constraints to mathematical optimization, but in principle it is a heuristics-based tool, where user-developed rules are employed. Other features of the tool include forward and backward scheduling, simultaneous capacity and material planning in detail, what-if analysis to simulate the effects of changes in constraints, and interactive scheduling via a Gantt chart interface.
7.8 Conclusions
The impact of accurate resource planning on the productivity and performance of both manufacturing and service organizations is tremendous. Researchers have found that organizations that had no resource planning information technology
infrastructure in place performed poorly most of the time compared to those that had a specific plan. The successful implementation of planning capabilities means reduced costs, increased productivity and improved customer service. Resource planning models and systems are therefore of significant importance, and the solution of the associated problems poses further challenges. Despite many years of study of resource planning models, plus numerous examples of successful modeling system implementations and industrial applications, there is still great potential for applying them in a pervasive and enduring manner to a wider range of real-life industrial applications. Several researchers have tackled the resource planning problem under uncertainty using different approaches. However, in most cases they have skirted around the problem of multiperiod, multiscenario planning with detailed production capacity models (i.e., embedding some scheduling information). Here, the issues that must be addressed mainly relate to problem scale. Combined mixed-integer programming and stochastic optimization techniques seem to offer a promising solution alternative for this problem. One of the major challenges will be to develop planning approaches that are consistent with detailed resource scheduling as part of the overall supply-chain integration. An obvious drawback is the problem size. This poses the need for rigorous decomposition algorithms and techniques that will enable handling problems of greater size without compromising the quality of the solution. Over the last few years a trend has developed bringing MP and constraint programming techniques closer to each other. This results in hybrid approaches (i.e., algorithms combining elements from both areas) that may have a great impact on reducing the computational requirements for solving large-scale planning problems. In addition to new techniques and solution approaches, advances in computational power in terms of hardware and software allow the exploitation of parallel optimization algorithms. The tree structure in mixed-integer optimization, and the time- or scenario-dependent structures, indicate that more benefits are to be expected from parallelizing the combinatorial part. Dealing with large-scale NP-hard problems may lead to the implementation of distributed planning, where the computational effort and time are divided over a number of computers or clusters. Time-based or spatial decomposition methods will be exploited more and more. Resource planning is a fundamental business process that exists in every production environment. It has long been recognized that in the process industries there are very large financial incentives for planning, scheduling and control decisions to function in a coordinated fashion. Nevertheless, many companies have not achieved integration in spite of multiple initiatives. An important challenge thus relates to the development of efficient theoretical methodologies, algorithms and tools to achieve this integration in a formal way, allowing process industries to take steps to practically improve the integration activities at different levels. The planning problem of refinery operations and offshore oilfields has recently been addressed by several researchers. However, the practical implementation of most of the developed approaches is usually limited to subsystems of a plant, with considerable simplifications. Here, the trend is to expand the planning process to
include larger systems, such as a group of refineries instead of a single one. Another area that deserves further attention is the inclusion of scheduling decisions in planning processes. Furthermore, there is much scope for developing commercial tools to help refineries cope with daily operational problems. In the area of product planning, the integration of development management and capacity and production planning seems to be very important. Currently, capacity issues are often not considered at the development stage. The development of integrated models of the life cycle, from discovery through to consumption, would greatly facilitate strategic decision making. Demand for advanced planning systems (APS) is expected to grow, with the solutions becoming increasingly industry- and supply-chain-specific. The standards are specified by the large software suppliers, such as i2 and Manugistics. The scope for smaller suppliers is to have a more specific focus on segments of the industry. There is a clear trend towards industry-specific solutions, owing to the different operating environments and the detail required in order to generate a meaningful plan. The development of resource planning systems very much depends on the industry segment (industrial or nonindustrial) and the manufacturing type (process or discrete industries). While segmentation based on the type of industry is common, it is important to be able to segment the operational environment based on the supply-chain type. In this case we have distribution-, manufacturing- or source-intensive supply chains, each with its own needs. Many companies are competing as software providers for planning systems. However, they have realized that they need to be able to communicate with other libraries and software modules as part of supply-chain solutions, and at minimum cost. Systems with open architecture and ease of integration are in demand. Initiatives such as CAPE-OPEN aim to define industry-wide standards (CO-LaN 2001).
References
1 Abdinour-Helm S., Lengnick-Hall M. L., Lengnick-Hall C. A., Eur. J. Oper. Res. 146 (2003) p. 258
2 Abdul-Rahman K. H., Shahidehpour S. M., Aganagic M., Mokhtari S., IEEE Trans. Power Syst. 11 (1996) p. 254
3 Adeli H., Karim A., J. Constr. Eng. Manage. 123 (1997) p. 450
4 Ahmed S., Sahinidis N. V., Ind. Eng. Chem. Res. 37 (1998) p. 1883
5 Alvey T., Goodwin D., Ma X., Streiffert D., Sun D., IEEE Trans. Power Syst. 13 (1998) p. 986
6 Applequist G., Samikoglu O., Pekny J., Reklaitis G., ISA Trans. 36 (1997) p. 81
7 Aseeri A., Gorman P., Bagajewicz M. J., Ind. Eng. Chem. Res. 43 (2004) p. 3063
8 Ballingin K., 1993, In: Ciriani T. A., Leachman R. C. (eds.), Vol. 3, Wiley, New York
9 Barbaro A., Bagajewicz M., AIChE Journal 50 (2004) p. 963
10 Barnes R., Linke P., Kokossis A., 2002, In: Proceedings of ESCAPE-12, The Hague, The Netherlands, p. 631
11 Bassett M. H., Dave P., Doyle F. J., Kudva G. K., Pekny J. F., Reklaitis G. V., Subrahmanyam S., Miller D. L., Zentner M. G., Comput. Chem. Eng. 20 (1996a) p. 821
12 Bassett M. H., Pekny J. F., Reklaitis G. V., AIChE J. 42 (1996b) p. 3373
13 Bassett M. H., Pekny J. F., Reklaitis G. V., Comput. Chem. Eng. 21 (1997) p. S1203
14 Baykasoglu A., Int. J. Prod. Res. 39 (2001) p. 3685
15 Bernardo F. P., Pistikopoulos E. N., Saraiva P., Ind. Eng. Chem. Res. 38 (1999) p. 3056
16 Birewar D. B., Grossmann I. E., Comput. Chem. Eng. 13 (1989a) p. 141
17 Birewar D. B., Grossmann I. E., Ind. Eng.
Chem. Process Des. Dev. 28 (1989b) p. 1333
18 Birewar D. B., Grossmann I. E., Ind. Eng. Chem. Res. 29 (1990) p. 570
19 Bitran G. R., Hax A. C., Decision Sci. 8 (1977) p. 28
20 Blau G., Metha B., Bose S., Pekny J., Sinclair C., Keunker K., Bunch P., Comput. Chem. Eng. 24 (2000) p. 659
21 Bose S., Pekny J. F., Comput. Chem. Eng. 24 (2000) p. 329
22 Brusco M. J., Johns T. R., Decision Sci. J. 29 (1998) p. 499
23 Burleson R. C., Hass C. T., Tucher R. L., Stanley A., J. Constr. Eng. Manage. 124 (1998) p. 480
24 Carpentier P., Woodwin D., Ma X., Streiyeert D., Sun D., IEEE Trans. Power Syst. 13 (1998) p. 986
25 Chan W. T., Chua D. K. H., Kannan G., J. Constr. Eng. Manage. 122 (1996) p. 125
26 Clay R. L., Grossmann I. E., Chem. Eng. Res. Des. 72 (1994) p. 415
27 CO-LaN: The CAPE-OPEN Laboratories Network (http://zann.informatik.rwth-aachen.de:8080/opencms/opencms/COLANgamma/index.html)
28 Contaxis G., Kavantza S., IEEE Trans. Power Syst. 5 (1990) p. 766
29 Das B. P., Rickard J. G., Shah N., Macchietto S., Comput. Chem. Eng. 24 (2000) p. 1625
30 Demeulemeester E., Herroelen W., Manage. Sci. 43 (1997) p. 1485
31 Dimitriadis A. D., 2001, PhD Thesis: Algorithms for the Solution of Large-Scale Scheduling Problems, Imperial College of Science, Technology and Medicine
32 Gatica C., Papageorgiou L. G., Shah N., Chem. Eng. Res. Des. 81 (2003) p. 665
33 Geddes D., Kubera T., Comput. Chem. Eng. 24 (2000) p. 1645
34 Goel H. D., Grievink J., Weijnen M. P. C., Comput. Chem. Eng. 27 (2003) p. 1543
35 Gothe-Lundgren M., Lundgren J. T., Persson J. A., Int. J. Prod. Econ. 78 (2002) p. 255
36 Grunow M., Gunther H., Lehmann M., OR Spectrum 24 (2002) p. 281
37 Guan X., Luh P. B., Discrete Event Dyn. Syst. Theory Appl. 9 (1999) p. 331
38 Guan X., Ni E., Li R., Luh P. B., IEEE Trans. Power Syst. 12 (1997) p. 1775
39 Hao S., Angelidis G. A., Singh H., Papalexopoulos A. D., IEEE Trans. Power Syst. 13 (1998) p. 986
40 Harjunkoski I., Grossmann I. E., Friedrich M., Holmes R., FOCAPO 2003, p. 315
41 Hax A. C., 1978, Handbook of Operations Research Models and Applications
42 Hegazy B. T., Shabeeb A. K., Elbeltagi E., Cheema T., J. Constr. Eng. Manage. 126 (2000) p. 414
43 Ierapetritou M. G., Floudas C. A., Vasantharajan S., Cullick A. S., AIChE J. 45 (1999) p. 844
44 Ierapetritou M. G., Pistikopoulos E. N., Ind. Eng. Chem. Res. 33 (1994) p. 1930
45 Ierapetritou M. G., Pistikopoulos E. N., Floudas C. A., Comput. Chem. Eng. 20 (1996) p. 1499
46 Iyer R. R., Grossmann I. E., Ind. Eng. Chem. Res. 37 (1998) p. 474
47 Iyer R. R., Grossmann I. E., Vasantharajan S., Cullick A. S., Ind. Eng. Chem. Res. 37 (1998) p. 1380
48 Jacobs R. F., Bendoly E., Eur. J. Oper. Res. 146 (2003) p. 233
49 Jain V., Grossmann I. E., Ind. Eng. Chem. Res. 38 (1999) p. 3013
50 Jia Z., Ierapetritou M. G., Ind. Eng. Chem. Res. 43 (2004) p. 3782
51 Kabore P., FOCAPO 2003, p. 285
52 Kallrath J., Trans. IChemE 78 (2000) p. 809
53 Kallrath J., OR Spectrum 24 (2002) p. 219
54 Kosmidis V. D., 2003, PhD Thesis, University of London, UK
55 Kosmidis V. D., Perkins J. D., Pistikopoulos E. N., 2002, in: Proceedings of ESCAPE-12, The Hague, The Netherlands, p. 327
56 LeBlanc L. J., Shtub A., Anandnalingham G., Eur. J. Oper. Res. 112 (1999) p. 54
57 Lee H., Pinto J. M., Grossmann I. E., Park S., Ind. Eng. Chem. Res. 35 (1996) p. 1630
58 Levis A. A., Papageorgiou L. G., Comput. Chem. Eng. 28 (2004) p. 707
59 Li C., Johnson R. B., Svaboda A. J., IEEE Trans. Power Syst. 12 (1997) p. 1834
60 Li C., Yan R., Zhou J., IEEE Trans. Power Syst. 5 (1990) p. 1487
61 Li P., Wendt M., Wozny G., FOCAPO 2003, p. 289
62 Lin X., Floudas C. A., Optimisation Eng. 4 (2003) p. 65
63 Liu M. L., Sahinidis N. V., Ind. Eng. Chem. Res. 34 (1995) p. 1662
64 Liu M. L., Sahinidis N. V., Comput. Oper. Res. 23 (1996a) p. 237
65 Liu M. L., Sahinidis N. V., Ind. Eng. Chem. Res. 35 (1996b) p. 4154
66 Liu L., Sahinidis N. V., Eur. J. Oper. Res. 100 (1997) p. 142
67 Luh P. B., Chen D., Tahur L. S., IEEE Trans. Robot. Autom. 15 (1999) p. 328
68 Mandal P., Gunasekaran A., Eur. J. Oper. Res. 146 (2003) p. 274
69 Maravelias C. T., Grossmann I. E., Ind. Eng. Chem. Res. 40 (2001) p. 6147
70 Maravelias C. T., Grossmann I. E., Comput. Chem. Eng. 28 (2004) p. 1921
71 Marwali M. K. C., Ma H., Shahidehpour S. M., Abdul-Rahman H., IEEE Trans. Power Syst. 13 (1998) p. 1057
72 Mauderli A., Rippin D. W. T., Comput. Chem. Eng. 3 (1979) p. 199
73 McDonald C. M., Karimi I. A., Ind. Eng. Chem. Res. 36 (1997) p. 2691
74 Morad N., Zalzada A., J. Intell. Manuf. 10 (1999) p. 169
75 Moro L. F. L., Comput. Chem. Eng. 27 (2003) p. 1303
76 Moro L. F. L., Zanin A. C., Pinto J. M., Comput. Chem. Eng. S22 (1998) p. S1039
77 Neiro S. M. S., Pinto J. M., Comput. Chem. Eng. 28 (2004) p. 871
78 Nilikari M., J. Ship Prod. 11 (1995) p. 239
79 Norton L. C., Grossmann I. E., Ind. Eng. Chem. Res. 33 (1994) p. 69
80 Nutdtasomboon N., Randhawa S., Comput. Ind. Eng. 32 (1996) p. 227
81 Oh H., Karimi I. A., Comput. Chem. Eng. 25 (2001a) p. 1021
82 Oh H., Karimi I. A., Comput. Chem. Eng. 25 (2001b) p. 1031
83 Orlicky J., 1975, Material Requirements Planning, McGraw-Hill, New York
84 Oxe G., Eur. J. Oper. Res. 97 (1997) p. 337
85 Padilla E. M., Carr R. L., J. Constr. Eng. Manage. 117 (1991) p. 279
86 Papageorgiou L. G., 1994, PhD Thesis, University of London
87 Papageorgiou L. G., Pantelides C. C., Comput. Chem. Eng. 17S (1993) p. S27
88 Papageorgiou L. G., Pantelides C. C., Ind. Eng. Chem. Res. 35 (1996a) p. 488
89 Papageorgiou L. G., Pantelides C. C., Ind. Eng. Chem. Res. 35 (1996b) p. 510
90 Papageorgiou L. G., Rotstein G. E., Shah N., Ind. Eng. Chem. Res. 40 (2001) p. 275
91 Pelham R., Phams C., Hydrocarb. Process. 75 (1996) p. 89
92 Petkov S. B., Maranas C. D., Ind. Eng. Chem. Res. 36 (1997) p. 4864
93 Pinedo M., 2003, Scheduling: Theory, Algorithms and Systems, 2nd edn., Prentice Hall, New Jersey
94 Pinto J. M., Joly M., Moro L. F. L., Comput. Chem. Eng. 24 (2000) p. 2259
95 Pistikopoulos E. N., Vassiliadis C. G., Arvela J. A., Papageorgiou L. G., Ind. Eng. Chem. Res. 40 (2001) p. 3195
96 Reeves C. R., 1995, Modern Heuristic Techniques for Combinatorial Problems, Wiley
97 Reklaitis G. V., 1991, Proceedings of the 4th International Symposium on Process Systems Engineering, Montebello, Canada
98 Reklaitis G. V., 1992, NATO Advanced Study Institute on Batch Processing Systems Engineering, Antalya, Turkey
99 Rickard J. G., Macchietto S., Shah N., Comput. Chem. Eng. S23 (1999) p. S539
100 Rigby B., Lasdon L. S., Waren A. D., Interfaces 25 (1995) p. 64
101 Rodera H., Bagajewicz M. J., Tsafalis T. B., Ind. Eng. Chem. Res. 41 (2002) p. 4075
102 Rodrigues M. M., Latre L. G., Rodrigues L. A., Comput. Chem. Eng. 24 (2000) p. 2247
103 Roger M. J., Gupta A., Maranas C. D., Ind. Eng. Chem. Res. 41 (2002) p. 6607
104 Romero J., Badell M., Bagajewicz M., Puigjaner L., Ind. Eng. Chem. Res. 42 (2003) p. 6125
105 Ryu J.-H., Dua V., Pistikopoulos E. N., Comput. Chem. Eng. 28 (2004) p. 1121
106 Sahinidis N. V., Comput. Chem. Eng. 28 (2004) p. 971
107 Sahinidis N. V., Grossmann I. E., Ind. Eng. Chem. Res. 30 (1991) p. 1165
108 Sahinidis N. V., Grossmann I. E., Fornari R. E., Chathrathi C., Comput. Chem. Eng. 13 (1989) p. 1049
109 Sanmarti E., Espuna A., Puigjaner L., Comput. Chem. Eng. S19 (1995) p. S565
110 Savin D., Alkass S., Fazio P., J. Comput. Civ. Eng. 12 (1998) p. 241
111 Schmidt C. W., Grossmann I. E., Ind. Eng. Chem. Res. 35 (1996) p. 3498
112 Seibert J. E., Evans G. W., J. Constr. Eng. Manage. 117 (1991) p. 503
113 Senousi A. B., Adeli H., J. Constr. Eng. Manage. 127 (2001) p. 28
114 Shah N., Comput. Chem. Eng. S20 (1996) p. S1227
115 Shah N., AIChE Symp. Ser. No. 320, 94 (1998) p. 75
116 Shah N., Comput. Chem. Eng. 28 (2004) p. 929
117 Shah N., Pantelides C. C., Ind. Eng. Chem. Res. 30 (1991) p. 2308
118 Shah N., Pantelides C. C., Ind. Eng. Chem. Res. 31 (1992) p. 1325
119 Shah N., Pantelides C. C., Sargent R. W. H., Ann. Oper. Res. 42 (1993) p. 193
120 Shapiro J. F., Eur. J. Oper. Res. 118 (1999) p. 295
121 Shapiro J. F., Comput. Chem. Eng. 28 (2004) p. 855
122 Shobrys D. E., White D. C., Comput. Chem. Eng. 26 (2002) p. 149
123 Stockton D. J., Quinn L., Proc. Inst. Mech. Eng. 209(3) (1995) p. 201
124 Subrahmanyam S., Pekny J. F., Reklaitis G. V., Ind. Eng. Chem. Res. 33 (1994) p. 2668
125 Subrahmanyam S., Pekny J. F., Reklaitis G. V., Ind. Eng. Chem. Res. 35 (1996) p. 1866
126 Subramanian D., Pekny J. F., Reklaitis G. V., AIChE J. 47 (2001) p. 2226
127 Sung C. S., Lim S. K., Comput. Ind. Eng. 12 (1996) p. 227
128 Suryadi H., Papageorgiou L. G., Int. J. Prod. Res. 42 (2004) p. 355
129 Takriti S., Birge J., Long E., IEEE Trans. Power Syst. 11 (1996) p. 1497
130 Tsiakis P., Rickard J. G., Shah N., Pantelides C. C., 2003, AIChE Annual Conference, San Francisco, California, USA
131 Tsiakis P., Dimitriadis A. D., Shah N., Pantelides C. C., 2000, AIChE Annual Conference, Los Angeles, California, USA
132 Tsiroukis A. G., Papageorgaki S., Reklaitis G. V., Ind. Eng. Chem. Res. 32 (1993) p. 3037
133 van den Heever S. A., Grossmann I. E., Ind. Eng. Chem. Res. 39 (2000) p. 1955
134 van den Heever S. A., Grossmann I. E., Comput. Chem. Eng. 27 (2003) p. 1813
135 van den Heever S. A., Grossmann I. E., Vasantharajan S., Edwards K., Comput. Chem. Eng. 24 (2000) p. 1049
136 van den Heever S. A., Grossmann I. E., Vasantharajan S., Edwards K., Ind. Eng. Chem. Res. 40 (2001) p. 2857
137 Voudouris T. V., Grossmann I. E., Ind. Eng. Chem. Res. 32 (1993) p. 1962
138 Wang S. J., Shahidehpour S. M., Kirschem D. S., Mokhtari S., Irissari G. D., IEEE Trans. Power Syst. 10 (1995) p. 1294
139 Wellons M. C., Reklaitis G. V., Comput. Chem. Eng. 13 (1989a) p. 201
140 Wellons M. C., Reklaitis G. V., Comput. Chem. Eng. 13 (1989b) p. 213
141 Wellons M. C., Reklaitis G. V., Ind. Eng. Chem. Res. 30 (1991a) p. 671
142 Wellons M. C., Reklaitis G. V., Ind. Eng. Chem. Res. 30 (1991b) p. 688
143 Wight O., 1984, Manufacturing Resource Planning: MRP-II, Wight, Williston
144 Wilkinson S. J., 1996, PhD Thesis, University of London
145 Wilkinson S. J., Shah N., Pantelides C. C., Comput. Chem. Eng. 19 (1995) p. S583
146 Wilson J. M., Eur. J. Oper. Res. 149 (2003) p. 430
147 Wu D., Ierapetritou M. G., FOCAPO 2003, p. 257
148 Yin K. K., Liu H., FOCAPO 2003, p. 261
2 Production Scheduling
Nilay Shah
2.1 Introduction
The theme of production planning and scheduling has been the subject of great attention in the recent past. Initially, especially from the early 1980s to the early 1990s, this was due to the resurgence in interest in flexible processing, either as a means of ensuring responsiveness or of adapting to the trends in chemical processing towards lower-volume, higher-value-added materials in the developed economies (Reklaitis 1991, Rippin 1993, Hampel 1997). More recently, the topic has received a new impetus as enterprises attempt to optimize their overall supply chains in response to competitive pressures or to take advantage of recent relaxations in restrictions on global trade, as well as the information storage and retrieval capabilities provided by ERP systems.

It is widely recognized that the complex problem of what to produce and where and how to produce it is best considered through an integrated, hierarchical approach that also acknowledges typical corporate structures and business processes. This type of structure is illustrated in Figure 2.1.

In the most general case, the extended supply chain is taken to mean the multienterprise network of manufacturing facilities and distribution points that perform the functions of materials procurement, transformation into intermediate and finished materials, and distribution of the finished products to customers. The most common context for planning at the supply-chain level is the coordination of manufacturing and distribution activities across multiple sites operated by a single enterprise (enterprise-wide or multisite planning). Here, the aim is to make the best use of geographically distributed resources over a certain time period. The result of the multisite planning problem is typically a set of production targets for each of the individual sites, and rough transportation plans for the network as a whole.

The production scheduling activity at each individual site seeks to determine precisely how these targets can be met (or indeed how best to compromise them if they cannot be met in whole). This involves determining the precise details of resource allocation over time.
Figure 2.1 Process operations hierarchy: enterprise/supply-chain planning, single-site production planning, on-line scheduling, supervisory control/monitoring, regulatory control
Once a series of activities has been determined, these must be implemented in the plant. The role of the supervisory control system is to initiate the correct sequences of control logic with the correct parameters at the correct time, making sure that conflicts for plant resources are resolved in an orderly manner. It is also useful at this level to create a schedule of planned operations over a short future interval, using a model detailed enough to ensure that there are no anticipated resource conflicts. This "online" scheduling allows current estimates of the starting and finishing times of each operation to be known at any time. Although this capability is not essential for the execution of operations in the plant, it is vital if the hierarchical levels are to be integrated so that production scheduling is performed in response to deviations in expected plant operation ("reactive scheduling"). Finally, the lowest levels of the hierarchy relate to execution of individual control phases and ensuring safe and economic operation of the plant.
2.1.1 Why Is Scheduling Important?
The planning function aims to optimize the economic performance of the enterprise as it matches production to demand in the best possible way. The production scheduling component is of vital importance as it is the layer which translates the economic imperatives of the plan into a sequence of actions to be executed on the plant, so as to deliver the optimized economic performance predicted by the higher-level plan.
2.1.2 Challenges in Scheduling
There is clearly a need for research and development at all levels of the operations hierarchy. Four previous reviews in this area (Reklaitis 1991, Rippin 1993, Shah 1998, Kallrath 2002a) summarized some of the main challenges as:
• the development of efficient general-purpose solution methods for the mixed-integer optimization problems that arise in planning and scheduling;
• the design of tailored techniques for the solution of specific problem structures, which either arise out of specific types of scheduling problems or are embedded substructures in more general problems;
• the design of algorithms for efficient solution of general resource-constrained problems, especially those based on a continuous representation of time;
• the development of hybrid methods based on optimization and constraint propagation methods;
• the development of commercially available software packages for optimization-based scheduling (as distinct from planning);
• the systematic treatment of uncertainty;
• the advancement of online techniques for rapid adaptation of operations;
• the development of methods for the integrated planning and scheduling of multisite systems.
Multisite and supply-chain planning and scheduling is dealt with in section 5.7, and the focus here is on scheduling at a single site. Progress towards these challenges will be described. The remainder of this chapter is organized as follows: Section 2.2 describes the problem in more detail. Sections 2.3-2.9 review research into alternative solution methods for scheduling problems, both with deterministic and uncertain data, and Section 2.10 describes some successful industrial applications of advanced scheduling methods. The remaining sections list some new application domains and describe conclusions drawn.
2.2 The Single-Site Production Scheduling Problem
The scheduling problem at a single site is usually concerned with meeting fairly specific production requirements. Customer orders, stock imperatives or higher-level supply-chain or long-term planning would usually set these requirements, as described in subsequent sections. It is concerned with the allocation over time of scarce resources between competing activities to meet these requirements in an efficient fashion. The data required to describe the scheduling problem typically include:
• Production recipes: details of how each product is to be produced, including details of production of intermediates. This will include material balance information, resource requirements, processing times/rates of the process tasks, etc.
• Resource data: for process equipment, storage equipment and utilities (capacities, capabilities, availabilities, costs, etc.).
• Material data: stability, opening inventories, anticipated receipts of raw materials/intermediates.
• Demand data: time horizon of interest, firm orders, forecasted demands, sales prices.
The key components of the scheduling problem are resources, tasks and time. The resources need not be limited to processing equipment items, but may include material storage equipment, transportation equipment (intra- and interplant), operators, utilities (e.g., steam, electricity, cooling water), auxiliary devices and so on. The tasks typically comprise processing operations (e.g., reaction, separation, blending, packaging) as well as other activities that change the nature of materials and other resources, such as transportation, quality control, cleaning, changeovers, etc.

There are both external and internal elements to the time component. The external element arises out of the need to coordinate manufacturing and inventory with expected product liftings or demands, as well as scheduled raw material receipts and even service outages. The internal element relates to executing the tasks in an appropriate sequence and at the right times, taking account of the external time events and resource availabilities.

Overall, this arrangement of tasks over time and the assignment of appropriate resources to the tasks in a resource-constrained framework must be performed in an efficient fashion, which implies the optimization, as far as possible, of some objective. Typical objectives include the minimization of cost or maximization of profit, maximization of customer satisfaction, minimization of deviation from target performance, etc. Generally speaking, depending on raw material lead times, production lead times, forecast accuracy and other similar factors, production scheduling is driven either by firm customer orders ("make-to-order") or forecasted demands ("make-to-stock").

As noted by Gabow (1983), all but the most trivial scheduling problems belong to the class of NP-hard (nondeterministic polynomial-time hard) problems; there are no known solution algorithms that are of polynomial complexity in the problem size. This has posed a great challenge to the research community, and a large body of work aiming to develop either tailored algorithms for specific problem instances or efficient general-purpose methods has arisen. Solving the scheduling problem requires methods that search through the decision space of possible solutions. The search processes can be classified as follows:
• Heuristic: a series of rules (e.g., that the sequence of production should be based on order due dates) is used to generate alternative schedules.
• Metaheuristic: higher-level generic search algorithms (e.g., simulated annealing, genetic algorithms) are used to explore the decision space.
• Mathematical programming: the scheduling problem is posed as a formal mathematical optimization problem and solved using general-purpose or tailored methods (see Section 4.2).
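A rule of the first kind, sequencing by earliest due date, can be illustrated in a few lines. The sketch below uses purely hypothetical order data and a single processing unit; it is meant only to show the mechanics of a dispatching rule, not any specific published method.

```python
# Hypothetical orders: name -> (processing time in h, due date in h from now)
orders = {"O1": (4, 10), "O2": (2, 6), "O3": (6, 14), "O4": (3, 8)}

# Earliest-due-date (EDD) dispatching rule: sequence orders by increasing due date.
sequence = sorted(orders, key=lambda o: orders[o][1])

t = 0.0
for o in sequence:
    proc, due = orders[o]
    t += proc                                   # single unit, no idle time
    status = "on time" if t <= due else f"late by {t - due:g} h"
    print(f"{o}: finishes at {t:g} h ({status})")
```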
The research into production scheduling techniques may be further subdivided into specific and general application domains. The latter division is intended to reflect the scope of the technique (in terms of plant structure and process recipes). Rippin (1993) classified different flexible plant structures as follows:
• Multiproduct plants, where each product has the same processing network, i.e., each product requires the same sequence of processing tasks (often known as "stages"). Owing to the historic association between the work on batch plant scheduling and that on discrete parts manufacturing, these plants are sometimes called "flowshops".
• Multipurpose plants ("jobshops"), where the products are manufactured via different processing networks, and there may be more than one way in which to manufacture the same product. In general, a number of products undergo manufacture at any given time.
In addition to the process structure, the storage policies for intermediate materials are critical in production scheduling, especially for batch plants. Any intermediate material can usually be classified as being subject to one of five intermediate storage policies:
• Zero-wait (ZW): the material is not stable and must be processed further upon production.
• No intermediate storage (NIS): the material is stable, but no storage vessels are provided. However, it may reside temporarily in the processing equipment that produced it before being processed further.
• Shared intermediate storage (SIS): the material is stable, and may be stored in one or more storage vessels that may also be used to store other materials (though not at the same time).
• Finite intermediate storage (FIS): the material is stable, and one or more dedicated storage vessels are available.
• Unlimited intermediate storage (UIS): the material is stable, and one or more dedicated storage vessels are available, the total capacity of which is effectively unlimited.
The importance of these is clearly evident from the following example. Consider a process whereby a material C is made from raw material A via the following reactions:
A → B (reaction 1, duration 3 h)
B → C (reaction 2, duration 1 h)
Reaction 1 takes place in reactor 1 (capacity 10,000 kg) and reaction 2 takes place in reactor 2 (capacity 5000 kg). The average production rate of C depends strongly on the storage policy for B. The rates for the ZW, NIS and UIS cases are calculated below.
2.2.1 ZW Case
Here, only 5000 kg of A can be loaded into reactor 1, because once this batch is complete it must be immediately transferred to reactor 2, which limits the size of the batch. A sample operating schedule is shown in Figure 2.2.
Figure 2.2 Sample schedule for the zero-wait (ZW) case
According to the schedule, 5000 kg of C is produced every 3 h, so the average production rate is 1667 kg h⁻¹.
2.2.2 NIS Case
Here, 10,000 kg of A can be loaded into reactor 1. After the reaction is complete, 5000 kg of B can be transferred to reactor 2, and 5000 kg is held in reactor 1 for an extra hour before being transferred to reactor 2. A sample operating schedule is shown in Figure 2.3.
Figure 2.3 Sample schedule for the no intermediate storage (NIS) case
In this case, 10,000 kg of C is produced every 4 h, so the average production rate is 2500 kg h⁻¹.
2.2.3 UIS Case
In this case, there is sufficient storage (e.g., 10,000 kg) to decouple the operation of reactor 1 and reactor 2 completely. The production rate is then limited by the bottleneck stage, in this case reactor 1. A sample operating schedule is shown in Figure 2.4.
Figure 2.4 Sample schedule for the unlimited intermediate storage (UIS) case
Here, 10,000 kg of C is produced every 3 h; the production rate is 3333 kg h⁻¹. The above discussion serves to define categories for scheduling techniques and categories for process structures. The next sections review developments in the solution of scheduling problems and are organized along the categories listed above.
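The arithmetic behind the three cases can be reproduced directly; the short sketch below simply recomputes the quoted average rates from the batch sizes and reaction durations given above.

```python
R1_CAP = 10000     # reactor 1 capacity, kg
R2_CAP = 5000      # reactor 2 capacity, kg
T1, T2 = 3.0, 1.0  # durations of reaction 1 and reaction 2, h

# ZW: the batch in reactor 1 is limited to what reactor 2 can accept at once,
# and a fresh 5000 kg batch of C leaves every T1 hours.
zw_rate = min(R1_CAP, R2_CAP) / T1            # ~1667 kg/h

# NIS: reactor 1 runs full; half the batch waits in reactor 1 for an extra
# hour, so a complete 10,000 kg cycle takes T1 + T2 hours.
nis_rate = R1_CAP / (T1 + T2)                 # 2500 kg/h

# UIS: storage decouples the two stages, so the bottleneck (reactor 1,
# 10,000 kg every T1 hours) sets the throughput.
uis_rate = R1_CAP / T1                        # ~3333 kg/h

for name, rate in [("ZW", zw_rate), ("NIS", nis_rate), ("UIS", uis_rate)]:
    print(f"{name}: {rate:.0f} kg/h")
```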
2.3 Heuristics/Metaheuristics: Specific Processes
Most scheduling heuristics are concerned with formulating rules for determining sequences of activities. They are therefore best suited to processes where the production of a product involves a prespecified sequence of tasks with fixed batch sizes; in other words, variants of multiproduct processes. Often, it is assumed that fixing the front-end product sequence will fix the sequence of activities in the plant (the so-called permutation schedule assumption; see Figure 2.5). Generally, the processing of a product is broken down into a sequence of jobs that queue for machines, and the rules dictate the priority order of the jobs. Dannebring (1977), Kuriyan and Reklaitis (1985, 1989) and Pinedo (1995) give a good exposition of the kinds of heuristics (dispatching rules) that may be used for different plant structures. Typical rules involve ordering products (see, e.g., Hasebe et al. 1991) by processing time (either shortest or longest), due dates and so on. Most of the heuristic methods originated in the discrete manufacturing industries, and might be expected not to perform as well in process industry problems, because in the latter material is infinitely divisible and batch sizes are variable (unlike discrete "jobs"). Furthermore, batch splitting and mixing are allowed and are becoming increasingly popular as a means of effecting late product differentiation.

Stochastic search approaches ("metaheuristics") are based on continual improvement of trial solutions by the application of an evolutionary algorithm which modifies solutions and prioritizes solutions from a list for further consideration. The two main evolutionary algorithms applied to this area are simulated annealing and genetic algorithms. An early application of simulated annealing to batch process scheduling problems was undertaken by Ku and Karimi (1991), who applied the algorithm to multiproduct plant scheduling. They concluded that such algorithms are easy to implement and tended to perform better than conventional heuristics, but often required significant computational effort. Xia and Macchietto (1997) described the application of simulated annealing and genetic algorithm techniques to the scheduling of multiproduct plants with complex material transfer policies. More recently, Murakami et al. (1997) described a repetitive simulated annealing procedure which avoids local minima by using many starting points with fewer evolutionary iterations per starting point.

Figure 2.5 Permutation schedule
Sunol et al. (1992) described the application of a genetic algorithm approach to a simple flowshop sequencing problem, and found the technique to be superior to explicit enumeration. As noted by Hasebe et al. (1996), the performance of a genetic algorithm depends on the operators used to modify trial solutions. They applied a technique that selects appropriate operators during the solution procedure for the scheduling of a parallel-unit process.

Overall, stochastic search processes are best applied to problems of an entirely discrete nature, where an objective function can be evaluated quickly. The classic example is the sequencing and timing of batches in a multiproduct plant, where the decision variables are the sequence of product batches, and the completion time of any candidate solution is easily evaluated through recurrence relations or minimax algebra. The main disadvantages are that it is difficult to handle general processes, inequality constraints and continuous decisions, although some recent work (e.g., Wang et al. 2000) aims at addressing this.
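The completion-time recurrence mentioned above, and its use inside a metaheuristic, can be made concrete with a small sketch. The data and annealing schedule below are purely illustrative; the makespan function is the standard permutation-schedule recurrence for a multiproduct (flowshop) plant, and the search is a deliberately simple simulated annealing over batch sequences rather than any of the cited algorithms.

```python
import math
import random

def makespan(seq, proc_time):
    """Permutation-schedule recurrence: C[i][j] = max(C[i-1][j], C[i][j-1]) + p[i][j]."""
    n_stages = len(proc_time[seq[0]])
    C = [[0.0] * n_stages for _ in seq]
    for i, batch in enumerate(seq):
        for j in range(n_stages):
            prev_batch = C[i - 1][j] if i > 0 else 0.0
            prev_stage = C[i][j - 1] if j > 0 else 0.0
            C[i][j] = max(prev_batch, prev_stage) + proc_time[batch][j]
    return C[-1][-1]

def anneal(proc_time, n_iter=2000, T0=10.0):
    """Toy simulated annealing over batch sequences using pairwise swaps."""
    seq = list(proc_time)
    random.shuffle(seq)
    best, best_val = seq[:], makespan(seq, proc_time)
    cur_val = best_val
    for k in range(n_iter):
        T = T0 * (1 - k / n_iter) + 1e-6
        i, j = random.sample(range(len(seq)), 2)
        cand = seq[:]
        cand[i], cand[j] = cand[j], cand[i]
        val = makespan(cand, proc_time)
        if val < cur_val or random.random() < math.exp((cur_val - val) / T):
            seq, cur_val = cand, val
            if val < best_val:
                best, best_val = cand[:], val
    return best, best_val

# Hypothetical processing times (h) for four batches over three stages
proc_time = {"A": [3, 1, 2], "B": [2, 4, 1], "C": [1, 2, 3], "D": [4, 1, 2]}
print(anneal(proc_time))
```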
2.4 Heuristics/Metaheuristics: General Processes
The problem of scheduling in general multipurpose plants is complicated by the additional decisions (beyond the sequencing of product batches) of assignment of equipment items to processing tasks, task batch sizes and intermediate storage utilization. It is difficult to devise a series of rules to resolve these, and there are therefore few heuristic approaches reported for the solution of this problem.

Kudva et al. (1994) consider the special case of "linear" multipurpose plants, where products flow through the plant in a similar fashion, but potentially using different stages and with no recycling of material. A rule-based constructive heuristic is used, which requires the maintenance of a status sheet on each unit and material type for each time instance on a discrete-time grid. The algorithm uses this status sheet with a sorted list of orders and develops a schedule for each order by backwards recursive propagation. The schedule derived depends strongly on the order sequence. Solutions were found to be within acceptable bounds of optimality when compared with those derived through formal optimization procedures.

Graells et al. (1996) presented a heuristic strategy for the scheduling of multipurpose batch plants with mixed intermediate storage policies. A decomposition procedure is employed where subschedules are generated for the production of intermediate materials. Each subschedule consists of a mini production path determined through a branch-and-cut enumeration of possible unit-to-task allocations. The minipaths are then combined to form the overall schedule. The overall schedule is checked for feasibility with respect to material balances and storage capacities. Improvements to the schedules may be effected manually through an electronic Gantt chart or through a simulated annealing procedure.

Lee and Malone (2000) describe the application of a simulated annealing metaheuristic to a variety of batch process planning problems. Here, intermediate products, inventory costs and a variety of process flow networks can be represented.
As mentioned earlier, the application of heuristics to such problems is not straightforward. Although this effectively represents current industrial practice, most academic research has been directed towards the development of mathematical programming approaches for multipurpose plant scheduling. As will be described later, these approaches are capable of representing all the complex interactions present.
2.5 Mathematical Programming: Specific Processes
Here, we shall first outline some of the features of mathematical programming approaches in general, and then consider their application to processes other than the general multipurpose one. The latter will be considered in Section 2.6.

Mathematical programming approaches to production scheduling in the process industries have received a large amount of attention recently. This is because they bring the promise of generality (i.e., the ability to deal with a wide variety of problems), rigor (the avoidance of approximations) and the possibility of achieving optimal or near-optimal solutions. The application of mathematical programming approaches implies the development of a mathematical model and an optimization algorithm. Most approaches aim to develop models that are of a standard form (from linear programming (LP) models for refinery planning to mixed-integer nonlinear programming (MINLP) models for multipurpose batch plant scheduling). These may then be solved by standard software or by specialized algorithms that take account of problem structure. The variables of the mathematical models will tend to include some or all of the following choices, depending on the complexity considered:
• the sequence of products or individual tasks;
• the timing of individual tasks in the process;
• the selection of resources to execute tasks at the appropriate times;
• the amounts processed in each task;
• the inventory levels of all materials over time.
The discrete nature of some of the variables (sequencing and resource selection) implies that binary or integer-valued variables will be required. The selection of values for all the variables will be subject to some or all of the following constraints:
• nonpreemptive processing: once started, processing activities must proceed until completion;
• resource constraints: at any time, the utilization of a resource must not exceed its availability;
• material balances;
• capacity constraints: processing and storage;
• orders being met in full by their due dates.
Finally, optimization methods dictate that an objective function be defined. This is usually of an economic form, involving terms such as production, transition and inventory costs and possibly revenues from product sales. A very good review of mathematical programming techniques applied to scheduling, and an associated classification of methods and models, is provided by Pinto and Grossmann (1998).

A critical feature of mathematical programming approaches is the representation of the time horizon. This is important because activities interact through the use of resources; therefore, the discontinuities in the overall resource utilization profiles must be tracked over time, to be compared with resource availabilities to ensure feasibility. The complexity arises because these discontinuities (unlike discontinuities in availabilities) are functions of any schedule proposed and are not known in advance. The two approaches for dealing with this are:
• Discrete time (or "uniform discretization"): the horizon is divided into a number of equally spaced intervals so that any event that introduces such discontinuities (e.g., the starting of a task or a due date for an order) can only take place at an interval boundary. This implies a relatively fine division of the time grid, so as to capture all the possible event times, and in the solution to the problem it is likely that many grid points will not actually exhibit resource utilization discontinuities.
• Continuous time (or "nonuniform discretization"): here, the horizon is divided into fewer intervals, the spacing of which is determined as part of the solution to the problem. The number of intervals will correspond more closely to the number of resource utilization discontinuities in the solution.
In addition to the above, another attribute of time representation is whether the same grid is used for all major equipment items in the plant (the "common grid" approach) or whether each major equipment item operates on its own grid (the "individual resource grid", only used with continuous-time models). Generally speaking, the former approach is more suitable for processes in which activities on the major equipment items also interact with common resources (materials, services, etc.), and the latter where activities on the major equipment items are quite independent in their interactions with common resources. These distinctions will become clearer when individual pieces of research are discussed. The distinctions between these representations are shown in Figures 2.6 and 2.7.

Figure 2.6 Discrete-time representation
Figure 2.7 Continuous-time representations: individual resource grid and common grid

The simplest specific scheduling process is probably a single production line which produces one product at a time in a continuous fashion. Work in this area has been directed towards deriving cyclic schedules (where the production pattern is repeated at a fixed frequency) that balance inventory and transition costs by determining the best sequence of products and their associated run-lengths or lot-sizes. A review of this so-called economic lot scheduling problem is given by Elmaghraby (1978).

Sahinidis and Grossmann (1991) consider the more general problem of the cyclic scheduling of a number of parallel multiproduct lines, where each product may in principle be produced on more than one line and production rates and costs vary between lines. They utilize a continuous-time individual resource grid model, which turns out to be a MINLP. This includes an objective function that includes combined production, product transition and inventory costs for a constant demand rate for all products. Their work was extended by Pinto and Grossmann (1994), who considered the case of multiple production lines, each consisting of a series of stages decoupled by intermediate storage and operating in a cyclic mode. Each product is processed through all stages, and each product is processed only once at each stage. The model again uses a continuous-time representation, and it is possible to use the independent grid approach despite the fact that stages interact through material balances; this is due to the special structure of the problem.

A number of mathematical programming approaches have been developed for the scheduling of multiproduct batch plants. All are based (either explicitly or implicitly) on a continuous representation of time. Pekny et al. (1988) considered the special case of a multiproduct plant with no storage (zero-wait, ZW) between operations. They show that the scheduling problem has the same structure as the asymmetric traveling salesman problem, and apply an exact parallel computation technique employing a tailored branch-and-bound procedure which uses an assignment problem to provide problem relaxations. The work was extended to cover the case of product transition costs, where the problem structure is equivalent to the prize-collecting traveling salesman problem (Pekny et al. 1990), and LP relaxations are used. For both cases, problems of very large magnitude were solved to optimality with modest computational effort. Gooding et al. (1994) augmented this work to cover the case of multiple units at each stage (the so-called "parallel flowshop"). A more complete overview of the development of algorithms for classes of problems ("algorithm engineering") is given by Applequist et al. (1997) and a commercial development in this area is described by Bunch (1997).

Birewar and Grossmann (1989) developed a mixed-integer programming model for a similar type of plant. They show that through careful modeling of slack times,
and by exploiting the fact that relatively large numbers of batches of relatively few products will be produced (which allows end-effects to be ignored), a straightforward LP model can be used to minimize the makespan. The result is a family of schedules from which an individual schedule may be extracted. They extend the work to cover simultaneous long-term planning and scheduling, where the planning function takes account of scheduling limitations (Birewar and Grossmann 1990).

Pinto and Grossmann (1995) describe a mixed-integer linear programming (MILP) model for the minimization of earliness of orders for a multiproduct plant with multiple equipment items at each stage. The only resources required for production are the processing units. Pinto and Grossmann (1997) then augmented the model to take account of interactions between processing stages and common resources (e.g., steam). Rather than utilize a common grid, they retained individual grids, and accounted for the resource discontinuities through complex mixed-integer constraints which weakened the model and resulted in large computational times. They therefore proposed a hybrid logic-based/MILP algorithm where the disjunctions relate to the relative timing of orders. This dramatically reduces the computational effort expended.

Moon et al. (1996) also developed a MILP model for ZW multiproduct plants. The objective was to assign tasks to sequence positions so as to minimize the makespan, with nonzero transfer and set-up times being included. The extension of the work to more general intermediate storage policies was described by Kim et al. (1996), who proposed several MINLP formulations based on completion time relations.

The case of single-stage processes with multiple units per stage has been considered by Cerda et al. (1997) and McDonald and Karimi (McDonald and Karimi, 1997; Karimi and McDonald, 1997). Both describe continuous-time-based MILP models. Cerda et al. focus on changeovers and order fulfilment, while Karimi and McDonald focus on semicontinuous processes and total cost (transition, shortage and inventory) with the complication of minimum run lengths. A characteristic of both approaches is that discrete demands must be captured on the continuous-time grid. Mendez and Cerda (2000) developed a MILP model for a process with a single production stage with parallel units followed by a storage stage with multiple units, with restricted connectivity between the stages. This was extended to the multistage case with general production resources by Mendez et al. (2001). In common with other models, there are no explicit time slots in the model; the key variables are allocations of activities to units and the relative orderings of activities.

The work described above all relates to special process structures, which means that mathematical models can be designed specifically for the problem class. This ensures that, despite the typical concerns about computational complexity of discrete optimization problems, solutions are available with reasonable effort. The drawback of the work is its limited applicability. Nevertheless, several models appear to have been developed with specific industrial applications in mind (e.g., Sahinidis and Grossmann (1991a), Pinto and Grossmann (1995) and Karimi and McDonald (1997)).
2.6 Mathematical Programming: Multipurpose Plants
A large portion of the most recent research in planning and scheduling undertaken by the process systems community relates to the development of mathematical programming approaches applied to multipurpose plants. As intimated earlier, in this case the application domain tends to imply the solution approach: mathematical models are the best way of representing the complex interactions between resource allocations, task timings, material flows and equipment capacities. Much of the recent work reported in the literature deals with this class of problem. The work in this area can be characterized by three different assumptions about plant operation:
• The unique assignment case: each task can only be performed by a unique piece of equipment, there are no optional tasks in the process recipe, and batch sizes are usually fixed.
• The campaign mode of operation: the horizon is divided into relatively long campaigns, and each campaign is dedicated to one or a few products.
• Short-term operation: products are produced as required and no particular scheduling pattern may be assumed.
The first assumption is particularly restrictive. The second relates to a mode of operation that is becoming relatively scarce, as it implies a low level of responsiveness. One sector in which campaign operation is still prevalent is the manufacture of active ingredients for pharmaceuticals and agrochemicals. The short-term mode of operation is tending to become the most prevalent elsewhere, as it best exploits operational flexibility to meet changing external circumstances.

Mauderli and Rippin (1979) developed a procedure for campaign planning which attempts to optimize the allocation of equipment to tasks. An enumerative procedure (based on different equipment-to-task allocations) is used to generate possible single-product campaigns which are then screened by LP techniques to select the dominant ones. A production plan is then developed by the solution of a MILP that sequences the dominant campaigns and fixes their lengths. The disadvantages of this work are the inefficiency of the generation procedure and the lower level of resource utilization implied by single-product campaigns. Wellons and Reklaitis (1991a,b) addressed this through a formal MINLP method to generate campaigns and production plans in a two-stage procedure, as did Shah and Pantelides (1991), who solved a simultaneous campaign generation and production planning problem.

An early application of mathematical programming techniques for short-term multipurpose plant scheduling was the MILP approach of Kondili et al. (1988). They used a discrete representation of time, and introduced the state-task network (STN) representation of the process (see Figure 2.8). The STN representation has three main advantages:
• It distinguishes the process operations from the resources that may be used to execute them, and therefore provides a conceptual platform from which to relax the unique assignment assumption and optimize unit-to-task allocation.
• It avoids the use of task precedence relations, which become very complicated in multipurpose plants: a task can be scheduled to start if its input materials are available in the correct amounts and other resources (processing equipment and utilities) are also available, regardless of the plant history.
• It provides a means of describing very general process recipes, involving batch splitting and mixing and material recycles, and storage policies including ZW, NIS, SIS and so on.

Figure 2.8 Example of a state-task network. Circles: material states; rectangles: tasks
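One way of making the STN concept concrete is to hold the recipe as plain data, keeping material states, tasks and candidate equipment separate. The sketch below encodes a hypothetical two-task recipe loosely modeled on the example of Section 2.2 (all names and numbers are illustrative only) and checks the STN start condition noted above: a task may begin whenever its inputs are in stock and a suitable unit is free, regardless of plant history.

```python
# Material states with storage policy and capacity (kg); None = unlimited.
states = {"FeedA":    {"policy": "UIS", "capacity": None},
          "IntB":     {"policy": "FIS", "capacity": 5000},
          "ProductC": {"policy": "UIS", "capacity": None}}

# Tasks: input/output fractions per state, duration (h) and suitable units.
tasks = {"React1": {"inputs": {"FeedA": 1.0}, "outputs": {"IntB": 1.0},
                    "duration": 3.0, "units": ["Reactor1"]},
         "React2": {"inputs": {"IntB": 1.0}, "outputs": {"ProductC": 1.0},
                    "duration": 1.0, "units": ["Reactor2"]}}

# Processing units with batch-size limits (kg).
units = {"Reactor1": {"vmin": 0.0, "vmax": 10000.0},
         "Reactor2": {"vmin": 0.0, "vmax": 5000.0}}

def can_start(task, stock, free_units, batch):
    """A task may start if its inputs are available and a suitable unit is free."""
    data = tasks[task]
    enough = all(stock.get(s, 0.0) >= frac * batch
                 for s, frac in data["inputs"].items())
    unit_ok = any(u in free_units and units[u]["vmin"] <= batch <= units[u]["vmax"]
                  for u in data["units"])
    return enough and unit_ok

print(can_start("React1", {"FeedA": 8000.0}, {"Reactor1"}, 5000.0))   # True
```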
The formulation of Kondili et al. (1988) (described in more detail in Kondili et al. (1993)) is based on the definition of binary variables that indicate whether tasks start in specific pieces of equipment at the start of each time period, together with associated continuous batch sizes. Other key variables are the amount of material in each state held in dedicated storage over each time interval, and the amount of each utility required for processing tasks over each time interval. Their key constraints related to equipment and utility usage, material balances and capacity constraints. The common, discrete-time grid captures all the plant resource utilizations in a straightforward manner; discontinuities in these are forced to occur at the predefined interval boundaries. Their approach was hindered in its ability to handle large problems by the weakness of the allocation constraints and the general limitations of discrete-time approaches, such as the need for relatively large numbers of grid points to represent activities with significantly different durations.

Their work formed the basis of several other pieces of research aiming to take advantage of the representational capabilities of the formulation while improving its numerical performance. Sahinidis and Grossmann (1991b) disaggregated the allocation constraints and also exploited the embedded lot-sizing nature of the model where relatively small demands are distributed throughout the horizon. They disaggregate the model in a fashion similar to that of Krarup and Bilde (1977), who were able to improve the solution efficiency despite the larger nature of the disaggregated model. This was due to a feature particular to mixed-integer problems: other things being equal, the computational effort for problem solution through standard procedures is dictated mainly by the difference between the optimal objective function and the value of the objective function obtained by solving the continuous relaxation, where bound constraints rather than integrality restrictions are imposed on the integer variables (the so-called "integrality gap"). The formulation of Sahinidis and Grossmann (1991b) was demonstrated to have a much smaller integrality gap than the original.

Shah et al. (1993a) modified the allocation constraints even further to generate the smallest possible integrality gap for this type of formulation. They also devised a tailored branch-and-bound solution procedure which utilizes a much smaller LP relaxation and solution processing to improve integrality at each node. The same authors (Shah et al. 1993b) considered the extension to cyclic scheduling, where the same schedule is repeated at a frequency to be determined as part of the optimization. This was augmented by Papageorgiou and Pantelides (1996a,b) to cover the case of multiple campaigns, each with a cyclic schedule to be determined.

Elkamel (1993) also proposed a number of measures to improve the performance of the STN-based discrete-time scheduling model. A heuristic decomposition method was proposed, which solves separate scheduling problems for parts of the overall scheduling problem. The decomposition may be based on the resources ("longitudinal decomposition") or on time ("axial decomposition"). In the former, the recipes and suitable equipment for each task are examined for the possible formation of unique task-unit subgroups which can be scheduled separately. Axial decomposition is based on grouping products by due dates and decomposing the horizon into a series of smaller time periods, each concerned with the satisfaction of demands falling due within it. He also described a perturbation heuristic, which is a form of local search around the relaxation. Elkamel and Al-Enezi (1998) describe valid inequalities that tighten the MILP relaxations of this class of model.

Yee and Shah (1997, 1998) and Yee (1998) also considered various manipulations to improve the performance of general discrete-time scheduling models. A major feature of their work is variable elimination. They recognize that in such models, only about 5-15% of the variables reflecting task-to-unit allocations are active at the integer solution, and it would be beneficial to identify as far as possible inactive variables prior to solution. They describe an LP-based heuristic, a flexibility and sequence reduction technique and a formal branch-and-price method. They also recognize that some problem instances result in poor relaxations and propose valid inequalities and a disaggregation procedure similar to that of Sahinidis and Grossmann (1991b) for particular data instances (Romero and Puigjaner (2004)). Bassett et al. (1996) and Dimitriadis et al. (1997a,b) describe decomposition-based approaches which solve the problems in stages, eventually generating a complete solution. Blömer and Günther (1998) also introduced a series of LP-based heuristics that can reduce solution times considerably, without compromising the quality of the solution obtained. Grunow et al. (2002) show how the STN tasks can be aggregated into higher-level processes for the purposes of longer-term campaign planning.

Gooding (1994) considers a special case of the problem with firm demands and dedicated storage only. The scheduling model is described in a digraph form where nodes correspond to possible task-unit-time allocations and arcs to the possible sequences of the activities. The explicit description of the sequence in this form addresses one of the weaknesses of the discrete-time formulation of Kondili et al. (1988, 1993), which was that it did not model sequence-dependent changeovers very well. Gooding's (1994) model therefore performed relatively well in problems with a strong sequencing component, but suffers from model complexity in that all possible sequences must be accounted for directly.
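The flavour of a discrete-time formulation of this general type can be conveyed by a deliberately small model. The sketch below is not the Kondili et al. formulation itself; it is a stripped-down illustration with hypothetical data, in which each unit executes a single task type, and it assumes the open-source PuLP package (with its bundled CBC solver) is available.

```python
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum, value

H = 8                                         # 1 h discrete periods
T = range(H)
tasks = {"React1": dict(dur=3, vmax=10000, inp="A", out="B"),
         "React2": dict(dur=1, vmax=5000,  inp="B", out="C")}
states = ["A", "B", "C"]
initial = {"A": 20000, "B": 0, "C": 0}

m = LpProblem("discrete_time_sketch", LpMaximize)
W = {(i, t): LpVariable(f"W_{i}_{t}", cat=LpBinary) for i in tasks for t in T}
B = {(i, t): LpVariable(f"B_{i}_{t}", lowBound=0) for i in tasks for t in T}
S = {(s, t): LpVariable(f"S_{s}_{t}", lowBound=0) for s in states for t in range(H + 1)}

for i, d in tasks.items():
    for t in T:
        m += B[i, t] <= d["vmax"] * W[i, t]         # batch only if the task starts
        if t + d["dur"] > H:
            m += W[i, t] == 0                       # must finish within the horizon
        # allocation: the unit is busy while any earlier start is still processing
        m += lpSum(W[i, tp] for tp in T if t - d["dur"] < tp <= t) <= 1

def produced(s, t):      # material arriving at time t from tasks finishing then
    return lpSum(B[i, t - d["dur"]] for i, d in tasks.items()
                 if d["out"] == s and t - d["dur"] >= 0)

def consumed(s, t):      # material drawn at time t by tasks starting then
    return lpSum(B[i, t] for i, d in tasks.items() if d["inp"] == s and t < H)

for s in states:
    m += S[s, 0] == initial[s] - consumed(s, 0)     # material balances
    for t in range(1, H + 1):
        m += S[s, t] == S[s, t - 1] + produced(s, t) - consumed(s, t)

m += S["C", H]                                      # maximize final product stock
m.solve()
print("C at end of horizon:", value(S["C", H]))     # should come to 20,000 kg here
```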
Pantelides et al. (1995) reported an STN-based approach to the scheduling of pipeless plants, where material is conveyed between processing stations in movable vessels. This requires the simultaneous scheduling of the movement and processing operations.

Pantelides (1994) presented a critique of the STN and associated scheduling formulations. He argued that despite its advantages, it suffers from a number of drawbacks:
• The model of plant operation is somewhat restricted: each operation is assumed to use exactly one major item of equipment throughout its operation.
• Tasks are always assumed to be processing activities which change material states: changeovers or transportation activities have to be treated as special cases.
• Each item of equipment is treated as a distinct entity: this introduces solution degeneracy if multiple equivalent items exist.
• Different resources (materials, units, utilities) are treated differently, giving rise to many different types of constraints, each of which must be formulated carefully to avoid unnecessarily increasing the integrality gap.
He then proposed an alternative representation, the resource-task network (RTN), based on a uniform description of all resources (Figure 2.9). In contrast to the STN approach, where a task consumes and produces materials while using equipment and utilities during its execution, in this representation a task is assumed only to consume and produce resources. Processing items are treated as though consumed at the start of a task and produced at the end. Furthermore, processing equipment in different conditions (e.g., "clean" or "dirty") can be treated as different resources, with different activities (e.g., "processing" or "cleaning") consuming and generating them; this enables a simple representation of changeover activities. Pantelides (1994) also proposed a discrete-time scheduling formulation based on the RTN that, due to the uniform treatment of resources, only requires the description of three types of constraint, and does not distinguish between identical equipment items (which results in more compact and less degenerate optimization models). He illustrated that the integrality gap could not be worse than that of the most efficient form of STN formulation, but that the ability to capture additional problem features in a straightforward fashion made it an ideal framework for future research.

Figure 2.9 The resource-task network representation. Resources: feeds, products and intermediates; utilities (steam, operators, etc.); equipment (clean/dirty); material at different locations. Tasks: production, cleaning/changeover, transportation

The review above has mainly considered the development of discrete-time models. As argued by Schilling (1997), discrete-time models have been able to solve a large number of industrially relevant problems (see, e.g., Tahmassebi 1996), but suffer from a number of inherent drawbacks:
• The discretization interval must be fine enough to capture all significant events; this may result in a very large model.
• It is difficult to model operations where the processing time is dependent on the batch size.
• The modeling of continuous and semicontinuous operations must be approximated, and minimum run-lengths give rise to complicated constraints.
A number of researchers have therefore attempted to develop scheduling models for multipurpose plants which are based on a continuous representation of time, where fewer grid points are required as they will be placed at the appropriate resource utilization discontinuities during problem solution.

Zentner and Reklaitis (1992) described a formulation based on the unique assignment case and fixed batch sizes. The sequence of activities as well as any external effects can be used to infer the discontinuities and therefore the interval boundaries. A MILP optimization is then used to determine the exact task starting times. Reklaitis and Mockus (1995) detailed a continuous-time formulation based on the STN representation, exploiting its generality. A common resource grid is used, with the timing of the grid points ("event orders" in their terminology) determined by the optimization. The model is a MINLP, which may be simplified to a mixed-integer bilinear problem by linearizing terms involving binary variables. This is solved using an outer-approximation algorithm. Only very preliminary findings were reported, but the promise of such models is evident. Mockus and Reklaitis (1996) then reported an alternative solution procedure. They introduce the concept of Bayesian heuristics, which are heuristics that can be described through parameterized functions. The Bayesian technique iteratively modifies the parameters to develop a heuristic that is expected to perform well across a class of problem parameters. They illustrate the procedure using a material requirements planning (MRP) backward-scheduling heuristic which outperforms a standard discrete-time MILP formulation solved using branch-and-bound. They extended this work (Mockus and Reklaitis 1999a,b) to the case where a variety of heuristics are used in combination with optimization.

Zhang and Sargent (1994, 1996) presented a continuous-time formulation based on the RTN representation for both batch and continuous operations, with the possibility of batch-size-dependent processing times for batch operations. Again, the interval durations are determined as part of the optimization. A MINLP model ensues; this is solved using a local linearization procedure combined with what is effectively a column generation algorithm.
A problem with continuous-time models of the form described above arises from the inclusion of products of binary variables and interval durations or absolute starting times in the constraints. The linearization of these products gives rise to terms involving products of binary variables and maximum predicted interval durations or starting times. The looser these upper bounds, the worse the integrality gap of the formulation and, in general, the more difficult it becomes to solve the scheduling problem. Furthermore, it is difficult to predict good duration bounds a priori. The poor relaxation performance of the continuous-time models is the main obstacle to their more widespread application.

Schilling and Pantelides (1996) and Schilling (1997) attempted to address this deficiency. They developed a continuous-time scheduling model based on the RTN. They proposed a number of modifications to the formulation of Zhang and Sargent (1996), which simplify the model and improve its general solution characteristics. A global linearization gives rise to a MILP. They then developed a hybrid branch-and-bound solution procedure which branches in the space of the interval durations as well as in the space of the integer variables. For a given problem instance, this can be viewed as generating a number of problem instances, each with tighter interval duration bounds. The independence of these new instances was recognized by Schilling (1997), who implemented a parallel solution procedure based on a distributed computing environment. The combination of the hybrid and parallel aspects of the solution procedure resulted in a much improved computational performance on a wide class of problems. Their model and solution procedure were then extended to the cyclic scheduling case (Schilling and Pantelides 1999). Castro et al. (2001) made some adjustments to the model to account for stable materials that can be held temporarily in processing units; these improve the computational performance considerably.

Ierapetritou and Floudas (1998a-c) and Ierapetritou et al. (1999) introduced a new continuous-time model where the task and unit events are not directly coordinated against the same grid, but have their own grids (i.e., individual resource grids). Sequencing and timing constraints are then introduced to ensure feasibility. This has the effect of reducing the model size and the associated computational effort required to find a solution. Their model is able to deal with semicontinuous processes and products with due dates falling arbitrarily within the horizon. Lin and Floudas (2001) extended this work to cover simultaneous design and scheduling. Wu and Ierapetritou (2003) applied time-, recipe- and resource-based decomposition procedures to this class of model. They indicated that near-optimal solutions may be obtained with modest effort. The work was generalized further by Janak et al. (2004), who included mixed storage policies, batch-size-dependent processing times, general resource constraints and sequence-dependent changeover constraints. Lee et al. (2001) extended this body of work with a formulation that uses binary variables to represent the start, process and end components of a process task. The computational performance of their model is similar to that of Castro et al. (2001).

Orcun et al. (2001) used the concept of operations through which batches flow to develop a continuous-time model. Each batch has a prespecified alternative set of recipes for its manufacture; each recipe defines the flow through the operations. One of the major decisions is then the choice of recipe for each batch. The complicating constraints are those that ensure the timing and sequence of batches and operations are feasible. Majozi and Zhu (2001) modified the STN concept by removing tasks and units, thereby generating a state-sequence network (SSN), essentially a state-transition network. They developed a continuous-time model based on this, which relies on specialized sequencing constraints to ensure feasibility. Resources other than processing units cannot be treated.

Giannelos and Georgiadis (2002) described a very straightforward continuous-time model for multipurpose plant scheduling based on the STN process representation. In their work, they introduced "buffer times", which means that although all tasks must start at event points, they do not need to finish on event points. This reduced synchronization improves the computational performance considerably when compared to similar mathematical models. Giannelos and Georgiadis (2003) applied their continuous-time model to the scheduling of consumer goods factories. The latter are characterized by sequence-dependent changeovers and flexible intermediate storage. By preprocessing the data, good upper bounds on the number of changeover tasks can be estimated and used to tighten the MILP model. Good solutions were found for the case of a medium-size industrial study.

Maravelias and Grossmann (2003) used a mixed time representation, where tasks which produce ZW states must start and finish at slot boundaries, while others are only anchored at the start, as per Giannelos and Georgiadis (2002). A different set of binary variables and assignment constraints is used from other works in this area. General resource constraints, sequence-dependent cleaning, and variable processing times are also included. Good computational performance is observed. Castro et al. (2003) investigated both discrete- and continuous-time RTN models for periodic scheduling applied to a pulp and paper case study. A coarse and then a fine grid is used with the discrete-time model to optimize the cycle time, and an iterative search on the number of event points is used in the continuous-time model. The discrete-time model allowed more flexibility in the statement of the objective function and is easier to solve, while the continuous-time model in principle allows more accurate modeling of the operations. Castro et al. (2004) presented a simple model for both batch and continuous processes which has an improved LP relaxation compared to that of Castro et al. (2001), due to a different set of timing constraints; this model generally outperforms other continuous-time models proposed in the literature.

Overall, considerable progress has been made towards the development of general-purpose mathematical programming-based methods for process scheduling. At least two commercial packages, ModelEnterprise (see http://www.psenterprise.com/products-me.html) and Verdict (see http://www.combination.com), have resulted from these academic endeavours.
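Returning to the linearization issue raised at the start of this section, the standard treatment of a product y*d (y a binary start variable, d a duration bounded above by D) replaces it with a new variable z subject to z <= d, z <= D*y, z >= d - D*(1 - y) and z >= 0. The small sketch below, with purely illustrative numbers, shows how a looser bound D widens the range that z may take at a fractional point of the LP relaxation, which is exactly the weakening of the formulation described above.

```python
# Exact product y*d versus its big-M linearization at a fractional point.
D_tight, D_loose = 8.0, 100.0      # alternative upper bounds on the duration d
y, d = 0.5, 6.0                    # a fractional LP-relaxation point

def z_interval(D):
    # z replaces y*d via:  z <= d,  z <= D*y,  z >= d - D*(1 - y),  z >= 0
    upper = min(d, D * y)
    lower = max(0.0, d - D * (1 - y))
    return lower, upper

print("true y*d       :", y * d)                 # 3.0
print("tight bound 8  :", z_interval(D_tight))   # (2.0, 4.0)
print("loose bound 100:", z_interval(D_loose))   # (0.0, 6.0)
```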
2.7 Hybrid Solution Approaches

So far, we have described individual solution approaches. A new class of solution methods has arisen out of the recognition that mathematical programming approaches are very effective when the scheduling problems are dominated by flow-type decisions, but often struggle when sequencing decisions dominate. Hybrid approaches decompose the problem into flow- and sequence-based components and then apply different algorithms to these components. An example of this is described by Neumann et al. (2002), who solve separate batching and scheduling problems. The batching problem is a mixed-integer optimization problem which determines the number of instances of each task on each unit. The scheduling problem then determines the sequence and timing of the batches. A tailored algorithm based on project scheduling concepts is used for the scheduling/sequencing problem (Schwindt and Trautmann 2000).

Most of the other hybrid methods are based on the same decomposition principle, but recognize that constrained logic programming (CLP) (also called constraint programming) is a powerful technique for the solution of sequencing problems. It is based on the concept of domain reduction and constraint propagation (van Hentenryck 1989). Jain and Grossmann (2001) use a single, hybrid model where different degrees of freedom are determined by the two solvers. Effectively, this requires an iteration between MILP and CLP solvers which proceeds until an optimal and feasible solution is found. A specific process (parallel flowshop), rather than a general process, is studied. Harjunkoski et al. (2000) extend this work by developing a solution procedure which starts with the relaxed MILP and then iterates through different CLP solutions, each with a different objective function target. This approach performed better on a trim-loss problem than on a traditional jobshop scheduling problem as tackled by Jain and Grossmann (2001). Huang and Chung (2000) explained how CLP can be used on its own (along with some dispatching rules) for the scheduling of a simple pipeless batch plant.

Maravelias and Grossmann (2004) and Roe et al. (2003, 2004) also recognized that mixed-integer optimization techniques are appropriate for the batching problem, and constrained logic programming (CLP) approaches may be appropriate for the scheduling problem. Roe et al. (2003, 2005) developed an algorithm appropriate to all types of processes. An STN-based description is used. A “batching” optimization is solved to determine the number of allocations of STN tasks to units and the average batch sizes, and then a tailored algorithm based on the ECLiPSe framework (Wallace et al. 1997) is used to derive the schedule, which aims to ensure completion of all tasks in the minimum possible time. A series of constraints are introduced in the batching problem (as per Maravelias and Grossmann 2004) to try to ensure schedule feasibility. Their approach has a single pass, while that of Maravelias and Grossmann (2004) is more sophisticated in that the algorithm iterates between MILP and CLP levels, and the two solution methods deal with different parts of a single model (essentially that of
Maravelias and Grossmann 2003), which allows for variable batch sizes. The CLP level, rather than just adding simple integer cuts, adds a more general type of cut that excludes similar permutations of the solution.

Romero et al. (2004) used a graph theoretical framework to tackle scheduling problems which involve complex shared intermediate storage systems. Their representation is based on two types of graph: the recipe graph, which depicts possible material flow routes, and the schedule graph, which shows a unique solution to the scheduling problem. A branch-and-bound algorithm is used to search for optimal solutions.
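The decomposition logic common to these hybrid methods can be illustrated schematically. The fragment below is not the algorithm of any paper cited above: a small batching master problem, written with the open-source PuLP modelling library, chooses how many batches of each task to run on each unit; a deliberately simple load check stands in for the CLP sequencing subproblem; and a feasibility cut tightens the master before it is re-solved. All task data, the feasibility rule and the cut are invented for illustration only.

import pulp

tasks = ["A", "B"]
units = ["U1", "U2"]
demand = {"A": 120.0, "B": 80.0}                   # required amounts (invented)
batch_size = {("A", "U1"): 50, ("A", "U2"): 40,
              ("B", "U1"): 30, ("B", "U2"): 60}
duration = {("A", "U1"): 4, ("A", "U2"): 5,
            ("B", "U1"): 3, ("B", "U2"): 2}
horizon = 10.0                                     # time available per unit

def solve_batching(cut_units):
    # MILP master: choose the number of batches n[i][j] of task i on unit j.
    prob = pulp.LpProblem("batching", pulp.LpMinimize)
    n = pulp.LpVariable.dicts("n", (tasks, units), lowBound=0, cat="Integer")
    # Crude objective: total processing time as a surrogate for makespan.
    prob += pulp.lpSum(duration[i, j] * n[i][j] for i in tasks for j in units)
    for i in tasks:                                # demand coverage
        prob += pulp.lpSum(batch_size[i, j] * n[i][j] for j in units) >= demand[i]
    for j in cut_units:                            # feasibility cuts from the sequencing level
        prob += pulp.lpSum(duration[i, j] * n[i][j] for i in tasks) <= horizon
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {(i, j): int(n[i][j].value()) for i in tasks for j in units}

def overloaded_unit(plan):
    # Stand-in for the CLP subproblem: does each unit's total load fit the horizon?
    for j in units:
        if sum(duration[i, j] * plan[i, j] for i in tasks) > horizon:
            return j
    return None

cuts = []
for _ in range(20):                                # iterate between the two levels
    plan = solve_batching(cuts)
    bad = overloaded_unit(plan)
    if bad is None:
        print("Feasible allocation:", plan)
        break
    cuts.append(bad)                               # tighten the master and re-solve

In the literature the subproblem is a genuine constraint program over sequences and timings, and the information passed back is correspondingly richer; the skeleton above only conveys the iteration structure.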
2.8 Combined Scheduling and Process Operation
A feature of scheduling problems is that the representation of the production process depends on the gross margin of the business. Businesses with reasonable to large gross margins (e.g., consumer goods, specialties) tend to use “recipe-based” representations, where processes are operated at fixed conditions and to fixed recipes. Recipes may also be fixed by regulation (e.g., pharmaceuticals) or because of poor process knowledge (e.g., food processing). On the other hand, businesses with slimmer margins (e.g., refining, petrochemicals) are moving towards “property-based” representations, where process conditions and (crude) process models are used in the process representation, and stream properties are inferred from process conditions and mixing rules. Hence some degrees of freedom associated with process operation are optimized during production scheduling. Some examples of this type of process are described below.

Castro et al. (2002) described the use of both dynamic simulation and scheduling for the improved operation of the digester part of a pulp mill. A dynamic model is used to determine task durations and an RTN-based model is used for scheduling. The schedule optimization indicated that steam availability limits throughput. Alternative task combinations based on different steam sharing options were generated through the detailed modeling, and these were made available to the scheduling model. This then (approximately) enables the scheduling model to optimize the details of process operation.

Glismann and Gruhn (2001) describe a model which combines scheduling with nonlinear recipe blend optimization. Here, a long-range planning problem using nonlinear programming identifies products to be produced and different blending recipes to produce them. A short-term RTN-based scheduling model then schedules the blending activities in detail. Deviations between plan and schedule can then be reduced in a further step.

Alle and Pinto (2002) developed a model for the cyclic scheduling of continuous plants including operational variables such as processing rates and yields. This enables the optimization procedure to trade off time and material efficiencies. A tailored algorithm is used to find the global optimum of this mixed-integer nonconvex optimization problem.
A different type of operational consideration is performance degradation over time. Jain and Grossmann (1998) describe the cyclic scheduling of an ethylene cracking process. Here, the conversion falls with time, until a cleaning activity is undertaken to restore the cracker to peak performance. There is a tradeoff between frequent cleaning (high downtime but high average performance) and infrequent cleaning (lower downtime but lower average performance); a back-of-the-envelope illustration of this tradeoff is given at the end of this section. The problem is complicated by the presence of multiple furnaces and model nonlinearity.

Joly and Pinto (2003) describe a discrete-time MINLP model for the scheduling of fuel oil and asphalt production at a large Brazilian refinery. The nonlinear operational component comes from the calculation of the viscosity through variable flow rates rather than through fixed recipes. Because the viscosity specifications are fixed, a linear model can be derived from the MINLP and solved to global optimality.

Pinto et al. (2000) and Joly et al. (2002) described a refinery planning model with nonlinear process models and blending relations. They demonstrated that industrial-scale problems can in principle be solved using commercially available MINLP solvers. Neiro and Pinto (2003) extended this work to a set of refinery complexes, and also added scenarios to account for uncertainty in product prices. To ensure a robust solution, the decision variables are chosen “here and now”. They demonstrate that nonlinear models reflecting process unit conditions and mixture property prediction can be used in multisite planning models. They also show that there are significant cost benefits in solving for the complex together rather than for the individual refineries separately. Moro (2003), in his review of technology in the refining industry, indicates that scheduling tools that include details of process operation are still not available, but their application should result in benefits of US$10 million per year for a typical refinery.
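The cleaning-frequency tradeoff described by Jain and Grossmann (1998) can be illustrated with a back-of-the-envelope calculation; the exponential decay law and all numbers below are invented for illustration and are not taken from their paper.

import math

# Invented parameters: initial conversion, decay constant [1/h], cleaning time [h].
c0, k, t_clean = 0.80, 0.05, 24.0

def average_rate(t_run):
    # Relative production per hour of cycle: integral of c0*exp(-k*t) over the run,
    # divided by the total cycle length (run plus cleaning).
    produced = c0 * (1.0 - math.exp(-k * t_run)) / k
    return produced / (t_run + t_clean)

best = max(range(10, 500, 10), key=average_rate)
for t in (50, 100, best, 300):
    print(f"run length {t:4d} h -> average rate {average_rate(t):.4f}")

Very short runs waste a large fraction of each cycle on cleaning, very long runs operate mostly at low conversion, and the average rate peaks at an intermediate run length; the cyclic scheduling model locates this optimum while also coordinating multiple furnaces.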
2.9 Uncertainty in Planning and Scheduling
As with any other industrially relevant optimization problem, production scheduling requires a considerable amount of data. This is often subject to uncertainty (e.g., task processing times, yields, market demands, etc.). Sources of uncertainty (which tend to imply the means for dealing with them) can crudely be divided into:

• short-term uncertainties such as processing-time variations, rush orders, failed batches, equipment breakdowns, etc.;
• long-term uncertainties such as market trends, technology changes, etc.
Traditionally, short-term uncertainties have been treated through online or reactive scheduling, where schedules are adjusted to take account of new information. Longer-term uncertainties have been tackled through the solution of some form of stochastic programming problem. These two areas are considered below.
2.9.1 Reactive Scheduling
A major requirement of reactive scheduling systems is the ability to generate feasible updated schedules relatively quickly. A secondary objective is often to minimize deviations from the original schedule. As plants become more automated, this may become less important.

Cott (1989) presented some schedule modification algorithms to be used in conjunction with online monitoring, in particular to deal with processing-time variations and batch-size variations. Kanakamedala et al. (1994) presented a least-impact heuristic beam search for reactive schedule modification in the face of unexpected deviations in processing times and resource availability. This is based on evaluating possible product reroutings and selecting that which has least overall impact on the schedule.

Rodrigues et al. (1996) modified the discrete-time STN formulation to take account of due-date changes and equipment unavailability. They use a rolling horizon approach (rolling out a predefined schedule) which aims to look ahead for a short time to resolve infeasibilities. This implies a very small problem size and fast solution times.

Schilling (1997) adapted his RTN-based continuous-time formulation to create a hierarchical family of MILP-based reactive scheduling formulations. At the lowest level, the sequence of operations is fixed as in the original schedule and only the timing can vary. At the topmost level, a full original scheduling problem is solved. The intermediate levels all trade off degrees of freedom with computational effort. This allows the best solution in the time available to be implemented on the plant.

Bael (1999) combined constraint satisfaction and iterative improvement (based on local perturbations) in his rescheduling strategy for jobshop environments. A tradeoff between computational time and solution quality was identified.

Castillo and Roberts (2001) described a real-time scheduling approach for batch plants, based on model predictive control methods. The future allocation of orders to machines can be investigated using a fast tree-search algorithm, and robust solutions are generated in real time. This works due to the assumption of batch integrity throughout the process (i.e., there is no mixing or splitting of material).

Wang et al. (2000) described a genetic algorithm for online scheduling of a complex multiproduct polymer plant with many conflicting constraints. They describe how this technique may be successfully applied to scheduling problems and give guidance on appropriate mutation and crossover operations.

Henning and Cerda (2000) described a knowledge-based framework which aims to support a human scheduler performing both offline and reactive scheduling. They argue that purely automated scheduling is difficult because plant circumstances change regularly. An object-oriented, knowledge-based framework is used to capture problem information, and a scheduling support system is developed (within which a variety of scheduling algorithms can be encoded) that enhances the capabilities of the human domain expert via an interactive front end. Because schedules can be generated very quickly using this approach, it is suitable for reactive scheduling.
One reason for reactive scheduling is the need to rework batches when quality criteria are not met. Flapper et al. (2002) provide a review of methods for planning and control of rework.
2.9.2 Planning and Scheduling under Uncertainty
Most of the work in this area is based on models in which product demands are assumed to be uncertain and to differ between a number of time periods. Usually, a simple representation of the plant capacity is assumed, and the sophistication of the work relates to the implementation of stochastic planning algorithms to select amounts for production in the first period (here and now) and potential production amounts in different possible demand realizations in different periods (see, e.g., Ierapetritou et al. (1996)). A toy example of the two-stage, scenario-based structure underlying these models is sketched later in this subsection.

In relatively long-term planning, it is reasonable to introduce additional degrees of freedom associated with potential capacity expansions. Liu and Sahinidis (1996a,b) and Iyer and Grossmann (1998) extended the MILP process and capacity planning model of Sahinidis and Grossmann (1991b) to include multiple product-demand scenarios in each period. They then proposed efficient algorithms for the solution of the resulting stochastic programming problems (formulated as large deterministic equivalent models), either by projection (Liu and Sahinidis 1996a) or by decomposition and iteration (Iyer and Grossmann 1998). A major assumption in their formulation is that product inventories are not carried over from one period to the next. Clay and Grossmann (1994) also addressed this issue. They considered the structure of both the two-period and multiperiod problem for LP models and derived an approximation method based on successive repartitioning of the uncertain space, with expectations being applied over partitions. This has the potential to generate solutions to a high degree of accuracy in a much faster time than the full-scale deterministic equivalent model.

The approaches above are based on relatively simple models of plant capacity. Petkov and Maranas (1997) treat the multiperiod planning model for multiproduct plants under demand uncertainty. Their planning model embeds the planning/scheduling formulation of Birewar and Grossmann (1990) and therefore accurately calculates the plant capacity. They do not use discrete demand scenarios but assume normal distributions and directly manipulate the functional forms to generate a problem which maximizes expected profit and meets certain probabilistic bounds on demand satisfaction without the need for numerical integration. They also make the no-inventory-carryover assumption, but show how this can be remedied to a certain extent at the lower-level scheduling stage.

Sand and Engell (2004) use a rolling horizon, two-stage stochastic programming approach to schedule an expandable polystyrene plant that is subject to uncertainty in processing times, yields, capacities and demands. The former two sources of uncertainty are considered short-term and the latter two medium-term. A hierarchical scheduling technique is used where a master schedule deals with medium-term uncertainties and a detailed schedule with the
short-term ones. The uncertainties are represented through discrete scenarios and the two-stage problem is solved using a decomposition technique.

Alternative approaches have attempted to characterize the effects of some sources of uncertainty on detailed schedules. Rotstein et al. (1994) defined flexibility and reliability indices for detailed schedules. These are based on data for equipment reliability and demand distributions. Given a schedule (described in network flow form), these indices can be calculated to assess its performance. Dedopoulos and Shah (1995) used a multistage stochastic programming formulation to solve short-term scheduling problems with possibilities of equipment failure at each discrete time instant. The technique can be used to assess the impact of different failure characteristics of the equipment on expected profit, but suffers from the very large computational effort required even for small problems.

Sanmarti et al. (1995) define a robust schedule as one which has a high probability of being performed, and is readily adaptable to plant variations. They define an index of reliability for a unit scheduled in a campaign through its intrinsic reliability, the probability that a standby unit is available during the campaign, and the speed with which it can be repaired. An overall schedule reliability is then the product of the reliabilities of units scheduled in it, and solutions to the planning problem can be driven to achieve a high value of this indicator.

Mignon et al. (1995) assess schedules obtained from deterministic data for performance under variability by Monte Carlo simulation. Although a number of parameters may be uncertain, they focus on processing time. Performance and robustness (predictability) metrics are defined and features of schedules with good indicators are summarized (e.g., introducing an element of conservatism when fixing due dates). Honkomp et al. (1997) build on this to compare schedules generated by discrete-time and continuous-time algorithms and two means of ensuring robustness in the face of processing time uncertainties, namely increasing the processing times of bottleneck stages and increasing all processing times at the deterministic scheduling level. They found that the latter heuristic was better, and that the rounding effect of the discrete-time model results in marginally better robustness. Robustness is defined with respect to variance in the objective function. Strictly speaking, penalizing the variance of a metric to ensure robustness assumes that the metric is two-sided (i.e., “the closer to nominal the better” in the Taguchi sense). Since economic objective functions are one-sided (“the more the better”), robustness indicators such as these should be used with caution. This has been noted recently by Ahmed and Sahinidis (1998).

Gonzalez and Realff (1998a) analyzed MILP solutions for pipeless plants that were generated by assuming lower-level controls for detailed vehicle movements and fixed, nominal transfer times. The analysis was performed using stochastic simulation with variabilities in the transfer times. The system performance was found not to degrade considerably from its nominal value. They extended the work (Gonzalez and Realff 1998b) to consider the development of dispatching rules based on both general flexible manufacturing principles and properties of the MILP solutions. They found that rules abstracted from the MILP solutions were superior, and could be used in real time.
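The two-stage, scenario-based structure underlying much of the planning work discussed above can be written down for a toy single-product case. The sketch below, using the open-source PuLP modelling library with invented data, fixes the “here and now” production level before demand is known and lets scenario-dependent sales act as the recourse; real formulations add multiple periods, products, capacity-expansion decisions and, in the scheduling context, embedded capacity models.

import pulp

scenarios = {"low": 60.0, "base": 100.0, "high": 140.0}     # demand per scenario (invented)
prob_s = {s: 1.0 / len(scenarios) for s in scenarios}       # equal scenario probabilities
price, unit_cost, holding_cost, capacity = 10.0, 4.0, 1.0, 150.0

model = pulp.LpProblem("two_stage_planning", pulp.LpMaximize)
x = pulp.LpVariable("production", lowBound=0, upBound=capacity)      # here-and-now decision
sales = pulp.LpVariable.dicts("sales", list(scenarios), lowBound=0)  # recourse per scenario

# Expected profit: scenario revenue minus holding cost on leftovers, minus production cost.
model += (pulp.lpSum(prob_s[s] * (price * sales[s] - holding_cost * (x - sales[s]))
                     for s in scenarios)
          - unit_cost * x)
for s, d in scenarios.items():
    model += sales[s] <= d      # cannot sell more than the scenario demand
    model += sales[s] <= x      # nor more than was produced

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("produce now:", x.value())
for s in scenarios:
    print(f"  scenario {s}: sell {sales[s].value()}")

The deterministic equivalent grows with the number of scenarios and periods, and, as noted in the text, the process-industry versions typically contain integer variables in the recourse stages, which is what makes them hard.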
A similar “multimodel” technique that combines optimization, expert systems and discrete-event simulation is described by Artiba and Riane (1998), although their focus is on a robust package for an industrial environment rather than uncertainty per se.

Bassett et al. (1997) contrasted aggregate planning and detailed scheduling under uncertainties in processing times and equipment failure. They argue that aggregate models that take these into account miss critical interactions due to the complex short-term interactions. They therefore propose the use of detailed scheduling to study the effects of such uncertainties on aggregate indicators such as average probabilities of meeting due dates and makespans. They also use Monte Carlo simulation, but use each set of sampled data to generate a detailed scheduling problem instance, solved using a reverse rolling horizon algorithm. Once enough instances have been solved for statistical significance, a number of comparisons can be made. For example, they conclude that long, infrequent breakdowns are more desirable, with obvious implications for maintenance policies.

Lee and Malone (2001a, 2001b) developed a hybrid Monte Carlo simulation-simulated annealing approach to planning and scheduling under uncertainty. They treat uncertainties in demands, due dates, processing times, product prices and raw material costs. An expected NPV objective is chosen; this is calculated through simultaneous Monte Carlo simulation and simulated annealing. This can also be used to devise strategies to ensure flexibility and robustness, for example by including enforced idle times in the schedule to allow for adjustments or rush orders.

Ivanescu et al. (2002) describe an approach for makespan estimation and order acceptance in multipurpose plants with uncertain task processing times (following an Erlang distribution). Instead of using a large mathematical model, regression analysis is used, based on a family of problem classes.

Balasubramanian and Grossmann (2003) presented an alternative approach to scheduling under uncertainty, arguing that probabilistic data on the uncertainties (e.g., in processing times) are unlikely to be available, and instead proposing a fuzzy set and interval theory approach. A rigorous MILP that can provide bounds on the makespan is developed for the flowshop case, based on the evaluation of a fuzzy, rather than crisp, makespan, and rules for comparing alternative makespans in order to determine optimality.

Kuroda et al. (2002) use the simple concept of due-date buffers to allow for flexibility in adjusting schedules in an operational environment. Here, orders further out in the horizon are allowed to move around within the buffer, while those near the current time remain fixed. This facilitates responsiveness to unforeseen orders.
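In its simplest form, the Monte Carlo style of schedule evaluation used by Mignon et al. (1995) and Bassett et al. (1997) amounts to repeatedly sampling the uncertain processing times and recording the distribution of a performance metric for a fixed schedule. The fragment below does this for an invented three-task sequential recipe; it is a schematic illustration, not a reconstruction of either study.

import random
import statistics

random.seed(0)
nominal = [6.0, 4.0, 8.0]     # nominal processing times of three sequential tasks [h]
due_date = 20.0
n_samples = 5000

makespans = []
for _ in range(n_samples):
    # Each task time varies uniformly by +/-20% around its nominal value (for illustration).
    times = [t * random.uniform(0.8, 1.2) for t in nominal]
    makespans.append(sum(times))      # sequential recipe: makespan is the sum of task times

late = sum(1 for m in makespans if m > due_date) / n_samples
print(f"mean makespan    : {statistics.mean(makespans):.2f} h")
print(f"std deviation    : {statistics.stdev(makespans):.2f} h")
print(f"P(miss due date) : {late:.1%}")

For a realistic plant the inner evaluation is itself a (re)scheduling run rather than a simple sum, which is why these studies report significant computational effort.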
2.10 Industrial Applications of Planning and Scheduling
Honkomp et al. (2000) give a list of reasons why the practical implementation of scheduling tools based on optimization is fraught with difficulty. These include:

• The large amount of user-defined input for testing purposes.
• The difficulty in capturing all the different types of operational constraints within a general framework, and the associated difficulty in defining an appropriate objective function.
• The large amounts of data required; Book and Bhatnagar (2000) list some of the issues that must be faced if generic data models are to be developed for planning/scheduling applications.
• Computational difficulties associated with the large problem sizes found in practice.
• Optimality gaps arising out of many shared resources.
• Intermediate storage and material stability constraints.
• Nonproductive activities (e.g., set-up times, cleaning, etc.).
• Effective treatment of uncertainties in demands and equipment effectiveness.
Nevertheless, there have been several success stories in the application of state-of-the-art scheduling methods in industry. Schnelle (2000) applied MILP-based scheduling and design techniques to an agrochemical facility. The results indicated that sharing of equipment items between different products was a good idea, and the process reduced the number of alternatives to consider to a manageable number.

Berning et al. (2002) describe a large-scale planning-scheduling application which uses genetic algorithms for detailed scheduling at each site and a collaborative planning tool to coordinate plans across sites. The plants all operate batchwise, and may supply each other with intermediates, creating interdependencies in the plan. The scale of the problem is large, involving on the order of 600 different process recipes and 1000 resources.

Kallrath (2002b) presented a successful application of MILP methods for planning and scheduling in BASF. He describes a tool for simultaneous strategic and operational planning in a multisite production network. The aim was to optimize the total net profit of a global network, where key decisions include: operating modes of equipment in each time period, production and supply of products, minor changes to the infrastructure (e.g., addition and removal of equipment from sites), and raw material purchases and contracts. A multiperiod model is formulated where equipment may undergo one mode change per period. The standard material balance equations are adjusted to account for the fact that transportation times are much shorter than the period durations. Counterintuitive but credible plans were developed that resulted in cost savings of several millions of dollars. Sensitivity analyses showed that the key decisions were not too sensitive to demand uncertainty.

Keskinocak et al. (2002) describe the application of a combined agent- and optimization-based framework for the scheduling of paper products manufacturing. The framework solves the problems of order allocation, run formation, trimming and trim-loss minimization, and load planning. The deployment of the system is claimed to save millions of dollars per year. The “asynchronous agent-based team” approach uses constructor and improver agents to generate candidate solutions that are evaluated against multiple criteria.
2.11 New Application Domains
The scheduling techniques described above have in the main been applied to batch chemical production, particularly fine and specialty chemicals and pharmaceuticals. They are also appropriate to the wider and emerging process industries, and have started to find application in other domains, some of which are reviewed below.

Kim et al. (2001) tackle the problem of semiconductor wafer fabrication scheduling involving multiple products with different due dates. A series of dispatching (“lot release”) and lot scheduling rules are evaluated. Bhushan and Karimi (2003) describe the application of scheduling techniques to the wet-etching component of a semiconductor manufacturing process. This process is complicated by its “re-entrant” nature, where a product revisits stages of manufacture, and therefore does not fit the classical flowshop structure. A MILP formulation combined with a heuristic is used to minimize the makespan required to complete an outstanding set of jobs. Pearn et al. (2004) describe the challenges associated with the scheduling of the final testing stage of integrated circuit manufacture and compare the performance of alternative heuristic algorithms.

El-Halwagi et al. (2003) describe a system for efficient design and scheduling of recovery of nutrients from plant wastes and reuse of the nutrients, with a view to developing a strategy for future planetary habitation.

Lee and Malone (2000b) show how planning can be useful in waste minimization. They combine scheduling of the main process with scheduling of the solvent recovery system. They show that such simultaneous scheduling can reduce waste disposal costs significantly.

Pilot plant facilities can become very scarce resources in the modern chemical industry, with many more short-run processes. Mockus et al. (2002) describe the integrated planning and scheduling of such a facility. The long-term planning problem is primarily concerned with skilled human resource allocation while the short-term scheduling problem deals with production operations.

Roslof et al. (2001) describe the application of production scheduling techniques to the paper manufacturing industry. In this sector, large numbers of orders, some of which are for custom products, are the norm.

Van den Heever and Grossmann (2003) describe the production planning and scheduling of a complex of plants producing hydrogen. This requires the description of the behavior of a pipeline and its associated compressors, which adds complexity and nonlinearity. The combination of longer-term planning and short-term reactive scheduling enables the decision-makers to deal effectively with uncertainty.
2.12 Conclusions and Future Challenges
Production scheduling has been a fertile area for CAPE research and the development of technology. Revisiting the challenges posed by Reklaitis (1991), Rippin (1993), Shah (1998) and Kallrath (2002a), it is clear that considerable progress has been made towards meeting them.

Overall, the emerging trend in the area of short-term scheduling is the development of techniques for the solution of the general, resource-constrained multipurpose plant scheduling problem. The recent research is largely about solution efficiency and techniques to render ever-larger problems tractable. There remains work to be done on both model enhancements and improvements in solution algorithms if industrially relevant problems are to be tackled routinely, and if software based on these is to be used on a regular basis by practitioners in the field.

Many algorithms have been developed to exploit the tight relaxation characteristics of discrete-time formulations. There remains work to be done in this area, in particular to exploit the sparsity of the solutions. Direct intervention at the LP level during branch-and-bound procedures (e.g., column generation and branch-and-price) seems a promising way of solving very large problems without ever considering the full variable space. Decomposition techniques (e.g., rolling horizon methods) will also find application here.

Much of the more recent research has focussed on continuous-time formulations, but little technology has been developed based on these. The main challenge here is continual improvement in problem formulation and preprocessing to improve relaxation characteristics, and tailored solution procedures (e.g., branch-and-cut, and hybrid logic-continuous variable-integer variable branching) for problems with relatively large integrality gaps. Probably the most promising recent development is the implementation of hybrid MILP/CLP solution methods which recognize that different algorithms are suitable for different components of the scheduling problem.

An important contrast between early and recent work is that the early algorithms tended to be tested on “motivating” examples (e.g., to find the best sequence of a few products), while recent algorithms are almost always tested on (and often motivated by) industrial or industrially based studies.

The multisite problem has received relatively little attention, and is likely to be a candidate for significant research in the near future. A major challenge is to develop planning approaches that are consistent with detailed production scheduling at each site and distribution scheduling across sites. An obvious stumbling block is problem size, and a resource-task-based decomposition based on identifying weak connections should find promise here as the problems tend to be highly structured.

As scheduling and planning become integrated, the financial aspects will require more rigorous treatment. For example, Romero and Puigjaner (2004) describe the integration of cash flow modeling with production scheduling. This facilitates an accurate forward prediction of cash flow and even allows the enterprise to optimize treasury
management simultaneously with decisions on purchasing, production and sales, and can be used to enforce upper and lower bounds on cash balances.

Researchers have attacked the problem of planning and scheduling under uncertainty from a number of angles, but have tended to skirt around the fundamental problem of multiperiod, multiscenario planning with realistic production capacity models (i.e., embedding some scheduling information) in the case of longer-term uncertainties. Issues that must be resolved relate mainly to problem scale. A sensible way forward is to try to capture the problem in all its complexity and then to explore rigorous or approximate solution procedures, rather than develop exact solutions to somewhat idealized problems. Process industry models are complicated by having multiple stages (periods) and integer variables in the second and subsequent stages, so most of the classical algorithms devised for large-scale stochastic planning problems are not readily applicable. The treatment of short-term uncertainties by determining the characteristics of resilient schedules and then using online monitoring and rescheduling seems eminently sensible. Further work is required in such characterization and in the design of rescheduling algorithms with guaranteed real-time performance.

A final challenge relates to the seamless integration of the activities at different levels; this is of a much broader and more interdisciplinary nature. Shobrys and White (2002) describe some of the difficult challenges to be faced here, including data and functional fragmentation, inconsistencies between activities and datasets, different tools being used for different activities, time and material buffers at each function for protection, and slow responses and information flow.
References

1 Ahmed S. Sahinidis N. V. Robust process
planning under uncertainty, Ind. Eng. Chem. Res. 37 (1998) p. 1883-1892 2 Alle A. Pinto J . M . A general framework for simultaneous cyclic scheduling and operational optimization of multiproduct continuous plants, Braz. J. Chem. Eng. 19:4 (2002) p. 457-466 3 Artiba A. Riane F. An application of a planning and scheduling multi-model approach in the chemical industry, Comput. Ind. 36 (1998) p. 209-229 4 Applequist G. 0. Sarnikoglu]. Pekny G. V. Reklaitis Issues in the use, design and evolution of process scheduling and planning systems, ISA Trans. 36 (1997) p. 81-121 5 Bael P. A study of rescheduling strategies and abstraction levels for a chemical process scheduling problem, Prod. Planning Control lO(4) (1999) p. 359-364 6 J . Balasubramanian Grossmann 1. E. Scheduling optimization under uncertainty - an
alternative approach, Comput. Chem. Eng. 27 (2003) 469-490 7 Bassett M. H . Peknyj. F. Reklaitis G. V. Decomposition techniques for the solution of large-scale scheduling problems, AIChE J. 42 (1996) p. 3373-3387 8 Bassett M . H Pekny]. F. Reklaitis G. V. Using detailed scheduling to obtain realistic operating policies for a batch processing facility, Ind. Eng. Chem. Res. 36 (1997) p. 1717-1726 9 Berning G. Brandenburg M. Gursoy K. Mehta V. Tolle F.-J. An integrated system for supply chain optimisation in the chemical process industry, OR Spectrum 24 (2002) p. 371-401 10 Bhushan S. Karimi I. A. An MILP approach to automated wet-etch station scheduling, Ind. Eng. Chem. Res. 42 (2003) p. 1391-1399 11 Birewar D. B. Grossmann I. E. Efficient optimi. zation algorithms for zero-wait scheduling of multiproduct batch plants, Ind. Eng. Chem. Process Des. Dev. 28 (1989) p. 1333-1345
12 Birewar D. B. Grossmann I. E. Simultaneous
production planning and scheduling in multiproduct batch plants, Ind. Eng. Chem. Res. 29 (1990) p. 570-580 13 Blomer F . Giinther H.-0. Scheduling of a multi-product batch process in the chemical industry, Comput. Ind. 36 (1998)p. 245-259 14 Book N . L. Bhatnagar V. Comput. Chem. Eng. 24 (2000) p. 1641-1644 15 Bunch P. A simplex-based primal-dual algorithm for the perfect B-matching problem a study in combinatorial optimisation, Phd Thesis, Purdue University (1997) 16 Castiflo I Roberts C. A. Real-time controll scheduling for multipurpose batch plants, Comp. Ind. Eng. 41 (2001)p. 211-225 17 Castro P. Barbosa-Pdvoa A. P. F. D. Matos H . An improved RTN continuous-timeformulation for the short-term scheduling of multipurpose batch plants, Ind. Eng. Chem. Res. 40 (2001) p. 2059-2068 18 Castro P. Barbosa-Pdvoa A. P. F. D. Matos H . Ind. Eng. Chem. Res. 43 (2004)p. 105-118 19 Castro P. M . Barbosa-Pdvoa A. P. Matos H . A. Optimal periodic scheduling of batch plants using RTN-Baseddiscrete and continuoustime formulations: a case study approach, Ind. Eng. Chem. Res. 42 (2003)p. 3346-3360 20 Castro P. M . Barbosa-Pdvoa A. P. Matos H . A . Novais A. Q. Ind. Eng. Chem. Res. 43 (2004) p. 105-118 21 Castro P. M . Matos H . Barbosa-Pdvoa A. P. F . D. Dynamic modelling and scheduling of an industrial batch system, Comput. Chem. Eng. 26 (2002) 671-686 22 Cerda 1.Henning G. P. Grossmann I. E. A mixed integer linear programming model for short-term scheduling of single-stage multiproduct batch plants with parallel lines, Ind. Eng. Chem. Res. 36 (1997) p. 1695-1707 23 Clay R. L. Grossmann I. E. Optimization of stochastic planning-models, Chem. Eng. Res. Des. 72 (1994) p. 415-419 24 Cott B. /. An integrated computer-aided production management system for batch chemical processes, PhD Thesis, University of London (1989) 25 Dannebring D. G. An evaluation of flowshop sequencing heuristics, Manage. Sci. 23 (1977) p. 1174-1182 26 Dedopoulos I. T. Shah N. Preventive maintenance policy optimisation for multipurpose plant equipment, Comput. Chem. Eng, S19 (1995)p. S693-S698 27 Dimitriadis A. D. Shah N. Pantelides C. C. RTN-based rolling horizon algorithms for
medium-term scheduling of multipurpose plants, Comput. Chem. Eng. S21 (1997a)p. S 1061- S 1066 28 Dimitriadis A. D. Shah N. Pantelides C. C. A rigorous decomposition algorithm for solution of large-scale planning and scheduling problems, paper presented at AlChE Annual Meeting, Nov. 16-21, Los Angeles (1997b) 29 El-Halwagi M . Williams L. Hall /. Aglan H . Mortley D. Trotman A. Mass integration and scheduling strategies for resource recovery in planetary habitation, Chem. Eng. Res. Des. 81 (2003) p. 243-250 30 Elkamel A. Scheduling of process operations using mathematical programming techniques, PhD Thesis, Purdue University (1993) 31 Elkamel A. AI-Enezi G. Structured valid inequalities and separation in optimal scheduling of the resource-constrained batch chemical plant, Math. Eng. Ind. 6 (1998) p. 291-318 32 Elmaghraby S. The economic lot scheduling problem. review and extensions, Manage. Sci. 24 (1978) p. 587-598 33 Flapper S. D. P. Fransooj. C. Broekmeulen R. A. C. M . Inderfurth K. Planning and control of rework in the process industries: a review, Prod. Planning Control, 13 (2002) p. 26-34 34 Gabow H . N. On the design and analysis of efficient algorithms for deterministic scheduling, Proceedings of the 2nd International Conference Foundations of Computer-Aided Process Design, Michigan, June 19-24, (1983) USA, Cache Publications, pp. 473-528 (1983) 35 Giannelos N . F. Georgiadis M . C. A simple new continuous time formulation for short tern scheduling of multipurpose batch processes, Ind. Eng. Chem. Res. 41 (2002) p. 2178-2184 36 Giannelos N . F . Georgiadis M . C. Efficient scheduling of consumer goods manufacturing processes in the continuous time domain Comput. Oper. Res. 30 (2003) p. 1367-1381 37 Glismann K. Gruhn G. Short-term scheduling and recipe optimization of blending processes, Comput. Chem. Eng. 25 (2001). p. 627-634 38 Gonzalez R. R e a l f M . /. Operation of pipeless batch plants - 1. MILP schedules, Comput. Chem. Eng. 22 (1998a)p. 841-855 39 Gonzalez R. R e a l f M . /. Operation of pipeless batch plants - 11. Vessel dispatch rules, Comput. Chem. Eng. 22 (199%) p. 857-866
40 Gooding W. B. Specially structured formula-
tions and solution methods for optimisation problems important to process scheduling, PhD Thesis Purdue University (1994) 41 Gooding W. B. PeknyJ. F. McCroskey P. S. Enumerative approaches to parallel flowshop scheduling via problem transformation Comput. Chem. Eng. 18 (1994) p. 909-927 42 Graells M. Espuria A. Puiaaner L. Sequencing intermediate products: a practical solution for multipurpose production scheduling, Comput. Chem. Eng. S20 (1996)p. S1137-S1142 43 Gmnow M. Giinther H.-0. Lehmann M. Campaign planning for multistage batch processes in the chemical industry, OR Spectrum, 24 (2002)p. 281-314 44 Hampel R. Beyond the millenium, Chem. Ind. 10 (1997)p. 380-382 45 HajunkoskiJain V. Grossmann I. E. Hybrid mixed-integer/constrained logic programming strategies for solving scheduling and combinatroial optimization problems, Comput. Chem. Eng. 24 (2000)p. 337-343 46 Hasebe S. Hashimoto I. Ishikawa A. General reordering algorithm for scheduling of batch processes, J. Chem. Eng. Jpn. 24 (1991) p. 483-489 47 Hasebe S. Taniguchi S. Hashirnoto I. Automatic adjustment of crossover method in the scheduling using genetic algorithm, Kagaku Kogaku Ronbunshu, 22 (1996)p. 1039-1045 48 Henning G. P. Cerdli J . Knowledge based predictive and reactive scheduling in industrial environments, Comput. Chem. Eng. 24 (2000)p. 2315-2338 49 Honkomp S. J. Mockus L. Reklaitis G. V. Robust scheduling with processing time uncertainty, Comput. Chem. Eng. S21 (1997) p. S1055-S1060 50 Honkomp S. J . Lombardo S. Rosen 0. Pekny J. F. The curse of reality - why process scheduling optimisation problems are difficult in practice, Comput. Chem. Eng. 24 (2000) p. 323-328 51 Huang W. Chung P. W. H. Scheduling of pipeless batch plants using constraint satisfaction techniques, Comput. Chem. Eng. 24 (2000) p. 377-383 52 Ierapetritou M . G. Floudas C. A. Short-term scheduling: new mathematical models vs algorithmic improvements, Comput. Chem. Eng. 22 (1998a)p. S419-S426 53 Ierapetritou M. G. Floudas C. A. Effective continuous-time formulation for short-term
scheduling: I. Multipurpose batch processes, Ind. Eng. Chem. Res. 37 (199%) p. 4341-4359 54 Ierapetritou M. G. Floudas C. A. Effective continuous-time formulation for short-term scheduling: 11. Multipurpose/multiproduct continuous processes, Ind. Eng. Chem. Res. 37 (1998~) p. 4360-4374 55 Ierapetntou M. G. Hene T. S. Floudas C. A. Effective continuous-time formulation for short-term scheduling: 111. Multi intermediate due dates, Ind. Eng. Chem. Res. 38 (1999) p. 3446-3461 56 Ierapetntou M. G. Pistikopoulos E. N. Floudas C. A. Operational planning under uncertainty, Comput. Chem. Eng. 20 (1996) p. 1499- 1516 57 Ivanescu C. V. FransooJ. C. Bertrand]. W. M. Makespan Estimation And Order Acceptance In Batch Process Industries When Processing Times Are Uncertain, OR Spectrum, 24 (2002) p. 467-495 58 Iyer R. R. Grossmann I. E. Bilevel Decomposition A lgorithm For Long-Range Planning Of Process Networks, Ind. Eng. Chem. Res. 37 (1998) p. 474-481 59 l a i n V. Grossmann I. E. Cyclic scheudling of continuous paralle-process units with decaying performance, AIChE J. 44 (1998) p. 1623-1636 60 l a i n V. Grossmann I. E. Algorithms for hybrid MILP/CP models for a class of optimization problems, Informs J. Comput. 13 (2001) p. 258-276 61 Janak S. L. Lin X. Floudas C. A. Enhanced continuous-timeunit-specific event-based formulation for short-term scheduling of multipurpose batch processes: Resource constraints and mixed storage policies, Ind. Eng. Chem. Res. 43 (2004)p. 2516-2533 62 Joly M. Moro L. F. L. Pinto J. M. Planning and scheduling for petroleum refinerines using mathematical programming, Braz. J. Chem. Eng. 19(2) (2002)p. 207-228 63 Joly M . Pinto J . M. Mixed-integer programming techniques for the scheduling of fuel oil and asphalt production, Chem. Eng. Res. Des. 81 (2003) p. 427-447 64 Kallrath J . Planning and scheduling in the process industry, OR Spectrum, 24 (2002a) 219-250 65 Kallrath J . Combined strategic and operational planning - an MILP success story in chemical industry, OR Spectrum, 24 (2002b) p. 315-341
66 Kanakamedala K. B. Reklaitis G. V. Venkatasubramanian V. Reactive schedule modifica-
tion in multipurpose batch chemical plants, Ind. Eng. Chem. Res. 33 (1994)p. 77-90 67 Karimi I. A. McDonald C. M. Planning and scheduling of parallel semicontinuous processes. 2. Short-term Scheduling, Ind. Eng. Chem. Res. 36 (1997)p. 2701-2714 68 Keskinocak P. Wu F. Goodwin R. Murthy S. Akkiraju R. Kumaran S. Derebaif A. Scheduling solutions for the paper industry, Oper. Res. 50 (2002) p. 249-259 69 Kim M. Jung]. H . l e e I:B. Optimal scheduling of multiproduct batch processes for various intermediate storage policies, Ind. Eng. Chem. Res. 35 (1996) p. 4048-4066 70 Kim Y.-D. Kim].-G. Choi B. Kim H.-U. Production scheduling in a semiconductor wafer fabrication facility producing multiple product types with distinct due dates, IEEE Trans. Robot. Autom. 17 (2001) p. 589-598 71 Kondifi E. Pantelides C. C. Sargent R. W. H. A general algorithm for scheduling of batch operations, Proceedings of the 3rd International Symposium on Process Systems Engineering, Sydney, Australia, (1988) pp. 62-75 72 Kondifi E. Pantelides C. C. Sargent R. W. H . A general algorithm for short-term scheduling of batch operations - 1. Mixed integer linear programming formulation, Comput. Chem. Eng. 17 (1993) p. 211-227 73 Krarupj. B i l k 0. Plant location, set covering and economic lot size: an O(mn) algorithm for structured problems, Int Ser. Num. Math. 36 (1977) p. 155-180 74 Ku H . Karimi I. A . An evaluation of simulated annealing for batch process scheduling, Ind. Eng. Chem. Res. 30 (1991)p. 163-169 75 Kudva G . EfkamefA. Pekny]. F. Rekfaitis G. V. Heuristic algorithm for scheduling batch and semicontinuous plants with production deadlines, intermediate storage limitations and equipment changeover costs, Comput. Chem. Eng. 18 (1994) p. 859-875 76 Kuriyan K. Reklaitis G. V. Approximate scheduling algorithms for network flowshops, Chem. Eng. Symp. Ser. 92 (1985)p. 79-90 77 Kuriyan K. Rekfaitis G. V. Scheduling network flowshops so as to minimise makespan, Comput. Chem. Eng. 13 (1989) p. 187-200 78 Kuroda M . Shin H . Zinnohara A. Robust scheduling in an advanced planning and scheduling, Int. J. Prod. Res. 40 (2002) p. 3655-3668
79 l e e Y. G Malone M . F. Batch processes plan-
ning for waste minimization, Ind. Eng. Chem. Res. 39 (2000) p. 2035-2044 80 l e e Y. G Malone M. F. Flexible batch process planning, Ind. Eng. Chem. Res. 39 (2000b) p. 2045-2055 81 lee Y. G Malone M. F. Batch process schedule optimization under parameter volatility, Int. J. Prod. Res. 39 (2001a) p. 603-623 82 l e e Y. G. Malone M . F. A general treatment of uncertainties in batch process planning, Ind. Eng. Chem. Res. 40 (2001b)p. 1507-1515 83 Lee R-H. Park H . I. l e e 1. B. A novel nonuniform discrete time formulation for shortterm scheduling of batch and continuous processes, Ind. Eng. Chem. Res. 40 (2001a) p. 4902-4911 84 Lin X . Ffoudas C. A. Design, synthesis and scheduling of multipurpose batch plants via en effective continuous-time formulation, Comput. Chem. Eng. 25 (2001b) p. 665-674 85 l i u M. 1.Sahinidis N. V. Long-range planning in the process industries - a projection approach, Comput. Oper. Res. 3 (1996a)p. 237-253 86 l i u M. 1.Sahinidis N.V. Optimization in process planning under uncertainty, Ind. Eng. Chem. Res. 35 (1996b) p. 4154-4165 87 Majozi T. Zhu X . X . A novel continuloustime MILP formulation for multipurpose batch plants. 1. Short-term scheduling, Ind. Eng. Chem. Res. 40 (2001) p. 5935-5949 88 Maravefias C. T. Grossmann I . E. New general continuous-time statetask network formulation for short-term scheduling of multipurpose batch plants, Ind. Eng. Chem. Res. 42 (2003) p. 3056-3074 89 Maravefias C. T. Grossmann I . E. A hybrid MILP/CP decomposition approach for the continuous time scheduling of multipurpose batch plants, Comput. Chem. Eng. 28 (2004) p. 1921-1949 90 Mauderfi A. M . Rippin D. W. T. production planning and scheduling for multi-purpose batch chemical plants, Comput. Chem. Eng. 3 (1979) p. 199-206 91 McDonald C. M. Karimi 1. A. Planning and scheduling of parallel semicontinuous processes. 1. Production planning, Ind. Eng. Chem. Res. 36 (1997) p. 2691-2700 92 Mtndez C. A. Cerdd /. Optimal scheduling of a resource-constrained multiproduct batch plant supplying intermediates to nearby endproduct facilities, Comput. Chem. Eng. 24 (2000) p. 369-376
93 Mendez C. A. Henning G. P. Cerda J. An
MILP continuous-time approach to short-term scheduling of resourceconstrained multistage flowshop batch facilities, Comput. Chem. Eng. 25 (2001)p. 701-71 1 94 Mignon D. J . Honkomp S. J . Reklaitis G. V. A framework for investigating schedule robustness under uncertainty, Comput. Chem. Eng. S19 (1995) p. S615-S620 95 Mockus L. Reklaitis G . V. Continuous-Time Representation In Batch/Semicontinuous Process Scheduling - Randomized Heuristics Approach, Comput. Chem. Eng. S20 (1996) p. S1173-S1178 96 Mockus L. Reklaitis G. V. Continuous time representation approach to batch and continuous process scheduling. 1. MINLP formulation, Ind. Eng. Chem. Res. 38 (1999a)p. 197-203 97 Mockus L. Reklaitis G. V. Continuous time representation approach to batch and continuous process scheduling. 2. Computational issues, Ind. Eng. Chem. Res. 38 (1999b)p. 204-210 98 Mockus L. Vinsonj. M. Luo K. The integration of production plan and operating schedule in a pharmaceutical pilot plant, Comput. Chem. Eng. 26 (2002) p. 697-702 99 Moon S. Park S. Lee W. K. New MILP models for scheduling of multiproduct batch plants under zero-wait policy, Ind. Eng. Chem. Res. 35 (1996) p. 3458-3469 100 Moro L. F. L. Process technology in the petroleum refining industry - current situation and future trends, Comput. Chem. Eng. 27 (2003)p. 1303-1305 101 Murakami Y . Uchiyama H. Hasebe S. Hashimoto I. Application of repetitive SA method to scheduling problems of chemical processes, Comput. Chem. Eng. S21 (1997)p. S1087-S1092 102 Neiro S. M. S. Pinto]. M. Supply chain optimisation of petroleum refinery complexes, Proceedings of the 4th International Conference on Foundations of Computer-Aided Process Operations, Florida, USA, Jan (2003), Cache Corp, pp. 59-72 103 Neumann K. Schwindt C. Trautmann N. Advanced production scheduling for batch 104 Orcun S. Altinel I. K. Hortapu 6. General continuous time models for production planning and scheduling of batch processing plant, mixed integer linear program formulations and computational issues, Comput. Chem. Eng. 25 (2001) p. 371-38
plants in process industries, OR Spectrum, 24 (2002) p. 251-279 105 Pantelides C. C. Unified frameworks for optimal process planning and scheduling, Proceedings of the 2nd Conference on Foundations of Computer-Aided Process Operations, Snowmass, Colorado, USA, July 10-15 (1994) Cache. Corp., pp 253-274 106 Pantelides C. C. Real8 M.J. Shah N. Shortterm scheduling of pipeless batch plants Trans..IChemE A, 73 (1995) p. 431-444 107 Papageorgiou L. G. Pantelides C. C. Optimal campaign planning/scheduling of multipurpose batch/semicontinuous plants. 1. Mathematical formulation, Ind. Eng. Chem. Res. 35 (1996a)p. 488-509 108 Papageorgiou L. G. Pantelides C. C. Optimal campaign planninglscheduling of multipurpose batch/semicontinuous plants. 2. A mathematical decomposition approach, Ind. Eng. Chem. Res. 35 (199613) p. 510-529 109 Peam W. L. Chung S. H. Chen A. Y. Yang M. H. A case study on the multistage IC final testing scheduling problem with reentry, Int. J. Prod. Econ. 88 (2004)p. 257-267 110 Pekny J. F. Miller D. L. McCrae G. J. Application of a parallel travelling salesman problem to no-wait flowshop scheduling, paper presented at AIChE Annual Meeting, November 27 - December 2, Washington D.C. (1988) 111 Pekny J. F. Miller D. L. McCrae G. ]. An exact parallel algorithm for scheduling when production costs depend on consecutive system states, Comput. Chem. Eng. 14 (1990) p. 1009-1023 112 Petkov S. B. Maranas C. D. Multiperiod planning and scheduling of multiproduct batch plants under demand uncertainty, Ind. Eng. Chem. Res. 36 (1997)p. 4864-4881 113 Pinedo M.Scheduling. theory, algorithms and systems. Prentice Hall, New York (1995) 114 Pinto]. M. Grossmann I . E. Optimal cyclic scheduling of multistage continuous multiproduct plants, Comput. Chem. Eng. 18 (1994)p. 797-816 115 Pinto J . M. Grossmann I. E. A continuous time MILP model for short-term scheduling of multistage batch plants, Ind. Eng. Chem. Res. 34 (1995) p. 3037-3051 116 Pinto]. M. Grossmann I. E. A logic-based approach to scheduling problems with resource constraints, Comput. Chem. Eng. 21 (1997) p. 801-818 117 Pinto ]. M.Grossmann I. E. Assignment and sequencing models for the scheduling of
References chemical processes, Ann OR, 81 (1998) p. 433-466 118 Pintoj. M . Joly M . Moro L. F. L. Planning and scheduling models for refinery operations, Comput. Chem. Eng. 24 (2000) p. 2259-2276 119 Reklaitis G. V. Perspectives on scheduling and planning of process operations, Proceedings of the 4th International Symposium on Process Systems Engineering. Montebello, Canada, August 5-9 (1991) 120 Reklaitis G. V. Mockus L. Mathematical programming formulation for scheduling of batch operations based on non-uniform time discretization, Acta Chim. Slov. 42 (1995) p. 81-86 121 Rippin D. W. T. Batch process systems engineering: a retrospective and prospective review, Comput. Chem. Eng. S17 (1993) p. Sl-Sl3 122 Rodrigues M . T. M . Gimeno L. Passos C. A. S. Campos M . D. Reactive scheduling approach for multipurpose batch chemical plants, Comput. Chem. Eng. S20 (1996) p. S1215-Sl226 123 Roe B. Papageorgiou L. G. Shah N. A hybrid CLP and MILP approach to batch process scheduling, Proc. of 8th International Symposium on Process Systems Engineering, Kunming, China (2003) p. 582-587 124 Roe B. Papageorgiou L. G. Shah N. A hybrid MILP/CLP algorithm for multipurpose batch process scheduling, Comput. Chem. Eng. 29 (2005) p. 1277-129 125 Romero /. Puigjaner L. Joint financial and operating scheduling/planning in industry, Proceedings of 14th European Symposium on Computer-Aided Process Engineering, May (2004) Elsevier, p. 883-888 126 Romero /. Puigjaner L. Holczinger T. Friedler F . Scheduling intermediate storage multipurpose batch plants using the S-Graph, AIChE J. 50 (2004) p. 403-417 127 RoslbjJ. Hajunkoski I. Bjorkqvist /. Karlsson S. Westerfund T. An MILP-based reordering algorithm for complex industrial scheduling and rescheduling, Comput. Chem. Eng. 25 (2001) p. 821-828 128 Rotstein G. E. Lavie R. Lewin D. R. Synthesis of Flexible and Reliable Short-Term Batch Production Plans, Comput. Chem. Eng. 20 (1994) p. 201-215 129 Sahinidis N. V. Grossmann 1. E. Reformulation of multiperiod MILP models for planning and scheduling of chemical processes, Comput. Chem. Eng. 15 (1991) p. 255-272
E. Fomari R. E. Chathrathi M . Optimisation model for long-range planning in the chemical industry, Comput. Chem. Eng. 15 (1991a) 255-272 131 Sahinidis N. V. Grossmann I. E. MINLP model for cyclic multiproduct scheduling on continuous parallel lines, Comput. Chem. Eng. 15 (1991b) 85-103 132 Sand G. Engell S. Modelling and solving realtime scheduling problems by stochastic integer programming, Comput. Chem. Eng. 28 (2004) p. 1087-1103 133 Sanmarti E. Espuria A. Puigjaner L. Effects of equipment failure uncertainty in batch production scheduling, Comput. Chem. Eng. S19 (1995) p. S565-S570 134 Schilling G. Pantelides C. C. A simple continuous time process scheduling formulation and a novel solution algorithm, Comput. Chem. Eng. S20 (1996) p. S1221LS1226 135 Schilling G. H. Algorithms for short-term and periodic process scheduling and rescheduling, PhD Thesis, University of London (1997) 136 Schilling G. Pantelides C. C. Optimal periodic scheduling of multipurpose plants, Comput. Chem. Eng. 23 (1999) p. 635-655 137 Schnelle K. D. Preliminary design and scheduling of a batch agrochemical plant, Comput. Chem. Eng. 24 (2000) p. 1535-1541 138 Schwindt C. Trautmann N. Batch scheduling in process industries: an application of resource-constrained project scheduling, OR Spektmm, 22 (2000) p. 501-524 139 Shah N. Single- and multi-site planning and scheduling: Current status and future challenges, AIChE Ser. 94 (1998) p. 75-90 140 Shah N. Pantelides C. C. Optimal long-term campaign planning and design of batch plants, Ind. Eng. Chem. Res. 30 (1991) p. 2308-2321 141 Shah N. Pantelides C. C. Sargent R. W. H. A general algorithm for short-term scheduling of batch operations - 2. Computational issues, Comput. Chem. Eng. 17 (1993a) p. 229-244 142 Shah N. Pantelides C. C. Sargent R. W. H. Optimal periodic scheduling of multipurpose batch plants, Ann. Oper. Res. 42 (1993b) p. 193-228 143 Shobrys D. E. White D. C. Planning, scheduling and control systems: why cannot they work together, Comput. Chem. Eng. 26 (2002) p. 149-160 130 Sahinidis N. V. Grossmann I.
144 Sunol, A. K. Kapanoglu M. Mogili P. Selected
topics in artificial intelligence for planning and scheduling problems, knowledge acquisition and machine learning, Proc NATO AS1 on Batch Processing Systems Engineering, Series F, Vol, 143 (1992) pp 595-630 145 Tahmassebi T. Industrial experience with a mathematical programming based system for factory systems planning/scheduling, Comput. Chem. Eng. S20 (1996)p. S1565-S1570 146 uan den Heever S. A. Grossmann 1. E. Comput. Chem. Eng. 27 (2003) p. 1813-1839 147 van Hentenryck P. Constraint satisfaction in Logic Programming, MIT Press (1989) 148 Wallace M. G. Novello S. Schimpfj. ECLiPSe: A platform for constraint logic programming, ICL Systems Journal 12 (1997)p. 137-158 149 Wang K. F. Lohl T. Stobbe M. Engell S. A genetic algorithm for online-scheduling of a multiproduct polymer batch plant, Comput. Chem. Eng. (2000)p. 393-400 150 Wellons M. C. Reklaitis G. V. Scheduling of multipurpose batch plants. 1. Formation of single-product campaigns, Ind. Eng. Chem. Res. 30 (1991a)p. 671-688 151 Wellons M. C. Reklaitis G. V. Scheduling of multipurpose batch plants. 2. Multipleproduct campaign formation and production planning, Ind. Eng. Chem. Res. 30 (1991b) p. 688-705 152 Wilkinson S. /. Shah N. Pantelides C. C. Aggregate modelling of multipurpose plant operation, Comput. Chem. Eng. S19 (1995) p. S583-SS88
153 Wilkinson S. /. Cortier A. Shah N. Pantelides C. C. Integrated production and distribution
scheduling on a Europe-widebasis, Comput. Chem. Eng. S20 (1996)p. S1275-S1280 154 Wu D. Ierapetritou M. G. Decomposition approaches for the efficient solution of short-term scheduling problems, Comput. Chem. Eng. 27 (2003) p. 1261-1276 155 Xia Q. Macchietto S. Routing, scheduling and product mix optimization by minimax algebra, Chem. Eng. Res. Des. 72 (1994) p. 408-414 156 Yee K. L. Efficient algorithms for multipurpose plant scheduling, PhD Thesis, University of London (1998) 157 Yee K. L. Shah N. Scheduling of fast-moving consumer goods plants, J. Oper. Res. SOC.48 (1997)p. 1201-1214 158 Yee K. L. Shah N. Improving the efficiency of discrete-time scheduling formulations, Comput. Chem. Eng. S22 (1998) p. S403-S410 159 Zentner M.G. Reklaitis G. V. An intervalbased mathematical model for the scheduling of resource-constrained batch chemical processes, Proc. NATO AS1 on Batch Processing Systems Engineering, Series F, Vol, 143 (1992) p. 779-807 160 ZhangX, Sargent R. W. H. The optimal operation of mixed production facilities-ageneral formulation and some approaches for the solution, Proceedings of the 5th International Symposium on Process Systems Engineering, Kyongju, Korea, June (1994)p. 171-178 161 Zhang X Sargent R. W. H. The optimal operation of mixed production facilitiesextensions and improvements, Comput. Chem. Eng. S20 (1996) p. S1287-S1292
3 Process Monitoring and Data Reconciliation
Georges Heyen and Boris Kalitventzeff
3.1 Introduction

Measurements are needed to monitor process efficiency and equipment condition, but also to ensure that operating conditions remain within an acceptable range, to guarantee good product quality, and to avoid equipment failure and hazardous conditions. Recent progress in automatic data collection and archiving has solved part of the problem, at least for modern, well-instrumented plants. Operators are now faced with a lot of data, but they have few means to extract and fully exploit the relevant information they contain. Furthermore, plant operators recognize that measurements and laboratory analyses are never error-free. Using these measurements without any correction yields inconsistencies when generating plant balances or estimating performance indicators. Even careful installation and maintenance of the hardware cannot completely eliminate this problem. Model-based statistical methods, such as data reconciliation, have been developed to analyze and validate plant measurements. The objective of these techniques is to remove errors from available measurements and to yield complete estimates of all the process state variables as well as of unmeasured process parameters. This chapter constitutes a tutorial on process monitoring and data reconciliation. First, the key concepts and issues underlying plant data validation, sources of error, and redundancy considerations are introduced. Then, the data reconciliation problem is formulated for simple steady-state linear systems and extended further to consider nonlinear cases. The role of sensitivity analysis is also introduced. Dynamic data reconciliation, which is still a subject of major research interest, is treated next. The chapter concludes with a section devoted to the optimal design of the measurement system. Detailed algorithms and supporting software are presented along with the solution of some motivating examples.
3.2 Introductory Concepts for Validation of Plant Data
Data validation makes use of a plant model in order to identify measurement errors and to reduce their average magnitude. It provides estimates of all process state variables, whether directly measured or not, with the lowest possible uncertainty. It allows one to assess the value of key performance indicators, which are target values for process operation, or it can be used as a soft sensor to provide estimates of some unmeasured variables, as in inferential control applications. Especially in a framework of real-time optimal control, where model fidelity is of paramount importance, data validation is a recommended step before fine-tuning model parameters: there is no incentive in seeking to optimize a model when it does not match the actual behavior of the real plant. Data validation can also help in gross error detection, meaning either process faults (such as leaks) or instrument faults (such as identification of instrument bias and automatic instrument recalibration). Long an academic research topic, data validation is currently attracting more and more interest, since the amount of measured data collected by Digital Control Systems (DCS) and archived in process information management systems exceeds what can be handled by operators and plant managers. Real-time applications, such as optimal control, also require frequent parameter updates in order to ensure fidelity of the plant model. The economic value of extracting consistent information from raw data is recognized. Data validation thus plays a key role in providing coherent and error-free information to decision makers.
3.2.1 Sources of Error
Some sources of error in the balances depend on the sensors themselves:
• Intrinsic sensor precision is limited, especially for online equipment, where robustness is usually considered more important than accuracy.
• Sensor calibration is seldom performed as often as desired, since this is a costly and time-consuming procedure requiring competent manpower.
• Signal converters and transmission add noise to the original measurement.
• Synchronization of measurements may also pose a problem, especially for chemical analysis, where a significant delay exists between sampling and result availability.
Other errors arise from the sensor location or the influence of external effects. For instance, the measurement of gas temperature at the exit of a furnace can be influenced by radiation from the hot wall of the furnace. Inhomogeneous flow can also cause sampling problems: a local measurement is not representative of the average bulk property. A second source of error when calculating plant balances arises from small instabilities of the plant operation and from the fact that samples and measurements are not taken at
exactly the same time. Using time averages for plant data partly reduces this problem.
3.2.2 Redundancy
Besides safety considerations, the ultimate goal in performing measurements is to assess the plant performance and to take actions in order to optimize the operating conditions. However, most performance indicators cannot be directly measured and must be inferred from some measurements using a model. For instance, the extent of a reaction in a continuous reactor can be calculated from a flow rate and two composition measurements. In general terms, model equations that relate unmeasured variables to a sufficient number of available measurements are used. However, in some cases more measurements are available than are strictly needed, and the same performance indicator can be calculated in several ways using different subsets of measurements. For instance, the conversion in an adiabatic reactor where a single reaction takes place is directly related to the temperature variation. Thus the extent of the reaction can be inferred from a flow rate and two temperature measurements using the energy balance equation. In practice, all estimates of a performance indicator will be different, which makes life difficult and can lead to endless discussions about "best practice." Measurement redundancy should not be viewed as a source of trouble, but as an opportunity to perform extensive checking. When redundant measurements are available, they allow one not only to detect and quantify errors, but also to reduce the uncertainty, using procedures known as data validation.
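As a small numerical illustration of this spatial redundancy (all numbers below are invented), the following sketch estimates the same reaction extent twice, once from a component balance and once from the energy balance of an adiabatic reactor; the two redundant estimates differ slightly because of measurement errors.

```python
# Spatial redundancy illustrated with invented numbers: the same reaction extent
# estimated from two independent subsets of measurements of an adiabatic reactor.

feed_flow = 100.0              # kmol/h, measured molar feed flow rate
x_A_in, x_A_out = 0.50, 0.32   # measured mole fractions of reactant A
T_in, T_out = 300.0, 346.0     # measured inlet/outlet temperatures, K
cp = 80.0                      # kJ/(kmol K), assumed mixture heat capacity
dH_rxn = -20000.0              # kJ per kmol of extent, assumed heat of reaction

# Estimate 1: component balance on reactant A
extent_from_composition = feed_flow * (x_A_in - x_A_out)

# Estimate 2: energy balance on the adiabatic reactor
extent_from_energy = feed_flow * cp * (T_out - T_in) / (-dH_rxn)

# The two redundant estimates disagree slightly because of measurement errors
print(extent_from_composition, extent_from_energy)   # 18.0 vs 18.4 kmol/h
```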
3.2.3 Data Validation
The data validation procedure comprises several steps. The first is measurement collection. Nowadays, in well-instrumented plants, this is performed routinely by automated equipment. The second step is conditioning and filtering: not all measurements are available simultaneously, and synchronization might be required. Some data are acquired at higher frequency, and filtering or averaging can be justified. The third step is to verify the process condition and the adequacy of the model. For instance, if a steady-state model is to be used for data reconciliation, the time series of raw measurements should be analyzed to detect any significant transient behavior. The fourth step is gross error detection: the data reconciliation procedure to be applied later is meant to correct small random errors. Thus, large systematic errors that could result from complete sensor failure should be detected first. This is usually done by verifying that all raw data remain within upper and lower bounds.
More advanced statistical techniques, such as principal component analysis (PCA), can also be applied at this stage. Ad hoc procedures are applied in case some measured value is found inadequate or missing: it can be replaced by a default value or by the previous available one. The fifth step checks the feasibility of data reconciliation. The model equations are analyzed and the variables are sorted. Measured variables are redundant (and can thus be validated) or just determined; unmeasured variables are determinable or not. When all variables are either measured or observable, the data reconciliation problem can be solved to provide an estimate for all state variables. The sixth step is the solution of the data reconciliation problem. The mathematical formulation of this problem will be presented in more detail later. Each measurement is corrected as slightly as possible in such a way that the corrected measurements match all the constraints (or balances) of the process model. Unmeasured variables can be calculated from reconciled values using some model equations. In the seventh step, the system performs a result analysis. The magnitude of the correction for each measurement is compared to its standard deviation; large corrections are flagged as suspected gross errors. In the final step, results are edited and may be archived in the plant information management system. Customized reports can be generated and forwarded to various users (e.g., a list of suspect sensors sent to maintenance, performance indicators sent to the operators, daily balance and validated environmental figures to site management).
3.3 Formulation
Data reconciliation is based on measurement redundancy. This concept is not limited to the case where the same variable is measured simultaneously by several sensors. It is generalized with the concept of spatial redundancy, where a single variable can be estimated in several independent ways from separate sets of measurements. For instance, the outlet flow of a mixer can be directly measured or estimated by summing the measurements of all inlet flow rates. For dynamic systems, temporal redundancy is also available, by which repeated observations of the same variables are obtained. More generally, plant structure is additional information that can be exploited to correct measurements. Variables describing the state of a process are related by some constraints. The basic laws of nature must be verified: mass balance, energy balance, and some equilibrium constraints. Data reconciliation uses information redundancy and conservation laws to correct measurements and convert them into accurate and reliable knowledge. Kuehn and Davidson (1961) were the first to explore the problem of data reconciliation in the process industry. Vaclavek (1968, 1969) also addressed the problem of variable classification and the formulation of the reconciliation model. Mah et al.
(1976) proposed a variable classification procedure based on graph theory, while Crowe (1989) based an analysis on a projection matrix approach to obtain a reduced system. Joris and Kalitventzeff (1987) proposed a classification algorithm for general nonlinear equation systems, comprising mass and energy balances, phase equilibrium, and nonlinear link equations. A thorough review of classification methods is available in Veverka and Madron (1996) and in Romagnoli and Sanchez (2000). A historical perspective of the main contributions on data reconciliation can also be found in Narasimhan and Jordache (2000).
3.3.1 Steady-State Linear System
The simplest data reconciliation problem deals with steady-state mass balances, assuming all variables are measured, and results in a linear problem. In this case x is the vector of n state variables, while y is the vector of measurements. We assume that the random errors e = y - x follow a multivariate normal distribution with zero mean. The state variables are linked by a set of m linear constraints:
$$A x - d = 0 \qquad (1)$$

The data reconciliation problem consists of identifying the state variables x that verify the set of constraints and are close to the measured values in the least-squares sense, which results in the following objective function:

$$\min_{x} \; (y - x)^{T} W (y - x) \qquad (2)$$

where W is a weight matrix. The method of Lagrange multipliers allows one to obtain an analytical solution:

$$\hat{x} = y - W^{-1} A^{T} \left( A W^{-1} A^{T} \right)^{-1} (A y - d) \qquad (3)$$
It is assumed that there are no linearly dependent constraints. In order to solve practical problems and obtain physically meaningful solutions, it may be necessary to take into account inequality constraints on some variables (e.g., flow rates should be positive). However, this makes the solution more complex, and the constrained problem cannot be solved analytically.

It can be shown that $\hat{x}$ is the maximum likelihood estimate of the state variables if the measurement errors are normally distributed with zero mean, and if the weight matrix W corresponds to the inverse of the error covariance matrix C. Equation (3) then becomes:

$$\hat{x} = y - C A^{T} \left( A C A^{T} \right)^{-1} (A y - d) = \left[ I - C A^{T} (A C A^{T})^{-1} A \right] y + C A^{T} (A C A^{T})^{-1} d = M y + e \qquad (3b)$$
The estimates are thus related to the measured values by a linear transformation. They are therefore normally distributed with the average value and covariance matrix obtained by calculating the expected values:
$$E(\hat{x}) = M\,E(y) = x, \qquad \mathrm{Cov}(\hat{x}) = M C M^{T} \qquad (4)$$

This shows that the estimated state variables are unbiased. Furthermore, the accuracy of the estimates can easily be obtained from the measurement accuracy (covariance matrix C) and from the model equations (matrix A).
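To make Eqs. (3b) and (4) concrete, the following sketch applies them to a hypothetical three-stream splitter balance with invented measurement values and variances; it is only an illustration of the formulas, not of any particular validation package.

```python
import numpy as np

# Hypothetical example for Eqs. (3b) and (4): a stream split f1 = f2 + f3,
# written as A x - d = 0, with invented flow measurements and variances.

A = np.array([[1.0, -1.0, -1.0]])        # constraint matrix (one balance, three flows)
d = np.array([0.0])
y = np.array([101.0, 63.0, 40.5])        # raw flow measurements
C = np.diag([2.0**2, 1.5**2, 1.0**2])    # measurement covariance matrix

# Eq. (3b): reconciled values
ACA = A @ C @ A.T
x_hat = y - C @ A.T @ np.linalg.solve(ACA, A @ y - d)

# Eq. (4): covariance of the reconciled estimates, Cov(x_hat) = M C M^T
M = np.eye(3) - C @ A.T @ np.linalg.solve(ACA, A)
cov_x = M @ C @ M.T

print(x_hat)                    # reconciled flows satisfy the balance exactly
print(np.sqrt(np.diag(cov_x)))  # standard deviations smaller than the raw ones
```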
3.3.2 Steady-State Nonlinear System

The data reconciliation problem can be extended to nonlinear steady-state models and to cases where some variables z are not measured. This is expressed by:

$$\min_{x,\,z} \; (y - x)^{T} W (y - x) \quad \text{s.t.} \quad f(x, z) = 0 \qquad (5)$$

where the model equations are mass and component balance equations, energy balances, equilibrium conditions, and link equations relating measured values to state variables (e.g., conversion from mass fractions to partial molar flow rates). Usually the use of performance equations is not recommended, unless the performance parameters (such as compressor efficiency and overall heat transfer coefficients or fouling factors for heat exchangers) remain unmeasured and will thus be estimated by solving the data reconciliation problem. It would be difficult to justify correcting measurements using an empirical correlation, e.g., by correcting the outlet temperatures of a compressor by enforcing the value of the isentropic efficiency. The main purpose of data reconciliation is to allow monitoring of those efficiency parameters and to detect their degradation. Equation (5) takes the form of a nonlinear constrained minimization problem. It can be transformed into an unconstrained problem using Lagrange multipliers λ, and the augmented objective function L has to be minimized:
$$L(x, z, \lambda) = \tfrac{1}{2}\,(x - y)^{T} C^{-1} (x - y) + \lambda^{T} f(x, z), \qquad \min_{x,\, z,\, \lambda}\; L(x, z, \lambda)$$
The solution must verify the necessary optimality conditions i.e., the first derivatives of the objective function with respect to all independent variables must vanish. Thus one has to solve the system of normal equations:
$$\frac{\partial L}{\partial x} = C^{-1}(x - y) + \left(\frac{\partial f}{\partial x}\right)^{T}\!\lambda = 0, \qquad
\frac{\partial L}{\partial z} = \left(\frac{\partial f}{\partial z}\right)^{T}\!\lambda = 0, \qquad
\frac{\partial L}{\partial \lambda} = f(x, z) = 0 \qquad (6)$$

This last equation can be linearized as:

$$A x + B z + d = 0 \qquad (7)$$
where A and B are the partial Jacobian matrices of the model equation system:

$$A = \frac{\partial f}{\partial x}, \qquad B = \frac{\partial f}{\partial z} \qquad (8)$$
The system of normal equations in Eq. (6) is nonlinear and has to be solved iteratively. Initial guesses for measured values are straightforward to obtain. Process knowledge usually provides good initial values for unmeasured variables. No obvious initial values exist for the Lagrange multipliers, but solution algorithms are not too demanding in that respect (Kalitventzeff et al., 1978). The Newton-Raphson method is suitable for small problems and requires the solution of successive linearizations of the original problem Eq. (6):

$$J \begin{pmatrix} \Delta x \\ \Delta z \\ \Delta \lambda \end{pmatrix} = - \begin{pmatrix} \partial L / \partial x \\ \partial L / \partial z \\ \partial L / \partial \lambda \end{pmatrix} \qquad (9)$$

where the Jacobian matrix J of the equation system has the following structure:

$$J = \begin{pmatrix} C^{-1} & 0 & A^{T} \\ 0 & 0 & B^{T} \\ A & B & 0 \end{pmatrix} \qquad (10)$$
Numerical algorithms embedding a step size control, such as Powell's dogleg algorithm (Chen and Stadtherr 1981), are quite successful for larger problems. When solving very large problems, it is necessary to exploit the sparsity of the Jacobian matrix and use appropriate solution algorithms, such as those described by Chen and Stadtherr (1984a). It is common to assume that measurements are independent, which reduces the weight matrix C^-1 to a diagonal matrix. Ideally, the elements of matrices A and B should be evaluated analytically. This is straightforward for the elements corresponding to mass balance equations, which are linear, but can be difficult when the equations involve physical properties obtained from an independent physical property package. The solution strategy exposed above does not allow one to handle inequality constraints. This justifies the use of alternative algorithms to solve directly the nonlinear programming (NLP) problem defined by Eq. (6). Sequential quadratic programming (SQP) is the method of choice (Chen and Stadtherr 1984b; Kyriakopoulou and Kalitventzeff 1996, 1997). At each iteration, an approximation of the original problem is solved: the original objective function, being quadratic, is retained and the model constraints are linearized around the current estimate of the solution.
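The following sketch is a minimal illustration of the NLP form of Eq. (5), solved with a general-purpose SQP-type solver (scipy's SLSQP) rather than the dedicated algorithms cited above; the toy model equations, measurement values, and standard deviations are all invented.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the NLP form of Eq. (5): minimize the weighted least-squares
# correction of the measured variables x subject to model equations f(x, z) = 0,
# with one unmeasured variable z. Model equations and numbers are invented.

y     = np.array([10.2, 2.9, 31.2])      # measurements: feed F, extent X, outlet T
sigma = np.array([0.2, 0.1, 0.5])        # measurement standard deviations
W     = np.diag(1.0 / sigma**2)          # weight matrix C^-1

def model_equations(v):
    F, X, T, z = v                       # z is an unmeasured production rate
    return np.array([
        T - (25.0 + 2.0 * X),            # invented energy-balance-like link equation
        z - F * X,                       # definition of the unmeasured variable z
    ])

def objective(v):
    x = v[:3]                            # only measured variables are penalized
    return (y - x) @ W @ (y - x)

v0 = np.concatenate([y, [y[0] * y[1]]])  # start from the measurements
res = minimize(objective, v0, method="SLSQP",
               constraints={"type": "eq", "fun": model_equations})
x_rec, z_rec = res.x[:3], res.x[3]
print(np.round(x_rec, 3), round(z_rec, 3))
```

The reconciled values satisfy the model equations exactly, while z is obtained as an observable unmeasured variable.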
Before solving the NLP problem, some variable classification and preanalysis is needed to identify unobservable variables, parameters, and nonredundant measurements. Measured variables can be classified as redundant (if the measurement is absent or detected as a gross error, the variable can still be estimated from the model) or nonredundant. Likewise, unmeasured variables are classified as observable (estimated uniquely from the model) or unobservable. The reconciliation algorithm will correct only redundant variables. If some variables are not observable, the program will either request additional measurements (and possibly suggest a feasible set) or solve a smaller subproblem involving only observable variables. The preliminary analysis should also detect overspecified variables (particularly those set to constants) and trivial redundancy, where the reconciled variable does not depend at all upon its measured value but is inferred directly from the model. Finally, it should also identify model equations that do not influence the reconciliation, but are merely used to calculate some unmeasured variables. Such preliminary tests are extremely important, especially when the data reconciliation runs as an automated process. In particular, if some measurements are eliminated as gross errors due to sensor failure, nonredundant measurements can lead to unobservable values and nonunique solutions, rendering the estimates and fitted values useless. As a result, these cases need to be detected in advance through variable classification. Moreover, under these conditions, the NLP may be harder to converge.
3.3.3 Sensitivity Analysis
Solving the data reconciliation problem provides more than validated measurements. A sensitivity analysis can also be carried out. It is based on the linearization of the equation system in Eq. (9), possibly augmented to take into account active inequality constraints. Equation (9) shows that the reconciled values of the process variables x and z, and of the Lagrange multipliers λ, are linear combinations of the measurements. Thus their covariance matrix is directly derived from the measurement covariance matrix (Heyen et al. 1996). Knowing the variance of validated variables allows one to detect the respective importance of all measurements in the state identification problem. In particular, some measurements might appear to have little effect on the result and might thus be discarded from the analysis. Some measurements may appear to have a very high impact on key validated variables and on their variance: these measurements should be carried out with special caution, and it may prove wise to duplicate the corresponding sensors. The standard deviation of the validated values can be compared to the standard deviation of the raw measurements. Their ratio measures the improvement in confidence brought by the validation procedure. A nonredundant measurement will not be improved by validation. The reliability of the estimates for unmeasured observable variables is also quantified.
The sensitivity analysis also allows one to identify all state variables dependent on a given measurement, as well as the contribution of the measurement variance to the variance of the reconciled value. This information helps locate critical sensors, whose failure may lead to troubles in monitoring the process. A similar analysis can be carried out for all state variables, whether measured or not. For each variable, a list of all measurements used to estimate its reconciled value is obtained. The standard deviation of the reconciled variable is calculated, but also its sensitivity with respect to the measurement's standard deviation. This allows one to locate sensors whose accuracy should be improved in order to reduce the uncertainty affecting the major process performance indicators.
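Continuing the hypothetical linear splitter example used earlier, the sketch below shows how the contribution of each measurement to the variance of a reconciled variable can be computed from the rows of the matrix M, in the spirit of the contribution columns reported later in Tables 3.1 and 3.2; it illustrates the principle only, not the cited implementation.

```python
import numpy as np

# Sensitivity analysis sketch for the linear case (same hypothetical splitter
# balance as before): since x_hat = M y + e, the variance of each reconciled
# variable is a sum of contributions (M_ij)^2 var(y_j), reported per measurement.

A = np.array([[1.0, -1.0, -1.0]])         # linearized constraint, A x - d = 0
C = np.diag([2.0**2, 1.5**2, 1.0**2])     # measurement covariance matrix
M = np.eye(3) - C @ A.T @ np.linalg.solve(A @ C @ A.T, A)

var_y = np.diag(C)
for i in range(3):
    contrib = M[i, :]**2 * var_y          # contribution of each measurement
    total = contrib.sum()
    print(f"x[{i}]: std = {np.sqrt(total):.3f}, "
          f"shares = {np.round(contrib / total, 3)}")
```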
3.3.4 Dynamic Data Reconciliation
The algorithm described above is suitable for analyzing steady-state processes. In practice it is also used to handle measurements obtained from processes operated close to steady state, with small disturbances. Measurements are collected over a period of time and average values are treated with the steady-state algorithm. This approach is acceptable when the goal is to monitor some performance parameters that vary slowly with time, such as the fouling coefficient of heat exchangers. It is also useful when validated data are needed to fine-tune a steady-state simulation model, e.g., before optimizing set point values that are updated once every few hours. However, a different approach is required when the transient behavior needs to be monitored accurately. This is the case for regulatory control applications, where data validation has to treat data obtained with a much shorter sampling interval. Dynamic material and energy balance relationships must then be considered as constraints. The earliest algorithm was proposed by Kalman (1960) for the linear time-invariant system model. The general nonlinear process model describes the evolution of the state variables x by a set of ordinary differential equations (ODE):
$$\dot{x} = f(t, x, u) + w(t) \qquad (11)$$
where x are the state variables, u are the process inputs, and w(t) is white noise with zero mean and covariance matrix R(t). To model the measurement process, one usually considers sampling at discrete times t = kT, with measurements related to the state variables by:

$$y_k = h(x_k) + v_k \qquad (12)$$
where the measurement errors v_k are normally distributed random variables with zero mean and covariance matrix Q_k. One usually considers that the process noise w and the measurement noise v are not correlated. By linearizing Eqs. (11) and (12) at each time step around the current state estimates, an extended Kalman filter can be built (see, for instance, Narasimhan and Jordache
2000). It allows one to propagate an initial estimate of the states and the associated error covariance, and to update them at discrete time intervals using the measurement innovation (the difference between the measured values and the predictions obtained by integrating the process model from the previous time step). An alternative approach relies on NLP techniques. As proposed by Liebman et al. (1992), the problem can be formulated as
$$\min_{\hat{x}(t)} \; \sum_{k=0}^{N} \left[ y_k - \hat{x}(t_k) \right]^{T} Q \left[ y_k - \hat{x}(t_k) \right] \qquad (13)$$

subject to

$$f\!\left(\frac{d\hat{x}}{dt},\, \hat{x}(t)\right) = 0; \qquad \hat{x}(t_0) = \hat{x}_0 \qquad (14)$$

$$h(\hat{x}(t)) = 0 \qquad (15)$$

$$g(\hat{x}(t)) \geq 0 \qquad (16)$$
In this formulation, we expect that all state variables can be measured. When some measurements are not available, this can be handled by introducing null elements in the weight matrix Q. Besides enforcing process-specific constraints, the equalities in Eq. (15) can also be used to define nonlinear relationships between state variables and some measurements. All measurements pertaining to a given time horizon [t_0 ... t_N] are reconciled simultaneously. Obviously, the calculation effort increases with the length of the time horizon, and thus with the number of measurements. A tradeoff exists between calculation effort and data consistency. If measurements are repeated N times in the horizon interval, each measured value will be reconciled N times with different neighboring measurements, as long as it is part of the moving horizon. Which set of reconciled values is the "best" and should be considered for archiving is an open question. The value corresponding to the latest time t_N will probably be selected for online control applications, while a value taken in the middle of the time window might be preferred for archiving or offline calculations. Two solution strategies can be considered. The sequential solution and optimization strategy combines an optimization algorithm such as SQP with an ODE solver. Optimization variables are the initial conditions of the ODE system. Each time the optimizer sets a new value for the optimization variables, the differential equations are solved numerically and the objective function Eq. (13) is evaluated. This method is straightforward, but not very efficient: accurate solutions of the ODE system are required repeatedly, and handling the constraints Eqs. (15) and (16) might require a lot of trial and error. An implementation of this approach in a MATLAB environment is described by Romagnoli and Sanchez (2000). Simultaneous solution and optimization is considered more efficient. The differential constraints are approximated by a set of algebraic equations using a weighted residuals method, such as orthogonal collocation.
Predicted values of the state variables are thus obtained by solving the resulting set of algebraic equations, supplemented by the algebraic constraints of Eqs. (15) and (16). With this transformation, the distinction between dynamic data reconciliation and steady-state data reconciliation vanishes. However, this formulation requires solving a large NLP problem. This approach was first proposed by Liebman et al. (1992).
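A minimal sketch of the simultaneous strategy is given below; it replaces orthogonal collocation by a simple implicit-Euler discretization and uses an invented first-order decay model with synthetic measurements, so it only illustrates the structure of the resulting algebraic NLP.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the simultaneous approach to dynamic data reconciliation on a moving
# horizon (in the spirit of Eqs. (13)-(15)): the ODE dx/dt = -k*x is discretized
# with implicit Euler instead of orthogonal collocation; data are synthetic.

k, dt, N = 0.5, 0.2, 10                        # rate constant, step, horizon length
t = np.arange(N + 1) * dt
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-k * t) + rng.normal(0.0, 0.05, N + 1)   # noisy "measurements"
w = 1.0 / 0.05**2                              # scalar weight (1 / variance)

def objective(x):                              # weighted least squares over the horizon
    return w * np.sum((y - x)**2)

def ode_residuals(x):                          # implicit-Euler form of dx/dt + k*x = 0
    return (x[1:] - x[:-1]) / dt + k * x[1:]

res = minimize(objective, y, method="SLSQP",
               constraints={"type": "eq", "fun": ode_residuals})
x_rec = res.x
print(np.round(x_rec, 3))                      # reconciled trajectory follows the model
```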
3.4 Software Solution
Data reconciliation is a functionality that is now embedded in many process analysis and simulation packages or is proposed as a standalone software solution. Bagajewicz and Rollins (2002) present a review of eight commercial and one academic data reconciliation packages. Most of them are limited to material and component balances. More advanced features are only available in a few packages: direct connectivity to DCS systems for online applications, access to an extensive physical property library, handling of pseudocomponents (petroleum fractions), simultaneous data validation and identification of process performance indicators, sensitivity analysis, automatic gross error detection and correction, a model library for the major process unit modules, handling of rigorous energy balances and phase equilibrium constraints, and evaluation of confidence limits for all estimates. The packages offering the largest sets of features are Datacon (Invensys 2004) and Vali (Belsim 2004). Dynamic data reconciliation is still an active research topic (Binder et al. 1998). It is used in combination with some real-time optimization applications, usually in the form of custom-developed extended Kalman filters (see, for instance, Musch et al. (2004)), but dedicated commercial packages have yet to reach the market.
3.5 Integration in the Process Decision Chain
Data reconciliation is just one step - although an important step - in the data processing chain. Several operations, collectively known as data validation, are executed sequentially:
• Raw measurements are filtered to eliminate some random noise. When data are collected at high frequency, a moving average might be calculated to reduce the signal variance.
• If steady-state data reconciliation is foreseen, the steady state has to be detected.
• Measurements are screened in order to detect outliers, or truly abnormal values (out of feasible range, e.g., negative flow rate).
• The state of the process might be identified when the plant can operate in different regimes or with a different set of operating units. Principal component analysis (PCA) is typically used for that purpose, and allows one to select a reference case and to assign the right model structure to the available data set. This step also allows some gross error detection (the measurement set deviates significantly from all characterized normal sets).
• Variable classification takes place in order to verify that redundancy is present in the data set and that all state variables can be observed.
• The data reconciliation problem is solved.
• A global chi-square test can detect the presence of gross errors (a minimal numerical sketch follows this list).
• A posteriori uncertainty is calculated for all variables, and corrections are compared to the measurement standard deviation. In an attempt to identify gross errors, sequential elimination of suspect measurements (those with large corrections) can possibly identify suspect sensors. Alternatively, looking at subsystems of equations linking variables with large corrections allows one to pinpoint suspect units or operations in the plant.
• Key performance indicators and their confidence limits are evaluated and made available for reporting.
• Model parameters are tuned based on reconciled measurements and made available to process optimizers.
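The global chi-square test mentioned above can be sketched in a few lines; the objective value and degrees of freedom below are illustrative numbers, and taking the degrees of freedom equal to the number of redundant equations is a simplifying assumption.

```python
from scipy.stats import chi2

# Sketch of the global chi-square test on the reconciliation objective
# (illustrative numbers only).

objective_value = 19.83        # value of the weighted least-squares objective
degrees_of_freedom = 29        # assumed number of redundant equations
threshold = chi2.ppf(0.95, degrees_of_freedom)

if objective_value > threshold:
    print(f"gross error suspected ({objective_value:.2f} > {threshold:.2f})")
else:
    print(f"no gross error detected ({objective_value:.2f} <= {threshold:.2f})")
```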
3.6 Optimal Design of Measurement System
The quality of validated data obviously depends on the quality of the measurements. Recent studies have paid more attention to this topic. The goal is to design measurement systems allowing one to achieve a prescribed accuracy in the estimates of some key process parameters, and to secure enough redundancy to make the monitoring process resilient with respect to sensor failures. Some preliminary results have been published, but no general solution can be found addressing large-scale nonlinear systems or dynamics. Madron (1972) solved the linear mass balance case using a graph-oriented method. Meyer et al. (1994) proposed an alternative minimum-cost design method based on a similar approach. Bagajewicz (1997) analyzed the problem for mass balance networks, where all constraint equations are linear. Bagajewicz and Sanchez (1999) also analyzed the reallocation of existing sensors. The design and retrofit of a sensor network was also analyzed by Benqlilou et al. (2004), who discussed both the strategy and the tools structure.
3.6.1 Sensor Placement based on Genetic Algorithm
A model-based sensor location tool, making use of a genetic algorithm to minimize the investment cost of the measurement system, has been proposed by Heyen et al. (2002) and further developed by Gerkens and Heyen (2004). They propose a general mathematical formulation of the sensor selection and location problem in order to reduce the cost of the measurement system while providing
estimates of all specified key process parameters within a prescribed accuracy. The goal is to extend the capability of previously published algorithms and to address a broader problem, not being restricted to flow measurements and linear constraints. The set of constraint equations is obtained by linearizing the process model at the nominal operating conditions, assuming steady state. The process model is complemented with link equations that relate the state variables to any accepted measurements, or to key process parameters whose values should be estimated from the set of measurements. In our case, the set of state variables for process streams comprises all stream temperatures, pressures, and partial molar flow rates. In order to handle total flow rate measurements, a link equation describing the mass flow rate as the sum of all partial molar flow rates weighted by the components' molar masses has to be defined. Similarly, link equations relating the molar or mass fractions to the partial molar flow rates also have to be added for any stream where an analytical sensor can be located. Link equations also have to be added to express key process parameters, such as heat transfer coefficients, reaction extents, or compressor efficiencies. In the optimization problem formulation, the major contribution to the objective function is the annualized operating cost of the measurement system. In the proposed approach, we will assume that all variables are measured; those that are actually unmeasured will be handled as measured variables with a large standard deviation. Data reconciliation requires a solution of the optimization problem described by Eq. (5). The weight matrix W = C^-1 is limited to diagonal terms, which are the inverses of the measurement variances. The constrained problem is transformed into an unconstrained one using the Lagrange formulation as previously shown. Assuming all state variables are measured, the solution takes the form of a linear system, Eq. (17), whose coefficient matrix M contains the measurement weights W and the constraint Jacobian A.
The linear approximation of the constraints is easily obtained from the solution of the nonlinear model, since A is the Jacobian matrix of the nonlinear model evaluated at the solution. Thus matrix M can be easily built, knowing the variances of the measured variables appearing in submatrix W and the model Jacobian matrix A (which is constant). This matrix will be modified when assigning sensors to variables. Any diagonal element of matrix W will remain zero (corresponding to infinite variance) as long as a sensor is not assigned to the corresponding process variable; it will be computed from the sensor precision and the variable value when a sensor is assigned (see Section 3.6.2.3). Equation (17) need not be solved, since the measured values y are not known. However, the variances of the reconciled values x depend only on the variances of the measurements, as shown in Heyen et al. (1996):
$$\mathrm{var}(x_i) = \sum_{j=1}^{n} \left( [M^{-1}]_{ij} \right)^{2} \mathrm{var}(y_j) \qquad (18)$$
The elements of M^-1 are obtained by calculating the lower and upper triangular (LU) factors of matrix M. In the case when matrix M is singular, we can conclude that the measurement set has to be rejected, since it does not allow observation of all variables. Row i of M^-1 is obtained by back substitution using the LU factors, with a right-hand-side vector whose components are δ_ij (Kronecker delta: δ_ij = 1 when i = j, δ_ij = 0 otherwise). In the summation of Eq. (18), only the variables y_j that have been assigned a sensor are considered, since the variance of unmeasured variables has been set to infinity.
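A small sketch of this variance computation is shown below; the matrix M and the measurement variances are invented placeholders, and the LU factorization is carried out with scipy.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Sketch of the variance computation of Eq. (18) using the LU factors of M.
# M is a small invented stand-in for the linearized validation system; var_y
# holds the measurement variances (unmeasured variables would get zero weight).

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
var_y = np.array([0.04, 0.09, 0.01])

lu, piv = lu_factor(M)                 # factorize M once
n = M.shape[0]
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0
    row_i = lu_solve((lu, piv), e_i, trans=1)  # solves M^T u = e_i, i.e., row i of M^-1
    var_xi = np.sum(row_i**2 * var_y)          # Eq. (18)
    print(f"var(x_{i}) = {var_xi:.5f}")
```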
3.6.2 Detailed Implementation of the Algorithm
Solution of the sensor network problem is carried out in seven steps:
1. process model formulation and definition of link equations;
2. model solution for the nominal operating conditions and model linearization;
3. specification of the sensor database and related costs;
4. specification of the precision requirements for observed variables;
5. verification of problem feasibility;
6. optimization of the sensor network;
7. report generation.
Each of these steps is described in detail before presenting a test case.

3.6.2.1 Process Model Formulation and Definition of Link Equations
In the current implementation, the process model is generated using the model editor of the Vali 3 data validation software, which is used as the basis for this work (Belsim 2004). The model is formulated by drawing a flow sheet using icons representing the common unit operations, and linking them with material and energy streams. Physical and thermodynamic properties are selected from a range of physical property models. Any acceptable measurement of a quantity that is not a state variable (T, P,partial molar flow rate) requires the definition of an extra variable and the associated link equation, which is done automatically for standard measurement types (e.g., mass or volume flow rate, density, dew point, molar or mass fractions, etc.). Similarly, extra variables and link equations must be defined for any process parameter to be assessed from the plant measurements. A proper choice of extra variables is important, since we may note that many state variables can not be measured in practice (e.g., no device exists to directly measure a partial molar flow rate or an enthalpy flow). In order to allow the model solution, enough variables need to be set by assigning them values corresponding to the nominal operating conditions. The set of specified variables must at least match the degrees of freedom of the model, but overspecifications are allowed, since a least square solution will be obtained by the data reconciliation algorithm.
3.6.2.2 Model Solution for the Nominal Operating Conditions and Model Linearization
The data reconciliation problem is solved either using a large-scale SQP solver or the Lagrange multiplier approach. When the solution is found, the values of all state variables and extra variables are available, and the sensitivity analysis is carried out (Heyen et al. 1996). A dump file is generated, containing all variable values and the nonzero coefficients of the Jacobian matrix of the model and link equations. All variables are identified by a unique tag name indicating their type (e.g., S32.T is the temperature of stream S32, E102.K is the overall heat transfer coefficient of heat exchanger E102, and S32.MFH2O is the molar fraction of component H2O in stream S32).

3.6.2.3 Specification of the Sensor Database and Related Costs
A data file must be prepared that defines for each acceptable sensor type the following parameters:
• the sensor name;
• the annualized cost of operating such a sensor;
• the parameters a_i and b_i of the equation allowing one to estimate the sensor accuracy from the measured value y_i, according to the relation σ_i = a_i + b_i y_i;
• a character string pattern to match the name of any process variable that can be measured by the given sensor (e.g., a chromatograph will match any mole fraction and will thus have the pattern MF*, while an oxygen analyzer will be characterized by the pattern MFO2).
3.6.2.4 Specification of the Precision Requirements for Observed Variables
A data file must be prepared that defines the precision requirements for the sensor network after processing the information using the validation procedure. The following information is to be provided for all specified key performance indicators or for any process variable to be assessed:
• the composite variable name (stream or unit name + parameter name);
• the required standard deviation σ_i^t, either as an absolute value or as a percentage of the measured value.
3.6.2.5 Verification of Problem Feasibility
Before attempting to optimize the sensor network, the program first checks for the existence of a solution. It solves the linearized data reconciliation problem assuming all possible sensors have been implemented. In the case where several sensors are available for a given variable, the most precise one is adopted. This also provides an upper limit C_max for the cost of the sensor network. A feasible solution is found when two conditions are met:
• the problem matrix M is not singular;
• the standard deviation σ_i of all selected reconciled variables is lower than the specified value σ_i^t.
When the second condition is not met, several options can be examined. One can extend the choice of sensors available in the sensor definition file by adding more precise instruments. One can also extend the choice of sensors by allowing measurement of other variable types. Finally, one can modify the process definition by adding extra variables and link equations, allowing more variables besides state variables to be measured.

3.6.2.6 Optimization of the Sensor Network
Knowing that a feasible solution exists, one can start a search for a lower-cost configuration. The optimization problem as posed involves a large number of binary variables (on the order of the number of streams × the number of sensor types). The objective function is multimodal for most problems. However, identifying sets of suboptimal solutions is of interest, since criteria besides cost might influence the selection process. Since the problem is highly combinatorial and not differentiable, we attempted to solve it using a genetic algorithm (Goldberg 1989). The implementation we adopted is based on the freeware code developed by Carroll (1998). The selection scheme used involves tournament selection with a shuffling technique for choosing random pairs for mating. The evolution algorithm includes jump mutation, creep mutation, and the option for single-point or uniform crossover. The sensor selection is represented by a long string (gene) of binary decision variables (chromosomes). In the problem analysis phase, all possible sensor allocations are identified by finding matches between variable names (see Section 3.6.2.2) and sensor definition strings (see Section 3.6.2.3). A decision variable is added each time a match is found. Multiple sensors with different performance and cost can be assigned to the same process variable. The initial gene population is generated randomly. Since we know from the number of variables and the number of constraint equations the number of degrees of freedom of the problem, we can bias the initial sensor population by fixing a rather high probability of selection (typically 80 %) for each sensor. We found however that this parameter is not critical. The initial population count does not appear to be critical either. Problems with a few hundred binary variables were solved by following the evolution of populations of 10-40 genes, 20 being our most frequent choice. Each time a population is generated, the fitness of its members must be evaluated. For each gene representing a sensor assignment, we can estimate the cost C of the network by summing the individual costs of all selected sensors. We also have to build the corresponding matrix M (Eq. (3b)) and factorize it, which is done using a code exploiting the sparsity of the matrix. The standard deviation σ_i of all process variables is then estimated using Eq. (18). This allows calculating a penalty function P that takes into account the uncertainty affecting all observed variables. This penalty function sums penalty terms for all m target variables:

$$P = \sum_{i=1}^{m} P_i$$
where

$$P_i = \frac{\sigma_i}{\sigma_i^{t}} \quad \text{when } \sigma_i \leq \sigma_i^{t}$$

and a larger penalty term applies when σ_i exceeds the target σ_i^t.
The fitness function F of the population is then evaluated as follows:
• if matrix M is singular, return F = -C_max;
• otherwise return F = -(C + P).
The first penalty expression (slightly) increases the merit of a sensor network that performs better than specified. The second one penalizes genes that do not meet the specified accuracy, but does not reject them totally, since some of their chromosomes might code interesting sensor subnetworks. The population is submitted to evolution according to the mating, crossover, and mutation strategy. Care is taken that the current best gene is always kept in the population, and it is duplicated in case it should be submitted to mutation. After a specified number of generations, the value of the best member of the population is monitored. When no improvement is detected for a number of generations, the current best gene is accepted as a solution. There is no guarantee that this solution is an optimal one, but it is feasible and (much) better than the initial one.
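The sketch below illustrates the fitness evaluation just described in a drastically simplified setting; it is not Carroll's FORTRAN GA driver. A gene is a binary sensor-selection vector, unselected variables are treated as measured with a very large variance (as described in Section 3.6.1), and the constraint matrix, sensor costs, accuracies, targets, and the exact penalty expression for unmet targets are all invented placeholders.

```python
import numpy as np

# Simplified sketch of the fitness evaluation used inside the genetic algorithm.
# A gene is a binary sensor-selection vector; unselected variables are handled
# as "measured" with a huge variance, so the linear formulas of Eqs. (3b)-(4)
# can be reused to estimate the a-posteriori standard deviations.

A = np.array([[1.0, -1.0, -1.0]])          # linearized constraint (flow balance)
sensor_cost = np.array([5.0, 5.0, 5.0])    # cost of each candidate sensor
sensor_var  = np.array([4.0, 2.25, 1.0])   # variance if the sensor is installed
target_std  = np.array([2.5, 2.0, 1.5])    # required accuracy of each variable
BIG = 1e8                                  # variance standing in for "unmeasured"
C_MAX = sensor_cost.sum()

def fitness(gene):
    C = np.diag(np.where(gene == 1, sensor_var, BIG))
    M = np.eye(len(gene)) - C @ A.T @ np.linalg.solve(A @ C @ A.T, A)
    std = np.sqrt(np.diag(M @ C @ M.T))    # a-posteriori standard deviations
    if np.any(std > 1e3):                  # some variable is not observable
        return -C_MAX
    cost = sensor_cost[gene == 1].sum()
    penalty = np.sum(np.where(std <= target_std, std / target_std,
                              1.0 + 10.0 * (std / target_std)))
    return -(cost + penalty)

# Exhaustive check of all 2^3 genes for this tiny example; a real GA would
# search the space by tournament selection, crossover, and mutation instead.
genes = [np.array(g) for g in np.ndindex(2, 2, 2)]
best = max(genes, key=fitness)
print(best, fitness(best))
```

For this toy case the best configuration drops one of the three flow sensors, since the balance equation still allows the third flow to be estimated within its accuracy target.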
3.6.2.7 Report Generation

The program reports the best obtained configurations as a list of sensors assigned to process variables to be measured. The predicted standard deviation for all process variables is also reported, as well as a comparison between the achieved and target accuracies for all key process parameters.
3.6.3 Perspectives
The software prototype described here has been further improved by allowing more flexibility in the sensor definition (e.g., defining acceptable application ranges for each sensor type) and by addressing retrofit problems by specifying an initial instrument layout. The capability of optimizing a network for several operating conditions has also been implemented. The solution time grows significantly with the number of potential sensors. In order to address this issue, the algorithm has been parallelized (Gerkens and Heyen 2004), and the efficiency of parallel processing remains good as long as the number of processors is a divisor of the chromosome population size. Full optimization of very complex processes remains a challenge, but suboptimal feasible solutions can be obtained by requiring observability for smaller subflowsheets. The proposed method can easily be adapted to different objective functions besides cost to account for different design objectives. Possible objectives could address the
resiliency of the sensor network to equipment failures, or the capability to detect gross errors, along the lines proposed by Bagajewicz (2001). There is no guarantee that the solution found with the proposed method is an optimal one, but it is feasible and (much) better than the initial one.
3.7 An Example
A simplified ammonia synthesis loop illustrates the use of data validation, including sensitivity analysis and the design of sensor networks. The process model for this plant is shown in Figure 3.1. The process involves a five-component mixture (N2, H2, NH3, CH4, Ar), 10 units, 14 process streams, and 4 utility streams (ammonia refrigerant, boiler feed water, and steam). Feed stream f0 is compressed before entering the synthesis loop, where it is mixed with the reactor product f14. The mixture enters the recycle compressor C-2 and is chilled in exchanger E-1 by vaporizing ammonia. Separator F-1 allows one to recover liquid ammonia in stream f5, separated from the uncondensed stream f6.
Figure 3.1 Data validation, base case. Measured and reconciled values are shown in result boxes, as well as key performance indicators.
A purge f7 leaves the synthesis loop, while f8 enters the effluent-to-feed preheater E-3. The reaction takes place in two adiabatic reactors R-1 and R-2, with intermediate cooling in E-2, where steam is generated. Energy balances and countercurrent heat transfer are considered in heat exchangers E-1, E-2, and E-3. Reactors R-1 and R-2 consider atomic balances and energy conservation. Compressors C-1 and C-2 take into account an isentropic efficiency factor (to be identified). Vapor-liquid equilibrium is verified in heat exchanger E-1 and in separator F-1. The model comprises 160 variables, 89 being unmeasured. Overall, 118 equations have been written: 70 are balance equations and 48 are link equations relating the state variables (pressure, enthalpy, and partial molar flow rates) either to variables that can be measured (temperature, molar fraction, and mass flow rate) or to performance indicators to be identified. A set of measurements has been selected using engineering judgment. Values taken as measurements were obtained from a simulation model and disturbed by random errors. The standard deviations assigned to the measurements were:
• 1 °C for temperatures below 100 °C, 2 °C for higher temperatures;
• 1 % of the measured value for pressures;
• 2 % of the measured value for flow rates;
• 0.001 for molar fractions below 0.1, 1 % of the measured value for higher compositions;
• 3 % of the measured value for mechanical power.
Measured values are displayed in Figure 3.1, as are the validated results. The identified values of performance indicators are also displayed. These are the extent of the synthesis reaction in catalytic beds R-1 and R-2, the heat loads and transfer coefficients in exchangers E-1, E-2, and E-3, and the isentropic efficiencies of compressors C-1 and C-2. Result analysis shows that all process variables can be observed. All measurement corrections are below 2σ, except for methane in stream f7. The value of objective function Eq. (5) is 19.83, compared to a χ² threshold equal to 42.56. Thus, no gross error is suspected from the global test. Sensitivity analysis reveals how the accuracy of some estimates could be improved. For instance, Table 3.1 shows the sensitivity analysis results for the heat transfer coefficient in unit E-1. The first line in the table reports the value, absolute accuracy, and relative accuracy of this variable. The next rows in the table identify the measurements that have a significant influence on the validated value of the E-1 heat transfer coefficient. For instance, 77.57 % of the uncertainty on U comes from the uncertainty of variable AMO1-T (temperature of stream am01). The derivative of U with respect to AMO1-T is equal to 0.12784. Thus one can conclude that the uncertainty on the heat transfer coefficient could be reduced significantly if a more accurate measurement of a single temperature is available. Table 3.2 shows that the reaction extent in reactor R-2 can be evaluated without resorting to precise chemical analysis.
Table 3.1 Sensitivity analysis for heat transfer coefficient in exchanger E-1

Validated variable: K (unit E-1), computed value 3.5950, absolute accuracy 0.14515, relative accuracy 4.04 %.

Measurement | Tag name | Contrib. | Der. val. | Rel. gain | Penal. | P.U.
T (S AM01) | AMO1-T | 77.57 % | 0.12784 | 1.21 % | 0.01 | °C
T (S AM02) | AMO2-T | 5.75 % | -0.34800E-01 | 0.21 % | 0.00 | °C
MFNH3 (R F6) | F7-MFNH3 | 4.33 % | -30.216 | 34.29 % | 3.67 | -
MASSF (R AM01) | AM01-MASSF | 4.05 % | 0.16227E-01 | 46.50 % | 0.23 | t/h
MASSF (R F12) | F14-MASSF | 1.75 % | -0.27455E-02 | 33.79 % | 0.99 | t/h
T (S F7) | F7-T | 1.50 % | -0.17794E-01 | 62.36 % | 1.16 | °C
T (S F6) | F6-T | 1.50 % | -0.17794E-01 | 62.36 % | 0.01 | °C
T (S F4) | F4-T | 1.50 % | -0.17794E-01 | 62.36 % | 1.16 | °C

Table 3.2 Sensitivity analysis for reaction extent in reactor R-2

Validated variable: EXTENT1 (unit R-2), computed value 7.6642 kmol min⁻¹, absolute accuracy 0.33372, relative accuracy 4.35 %.

Measurement | Tag name | Contrib. | Der. val. | Rel. gain | Penal. | P.U.
T (S F11) | F11-T | 26.82 % | -0.86410E-01 | 21.85 % | 0.00 | °C
T (S F12) | F12-T | 25.13 % | 0.83640E-01 | 26.78 % | 0.22 | °C
T (S F9) | F9-T | 21.52 % | 0.77397E-01 | 27.69 % | 0.22 | °C
T (S F10) | F10-T | 19.95 % | -0.74532E-01 | 22.02 % | 0.00 | °C
MASSF (R F5) | F5-MASSF | 1.56 % | 0.49680E-01 | 49.64 % | 0.23 | t/h
MASSF (R BFW01) | STM01-MASSF | 1.51 % | 0.46591E-01 | 35.39 % | 0.01 | t/h
MASSF (R AM01) | AM01-MASSF | 0.81 % | 0.16647E-01 | 46.50 % | 0.23 | t/h
MASSF (R F0) | F0-MASSF | 0.77 % | 0.25907E-01 | 58.25 % | 0.14 | t/h
MFNH3 (R F12) | F14-MFNH3 | 0.58 % | 18.215 | 29.41 % | 0.15 | -
The uncertainty for this variable is 4.35 % of the estimated value and results mainly from the uncertainty in four temperature measurements. Better temperature sensors for streams f9, f10, f11, and f12 would allow one to better estimate the reaction extent. This sensor network provides acceptable estimates for all process variables. However, the application of the sensor placement optimization using a genetic algorithm can identify a cheaper alternative.
Table 3.3 Cost, accuracy, and range for available sensors

Measured variable | Relative cost | Standard deviation σ | Acceptable range
T | 1 | 1 °C | T < 150 °C
T | 1 | 2 °C | T > 150 °C
P | 1 | 1 % | 1-300 bar
Flow rate | 5 | 2 % | 1-100 kg s⁻¹
Power | 1 | 3 % | 1-10,000 kW
Molar composition (all components in stream) | 20 | 0.001 for x_i < 0.1; 1 % for x_i > 0.1 | -
A simplified sensor database has been used for the example. Only six sensor types were defined, with accuracies and costs as defined in Table 3.3. Accuracy targets are specified for seven variables:
• two compressor efficiencies, target σ = 4 % of the estimated value;
• three heat transfer coefficients, target σ = 5 % of the estimated value;
• two reaction extents, target σ = 5 % of the estimated value.
The program detects that up to 59 sensors could be installed. When all of them are selected, the cost is 196 units, compared to 42 sensors and 123 cost units for our initial guess shown in Figure 3.1. Thus the solution space involves 2^59 = 5.76 × 10^17 solutions (most of them being unfeasible). We let the search algorithm operate with a population of 20 chromosomes, and iterate until no improvement is noticed for 200 consecutive generations. This requires a total of 507 generations and 10,161 evaluations of the fitness function, which runs in 90 s on a laptop PC (1 GHz Intel Pentium III processor, program compiled with the Compaq FORTRAN compiler, local optimization only). Figure 3.2 shows
that the fitness function value varies sharply in the first generations and later improves only marginally. A solution with a cost similar to the final one is obtained after 40 % of the calculation time. The proposed solution involves only 26 sensors, for a total cost reduced to 53 cost units. The number of sensors is reduced from 16 to 11 for T, from 15 to 12 for P, from 6 to 2 for flow, and from 3 to 1 for composition. Thus the algorithm has been able to identify a solution satisfying all requirements with a considerable cost reduction.
3.8 Conclusions
Efficient and safe plant operation can only be achieved if the operators are able to monitor key process variables. These are the variables that either contribute to the process economy (e.g., yield of an operation) or are linked to the equipment quality (fouling in a heat exchanger, activity of a catalyst), to safety limits (departure from detonation limit), or to environmental considerations (amount of pollutant rejected). Most performance parameters are not directly measured and are evaluated by a calculation based on several experimental data. Random errors that always affect any measurement also propagate in the estimation of performance parameters. When redundant measurements are available, they allow one to estimate the performance parameters based on several data sets, leading to different estimates, which may lead to confusion. Data reconciliation allows one to address the state estimation and measurement correction problems in a global way by exploiting the measurement redundancy. Redundancy is no longer a problem, but an asset. The reconciled values exhibit a lower variance compared to original raw measurements; this allows process operation closer to limits (when this results in improved economy). Benefits from data reconciliation are numerous and include:
• improvement of measurement layout;
• decrease of the number of routine analyses;
• reduced frequency of sensor calibration: only faulty sensors need to be calibrated;
• removal of systematic measurement errors;
• systematic improvement of process data;
• a clear picture of plant operating condition and reduced measurement noise in trends of key variables;
• early detection of sensor deviation and of equipment performance degradation;
• actual plant balances for accounting and performance follow-up;
• safe operation closer to the limits;
• quality at process level.
Current developments aim at combining online data acquisition with data reconciliation. Reconciled data are displayed in control rooms in parallel with raw measurements. Departures between reconciled and measured data can trigger alarms. Analysis of the time variation of those corrections can draw attention to drifting sensors that need recalibration. Data reconciliation can also be viewed as a virtual instrument; this approach is particularly developed in biochemical processes, where the key process variables (population of microorganisms and yield in valuable by-products) are estimated from variables that are directly measured online, such as effluent gas composition. Current research aims at easing the development of data reconciliation models by employing libraries of predefined unit operations, automatic equation generation for typical measurement types, analyses of redundancy and observability, analyses of the error distribution of reconciled values, interfaces to online data collection systems and archival databases, and developing specific graphical user interfaces.
4 Model-based Control
Sebastian Engell, Gregor Fernholz, Weihua Cao, and Abdelaziz Toumi
4.1 Introduction
As explained in several chapters of this volume, rigorous process models can be used to optimize the design and the operating parameters of chemical processing plants. However, optimal settings of the parameters do not guarantee optimal operation of the real plant. The reasons for this are the inevitable plant-model mismatches, the effects of disturbances, changes in the plant behavior over time, etc. Usually not even the constraints on process or product parameters are met at the real plant if operating parameters that were obtained from offline optimization are applied. The only effective way to cope with the effect of plant-model mismatch, disturbances etc. is to use some sort of feedback control. Feedback control means that (some of) the degrees of freedom of the plant are modified based on the observation of measurable variables. These measurements may be performed quasicontinuously or with a certain sampling period, and accordingly the operation parameters (termed inputs in feedback control terminology) may be modified in a quasicontinuous fashion or intermittently. Often, key process parameters cannot be measured online at a reasonable cost. One important use of process models in process control is the model-based estimation of such parameters from the available measurements. This topic has been dealt with in the previous chapter. In this chapter, we focus on the use of rigorous process models for feedback control by model-based online optimization. Feedback control can be combined with model-based optimization in several different ways. The simplest, and most often used, approach is to perform an offline optimization and to divide the degrees of freedom into two groups. The variables in the first group are applied to the real process as they were computed by the offline optimization. The variables in the second group are used to control some other variables to the values which resulted from the offline optimization, e.g., requirements on purities are met by controlling the product concentration by manipulating the feed rate to a reactor or the reflux in a distillation column. In the design of these feedback controllers, dynamic plant models are used, in most cases obtained from a
linearization of the rigorous model around the optimal operating regime or process trajectory. If nonlinear process models are available from the design stage, these models can be used directly in model-based control schemes. This leads to nonlinear model-predictive control (NMPC), where the future values of the controlled variables are predicted over a finite horizon (the prediction horizon) using the model, and the future inputs are optimized over a certain horizon (the control horizon). The first inputs' values are applied to the plant. Thereafter, the procedure is repeated, taking new measurements into account. A major advantage of this approach is the ability to include process constraints in the optimization, thus exploiting the full potential of the plant and the available actuators (pumps, valves) and respecting operating limits of the equipment. In Section 4.2, NMPC around a precomputed trajectory of the process is presented in more detail and its application to a reactive semibatch distillation process is discussed. When closed-loop control is used to track a precomputed trajectory and the controllers perform satisfactorily, the process is kept near the operating point that was computed as the optimal one offline. Those variables which are under feedback control track their precomputed set-point even in the presence of disturbances and plant-model mismatch. However, the overall operation will in general no longer be optimal, because the precomputed operating regime is optimal for the nominal plant model, but not for the real plant. As an extension of this concept, feedback control can be combined with model adaptation and reoptimization. At a lower sampling rate than the one used for control, some model parameters are adapted based upon the available measurements. After the model has been updated, it is used for a reoptimization of the operating regime. The new settings can be implemented directly or be realized by feedback. In Section 4.3, such a control scheme is presented for the example of batch chromatographic separations, including experimental results. A serious problem in practice is structural plant-model mismatch. This means that an adaptation of the model parameters, even for an infinite number of noise-free measurements, will not give a model that accurately represents the real process. Therefore, if the structurally incorrect model is used in optimization, the resulting operating parameters will not be optimal; often, not even the constraints will be met by the real process unless the constrained variables are under feedback control with some safety margin that reflects the attainable control performance, which again causes a suboptimal operation. A solution to the problem of plant-model mismatch is the use of optimization strategies that incorporate feedback directly, i.e., use the information gained by online measurements not only to update the model but also to modify the optimization problem. In Section 4.4, this idea is presented in detail and the application to batch chromatography is used to demonstrate its potential. NMPC involves online optimization on a finite horizon based upon a nonlinear plant model. This approach can be employed not only to keep some process variables at their precomputed values or make them track certain trajectories, but also to perform online predictive optimization of the plant performance. Bounds, e.g., on product specifications, can be included in the formulation as constraints rather than
setting up a separate feedback control layer to meet the specifications at the real plant. In this spirit, the problem of controlling quasicontinuous (simulated moving bed) chromatographic separations is formulated in Section 4.5 as an online optimization problem, where the measured outputs have to meet the constraints on the product purities but the optimization goal is not tracking of a precomputed trajectory, but optimal process operation.
4.2 NMPC Applied to a Semibatch Reactive Distillation Process
4.2.1 Formulation of the Control Problem
In NMPC, a process model is used to predict the future process outputs ŷ over a fixed prediction horizon H_p for given sequences of H_c changes of the manipulated variables u. The aim of the controller is to minimize a quadratic function of the deviations between the process outputs ŷ and their desired trajectories y^ref as well as of the changes of the manipulated variables. The control move at the sampling point k+1 is given by the optimization problem Eq. (1). The parameters γ_ii and λ_ii allow scaling of the controlled and manipulated variables and shifting the weight either to good set-point tracking or to smooth controller actions. Bounds on the manipulated variables can be enforced by using sufficiently large penalties λ_ii or by adding inequality constraints (Eq. (2)) to the optimization problem Eq. (1):

\min_{\Delta u_{k+1},\dots,\Delta u_{k+H_c}} J = \sum_{i=1}^{H_p} \gamma_{ii}\,\bigl(\hat{y}_{k+i} - y^{\mathrm{ref}}_{k+i}\bigr)^2 + \sum_{i=1}^{H_c} \lambda_{ii}\,\bigl(\Delta u_{k+i}\bigr)^2 \qquad (1)

u_{\min} \le u_{k+j} \le u_{\max} \quad \forall\, j = 1,\dots,H_c. \qquad (2)
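As a minimal illustration of Eqs. (1) and (2), the sketch below evaluates the quadratic NMPC cost for a candidate sequence of input moves. The predictor `predict_outputs` is a hypothetical stand-in for the process model, and the weights and bounds are assumptions.

```python
# Hedged sketch of the quadratic NMPC objective of Eq. (1) with the bounds of Eq. (2).
# `predict_outputs` stands in for the (simplified) column model and is an assumption;
# it should return the predicted outputs over the prediction horizon H_p.
import numpy as np

def nmpc_cost(du, u_prev, y_ref, predict_outputs, gamma, lam, u_min, u_max):
    """du: input changes over the control horizon H_c, shape (H_c, n_u)."""
    u = u_prev + np.cumsum(du, axis=0)              # future manipulated variables
    if np.any(u < u_min) or np.any(u > u_max):      # Eq. (2): hard input bounds
        return np.inf
    y_hat = predict_outputs(u)                      # model prediction over H_p steps
    tracking = np.sum(gamma * (y_hat - y_ref) ** 2) # weighted set-point deviations
    moves = np.sum(lam * du ** 2)                   # weighted input moves
    return tracking + moves
```

In a receding-horizon scheme this cost is minimized at every sampling instant and only the first computed input move is applied to the plant.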
If the control scheme is applied to a real plant, plant-model mismatch or disturbances will lead to differences between the predicted and the real process outputs. Therefore a time-varying disturbance model, as proposed by Draeger et al. (1995), is included in the process model. The formal representation of the complete model that is used by the model predictive controller is

\hat{y}_{k+i} = y^{\mathrm{model}}_{k+i} + d_{k,i}, \quad i = 1,\dots,H_p \qquad (3)
where d_{k,i} denotes the estimated disturbances, y^model_k denotes the model outputs given by the physical process model, and ŷ the model prediction of the controller used in the optimization problem (1). The disturbances d_{k,i} are recalculated at every time k for each time step i. The process model is simulated from time k-i until time k, taking into account the actual control actions, giving the model outputs y^model(k|k-i). The errors e_{k,i} are computed as the differences between the measurements y^meas_k and the model outputs y^model(k|k-i):
e_{k,i} = y^{\mathrm{meas}}_{k} - y^{\mathrm{model}}(k\,|\,k-i). \qquad (4)
The new estimates of the disturbances are calculated by a first-order filter:

d_{k,i} = \alpha\, e_{k,i} + (1 - \alpha)\, d_{k-1,i}. \qquad (5)

4.2.2 The Methyl Acetate Process
Methyl acetate is produced from acetic acid and methanol in an esterification reaction. The conventional process consists of a reactor and a complex distillation column configuration, while using reactive distillation, high-purity methyl acetate can be produced in a single column (Agreda et al. 1990). The reaction can be catalyzed either homogeneously by sulfonic acid or heterogeneously using a solid catalyst. The latter avoids material problems caused by the sulfonic acid as well as the removal of the catalyst at the end of the batch. This process is investigated here. A scheme of the process is shown in Figure 4.1. The column consists of three parts. Two structured catalytic packings of 1 m height are located in the lower part of the column while the upper part contains a noncatalytic packing. Methanol is filled into the reboiler before the beginning of the batch and heated until the column is filled with methanol. Acetic acid is fed to the column above the reactive section. Since acetic acid is the highest boiling component, it is necessary to feed it above the catalytic packing in order to ensure that both raw materials are present in the catalytic area in sufficient concentrations. The upper section purifies the methyl acetate. The azeotropes of the mixture are overcome because water and acetic acid are present in the stream that enters the separation stages. The plant considered here is a pilot plant in the Department of Biochemical and Chemical Engineering at Universität Dortmund. A batch run takes approximately 17 h.
Figure 4.1 Scheme of the semibatch column
A more detailed description of the process and a rate-based model and its validation are presented in Kreul et al. (1998) and Noeres (2003). The latter pointed out that for this process the accuracy of a rate-based model is not significantly higher than that of an equilibrium stage model, and thus the equilibrium stage model was used to determine the optimal operation of the process (Fernholz et al. 2000) and as a basis for controller design. Mass and energy balances for all parts of the plant result in a differential-algebraic equation system consisting of more than 2000 equations. The main assumptions in the model are:
- The structured packings can be treated as a number of theoretical plates using the HETP value (height equivalent to a theoretical plate).
- The vapor and the liquid phase are in thermodynamic equilibrium.
- All chemical properties depend on the temperature and the composition.
- The phase equilibrium is calculated using the Wilson equations.
- The dimerization of acetic acid in the vapor phase is taken into consideration.
- The reaction kinetics are formulated by a quasihomogeneous correlation.
- The pressure drop of the packing is calculated by the equation of Mickowiak (1991).
- The hold-up of the packing is determined by an experimentally verified correlation.
- Negligible vapor hold-up.
- Ideal vapor behavior.
- Constant molar hold-up in the condenser.
- The dynamics of the tray hydraulics and the liquid enthalpy are taken into consideration.
The aim of the controller is to ensure the tracking of the optimal trajectory in the presence of model inaccuracies and disturbances acting on the process.
4.2.3 Simplified Solution of the Model Equations
Generally, any process model can be used to predict the future process outputs ŷ_{k+1}, as long as the model is sufficiently accurate. A straightforward approach would be to use the same model that was used to calculate the optimal operation. Unfortunately the integration of this differential-algebraic model is too time-consuming to solve the optimization problem given by Eqs. (1) and (2) within one sampling interval. Thus, a different model had to be developed to make sure that the solution of Eqs. (1) and (2) is found between two sampling points. The physical process model is based on heat and mass balances resulting in a set of differential equations. A large number of algebraic equations is needed to calculate the physical properties, the phase equilibrium, the reaction kinetics and the tray hold-ups, as well as the connections between the different submodels. Various numerical packages are now available to solve large differential-algebraic equation (DAE) systems, like gPROMS (1997) or the Aspen Custom Modeler (ACM). Even
though they are designed to solve large and sparse DAEs in an efficient way, general-purpose solvers do not take advantage of the mathematical structure of a special problem. Our aim was to find a way to reduce the numerical effort required to calculate the solution of the DAE system which describes the reactive distillation process. The main idea is to split up the equation system into a small section that is treated by the solver in the usual manner and a large subsystem containing mainly the algebraic equations. An independent solver that communicates with the DAE solver calculates the solution of this subsystem. Generally, this sequential approach may not be advantageous since the solution of the algebraic part must be provided in each step of the iteration of the DAE solver. It will only be superior if the solution of the second part is calculated in a highly efficient way. Therefore an analysis of the system equations for one separation tray is given in the sequel. Similar considerations can easily be made for the reactive trays as well as the other submodels of the process. The core of the model for each separation tray consists of the mass balances of the components (Eq. (6)), the heat balance (Eq. (7)), and the constitutive equation for the liquid mole fractions (Eq. (8)):
In addition to the core Eqs. (6)-(8), empirical correlations are used to calculate the molar hold-ups of the trays (Eq. (9)), the liquid enthalpy (Eq. (10)), the vapor enthalpy (Eq. (11)), and the density (Eq. (12)):
Finally the phase equilibrium is calculated by using a four-parameter Wilson activity coefficient model for the liquid phase and a vapor-phase model which takes into consideration the dimerization of the acetic acid in the vapor phase (Noeres 2003). This phase equilibrium model (Eq. (13)) is an implicit set of equations, in contrast to Eqs. (9)-(12) which are explicit functions of the composition and the temperature.
Even though the formal description of Eqs. (9)-(13) might suggest that their size is similar to the core model Eqs. (6)-(8), the opposite is true. Owing to the necessity of introducing a lot of auxiliary variables, especially for the phase equilibrium, Eqs. (9)-(13) make up the largest part of the overall system. Thus the idea is to move as many algebraic equations as possible, especially the parts containing the auxiliary variables, from the part which is handled by the DAE solver to an additional solver that exploits the mostly explicit structure of the equations. The DAE solver used in this work is DASOLVE, a standard solver in gPROMS for stiff DAEs (gPROMS, 1997). The proposed architecture of the algorithm is shown in Figure 4.2. The main task of the external software is to solve the implicit phase equilibrium Eq. (13a,b) in an efficient manner. Solving Eq. (13a,b) for given pressure and liquid composition means finding the temperature T such that condition Eq. (13b) is fulfilled for the values calculated by Eq. (13a). Thus, the phase equilibrium calculation can be treated as solving a nonlinear equation with one unknown variable. Once Eq. (13a) and Eq. (13b) are solved, the remaining variables can be calculated straightforwardly by the explicit Eqs. (9)-(12). All values are passed back to the DAE solver via the foreign object interface. In order to minimize the number of equations handled by the DAE solver, the dynamics of the tray hold-up N and the liquid enthalpy h_liq were neglected. This causes deviations between the original model and the model with neglected dynamics. Several case studies were performed to check the differences between the original model and the model with neglected dynamics. In many cases the predictions of both models can hardly be distinguished. In some cases, however, noticeable differences in the dynamic behaviors result. These inaccuracies have to be handled by the disturbance estimation of Eqs. (3)-(5). By applying this scheme to the complete column model, the time required to calculate the solution for typical model predictive control scenarios could be reduced by a factor of 6-10. The use of a simplified model and the special solution algorithm enable the online solution of the optimal control problem of Eqs. (1)-(3). The optimization algorithm L-BFGS-B of Byrd et al. (1994) is used to solve the optimization problem. This code solves nonlinear optimization problems with simple bounds on the decision variables and ensures a decrease of the goal function in each iteration step. The user of this code has to supply the values of the goal function as well as its derivatives with respect to the decision variables. The value of the cost function is calculated by integrating the model, while the derivatives are obtained by perturbation. Using perturbations offers the opportunity to parallelize the calculation. Within a sampling period of
6 min, about 100 function and gradient evaluations can be performed. The maximal number of function and gradient evaluations needed for the cases investigated was 33. Thus, the algorithm is able to find the optimal solution within the sampling time.

Figure 4.2 Scheme of the algorithm
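The inner phase-equilibrium calculation described above amounts to a scalar root-finding problem in the tray temperature. The sketch below illustrates this idea under simplifying assumptions; `wilson_gamma` and `p_sat` are hypothetical property routines, not the property models of the pilot-plant simulator.

```python
# Hedged sketch: treating the phase-equilibrium calculation (Eqs. 13a,b) as a scalar
# root-finding problem in the tray temperature T. The activity-coefficient routine
# `wilson_gamma` and the vapor-pressure routine `p_sat` are assumed placeholders.
from scipy.optimize import brentq

def bubble_point_T(x, P, wilson_gamma, p_sat, T_lo=300.0, T_hi=400.0):
    """Find T such that the vapor mole fractions implied by Eq. (13a) sum to one."""
    def residual(T):
        gamma = wilson_gamma(x, T)                   # liquid-phase activity coefficients
        y = [g * xi * p_sat(i, T) / P
             for i, (g, xi) in enumerate(zip(gamma, x))]
        return sum(y) - 1.0                          # Eq. (13b): vapor fractions sum to 1
    return brentq(residual, T_lo, T_hi)              # one nonlinear equation, one unknown
```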
4.2.4 Controller Performance
The analysis of Fernholz et al. (1999a) showed that a suitable control structure for this process is to control the concentrations of methyl acetate and water in the product stream by the reflux ratio and the heat duty of the reboiler. At our pilot plant, NIR (near infrared spectroscopy) measurements of the product concentrations are available. The nonlinear model predictive controller was tested in several simulation cases. Here the original model is used as the simulated process, whereas the simplified model is used in the controller. In order to explore the benefits of the nonlinear controller, a linear controller was designed as well (Engell and Fernholz 2003). The linear controller was chosen based on an averaged linear model calculated from several linear models which were obtained by linearization of the nonlinear model at several points on the optimal trajectory. The controller design was done using the frequency response approximation technique (Engell and Muller 1993). The details of the linear controller design are beyond the scope of this book; they can be found in Fernholz et al. (1999b). The parameters of the cost function in Eq. (1) were chosen such that deviations of both controlled variables give the same contribution to the objective function. Additional bounds on the manipulated variables were added. The reflux ratio is physically bounded by the values 0 and 1, while the heat duty is bounded to a lower value of 1 kW and an upper one of 8 kW to ensure proper operation of the column. In order to avoid undesired abrupt changes of the manipulated variables, small penalties on these changes were added. The values of the penalty parameters λ_ii were selected in a way that large changes are possible for large deviations of the controlled variables but are unfavorable if they are close to their set-points (the value of all λ_ii is 0.01, while γ is set to one; the physical units of the controlled variables are mole mole⁻¹, the reflux ratio is dimensionless, and the heat duty is given in kilowatts). Preliminary work on the model predictive control of this process had shown that the choice of a control horizon of H_c = 2 and of a prediction horizon of H_p = 5 gave good results. The closed-loop responses for a set-point change of the methyl acetate concentration from a mole fraction of 0.8 down to 0.6 and back to 0.8 are shown in Figure 4.3. The use of the nonlinear controller reduces the time required to decrease the methyl acetate concentration drastically. The price that has to be paid for this reduction is a larger deviation of the water concentration. For the set-point change back to the original value, the differences between the two controllers are small. Next, the performance of both controllers was checked for set-points of methyl acetate and water which force the process into a region where a sign change of the static gain occurs. If the set-points of the mole fractions of methyl acetate and water are
Figure 4.3 Methyl acetate set-point tracking. Black: nonlinear controller; grey: linear controller
changed simultaneously to 0.97 and 0.02 respectively, both controllers drive the process in the correct direction (Figure 4.4), but only the nonlinear controller is able to track both concentrations accurately. If the set-points are set back to their original values, the linear controller becomes unstable while the nonlinear controller works properly.
Figure 4.4 Set-point tracking in a region of a sign change of the static gain. Black: nonlinear controller; grey: linear controller
4.2.4.1 Disturbance Rejection
The main goal of the controller is to track the optimal trajectory of the process in the case of disturbances and plant-model mismatch. In the case of an accurate model and the absence of disturbances, no feedback controller would be necessary. Thus, two disturbances are imposed on the process during the simulation to test the disturbance rejection capabilities of the controllers. First the influence of disturbances of the heat supply is considered. After 200 min the heat supply is decreased by 0.7 kW (which is about 20 % of the nominal value), set back to its nominal value at t = 300 min and increased by 0.7 kW at t = 550 min until it is again reset to the nominal value at t = 700 min. The simulation results for both controllers are depicted in Figure 4.5. The nonlinear controller rejects the disturbance much faster than the linear controller, especially for the product methyl acetate. The second disturbance investigated is a failure of the heating system of the column. In order to minimize heat losses across the column surface, the plant is equipped with a supplementary heating system. A malfunction of this system will change the heat loss across the surface. The heat loss is increased by 50 W per stage, set back to 0 and decreased by 50 W per stage at the same times at which the disturbances of the heat duty were imposed before. The simulation results in Figure 4.6 show that the nonlinear controller rejects this disturbance more efficiently than the linear controller. Thus, the disturbance rejection can be significantly improved by using the nonlinear predictive controller.
Figure 4.5 Rejection of a disturbance in the heat supply. Black: nonlinear controller; grey: linear controller; dashed: optimal trajectory
Figure 4.6 Rejection of a disturbance in the heat loss. Black: nonlinear controller; grey: linear controller; dashed: optimal trajectory
4.2.5 Summary
In this section, we presented the principle of NMPC to track a precomputed trajectory of a complex process and discussed the application to a semibatch reactive distillation column. Neglecting the dynamics of the molar hold-ups and of the enthalpies enabled splitting up the original DAE system into a small DAE part, which is treated by the numerical simulator, and an algebraic part, which is solved by an external algorithm. This approach reduced the time needed to solve the model equations by a factor of 6-10. These reductions made the use of a process model that is based on heat and mass balances possible for a model predictive controller. The resulting nonlinear controller showed superior set-point tracking properties compared to a carefully designed linear controller. The nonlinear controller is able to track set-points that lie in regions where the process shows sign changes in the static gains and any linear controller becomes unstable. The nonlinear controller rejects disturbances faster than the linear controller. Moreover, since the nonlinear controller makes use of a model the range of validity of which is not restricted to a fixed operating region, in contrast to the linear one, the nonlinear controller might be used for different trajectories, giving more overall flexibility. The superior performance of the controller is due to the fact that a nonlinear process model is used. On the other hand, its stability and performance depend on the accuracy of the rigorous process model. If, e.g., the change of the gain of the process (which is caused by the fact that the product purity is maximized for certain values of reflux and heat duty) occurs for different values of the reflux and the heat duty than predicted by the model, the controller may fail to stabilize the process.
4.3 Control of Batch Chromatography Using Online Model-based Optimization
4.3.1 Principle and Optimal Operation of Batch Chromatography
The chromatographic separation is based on the different adsorptivities of the components to a specific adsorbent which is fixed in a chromatographic column. The most widespread process, batch chromatography, involves a single column which is charged periodically with pulses of the feed solution. These feed injections are carried through the column by pure desorbent. Owing to different adsorption affinities, the components in the mixture migrate at different velocities and therefore they are gradually separated. At the outlet of the column, the purified components are collected between cutting points, the locations of which are decided by the purity requirements on the products (Figure 4.7). For a chromatographic batch process with given design parameters (combination of packing and desorbent, column dimensions, maximum pump pressure), the determination of the optimal operating regime can be posed as follows: a given amount (or flow) of raw material has to be separated into the desired components at minimal cost while respecting constraints on the purities of the products. The operation cost may involve the investment into the plant and the packing, labor and solvent cost, the value of lost material (valuable product in the nonproduct fractions) and the cost of the further processing, e.g., removal of the solvent. The free operating parameters are:
- the throughput of solvent and feed material, represented by the flow rate Q or the interstitial velocity u, constrained to the maximum allowed throughput, which in turn is limited by the efficiency of the adsorbent or the pressure drop;
- the injection period t_inj, representing the duration of the feed injection as a measure of the size of the feed charge;
- the cycle period t_cyc, representing the duration from the beginning of one feed injection to the beginning of the next;
- the fractionating times.
The mathematical modeling of single chromatographic columns has been extensively described in the literature by several authors, and is in most cases based on differential mass balances (Guiochon 2002). The modeling approaches can be classified by the physical phenomena they include and thus by their level of complexity. Details
Figure 4.7 Principle of batch chromatography
on models and solution approaches can be found, e.g., in Dünnebier and Klatt (2000). The most general one-dimensional model (ignoring radial inhomogeneities) is the general rate model (GRM)
where also reaction terms in the liquid and in the solid phase were included. These two partial differential equations describe the concentrations in the mobile phase (c_{b,i}) and in the stationary phase (q_i and c_{p,i}). The adsorption isotherms relate the concentrations q_i (substance i adsorbed by the solid) and c_{p,i} (substance i in the stationary liquid phase). A commonly utilized isotherm functional form is the bi-Langmuir isotherm:
q_i = \frac{a_{1,i}\, c_{p,i}}{1 + \sum_j b_{1,j}\, c_{p,j}} + \frac{a_{2,i}\, c_{p,i}}{1 + \sum_j b_{2,j}\, c_{p,j}} \qquad (16)
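A direct transcription of Eq. (16) is short; the sketch below is illustrative only, and the coefficient values in the example call are assumptions rather than fitted isotherm parameters.

```python
# Hedged sketch of the bi-Langmuir isotherm of Eq. (16), evaluated for all components.
# The coefficient vectors a1, b1, a2, b2 in the example are illustrative assumptions.
import numpy as np

def bi_langmuir(c_p, a1, b1, a2, b2):
    """Solid-phase loadings q_i for liquid-phase concentrations c_p (one entry per component)."""
    c_p = np.asarray(c_p, dtype=float)
    site1 = a1 * c_p / (1.0 + np.dot(b1, c_p))   # first adsorption site
    site2 = a2 * c_p / (1.0 + np.dot(b2, c_p))   # second adsorption site
    return site1 + site2

# Example for a binary mixture (all numbers illustrative):
q = bi_langmuir([1.2, 0.8], a1=[2.0, 1.5], b1=[0.05, 0.04], a2=[0.5, 0.4], b2=[0.3, 0.25])
```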
An efficient numerical solution for the GRM incorporating arbitrary nonlinear isotherms was proposed by Gu (1995). The mobile phase and the stationary phase are discretized using the finite element and the orthogonal collocation method. The resulting ordinary differential equation (ODE) system is solved using an ODE solver which is based on Gear's method for stiff ODEs. The numerical solution yields the concentrations of the components in the column at different locations and times. The concentration information at the outlet of the column is used to generate the chromatogram from which the production rate and the recovery yield can be computed. The requirements on the products can usually be formulated in terms of minimum purities, minimum recoveries or maximum losses. In the case of a binary separation without intermediate cuts, these constraints can be transformed into each other, so either the recovery yield or the product purity may be constrained. The production cost is determined by many factors, in particular the throughput, the solvent consumption and the cost of downstream processing. A simple objective function is the productivity Pr, i.e., the amount of product produced per amount of adsorbent. This formulation results in the following nonlinear dynamic optimization problem:
\max_{u,\, t_{inj},\, t_{cyc}} \; Pr \qquad (17)

\text{such that} \quad Rec_i \ge Rec_{\min,i}, \quad i = 1,\dots,n_{sp}; \qquad 0 \le u \le u_{\max}; \qquad 0 \le t_{inj},\, t_{cyc}
where Rec_i denotes the recovery yield of product i. This type of problem can be solved by standard optimization algorithms. In order to reduce the computation times to enable online optimization, Dünnebier et al. (2001) simplified the optimization problem and decomposed it in order to enable a more
efficient solution. They exploited the fact that the recovery constraints are always active at the optimal solution and considered them as equalities. The resulting solution algorithm consists of two stages: the iterative solution of the recovery equality constraints, and the solution of the remaining unconstrained static nonlinear problem.

4.3.2 Model-based Control with Model Adaptation
In industrial practice, chromatographic separations are usually controlled manually. However, automatic feedback control leads to a uniform process operation closer to the economic optimum, and it can include online reoptimization. Dunnebier et al. (2001) proposed the model-based online optimization strategy shown in Figure 4.8.
Figure 4.8 Control scheme for chromatographic batch separations
Essentially, this scheme performs the above optimization of the operating parameters online. To improve the model accuracy and to track changes in the plant, an online parameter estimation is performed. A similar run-to-run technique has been proposed by Nagrath et al. (2003). Note that this scheme contains feedback only in the parameter estimation path. Therefore it will lead to good results only if the model is structurally correct so that the parameter estimation leads to a highly accurate model.
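The run-to-run structure of Figure 4.8 can be summarized as a simple loop. The sketch below is a schematic view under stated assumptions: `optimize_operating_point`, `run_batch`, and `estimate_parameters` are hypothetical placeholders for the model-based optimization, the plant (or batch) run, and the parameter estimation; it is not the implementation used in the cited work.

```python
# Hedged sketch of the two-step (adapt-then-reoptimize) scheme of Figure 4.8.
# estimate_parameters, optimize_operating_point and run_batch are hypothetical
# placeholders for the parameter estimation, set-point optimization, and plant run.
def run_to_run_control(theta0, n_runs, estimate_parameters,
                       optimize_operating_point, run_batch):
    theta = theta0                                           # current model parameters
    history = []
    for _ in range(n_runs):
        operating_point = optimize_operating_point(theta)    # model-based optimization
        chromatogram = run_batch(operating_point)            # apply to plant, measure outlet
        theta = estimate_parameters(theta, chromatogram,     # adapt model to measurements
                                    operating_point)
        history.append((operating_point, theta))
    return history
```

Feedback enters only through the parameter estimation step, which is why the scheme relies on a structurally correct model.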
Figure 4.9 Product purities and operating parameters during an experimental run (set-point change for required purity at approximately 28 h from 80 to 85 %) (from Dunnebier et al., 2001)
The scheme was tested successfully at the pilot scale for a sugar separation with linear adsorption isotherm by Dünnebier et al. (2001). The product concentrations were measured using a two-detector concept as first proposed by Altenhohner et al. (1997). A densimeter was used for the measurement of the total concentration of fructose and glucose and a polarimetric detector for the determination of the total rotation angle. Both devices were installed in series at the plant outlet. Figure 4.9 shows an experimental result. First the operating parameters are modified in order to meet the product purity and recovery of 80 % each. After about 28 h, the controlled operating parameters reach a stable steady state. At this point a set-point change takes place in the product specifications: purity and recovery are now required to be 86 %. The control scheme reacts immediately, reducing the interstitial velocity and increasing the injection and cycle intervals. This leads to a better separation of the two peaks and to an increase in purity as desired. The controlled system quickly converges to a new steady state.

4.3.3 Summary
The key idea of the approach described in this section is to use model-based set-point optimization for model-based closed-loop control. Plant-model mismatch is tackled by adapting key model parameters to the available measurements so that the concentration profiles at the output which are predicted by the model match the observed ones. Experimental results showed that this approach works very well in the case of sugar separations where the model is structurally correct. The optimization algorithm was tailored to the structure of the problem so that convergence problems were avoided. Owing to the use of a tailored algorithm and the fact that the process is quite slow, computation times were not a problem.

4.4 Control by Measurement-based Online Optimization
In the two-step approach described in the previous section, the model parameters are updated by a parameter estimation procedure so that the model represents the plant at current operating conditions as accurately as possible. The updated model is used in the optimization procedure to generate a new set-point. This method works well for parametric mismatch between the model and the real plant. However, it does not guarantee an improvement of the set-point when structural errors in the model are present. In chromatographic separations, structural errors result, e.g., from the approximation of the real isotherm by the bi-Langmuir function. One important cause of plant-model mismatch can be the presence of small additional impurities in the mixture which may lead to considerable deviations of the observed concentration profiles at the output. The model-based optimization then generates a suboptimal operating point which in general does not satisfy the constraints on purity or recovery. The conventional solution is to introduce an additional control loop that
regulates the product purities, as proposed and tested by Hanisch (2002). However, the changes of the operating parameters caused by this control loop may conflict with the goal of optimizing performance.

4.4.1 The Principle of Iterative Optimization
To cope with structural plant-model mismatch, the available measurements can be used not only to update the model but also to modify the optimization problem in such a manner that the gradient of the (unknown) real process mapping is driven to zero, in contrast to satisfying the optimality conditions for the theoretical model. Such an iterative two-step method was proposed by Roberts (1979),termed integrated system optimization and parameter estimation (ISOPE). A gradient-modification term is added to the objective function of the optimization problem. ISOPE generates set-points which converge towards the true optimum despite parametric and structural model mismatch. Theoretical optimality and convergence of the method were proven by Brdys et al. (1987). From a practical point of view, the key element of ISOPE is the estimation of the gradient of the plant outputs with respect to the optimization variables. The general model-based set-point optimization problem can be stated as
\min_{u} \; J(u, y) \qquad (18)

\text{such that} \quad g(u) \le 0, \qquad u_{\min} \le u \le u_{\max}
where J(u, y) is a scalar objective function, u is a vector of optimization variables (set-points), y is a vector of output variables, and g(u) is a vector of constraint functions. The relationship between u and y is represented by a model
y = f(u, \alpha) \qquad (19)
where α is a vector of model parameters. ISOPE is an iterative algorithm, where at each step of the iteration measurement information (i.e., the plant output y*^(k), which was measured after the last set-point was applied) is used to update the model and to modify the optimization problem. The updating of the model can be realized as a parameter estimation procedure. A vector of gradient modifiers is computed using the gradient of the updated model and of the plant at set-point u^(k) (Eq. (20)).
The optimization problem of Eq. (18) is modified by adding a gradient-modification term to the objective function:
\min_{u} \; J(u, y) + \lambda^{(k)T} u \qquad (21)

\text{such that} \quad g(u) \le 0, \qquad u_{\min} \le u \le u_{\max}
Assuming that the constraint function g(u) is known, the optimization problem can be solved by any nonlinear optimization algorithm. Let \hat{u}^{(k)} denote the solution to Eq. (21); then the next set-point is chosen as

u^{(k+1)} = u^{(k)} + K \left( \hat{u}^{(k)} - u^{(k)} \right) \qquad (22)
where K is a diagonal gain matrix whose diagonal elements are in the interval [0,1], i.e., K is a damping term. Starting from an initial set-point, ISOPE will generate a sequence of set-points which, for an appropriate gain matrix, will converge to a set-point which satisfies the necessary optimality conditions of the actual plant. It can be proven that the modification term leads to the satisfaction of the optimality conditions at the true plant optimum. Tatjewski (2002) redesigned the ISOPE method, resulting in a new algorithm that does not require the parameter estimation procedure. The key idea is to introduce a model shift term in the modified objective function (Eq. (23)),
with the modifier defined in Eq. (24).
Although the parameter α is not updated, it can be proven that the optimality conditions are satisfied. Parameter adaptation thus is no longer necessary, although it may be beneficial to the convergence of the procedure. As the optimality of the result is solely due to the gradient modification in the optimization problem, the redesigned algorithm could be termed iterative gradient-modification optimization.
4.4.2 Handling of Constraints
If constraint functions depend on the behavior of the real plant, they cannot be assumed to be precisely known, and using a model for the computation of the constraint functions will not assure that the constraints are actually satisfied. In the original derivation of the ISOPE method, constraints were assumed to be process-independent. An extension of the ISOPE strategy which considers process-dependent constraints can be found in Brdys et al. (1986). In this formulation, a recursive Lagrange multiplier is used. Tatjewski et al. (2001) also proposed using a follow-up constraint controller that is responsible for satisfying the output constraints. A different method to handle the process-dependent constraints was proposed in Gao and Engell (2005). It is based on the idea of using plant information acquired at
the last set-point, g(u^(k)), to modify the model-based constraint functions g(u) at the current iteration. The modified constraint functions approximate the true constraint functions of the plant in the vicinity of the last set-point. The modified constraint function is formulated as

\tilde{g}^{(k)}(u) = g(u) + \left[ g^{p}(u^{(k)}) - g(u^{(k)}) \right] + \left[ \frac{\partial g^{p}}{\partial u}(u^{(k)}) - \frac{\partial g}{\partial u}(u^{(k)}) \right] \left( u - u^{(k)} \right) \qquad (25)

where g^p denotes the constraint function of the real plant.
The modified constraint function has the following properties at u^(k):
- The modified constraint has the same value as the real constraint function: \tilde{g}^{(k)}(u^{(k)}) = g^{p}(u^{(k)}).
- The modified constraint has the same first-order derivative as the real constraint function: \partial \tilde{g}^{(k)}/\partial u\,(u^{(k)}) = \partial g^{p}/\partial u\,(u^{(k)}).
4.4.3 Estimation of the Gradient of the Plant Mapping
A key element of the iterative gradient-modificationoptimization method is to estimate the gradient of the plant mapping. Several methods for this have been proposed during the last 20 years. These methods can be grouped into two categories according to whether set-point perturbations are used or not. Early versions of the ISOPE technique used finite difference techniques to obtain the plant gradient by applying perturbations to the current set-point. Later versions used dynamic perturbations and linear system identification methods to estimate the gradient (Lin et al. 1989, Zhang and Roberts 1990). Both methods have the disadvantage of requiring additional perturbations. In Roberts (ZOOO), Broyden's formula was used to estimate the required gradient from current and past measurement information. The Broyden estimate is updated at each iteration using a formula of the form:
where D is the estimate of aY(X)/aX, and the superscript k refers to the iteration index. Although no additional perturbation is needed, care must be taken to avoid ill-conditioning as AX(') -+ 0. It should also be noted that the updating formula requires to be initialized with D(O). BrdyS and Tajewski (1994) proposed a different way of implementing a finite difference approximation of the gradient without additional set-point perturbations. This method uses set-points in past iterations instead
I
559
560
I
4 Model-based Control
of additional set-point perturbations. The gradient at set-point as:
is approximated
where rn is the dimension of the vector u. Theoretically, the smaller the difference between the set-points, the more accurate will the approximation of the gradient be. On the other hand, because the measurements of the plant outputs y9c(k-i),i = 0,1, ..., rn are usually corrupted by errors, the matrix S ( k ) should be sufficiently wellconditioned to obtain a good approximation of the gradient. Let
denote the conditioning of S(k)in terms of its singular values. If is too small, the errors in the measurements will be amplified considerably and the gradient estimation will be corrupted by noise. In BrdyS and Tajewski (1994)the optimization problem is reformulated to take into account future requirements of the gradient estimation. An inequality constraint
(where 0 < 6 < 1)is added to the optimization problem at the (k - 1)" iteration so that the set-point u(*)will give a good approximation of the gradient. The advantage of this method is that no additional set-point perturbations are needed, but a loss of optimality will be observed at the current iteration because the inequality constraint reduces the feasible set of set-points. Therefore more iterations are required to attain the optimum, especially for a bigger values of 6. A novel method was proposed for the gradient estimation in Gao and Engell (2005). It follows the same idea as Brdys's method, i.e., using the past set-points in the finite difference approximation of the gradient. But the conditioning of S(k) is included not as a constraint in the optimization problem, but as an indicator to decide whether an additional set-point perturbation should be added. At the (k - 1)" iteration, after a new set-point dk)is acquired, is computed using {u(*), d-'), ..., u ( ~ - ~If) it} .is less than the given constant 6, an additional set-point uLk)will be added to formulate a new set-point set {uQ,uLk),u+'), ..., u ( ~ - ~}- for ' ) the gradient approximation. The gradient at dk) is approximated by:
4.4 Control by Measurement-based Online Optimization
with
The additional set-point provides an additional perturbation around the current setpoint. Its location is optimized by solving
such that
g(k-l)(uhk))5 0 U(k-l) Umin
(32)
uik)
5 < U(k-l) + au 5 ULk) 5 U m a . -
Therefore, by introducing the additional set-point, SLk’ is kept well-conditioned and the optimal set-point dk) can be used in the gradient approximation. This method does not compromise optimality, and it is not as expensive as finite difference techniques with set-point perturbations in each iteration, because an additional set-point < d. perturbation is added only when dk) The procedure can be summarized as follows: 1. Select starting set-points which include the initial set-point and m other set-points for the gradient estimation at the initial set-point. Initialize the parameters of the algorithm, i.e., K, S and Au. 2. At the kth iteration, apply set-point dk)(and uAk)if needed) to the plant. Measure the steady-stateoutputs. 3. Approximate the gradient using the proposed method. Modify the objective function and the constraint functions in the optimization problem and add the additional bound. 4. Solve the modified optimization problem Eqs. (23)-(26) using any nonlinear optimization algorithm and generate the next set-point. 5. Check the termination criterion IIdk+’) - uIk)11 < E and decide whether to continue or to stop the optimization procedure. 6. If the termination criterion is not satisfied, check the conditioning of S(k)in terms of its singular values
and if dk)2 6 return to step 2, otherwise go to step 7. 7. Add an additional set-point by solving the optimization problem Eq. (32), then return to step 2.
I
561
562
I
4 Model-based Control
4.4.4 Application to a Batch Chromatographic Separation with Nonlinear Isotherm
The iterative gradient-modificationoptimization method was tested in a simulation study of a batch chromatographic separation of enantiomers with highly nonlinear adsorption isotherms that had been used as a test case in laboratory experiments before (Hanisch 2002). A model with a bi-Langmuir isotherm that was fitted to measurement data is considered as the “real plant” in the simulation study. A model with isotherms of a different form is used in the set-point optimization. The flow rate Qand the injection period tiniare considered as the manipulated variables here. The cycle period t, is fwed to the duration of the chromatogram. The performance criterion is the production rate Pr: Pr = -roduct/tcyc
(33)
The recovery yield Rec is constrained to a minimal value. This results in the optimization problem
such that
Rec(Q, Gnj) ? Rec,i, OSQSQmax tinj 2 0
(34)
Figure 4.10 shows the chromatograms of the “real” and the perturbed model for the same set-point. Note that such differences can be generated by rather small errors in the adsorption isotherms. The second component is considered to be the valuable product. The purity requirement is 98%. The recovery yield should be greater than 80%. There is an upper limit of the flow rate of 2.06 cm3SK’. The flow rate and the injection period are normalized to the interval [0,1]in the optimization. The gain coefficients in K are set to 1. The bound A u is [.06 .06]’. The recovery constraint was handled by the method Chromatograms
0
500 Time Is1
,ooo
Figure 4.10 illustration of the influence of model mismatch on the chromatogram. Solid line: “real” model; dashed line: nominal optimization model
4.4 Control by Measurement-based Online Optimization
proposed above. The iterations were stopped when the calculated set-point change was less than a predefined tolerance value (ε = 0.006) or the optimization algorithm did not terminate successfully. Different gradient estimation methods were used in the iterative optimization procedure:
- the finite difference method, i.e., applying perturbations to each set-point (FDP);
- Brdys's method, where an additional constraint is added to the optimization problem so that the next set-point can be used in the estimation of the gradient, without additional perturbations;
- the finite difference method with additional set-point perturbations when necessary (FDPN).
Several runs of the set-point optimization were simulated, first without measurement errors and then with errors. In the case without errors, the optimization procedures with FDP and FDPN terminated successfully, while the optimization procedure with Brdys's method stopped early because the optimization algorithm could not find a feasible point in the given number of iterations. The optimization procedure with FDPN used one iteration more than the optimization procedure with FDP, but it used only six additional set-points (δ = 0.2). The optimization procedure with FDP perturbed the set-point eight times at each iteration to estimate the gradient so that it generated 80 additional set-points overall. The trajectories of the production rate Pr and of the recovery yield Rec are depicted in Figure 4.11. The recovery constraint was met by all three optimization procedures. Figure 4.12 shows the set-point trajectories and the production rate and recovery contours of the real model and the optimization model. Although a considerable mismatch exists between the real model and the optimization model, the iterative gradient-modification optimization method generates set-points which converge to the real optimum.
Figure 4.11 Trajectories of production rate and recovery yield, simulations without errors. * set-points using the finite difference method (FDP), Δ set-points using Brdys's method, o set-points using the finite difference method with additional set-point perturbations when necessary (FDPN). Recovery limit: 80%, δ = 0.2
Figure 4.12 Illustration of set-point trajectories. Solid lines: contours of the "real" model; dotted lines: contours of the nominal optimization model. * set-points using the FDP method, Δ set-points using Brdyś's method, o set-points using the FDPN method; u(0) initial set-point, u(1) and u(2) additional initial set-points for the gradient estimation at u(0)
Table 4.1 shows the results of simulations with measurement errors. Different values of δ were tried and all simulations were stopped at the optimum of the "real" model. With increasing δ, more additional set-points were used, which improved the accuracy of the gradient estimations. Therefore, fewer iterations were needed to arrive at the optimum. Considering the total number of set-points used, δ = 0.1 gives a good result.

Table 4.1 Optimization results of the simulations with errors

δ       Number of iterations    Additional set-points    Final set-point
0.2     13                      7                        (2.06, 99.73)
0.1     13                      6                        (2.06, 99.79)
0.05    15                      5                        (2.06, 99.53)
0.01    22                      4                        (2.06, 99.10)

Optimum of the "real" model: (2.06, 99.35)
4.4.5 Summary
The identification of an accurate model requires considerable effort, especially for chemical and biochemical processes. In practice, inaccurate models must be used for online control and optimization. A purely model-based optimization will generate a
suboptimal or even infeasible set-point. We described a modified iterative gradient-modification optimization strategy that converges to the real optimum in a few steps while respecting the constraints. A few additional set-points are introduced to reduce the effect of measurement errors on the gradient approximation. The example of a batch chromatographic separation with highly nonlinear isotherms demonstrated the impressive improvements that can be obtained by this approach.
4.5 Nonlinear Model-based Control of a Reactive Simulated Moving Bed (SMB) Process

4.5.1 Principle and Optimization of Chromatographic SMB Separations
Batch chromatography has the usual drawbacks of a batch operation, and leads to highly diluted products. On the other hand, it is extremely flexible, several components may be recovered from a mixture during one operation and varying compositions of the desorbent can be used to enhance separation efficiency. The idea of a continuous operation with countercurrent movement of the solid led to the development of the simulated moving bed (SMB) process (Broughton 1966). It is gaining increasing attention due to its advantages in terms of productivity and eluent consumption (Guest 1997, Juza et al. 2000). A simplified description of the process is given in Figure 4.13. It consists of several chromatographic columns connected in series which constitute a closed loop. A countercurrent motion of the solid phase relative to the liquid phase is simulated by periodically and simultaneously moving the inlet and outlet lines by one column in the direction of the liquid flow. After a start-up phase, SMB processes reach a cyclic steady state (CSS). Figure 4.13 shows the CSS of a binary separation along the columns plotted for a fixed time instant within a switching period. At every axial position, the concentrations vary as
Figure 4.13 Simulated moving-bed process. At the top, the concentration profiles at the cyclic steady state are shown. Pure A is withdrawn at the extract port and pure B is withdrawn at the raffinate port
a function of time, and the values reached at the end of each switching period are equal to those before the switching, relative to the port positions. In order to exploit the full potential of SMB processes, recent research has focused on the design of the process, in particular the choice of the operation parameters for a given selection of adsorbent, solvent and column dimensions, using mathematical optimization. As the optimum should be determined precisely while meeting all constraints, rigorous models which include the discrete dynamics are used (Klatt et al. 2000, Zhang et al. 2003). In addition to a higher reliability compared to shortcut methods, this approach is applicable to a broad variety of SMB-like operating regimes. The optimization problem can be stated as (Toumi et al. 2004c):
where Pur_Ex and Pur_Raf denote the purities at the extract and the raffinate ports, the plant model summarizes the dynamics of the process from one switching period to the next, including the shifting of the ports, and c denotes the axial concentration profile along the columns. The goal is to operate the process at the optimal CSS with minimal separation costs Cost_spec while the purity requirements at both product outlets are fulfilled. This is a complex dynamic optimization problem, the solution of which critically depends on an efficient and reliable computation of the CSS, defined by the condition that the axial concentration profile at the end of a switching period equals, shifted by one column, the profile at the beginning of that period.
The free optimization variables are the flow rates in the sections Q and the switching period t. They are transformed to the so-called β-factors, which represent the ratio between the flow rates Q and the hypothetical solid flow rate. This nonlinear transformation leads to a better conditioned optimization problem (Dünnebier et al. 2001). An additional constraint takes the maximum pressure drop into account. The main difficulty of the optimization problem results from the large dimension of the CSS equations when a first-principles plant model is used. A simple and robust optimization approach consists of integration of the model equations starting from initial values until the CSS is reached (sequential approach). At the CSS, the objective function as well as the constraints are evaluated and returned to an optimizer. This yields a small number of free parameters and hence a relatively simple optimization problem. The number of cycles required to reach a CSS usually is not too large (about 100), in contrast to other periodic processes like pressure swing adsorption where 1000 or more periods have to be simulated. The computational effort is therefore reasonable.
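A minimal sketch of this sequential CSS computation is given below; `simulate_period` stands in for the rigorous column model integrated over one switching period (including the port shift), and the simple contraction used here is only a placeholder so that the fixed-point iteration terminates. Zone flow rates, grid size and tolerance are illustrative assumptions.

```python
import numpy as np

def simulate_period(profile, zone_flow_rates):
    # placeholder dynamics: contraction toward a flow-dependent profile
    target = zone_flow_rates[:, None] * np.ones(profile.shape[1])
    return 0.9 * profile + 0.1 * target

def cyclic_steady_state(profile0, zone_flow_rates, tol=1e-8, max_periods=500):
    """Integrate period by period until the profile repeats (CSS)."""
    profile = profile0
    for period in range(1, max_periods + 1):
        new_profile = simulate_period(profile, zone_flow_rates)
        if np.max(np.abs(new_profile - profile)) < tol:   # CSS reached
            return new_profile, period
        profile = new_profile
    return profile, max_periods

css, n_periods = cyclic_steady_state(np.zeros((4, 50)), np.array([1.2, 0.9, 1.1, 0.8]))
print("CSS reached after", n_periods, "periods")
```

The objective function and the constraints would then be evaluated on the converged profile and returned to the outer optimizer, as described above.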
4.5.2 Model-based Control

Klatt et al. (2002) proposed a two-layer control architecture similar to the one used for batch chromatography, where the optimal operating trajectory is calculated at a low sampling rate by dynamic optimization based on a rigorous process model. The model parameters are adapted based on online measurements. The low-level control task is to keep the process on the optimal trajectory despite disturbances and plant/model mismatch. The controller is based on identified models gained from simulation data of the rigorous process model along the optimal trajectory. For the linear adsorption isotherm case, linear models are sufficient (Klatt et al. 2002), whereas in the nonlinear case neural networks (NN) were applied successfully (Wang et al. 2003). A disadvantage of this two-layer concept is that the stabilized front positions do not guarantee the product purities if plant-model mismatch occurs. Thus an additional purity controller is required. Toumi and Engell (2004a) recently presented a nonlinear model-predictive control scheme and applied it to a three-zone reactive SMB (RSMB) process for glucose isomerization (Toumi and Engell 2004b, 2005). The key feature of this approach is that the production cost is minimized online, while the product purities are considered as constraints; thus real online optimization is performed, not trajectory tracking. The following optimal control problem is formulated over the finite control horizon H_c:
such that
The prediction horizon is discretized in cycles, where a cycle is a switching time t(k) multiplied by the total number of columns. Equation (37) constitutes a dynamic optimization problem with the transient behavior of the process as a constraint. The objective function is the sum of the costs incurred for each cycle (e.g., desorbent consumption) and a regularizing term added in order to smooth the input sequence and to avoid high fluctuations of the inputs from cycle to cycle. The first equality constraint represents the plant model evaluated over the finite prediction horizon H_p. The switching dynamics are introduced via the permutation matrix P. Since the maximal attainable pressure drop of the pumps must not be exceeded, constraints are imposed on the flow rates in zone I. Further inequality constraints g(β_i) are added in order to avoid negative flow rates during the optimization.
The control objective is reflected by the purity constraint over the control horizon H_c, which is corrected by a bias term ΔPur_Ex resulting from the difference between the last simulated and the last measured process output in order to compensate unmodeled effects:

ΔPur_Ex,k = Pur_Ex,(k−1) − Pur_Ex,(k−1),meas        (38)
A second purity constraint over the whole prediction horizon acts similarly to a terminal constraint, forcing the process to converge towards the optimal CSS. It should be pointed out that the control goal (i.e., to fulfil the extract purity) is introduced as a constraint. A feasible path SQP algorithm is used for the optimization (Zhou et al. 1997), which generates a feasible point before it starts to minimize the objective function.
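The following sketch illustrates the structure of one such NMPC step with a bias-corrected purity constraint in the spirit of Eq. (38); `predict_purity`, `cycle_cost`, the flow-rate factors and all numerical values are hypothetical stand-ins, and a general-purpose SQP solver from SciPy replaces the feasible path SQP code mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

def predict_purity(beta):
    # simulated extract purity over the horizon for flow-rate factors beta (placeholder)
    return 0.50 + 0.10 * beta[0] - 0.05 * beta[1]

def cycle_cost(beta):
    # cost per cycle, e.g., desorbent consumption (placeholder)
    return 2.0 * beta[0] + 1.0 * beta[1]

def nmpc_step(beta_prev, purity_meas, purity_target=0.55, reg=0.02):
    bias = predict_purity(beta_prev) - purity_meas          # Eq. (38)-type bias correction
    objective = lambda b: cycle_cost(b) + reg * np.sum((b - beta_prev) ** 2)
    purity_con = {"type": "ineq",
                  "fun": lambda b: predict_purity(b) - bias - purity_target}
    res = minimize(objective, beta_prev, bounds=[(0.1, 2.0), (0.1, 2.0)],
                   constraints=[purity_con], method="SLSQP")
    return res.x

beta_next = nmpc_step(np.array([1.0, 1.0]), purity_meas=0.53)
print("next flow-rate factors:", beta_next)
```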
4.5.3 Online Parameter Adaptation

The concentration profiles in the recycling line are measured and collected during a cycle. Since this measurement point is fixed in the closed-loop arrangement, the sampled signal includes information on all zones. During the start-up phase, an online estimation of the actual model parameters is started in every cycle. The quadratic cost functional J_est(p):
is minimized with respect to the parameters p. For this purpose, the least squares solver E04UNF from the NAG library is used. A by-product of the parameter estimation is the actual value of the state vector, which is given back to the NMPC controller.
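A hedged sketch of such a per-cycle re-estimation is shown below, using SciPy's least_squares in place of the NAG routine E04UNF; the model function, the parameters (a Henry coefficient, a mass-transfer coefficient and a reaction rate) and the synthetic measurement data are purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 600.0, 120)      # sampling times within one cycle (illustrative)

def predict_profile(params, t):
    """Hypothetical stand-in for the rigorous model of the recycle-line concentration."""
    henry, k_mass, k_react = params
    return henry * np.exp(-k_mass * t / 600.0) + k_react * np.sin(2 * np.pi * t / 600.0)

def residuals(params, t, c_meas):
    return predict_profile(params, t) - c_meas

p_true = np.array([1.3, 0.8, 0.05])
c_meas = predict_profile(p_true, t) + 0.01 * np.random.default_rng(0).normal(size=t.size)

p0 = np.array([1.0, 1.0, 0.1])        # estimate carried over from the previous cycle
fit = least_squares(residuals, p0, args=(t, c_meas), bounds=(0.0, np.inf))
print("estimated parameters:", fit.x)
```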
4.5.4 Simulation Study

Figure 4.14 shows a simulation scenario where the desired extract purity was set to 70% at the beginning of the experiment. The desired extract purity was then changed to 60% at cycle 60. At cycle 120, the desired extract purity was increased to 65%. The enzyme activity and hence the reaction rate is assumed to decay exponentially during the experiment. A fast response of the controller in both directions can be observed. Compared to the uncontrolled case, the controller can control the product purity and compensate the drift in the enzyme activity. The evolution of the optimizer iterations is plotted as a dashed line and shows that a feasible solution is found rapidly and that the concept can be realized in real time. In this example, the control horizon was set to two cycles and the prediction horizon to ten cycles. A diagonal matrix R = 0.02 I(3,3) was chosen for regularization.
Figure 4.14 Control scenario H_c = 2, H_p = 10 (plotted over the period number)
Figure 4.15 shows the result of the parameter estimation. A good fit was achieved and the estimated parameter follows the drift of the reaction rate adequately.
4.5.5 Experimental Study
A sensitivity analysis showed that the process is highly sensitive to the values of the Henry coefficients, the mass transfer resistances and the reaction rate. These are therefore key parameters of the reactive SMB process. These parameters are reestimated online at every cycle (a cycle is equal to switching time multiplied by the number of columns). In Figure 4.16, the concentration profiles collected in the recycling line are compared to the simulated ones. At the end of the experiment all system parameters have converged towards stationary values as shown by Figure 4.17. The developed mathematical model describes the behavior of the RSMB process well. The formulation of the optimization problem (37) was slightly modified for the experimental investigation. The sampling time of the controller was reduced to one switching period instead of one cycle, so that the controller reacts faster. The switching time was still used as a controlled variable, but modified only from cycle to
cycle.

Figure 4.15 Estimation of the reaction rate (plotted over the cycle number)
This is due to the asymmetry of the RSMB process that results from the dead volume of the recycling pump in the closed loop. It disturbs the overall performance of the process and is corrected by adding a delay for the switching of the inlet/outlet line passing the recycling pump. A detailed description of this method is provided in the patent (Hotier 1996). Therefore the shift of the valves is not synchronous, in order to compensate for this technical imperfection of the real system and to get closer to the ideal symmetrical SMB system. In order to avoid port overlapping, the switching time must be held constant during a cycle. In the real process, the enzyme concentration changes from column to column. The geometrical lengths of the columns also differ slightly. Moreover, the temperature is not constant over the columns due to the inevitable gradient of the closed heating circuit. These problems cause a fluctuation of the concentration profiles at the product outlet. Even at the CSS, the product purity changes from period to period. Using the bias term given by Eq. (38) causes large variations of the controlled inputs from period to period. This effect was damped by using the minimal value over the last cycle:
Figure 4.16 Comparison of experimental and simulated concentration profiles collected at the recycle line
ΔPur_Ex,k = min_{j = k−1, ..., k−1−N_col} (Pur_Ex,(k−1) − Pur_Ex,j,meas)

The desired purity for the experiment reported below was 55.0% and the controller was started at the 60th period. As in the simulation study, a diagonal matrix R = 0.02 I(3,3) was chosen for regularization. The control horizon was set to H_c = 1 and the prediction horizon to H_p = 60 periods. Figure 4.18 shows the evolution of the product purity as well as of the controlled variables. In the open-loop mode, where the operating point was calculated based on the initial model, the purity requirement was violated at periods 48 and 54. After a cycle the controller was able to drive the purity above 55.0% and to keep it there. The controller first reduces the desorbent consumption. This action seems to be in contradiction to the intuitive idea that more desorbent injection should enhance the separation. In the presence of a reaction this is not true, as shown by this experiment. The controlled variables converge towards a steady state, but they still change from period to period, due to the nonideality of the plant.
Figure 4.17 Online estimation of the model parameters (plotted over the cycle number)
4.5.6 Summary
Closed-loop control of SMB processes is a challenging task because of the complex dynamics of the process and the large order of the discretized model. By formulating the control task as an online optimization problem on a receding horizon, the process can be operated at an optimal operating point while meeting constraints on the product purities. The feasibility of the approach has been demonstrated on a real pilot-scale plant using an industrial PLC-based control system.

4.6 Conclusions
In this chapter, it was demonstrated by means of several examples how rigorous, first-principles-based models can be used in process control. In the reactive distillation case study, an NMPC scheme was presented that is based upon a slightly simplified rigorous process model. For reasons of computational efficiency, the solution of the algebraic equations was separated from the solution of the balance equations, resulting in a performance gain of about a factor of ten.
Figure 4.18 Control experiment for a target purity of 55.0% (plotted over the cycle number)
The NMPC controller not only gave a much better performance than the linear controller, but it could also control the process in a region where the gains change their signs, so that a linear controller inevitably fails. In related work, we used a neural net approximation of the rigorous process model in an NMPC controller, giving a slightly inferior performance with a much reduced computational effort (Engell and Fernholz 2003). Online optimization using measurement information is in many cases an attractive alternative to the tracking of precomputed references, because the process can be operated much closer to its real optimum while still meeting hard bounds on the specifications. The measurement information can be used in the control scheme in various ways. The weakest form of feedback is to use the measurements for parameter adaptation only, which requires a structurally correct model. In the control of the SMB process, this was combined with updating a disturbance model so that the desired product purities were maintained even for plant-model mismatch. Measurement information can also be used to modify the gradients in the optimization problem, ensuring convergence to the true optimum even in the case of structural model mismatch. The biggest obstacle to the widespread use of model-based control is the effort needed to obtain faithful dynamic models of complex processes. While it has become
routine to base process design on rigorous stationary process models, the effort to develop dynamic models is usually avoided. Process designers tend to neglect dynamic effects and to believe that control will somehow deal with them. As shown for the reactive distillation example, however, standard methods may fail, especially if a process is run at an optimal point, because near such an operating point some variables will exhibit a change of the sign of the gain unless the optimum is only defined by constraints. A combination of first-principles-based and black-box models, the parameters of which are estimated from operational data, may be a way to obtain sufficiently accurate models without excessive effort. In combination with this approach, the application of optimization techniques which take model mismatch explicitly into account, as presented in Section 4.4, is very promising.

References
1 Agreda V. H., Partin L. R., Heise W. H.: High purity methyl acetate via reactive distillation, Chem. Eng. Prog. 86 (1990) p. 40-46
2 Altenhöner U., Meurer M., Strube J., Schmidt-Traub H.: Parameter estimation for the simulation of liquid chromatography, J. Chromatogr. A 769 (1997) p. 59-69
3 Brdyś M., Chen S., Roberts P. D.: An extension to the modified two-step algorithm for steady-state system optimization and parameter estimation, Int. J. Syst. Sci. 17 (1986) p. 1229-1243
4 Brdyś M., Ellis J. E., Roberts P. D.: Augmented integrated system optimization and parameter estimation technique: derivation, optimality and convergence, IEEE Proc. 134 (1987) p. 201-209
5 Brdyś M., Tatjewski P.: An algorithm for steady-state optimizing dual control of uncertain plants, Proceedings of the 1st IFAC Workshop on New Trends in Design of Control Systems, Slovakia (1994) pp. 249-254
6 Broughton D. (1966): Continuous simulated counter-current sorption process employing desorbent made in said process. US Patent 3,291,726
7 Byrd R. H., Lu P., Nocedal J., Zhu C. (1994): A limited memory algorithm for bound constrained optimization, Technical Report NAM-08, Department of Electrical Engineering and Computer Science, Northwestern University, USA
8 Draeger A., Ranke H., Engell S.: Model predictive control using neural networks, IEEE Control Syst. Mag. 15 (5) (1995) p. 61-66
9 Dünnebier G., Engell S., Epping A., Hanisch F., Jupke A., Klatt K.-U., Schmidt-Traub H.: Model-based control of batch chromatography, AIChE J. 47 (2001) p. 2493-2502
10 Dünnebier G., Klatt K.-U.: Modelling and simulation of nonlinear chromatographic separation processes: a comparison of different modelling approaches, Chem. Eng. Sci. 55 (2000) p. 373-380
11 Engell S., Fernholz G.: Control of a reactive separation process, Chem. Eng. Process. 42 (2003) p. 201-210
12 Engell S., Müller R.: Multivariable controller design by frequency response approximation, Proceedings of the 2nd European Control Conference ECC2, Groningen (1993) pp. 1715-1720
13 Fernholz G., Engell S., Fougner K. (1999a): Dynamics and control of a semibatch reactive distillation process, Proc. 2nd European Congress on Chemical Engineering (CD-ROM), Montpellier
14 Fernholz G., Engell S., Kreul L.-U., Górak A. (2000): Optimal operation of a semibatch reactive distillation column, Proc. 7th Int. Symposium on Process Systems Engineering, Keystone, Colorado. In: Computers & Chemical Engineering 24, 1569-1575
15 Fernholz G., Wang W., Engell S., Fougner K., Bredehöft J.-P.: Operation and control of a semi-batch reactive distillation column, Proceedings of the 1999 IEEE CCA, Kohala Coast, Hawaii, August 22-27 (1999b) pp. 397-402, IEEE Press
16 Gao W., Engell S. (2005): Iterative set-point optimization of batch chromatography, Computers and Chemical Engineering 29, 1401-1410
17 gPROMS User's Guide (1997): Process Systems Enterprise, London, United Kingdom
18 Gu T. (1995): Mathematical Modelling and Scale Up of Liquid Chromatography, Springer, New York
19 Guest D. W.: Evaluation of simulated moving bed chromatography for pharmaceutical process development, J. Chromatogr. A 760 (1997) p. 159-162
20 Guiochon G.: Preparative liquid chromatography, J. Chromatogr. A 965 (2002) p. 129-161
21 Hanisch F. (2002): Prozessführung präparativer Chromatographieverfahren (Operation of Preparative Chromatographic Processes), Dr.-Ing. dissertation, University of Dortmund, and Shaker Verlag, Aachen (in German)
22 Hotier G., Nicoud R. M. (1996): Chromatographic simulated mobile bed separation process with dead volume correction using period desynchronization. US Patent 5,578,215
23 Juza M., Mazzotti M., Morbidelli M.: Simulated moving-bed chromatography and its application to chirotechnology, Trends Biotechnol. 18 (2000) p. 108-118
24 Klatt K.-U., Hanisch F., Dünnebier G.: Model-based control of a simulated moving bed chromatographic process for the separation of fructose and glucose, J. Process Control 12 (2002) p. 203-219
25 Klatt K.-U., Dünnebier G., Hanisch F., Engell S. (2002): Optimal operation and control of simulated moving bed chromatography: a model-based approach. Invited plenary paper, CACHE/AIChE Conference Chemical Process Control 6, 2001, Tucson. In: J. B. Rawlings, B. A. Ogunnaike, and J. W. Eaton (Eds.): Chemical Process Control VI, AIChE Symposium Series No. 326, Vol. 98, CACHE Publications, 2002, 239-254
26 Kreul L. U., Górak A., Dittrich C., Barton P. I.: Dynamic catalytic distillation: advanced simulation and experimental validation, Comput. Chem. Eng. 22 (1998) p. 371-378
27 Lin J. C., Han P. D., Roberts P. D., Wan B. W.: New approach to stochastic optimizing control of steady-state systems using dynamic information, Int. J. Control 50 (1989) p. 2205-2235
28 Maćkowiak J. (1991): Fluiddynamik von Kolonnen mit modernen Füllkörpern und Packungen für Gas-/Flüssigsysteme, 1. Auflage, Otto-Salle-Verlag, Frankfurt (in German)
29 Nagrath D., Bequette B., Cramer S.: Evolutionary operation and control of chromatographic processes, AIChE J. 49 (2003) p. 82-95
30 Noeres C. (2003): Catalytic distillation: dynamic modelling, simulation and experimental validation, Dr.-Ing. dissertation, University of Dortmund, and VDI Verlag, Düsseldorf
31 Roberts P. D.: An algorithm for steady-state system optimization and parameter estimation, Int. J. Syst. Sci. 10 (1979) p. 719-734
32 Roberts P. D.: Broyden derivative approximation in ISOPE optimizing and optimal control algorithms, Proceedings of the 11th IFAC Workshop on Control Applications of Optimization CAO'2000, Elsevier (2000) pp. 283-288
33 Tatjewski P. (2002): Iterative optimizing set-point control - the basic principle redesigned, Proceedings of the 15th Triennial IFAC World Congress, CD-ROM, Barcelona
34 Tatjewski P., Brdyś M. A., Duda J.: Optimizing control of uncertain plants with constrained feedback controlled outputs, Int. J. Control 74 (2001) p. 1510-1526
35 Toumi A., Engell S. (2004c): A software package for optimal operation of continuous moving bed chromatographic processes. In: H. G. Bock, E. Kostina, H. X. Phu, and R. Rannacher (Eds.): Modelling, Simulation and Optimization of Complex Processes (Proceedings of the International Conference on High Performance Scientific Computing, Hanoi, 2003), Springer, 471-484
36 Toumi A., Engell S.: Optimal operation and control of a reactive simulated moving bed process, Proceedings of the IFAC Symposium on Advanced Control of Chemical Processes, Hong Kong, Elsevier (2004a) pp. 243-248
37 Toumi A., Engell S. (2004b): Optimization-based control of a reactive simulated moving bed process for glucose isomerization, Chemical Engineering Science 59, 3777-3792
38 Toumi A., Engell S. (2005): Advanced control of simulated moving bed processes. In: H. Schmidt-Traub (Ed.): Preparative Chromatography of Fine Chemicals and Pharmaceutical Agents, Wiley-VCH, Weinheim
39 Toumi A., Engell S., Ludemann-Hombourger O., Nicoud R. M., Bailly M.: Optimization of simulated moving bed and VARICOL processes, J. Chromatogr. A 1006 (2003) p. 15-31
40 Wang C., Klatt K.-U., Dünnebier G., Engell S., Hanisch F.: Neural network based identification of SMB chromatographic processes, Control Eng. Practice 11 (2003) p. 949-959
41 Zhang Z., Mazzotti M., Morbidelli M.: PowerFeed operation of simulated moving bed units: changing flow-rates during the switching interval, J. Chromatogr. A 1006 (2003) p. 87-99
42 Zhang H., Roberts P. D.: On-line steady-state optimization of nonlinear constrained processes with slow dynamics, Trans. Inst. Measure. Control 12 (1990) p. 251-261
43 Zhou J. L., Tits A. L., Lawrence C. T. (1997): User's Guide for FFSQP Version 3.7: a FORTRAN code for solving constrained nonlinear (minimax) optimization problems, generating iterates satisfying all inequality and linear constraints, University of Maryland
5 Real Time Optimization

Vivek Dua, John D. Perkins, and Efstratios N. Pistikopoulos
Abstract
This chapter considers two real time optimization (RTO) problems. The first problem is concerned with the model based control of linear discrete time systems, and the second problem considers the case when logical conditions are also involved in the first problem. These RTO problems are reformulated as multiparametric programs to obtain the control variables as an explicit function of the state of the system. This reduces the real time optimization problems to simple function evaluations.

5.1 Introduction
Real Time Optimization (RTO) of a system is typically concerned with the solution of the following problem (Marlin and Hrymak, 1997; Perkins, 1998):

J(x) = min_u f(x, u)
s.t.  h(u, x) = 0
      g(u, x) ≤ 0
      x ∈ X        (1)
where x is the vector of the state of the system, u is the vector of control variables, f is a scalar objective function, such as cost, to be minimized, h is a vector representing the model of the system, g is a vector representing constraints, such as lower and upper bounds on x and u, and X is a compact and convex set. Note that this problem is solved repetitively at regular time intervals. Model Based Predictive Control (MPC) (Morari and Lee, 1999) is widely used by industry to address real time optimization problems with constraints on u and x. It is based on a receding horizon approach where a sequence of future control actions is computed based on a prediction of the future evolution of the system and applied to the system until new measurements become available. Then, a new sequence is determined which replaces the previous one - see Figure 5.1, where x* is the desired state of the plant, k is the current time interval and k + 1, ..., k + p are the future time intervals.
Figure 5.1 Model Based Predictive Control
Each sequence is evaluated by solving the optimization problem (1). Real time optimization offers tremendous benefits but has large real time computational requirements, which involve a repetitive solution of problem (1) at regular time intervals (see Figure 5.2). The rest of the chapter is organised as follows. In the next section a parametric programming approach is introduced which can be used to compute u as an explicit function of x. Section 5.3 considers the case when h is given by linear discrete state space equations, and the case when u also involves 0-1 binary variables is addressed in Section 5.4. The solution approaches presented in Sections 5.3 and 5.4 reduce RTO to simple function evaluations.
Figure 5.2 Real time optimization
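For illustration, the following minimal sketch shows the conventional, repetitive form of this loop for a toy linear system: at every sampling instant a finite-horizon problem of the form (1) is re-solved numerically for the current state and only the first control move is applied. The system matrices, horizon and cost are illustrative assumptions, not part of the original text.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])

def horizon_cost(u_seq, x0, horizon=5):
    """Quadratic cost of applying the input sequence u_seq from state x0."""
    x, J = x0, 0.0
    for k in range(horizon):
        u_k = u_seq[k]
        J += x @ x + 0.1 * u_k ** 2
        x = A @ x + B @ np.array([u_k])
    return J + x @ x                      # terminal penalty

x = np.array([1.0, 0.5])
for t in range(3):                        # repeated on-line optimization (cf. Figure 5.2)
    res = minimize(horizon_cost, np.zeros(5), args=(x,), bounds=[(-1.0, 1.0)] * 5)
    u0 = res.x[0]                         # apply only the first move
    x = A @ x + B @ np.array([u0])
    print(f"t={t}: u={u0:+.3f}, x={x}")
```

The parametric programming approach described next removes exactly this repeated on-line optimization.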
5.2 Parametric Programming
In an optimization framework, where the objective is to minimize or maximize a performance criterion subject to a given set of constraints and where some of the parameters in the optimization problem vary between specified lower and upper bounds, parametric programming is a technique for obtaining (i) the objective function and the optimization variables as a function of these parameters and (ii) the regions in the space of the parameters where these functions are valid (Fiacco, 1983; Gal, 1995; Acevedo and Pistikopoulos, 1996, 1997; Pertsinidis et al., 1998; Papalexandri and Dimkou, 1998; Acevedo and Pistikopoulos, 1999; Dua and Pistikopoulos, 1999). Considering u as optimization variables and x as parameters in (1), parametric programming provides:
u(x) =  u^1(x)   if x ∈ CR^1
        u^2(x)   if x ∈ CR^2
        ...
        u^N(x)   if x ∈ CR^N
such that CR^i ∩ CR^j = ∅, i ≠ j, ∀ i, j = 1, ..., N, and CR^i ⊆ X, ∀ i = 1, ..., N. A CR^i is known as a Critical Region. For the case when f, g and h are linear and separable in u and x, the CRs are polyhedra and each CR corresponds to a unique set of active constraints (Dua et al., 2002). See Figure 5.3, where u is plotted as a function of x. The procedure for obtaining u^i(x) and CR^i depends upon whether f, g and h are linear, quadratic, nonlinear, convex, differentiable or not, and also whether u is a vector of continuous or mixed (continuous and integer) variables (Dua and Pistikopoulos, 2000; Dua et al., 2002; Dua and Pistikopoulos, 1999; Dua et al., 2003; Sakizlis et al., 2002b). Recently, algorithms for the case when (1) involves (i) differential and algebraic equations (Sakizlis et al., 2002a) and (ii) uncertain parameters (Sakizlis et al., 2004) have also been proposed. The engineering significance of solving parametric programming problems is highlighted in the next motivating example.
5.2.1 Example 1
Consider the refinery blending and production problem depicted in Figure 5.4 (Edgar and Himmelblau, 1989). The objective is to maximize the profit for the operating conditions given in Table 5.1, where x1 and x2 are the parameters representing the additional maximum allowable production of gasoline and kerosene respectively. This results in a multi-parametric linear programming problem given in Table 5.2, where u1 and u2 are the flow rates of crude oils 1 and 2 respectively, in bbl/day, and the units of profit are $/day. The solution of this problem by using the algorithm of Gal and Nedoma (1972) is given in Table 5.3. The engineering significance of obtaining this solution is as follows:
Figure 5.4 Crude oil refinery (costs: crude oil #1 $24/bbl, crude oil #2 $15/bbl; sales prices: gasoline $36/bbl, kerosene $24/bbl, fuel oil $21/bbl, residual $10/bbl)
(i) A complete map of all the optimal solutions, profit and crude oil flow rates as a function of x1 and x2, is available.
(ii) The space of x1 and x2 has been divided into two regions, CR1 and CR2, where the profiles of profit and flow rates of crude oils remain optimal, and hence (a) one does not have to exhaustively enumerate the complete space of x1 and x2 and (b) the optimal solution can be obtained by simply substituting the values of x1 and x2 into the parametric profiles without any further optimization calculations.
(iii) The sensitivity of the profit to the parameters can be identified. In CR1 the profit is more sensitive to x2, whereas in CR2 it is not sensitive to x2 at all. Thus, for any value of x that lies in CR2, any expansion in kerosene production will not affect the profit.

This type of information is quite useful for solving real time optimization problems. In the next section it is shown that real time model-based control and optimization problems can be reformulated as multi-parametric quadratic programming problems, the solution of which is given by the optimal control variables as a function of the state variables. The real time optimization problem thus reduces to simple function evaluations.
Table 5.1 Refinery Data

                              Volume % yield                 Maximum allowable
                              Crude #1      Crude #2         production (bbl/day)
Gasoline                      80            44               24 000 + x1
Kerosene                      5             10               2 000 + x2
Fuel Oil                      10            36               6 000
Residual                      5             10               -
Processing Cost ($/bbl)       0.50          1.00

Table 5.2 Refinery Model

Profit = max 8.1 u1 + 10.8 u2
s.t.  0.80 u1 + 0.44 u2 ≤ 24 000 + x1
      0.05 u1 + 0.10 u2 ≤ 2 000 + x2
      0.10 u1 + 0.36 u2 ≤ 6 000
      u1 ≥ 0, u2 ≥ 0
      0 ≤ x1 ≤ 6 000
      0 ≤ x2 ≤ 500
Table 5.3 Solution of the Refinery Example

CR1:  -0.14 x1 + 4.21 x2 ≤ 896.55,  0 ≤ x1 ≤ 6000
      Optimal solution:
      Profit(x) = 4.66 x1 + 87.52 x2 + 286758.6
      u1 = 1.72 x1 - 7.59 x2 + 26206.90
      u2 = -0.86 x1 + 13.79 x2 + 6896.55

CR2:  -0.14 x1 + 4.21 x2 ≥ 896.55,  0 ≤ x1 ≤ 6000,  x2 ≤ 500
      Optimal solution:
      Profit(x) = 7.53 x1 + 305409.84
      u1 = 1.48 x1 + 24590.16
      u2 = -0.41 x1 + 9836.07
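As a small illustration, the parametric solution of Table 5.3 can be evaluated by simple substitution: the sketch below identifies the active critical region for a given (x1, x2) and returns the corresponding profit and crude-oil flow rates, without any on-line optimization. The expression for u2 in CR1 is completed here from the model of Table 5.2, so the numbers should be read as illustrative.

```python
def refinery_solution(x1, x2):
    """Evaluate the parametric refinery solution (cf. Table 5.3) at (x1, x2)."""
    if -0.14 * x1 + 4.21 * x2 <= 896.55:                 # critical region CR1
        profit = 4.66 * x1 + 87.52 * x2 + 286758.6
        u1 = 1.72 * x1 - 7.59 * x2 + 26206.90
        u2 = -0.86 * x1 + 13.79 * x2 + 6896.55
    else:                                                # critical region CR2
        profit = 7.53 * x1 + 305409.84
        u1 = 1.48 * x1 + 24590.16
        u2 = -0.41 * x1 + 9836.07
    return profit, u1, u2

print(refinery_solution(3000.0, 100.0))   # lies in CR1
print(refinery_solution(3000.0, 400.0))   # lies in CR2
```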
5.3 Parametric Control
Consider the following state-space representation of a given process model (Pistikopoulos et al., 2002):

x(t + 1) = A x(t) + B u(t)

subject to the following constraints:

y_min ≤ y(t) ≤ y_max
u_min ≤ u(t) ≤ u_max        (3)
where x(t) ∈ R^n, u(t) ∈ R^m, and y(t) ∈ R^p are the state, input, and output vectors respectively, the subscripts min and max denote lower and upper bounds respectively, and (A, B) is stabilizable. Model based control problems for regulating to the origin can then be posed as the following optimization problem:
where U ≜ {u_t, ..., u_{t+Nu−1}}, Q = Q^T ≥ 0, R = R^T > 0, P ≥ 0, N_y ≥ N_u, and the superscript T denotes the transpose of the corresponding vector or matrix. The problem (4) is solved repetitively at each time t for the current measurement x(t) and the vector of predicted state variables, x_{t+1|t}, ..., x_{t+k|t}, at times t + 1, ..., t + k respectively, and the corresponding control actions u_t, ..., u_{t+k} are obtained. In the following paragraphs, a parametric programming approach which avoids a repetitive solution of (4) is presented. First, we do some algebraic manipulations to recast (4) in a form suitable for using and developing some new parametric programming concepts. By making the following substitution in (4):
x_{t+k|t} = A^k x(t) + Σ_{j=0}^{k−1} A^j B u_{t+k−1−j}        (5)
the objective J(U, x(t)) can be formulated as the following Quadratic Programming (QP) problem:
min_U  (1/2) U^T H U + x(t)^T F U + (1/2) x(t)^T Y x(t)
s.t.   G U ≤ W + E x(t)        (6)
where U ≜ [u_t^T, ..., u_{t+Nu−1}^T]^T ∈ R^s, s ≜ mN_u, is the vector of optimization variables, H = H^T > 0, and H, F, Y, G, W, E are obtained from Q, R and (4)-(5). The QP problem (6) can now be formulated as the following multi-parametric quadratic program (mp-QP):
p(x) = min_z  (1/2) z^T H z
s.t.   G z ≤ W + S x(t)        (7)
where z ≜ U + H^{−1} F^T x(t), z ∈ R^s, represents the vector of optimization variables, S ≜ E + G H^{−1} F^T, and x represents the vector of parameters. The main advantage of writing (4) in the form given in (7) is that z (and therefore U) can be obtained as an affine function of x for the complete feasible space of x. To derive these results, we first state the following theorem.
where,
No
=
(Y,hlS1, . . .,
where Gi denotes the i* row of G, Sidenotes the ith row of S, Vi = Giza - Wi - Sixo, Wi denotes the i* row of Wand Y is a null matrix of dimension (s x n). See Pistikopoulos et al. (2002) for the proof. The space of x where this solution, (8), remains optimal is defined as the Critical Region (CRO)and can be obtained as follows. Let CRRrepresent the set of inequalities obtained (i) by substituting z ( x )into the inequalities in (7) and (ii) from the positivity of the Lagrange multipliers, as follows:
CR^R = {G z(x) ≤ W + S x(t), λ(x) > 0}        (9)
Then CR⁰ is obtained by removing the redundant constraints from CR^R as follows:

CR⁰ = Δ{CR^R}        (10)
where Δ is an operator which removes the redundant constraints; for a procedure to identify the redundant constraints, see Gal (1995). Since for a given space of state variables, X, we have so far characterized only a subset of X, i.e., CR⁰ ⊆ X, in the next step the rest of the region, CR^rest, is obtained as follows (Pistikopoulos et al., 2002):

CR^rest = X − CR⁰        (11)
The above steps, (8)-(11), are repeated and a set of z(x), λ(x) and corresponding CRs is obtained. The solution procedure terminates when no more regions can be obtained, i.e., when CR^rest = ∅. For the regions which have the same solution and can be unified to give a convex region, such a unification is performed and a compact representation is obtained. The continuity and convexity properties of the optimal solution are summarized in the next theorem.
Theorem 2 For the mp-QP problem (7), the set of feasible parameters X_f ⊆ X is convex, the optimal solution z(x): X_f → R^s is continuous and piecewise affine, and the optimal objective function p(x): X_f → R is continuous, convex and piecewise quadratic. See Pistikopoulos et al. (2002) for the proof.

Based upon the above theoretical developments, an algorithm for the solution of an mp-QP of the form given in (7), which calculates U as an affine function of x and characterizes X by a set of polyhedral regions, CRs, has been developed; it is summarized in Table 5.4. This approach provides a significant advancement in the solution and real time implementation of model based control problems, since its application results in a complete set of control variables as a function of the state variables (from (8)) and the corresponding regions of validity (from (10)), which are computed off-line. Therefore, during on-line optimization, no optimizer needs to be called; instead, for the current state of the plant, the region CR⁰ in which the values of the state variables lie can be identified by substituting these values into the inequalities which define the regions. Then, the corresponding control variables can be computed by a function evaluation of the corresponding affine function (see Figure 5.5). Figure 5.6 demonstrates how advanced controllers can be implemented on simple hardware.
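The on-line phase therefore amounts to a point-location step followed by an affine function evaluation, as in the following minimal sketch; the two critical regions and their affine laws are invented for illustration only and are not the output of an actual mp-QP solution.

```python
import numpy as np

# Off-line result assumed available: critical regions CR^i = {x : A_i x <= b_i}
# and affine laws u = K_i x + k_i (all values below are illustrative).
regions = [
    {"A": np.array([[ 1.0, 0.0]]), "b": np.array([0.5]),          # CR1: x1 <= 0.5
     "K": np.array([[-0.8, -0.3]]), "k": np.array([0.0])},
    {"A": np.array([[-1.0, 0.0]]), "b": np.array([-0.5]),         # CR2: x1 >= 0.5
     "K": np.array([[-0.4, -0.1]]), "k": np.array([0.05])},
]

def explicit_control(x):
    for region in regions:
        if np.all(region["A"] @ x <= region["b"] + 1e-9):         # point location
            return region["K"] @ x + region["k"]                  # affine function evaluation
    raise ValueError("state lies outside the characterized region")

print(explicit_control(np.array([0.2, 0.1])))
```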
5.4 Hybrid Systems
Hybrid systems can be defined as systems comprising a number of interconnected continuous subsystems where the interconnections are determined by logical or discrete switchings. Each subsystem is governed by a unique set of differential and/or algebraic equations. In this section we focus on piecewise affine (PWA) systems (Bemporad and Morari, 1999). PWA systems are defined by partitioning the state and input space into polyhedral regions and associating with each region a different linear state update equation
x(t + 1) = A^i x(t) + B^i u(t) + f^i        (12)
where i = 1, ..., s, x ∈ R^{n_c} × {0, 1}^{n_l}, u ∈ R^{m_c} × {0, 1}^{m_l}, and {P^i}_{i=1}^s is a polyhedral partition of the set of the state and input space P ⊆ R^{n+m}, n ≜ n_c + n_l, m ≜ m_c + m_l. P is assumed to be closed and bounded; x_c ∈ R^{n_c} and u_c ∈ R^{m_c} denote the continuous components of the state and input vector, respectively, and x_l ∈ {0, 1}^{n_l} and u_l ∈ {0, 1}^{m_l} similarly denote the binary components. Note that PWA models are not suitable for recasting analysis/synthesis problems into more compact optimization problems. For this purpose the mixed logical dynamical (MLD) framework (Bemporad and Morari, 1999) is used. The general MLD form of a hybrid system is:
x(t + 1) = A x(t) + B₁ u(t) + B₂ δ(t) + B₃ z(t)        (13)
y(t) = C x(t) + D₁ u(t) + D₂ δ(t) + D₃ z(t)        (14)
E₂ δ(t) + E₃ z(t) ≤ E₁ u(t) + E₄ x(t) + E₅        (15)
Figure 5.5 Real time optimization via parametric programming
Table 5.4 Solution steps of the mp-QP algorithm

Step 1  For a given space of x solve (7) by treating x as a free variable and obtain [x₀].
Step 2  In (7) fix x = x₀ and solve (7) to obtain [z₀, λ₀].
Step 3  Obtain [z(x), λ(x)] from (8).
Step 4  Define CR^R as given in (9).
Step 5  From CR^R remove redundant inequalities and define the region of optimality CR⁰ as given in (10).
Step 6  Define the rest of the region, CR^rest, as given in (11).
Step 7  If there are no more regions to explore, go to the next step, otherwise go to Step 1.
Step 8  Collect all the solutions and unify a convex combination of the regions having the same solution to obtain a compact representation.
Figure 5.6 Achieving state-of-the-art control performance on simple hardware: the model predictive control real time optimization problem is solved off-line as a parametric optimization problem (sensor measurements are the parameters, manipulated inputs are the optimization variables), yielding the optimal control action as (1) explicit functions of the sensor measurements and (2) critical regions where these functions apply
where x = [x_c^T x_l^T]^T ∈ R^{n_c} × {0, 1}^{n_l} are the continuous and binary states, u = [u_c^T u_l^T]^T ∈ R^{m_c} × {0, 1}^{m_l} are the inputs, y = [y_c^T y_l^T]^T ∈ R^{p_c} × {0, 1}^{p_l} the outputs, and δ ∈ {0, 1}^{r_l}, z ∈ R^{r_c} represent auxiliary binary and continuous variables respectively. All constraints on the states, the inputs, and the z and δ variables are summarized in the inequalities (15). Note that, although the description (13)-(15) seems to be linear, nonlinearity is hidden in the integrality constraints over the binary variables. MLD systems are a versatile framework to model various classes of systems. For a detailed description of such capabilities we refer the reader to Morari et al. (2003).
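To make the underlying piecewise affine description (12) concrete, the following small sketch simulates a PWA state update by locating the active polyhedral region and applying the corresponding affine dynamics; all regions and matrices are illustrative assumptions, and the full MLD translation with the auxiliary δ and z variables is not attempted here.

```python
import numpy as np

modes = [
    {"H": np.array([[ 1.0, 0.0, 0.0]]), "h": np.array([0.0]),     # region: x1 <= 0
     "A": np.array([[0.9, 0.1], [0.0, 0.8]]), "B": np.array([[0.0], [1.0]]),
     "f": np.array([0.0, 0.0])},
    {"H": np.array([[-1.0, 0.0, 0.0]]), "h": np.array([0.0]),     # region: x1 >= 0
     "A": np.array([[0.7, 0.0], [0.2, 0.9]]), "B": np.array([[0.5], [1.0]]),
     "f": np.array([0.1, 0.0])},
]

def pwa_step(x, u):
    """One PWA update x(t+1) = A_i x + B_i u + f_i with region-dependent index i."""
    xu = np.concatenate([x, u])
    for mode in modes:
        if np.all(mode["H"] @ xu <= mode["h"] + 1e-9):
            return mode["A"] @ x + mode["B"] @ u + mode["f"]
    raise ValueError("(x, u) outside the partitioned domain")

x = np.array([0.5, -0.2])
for t in range(3):
    x = pwa_step(x, np.array([0.1]))
    print(t, x)
```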
5.4.1 Predictive Control of MLD Systems
Let t be the current time, and x(t) the current state. Consider the following optimal control problem:
where v₀^{T−1} ≜ [v^T(0), ..., v^T(T−1)]^T, Q₁ = Q₁^T > 0, Q₂ = Q₂^T ≥ 0, Q₃ = Q₃^T ≥ 0, Q₄ = Q₄^T > 0 and Q₅ = Q₅^T ≥ 0. x(k|t) ≜ x(t + k, x(t), v₀^{k−1}) is the state predicted at time t + k resulting from applying the input u(t + k) = v(k) to (13)-(15) starting from x(0|t) = x(t). δ(k|t), z(k|t) and y(k|t) are similarly defined. Assume for the moment that the optimal solution {v_t*(k)}_{k=0,...,T−1} exists. According to the receding horizon philosophy mentioned above, set
u(t) = v_t*(0),        (17)
disregard the subsequent optimal inputs v_t*(1), ..., v_t*(T−1), and repeat the whole optimization procedure at time t + 1. Note that (16)-(17) is a mixed integer quadratic program (MIQP). This problem can be formulated as a mixed integer linear program (MILP) if the 1-norm instead of the 2-norm is considered in the objective function. The repetitive solution of the MIQP or MILP can be avoided by formulating (16)-(17) as a multiparametric program and solving it to obtain the control variables as a set of explicit functions of the current state of the system and the regions in the space of the state variables where the explicit functions remain valid (Bemporad et al., 2000; Sakizlis et al., 2002a). This is achieved by recasting (16)-(17) in a compact form as follows:
where z_c and z_d are the continuous and discrete variables of (16)-(17), G_c, G_d, S and F are constant matrices and vectors of appropriate dimensions, and the Hessian of the quadratic objective is symmetric and positive definite. x(t) is the state at the current time t. The objective is to obtain z_c and z_d as a function of x(t) without exhaustively enumerating the entire space of x(t). This can be achieved by using parametric programming. In the next section an algorithm for multiparametric mixed integer linear programs (mp-MILP) is
described. This reduces the real time hybrid system control problem to a function evaluation problem (Figure 5.7).
5.4.2 Multiparametric Mixed-Integer Linear Programming
Consider a multiparametric mixed integer linear programming (mp-MILP) problem of the following form:
J(x(t)) = min_{z_c, z_d}  φ_c^T z_c + φ_d^T z_d
s.t.  G_c z_c + G_d z_d ≤ S + F x(t)        (20)
where φ_c and φ_d are constant vectors.
5.4.2.1 Initialization
An initial feasible z_d is obtained by solving the following MILP:
where x(t) is treated as a vector of free variables in order to find a starting feasible integer solution. Let the solution of (21) be given by z_d = z̄_d.
5.4.2.2 Multiparametric LP Subproblem
Fix z_d = z̄_d in (20) to obtain a multiparametric LP problem of the following form:

min_{z_c}  φ_c^T z_c + φ_d^T z̄_d
s.t.  G_c z_c + G_d z̄_d ≤ S + F x(t)        (22)
The solution of (22) is given by a set of linear parametric profiles, j(x(t))^i, where j(x(t)) is convex, and corresponding critical regions, CR^i (Gal, 1995).
The final solution of the multiparametric LP subproblem in (22), which represents a parametric upper bound on the final solution, is given by (i) a set of parametric profiles, j(x(t))^i, and the corresponding critical regions, CR^i, and (ii) a set of infeasible regions where j(x(t))^i = ∞.

5.4.2.3 MILP Subproblem
For each critical region, CR’, obtained from the solution of the multiparametric LP subproblem in (22), an MILP subproblem is formulated as follows:
The integer solution, z_d = z̄_d^i, and the corresponding CRs obtained from the solution of (23) are then recycled back to the multiparametric LP subproblem to obtain another set of parametric profiles. Note that the integer cut, z_d ≠ z̄_d, and the parametric cut, φ_c^T z_c + φ_d^T z_d ≤ j(x(t))^i, are accumulated at every iteration. If there is no feasible solution to the MILP subproblem (23) in a CR^i, that region is excluded from further consideration and the current upper bound in that region represents the final solution. Note also that the integer solution obtained from the solution of (23) is guaranteed to appear in the final solution, since it represents the minimum of the objective function at the point in x(t) obtained from the solution of (23). The final solution of the MILP subproblem is given by a set of integer solutions and their corresponding CRs.
The set of parametric solutions corresponding to an integer solution, z_d = z̄_d, which represents the current upper bound, is then compared to the parametric solutions corresponding to another integer solution, z_d = z̄_d^i, in the corresponding CRs in order to obtain the lower of the two parametric solutions and update the upper bound. This is achieved by employing the procedure proposed by Acevedo and Pistikopoulos (1997b).
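The essence of this comparison step can be sketched as follows: two affine upper bounds defined over the same critical region split that region along the hyperplane where they intersect, and each subregion keeps the lower profile. The box region and the profiles below are illustrative only.

```python
import numpy as np

def compare_profiles(A, b, c1, d1, c2, d2):
    """Split {x : A x <= b} where j1(x) = c1.x + d1 or j2(x) = c2.x + d2 is lower."""
    sub1 = (np.vstack([A, c1 - c2]), np.append(b, d2 - d1))   # profile 1 is the lower bound
    sub2 = (np.vstack([A, c2 - c1]), np.append(b, d1 - d2))   # profile 2 is the lower bound
    return sub1, sub2

A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])   # unit box 0 <= x <= 1
b = np.array([1.0, 0.0, 1.0, 0.0])
sub1, sub2 = compare_profiles(A, b,
                              np.array([1.0, 0.0]), 0.0,
                              np.array([0.0, 1.0]), 0.2)
print("subregion keeping profile 1:\n", sub1[0], sub1[1])
```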
5.4.2.5 Multiparametric MILP Algorithm
Based upon the above theoretical developments, the steps of the algorithm can be stated as follows:

Step 0 (Initialization) Define an initial region of x(t), CR, with best upper bound j*(x(t)) = ∞, and an initial integer solution z̄_d.
Step 1 (Multiparametric LP Problem) For each region with a new integer solution, z̄_d:
• Solve the multiparametric LP subproblem (22) to obtain a set of parametric upper bounds j(x(t)) and corresponding critical regions, CR.
• If j(x(t)) < j*(x(t)) for some region of x(t), update the best upper bound function, j*(x(t)), and the corresponding integer solutions.
• If an infeasibility is found in some region CR, go to Step 2.
Step 2 (Master Subproblem) For each region CR, formulate and solve the MILP master problem in (23) by (i) treating x(t) as a variable bounded in the region CR, (ii) introducing an integer cut, z_d ≠ z̄_d, and (iii) introducing a parametric cut, φ_c^T z_c + φ_d^T z_d ≤ j(x(t))^i. Return to Step 1 with the new integer solutions and corresponding CRs.

Step 3 (Convergence) The algorithm terminates in a region where the solution of the MILP subproblem is infeasible. The final solution is given by the current upper bounds j*(x(t)) in the corresponding CRs. The z_c(x(t)) and z_d(x(t)) corresponding to j*(x(t)) are then used to obtain u(x(t)).
Note that the algorithms presented in this chapter have been implemented and tested on a number of real time optimization problems (PAROS, 2004).
5.5 Concluding Remarks
In this chapter it was shown how real time optimization problems can be recast as multiparametric programs. Linear discrete time optimization problems are recast as multiparametric quadratic programs, and problems involving logical decisions as multiparametric mixed integer programs. Algorithms for solving the multiparametric programs were then presented to compute the optimal control actions as an explicit function of the state of the system. This reduces real time optimization problems to simple function evaluations.
References

1 Acevedo J., Pistikopoulos E. N.: A parametric MINLP algorithm for process synthesis problems under uncertainty, Industrial and Engineering Chemistry Research 35 (1996) p. 147-158
2 Acevedo J., Pistikopoulos E. N.: A multiparametric programming approach for linear process engineering problems under uncertainty, Industrial and Engineering Chemistry Research 36 (1997) p. 717-728
3 Acevedo J., Pistikopoulos E. N.: An algorithm for multiparametric mixed integer linear programming problems, Operations Research Letters 24 (1999) p. 139-148
4 Bemporad A., Borrelli F., Morari M.: Piecewise linear optimal controllers for hybrid systems, Proceedings of the American Control Conference (2000) p. 1190-1194
5 Bemporad A., Morari M.: Control of systems integrating logic, dynamics, and constraints, Automatica 35 (1999) p. 407-427
6 Dua V., Bozinis N. A., Pistikopoulos E. N.: A multiparametric programming approach for mixed-integer quadratic engineering problems, Computers & Chemical Engineering 26 (2002) p. 715-733
7 Dua V., Papalexandri K. P., Pistikopoulos E. N.: Global optimization issues in multiparametric continuous and mixed-integer optimization problems, Journal of Global Optimization 30 (2004) p. 59
8 Dua V., Pistikopoulos E. N.: Algorithms for the solution of multiparametric mixed-integer nonlinear optimization problems, Industrial and Engineering Chemistry Research 38 (1999) p. 3976-3987
9 Dua V., Pistikopoulos E. N.: An algorithm for the solution of multiparametric mixed integer linear programming problems, Annals of Operations Research 99 (2000) p. 123-139
10 Edgar T. F., Himmelblau D. M.: Optimization of Chemical Processes, McGraw Hill Book Co, Singapore 2000
11 Fiacco A. V.: Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, Academic Press, New York 1983
12 Gal T.: Postoptimal Analyses, Parametric Programming, and Related Topics, de Gruyter, New York 1995
13 Gal T., Nedoma J.: Multiparametric linear programming, Management Science 18 (1972) p. 406-422
14 Marlin T. E., Hrymak A. N.: Real-time operations optimization of continuous processes. In: Kantor J. C., Garcia C. E., Carnahan B. (Eds.), 5th Int. Conf. Chem. Proc. Control, Vol. 93 of AIChE Symposium Series 1997
15 Morari M., Baotic M., Borrelli F.: Hybrid systems modeling and control, European Journal of Control 9 (2003) p. 177-189
16 Morari M., Lee J.: Model predictive control: past, present and future, Computers & Chemical Engineering 23 (1999) p. 667-682
17 Papalexandri K. P., Dimkou T. I.: A parametric mixed integer optimization algorithm for multi-objective engineering problems involving discrete decisions, Industrial and Engineering Chemistry Research 37 (5) (1998) p. 1866-1882
18 PAROS Parametric Optimization Solutions Ltd., http://www.parostech.com 2004
19 Perkins J. D.: Plant-wide optimization: opportunities and challenges. In: Pekny J., Blau G. (Eds.), 3rd Int. Conf. Foundations of Computer Aided Process Operations, Vol. 94 of AIChE Symposium Series 1998
20 Pertsinidis A., Grossmann I. E., McRae G. J.: Parametric optimization of MILP programs and a framework for the parametric optimization of MINLPs, Computers & Chemical Engineering 22 (1998) p. S205
21 Pistikopoulos E. N., Dua V., Bozinis N. A., Bemporad A., Morari M.: On-line optimization via off-line parametric optimization tools, Computers & Chemical Engineering 26 (2002) p. 175-185
22 Sakizlis V., Dua V., Perkins J. D., Pistikopoulos E. N.: The explicit control law for hybrid systems via parametric programming, Proceedings of the 2002 American Control Conference, Anchorage 2002a
23 Sakizlis V., Dua V., Perkins J. D., Pistikopoulos E. N.: The explicit model-based control law for continuous time systems via parametric programming, Proceedings of the 2002 American Control Conference, Anchorage 2002b
24 Sakizlis V., Kakalis N., Dua V., Perkins J. D., Pistikopoulos E. N.: Design of robust model based controllers via parametric programming, Automatica 40 (2004) p. 189-201
6 Batch and Hybrid Processes

Luis Puigjaner and Javier Romero
6.1 Introduction
Although historically chemical engineers achieved their professional distinction with the design and operation of continuous processes [1], as we move into the new millennium it comes somewhat as a surprise to realize that outside the petroleum and petrochemical industries, batch operation is still a common if not dominant mode of operation. Moreover, most batch processes are unlikely to be replaced by continuous processes [2, 3]. The reason is that, as the production of chemicals undergoes a continuous specialization to address the diversifying needs of the marketplace, the continuous evolution of product recipes implies a much shorter life cycle for a growing number of chemicals than has traditionally been the case, leading to a perpetual product/process evolution [4, 5]. This situation has been matched and in part funded by a developing research interest in batch process systems engineering, batch production being the most suitable way of manufacturing the relatively large number of low-volume high-value-added products commonly found in the fine and specialty chemicals industry. Moreover, the coexistence of continuous and discrete parts in both strictly speaking batch processes and nominally continuous processes has motivated an increased research interest in further exploiting the inherent flexibility of batch procedures and the high productivity of the continuous parts of the production system [6]. Thus, chemical plants constitute large hybrid systems, making it necessary to consider the continuous-discrete interactions taking place within an appropriate framework for plant and process simulation and optimization [7]. This chapter briefly discusses existing modeling frameworks for discrete/hybrid production systems embodying different approaches, before introducing a very recent framework for process recipe initialization that integrates a recipe model into the batch plant-wide model. Next, online and offline recipe adaptation from real-time plant information is presented, and finally, a model-based integrated advisory system
is described. This system gives online advice to operators on how to react in case of process disturbances. In this way, enhanced overall process flexibility and productivity are achieved. Application of this promising approach is illustrated through examples of increasing complexity.

6.1.1 Plant and Process Simulation
The discrete transitions occurring in chemical processing plants have only recently been addressed in a systematic manner. Barton and Pantelides [8] did pioneering work in this area. A new formal mathematical description of the combined discrete/continuous simulation problem was introduced to enhance the understanding of the fundamental discrete changes required to model processing systems. The modeling task is decomposed into two distinct activities: modeling the fundamental physical behavior, and modeling the external actions imposed on this physical system as a result of the interaction of the process with its environment through disturbances, operating procedures, or other control actions. The physical behavior of the system can be described in terms of a set of integral, partial differential and algebraic equations (IPDAE), which may be continuous or discontinuous. In the latter case, the discontinuities are modeled using state-task networks (STNs) and resource-task networks (RTNs), which rely on a discrete representation of time. Other frameworks based on a continuous representation of time have appeared more recently (the event operation network, among others). The detailed description of the different representation frameworks is the topic of the next section.

6.1.2 Process Representation Frameworks
The state-task network (STN) representation proposed by Kondili et al. [9] was originally intended to describe the complex chemical processes arising in multiproduct/multipurpose batch chemical plants. The representation is similar to the flowsheet representation of continuous plants, but it describes the process itself rather than a specific plant. The distinctive characteristic of the STN is that it has two types of nodes: the state nodes, representing feeds, intermediates and final products, and the task nodes, representing the processing operations that transform material from input states to output states (Fig. 6.1). This representation is free from the ambiguities associated with recipe networks, where only processing operations are represented. Process equipment and its connectivity are not explicitly shown, and other available resources are not represented. The STN representation is equally suitable for networks of all types of processing tasks: continuous, semicontinuous or batch. The rules followed in its construction are:
Figure 6.1 State-task network representation of chemical processes. Circles: state nodes; rectangles: task nodes.

• A task has as many input (output) states as there are different types of input (output) material.
• Two or more streams entering the same state are necessarily of the same material. If mixing of different streams is involved in the process, then this operation should form a separate task.
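To make these construction rules concrete, a minimal data-structure sketch is given below. It is illustrative only: the class and field names are invented here, and the fixed input/output proportions and batch-size-independent processing times simply follow the STN assumptions stated in the text.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class State:
    """Material node with one of the four storage policies listed below."""
    name: str
    storage: str = "unlimited"      # unlimited | finite | none | zero-wait
    capacity: Optional[float] = None

@dataclass
class Task:
    """Processing operation: consumes and produces states in fixed proportions."""
    name: str
    inputs: Dict[str, float]        # state name -> fixed fraction consumed
    outputs: Dict[str, float]       # state name -> fixed fraction produced
    processing_time: float          # known a priori, independent of batch size

# A small STN fragment: raw A is heated, then reacted; mixing would be a separate task.
states = [State("A_raw"), State("A_hot", storage="zero-wait"), State("B", "finite", 100.0)]
tasks = [
    Task("Heat",  {"A_raw": 1.0}, {"A_hot": 1.0}, processing_time=0.5),
    Task("React", {"A_hot": 1.0}, {"B": 1.0},     processing_time=1.75),
]
```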
The STN representation assumes that an operation consumes material from its input states in a fixed ratio and produces material for its output states in known fixed proportions. The processing time of each operation is known a priori and is considered independent of the amount of material to be processed; alternatively, the same operation may lead to different states (products) using different processing times. States may be associated with four main types of storage policy:

• unlimited intermediate storage
• finite intermediate storage
• no intermediate storage
• zero wait (the product is unstable).
An alternative representation, the resource-task network (RTN), was proposed by Pantelides [10]. In contrast to the STN approach, where a task consumes and produces materials while using equipment and utilities during its execution, in this representation a task is assumed only to consume and produce resources. Processing items are treated as though consumed at the start of a task and produced at the end. Furthermore, processing equipment in different conditions can be treated as different resources, with different activities consuming and generating them; this enables a simple representation of changeover activities. Pantelides [10] also proposed a discrete-time scheduling formulation based on the RTN which, owing to the uniform treatment of resources, only requires the description of three types of constraint and does not distinguish between identical equipment items. He demonstrated that the integrality gap could not be worse than that of the most efficient form of STN formulation, but the ability to capture additional problem features in a straightforward fashion is attractive. Subsequent research has shown that these conveniences in formulation are overshadowed by the advantages offered by the STN formulation in allowing explicit exploitation of constraint structure through algorithm engineering.

The STN and RTN representations use discrete-time models. Such models suffer from a number of inherent drawbacks [11]:

• The discretization interval must be fine enough to capture all significant events, which may result in a very large model.
• It is difficult to model operations where the processing time is dependent on the batch size.
• The modeling of continuous operations must be approximated, and minimum run-lengths give rise to complicated constraints.
Therefore, attempts have been made to develop frameworks based on a continuous-time representation. Reklaitis and Mockus [12] developed a continuous-time formulation based on the STN representation. A common resource grid is needed, with the timing of the grid points ("event orders" in their terminology) determined by optimization. The same authors introduced an alternative solution procedure based on Bayesian heuristics in a later work [13]. Zhang and Sargent [14] describe a continuous-time representation based on the RTN for both batch and continuous operations. The poor relaxation performance of continuous-time models is the main obstacle to their large-scale application. To avoid this deficiency, Schilling and Pantelides [11] modified the model of Zhang and Sargent [14]: a global linearization gives rise to a mixed-integer linear program (MILP), which is solved by a hybrid branch-and-bound procedure. Recent reviews on these approaches can be found in Shah [15] and Silver et al. [16].

A realistic and flexible description of complex recipes has recently been improved using a flexible modeling environment [17] for the scheduling of batch chemical processes. The process structure (individual tasks, entire subtrains or complex structures of manufacturing activities) and the related materials (raw materials, intermediates or final products) are characterized by means of a processing network that describes the material balance. In the most general case, the activity carried out in each process constitutes a general activity network. Manufacturing activities are considered at three different levels of abstraction: the process level, the stage level and the operation level. This hierarchical approach permits the consideration of material states (subject to material balance and precedence constraints) and temporal states (subject to time constraints) at different levels.

At the process level, the process and materials network (PMN) provides a general description of production structures (such as synthesis and separation processes) and the materials involved, including intermediates and recycled materials. An explicit material balance is specified for each of the processes in terms of a stoichiometric-like equation relating raw materials, intermediates and final products (Fig. 6.2). Each process may represent any kind of activity necessary to transform the input materials into the derived outputs.

Between the process level and the detailed description of the activities involved at the operation level, there is the stage level. At this level, the block of operations to be executed in the same equipment item is described. Hence, at the stage level each process is split into a set of blocks (Fig. 6.3). Each stage implies the following constraints:

• The sequence of operations involved requires a set of implicit constraints (links).
• Unit assignment is defined at this level. Thus, for all the operations of the same stage, the same unit assignment must be made.
• A common size factor is attributed to each stage. This size factor summarizes the contribution of all the operations involved.
Figure 6.2 A process and materials network (PMN) describing the processing of two products. Process 1: 3 (RM-1) + 6 (RM-2) gives BP-1 + IP-1; Process 2: IP-1 + 2 (RM-3) gives 5 (FP-1); Process 3: 2 (RM-4) gives FP-2. RM are raw materials, IP are intermediate products, BP are by-products and FP are final products.
Figure 6.3 Stage level. Each stage involves different unit assignment opportunities.
The operation level contains the detailed description of the activities contemplated in the network (tasks and subtasks), and implicit time constraints (links) must also be met at this level. The detailed representation of the structure of activities defining the different processes is called the event operation network (EON). It is also at this level that the general utility requirements (renewable, nonrenewable, storage) are represented.
The event operation network representation model describes the appropriate timing of process operations. A continuous-time representation of process activities is made using three basic elements: events, operations and links [18, 19]. Events designate those time instants at which some change occurs. They are represented by nodes in the EON graph and may be linked to operations or to other events. Each event is associated with a time value and a lower bound. Operations comprise the time intervals between events (Fig. 6.4). Each operation m is represented by a box linked by solid arrows to its associated nodes: the initial node NI m and the final node NF m. Operations establish the equality links between their two nodes in terms of the characteristic properties of each operation: the operation time TOP and the waiting time TW. The operation time depends on the amount of material to be processed, the unit model and product changeover. The waiting time is the lag time between operations, which is bounded.
Figure 6.4 The time description for operations. TOP: operation time; TW: waiting time; NI m: initial node of operation m; NF m: final node of operation m.
Finally, links are established between events by precedence constraints. A dashed arrow represents each link k from its node of origin NOk to its destination node NDk, with an associated offset time ΔTk.
Figure 6.5 Event-to-event link and associated offset time representation. The dashed arrow represents each link k from its node of origin NOk to its destination node NDk.
Despite its simplicity, the EON representation is very general and flexible, and it allows the handling of complex recipes (Fig. 6.6). Transfer operations between production stages are also represented by the corresponding TOP, according to the batch size and material flow rate. The necessary time overlapping of semicontinuous operations with batch units is also contemplated in this representation through appropriate links. Other resources required for each operation (utilities, storage capacity, manpower, etc.) can also be associated with the respective operation and its timing. Simulation of plant operation can be performed in terms of the EON representation from the following information contained in the process recipe and production structure characteristics:

• A sequence of production runs or jobs associated with a process or recipe.
• A set of assignments associated with each job and consistent with the process.
• A batch size associated with each job and consistent with the process.
• A set of shifting times for all the operations involved.
Figure 6.6 The recipe described as a structured set of operations. The event operation network (EON) representation allows the handling of complex synthesis problems; the corresponding typical Gantt chart is given below.

These decisions may be generated automatically by using diverse procedures for the determination of an initial feasible solution. Hence, simulation may be executed by solving the corresponding EON to determine the timing of the operations and the other resource requirements. The flexibility and potential of the EON representation have been further exploited by incorporating the flexible recipe concept, which is the subject of the next section.
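As a minimal illustration of how such an EON can be evaluated, the sketch below computes earliest event times from operations (TOP plus minimum waiting time) and event-to-event links with offsets. The event names, durations and the relaxation-style sweep are illustrative assumptions, not the implementation referred to in the text.

```python
# Minimal earliest-time pass over an EON, assuming the event graph is acyclic.
# Events carry lower bounds; operations impose t[NF] >= t[NI] + TOP + TW_min;
# links impose t[ND] >= t[NO] + dT.  All names and data are illustrative.
events = {"e0": 0.0, "e1": 0.0, "e2": 0.0}        # event -> lower bound
ops    = [("e0", "e1", 1.75, 0.0)]                 # (NI, NF, TOP, TW_min)
links  = [("e1", "e2", 0.5)]                       # (NO, ND, offset dT)

def eon_earliest_times(events, ops, links, sweeps=None):
    """Bellman-Ford-style relaxation; 'sweeps' defaults to the number of events."""
    t = dict(events)
    arcs = [(ni, nf, top + tw) for ni, nf, top, tw in ops] + list(links)
    for _ in range(sweeps or len(events)):
        for u, v, w in arcs:
            t[v] = max(t[v], t[u] + w)
    return t

print(eon_earliest_times(events, ops, links))      # {'e0': 0.0, 'e1': 1.75, 'e2': 2.25}
```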
6.2 The Flexible Recipe Concept
The simulation environments described in the previous section assume operation at nominal conditions following fixed recipes. Moreover, these nominal conditions are determined only once, sometimes considering only one stage of the process recipe. However, optimum performance of batch and hybrid manufacturing systems requires an integrated modeling environment capable of incorporating systematic information and of adapting to changing plant scenarios. Very recently, the flexible recipe concept has been introduced as an appropriate mechanism that permits the simultaneous optimization of recipe and plant operation [20, 21]. This concept arose from the fact that batch processes normally do not operate at the plant-wide optimal nominal conditions of the fixed batch recipes, yet the traditional fixed recipe does not allow for adjustment to plant resource availability or to variations in the quality of raw materials and in the actual process conditions. The industrial process is often subject to various disturbances and to constrained availability of plant resources. Therefore, the fixed recipe is in practice adapted approximately, but in a rather unsystematic way that depends on the experience and intuition of operators. As an alternative, the concept of flexible recipe operation is introduced, and a general framework is presented to deal systematically with the required adaptations at a plant-wide level.

The flexible recipe concept was considered for the first time in the context of evolutionary operation [22]. The main objective of that approach was to gain statistical insight into the problem behavior in order to gradually improve process efficiency
through suggestions of minor recipe modifications in each batch run. However, it was not until the work of Rijnsdorp [20] that the concept of flexible recipes was adequately introduced. Here, the term recipe is understood in a more abstract way as referring to the selected set of adjustable elements that control the process output, generating the flexible recipe. According to this concept, a flexible recipe philosophy to operate batch processes was described in the work of Verwater-Lukszo [21]. This philosophy distinguishes two main levels in the flexible recipe: (a) the recipe initialization level, where different aspects of a master flexible recipe are adjusted to the actual process conditions and the availability of resources at the beginning of the batch, thus giving the initialized control recipe; and (b) the recipe correction level, where the initialized control recipe is adjusted to run-time process deviations, thus generating corrected control recipes. A flexible recipe improvement system tool (called COMBO) was developed by TNO TPD (Netherlands Organization for Applied Scientific Research) for the application of the flexible recipe concept in industrial practice. However, in this approach only one critical stage of the process is considered and hence no interaction with plant-wide optimization is, in fact, attempted. More recently, the application of the flexible recipe concept to an entire batch train was attempted for the multistage case [23]. However, standard quality models were assumed for the process operations, and hence no insight into recipe behavior was obtained. A new framework for recipe initialization that integrates a recipe model into the batch plant-wide model has recently been introduced [24]. The aim of this approach is to optimize the entire batch process, from recipe set-point adjustment to product sequencing. For this purpose, a recipe model and a plant-wide production model are required to build the flexible recipe model. Moreover, fulfillment of present standards (ISA S88) should be a requirement for implementation in industrial practice.

6.2.1 The Flexible Recipe and the Framework of ISA S88
Batch-process flexibility may be mainly exploited at the level of the recipe formulation. Here, the set of process parameters is adjusted to warrant the process outputs as a function of uncertain process inputs. Each such parameter, whose value may be changed for each batch, is called a recipe item. These items can be quantitative or qualitative, time-dependent or time-independent. The equipment requirement level, as defined in ISA S88 [25], is already a flexible category in itself; in fact, ISA S88 defines this level as an equipment choice constraint. Finally, flexibility in the recipe procedure would only be contemplated when some unexpected event happens, which is outside the scope of the batch process flexibility enhancement sought here. In a company, four types of recipes are typically found:

• General recipe and site recipe, which basically describe the technique and are equipment-independent.
• Master recipe, a recipe which is equipment-dependent and which provides specific and unique batch-execution information describing how a product is to be produced in a given set of process equipment.
• Control recipe, which, starting as a copy of the master recipe, contains detailed information for minute-to-minute process operation of a single batch.

The flexible recipe might be derived from a master recipe and subsequently used for generating and updating a control recipe. Verwater and Keesman [26] introduced the concept of different levels between these two stages defined in ISA S88. With these new levels, a better description of the different possible functionalities of the flexible recipe is obtained:
• Master control recipe, that is, a master recipe valid for a number of batches, but adjusted to the actual conditions (actual prices or quality requirements), from which the individual control recipes per batch are derived.
• Initialized control recipe, that is, the adjustment of the still-adjustable process conditions of a master control recipe to the actual process conditions at the beginning of the batch, i.e., the adjustment of variables such as temperature, pressure, catalyst addition and processing time in the face of deviations in the initial temperature of the batch, equipment fouling, available processing time and so on.
• Corrected control recipe, the result of adjusting the initialized control recipe to process deviations during the batch.
• Finally, for monitoring and archiving purposes, it is also useful to define the accomplished control recipe.
Therefore, on the basis of this philosophy, a novel flexible recipe approach [24] has recently been proposed that extracts a flexible recipe model from a total master control recipe. This model describes the whole batch process train but is only concerned with the critical batch process variables. Besides, it also considers the possible interactions between different batches for scheduling purposes. Regarding the different levels between the master recipe and the initialized control recipe described above, it can be concluded that four different flexible-recipe systems may be useful:

• A system for adjusting the master recipe to the actual prices and quality requirements, defining the master control recipe.
• A system for defining the initialized control recipe from the master control recipe as a function of the actual process conditions, the availability of resources at the beginning of the batch and the availability of the plant equipment.
• A model to generate the corrected control recipe in the face of deviations during each batch.
• A system for updating and improving the master control recipe as the database of accomplished control recipes increases. This model will also improve the preceding models.
The interaction of these systems in a real-plant environment is described in Fig. 6.7. These systems will have to be developed in laboratory experiments, in pilot plant operation, during normal production by the systematic introduction of acceptably small changes in certain inputs and parameters, or by adjusting white-box models and simulating them under different operating conditions.
Figure 6.7 Proposed information flow for the optimal flexible recipe environment: market prices and raw materials feed the master recipe database; models for recipe computing and for short-term scheduling generate the initialized and corrected control recipes driving the process; online process assessment and the operational database of accomplished control recipes support recipe improvement and model development.
6.3 The Flexible Recipe Model
The flexible recipe model is the tool that permits us to integrate a recipe optimization procedure with a batch plant optimization level. It represents the relationship that correlates a batch process output as a function of the selected input items of the recipes for different batch plant production scenarios. Therefore, it is a recipe description model that incorporates plant-wide variables. We identify four main components of the problem: quality or product specifications, process operating conditions, production costs, and production due dates.

The flexible recipe model can be applied to a variety of scenarios. For instance, during batch process operation the processing times of some tasks may vary without set-point adjustment, thus affecting the properties (quality) of the products obtained in such tasks. Then, to meet customer requirements, another batch of the same product might be able to compensate for these effects. For example, in a process in which A is converted into B, one batch with low conversion of A could be compensated for by another batch with a higher conversion, assuming that these two batches are going to be mixed afterwards so that the final product quality corresponds to the customer and legal requirements. Otherwise, the processing time might be optimized without set-point adjustment by compensating for the quality within the same batch: consider, for instance, a batch of product A that is first heated in one piece of equipment before reacting in another; a reduction in the processing time of the first task could be offset by a longer reaction time. Moreover, the processing time could be optimized with some set-point adjustment. In this situation, the properties of the intermediates produced might be altered only at the expense of a higher operation cost. For instance, the reaction time could be reduced by increasing the reaction temperature, although this recipe modification would imply a higher operation cost.

Which of the above-mentioned strategies should be applied in each case will depend on the specific process and on the available knowledge of the different tasks of the process. For example, such ways of operation might not be very suitable for highly restrictive processes, such as those found in the pharmaceutical industry, but they are probably convenient for specialty batch chemical production, where customer requirements are defined simply by a set of product properties and not by the specific way the product has been produced. The preceding discussion leads to the basic concept upon which the modeling of scheduling problems considering the flexible recipe is built.

6.3.1 Proposed Concept for the Flexible Recipe Model
The flexible recipe model is regarded as a constraint on quality requirements and on production costs. In this approach, recipe items are classified into four groups:

• The vector of process operating conditions, poc_i, of stage i of a recipe. It includes parameters like temperature, pressure, type of catalyst, batch size, etc.
• The product specification vector, ps_i, at the end of each process stage i of a recipe. It might include parameters like conversion of a reactant, purity or other quality aspects.
• The processing time, PT_i, of each stage i of a recipe.
• The waiting time, TW_i, that is, the time between the end of a stage and the start time of the next stage.
The product specification vector of a batch stage will in general be a function, Ψ, of the processing time, the waiting time, the process set-points, and the product specifications at the different stages i* where the inputs to stage i are produced. Moreover, within this model, the product specifications ps and the process operating conditions poc are subject to optimization within flexibility regions, σ and Δ respectively. A general representation of the flexible recipe model for short-term scheduling is presented in Eq. (1). This model contains the nominal recipe and its capacity to accept modifications. The model adjusts the different recipe parameters for each individual batch performed in a specific production plan Θ, where Θ is the variable that permits integrating batch process scheduling with the recipe optimization procedure. Each specific production plan Θ is defined when the specific orders to be delivered at a specific set of due dates, S, are specified and when the specific set of different plant resources, A, is assigned to each order. Besides this, each production plan has to meet some physical plant constraints, Ω, such as the multistage flowshop or jobshop batch plant topology T, operating with a set J of equipment units and a set R of process resources. Each production plan will be generated to meet the market constraints: the set I of production orders in a given set DD of time horizons or due dates. A performance criterion Φ is also included; this criterion may vary from batch to batch, and it may contain economic as well as process variables. The flexible recipe model validity constraints are considered in the σ and Δ regions.

Optimize Φ(PT_i, TW_i, ps_i, poc_i, Θ)
subject to the recipe constraints:
    ps_i = Ψ(PT_i, TW_i, ps_i*, poc_i),   ps_i ∈ σ,   poc_i ∈ Δ
and subject to the production environment constraints:
    Θ(S, A) ⊂ Ω(T, J, R, I, DD)                                        (1)
The model may interact with the short-term scheduling level either offline or online.
6.4 Flexible Recipe Model for Recipe Initialization
At the start of a batch, the initial conditions may differ from those prescribed by the master recipe, even to the extent of making successful completion unlikely. Examples are deviations in catalyst activity, available heat, raw material quality and equipment fouling, among others. In such cases, the flexible recipe concept makes it possible to alter the still-adjustable process conditions so as to ensure the most successful completion of the run. Alternatively, because of scheduling requirements, it may be worthwhile to modify the processing time of a stage of the master recipe, by modifying some operating conditions, so as to debottleneck some piece of equipment or meet some product due dates. The procedure of generating the initialized control recipe from the master recipe is called recipe initialization. This procedure also implies the need to specify in which specific equipment unit and in which product sequence each stage of the recipe will be carried out. In general, the objective is to generate the best control recipe for different production scenarios. Specifically, the proposed framework adjusts the different parameters of a master control recipe model to deviations in the prices or quality of delivered raw materials and in the expected initial process conditions. For instance, in a case where the available steam pressure is lower than the nominal value at the beginning of a batch, the recipe items will have to be adapted initially to this fact. Another aim of this framework is to adjust the different recipe items to the availability of plant resources and equipment units.

The inputs of the problem are the production master recipe for each product, that is, the different components that define each recipe, the available equipment units for each task, the list of common utilities, the market requirements expressed as specific amounts of products to be delivered at given instants, and others. The algorithm has to determine the optimal sequence of the tasks to be performed in each unit, the values of the different parameters that specify each recipe, that is, the initialized control recipe, and the use of utilities as a function of time.

Specifically, the optimal schedule in each case is efficiently reached using the S-graph approach [27], which implies a branch-and-bound algorithm. This algorithm proceeds from a root node corresponding to the nominal master control recipe. From this root, partial schedules (nodes of the tree) are built by adding schedule-arcs to the preceding nodes. At each node, a flexible recipe model is solved to calculate a relaxation for the algorithm. The solution of this model at the end of a leaf gives the optimal timing, considering the flexible recipe, of the schedule associated with that leaf. The optimal schedule corresponds to the leaf with the best objective function value. Hence, a model for schedule timing integrated with a flexible recipe is necessary. The proposed model is linear, simply to permit a rapid convergence of the algorithm.
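The search strategy just described can be summarized structurally as follows. This is only a generic sketch: expand, relax, is_leaf and evaluate are placeholder callables standing in for the S-graph operations and the linear flexible recipe model of the text, not an actual implementation of [27].

```python
import heapq

def branch_and_bound(root, expand, relax, is_leaf, evaluate):
    """Generic skeleton of the search described in the text.
    expand(node)   -> child partial schedules (added schedule-arcs)
    relax(node)    -> lower bound from the linear flexible recipe model
    is_leaf(node)  -> True once all equipment units are sequenced
    evaluate(node) -> exact objective (schedule timing with the flexible recipe)
    """
    best_value, best_schedule = float("inf"), None
    heap, counter = [(relax(root), 0, root)], 1      # (bound, tie-breaker, node)
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_value:                      # prune: relaxation no better than incumbent
            continue
        if is_leaf(node):
            value = evaluate(node)
            if value < best_value:
                best_value, best_schedule = value, node
            continue
        for child in expand(node):
            child_bound = relax(child)
            if child_bound < best_value:
                heapq.heappush(heap, (child_bound, counter, child))
                counter += 1
    return best_value, best_schedule
```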
6.4.1 Flexible Recipe Model for Schedule Timing
In addition to timing restrictions, two sorts of flexible recipe constraint have to be considered: product specifications, and process operating conditions together with their consequences on the production cost. The product specification vector ps is a function, Ψ, of the processing time, the waiting time, the process set-points and other product specifications. The model adjusts these recipe parameters for each individual batch performed in a specific production plan Θ, this plan being a function of the orders to be satisfied, S, and of the plant resources, A. Each production environment also has to meet some physical plant constraints, Ω, such as the plant topology T, operating with a set J of equipment units and a set R of production resources. Each production plan is generated to meet the market constraints (the set of production orders in a given set DD of time horizons or due dates). A performance criterion Φ is also included. Hence, two sorts of flexible recipe constraints have to be considered to define the flexible recipe model Ψ: product specifications (quality of the final products), and process operating conditions (set-points) and their consequences on the production cost.
6.4.2 Quality and Production Cost Model
Product specifications, ps_i, might depend on the processing time, the waiting time, the process operating conditions and the product specifications at the different stages i* where the inputs to stage i are processed. At the first stage of a batch, ps_i* will represent the raw materials. It will also be assumed that, within a time interval, a linear model can be adjusted to predict small deviations from the nominal product specifications, δps_i, as a function of small deviations from the nominal values of PT_i, TW_i, poc_i and ps_i* (Eq. (2)):

δps_i = a_i δPT_i + b_i δTW_i + Σ_i* C_{i,i*} δps_i* + d_i δpoc_i        (2)
Here a_i and b_i are the vectors that linearly correlate the effect of the processing and waiting times of stage i on the product specifications, C_{i,i*} is the matrix that linearly correlates the effect of the different product specification inputs to stage i from stage i* on the product specifications, and d_i is the vector that correlates the effect of small deviations in the process operating values on the product specifications.

For instance, consider the production of one batch of product A. Stage i of this process consists in heating A in equipment unit 1. Stage i + 1 constitutes the reaction of A to give B in equipment unit 2. The important product specification at stage 1 is the temperature reached in unit 1, and at the second stage, the conversion of reactant A and the temperature at the end of this stage. Therefore, the vector ps_1 will contain only one element (the temperature at the end of stage 1). The vector ps_2 will have two elements, conversion of reactant and temperature. The vector a_1 will consequently contain one element, correlating the effect of small deviations in the processing time of stage 1 on the temperature reached at stage 1. Similarly, a_2 will have two elements, each correlating the effect of the processing time on the relevant product specification j, ps_{j,2}. If the waiting time has no effect on the product specifications, the vector b_i is null. Product specifications at stage 2 will clearly be affected by the product specifications at stage 1, so the matrix C_{2,1} will be 2 × 1 (one input specification mapping to two output specifications); its elements correlate the effect of small deviations in the temperature reached at stage 1 on the conversion and the temperature at the end of stage 2.

Final products must meet some quality (product specification) requirements. The model also considers the possibility of mixing different batches of the same product,
produced within a fixed horizon, to be sold or used together. Therefore, the properties of the last task of each batch or, in the case of batches being mixed, the properties of the mixed final products, must meet such requirements, δps_p^0; that is, only deviations up to a point will be permitted (Eq. (3)):

Σ_m B_m δps_m ≤ δps_p^0 Σ_m B_m        ∀p, ∀m        (3)

where B_m is the batch size of product p at stage m, and m belongs to the set of last recipe stages of the product p batches that are mixed.

Process operation modifications can have an influence on the operation cost. This fact is also considered in the flexible recipe model. Thus, within a time interval, a set-point modification is assumed to have a linear dependence on the batch-stage cost (Eq. (4)):

δcost_i = f_i δpoc_i        (4)
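For the two-stage heating/reaction example above, Eqs. (2)-(4) can be evaluated directly once the coefficient vectors are known. The numerical values below are invented purely for illustration (they are not the chapter's data); only the structure follows Eqs. (2)-(4).

```python
import numpy as np

# Two-stage example from the text: stage 1 heats A, stage 2 reacts A -> B.
# ps1 = [T_end_stage1]; ps2 = [conversion, T_end_stage2].  All numbers are
# illustrative placeholders, not values from the chapter.
a1, b1 = np.array([10.0]), np.array([0.0])          # dps1 = a1*dPT1 + b1*dTW1
a2, b2 = np.array([4.0, 0.0]), np.array([0.0, 0.0])
C21    = np.array([[0.3], [1.0]])                    # 2x1: dps1 -> dps2
d2     = np.array([[0.9], [0.2]])                    # 2x1: reaction set-point -> dps2
f2     = np.array([1.5])                             # cost per unit set-point change, Eq. (4)

dPT1, dTW1 = -0.1, 0.0                               # shorten heating by 0.1 h
dps1 = a1 * dPT1 + b1 * dTW1                         # Eq. (2), stage 1

dPT2, dTW2, dpoc2 = 0.05, 0.0, np.array([0.5])       # longer reaction, higher set-point
dps2   = a2 * dPT2 + b2 * dTW2 + C21 @ dps1 + d2 @ dpoc2   # Eq. (2), stage 2
dcost2 = f2 @ dpoc2                                  # Eq. (4)
print(dps1, dps2, dcost2)
```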
6.4.3 Flexibility Regions
In Eq. (5), Δ and σ define the flexibility regions for poc_i and ps_i respectively. The width of these regions basically depends on the accuracy of the model presented in the previous section; that is, the regions are defined such that the model deviates from reality by no more than a predetermined percentage value, ε. Assuming linearity, each of these regions can be described by a set of hyperplanes in R^n (Eq. (5)), where n is the number of variables considered, or degree of flexibility, of the batch process:

L_i δpoc_i + l'_i δPT_i + l''_i δTW_i ≤ M_i        ∀i        (5)

where L_i, l'_i and l''_i are the matrices that define the hyperplanes (bounded by M_i) limiting the process flexibility to be considered within the linear model.
6.4.4 Integration with the Scheduling Tool
Within the S-graph framework, a partial schedule is obtained at each node of the branch-and-bound algorithm; that is, at each node some equipment units may already be scheduled and others not. The problem is relaxed by solving the linear flexible recipe model. Therefore, if a node has a relaxation higher than the best bound, the branch corresponding to that node is cut. Figure 6.8 shows the linear programming (LP) model to be solved at each node of the branch-and-bound procedure, where the objective function contemplates a trade-off between production makespan and production costs. Thus, the recipe is optimized as well as the timing of the partial schedule. Here TI_i and TF_i are the starting and ending times of task i respectively, S_i is the set of states that task i generates, and S_i* is the set of states that feed task i*.
Timing of the schedule constraints:
    TI_i ≥ 0                                           ∀i
    TF_i = TI_i + PT_i + TW_i                          ∀i
    TI_i = TF_i'                                       ∀i, i' / ∃s ∈ S_i ∩ S_i'
    TW_i ≤ TW_i^max                                    ∀i
    MS ≥ TF_i                                          ∀i

Flexible recipe model:
    δps_i = a_i δPT_i + b_i δTW_i + Σ_i* C_{i,i*} δps_i* + d_i δpoc_i

Flexibility region:
    L_i δpoc_i + l'_i δPT_i + l''_i δTW_i ≤ M_i        ∀i

Performance criterion:
    Φ (trade-off between production makespan and production costs)

Figure 6.8 Formulation for recipe initialization and multipurpose batch process schedule timing.
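A toy instance of the node LP of Fig. 6.8, for a single two-stage batch, can be written with scipy.optimize.linprog. All coefficients (processing times, the yield coefficients a and d, the set-point cost f and the flexibility bounds) are illustrative assumptions rather than the case-study data, and the sketch illustrates the formulation rather than the authors' S-graph implementation.

```python
from scipy.optimize import linprog

# Variables: x = [TI1, TI2, MS, dPT2, dpoc2]
PT1, PT2 = 0.5, 1.75          # nominal processing times (illustrative)
a, d, f = 4.0, 0.9, 0.1       # recipe coefficients and set-point cost (illustrative)

c = [0, 0, 1, 0, f]                            # minimise makespan + set-point cost
A_eq = [[-1, 1, 0, 0, 0]]; b_eq = [PT1]        # TI2 = TI1 + PT1  (zero-wait link)
A_ub = [[0, 1, -1, 1, 0],                      # TI2 + PT2 + dPT2 <= MS
        [0, 0, 0, -a, -d]]                     # a*dPT2 + d*dpoc2 >= 0 (keep nominal yield)
b_ub = [-PT2, 0]
bounds = [(0, None), (0, None), (0, None), (-0.4, 0), (0, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)     # TI1=0, TI2=0.5, MS ~ 2.025 h, dPT2 ~ -0.225 h, dpoc2 = 1
```

In the full framework an LP of this kind is solved at every node of the branch-and-bound tree, with the timing constraints extended to the partial schedule fixed so far.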
6.4.5 Motivating Example
The proposed framework for recipe initialization integrated with production scheduling has been tested on the batchwise production of benzyl alcohol from the reduction of benzaldehyde through a crossed Cannizzaro reaction. This reaction has been extensively studied by Keesman [28]. In that work, an input-output (black-box) model was developed to describe the behavior of the reaction phase of the recipe. The model predicts the reaction yield, ps_{i,1}, as a function of the reaction temperature, poc_{i,1}, the reaction time, PT_i, the amount of catalyst, poc_{i,2}, and the amount of the reactant in excess, poc_{i,3}. The model is then used to optimize different recipe components, analyzing the effects of model accuracy on the results. However, in that work only one batch phase of the recipe was considered. In the following study, the whole batch recipe train and a production environment are considered in order to exploit fully the potential of a more realistic batch process scenario. Given the linearity required by the model proposed in Section 6.4.2, the flexible recipe model Ψ for this reaction phase becomes:

δps_{i,1} = 4 δPT_i + (4.4, 95, 95) (δpoc_{i,1}, δpoc_{i,2}, δpoc_{i,3})^T        ∀i ∈ {reaction phase}        (6)
The coefficients of Eq. (6) are the linear coefficients of the Keesman quadratic model. The flexibility of this batch stage, contained in the Δ and σ regions according to Eq. (5), is defined by the set of cutting planes (Eq. (7)) that bounds the deviation between δps_{i,1} predicted by Eq. (6) and that predicted by the quadratic model. For simplicity, it has been assumed that the hypervolume containing Δ and σ is a hypercube. Equation (7) represents the hypercube of maximum volume that bounds the flexibility region with a tolerance of less than 1.5 % for the reactant conversion:

[I; −I] (δpoc_{i,1}, δpoc_{i,2}, δpoc_{i,3}, δPT_i)^T ≤ (0.7 °C, …, 8.5 g, 27 g, 7.5 g, 90 g, 0.1 h, …)^T        (7)
This reaction stage has been incorporated into the whole recipe. It is assumed that a preparation stage performed in equipment unit U1 and two separation stages carried out in equipment units U3 and U4 are also necessary to produce the alcohol; the reaction stage takes place in equipment unit U2. The reaction temperature at the second stage, δpoc_{i,1}, depends on the temperature reached at the first one, δps_{i',2}, as follows:

δpoc_{i,1} = δps_{i',2}        (8)

where i' corresponds to any preparation stage and i to any reaction stage of the alcohol recipe. The temperature reached at the preparation stage depends on the processing time according to:

δps_{i',2} = 10 δPT_{i'}        (9)

This recipe has been introduced into the production scenario given in Table 6.1. P1 represents the production of benzyl alcohol. The remaining products P2, P3 and P4 share equipment units and resources with product P1.
Table 6.1 Batch production environment

Products   Equipment unit / processing time (h)                         Number of batches
           Stage 1      Stage 2      Stage 3      Stage 4
P1         U1 / 0.5     U2 / 1.75    U3 / 2.0     U4 / 0.5              3
P2         U1 / 1.0     U3 / 2.0     U4 / 1.5     U6 / 1.0              1
P3         U7 / 2.0     U4 / 1.0     U6 / 1.0     U5 / 1.0              2
P4         U2 / 1.5     U3 / 1.0     U7 / 2.0     U5 / 1.5              1
Figure 6.9 shows the Gantt charts corresponding to the optimum production schedules for the proposed case study when the fixed recipe at nominal operating conditions is used and when recipe adaptation is considered. The resultant production makespan is 10.75 h for the fixed recipe environment. When the proposed flexible recipe framework is considered, the production makespan diminishes to 10.45 h (a 2.8 % makespan reduction). A different sequence of batches is also obtained when it is imposed that the mixture of the three batches of alcohol has to meet the nominal reaction yield (δps_p^0 = 0). The optimal solution is obtained in 25.5 CPU seconds on an AMD-K7 Athlon 1 GHz. The resultant process operating conditions of the three alcohol batches for the flexible recipe scenario are summarized in Table 6.2.
Figure 6.9 Optimal Gantt charts for the batch production environment of Table 6.1 when considering the traditional fixed recipe and the recipe adaptation (flexible recipe), respectively. The case study recipe is represented in black.
Table 6.2 Process operating conditions of the three alcohol batches for the flexible recipe scenario

Batch   Temperature (°C)   Processing time (h)   Amount of KOH (g)   Amount of H2CO (g)   Conversion (%)
1st     64.5               1.2                   500                 425                  75
2nd     64.5               1.2                   500                 425                  75
3rd     63.8               1.2                   500                 425                  72
To see the effect of initial process deviations on the recipe, Keesman [28] limited the reaction temperature to 63 °C (δpoc_{i,1} = −1). After optimizing the reaction stage alone, it is found that the reaction time has to be extended to 1.76 h, the total amount of KOH has to reach 528 g and the amount of formaldehyde has to go to 475 g in order to keep the intended reaction yield. For these new nominal conditions, the resultant production makespan for the scenario described in Table 6.1 is 11.03 h, which means a reduction in productivity of 5.5 %. Alternatively, a better process performance can be achieved by applying the flexible recipe model to optimize the entire batch plant. The linear flexible recipe model Ψ and the model validity constraints for these new nominal conditions are shown in Eqs. (10) and (11), respectively:

δps_{i,1} = …        ∀i ∈ {reaction phase}        (10)

[I; −I] (δpoc_{i,1}, δpoc_{i,2}, δpoc_{i,3}, δPT_i)^T ≤ (−1 °C, 1 °C, 12 g, 23 g, 13 g, 28 g, 0.57 h, 0.4 h)^T        (11)
Now the optimal production makespan becomes 10.61 h. Therefore, using the proposed framework, limiting the reaction temperature to 63 °C only implies a 1.5 % reduction in process productivity. The new process conditions for the different batches of the alcohol production appear in Table 6.3.

Table 6.3 Optimal process operating conditions for the three batches of alcohol after limiting the reaction temperature

Batch   Temperature (°C)   Processing time (h)   Amount of KOH (g)   Amount of H2CO (g)   Conversion (%)
1st     63                 1.55                  512                 438                  78.3
2nd     63                 1.36                  512                 438                  71.9
3rd     63                 1.36                  512                 438                  71.9
Notice that in this case study the cost of modifying the different process variables has been considered negligible. Usually, nominal values should correspond to an economic optimum; thus, altering such nominal conditions should move the process away from this economic optimum in spite of an eventual increase in plant productivity. Obviously, a more realistic scenario should also consider the costs associated with deviations of the process operating conditions from their nominal values.
6.5 Flexible Recipe Model for Recipe Correction
The recipe initialization is performed at the beginning of the batch, taking into account known initial deviations, but other run-time deviations may arise. Under certain circumstances it is possible to compensate for the effects of these unknown disturbances during the batch run, provided that continuous or discrete measurements are available. The flexible recipe model is the relationship that correlates a batch process output as a function of the selected input items of the recipe; this model is regarded as a constraint on quality requirements and on production cost. Figure 6.7 shows the environment proposed here for real-time recipe correction. While a batch process takes place, the values of different online continuous process variables and discrete variables, sampled at different times, are collected. From this information, a process state assessment is performed. This assessment provides information on how the batch process is being carried out to the flexible recipe model for recipe correction. The time at which the process state assessment is performed, and hence at which actions take place, might differ from the moment at which a deviation is detected. Interaction or integration of the flexible recipe model with production scheduling algorithms is necessary to account for the ultimate effect of the recipe correction on overall plant capacity. Three different kinds of models are identified:

• A prediction model that estimates the continuous and discrete sampled product specification variables (at the sampling time) as a function of the actual control recipe that has already been established by the offline initialization tool. The process state assessment then consists of the evaluation of the batch-process run: the predicted product specification expected by the offline recipe initialization model is compared with the actual value observed at the w-th process state assessment, ps_i^w. If the observed deviation is greater than a fixed permitted error, ε, some actions will be taken in order to offset this perturbation.
• A correction model for control recipe adjustments, which describes the ultimate effect of the values measured at the time of the process state assessment as well as of the run-time corrections made during the remainder of the processing time.
• A rescheduling strategy to adjust the actual schedule to the recipe modifications.
6.5.1 Rescheduling Strategy, Ω
The output of the flexible recipe model for recipe correction might give variations in processing time or resource consumption, which would make the existing plant resource schedule suboptimal or even infeasible. Therefore, in order to accommodate these deviations in the actual plant schedule, a rescheduling strategy is to be used. There are two basic alternatives for updating a schedule when it becomes obsolete: to generate a new schedule, or to alter the initial schedule to adapt it to the new conditions. The first alternative might in principle be better for maintaining optimal solutions, but these solutions are rarely achievable in practice and require prohibitive computation times. Hence, a retiming strategy is integrated into the flexible recipe model for recipe correction. For each deviation detected, an optimization is required to find the best corrected control recipe. For this, it is proposed to solve the LP shown in Eq. (12), along with a linear representation of the recipe correction model, to adjust the plant schedule to each recipe correction. When dealing with a multipurpose plant, it might happen that a given schedule becomes infeasible because of process disturbances; in such a situation, further actions should be taken, like changing the order sequence or canceling a running batch.

min Φ(PT_i, TW_i, ps_i, poc_i, Θ)
subject to:
    TI_{i,j} ≥ 0                                       ∀i, j
    TF_{i,j} = TI_{i,j} + PT_i + TW_{i,j}              ∀i, j
    TI_{i,j} = TF_{i',j}                               ∀j, i, i' / ∃s ∈ {S_i ∩ S_i'}
    TI_{i,j} ≥ TF_{i,j−1}                              ∀i, j
    TW_{i,j} ≤ TW_i^max                                ∀i, j
    correction flexible recipe model constraints                        (12)

where TI_{i,j}, TF_{i,j}, PT_i and TW_{i,j} are the initial, ending, processing and waiting times of each stage i of a batch corresponding to the specific sequence position j of the schedule. The sequence is assumed to be fixed. S_i is the set of stages that feed stage i and S_i' is the set of stages fed by stage i'. Φ is the performance criterion of the flexible recipe model.
6.5.2 Batch Correction Procedure
Within each batch run, the algorithm of Fig. 6.10 is applied. This algorithm first predicts the expected deviations in the process variables from the nominal values as a function of the corrections already taken. Then, the process state assessment verifies whether there are significant discrepancies between the observed variables and the predicted ones. If so, it freezes the process variables of all batch stages already performed, and of the batch stages that are currently being performed but are not the batch stage under assessment, and reoptimizes the actual recipe taking into account the effect on the schedule timing.

Figure 6.10 Batch correction procedure algorithm.
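Procedurally, the loop of Fig. 6.10 can be sketched as follows; predict, executed_stages and reoptimize are placeholders for the prediction model, the plant state and the correction-plus-retiming optimization described above, and the tolerance eps stands for the permitted error ε. This is a structural sketch, not the authors' implementation.

```python
def batch_correction_loop(assessments, predict, reoptimize, executed_stages, eps=0.05):
    """Sketch of the batch correction procedure of Fig. 6.10.
      predict(w, corrections)          -> expected specification at assessment w
      executed_stages(w)               -> stages already performed (to be frozen)
      reoptimize(w, observed, frozen)  -> corrected recipe + retimed schedule
    """
    corrections, frozen = [], set()
    for w, observed in enumerate(assessments):        # w-th process state assessment
        expected = predict(w, corrections)
        if abs(observed - expected) <= eps:            # within permitted error: no action
            continue
        frozen |= set(executed_stages(w))              # freeze past and ongoing stages
        corrections.append(reoptimize(w, observed, frozen))
    return corrections
```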
6.5.3 Application: Model-Based Advisory System for Recipe Correction and Scheduling
In this section, an integrated model-based advisory system is designed to give online support for batch process operation. This application integrates recipe modifications as well as modifications in the actual plant schedule timing, thus breaking the traditional approach of dissociating recipe correction from plant-wide adjustment. It is based on the flexible recipe concept. The application gives advice to plant operators and schedulers on how to react in the face of disturbances, so that different kinds of actions can be supported, for instance:

• correct recipe parameters to compensate for disturbances and meet product specifications at the expense of processing time;
• allow batches to end on time, that is, finishing the batch below the expected product quality;
• reduce processing times in order to fit a rush order;
• modify process operating conditions and processing time to partly compensate for disturbances, finishing the batch below product specifications, but not as low as if no action had been taken.
Different elements form the integrated model-based advisory system presented here: the recipe adaptation set, where recipe flexibility is defined; the plant adaptation set, where plant schedule adaptation is included using different kinds of rescheduling alternatives; and finally an integrated criterion, where the costs of recipe item modifications, product due-date accomplishment and product specification deviations are included. In this application, the flexible recipe concept is included in the recipe adaptation set. This set consists of a statistical process model obtained from historical process data and the relevant model constraints. Variables in the process model may be classified into those that appear perturbed (P), those that may be manipulated to compensate for disturbances (M) and those that define the output of the batch process in terms of quality or yield (O). Hence, the flexible recipe model (Ψ) included in the recipe adaptation set is as follows:

O = Ψ(PT, M, P),    O ∈ σ,    {M, PT} ∈ δ        (13)

where δ is the flexibility region for the process operating conditions and σ is the flexibility region for the product specifications. The plant adaptation set, the other key concept of the advisory system, describes the plant resources management, including the relevant equipment information for scheduling, and defines penalties for due-date violations of the accepted orders. In this application, the sequence of products is predefined and is assumed not to change. Hence, the plant adaptation set is described by Eq. (13). When a deviation between the expected and the actual behavior during processing is observed, some advice on how to react is requested. The application presented here considers two different kinds of perturbation: process disturbances on some input variable of the batch recipe stages, and rush orders, i.e., new orders to be satisfied at a specific due date and to be fitted into the actual production plan. Figure 6.11 shows the advice system mechanism window. The application presented here has been simplified to consider just a linear flexible recipe model in the recipe adaptation set; an LP formulation is then used to calculate the process adaptation effects. As soon as a deviation is detected, the control recipe can be readjusted, and this readjustment may have an impact on the whole plant operation. A number of scenarios for recipe readjustment are considered, for instance:
• The perturbation may be compensated for within the same batch stage, by acting on the manipulated variables with a consequence on the operation cost.
• Other batches may have to be corrected, for instance by reducing their processing times, to accommodate the impact of correcting a disturbance on a specific batch stage (i.e., on its processing time).
• The timing of the schedule may have to change, imposing a delay on product delivery.

Figure 6.11 Integrated batch operation advice system results.
An integrated optimization criterion, computing the consequences of the different scenarios, coordinates the recipe adaptation set actions with those of the plant adaptation set in order to maximize overall batch plant performance. This architecture has been implemented in MATLAB 6.0. When disturbances are encountered, the recipe adaptation set may decide to vary the processing times of some tasks of some recipes. This will have an impact on the actual production schedule, so some orders will have to be shifted forward (delayed). The plant adaptation set optimizes this order shifting to minimize a function of the delivery-date delays. That is, not all order delays will have the same (for instance, economic) impact on the overall objective function, and the plant adaptation set tries to shift orders so that the overall impact is minimized. This problem results in an LP formulation that is solved using the MATLAB optimization toolbox.
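A minimal sketch of such an order-shifting LP is given below, written with scipy rather than the MATLAB optimization toolbox mentioned in the text. The durations, due dates and delay weights are illustrative assumptions, not the case-study values; the formulation simply minimizes the weighted delivery-date delays for a fixed single-line sequence.

```python
import numpy as np
from scipy.optimize import linprog

# Fixed sequence of orders on one line; PT already includes any recipe-level
# processing-time corrections.  All data are illustrative.
PT  = np.array([10.0, 18.0, 20.0])        # corrected batch durations (h)
due = np.array([12.0, 30.0, 45.0])        # due dates (h)
w   = np.array([0.2, 0.5, 0.1])           # cost per hour of delivery delay

n = len(PT)                               # variables: [s_1..s_n, d_1..d_n]
c = np.concatenate([np.zeros(n), w])      # minimise weighted delays only

A_ub, b_ub = [], []
for k in range(1, n):                     # s_k >= s_{k-1} + PT_{k-1}
    row = np.zeros(2 * n); row[k - 1], row[k] = 1.0, -1.0
    A_ub.append(row); b_ub.append(-PT[k - 1])
for k in range(n):                        # d_k >= s_k + PT_k - due_k
    row = np.zeros(2 * n); row[k], row[n + k] = 1.0, -1.0
    A_ub.append(row); b_ub.append(due[k] - PT[k])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (2 * n))
print(res.x[n:])                          # delay incurred by each order
```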
6.5.4 Advisory System Case Study
The case study corresponds to a multiproduct batch plant, over an advising span of one week. During this week, the plant produces three different products in a specific given sequence. The recipes and the sequence of the case study are shown in Tables 6.4 and 6.5, respectively.
Table 6.4 Master recipes of advisory system case study

Recipes   Processing time of stages (h)
R1        10    10    3    4
R2        5     3     10   5
R3        5     2     5    6
Table 6.5 Master schedule of advisory system case study

Sequence        R3   R1   R2   R3   R1   R2   R3   R3
Due dates (h)   41   45   58   64   68   81   87   97
The recipe adaptation set (ras) variables are classified into a perturbed variable (P), a manipulated variable (M), an output variable (O) and the processing time (PT). For simplicity, it is assumed that there is only one (critical) variable of each type. Besides, the relationship among these variables is considered to be defined by a linear model, and it is considered that there is only one flexible task in each product recipe. Table 6.6 summarizes the recipe adaptation set parameters.

Table 6.6 Recipe adaptation set of case study

Flexible recipe phase   Coefficient for δM   Coefficient for δI   Coefficient for δPT
R1                      2.5                  2.0                  1.5
R2                      1.0                  0.5                  0.5
R3                      2.0                  1.0                  0.5
A disturbance may be totally offset by modifying the manipulated variable while keeping the master processing time and the master output (quality); it may be ignored, keeping the manipulated variable and the processing time unmodified, so that the output variable (quality) is affected; it may be totally compensated for by the processing time; or it may be partially compensated for by all variables. The integrated advisory criterion computes the effect of modifying the manipulated and output variables from their nominal values. Table 6.7 shows the costs associated with modifying these variables. The cost of delivery-date delays from the due dates of each order is shown in Table 6.8.
Table 6.7 Integrated advisory criterion 1

Recipes   Output deviation cost (u)   Manipulated variable deviation cost (u)
R1        1.0                         0.1
R2        2.0                         0.5
R3        0.5                         0.7

Table 6.8 Integrated advisory criterion 2

Delivery-date deviation costs (u): 0.2, 0.1, 0.02, 0.5, 0.0, 0.2, 0.1, 0.1, 0.2, 0.03
Two types of disturbances are considered: process disturbances and rush orders. From plant operation, the recipe perturbation input variable is retrieved. In this case study, the disturbances follow an exponential increase; this would be the case, for instance, of a catalyst used in all product recipes whose activity decays at each use along the production makespan. In the face of disturbances (process disturbances and rush orders), the application gives advice on how to react following three policies:

• Policy I. This policy ignores process disturbances and therefore does not modify manipulated variables or processing times. This situation has a direct impact on the output variable, that is, the quality of the products.
• Policy II. Here, process disturbances are totally compensated for by modifying the processing time (not modifying the manipulated variable and keeping the output variables equal to their nominal values). This situation has a direct impact on the delivery dates of the products.
• Policy III. This policy modifies all recipe items: manipulated variable, processing time and output variable. In this situation, disturbances may have an impact on delivery dates as well as on product quality, depending on their weights in the overall objective function.
In all policies, a rush order is always accepted. In the case study shown, disturbances have a negative impact of -14.7 u. for policy I, -11.3 u. for policy II and -9.1 u. for policy III. Policy III has the most degrees of freedom to react in the face of disturbances and therefore shows the best performance. Figure 6.11 shows the application results window.
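The way the integrated criterion trades off quality deviations, manipulated-variable moves and processing-time stretches under the three policies can be sketched as follows. This is only an illustrative toy, assuming a single flexible task and a simple linear disturbance model; the coefficients, disturbance magnitude and cost weights are hypothetical and not those of the case study.

```python
"""Compare the three advisory policies for one flexible task (illustrative sketch).

A linear flexible-recipe model is assumed: a disturbance dP on the output can be
offset by a change in the manipulated variable (dM), a processing-time stretch
(dPT), or absorbed as an output/quality deviation (dO).  All numbers are hypothetical.
"""
a_m, a_pt = 2.0, 1.5                 # effect of dM and dPT on the output (hypothetical)
c_o, c_m, c_delay = 1.0, 0.5, 0.2    # penalty per unit of dO, dM and per hour of stretch
dP = 1.2                             # observed disturbance on the output (hypothetical)

def cost(dM, dPT):
    """Residual output deviation plus the cost of the corrective actions."""
    dO = dP - a_m * dM - a_pt * dPT
    return c_o * abs(dO) + c_m * abs(dM) + c_delay * abs(dPT)

policies = {
    "I   (ignore disturbance)":  cost(0.0, 0.0),
    "II  (stretch time only)":   cost(0.0, dP / a_pt),
    "III (use all handles)":     min(cost(m / 10, t / 10)
                                     for m in range(11) for t in range(11)),
}
for name, value in policies.items():
    print(f"Policy {name}: criterion = {value:.2f}")
```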
6.6 Final Considerations
The increasing interest recently observed in rationalizing batch/hybrid process operations is well justified. The great advantage offered by batch process stages resides in their inherent flexibility, which may give an adequate answer to present uncertain product demand, variable customer specifications, uncertain operating conditions, market price variations and so on [29]. In batch plants, there is no reason why the same product must be made in every batch; there is the possibility of tailoring a product recipe specifically for a particular customer. In this chapter, a review of relevant approaches to represent and exploit this potential flexibility of batch/hybrid processes has first been given. A novel framework (the flexible recipe) has then been presented that allows further exploration of the flexible manufacturing capabilities of such processes. This framework proposes a new philosophy for recipe management in the batch process industries that includes the possibility of recipe adaptation in a real-time optimization environment. Based on these novel concepts, a model-based integrated advisory system is presented. The system gives on-time advice to operators on how to react when process disturbances occur. This advice takes into account modifications in recipe parameters (product quality, specifications, processing time, process variables) as well as modifications in the production schedule. A process state assessment module for evaluation of abnormal situations should advise when proper actions should be taken. The result is a user-friendly application for optimal batch process operation in industrial practice.
Acknowledgments
Financial support for this research received from the Generalitat de Catalunya (FI program and project GICASA-D, No. 1-353) is fully appreciated. Support received in part from the European Community (project no. G1RD-CT-2001-00466-04GG) is also acknowledged. Funds were also received from the Spanish MCyT (project no. DPI200200806). Enlightening discussions and suggestions received from Prof. Antonio Espuña and Prof. Verwater-Lukszo are thankfully appreciated.
Nomenclature
A: Assignment of different batch plant resources.
B: Batch size of product p at stage m.
C(i,i+1): Correlation matrix with the effect of product specification inputs to stage i from stage i+1.
DD: Set of production horizon or due dates.
Observed perturbed variable.
Set of production orders.
Set of equipment units.
Manipulable variable of the advisory system.
Output variable of the batch process.
Perturbed variable.
Processing time of each stage i of a recipe.
Process operation conditions vector as a function of time t of stage i.
Observed product specification vector at the wth process state assessment moment of batch process stage i.
Observed product specification vector at the end of batch process stage i.
Expected product specification vector at the wth process state assessment moment of batch process stage i.
Set of process resources.
Set of states generated by task i.
Set of states that feed task i.
Sequence of different batches.
Multistage flowshop or jobshop batch plant topology.
Starting time of task i.
Ending time of task i.
wth moment at which stage i of a batch is being assessed.
Waiting time at stage i.
Steam condensation temperature at pressure Pi.
Flexibility region for process operation conditions.
Steam enthalpy.
Scheduling constraints.
Performance criterion function of the flexible recipe model.
Quality and production cost modelling function of the prediction model.
Quality and production cost modelling function of the correction model at the wth process assessment moment.
A specific production plan.
Flexibility region for product specifications.
References

1 Reynolds T. S. (1983) 75 Years of Progress: A History of the American Institute of Chemical Engineers 1908-1983. American Institute of Chemical Engineers, New York
2 Parakrama R. Improving batch chemical processes. The Chem. Eng. (1985) p. 24-25
3 Reklaitis G. V. Perspectives for computer-aided batch process engineering. Chem. Eng. Prog. 8 (1985) p. 9-16
4 Reklaitis G. V., Sunol A. K., Rippin D. W. T., Hortacsu O. (1996) Batch Processing Systems Engineering. NATO ASI Series 143, Springer-Verlag, Berlin
5 Stephanopoulos G., Ali S., Linninger A., Salomon E. AIChE Symp. Ser. 323 (2000) p. 46-57
6 Puigjaner L., Espuña A., Reklaitis G. V. (2002) Frameworks for discrete/hybrid production systems. In: Braunschweig B., Gani R. (eds.) Software Architectures and Tools for Computer Aided Process Engineering. Computer-Aided Chemical Engineering 11, Elsevier, Amsterdam, pp. 663-700
7 Engell S., Kowalewski S., Schulz C., Stursberg O. Continuous-discrete interactions in chemical processing plants. Proc. IEEE 88 (2000) p. 1050-1068
8 Barton P. I., Pantelides C. C. Modeling of combined discrete/continuous processes. AIChE J. 40 (6) (1994) p. 966-979
9 Kondili E., Pantelides C. C., Sargent R. W. H. A general algorithm for short-term scheduling of batch operations - I. Mixed integer linear programming formulation. Comput. Chem. Eng. 17 (1993) p. 211-227
10 Pantelides C. C. Unified frameworks for optimal process planning and scheduling. Proceedings of the 2nd Conference on Foundations of Computer-Aided Process Operations. CACHE (1994) pp. 253-274, New York
11 Schilling G., Pantelides C. C. A simple continuous-time process scheduling formulation and a novel solution algorithm. Comput. Chem. Eng. S20 (1996) p. S1221-S1226
12 Reklaitis G. V., Mockus L. Mathematical programming formulation for scheduling of batch operations based on non-uniform time discretization. Acta Chim. Slov. 42 (1995) p. 81-86
13 Mockus L., Reklaitis G. V. Continuous time representation in batch/semicontinuous process scheduling - randomized heuristics approach. Comput. Chem. Eng. S20 (1996) p. S1173-S1178
14 Zhang X., Sargent R. W. H. The optimal operation of mixed production facilities - extensions and improvements. Comput. Chem. Eng. S20 (1996) p. S1287-S1292
15 Shah N. Single- and multi-site planning and scheduling: current status and future challenges. AIChE Symp. Ser. 320 (1998) p. 75-90
16 Silver E., Pyke D., Peterson R. (1998) Inventory Management and Production Planning and Scheduling. John Wiley and Sons, New York
17 Graells M., Cantón J., Peschaud B., Puigjaner L. General approach and tool for the scheduling of complex production systems. Comput. Chem. Eng. 22S (1998) p. S395-S402
18 Puigjaner L. Handling the increasing complexity of detailed batch process simulation and optimization. Comput. Chem. Eng. 23S (1999) p. S929-S943
19 Cantón J. (2003) Integrated Support System for Planning and Scheduling of Batch Chemical Plants. PhD Thesis, Universitat Politècnica de Catalunya
20 Rijnsdorp J. E. (1991) Integrated Process Control and Automation. Elsevier, Amsterdam
21 Verwater-Lukszo Z. (1997) A Practical Approach to Recipe Improvement and Optimization in the Batch Processing Industry. PhD Thesis, Eindhoven Technische Universiteit, Eindhoven, The Netherlands
22 Box G. E. P., Draper N. R. (1969) Evolutionary Operation. Wiley, New York
23 Graells M., Loberg E., Delgado A., Font E., Puigjaner L. Batch production scheduling with flexible recipes: the single product case. AIChE Symp. Ser. 320 (1998) p. 286-292
24 Romero J., Espuña A., Friedler F., Puigjaner L. A new framework for batch process optimization using the flexible recipe. Ind. Eng. Chem. Res. 42 (2003) p. 370-379
25 ANSI/ISA-S88.01 (1995) Batch Control. Part I: Models and Terminology. American National Standards Institute, Washington D.C.
26 Verwater-Lukszo Z., Keesman K. J. Computer-aided development of flexible batch production recipes. Prod. Planning Control 6 (1995) p. 320-330
27 Sanmartí E., Holczinger T., Puigjaner L., Friedler F. Combinatorial framework for effective scheduling of multipurpose batch plants. AIChE J. 48 (11) (2002) p. 2557-2570
28 Keesman K. J. Application of flexible recipes for model building, batch process optimization and control. AIChE J. 39 (4) (1993) p. 581-588
29 Rippin D. W. T. Batch process systems engineering: a retrospective and prospective review. Comput. Chem. Eng. S17 (1993) p. S1-S13
7 Supply Chain Management and Optimization
Lazaros G. Papageorgiou
7.1 Introduction
Modern industrial enterprises are typically multiproduct, multipurpose and multisite facilities operating in different regions and countries and dealing with a global international clientele. In such enterprise networks, the issues of global enterprise planning, coordination, cooperation and robust responsiveness to customer demands at the global as well as the local level are critical for ensuring effectiveness, competitiveness, business sustainability and growth. In this context, it has long been recognized that there is a need for efficient integrated approaches to reduce capital and operating costs, increase supply-chain productivity and improve business responsiveness that consider various levels of enterprise management, plant-wide coordination and plant operation in a systematic way. A supply chain is a network of facilities and distribution mechanisms that performs the functions of material procurement, material transformation to intermediates and final products, and distribution of these products to customers. A definition provided by theSupplyChain.com (http://www.thesupplychain.com) is: "SCM is a strategy where business partners jointly commit to work closely together, to bring greater value to the consumer and/or their customers for the least possible overall supply cost. This coordination includes that of order generation, order taking and order fulfilment/distribution of products, services or information. Effective supply-chain management enables businesses to make informed decisions along the entire supply chain, from acquiring raw materials to manufacturing products to distributing finished goods to the consumers. At each link, businesses need to make the best choices about what their customers need and how they can meet those requirements at the lowest possible cost."
A similar definition has also been given by Beamon (1998) by defining a supply chain as an integrated process with a number of business entities (i.e., suppliers, manufacturers, distributors, retailers). A key characteristic of a supply chain is a forward flow of material from suppliers to customers and a backward flow of information from customers towards suppliers. The supply-chain concept has in recent years become one of the main approaches to achieving enterprise efficiency. The terminology implies that a system view is taken rather than a functional or hierarchical one. Enterprises cannot be competitive without considering supply-chain activities. This is partially due to the evolving higher specialization in a more differentiated market. Most importantly, competition drives companies toward reduced cost structures with lower inventories, more effective transportation systems, and transparent systems able to support information throughout the supply chain. A single company rarely controls the production of a commodity as well as sourcing, distribution, and retail. Many typical supply chains today have production that spans several countries, and product markets are global. The opportunities for supply-chain improvements are large. Costs of keeping inventory throughout the supply chain to maintain high customer service levels (CSLs) are generally significant. There is wide scope to reduce the inventory while still maintaining the high service standards required. Furthermore, the manufacturing processes can be improved so as to employ current working capital and labor more efficiently. It has widely been recognized that enhanced performance of supply chains necessitates (a) appropriate design of supply-chain networks and their components and (b) effective allocation of available resources over the network (Shah 2004). In the last few years, there has been a multitude of efforts focused on providing improvements in supply-chain management and optimization. These efforts span a wide range of models, from commercial enterprise resource planning systems and so-called advanced planning systems to academic achievements (for example, linear and mixed-integer programming and multiagent systems).

There are three main areas in supply-chain modeling research:

• supply-chain design and planning;
• simple inventory-replenishment dynamics;
• "novel" applications (e.g., optimization of taxation/transfer prices, cross-chain planning etc.).
The main aim of this chapter is to provide a comprehensive review of recent work on supply-chain management and optimization, mainly focused on the process industry. The first part will describe the key decisions and performance metrics required for efficient supply-chain management, while the second will critically review research work on enhancing the decision-making for the development of the optimal infrastructure (assets and network) and planning. The presence of uncertainty within supply chains will also be considered, as this is an important issue for efficient capacity utilization and robust infrastructure decisions. Next, different frameworks are presented which capture the dynamic behavior of the supply chains by establishing efficient inventory-replenishment management strategies. The subsequent section of this chapter considers management and optimization of supply chains involving other novel aspects. Finally, available software tools for supply-chain management will be outlined and future research needs for the process systems engineering community will be identified.

7.2 Key Features of Supply Chain Management

Management of supply chains is a complex task, mainly due to the large size of the physical supply network and inherent uncertainties. In a highly competitive environment, improved decisions are required for efficient supply-chain management at both strategic and operational levels, with time horizons ranging from several years to a few days, respectively. Depending on the level, one or more of the following decisions are taken:

• number, size and location of manufacturing sites, warehouses and distribution centres;
• production decisions related to plant production planning and scheduling;
• network connectivity (e.g., allocation of suppliers to plants, warehouses to markets etc.);
• management of inventory levels and replenishment policies;
• transportation decisions concerning mode of transportation (e.g., road, rail etc.) and also material shipment size.
In general, supply chains can be categorized as domestic or international, depending on whether they are based in a single country or in multiple countries, respectively (Vidal and Goetschalckx 1997). The latter case is more complex, as more global aspects need to be considered, such as:

• different tax regimes and duties;
• exchange rates;
• transfer prices;
• differences in operating costs.
It should be mentioned that effective application of suitable forecasting techniques is often critical to successful supply-chain management (see, for example, Makridakis and Wheelwright (1989)). These quantitative forecasting techniques provide accurate forecasts (usually for product demands), which can then be used for planning purposes. The efficiency and effectiveness of the derived supply-chain networks can be assessed by establishing appropriate performance measures. These measures can then be used to compare alternative systems or to design a system with an appropriate level of performance. Beamon (1998) has described suitable performance measures by categorizing them as qualitative and quantitative. Qualitative performance measures include customer satisfaction, flexibility, information and material flow integration, effective risk management and supplier performance. Appropriate quantitative performance measures include:
• measures based on financial flow (cost minimization, sales maximization, profit maximization, inventory investment minimization and return on investment);
• measures based on customer responsiveness (fill rate maximization, product lateness minimization, customer response time minimization, lead-time minimization and function duplication minimization).
7.3 Supply Chain Design and Planning
Supply-chain design and planning determines the optimal infrastructure (assets and network) and also seeks to identify how best to use the production, distribution and storage resources in the chain to respond to orders and demand forecasts in an economically efficient manner. It is envisaged that large benefits will stem from coordinated planning across sites, in terms of costs and market effectiveness. Most business processes dictate that a degree of autonomy is required at each manufacturing and distribution site, but pressures to coordinate responses to global demand while minimizing costs imply that simultaneous planning of production and distribution across plants and warehouses should be undertaken. The need for such coordinated planning has long been recognized in the management science and operations research literature. A number of mathematical models have been presented with various features: steady-state, multiperiod, deterministic or stochastic. Early research in this field was mainly focused on location-allocation models. Geoffrion and Graves (1974) present a model to solve the problem of designing a distribution system with optimal location of the intermediate distribution facilities between plants and customers. In particular, they aim to determine which distribution centre (DC) sites to use, what size DC to have at each selected site, what customer zones to serve and the transportation flow for each commodity. The objective is to minimize the total distribution cost (transportation cost and investment cost) subject to a number of constraints such as supply constraints, demand constraints and specification constraints regarding the nature of the problem. The problem is formulated as a mixed-integer linear programming (MILP) problem, which is solved using Benders decomposition. The model is applied to a case study for a supply chain comprising 17 commodity classes, 14 plants, 45 possible distribution centre sites and 121 customer demand zones. The risks arising from the use of heuristics in distribution planning were also identified and discussed early on by Geoffrion and van Roy (1979). Three examples were presented in the area of distribution planning demonstrating the failure of "common sense" methods to come up with the best possible solution. This is due to the failure to enumerate all possible combinations, the use of local improvement procedures instead of global ones, and the failure to take into account the interactions in the system. Wesolowsky and Truscott (1975) present a mathematical formulation for the multiperiod location-allocation problem with relocation of facilities. They model a
small distribution network comprising a set of facilities aiming to serve the demand at given points. The model incorporates two types of discrete decisions, one involving the assignment of customers to facilities and the other the location of the nodes. They consider both steady-state and time-varying demands. Williams (1983) develops a dynamic programming algorithm for simultaneously determining the production and distribution batch sizes at each node within a supply-chain network. The average cost is minimized over an infinite horizon. Brown et al. (1987) present an optimization-based decision algorithm for a support system used to manage complex problems involving facility selection, equipment location and utilization, and the manufacture and distribution of products. They focus on operational issues such as where each product should be produced, how much should be produced in each plant, and from which plant products should be shipped to customers. Some strategic issues are also taken into account, such as the location of the plants and the number, kind and location of facilities (plants). The resulting MILP model is solved using a decomposition strategy. It is applied to a real case for the NABISCO Company. A two-phase approach was used by Newhart et al. (1993) to design an optimal supply chain. First, a combination of mathematical programming and heuristic models is used to minimize the number of product types held in inventory throughout the supply chain. In the second phase, a spreadsheet-based inventory model determines the minimum safety stock required to absorb demand and lead-time fluctuations. Pooley (1994) presents the results of a MILP formulation used by the Ault Foods company to restructure their supply chain. The model aims to minimize the total operating cost of a production and distribution network. Data are obtained from historical records; data collection is described as one of the most time-consuming parts of the project. Binary variables characterize the existence of plants and warehouses and the links between customers and warehouses. Wilkinson et al. (1996) describe a continent-wide industrial case study. This involved optimally planning the production and distribution of a system with 3 factories, 14 market warehouses and over 100 products. A great deal of flexibility existed in the network which, in principle, enabled the production of products for each market at each manufacturing site. Voudouris (1996) develops a mathematical model designed to improve efficiency and responsiveness in a supply chain. The target is to improve the flexibility of the system. He identifies two types of manufacturing resources: activity resources (manpower, warehouse doors, packaging lines, etc.) and inventory resources (volume of intermediate storage, warehouse area). The activity resources are related to time while the inventory resources are related to space. The objective function aims at representing the flexibility of the plant to absorb unexpected demands. Pirkul and Jayaraman (1996) present a multicommodity system concerning production, transportation, and distribution planning. Single sourcing is forced for customers, but warehouses can receive products from several manufacturing plants. The objective is to minimize the combined costs of establishing and operating the plants and the warehouses, together with the cost of distribution to customers.
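A minimal example of the distribution-network design MILPs discussed above (which DC sites to open and how to route flows to demand zones) can be written with an off-the-shelf solver. The sketch below uses the PuLP library and made-up data; the cited models are far richer (multiple commodities, DC sizing, Benders decomposition and so on).

```python
"""Toy distribution-centre location MILP (illustrative sketch, PuLP).

Decide which candidate DCs to open and how to ship a single commodity from
DCs to customer zones so that total fixed plus transport cost is minimised.
All data below are hypothetical.
"""
import pulp

dcs = ["DC1", "DC2", "DC3"]
zones = ["Z1", "Z2", "Z3", "Z4"]
fixed_cost = {"DC1": 120, "DC2": 100, "DC3": 90}
capacity = {"DC1": 70, "DC2": 50, "DC3": 40}
demand = {"Z1": 30, "Z2": 25, "Z3": 20, "Z4": 15}
transport = {  # cost per unit shipped
    ("DC1", "Z1"): 2, ("DC1", "Z2"): 4, ("DC1", "Z3"): 5, ("DC1", "Z4"): 6,
    ("DC2", "Z1"): 5, ("DC2", "Z2"): 2, ("DC2", "Z3"): 3, ("DC2", "Z4"): 4,
    ("DC3", "Z1"): 6, ("DC3", "Z2"): 5, ("DC3", "Z3"): 2, ("DC3", "Z4"): 2,
}

m = pulp.LpProblem("dc_location", pulp.LpMinimize)
open_dc = pulp.LpVariable.dicts("open", dcs, cat="Binary")
flow = pulp.LpVariable.dicts("flow", transport.keys(), lowBound=0)

# objective: fixed opening costs plus transportation costs
m += (pulp.lpSum(fixed_cost[d] * open_dc[d] for d in dcs)
      + pulp.lpSum(transport[d, z] * flow[d, z] for d, z in transport))

for z in zones:                                   # satisfy every demand zone
    m += pulp.lpSum(flow[d, z] for d in dcs) == demand[z]
for d in dcs:                                     # respect DC capacity only if opened
    m += pulp.lpSum(flow[d, z] for z in zones) <= capacity[d] * open_dc[d]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("open DCs:", [d for d in dcs if open_dc[d].value() > 0.5])
print("total cost:", pulp.value(m.objective))
```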
Camm et al. (1997) present a methodology combining integer programming, network optimization and geographical information systems (GIS) for Procter and Gamble's North American supply chain. The overall problem is decomposed into a production (product-plant allocation) problem and a distribution network design problem. Significant benefits were reported with the reconstruction of Procter and Gamble's supply chain (a reduction of 20% in production plants) and annual savings of $200m. McDonald and Karimi (1997) consider multiple facilities that effectively produce products on single-stage continuous lines for a number of geographically distributed customers. Their basic model is of a multiperiod linear programming (LP) form, and takes account of available processing time on all lines, transportation costs and shortage costs. An approximation is used for the inventory costs, and product transitions are not modeled. They include a number of additional supply-chain-related constraints such as single sourcing, internal sourcing and transportation times. Other planning models of this type do not consider each product in isolation, but rather group products that place similar demands on resources into families, and base the higher-level planning function on these families. More sophisticated models exist in the process systems literature. A model which selects processes to operate from an integrated network while ensuring that the network capacity constraints are not exceeded is described in Sahinidis et al. (1989). Means of improving the solution efficiency of this class of problems can be found in Sahinidis and Grossmann (1991) and Liu and Sahinidis (1995). Uncertainty in demands and prices is modeled in Liu and Sahinidis (1996) and Iyer and Grossmann (1998) by using a number of scenarios for each time period, thus resulting in multiscenario, multiperiod optimization models. Computational enhancements of the above large-scale model have been proposed by applying projection techniques (Liu and Sahinidis 1996) or bilevel decomposition (Iyer and Grossmann 1998). A potential limitation of these approaches is that they use expectations rather than a variability metric of the second-stage costs. Ahmed and Sahinidis (1998) resolved this difficulty by introducing a one-sided robustness measure that penalizes second-stage costs that are above the expected cost. Similar measures based on expected downside risk have been developed by Eppen et al. (1989), and have recently been applied to capacity planning problems for pharmaceutical products at different stages in clinical trials (Gatica et al. 2003). Applequist et al. (2000) focus on risk management for chemical supply-chain investments. They introduce the risk premium approach in order to determine the right balance between the expected value of investment performance and the associated variance. An investment decision is approved provided that its expected return is better than those in the financial market with similar variance. An efficient polytope integration procedure is described to evaluate expected values and variances. Gupta and Maranas (2000) consider the problem of mid-term supply-chain planning under demand uncertainty. A two-stage stochastic programming approach is proposed, with the first stage determining all production decisions (here-and-now), while all supply-chain decisions are optimized in the second stage (wait-and-see). This work is extended by Gupta et al. (2000) by integrating the previous two-stage
framework with a chance constraint programming approach to capture the trade-offs between customer demand satisfaction and production costs. The proposed approach was applied to the problem of McDonald and Karimi (1997). Sabri and Beamon (2000) develop a steady-state mathematical model for supply-chain management by combining strategic and operational design and planning decisions using an iterative solution procedure. A multiobjective optimization procedure is used to account for multiple performance measures, while uncertainties in production, delivery and demands are also included. A MILP model is proposed by Timpe and Kallrath (2000) for the optimal planning of multisite production networks. The model is multiperiod, based on a time-indexed formulation allowing equipment items to operate in different modes. A novel feature of the model is that it can accommodate different timescales for production and distribution of variable length, thus facilitating finer resolution at the start of the planning horizon. The above model was applied to a production network of four plants located in three different regions. A larger example is briefly described in Kallrath (2000), which demonstrates the use of an optimization model involving 7 production sites with 27 production units operating in fixed-batch mode. Bok et al. (2000) present a multiperiod optimization model for continuous process networks with the main focus on operational decisions over short time horizons (one week to one month). Special features of the supply chain are taken into account such as sales, intermittent deliveries, production shortfalls, delivery delays, inventory profiles and job changeovers. A bilevel decomposition solution procedure is proposed to reduce computational effort and deal with larger scale problems. Tsiakis et al. (2001) describe a multiperiod MILP model for the design of supply-chain networks. The model determines production capacity allocation among different products, and the optimal layout and flow allocations of the distribution network, by minimizing an annualized network cost. Demand uncertainty is also introduced in the multiperiod model using a scenario-based approach, with each scenario representing a possible future outcome and having a given probability of occurrence. Papageorgiou et al. (2001) present an optimization-based approach for pharmaceutical supply chains to determine the optimal product portfolio and long-term capacity planning at multiple sites. The problem is formulated as a MILP model, taking into account both the particular features of pharmaceutical active ingredient manufacturing and the global trading structures. Particular emphasis is placed upon modeling of financial flows between supply-chain components. A comprehensive review on pharmaceutical supply chains is given by Shah (2003). Kallrath (2002) describes a multiperiod mathematical model that combines operational planning with strategic aspects for multisite production networks. The model is similar to the one presented by Timpe and Kallrath (2000) but allows flexible production unit-site allocation (purchase, opening, shutdown), and raw material purchases and contracts. Sensitivity analyses were also performed, indicating that the optimal strategic decisions were stable up to a 20% change in demand. Ahmed and Sahinidis (2003) propose a fast approximation scheme for solving multiscenario integer optimization problems, which is particularly relevant to capacity planning problems under discrete uncertainty.
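The scenario-based treatment of demand uncertainty used in several of the works above (two-stage stochastic programming with here-and-now capacity decisions and wait-and-see production decisions) can be illustrated with a deliberately small model. The sketch below is a generic two-stage example in PuLP with hypothetical scenario data, not a reproduction of any of the cited formulations.

```python
"""Minimal two-stage, scenario-based capacity planning sketch (PuLP).

First stage (here-and-now): installed capacity.  Second stage (wait-and-see):
production and sales in each demand scenario.  All data are hypothetical.
"""
import pulp

scenarios = {"low": 0.3, "base": 0.5, "high": 0.2}     # scenario probabilities
demand = {"low": 60, "base": 100, "high": 140}
price, unit_cost, capex_per_unit = 10.0, 4.0, 2.5

m = pulp.LpProblem("capacity_planning", pulp.LpMaximize)
cap = pulp.LpVariable("capacity", lowBound=0)
prod = pulp.LpVariable.dicts("prod", scenarios, lowBound=0)

# expected second-stage margin minus first-stage investment cost
m += (pulp.lpSum(p * (price - unit_cost) * prod[s] for s, p in scenarios.items())
      - capex_per_unit * cap)

for s in scenarios:
    m += prod[s] <= cap           # cannot produce beyond installed capacity
    m += prod[s] <= demand[s]     # cannot sell more than the scenario demand

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("installed capacity:", cap.value())
print("expected profit:", pulp.value(m.objective))
```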
Jackson and Grossmann (2003) describe a multiperiod nonlinear programming model for the production planning and distribution of multisite continuous multiproduct plants where each production plant is represented by nonlinear process models. Spatial and temporal decomposition solution schemes based on Lagrangean decomposition are proposed to enhance computational performance. Ryu and Pistikopoulos (2003) present a bilevel approach for the problem of supply-chain network planning under uncertainty. The resulting optimization problem is then solved efficiently using parametric programming techniques. Levis and Papageorgiou (2004) extend the previous work of Papageorgiou et al. (2001) to consider the uncertainty in the outcome of clinical trials. They propose a two-stage, multiscenario MILP model to determine both the product portfolio and the multisite capacity planning, while taking into account the trading structure of the company. A hierarchical solution algorithm is proposed to reduce the computational effort needed for the solution of the resulting large-scale optimization models. Neiro and Pinto (2004) present an integrated mathematical framework for petroleum supply-chain planning by considering refineries, terminals and pipeline networks. The problem is formulated as a multiperiod, mixed-integer nonlinear programming model, and essentially extends previous work (Pinto et al. 2000) for single refinery operations with nonlinear process models and blending relations. The case study solved represents part of a real-world petroleum supply-chain planning problem in Brazil involving four refineries, five terminals and pipeline networks for crude oil supply and product distribution.

7.3.1 Multisite Capacity Planning Example
Consider a multisite pharmaceutical capacity planning example (Levis and Papageorgiou 2004) with seven potential products (P1-P7) subject to clinical trials and four alternative locations (A-D), where A and B are the sales regions, A is the intellectual property (IP) owner, and B, C and D are the candidate production sites. The entire time horizon of interest is 13 years. In the first 3 years, no production takes place and the outcomes of the clinical trials are not yet known. Initially, there are two suites already in place at production site B. Further decisions for investing in new manufacturing suites are to be determined by the optimization algorithm. It is assumed that the trading structure is given together with the internal pricing policies, as shown in Fig. 7.1. Five out of seven potential products are selected in the product portfolio, while the optimal enterprise-wide pharmaceutical supply chain is illustrated in Fig. 7.2, where location C is not chosen. The investment decision calendar is illustrated in Fig. 7.3. Note that investment decisions for additional manufacturing suites are taken in the early time periods while the clinical trials are still ongoing. The proposed investment plans take into account the construction lead-time (2 and 3 years for nonheader and header suites, respectively) and safeguard the availability of the newly invested equipment right after the end of the clinical trials phase.
Figure 7.1 Trading structure of the company. P1-P7: potential products subject to clinical trials; A-D: four alternative locations, where A and B are the sales regions, A is the intellectual property (IP) owner, and B, C and D are the candidate production sites.

Figure 7.2 Optimal business network.

Figure 7.3 Investment decisions calendar (time in years, t0-t4). Black: site B; grey: site D.

Figure 7.4 Characteristic profiles (production, sales, inventory and demand) for product P1.
Operational variables (detailed production plans, inventory and sales profiles) for the selected products are also determined as illustrated in Fig. 7.4. In particular, the production of product P1 is taking place at both manufacturing sites B and D. Mainly due to the proposed investment plan and production policy, the total manufactured amount of P1 fully satisfies customer demand at all time periods.
7.4 Analysis of Supply Chain Policies
The operation of supply chains is a complex task, mainly due to the large physical production and distribution network flows, the inherent uncertainties and the dynamics associated with the internal information flow. At the operations level, it is crucial to ensure enhanced responsiveness to changing market conditions. In this section, different frameworks are presented which capture the dynamic behavior of the supply chains by establishing efficient inventory-replenishment management strategies. Beamon (1998) provides a comprehensive review of supply-chain models and classifies them as analytical and simulation. Analytical models usually use an aggregate description of supply chains and optimize high-level decisions involving unknown
configurations, while simulation models can be used to study the detailed dynamic operation of a fixed configuration under operational uncertainty. In general, simulation is particularly useful in capturing the detailed dynamic performance of a supply chain as a function of different operating policies. Usually, these simulations are stochastic, thus deriving distributions of characteristic performance measures based on samples from distributions of uncertain parameters. Gjerdrum et al. (2000) describe a procedure for modeling the physical and decision-making (business process) aspects of a supply chain. A specialty chemical process with international markets, secondary manufacturing plants and primary manufacturing plants illustrates the model. Using this procedure, pragmatic, noninvasive policy and parameter modifications (e.g., safety stocks) that improve performance measures such as average inventory levels, probability of stock-outs and customer service levels (CSLs) are identified. A stochastic simulation approach is then proposed, using the above procedure and sampling from the uncertain parameters, to assess future performance of the supply chain. The above work has recently been extended by Hung et al. (2004), adopting an object-oriented approach to model both physical processes (e.g., production, distribution) and business processes of the supply chain. An efficient sampling procedure is also developed which significantly reduces the number of simulations required. A model predictive control (MPC) framework for planning and scheduling problems is adopted by Bose and Pekny (2000). The framework consists of forecasting and optimization modules. The forecasting module calculates target inventories for future periods, while the optimization module attempts to meet these targets in order to ensure the desired CSL while minimizing inventory. Simulation runs are then performed to study the dynamics of a consumer goods supply chain focusing on promotional demand and lead time as the main control parameters. Different coordination structures of the supply chain are also investigated. Van der Vorst et al. (2000) present a method for modeling the dynamic behavior of food supply chains by applying discrete-event simulation based on time-colored Petri nets. Alternative designs of the supply-chain infrastructure and of operational management and control are then evaluated, with the main emphasis being placed upon distribution of food products. Perea-Lopez et al. (2001) describe a dynamic modeling approach for supply-chain management by considering the flow of material and information within the supply chain. The impact of different supply-chain control policies on the performance of supply chains is evaluated using a decentralized decision-making framework. This is demonstrated through a polymer case study with one manufacturing site, one distribution network and three customers. Perea-Lopez et al. (2003) extend their previous work (Perea-Lopez et al. 2001) by proposing a multiperiod MILP optimization model within an MPC strategy. A centralized approach is adopted where the corresponding MILP model considers the whole supply chain, involving suppliers, manufacturing, distribution and customers simultaneously. The benefits of centralized over decentralized management are then emphasized in a case study showing profit increases of up to 15%.
Agent-based techniques have recently been reported in the process systems literature for the efficient management of supply-chain systems. Garcia-Flores et al. (2000) and Garcia-Flores and Wang (2002) present a multiagent modeling system for supply-chain management of process industries. Retailers, warehouses, plants and raw material suppliers are modeled as a network of cooperative agents. A commercial scheduling system is integrated in the multiagent framework, as plant scheduling usually dominates the supply-chain performance. A case study with a single multipurpose batch plant producing paints and coatings is then used to illustrate the capabilities of the system. A similar approach has also been reported by Gjerdrum et al. (2001a) to simulate and control a demand-driven supply-chain network system, with the manufacturing component being optimized through mathematical programming. A number of agents have been used, including warehouses, customers, plants, and logistics functions. The plant agent, which is responsible for production scheduling, uses optimization techniques, while the other agents of the supply chain are mainly rule-based. The proposed system is then applied to a supply chain with two manufacturing plants by investigating the effect of different replenishment policies on the supply-chain performance. Julka et al. (2002a,b) propose an agent-based framework for modeling, monitoring and management of process supply chains. A refinery application is considered for the efficient management of the crude oil procurement business process by investigating the impact of different procurement policies, demand fluctuations and changes in plant configuration.
7.4.1 A Pharmaceutical Supply Chain Example
A pharmaceutical supply chain example (Gjerdrum et al. 2000) is shown in Fig. 7.5. A primary manufacturing plant is situated in Europe. Secondary formulation sites in Asia and America receive AI (active ingredient) from this plant and produce final products for the main warehouses in Japan and the US. There are two main SKUs in the Japanese market: products A and B. Also, in the US market there are two principal products, C and D.

Figure 7.5 A supply chain example: primary manufacturing (Europe), secondary manufacturing (Asia, America), market warehouses and products.
There are also several other product SKUs that share secondary manufacturing resources (in Asia and America) which are handled by other (not explicitly modeled) warehouses, e.g., in Europe. The CSL deemed sufficient for this supply chain by the decision-makers (called the target CSL) is 97%. The low figure is due to large inventories held at external warehouses downstream in the supply chain, which will buffer against any temporary stock-outs. Occasional SKU stock-outs are therefore not considered to be harmful, since the end consumer is not affected. Therefore, the aim of this case study is to reduce inventory while maintaining the target CSL. The demand-management SKU data are presented in Table 7.1.

Table 7.1  Demand management SKU base case data

                              Product A   Product B   Product C   Product D
Safety stock (weeks)          6           6           6           6
MOQ (units)                   10,000      5000        5000        20,000
Deviation from forecast (%)   25          20          25          25
Pack size (units)             12          30          30          30
Initial stock (units)         15,000      15,000      60,000      60,000
The production data are given in Table 7.2.

Table 7.2  Secondary manufacturing site base case data

                                   Asia    USA
Safety stock (kg)                  50      150
AI order quantity (kg)             30      130
Production capacity (h week-1)     60      60
Production rate (units h-1)        650     1200
Flexibility of production (%)      40      25
Initial AI stock (kg)              80      180
One of the most important supply-chain performance indicators is the amount of unutilized working capital in the chain. By closely tracking simulated inventory, substantial costs can be taken out of the supply chain. This must be done while maintaining the required high CSLs, low probabilities of stock-outs and supply-chain efficiency in general. Therefore, we simulate the inventory levels as well as the more traditional customer-focused aspects of the supply-chain performance. Each simulation is repeated 100 times to ensure statistically trustworthy results. CSL is defined as part-fill on-time. When there is enough inventory at hand, the sales equal the demand. When inventory runs out, it is assumed that the stock on hand is sold while the rest of the demand is left unfulfilled. The probability of stock-out (PSO) is simply the number of times the inventory is zero divided by the total number of data points, i.e., the horizon length times the number of simulation runs. Hence, this complete supply chain is dynamically simulated over a prespecified horizon of, e.g., 2 years. First, initialization of the
model is performed with respect to fixed policy parameters, business processes and initial stock levels. At each time-step (e.g., each week) in the horizon, input parameters and model data from previous time steps are collected. Actual sales and machine breakdowns are generated from stochastic distributions. Stock positions, current supply-chain orders and forecasts are then updated and evaluated. When any model variables violate the current procedures or policies (such as the safety stock), the model issues new directions of action (such as issuing new orders). To obtain statistically significant results of the stochastic process, the simulation is repeated over a number of runs. At each run, informative data are collected. Finally, all the supply-chain simulation results are extracted and evaluated. The experiments presented in Table 7.3 demonstrate the response of the supply chain to changes in internal and external parameters. These experiments are useful in that they provide information on whether current policies and external factors can be modified while maintaining strong supply-chain performance. INV is the horizon-average finished product (SKU) inventory. High demand variability represents a complex market to forecast. High order quantity shows what happens if a typical service level parameter is modified. Although these parameters obviously affect the performance levels, it seems that the demand management performance levels are not severely affected by the demand variability or the order quantity. The ramping and decreasing demands give conservative measures, since it is assumed that the forecast will not be updated during the horizon. The US market is able to handle a soaring demand better, due to fewer conflicting products.

Table 7.3  Supply-chain experiments. CSL: customer service level; PSO: probability of stock-out; INV: horizon-average finished product inventory
Experiment                   Product A (Asia)             Product C (USA)
                             CSL (%)  PSO (%)  INV        CSL (%)  PSO (%)  INV
Base case                    98.33    1.81     48,074     98.81    1.33     105,041
High demand variability      98.05    2.18     48,678     97.76    2.31     107,315
High order quantity          97.94    2.30     58,688     98.21    1.96     109,049
Soaring demand               95.52    4.54     43,997     98.73    1.22     112,554
Collapsing demand            99.83    0.19     54,645     99.51    0.75     111,789
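The kind of repeated stochastic simulation described above, swept over replenishment policies to produce curves like those in Figs. 7.6-7.8, can be sketched as follows. The demand statistics, lead time and order-up-to rule are hypothetical and much simpler than the modeled business processes of the case study.

```python
"""Sketch of a repeated stochastic inventory simulation: for each forward-cover
policy, estimate CSL (part-fill on-time), probability of stock-out (PSO) and
average inventory over many runs.  All parameters below are hypothetical."""
import random

WEEKS, RUNS = 104, 100          # 2-year horizon, 100 repetitions (as in the text)

def simulate(cover_weeks, mean_demand=1200.0, cv=0.25, lead_time=2, seed=0):
    rng = random.Random(seed)
    served = demanded = inv_sum = 0.0
    stockouts = points = 0
    for _ in range(RUNS):
        inventory, pipeline = cover_weeks * mean_demand, [0.0] * lead_time
        for _ in range(WEEKS):
            inventory += pipeline.pop(0)                      # receive earlier order
            demand = max(0.0, rng.gauss(mean_demand, cv * mean_demand))
            sales = min(inventory, demand)                    # part-fill on-time
            inventory -= sales
            served, demanded = served + sales, demanded + demand
            stockouts += inventory <= 0.0
            points += 1
            inv_sum += inventory
            # order up to the forward-cover target, netting stock in the pipeline
            target = cover_weeks * mean_demand
            pipeline.append(max(0.0, target - inventory - sum(pipeline)))
    return 100 * served / demanded, 100 * stockouts / points, inv_sum / points

for cover in range(1, 11):
    csl, pso, avg_inv = simulate(cover)
    print(f"{cover:2d} weeks cover: CSL {csl:5.1f}%  PSO {pso:4.1f}%  avg inv {avg_inv:8.0f}")
```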
In Fig. 7.6, the expected number of stock-outs for various safety stock policies is shown for product C in the US market. The value represents the risk of a stock-out occurring during the simulated period. In Fig. 7.7, the CSL defined as part-fill on-time is shown for different policies of weekly forward cover stock for product C. It can be seen that a policy of 4 weeks of forward cover will just about be sufficient to satisfy the target CSL of 97%. In Fig. 7.8, the resulting mean average inventory values for various stock policies are shown for product C. The 4 weeks of forward cover correspond to an average inventory of about 70,000 units. This value can then be utilized to calculate the market inventory cost.
Figure 7.6 Probability of stock-out versus weeks of forward cover (product C).

Figure 7.7 Customer service level versus weeks of forward cover (product C).

Figure 7.8 Average inventory versus weeks of forward cover (product C).
7.5 Multienterprise Supply Chains
In recent years, the concept of multienterprise supply chains has gained increasing attention, since it promotes all the benefits of extended multienterprise collaboration. The determination of policies that optimize the performance of the entire supply chain as a whole, while ensuring adequate rewards for each participant, is crucial, yet the relevant work is very limited. Conflicting interests in general extended multienterprise supply chains frequently lead to problems in how to distribute the overall value to each member of the supply
chain. A simple approach to enhancing the performance of a multienterprise supply chain is to maximize the summed enterprise profits of the entire supply chain subject to various network constraints. When the overall system is optimized in this fashion, there is no automatic mechanism to allow profits to be fairly apportioned among participants. Solutions to this class of problems usually exhibit quite uneven profit distribution and are therefore impractical. They do, however, give an indication of the best possible total profit attainable in the supply chain as well as an indication of the best activities to carry out. van Hoek (1998) raises the question of how to divide supply-chain revenues among the players in a supply-chain system when there is no leading player to determine how the distribution of benefits should be handled. He states that supply-chain control is no longer based on direct ownership but rather on integration over interfaces of functions and enterprises. Traditional performance indicators limit the possibilities of optimizing the supply-chain network because the measures do not correctly address the wide opportunities for improvement. Cohen and Lee (1989) deliver a model for making resource deployment decisions in a global supply-chain network and solve different scenarios using an extensive mixed-integer nonlinear programming (MINLP) model. They also discuss several "policy options" for plant utilization, supply and distribution strategies. Pfeiffer (1999) describes transfer pricing in a supply chain consisting of procurement, manufacturing and selling units of one single company. His theoretical model handles one commodity at each node and does not include any capacity constraints. He proposes a transfer-price system governed by the headquarters, which fixes a specific transfer-price level. Each node optimizes its own decisions independently to maximize a given profit function, according to the price level fixed by the headquarters. After the decentralized optimization, headquarters evaluates and collects the overall results obtained and chooses a new transfer price that leads to a higher profitability. Alles and Datar (1998) claim that the cost-based transfer prices of companies are usually driven by their competitive environment. An enterprise may cross-subsidize products in order to increase its ability to raise prices. Often, transfer prices for relatively lower cost products are decreased. The authors give evidence that transfer prices are determined based on strategic decisions rather than on internal cost systems. Jose and Ungar (2000) propose an approach to decentralized pricing optimization of interprocess streams in chemical industry companies. Their iterative auction method determines the prices of process streams so as to maximize an objective for a single chemical company, while each division within the company is constrained by its available resources. The approach is interesting in that each division conceals its private information from the other parties within the so-called micro supply chain. It normally takes several iterations for the model to converge, and the user has to define the limited amount of slack resources utilized. One of the main conceptual differences between their approach and the one presented here is that they regard the channel members as adversarial and competitive for resources rather than cooperative. Also, they use a slack-resource iterative auction approach, whereas
here the solution approach is to solve a noniterative separable MILP problem. Ballou et al. (2000) stress the importance of common objectives in the supply chain. Improvements in cost savings and customer service enhancements that are unattainable for single companies can be obtained by cooperating companies. The authors point out that problems arise if some of the firms benefit at the expense of the others. The resolution of conflicts between supply-chain partners must be of focal interest and, to keep the coalitions intact, the rewards of cooperation must be redistributed. They identify three means to achieve this:

1. Metrics could be developed to capture the nature of interorganizational cooperation and to simplify benefit analysis.
2. Information-sharing mechanisms could transfer information about the benefits of cooperation among the members in the supply chain.
3. Allocation methods could be developed that fairly distribute the rewards of cooperation between the members.

According to Pashigian (1998), multiproduct industries form a new market structure characterized by novel market relationships among companies. Collusion takes place when the firms in an industry join together to set prices and outputs. Such an agreement is said to form a cartel. However, game theorists insist that there is an inherent incentive for each firm to cheat on such an agreement, in order to gain more profits for itself. Vidal and Goetschalckx (1997) present a nonconvex optimization model together with a heuristic solution algorithm for the management of a global supply chain by maximizing the after-tax profits of a multinational corporation. The model also considers transfer prices and the allocation of transportation costs as explicit decision variables. Gjerdrum et al. (2001b, 2002) present a MINLP model including a nonlinear Nash-type objective function for fair profit optimization of an n-enterprise supply-chain network. The supply-chain planning model considers intercompany transfer prices, production and inventory levels, resource utilization, and flows of products between echelons. Efficient solution procedures for the above model are described by Gjerdrum et al. (2001b, 2002), based on separable programming and spatial branch-and-bound, respectively. Computational results indicate profits very close to those obtained by simple single-level optimization (e.g., maximization of total profit), but more equitably distributed among partners. Chen et al. (2003) propose a fuzzy decision-making approach for fair profit distribution in multienterprise supply-chain networks. The proposed framework can accommodate multiple objectives such as maximization of the profit of each participant enterprise, the CSL, and the safe inventory level.
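The Nash-type fair-profit idea can be made concrete with a very small numerical sketch: maximize the product of each enterprise's profit above its minimum acceptable level, for a fixed total profit generated by the chain. The figures below are hypothetical, and the cited works embed this objective inside full supply-chain planning MINLPs rather than the bare allocation problem shown here.

```python
"""Sketch of a Nash-type fair-profit split between supply-chain partners.

Maximise the product of (profit_i - minimum_i) subject to the total profit
available from the chain, via the sum-of-logarithms reformulation.  The total
and the lower bounds are hypothetical illustration data.
"""
import numpy as np
from scipy.optimize import minimize

total_profit = 300.0                           # value created by the whole chain (u)
lower_bound = np.array([40.0, 60.0, 80.0])     # minimum acceptable profit per enterprise

def neg_log_nash(x):
    # negative log of the Nash product; minimising this maximises the product
    return -np.sum(np.log(x - lower_bound))

x0 = lower_bound + (total_profit - lower_bound.sum()) / len(lower_bound)
res = minimize(
    neg_log_nash, x0,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - total_profit}],
    bounds=[(lb + 1e-6, None) for lb in lower_bound],
    method="SLSQP",
)
print("fair profit split (u):", np.round(res.x, 1))  # equal surplus above each bound
```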
7.6 Software Tools for Supply Chain Management

The coordination of operations on a global basis requires the implementation and use of software to support these decisions. State-of-the-art software requires the ability to perform constraint-based, multisite planning that can become extremely complex.
Modern software supply-chain tools aim to integrate traditionally fragmented views of operations, and to provide a holistic view of the problem, rather than linking separate planning operations.
7.6.1 Aspen Technologies (http://www.aspentech.com)
The Aspen MIMI supply-chain suite includes the Aspen Strategic Analyzer, which can be used to identify strategic and operational options such as capacity addition, production constraints and distribution modes. Aspen is the leading provider of supply-chain solutions in the process industry, by market share.
7.6.2 i2 Technologies (http://www.i2.com)
Supply-Chain Optimization is a holistic solution and framework to help companies create a macrolevel model for the entire supply chain that controls an integrated workflow environment. The user may choose to extend the capabilities and study in depth specific areas of the supply-chain problem, such as logistics, production, demand fulfilment and profit/revenue analysis.
7.6.3 Manugistics (http://www.manugistics.com)
Network Design and Optimization is a supply-chain design and operation package that is part of the Manugistics supply-chain suite. Among its capabilities is the design of a supplier, manufacturing site and distribution site network in the most effective way. The process considers inventory levels, production strategy, production and transportation costs, lead times and other user-specified constraints.
7.6.4 SAP AG (http://www.sap.com)
MySAP SCM (Supply-Chain Management) is designed to be a complete supply-chain solution. The supply network planning and deployment tool assists planners to balance supply and demand while simultaneously considering purchasing, manufacturing, inventory and transportation constraints. Integrated with other support tools, it aims to provide a complete optimization framework.
7.7 Future Challenges
It is clear that considerable research work has been done on process supply chains, especially in the areas of network design and planning. However, a number of issues provide interesting challenges for further research. As many modern supply chains are characterized by their international nature, optimization-based decisions are required for various features such as taxes, duties, transfer prices, etc. Systematic integration of business/financial and planning models should be considered for efficient supply-chain management (see, for example, Shapiro (2003), Romero et al. (2003), and Badell M., Romero J., Huertas R., Puigjaner L., Comput. Chem. Eng. 28 (2004) p. 45-61). Significant effort has already been put into supply-chain modeling under uncertainty, commonly related to product demands. The treatment of uncertainty still requires research effort to capture more aspects such as product prices, resource availabilities etc. In order to ensure that investment decisions are made optimally in terms of both reward and risk, suitable frameworks for the solution of supply-chain optimization problems under uncertainty are required. Most of the existing frameworks are suitable for two-stage problems, while there is a need for appropriate multistage, multiperiod optimization frameworks for supply-chain management. As most of the resulting optimization problems, and predominantly cases under uncertainty, will be of large scale, there is great scope for developing efficient solution procedures. Aggregation and decomposition techniques are envisaged as promising solution alternatives. It should be mentioned that it is quite important to maintain industrial focus for the successful development of such solution methods. The analysis of supply-chain policies for process industries has recently emerged and this research area is expected to expand. Suitable frameworks seem to be the ones based on agents, MPC and object-oriented systems. A key issue here is the appropriate integration of business and process aspects (see, for example, Hung et al. (2004)). Another emerging research area is the systematic incorporation of sustainability aspects within supply-chain management systems, necessitating the development of multiobjective optimization frameworks (see, for example, Zhou et al. (2000), Hugo and Pistikopoulos (2003)). Finally, research opportunities are evident in the appearance of new types of supply chain, associated, for example, with hydrogen (fuel cells), energy supply, water provision/distribution, fast-response therapeutics and biorefineries.
Acknowledgments
The author is grateful to Nilay Shah, Jonatan Gjerdrum, Panagiotis Tsiakis, Gabriel Gatica and Aaron Levis for useful discussions and contributions to this work.
References

1 Ahmed S., Sahinidis N. V., Ind. Eng. Chem. Res. 37 (1998) p. 1883-1892
2 Ahmed S., Sahinidis N. V., Oper. Res. 51 (2003) p. 461-471
3 Alles M., Datar S., Manage. Sci. 44 (1998) p. 451-461
4 Applequist G. E., Pekny J. F., Reklaitis G. V., Comput. Chem. Eng. 24 (2000) p. 2211-2222
5 Ballou R. H., Gilbert S. M., Mukherjee A., Ind. Market. Manage. 29 (2000) p. 7-18
6 Beamon B. M., Int. J. Prod. Econ. 55 (1998) p. 281-294
7 Bok J. K., Grossmann I. E., Park S., Ind. Eng. Chem. Res. 39 (2000) p. 1279-1290
8 Bose S., Pekny J. F., Comput. Chem. Eng. 24 (2000) p. 329-335
9 Brown G. G., Graves G. W., Honczarenko M. D., Manage. Sci. 33 (1987) p. 1469-1480
10 Camm J. D., Chorman T. E., Dill F. A., Evans J. R., Sweeney D. J., Wegryn G. W., Interfaces 27 (1997) p. 128-142
11 Chen C. L., Wang B. W., Lee W. C., Ind. Eng. Chem. Res. 42 (2003) p. 1879-1889
12 Cohen M. A., Lee H. L., J. Manuf. Oper. Manage. 2 (1989) p. 81-104
13 Eppen G. D., Martin R. K., Schrage L., Oper. Res. 37 (1989) p. 517-527
14 García-Flores R., Wang X. Z., OR Spectrum 24 (2002) p. 343-370
15 García-Flores R., Wang X. Z., Goltz G. E., Comput. Chem. Eng. 24 (2000) p. 1135-1141
16 Gatica G., Papageorgiou L. G., Shah N., Chem. Eng. Res. Des. 81 (2003) p. 665-678
17 Geoffrion A. M., Graves G. W., Manage. Sci. 20 (1974) p. 822-844
18 Geoffrion A. M., van Roy T. J., Sloan Manage. Rev. Summer (1979) p. 31-42
19 Gjerdrum J., Jalisi Q. W. Z., Papageorgiou L. G., Shah N., 5th Annual International Conference on Industrial Engineering Theory, Applications and Practice, Hsinchu, Taiwan, 2000
20 Gjerdrum J., Shah N., Papageorgiou L. G., Prod. Planning Control 12 (2001a) p. 81-88
21 Gjerdrum J., Shah N., Papageorgiou L. G., Ind. Eng. Chem. Res. 40 (2001b) p. 1650-1660
22 Gjerdrum J., Shah N., Papageorgiou L. G., Eur. J. Oper. Res. 143 (2002) p. 582-599
23 Gupta A., Maranas C. D., Ind. Eng. Chem. Res. 39 (2000) p. 3799-3813
24 Gupta A., Maranas C. D., McDonald C. M., Comput. Chem. Eng. 24 (2000) p. 2613-2621
25 Hugo A., Pistikopoulos E. N., Proceedings of the 8th International Symposium on Process Systems Engineering, A and B (2003) p. 214-219
26 Hung W. Y., Kucherenko S., Samsatli N., Shah N., J. Oper. Res. Soc. 55 (2004) p. 801-813
27 Iyer R. R., Grossmann I. E., Ind. Eng. Chem. Res. 37 (1998) p. 474-481
28 Jackson J. R., Grossmann I. E., Ind. Eng. Chem. Res. 42 (2003) p. 3045-3055
29 Jose R. A., Ungar L. H., AIChE J. 46 (2000) p. 575-587
30 Julka N., Srinivasan R., Karimi I., Comput. Chem. Eng. 26 (2002a) p. 1755-1769
31 Julka N., Srinivasan R., Karimi I., Comput. Chem. Eng. 26 (2002b) p. 1771-1781
32 Kallrath J., Chem. Eng. Res. Des. 78 (2000) p. 809-822
33 Kallrath J., OR Spectrum 24 (2002) p. 219-250
34 Levis A. A., Papageorgiou L. G., Comput. Chem. Eng. 28 (2004) p. 707-725
35 Liu M. L., Sahinidis N. V., Ind. Eng. Chem. Res. 34 (1995) p. 1662-1673
36 Liu M. L., Sahinidis N. V., Ind. Eng. Chem. Res. 35 (1996) p. 4154-4165
37 Makridakis S., Wheelwright S. C., Forecasting Methods for Management, Wiley, New York, 1989
38 McDonald C. M., Karimi I. A., Ind. Eng. Chem. Res. 36 (1997) p. 2691-2700
39 Neiro S. M. S., Pinto J. M., Comput. Chem. Eng. 28 (2004) p. 871-896
40 Newhart D. D., Stott K. L., Vasko F. J., J. Oper. Res. Soc. 44 (1993) p. 637-644
41 Papageorgiou L. G., Rotstein G. E., Shah N., Ind. Eng. Chem. Res. 40 (2001) p. 275-286
42 Pashigian B. P., Price Theory and Applications, McGraw-Hill, Boston, 1998
43 Perea-Lopez E., Ydstie B. E., Grossmann I. E., Comput. Chem. Eng. 27 (2003) p. 1201-1218
44 Perea-Lopez E., Ydstie B. E., Grossmann I. E., Tahmassebi T., Ind. Eng. Chem. Res. 40 (2001) p. 3369-3383
45 Pfeiffer T., Eur. J. Oper. Res. 116 (1999) p. 319-330
46 Pinto J. M., Joly M., Moro L. F. L., Comput. Chem. Eng. 24 (2000) p. 2259-2276
47 Pirkul H., Jayaraman V., Transp. Sci. 30 (1996) p. 291-302
48 Pooley J., Interfaces 24 (1994) p. 113-121
49 Romero J., Badell M., Bagajewicz M., Puigjaner L., Ind. Eng. Chem. Res. 42 (2003) p. 6125-6134
50 Ryu J. H., Pistikopoulos E. N., Proceedings of the 4th International Conference on Foundations of Computer-Aided Process Operations (2003) p. 297-300
51 Sabri E. H., Beamon B. M., Omega 28 (2000) p. 581-598
52 Sahinidis N. V., Grossmann I. E., Comput. Chem. Eng. 15 (1991) p. 255-272
53 Sahinidis N. V., Grossmann I. E., Fornari R. E., Chathrathi M., Comput. Chem. Eng. 13 (1989) p. 1049-1063
54 Shah N., Proceedings of the 4th International Conference on Foundations of Computer-Aided Process Operations (2003) p. 73-85
55 Shah N., ESCAPE-14 Conference, 2004
56 Shapiro J., Proceedings of the 4th International Conference on Foundations of Computer-Aided Process Operations (2003) p. 27-34
57 Timpe C. H., Kallrath J., Eur. J. Oper. Res. 126 (2000) p. 422-435
58 Tsiakis P., Shah N., Pantelides C. C., Ind. Eng. Chem. Res. 40 (2001) p. 3585-3604
59 van der Vorst J. G. A. J., Beulens A. J. M., van Beek P., Eur. J. Oper. Res. 122 (2000) p. 354-366
60 van Hoek R. I., Supply Chain Manage. 3 (1998) p. 187-192
61 Vidal C. J., Goetschalckx M., Eur. J. Oper. Res. 98 (1997) p. 1-18
62 Voudouris V. T., Comput. Chem. Eng. 20S (1996) p. S1269-S1274
63 Wesolowsky G. O., Truscott W. G., Manage. Sci. 22 (1975) p. 57-65
64 Wilkinson S. J., Cortier A., Shah N., Pantelides C. C., Comput. Chem. Eng. 20S (1996) p. S1275-S1280
65 Williams J. F., Manage. Sci. 29 (1983) p. 77-92
66 Zhou Z. Y., Cheng S. W., Hua B., Comput. Chem. Eng. 24 (2000) p. 1151-1158
Section 4 Computer-integrated Approaches in CAPE
Section 4 presents a review of actual trends and shows new advances in the integration of software tools and process data. The material in this section is organized in five chapters.

Chapter 1 sets the goals for an integrated process and product design, possibly including the product application process in the analysis. Functions to be met by the product are the specifications. However, identifying a feasible chemical product is not enough; it needs to be produced through a sustainable process. Chapter 1 defines the general integrated chemical product-process design problem. The important issues and needs are identified with respect to their solution and illustrated through examples. Any CAPE method/tool needs to organize the scales and complexity levels so that the events at different scales can be described and understood: from property prediction at the nanoscale to phenomena occurring at the equipment scale. Integrated product-process design, where modeling and supply chain issues play an important role, is also highlighted.

Chapter 2 shows where, why, and how models of various types are used throughout the life of an industrial or manufacturing process. This justifies the need for tool and data integration across the process life cycle. Modeling addresses a diversity of goals, and relies on a range of forms, approaches, and tools. Models differ widely in the detail level, time, and length scale. Several industrial case studies help illustrate the challenges of modeling throughout the life cycle, since there is a huge range of models used to help answer vital sociotechnical questions through the life cycle of the process or product.

The last chapter of Section 3 has already presented the elementary principles and systematic methods of supply chain modeling and optimization. Chapter 3 shows how the practical implementation of supply chain management software suffers from several deficiencies, due to a limited focus or a lack of integration. To overcome these deficiencies and respond better to industrial demands facing a more dynamic environment, there is a need to explore new strategies for supply chain management. Integrated solutions are required for the next generation of software tools given the number and complex interactions present among main components in the global supply chain: financial flows, negotiation and environmental aspects need to be considered simultaneously along with a number of operating and design constraints. Agent-based systems are considered a promising architecture for integration.

Reliable and consistent thermophysical property data for pure components and mixtures are essential for CAPE calculations. Chapter 4 reviews the data needed and the quality requirements. Major sources for physical properties and phase equilibrium data collections are compared. The text also provides up-to-date references to information sources available on the Internet.

The major issue in CAPE tools integration is to ensure software component interoperability and to allow seamless data exchange between tools. This issue is discussed in Chapter 5, which focuses on operational standards in the domain of CAPE, namely CAPE-OPEN. Promising software interoperability standards, leading to service-oriented architectures and the emerging Semantic Web, are also described. The organizational and economic consequences of the trend towards interoperability and standards in CAPE are also briefly discussed.
1 Integrated Chemical Product-Process Design: CAPE Perspectives

Rafiqul Gani
1.1 Introduction
Chemical process design typically starts with a general problem statement with respect to the chemical product that needs to be produced, its specifications that need to be matched, and the chemicals (raw materials) that may be used to produce it. Based on this information, a series of decisions and calculations are made at various stages of the design process to obtain first a conceptual process design, which is then further developed to obtain a final design, satisfying at the same time a set of economic and process constraints. The important point to note here is that the identity of the chemical product and its desired qualities are known at the start but the process (flow sheet/operations) and its details are unknown.

Chemical product design typically starts with a problem statement with respect to the desired product qualities, needs and properties. Based on this information, alternatives are generated, which are then tested and evaluated to identify the chemicals and/or their mixtures that satisfy the desired product specifications (qualities, needs and properties). The next step is to select one of the product alternatives and design a process that can manufacture the product. The final step involves the analysis and test of the product and its corresponding process. The important points to note here are that (1) the identity of the chemical product is not known at the start but the desired product specifications are known, and (2) process design can be considered as an internal subproblem of the total product design problem in the sense that once the identity of the chemical product has been established, the process and/or the sequence of operations that can produce it is determined. Note also that after a process that can manufacture the desired chemical product has been found, it may be necessary to evaluate not only the product but also the process in terms of environmental impact, life cycle assessment and/or sustainability.

From the above descriptions of the product and process design problems, it is clear that an integration of the product and process design problems is possible and that such an integration could be beneficial in many ways. For example, in chemical
product design involving high-value products where the reliability of the chemical product is more important than the cost of production, product specifications and process operations are very closely linked. In pharmaceutical products, there is a better chance to achieve success the first time with respect to their manufacture by considering the product-process relations. In the case of bulk chemicals or low-value products, the use of product-process relations may be able to help obtain economically feasible process designs. In all cases, issues related to sustainability and environmental constraints (life cycle assessment) may also be incorporated. As pointed out by Gani (2004a), integration of the product and process design problems can be achieved by broadening the typical process design problem to include at the beginning a subproblem related to chemical product identification and to include at the end subproblems related to product and process evaluation, including life cycle and/or sustainability assessments. Once the chemical product identity has been established, Harjo et al. (2004) propose the use of a product-centric integrated approach for process design. Giovanoglou et al. (2003), Linke and Kokossis (2002) and Hostrup et al. (1999) have developed simultaneous solution strategies for product-process design involving manufacture of bulk chemicals, while Muro-Sune et al. (2004) have highlighted the integration of chemical product identification and its performance evaluation. In all cases, integration is achieved by solving simultaneously some aspects of the individual product and process design problems. Recently, Cordiner (2004) and Hill (2004) have highlighted issues related to product-process design with respect to agrochemical products and structured products, respectively. Issues related to multiscale and chemical supply chain have been highlighted by Grossmann (2004) and Ng (2001).

The objective of this chapter is to provide an overview of some of the important issues with respect to integrated product-process design, to highlight the need for a framework for integrated product-process design by employing computer-aided methods and tools, and to highlight the perspectives, challenges, issues, needs and future directions with respect to CAPE/PSE related research in this area.
1.2 Design Problem Formulations
In principle, many different chemical product-process design problems can be formulated. Some of the most common among these are described in this section together with a brief overview of how they can be solved.
1.2.1 Design of a Molecule or Mixture for a Desired Chemical Product
These design problems are typically formulated as: given the specifications of a desired product, determine the molecular structures of the chemicals that satisfy the desired product specifications, or determine the mixtures that satisfy the desired product specifications (see Fig. 1.1).
In the case of molecules, techniques known as computer-aided molecular design (CAMD) can be employed, while in the case of mixtures, techniques known as computer-aided mixture/blend design (CAMbD) can be employed. More details on CAMD and CAMbD can be found in Achenie et al. (2002) and Gani (2004a,b). These two problems are also typically known as the reverse of property prediction, as product specifications defined in terms of properties need to be evaluated and matched to identify the feasible alternatives (molecules and/or mixtures). This can be done in an iterative manner by generating an alternative molecule or mixture and testing (evaluating) its properties through property estimation. This problem (molecule design) is mainly employed in identifying chemicals that are added to the process, such as solvents, refrigerants and lubricants that may be used by the process to manufacture a chemical product. In the case of mixture design, petroleum blends and solvent mixtures are two examples where the product is designed without process constraints.

1.2.1.1 Examples of Solvent Design
Consider the following process-product design problems where solvent design has an important role. The production of an active ingredient in the pharmaceutical industry needs the addition of a new solvent to an existing solvent-solute (reactant) mixture such that solubility is increased, and thereby the conversion is achieved. The new solvent must be totally miscible with the existing solvent (water) and must also be inert to the reactant. First determine all compounds that are totally miscible with water (use either a database or predict water miscibility). Next, screen out those that may react with the solute (this can be checked through the calculation of the chemical equilibrium constant). Next, identify those that will most likely dissolve the solute (this can be checked through the calculation of solubility). For this problem, alcohols, ketones and glycols are likely candidates.

Consider also the following problem: it is necessary to design/select an alternative solvent to remove oleic acid methyl ester, which is a fatty acid ester used in the treatment of textiles, rubber and waxes. The alternative solvent must be better than diethyl ether and chloroform in terms of safety and environmental impact while having solubility properties as good as the known solvents. Also, it must be liquid at temperatures between 270 K and 340 K. Searching for compounds that are acyclic (and contain only C, H and O atoms), have a melting point below 270 K and a boiling point above 340 K, have a Hildebrand solubility parameter of around 16.95 MPa^1/2, and have an octanol partition coefficient less than 2 generates the following candidates: 2-heptanone, 3-hexanone, methyl isobutyl ketone, isopropyl acetate and many more. Interesting examples of application of CAMD in the agrochemicals, materials and pharmaceutical industries can be found in the edited monograph by Reynolds, Holloway and Cox (1995).
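The screening logic in the second example can be illustrated with a short script. The sketch below is only a toy filter: the candidate set and the property values in it are rough, illustrative numbers rather than measured data, and in a real CAMD study both would come from a property database or group-contribution estimates.

```python
# Illustrative screening of solvent candidates against the targets quoted in the
# text: liquid between 270 K and 340 K (Tm < 270 K, Tb > 340 K), a Hildebrand
# solubility parameter close to 16.95 MPa**0.5 and log Kow < 2. The property
# values are rough placeholders, not measured data.

candidates = {
    # name: (Tm [K], Tb [K], delta [MPa**0.5], log_Kow)
    "2-heptanone":            (238.0, 424.0, 17.4, 1.98),
    "3-hexanone":             (217.0, 396.0, 17.5, 1.24),
    "methyl isobutyl ketone": (189.0, 390.0, 17.0, 1.31),
    "isopropyl acetate":      (200.0, 362.0, 17.2, 1.02),
    "n-decane":               (243.0, 447.0, 15.8, 5.01),  # fails the log Kow test
}

def passes(props, delta_target=16.95, delta_tol=1.0):
    tm, tb, delta, log_kow = props
    return (tm < 270.0 and tb > 340.0
            and abs(delta - delta_target) <= delta_tol
            and log_kow < 2.0)

feasible = [name for name, props in candidates.items() if passes(props)]
print("Feasible solvent candidates:", feasible)
```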
1.2.2 Design of a Process
These design problems are typically formulated as: given the identity of a chemical product, its specifications in terms of purity and amount, and the raw materials that should be used, determine the process (flow sheet, condition of operations, etc.) that can produce the product (see Fig. 1.1). This is a typical process design problem, which can now be routinely solved (see, for example, textbooks on chemical process design) with CAPE methods/tools when the chemical product is a low-value (in terms of price) bulk chemical. The optimal process design for these chemical products is usually obtained through optimization and process/operation integration (heat and mass integration) in terms of minimization of a single or multiparametric performance function. For high-value chemical products, however, a more product-centric approach is beneficial, as pointed out by Harjo et al. (2004), Fung and Ng (2003) and Wibowo and Ng (2001).

1.2.2.1 Examples of Product-Centric Process Design
Harjo et al. (2004) have recently developed a systematic method for product-centric process design and illustrated the application of their method through the design of processes for the manufacturing of phytochemical (plant-derived chemical) products. Harjo et al. (2004) provide a general structure of phytochemical manufacturing processes (Fig. 1.2) and a list of heuristic rules for constructing flow sheet alternatives (see Appendix). As examples of application, Harjo et al. (2004) have considered the manufacturing of carnosic acid, which is known to have a powerful antioxidant activity and can be recovered from popular herbs such as rosemary and sage. One of the flow sheet alternatives generated and evaluated by the authors is presented in Fig. 1.3. The solution of the problem required a number of methods and tools, some of which needed to be developed (property and unit operation models) while others needed to be adopted (such as heuristics for flow sheet generation, flow sheet simulation and evaluation, etc.). Since solvents also play an important role in these processes, methods for solvent search are also needed.
Figure 1.1 Differences between process design and molecule design problems. The question marks indicate what is unknown (needs to be determined) at the start of the design problem solution: for the process design problem (given raw materials and products), the flow sheet, operating conditions and equipment parameters; for the molecule design problem (given target properties), the molecular structure, molecule function, group properties and molecule synthesis
Figure 1.2 The main processing steps for manufacture of phytochemicals (from Harjo et al. 2004, reproduced with permission from IChemE)
1.2.3 Total Design of a New Chemical Product
In these design problems, given the specifications (qualities, needs and properties) of a desired product, the objective is to identify the chemicals and/or mixtures that satisfy the given product specifications, the raw materials that can be converted to the identified chemicals, and a process (flow sheet/operations) that can manufacture them sustainably, while satisfying the economic, environmental and operational constraints. As illustrated in Fig. 1.4, solution of this problem could be broken down into three subproblems: a chemical product design problem that only identifies the chemicals (typically formulated as a molecule or mixture design problem), a process design part that determines a process that can manufacture the identified chemical or mixture (typically formulated as a process design problem) and a product-process evaluation part (typically formulated as product analysis and/or process analysis problems). In principle, mathematical programming problems can be formulated and solved to simultaneously identify the product and its corresponding optimal sustainable process. The solution of these problems is, however, not easy, even if the necessary
models are available (Gani 2004a). Cordiner (2004) and Hill (2004) also provide examples of problems (formulations and structured products) of this type and the inability of current CAPE methods and tools to handle them. Numerous examples of new alternatives for the production of known chemical products can be found in the open literature and have been successfully addressed by the CAPE/PSE community. Examples of complete product-process design for new high-value chemical products may not be easy to find because of reasons of confidentiality. Interesting examples of some well-known high-value chemical products from the pharmaceutical and specialty chemical industries can however be found, e.g., the design and manufacturing of penicillin (Queener and Swartz 1979) and the production of intracellular protein from recombinant yeast (Kalk and Langlykke 1985).

Figure 1.3 Generated process flow sheet for the manufacture of carnosic acid (from Harjo et al. 2004, reproduced with permission from IChemE)

Figure 1.4 Product design problem includes the molecule design and the process design problems
1.2.4 Chemical Product Evaluation
In these problems, given a list of feasible candidates, the objective is to identify/select the most appropriate product based on a set of product-performance criteria. This problem is similar to CAMD or CAMbD except for the step for generation of feasible alternatives. Also, usually the product specifications (quality, needs, and properties) can be subdivided into those that can be used in the generation of feasible alternatives and those that can be used in the evaluation of performance. A typical example is the design of formulated products (also known as formulations) where a solvent (or a solvent mixture) is added to a chemical product to enhance its performance. Here, the feasible alternatives are generated using solvent properties while the final selection is made through the evaluation of the product performance during its application. Consider the following problem formulations:
- Select the optimal solvent mixture and the paint to which it must be added by evaluating the evaporation rate of the solvent when a paint product is applied (Klein et al. 1992).
- Select the pesticide and the surfactants that may be added by evaluating the uptake of the pesticide when solution droplets are sprayed on a plant leaf (Munir 2005).
- Select the active ingredient (AI) or drug/pesticide product and the microcapsule encapsulating it by evaluating the controlled release of the AI (Muro-Sune et al. 2005).
- Select solvent mixtures for crystallization of a drug or active ingredient (Karunanithi et al. 2004).
In all the above design problems, the manufacturing process is not included; instead, the application process is included and evaluated to identify the optimal product. Consider the following product evaluation problem from the agrochemical industry. A pesticide product consisting of an active ingredient and a surfactant and other additives needs to be evaluated in terms of which surfactant can be added to the system to enhance the uptake of the AI into the plant from the water droplets sprayed
on the leaf surface. Solution of this problem requires property models that can predict the solubility of the AI in the water plus surfactants mixture, the diffusion of the AI through the leaf and into the plant, the evaporation of the water and many more. A modeling framework able to generate the necessary model for evaluation of the specified problem has been proposed recently by Muro-Sune et al. (2005). Through an integrated set of methods, models, and tools it is possible not only to evaluate the formulated product through its performance (in terms of uptake rate) but also to find the best combination of AI, surfactant and plant that provides an improved product.
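To make the nature of such a product-performance calculation concrete, the fragment below sketches a deliberately simplified uptake model. It is not the model of Muro-Sune et al. (2005): the shrinking-droplet assumption, the first-order uptake term and every numerical value are invented purely to illustrate how a performance criterion (here, the fraction of AI taken up) can be computed and used to compare two hypothetical surfactants that change the AI solubility limit.

```python
# Simplified sketch of a product-performance (uptake) model: AI dissolves in a
# spray droplet up to a solubility limit, is transferred into the leaf at a rate
# proportional to the dissolved concentration, and the droplet shrinks by
# evaporation. All parameters and units are illustrative only.

def uptake_fraction(t_end=3600.0, dt=1.0,
                    m_ai=1.0, v0=10.0, c_sat=0.05,
                    k_uptake=0.01, k_evap=0.002):
    """Fraction of the active ingredient (AI) taken up after t_end seconds.

    m_ai      initial AI mass in the droplet (arbitrary mass units)
    v0        initial droplet volume (arbitrary volume units)
    c_sat     AI solubility limit in the droplet liquid (mass/volume)
    k_uptake  mass-transfer conductance into the leaf (volume/time)
    k_evap    constant evaporation rate of the droplet (volume/time)
    """
    m_left, v, taken_up, t = m_ai, v0, 0.0, 0.0
    while t < t_end and v > 0.0 and m_left > 0.0:
        c = min(m_left / v, c_sat)            # dissolved AI concentration
        dm = min(k_uptake * c * dt, m_left)   # AI transferred into the leaf
        taken_up += dm
        m_left -= dm
        v = max(v - k_evap * dt, 0.0)         # droplet shrinks by evaporation
        t += dt
    return taken_up / m_ai

# Comparing two hypothetical surfactants that raise the AI solubility limit:
for label, c_sat in [("surfactant A", 0.02), ("surfactant B", 0.08)]:
    print(label, "uptake fraction:", round(uptake_fraction(c_sat=c_sat), 3))
```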
1.2.5 Chemical Process Evaluation
These problems are typically formulated as, given the details of a chemical product and its corresponding process, perform a process evaluation to improve its sustainability. To perform such analysis, it is necessary to have a complete design of the process (mass balance, energy balance, condition of operations, stream flows, etc.) as the starting point, and new alternatives are considered only if the sustainability indices are improved. Here, the design (process evaluation) problem should also include retrofit design. Uerdingen et al. (2003) and Jensen et al. (2003) provide examples of how such analysis can be incorporated into an integrated approach by exploiting product-process relations. Note that the choice of the product and its specifications, the raw materials, the process fluids (for example, solvents and heating/cooling fluids), the by-products, conditions of operation, etc., affect the sustainability indices.
1.3 Issues and Needs
Three issues and needs with respect to integrated product-process design are considered in this chapter, namely, the issue of models and the understanding of the associated product-process complexities, the issue of integration, and the issue of problem definition.
1.3.1 The Need for Models
According to Charpentier (2003), over fourteen million different molecular compounds have been synthesized and about one hundred thousand can be found on the market. Since, however, only a small fraction of these compounds are found in nature, most of them will be deliberately conceived, designed, synthesized and manufactured to meet human needs, to test an idea or to satisfy our quest for knowledge. The issue/need here is the availability of sufficient data to enable a systematic study
leading to the development of appropriate mathematical models. This is particularly true in the case of structured products where the key to success could be to first identify the desired end-use properties of a product and then to control product quality by controlling microstructure formation. Another feature among product-process design problems is the question of systems with different scales of time and size. For integration of product-process design, it is necessary to organize scales and complexity levels in order to understand and describe the events at the nano- and microscales and to better convert molecules into useful products. The relation between length and time scales is very nicely illustrated through Fig. 1.5, which is adapted from Charpentier (2003). Figure 1.6 highlights the relationship between scales (related to the product) and events (phenomena, operation, application, etc., related to the process). Examples of multiscale modeling for product-process design can be found in structured products and their manufacture, such as polymers by polymerization and solid crystals by crystallization. In polymerization, the nanoscale is used in kinetics, the microscale for mass and energy transport, the mesoscale for particle-particle and particle-wall interactions, the macroscale for global polymerization reactor behaviour, and the megascale for reactor runaway analysis and energy consumption analysis. For a biochemical process, the nanoscale may be used for molecular and genomic processes and metabolic transformations, the microscale is used for enzyme and integrated enzyme systems, the mesoscale is used for the biocatalyst and active aggregates, and the macro- and megascales are used for bioreactors, units and plants involving interactions with the biosphere (Charpentier 2003).

Figure 1.5 Relation between length and time scales, from the molecular/electronic level to fluid dynamics and transport (adapted from Charpentier 2003)

Figure 1.6 Scales and complexity in chemical and biochemical product-process engineering
1.3.2 The Need for Integration
According to IMTI (2000), integrated product/process development is the concurrent and collaborative process by which products are designed with appropriate consideration for all factors associated with producing the product. To make the product right the first time and every time, product and process modeling must support, and be totally integrated with, the design function, from requirements capture through prototyping, validation and verification, and translation to manufacture. Another issue/need is the increasing complexity of new chemical products and their corresponding technologies, which provide opportunities for the CAPE community to develop/employ concurrent, multidisciplinary optimization of products and processes. Through these methods and tools, the necessary collaborative interaction between product designers and manufacturing process designers early in the product realization cycle can be accomplished. Few collaborative (design) tools are
available to help turn ideas into marketable products, and those that are available are technology and product-specific. Optimizing a product design to meet a set of requirements or the needs of the different production disciplines remains a manually intensive, iterative process whose success is entirely dependent on the people involved.
1.3.3 Definition of Product Needs
Good understanding of the needs (target properties) of the product is essential to achieve "first product correct", even though it may be difficult to identify the product needs in sufficient detail to provide the knowledge needed to design, evaluate and manufacture the product.
1.3.4 Challenges and Opportunities
Based on the above discussion, a number of opportunities have been identified by IMTI (2000), which are summarized below:

Definition of Product-Process Needs (Design Targets)
- Provide knowledge management capability that captures stakeholder requirements in a complete and unambiguous manner.
- Provide modeling and simulation techniques to directly translate product goals to producibility requirements for application to product designs.
Methods/Tools for Product-Process Synthesis/Design
- Provide a first-principles understanding of materials and processes to assure that process designs will achieve intended results.
- Provide the capability to automatically create designs from the requirements data and from the characterization of manufacturing processes.
- Provide the capability to automatically build the process plan as the product is being designed, consistent with product attributes, processing capabilities, and enterprise resources.
- Create and extend product feasibility modeling techniques to include financial representations of the product as an integral part of the total product model.
Modeling Systems and Tools
- Provide a standard modeling environment for integration of complex product models using components and designs from multiple sources/disciplines, where any model is completely interoperable and plug compatible with any other model.
Integration
- Provide the capability to create and manipulate product/process models by direct communication with the design workstation, enabling visualization and creation of virtual and real-time prototyped products.
- Provide simulation techniques and supporting processing technologies that enable complex simulations of product performance to run orders of magnitude faster and more cost-effectively than today.
- Provide the capability to simulate and evaluate many design alternatives in parallel to perform fast tradeoff evaluations, including automated background tradeoffs based on enterprise knowledge (i.e., the enterprise experience base).
- Provide an integrated, plug-and-play toolset for modeling and simulation of all life cycle factors for generic product types (e.g., mechanical, electrical, chemical).
1.4 Framework for Integrated Approach
The first step to addressing the issues/needs listed above could be to define a framework through which the development of the needed methods and tools and their application in product-process design can be facilitated. Integration is achieved by incorporating the stages/steps of the two (product and process) design problems into one integrated design process through a framework for integration. This framework should be able to cover various product-process design problem formulations, be able to point to the needed stages/steps of the design process, identify the methods and tools needed for each stage/step of the design process and, finally, provide efficient data storage and retrieval features. In an integrated system, data storage and retrieval are very important because one of the objectives for integration is to avoid duplication of data generation and storage. The design problems and their connections are illustrated through Fig. 1.7. The various types of design problems described above can be handled by this framework by plugging the necessary methods and tools into it. One of the principal objectives of the integration of product-process design is to enable the designer to make decisions and calculations that affect design issues related to the product as well as the process. The models and the types of models needed in integrated product-process design are also highlighted in Fig. 1.7. In the sections below, the issues of models and data flow and workflow in an integrated system are briefly discussed.

Figure 1.7 Integration of product-process design (see Table 1.1 for data flow details)
1.4.1 Models
One of the most important issues and needs related to the development of systematic computer-aided solution (design) methodologies is the models. For integrated product-process design, in addition to the traditional process and equipment models, product models and product-process performance models are also needed. A product model characterizes all the attributes of the product while a product-performance model simulates the function of the product during a specific application. Figure 1.8 illustrates the contents and differences among the process, product and product-performance models. Constitutive (phenomena) models usually have a central role in all model types.
Figure 1.8 Contents of and differences among process, product and product-performance models: balance equations dx/dt = f(x, y, p, d, t), constraint equations 0 = g(x, y, p, d) and constitutive (phenomena) models; intensive variables such as T, P, x and N can be measured, whereas conceptual variables cannot
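A minimal sketch of this model structure is given below. The example system (a single well-mixed buffer tank) and the assumed valve relation used as the constitutive model are invented for illustration; only the decomposition into balance equations dx/dt = f(x, y, p, d, t), algebraic constraints 0 = g(x, y, p, d) and a constitutive submodel follows the structure indicated in Figure 1.8.

```python
# Minimal sketch of the model structure in Figure 1.8: balance equations,
# algebraic constraints and a constitutive (phenomena) submodel. The buffer
# tank, the valve equation and all parameter values are invented for
# illustration only.

import math

def constitutive(x, p):
    """Constitutive model: outflow through a valve as a function of holdup."""
    holdup, _temperature = x
    return p["Cv"] * math.sqrt(max(holdup, 0.0))   # assumed valve relation

def g(x, y, p, d):
    """Algebraic constraints 0 = g(x, y, p, d): here they fix the outflow."""
    f_out = y[0]
    return [f_out - constitutive(x, p)]

def f(x, y, p, d, t):
    """Balance equations dx/dt = f(x, y, p, d, t): mass and energy."""
    holdup, temperature = x
    f_out = y[0]
    f_in, t_in = d["F_in"], d["T_in"]
    dholdup_dt = f_in - f_out
    dtemp_dt = f_in * (t_in - temperature) / max(holdup, 1e-6)
    return [dholdup_dt, dtemp_dt]

# Crude explicit integration; the algebraic part is solved explicitly each step.
p = {"Cv": 0.4}
d = {"F_in": 1.0, "T_in": 350.0}
x = [4.0, 300.0]                      # initial holdup and temperature
dt = 0.1
for step in range(1000):
    y = [constitutive(x, p)]
    dxdt = f(x, y, p, d, step * dt)
    x = [xi + dt * dxi for xi, dxi in zip(x, dxdt)]

y = [constitutive(x, p)]
print("Constraint residual g(x, y, p, d):", g(x, y, p, d))
print("Approximate steady state: holdup = %.2f, T = %.1f K" % (x[0], x[1]))
```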
1.4.2 Data Flow and Workflow
The data flow related to the framework for integrated product-process design is highlighted through Table 1.1, where the input data and output data for each design (sub)problem are given. As highlighted in the problem formulation section, the workflow for various types of design problems is different and needs to be identified. In general terms, however, the following main steps can be considered (note that, as discussed above, some of these steps may be solved simultaneously; a minimal sketch of such a workflow driver is given after the list):
- Define product needs in terms of target (design) properties.
- Generate product (molecule, mixture, formulation, etc.) alternatives.
- Determine if process considerations are important.
  - If yes, define the process design problem and solve it.
  - If no, go directly to the product evaluation (analysis) step.
- Analyze the process in terms of a defined set of performance criteria.
- Analyze the product in terms of a defined set of performance criteria.
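A minimal sketch of such a workflow driver is shown below. All function bodies are trivial stand-ins so that the control flow can be executed; none of them correspond to an actual CAMD, process-synthesis or evaluation algorithm, nor to any ICAS/ProCAMD API.

```python
# Skeleton of the integrated design workflow listed above. Every function body
# is a placeholder; only the sequence of steps mirrors the text.

def translate_needs(needs):                    # step 1: needs -> target properties
    return {"Tb_min": needs.get("Tb_min", 340.0), "logKow_max": 2.0}

def generate_alternatives(targets):            # step 2: CAMD / CAMbD generation (stub)
    pool = [{"name": "cand-A", "Tb": 390.0, "logKow": 1.3},
            {"name": "cand-B", "Tb": 320.0, "logKow": 1.0}]
    return [c for c in pool
            if c["Tb"] > targets["Tb_min"] and c["logKow"] < targets["logKow_max"]]

def design_process(product):                   # step 3: process design (stub)
    return {"flowsheet": ["extraction", "distillation"], "product": product["name"]}

def evaluate_process(process):                 # process performance criterion (stub)
    return -len(process["flowsheet"])          # e.g., fewer units is better

def evaluate_product(product):                 # product performance criterion (stub)
    return -abs(product["logKow"] - 1.5)       # e.g., closeness to a target value

def integrated_design(needs, process_matters=True):
    targets = translate_needs(needs)
    ranked = []
    for product in generate_alternatives(targets):
        process = design_process(product) if process_matters else None
        score = (evaluate_product(product),
                 evaluate_process(process) if process else 0.0)
        ranked.append((score, product, process))
    return sorted(ranked, key=lambda r: r[0], reverse=True)

print(integrated_design({"Tb_min": 340.0})[0])
```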
Table 1.1 Data flow for each design problem

Problem type: Molecular design (CAMD)
Input data: Building blocks for molecules, target properties and their upper/lower bounds and/or goal values
Output data: Feasible molecular structures and their corresponding properties

Problem type: Mixture design (CAMbD)
Input data: List of candidate compounds to be used in the mixture, target properties and their upper/lower bounds and/or goal values at specified conditions of temperature and/or pressure
Output data: List of feasible mixtures (compounds and their compositions) and their corresponding properties

Problem type: Process design/synthesis (PD)
Input data: Desired process specifications (input streams, product specifications, process constraints, etc.)
Output data: Process flow sheet (list of operations, equipment, their sequence and their design parameters)

Problem type: Process solvent design
Input data: Desired separation process specifications (input streams, product specifications, process constraints, etc.) and desired (target) solvent properties
Output data: Process flow sheet (list of operations, equipment, their sequence and their design parameters) plus list of candidate solvents

Problem type: Product evaluation
Input data: Details of the molecular or formulated product (molecular structure or list of molecules, their composition and their state) and their expected function
Output data: Performance criteria

Problem type: Process evaluation
Input data: Details of the process flow sheet and the process (design) specifications
Output data: Performance criteria, sustainability metrics
Table 1.2 List of methods/algorithms and tools/software that may be used for each problem (design) type

Problem type: Molecular and mixture design (CAMD)
Methods/Algorithms: Molecular structure generation; property prediction and database; screening and/or optimization
Tools/Software: ProCAMD

Problem type: Process design/synthesis (PD)
Methods/Algorithms: Process synthesis/design; process simulation/optimization; process analysis
Tools/Software: ICAS (PDS, ICAS-sim, PA)

Problem type: Process solvent design
Methods/Algorithms: CAMD methods/tools; process synthesis/design; process simulation/optimization; process analysis
Tools/Software: ICAS (ProPred, ProCAMD, PDS, ICAS-sim, PA)

Problem type: Product evaluation
Methods/Algorithms: Property prediction and database; product-performance evaluation model; model equation solver
Tools/Software: ICAS (ProPred, ICAS-utility, MOT)

Problem type: Process evaluation
Methods/Algorithms: Process synthesis/design; process simulation/optimization; process analysis
Tools/Software: ICAS (ICAS-sim, ICAS-utility, MOT, PA)
In Table 1.2, the methods/algorithms and their corresponding tools/software are listed. Under tools/software, only tools developed by the author and coworkers have been listed (see the tools and tutorial pages at www.capec.kt.dtu.dk/Software/). Examples of application of the tools listed above are not given in this chapter but can be found in several of the referenced papers.
1.4.3 Simultaneous Molecular and Flow Sheet Design
Design of chemical products is often described as the design of molecules and their mixtures with desired (target) properties and specific performance, such as drugs, pesticides, solvents or food products. Molecules likely to match the target properties and performance are identified, usually in experiment-based trial-and-error solution approaches. In product-centric process design, it is necessary to match a set of target performance criteria for the process, usually through process simulation. For process design, alternative process flow sheets can be generated through simulation-based synthesis/design methods, where simulation is mainly used to evaluate and test alternatives. Systematic methods for generation of process alternatives are either rule-based or mathematical optimization-based.
Group contribution methods, which provide the basis for molecule and mixture design, can also be applied in process design. That is, in the same way functional groups are defined to represent molecules and to estimate their properties, process groups are also defined to represent process flow sheets and to estimate their operational properties. Therefore, if a table of process groups representing a wide range of operations can be established, the technique of CAMD can be adapted to computer-aided flow sheet design (CAFD), so that CAMD and CAFD can both be used for modeling, synthesis and design. Also, since CAMD can generate and evaluate thousands of molecules within a few seconds of computer time, CAFD would also be able to generate numerous process alternatives without any loss of accuracy or application range. d'Anterroches et al. (2005) have developed a group contribution-based method for simultaneous molecular and mixture design. Figure 1.9 illustrates the features of a common framework for CAMD and CAFD.

Figure 1.9 Common framework overview (from d'Anterroches et al. 2005, reproduced with permission from IChemE). For CAMD, a problem formulation (e.g., aromatic compounds with double bonds, a minimum Tm of 300 K, a minimum Tb of 400 K, a minimum molecular weight of 300 g/mol and a maximum of 30 groups) is converted into a set of molecular building blocks (CH=C, C=C, ACH, AC, ACCH3, ACCH, ...) and property-based numerical constraints; for CAFD, a separation problem (e.g., separating a mixture of 2,2-dimethyl propane, i-pentane, n-pentane, 2,2-dimethyl butane, 2,3-dimethyl butane, 2-methyl pentane, n-hexane and benzene into pure streams) is converted into a set of process-group building blocks and constraints. Generation of alternatives is followed by a post-synthesis step (molecular simulation and experimentation for CAMD; a reverse approach to obtain the complete design parameters plus rigorous simulation for CAFD)
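The generate-and-test idea behind CAMD (and, by analogy, CAFD with process groups in place of molecular groups) can be sketched as follows. The group set and contribution values below are rounded, illustrative numbers loosely patterned on additive boiling-point correlations (they are not actual Joback or UNIFAC parameters), no valence or structural feasibility checks are made, and the constraints are arbitrary; the point is only the enumerate-estimate-screen loop.

```python
# Toy generate-and-test sketch of CAMD: enumerate combinations of building
# blocks (groups), estimate properties by summing group contributions and keep
# the combinations that satisfy the constraints. Group values are illustrative
# only, and no valence/structural feasibility checks are made.

from itertools import combinations_with_replacement

# group: (molar mass contribution [g/mol], boiling point contribution [K])
GROUPS = {"CH3": (15.0, 23.0), "CH2": (14.0, 22.0), "OH": (17.0, 93.0),
          "C=O": (28.0, 77.0), "ACH": (13.0, 27.0)}

def estimate(groups, Tb0=198.0):
    mw = sum(GROUPS[g][0] for g in groups)
    tb = Tb0 + sum(GROUPS[g][1] for g in groups)   # simple additive estimate
    return mw, tb

def feasible(groups, tb_min=350.0, mw_max=150.0):
    mw, tb = estimate(groups)
    return tb >= tb_min and mw <= mw_max and groups.count("CH3") >= 1

candidates = []
for n in range(3, 7):                               # 3 to 6 groups per "molecule"
    for combo in combinations_with_replacement(GROUPS, n):
        if feasible(list(combo)):
            candidates.append(combo)

print(len(candidates), "feasible group combinations, e.g.:", candidates[:3])
```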
1.5 Conclusion
As pointed out by Cordiner (2004), Hill (2004), and the referenced papers by Ng, even though the primary economic driver for a successful chemical product is speed to market, this does not mean that process design is not strategically important to these products. The important questions to ask (to list a few) pertain to the chemical product: how will it be manufactured, how sensitive is product quality to cost and production, where will it be used or applied, how will its performance be evaluated, and for how long will it remain sustainable? Obviously, the answers to these questions would be different for different products and, consequently, the methods and tools to be used during problem solution will also be different. Many opportunities exist for the CAPE/PSE community to develop systematic model-based solution approaches that can be applied to a wide range of products and their corresponding processes. It is the study through these model-based solution approaches that will point out under what conditions the process or operational issues become important in the development, manufacture and use of a chemical product. Successful development of model-based approaches will be able to reduce the time to market for one type of product, reduce the cost of production for another type of product, and reduce the time and cost to evaluate yet another type of product. The models, however, need to be developed through a systematic data collection and analysis effort, before any model-based integrated product-process tools of wide application range can be developed. Finally, it should be noted that to find the magic chemical product, these computer-aided model-based tools will need to be part of a multidisciplinary effort where experimental verification will have an important role and the methods/tools could be used to design the experiments.
References

1 Achenie L. E. K., Gani R., Venkatasubramanian V., Computer-Aided Molecular Design: Theory and Practice, CACE-12, Elsevier Science, Amsterdam, 2002
2 Charpentier J. C., The future of chemical engineering in the global market context: market demands versus technology offers, Kem Ind 52(9) (2003) p. 397-419
3 Cordiner J. L., Challenges for the PSE community in formulations, Comput Chem Eng 29(1) (2004) p. 83-92
4 d'Anterroches L., Gani R., Harper P. M., Hostrup M., CAMD and CAFD (computer-aided flow sheet design), paper presented at the World Chemical Engineering Conference, Glasgow, Scotland, July 2005
5 Fung K. Y., Ng K. M., Product-centered processing: pharmaceutical tablets and capsules, AIChE J 49(5) (2003) p. 1193
6 Gani R., Chemical product design: challenges and opportunities, Comput Chem Eng 28(12) (2004a) p. 2441-2457
7 Gani R., Computer-aided methods and tools for chemical product design, Chem Eng Res Des 82(A11) (2004b) p. 1494-1504
8 Giovanoglou A., Barlatier J., Adjiman C. S., Pistikopoulos E. N., Cordiner J. L., Optimal solvent design for batch separation based on economic performance, AIChE J 49 (2003) p. 3095-3109
9 Grossmann I. E., Challenges in the new millennium: product discovery and design, enterprise and supply chain optimization, global life cycle assessment, Comput Chem Eng 29(1) (2004) p. 29-39
10 Harjo B., Wibowo C., Ng K. M., Development of natural product manufacturing processes: phytochemicals, Chem Eng Res Des 82(A8) (2004) p. 1010-1028
11 Hill M., Product and process design for structured products: perspectives, AIChE J 50 (2004) p. 1656-1661
12 Hostrup M., Harper P. M., Gani R., Design of environmentally benign processes: integration of solvent design and process synthesis, Comput Chem Eng 23 (1999) p. 1394-1405
13 IMTI, First product correct: visions and goals for the 21st century manufacturing enterprise, Integrated Manufacturing Technology Initiative Report, USA, 2000
14 Jensen N., Coll N., Gani R., An integrated computer aided system for generation and evaluation of sustainable process alternatives, Clean Technol Environ Pol 5 (2003) p. 209-225
15 Kalk J., Langlykke A., Cost estimation for biotechnology projects, ASM Manual of Industrial Microbiology and Biotechnology, ASM Press, Washington, D.C., 1986
16 Karunanithi A., Achenie L. E. K., Gani R., A computer-aided molecular design framework for crystallization solvent design, Chem Eng Sci 61 (2006) p. 1243-1256
17 Klein J. A., Wu D. T., Gani R., Computer-aided mixture design with specified property constraints, Comput Chem Eng 16 (1992) p. S229
18 Linke P., Kokossis A., Simultaneous synthesis and design of novel chemicals and chemical process flow sheets, in J. Grievink and J. van Schijndel (Eds.), ESCAPE-12, CACE-10, Elsevier Science, Amsterdam (2002) pp. 115-120
19 Ng K. M., A multiscale-multifaceted approach to process synthesis and development, in R. Gani and S. B. Jørgensen (Eds.), ESCAPE-11, CACE-9, Elsevier Science, Amsterdam (2001) pp. 41-54
20 Queener S., Swartz R., Penicillins: biosynthetic and semisynthetic, in Economic Microbiology 3, Academic Press, New York (1979) pp. 35-123
21 Reynolds C. H., Holloway M. K., Cox H. K., Computer-aided molecular design: applications in agrochemicals, materials and pharmaceuticals, ACS Symposium Series 589, Washington, D.C., USA, 1995
22 Munir A., Pesticide product and formulation design, Dissertation, CAPEC, Department of Chemical Engineering, DTU, Lyngby, Denmark, 2005
23 Muro-Sune N., Gani R., Bell G., Shirley I., Model-based computer aided design for controlled release of pesticides, Comput Chem Eng 30 (2005) p. 28-41
24 Uerdingen E., Gani R., Fischer U., Hungerbühler K., A new screening methodology for the identification of economically beneficial retrofit options in chemical processes, AIChE J 49 (2003) p. 2400-2418
25 Wibowo C., Ng K. M., Product-oriented process synthesis and development: creams and pastes, AIChE J 47(2) (2001) p. 2746
Appendix
Heuristics for constructing flow sheet alternatives (from Harjo et al. 2004, reproduced with permission from IChemE).

Feed Preparation
1. Consider reducing the size of the plant material to 2-5 mm to obtain good performance in industrial scale S-L extraction.
2. Consider using particle size bigger than 0.25 mm in S-L extraction to avoid clogging of the filter.
3. If the plant material is hard and abrasive, consider using size reduction by ball mill, fluid jet mill, or hammer mill.
4. If the plant material is soft and tough or fibrous and woody, consider using size reduction by cutting mill, disk mill, or hammer mill.
5. If the plant material is brittle or crystalline, consider using size reduction by fluid jet mill, hammer mill, or roller crusher.
6. If the plant material is tough, fibrous, and very heat-sensitive, consider using cryogenic size reduction by cutting mill, disk mill, fluid jet mill, or hammer mill.
7. Consider reducing the moisture content of harvested plants to about 10% for safe storage.

Product Recovery
8. Consider using a disk press for mechanical pressing of fibrous materials.
9. Consider using immersion type S-L extraction equipment if the target compounds are in low concentrations, strongly bound, and/or slowly diffusing.
10. Consider using percolation type S-L extraction equipment if the target compounds are in high concentrations, loosely bound, or slightly soluble in the solvent.

Product Purification
11. Whenever possible, consider using the same MSA as in the product recovery step.
12. For heat-sensitive materials, consider separations using L-L extraction, chromatography, or crystallization.
13. Consider using adsorption to separate natural pigments.
14. Consider using chromatography and/or crystallization for the separation of chiral molecules or when multiple single-compound products are desired.
15. Consider using pH swing crystallization for separation of compounds with acidic or basic groups.
16. Consider using large polarity differences between the mobile and stationary phases in reversed-phase chromatography to achieve high selectivity.
17. When handling liquid systems with little density difference, easily emulsified, or where a short contacting time is required, consider using centrifugal L-L extractors and separators.
18. When handling liquid systems containing suspended solids, easily emulsified, or of large capacity, consider using reciprocating-plate L-L extractor columns.
19. When handling liquid systems with high viscosity or large capacity, consider using mixer-settler L-L extractors.
20. If the material has a very steep solubility curve (e.g., very sensitive to temperature), consider using cooling-type crystallizers.
21. If the material has a normal or moderate solubility curve, consider using an evaporative-cooling, surface-cooling, or isothermal-evaporative crystallizer.
22. For batch and relatively low capacity processes, or feed with viscous solutions, consider using either plate-and-frame filter presses or leaf filters.
23. For continuous and large capacity processes, consider using either a continuous rotary-vacuum-drum filter or a continuous rotary-disk filter.

Product Finishing
24. If the feed to be dried is in liquid, suspension, or slurry solution form, consider using either drum or spray dryers.
25. If wet granular solids are to be dried, consider using either rotary or tray dryers.
26. Use freeze-drying only for heat-sensitive materials which may not be heated in ordinary drying or when the loss of flavor and aroma must strictly be avoided.
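As an illustration of how heuristics of this kind could be encoded for rule-based flow sheet generation, the sketch below casts rules 3-6 above (selection of size-reduction equipment from the character of the plant material) as condition-action pairs. The encoding is a simplified reading of the published heuristics, and the attribute names are chosen purely for illustration.

```python
# Sketch of encoding heuristics 3-6 (choice of size-reduction equipment) as
# condition-action rules for rule-based flow sheet generation. The encoding is
# a simplified reading of the published heuristics, for illustration only.

SIZE_REDUCTION_RULES = [
    ({"hard", "abrasive"},       {"ball mill", "fluid jet mill", "hammer mill"}),
    ({"soft", "tough"},          {"cutting mill", "disk mill", "hammer mill"}),
    ({"fibrous", "woody"},       {"cutting mill", "disk mill", "hammer mill"}),
    ({"brittle"},                {"fluid jet mill", "hammer mill", "roller crusher"}),
    ({"crystalline"},            {"fluid jet mill", "hammer mill", "roller crusher"}),
    ({"tough", "fibrous", "heat-sensitive"},
     {"cryogenic cutting mill", "cryogenic disk mill",
      "cryogenic fluid jet mill", "cryogenic hammer mill"}),
]

def suggest_size_reduction(material_attributes):
    """Return equipment options whose rule conditions are met by the material."""
    attrs = set(material_attributes)
    options = set()
    for condition, equipment in SIZE_REDUCTION_RULES:
        if condition <= attrs:          # all attributes of the rule are present
            options |= equipment
    return sorted(options)

print(suggest_size_reduction({"fibrous", "woody"}))
print(suggest_size_reduction({"tough", "fibrous", "heat-sensitive"}))
```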
2 Modeling in the Process Life Cycle

Ian T. Cameron and Robert B. Newell
This chapter deals with the important issues of where, why and how models of various types are used throughout the life of an industrial or manufacturing process. The chapter does not deal specifically with the modeling of the life-cycle process but concentrates on the use of models to address a plethora of important issues that arise during the many stages of a process' life, from the cradle to the grave. In this chapter we first discuss the life-cycle concept in relation to a cradle-to-the-grave viewpoint and then in subsequent sections consider specific issues related to the modeling goals and realizations. Some important issues are discussed which surround model development, reuse, integration, model documentation and archiving. We also consider the future needs of such modeling approaches and the important implications of life-cycle modeling for corporations. Throughout this chapter we refer to several specific industrial case studies that help illustrate the importance of modeling throughout the life cycle as well as the challenges of doing so. What is evident in the following sections is that there is a huge range of modeling used to help answer vital sociotechnical questions through the life cycle of the process or product. It is important to appreciate that process and product engineering have vital links to social and human factors within a holistic approach to modeling. Major infrastructure projects continually reinforce a more complete view than that which is often taken by process and product engineers. In this chapter we expand the vision of modeling within the process or product life cycle to see just what has been achieved and where the challenges lie for the future.
2.1 Cradle-to-the-Grave Process and Product Engineering
Cradle-to-the-grave is a concept that now pervades most industrial operations, driven by concerns for safety, health and the environment (SHE). Sustainability and global environmental issues also figure highly in the drive for life-cycle analysis. Those concerns have been heightened by community pressures on government and industry regarding the impact of process and manufacturing developments on ecosystems as well as associated social impacts. This is largely driven by past disastrous events that have had major impacts on local communities, either directly through such events as fires, explosions or toxic releases, or through more sinister chronic impacts or severe land contamination. Much of the risk and environmental management legislation and regulations in Europe, the United States and Australasia have now focused on cradle-to-the-grave concepts to control industrial impacts and analyze economics over the complete life cycle of the process. These important concepts have been expressed in numerous international standards such as the ISO 14000 series and ISO 15288 [1]. The life-cycle stages in these standards involve:
- concept
- development
- production
- utilization
- support
- retirement.
Process and product related activities define similar stages to the generic standard as shown in Section 2.1.2. These holistic life-cycle concepts are particularly noticeable in such industries as vehicle manufacture, aluminum production and subsequent recycling, newspaper, glass production and recycling, plastic consumer articles and their reuse, to name just a few. The concepts are becoming increasingly common in the process and manufacturing industries, driven by tough environmental impact assessment regimes that demand in-depth analysis of the sociotechnical aspects of all major developments as well as facility expansions, well before any implementation. Life-cycle analysis is part of the responsible care program promoted by the International Council of Chemical Associations and in place since 1988 [2] after its initial start in Canada in 1985. The following sections outline in broad terms the key concepts that undergird the life-cycle modeling activities and put these into a broader sociotechnical context beyond the mere process perspective. In this way, the life-cycle concept is seen as a much more holistic activity.
2.1.1 The Life-Cycle Concept
The process life cycle is characterized by several chronological stages as illustrated in Fig. 2.1. Accompanying the process life-cycle phases are certain activities associated with each stage. Of prime importance throughout the life-cycle perspective will be the issues of raw materials, wastes and emissions as well as energy consumption, generation and reuse. These issues are necessarily part of an integrated framework [3].
These stages involve:
- strategic planning
- research and development activities
- conceptual design of product and process
- detailed engineering designs
- installation and commissioning
- operations and production
- decommissioning of process
- remediation of process-related facilities.
It is worth saying something about the phases of process life cycles to set the scene for a more in-depth analysis of the key issues from a modeling perspective.
Strategic Planning Phase
Here, the initial ideas of resource utilization or new product development have their genesis. This phase is driven by new business opportunities, perceived market needs or market push through the introduction of novel products, processes or markets. Modeling must incorporate uncertainty and is broad in scope and intent, as is to be expected in strategic and scenario planning.
Research and Development Phase
Following an initial strategy phase is often a more focused phase of research and development for those options identified as having the best potential. From the product viewpoint this might involve the modeling of market responses to product ideas to gain understanding of acceptance. This is common in the food and consumer products sectors.
Figure 2.1 Life-cycle phases.
From the process perspective this phase can often involve very lengthy and intense research that covers such areas as product qualities, reaction kinetics, product yields and the effects of operating conditions on product spectra. In pharmaceutical developments it can involve significant computer-aided drug design and possibly the use of animal experimentation. This phase can also involve the analysis of environmental impacts through the treatment of process wastes such as solids, gases and liquids. Treatment options might be sought to eliminate or ameliorate impacts. Energy and utilities utilization for product creation will also be of importance. Here, the modeling can involve a wide range of time and length scales, using such tools as quantum chemistry, molecular simulations and specialized model development at one end of the scale, and producing partial models for use at macroscale levels that deal with equipment and plant design at the other end of the scale. Other efforts at this phase can concentrate on issues of risk management and the choice of particular processing paths which apply inherently safer design and operations concepts. An important issue at this phase is the development and validation of physicochemical prediction models for a range of phase equilibria applications.
Conceptual Design Phase
For process technologies, conceptual design will lead to the development of input-output process models whereby overall economics can be better estimated. Flow sheeting in the traditional industries such as minerals, petroleum and chemicals is extensively used to generate these insights. Further, the use of these tools, based on generic plant unit models, leads to the development of the internal structure of the process system. Often the models are simply mixers, stream splitters, yield/stoichiometric reactors and component splitters. The aim will be to develop alternative flow sheet configurations to assess economics, product and by-product production plus environmental and risk impacts. The modeling is typically concerned with steady-state behavior. Structural optimization using simple process models may also take place in this phase in order to consider optimal processing structures. Of importance in this phase are the environmental impacts from air, noise, water and solid wastes. Extensive air-shed modeling is often required to assess potential impacts. Similar modeling may be required to assess impacts on groundwater and receiving waters such as rivers and bays. Particularly hazardous or noxious wastes may require modeling of their treatment options, including destruction by incineration, landfill, chemical conversion or biological treatment. These can be significant modeling exercises in their scope and depth as well as time.
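To make the input-output style of conceptual-phase modeling concrete, the sketch below chains three of the generic blocks mentioned (a mixer, a fixed-yield reactor and a component splitter) into a tiny steady-state balance. The component names, yields and split fractions are invented for illustration and do not come from any particular flow-sheeting package.

```python
# Minimal steady-state input-output flow-sheet sketch (illustrative only).
# Streams are dicts of component mass flows (t/h); the unit models below
# (mixer, fixed-yield reactor, component splitter) are the simple generic
# blocks described in the text, with made-up yields and split fractions.

def mix(*streams):
    """Mixer: sum the component flows of all inlet streams."""
    out = {}
    for s in streams:
        for comp, flow in s.items():
            out[comp] = out.get(comp, 0.0) + flow
    return out

def yield_reactor(feed, key, yields):
    """Fixed-yield reactor: convert the key component into products
    according to mass yields (fractions of the converted key)."""
    out = dict(feed)
    converted = out.pop(key, 0.0)
    for comp, y in yields.items():
        out[comp] = out.get(comp, 0.0) + y * converted
    return out

def component_splitter(feed, split_to_top):
    """Component splitter: a fraction of each component goes to the top stream."""
    top = {c: f * split_to_top.get(c, 0.0) for c, f in feed.items()}
    bottom = {c: f - top[c] for c, f in feed.items()}
    return top, bottom

fresh_feed = {"kerogen": 10.0, "inerts": 90.0}   # assumed feed, t/h
recycle = {"inerts": 5.0}
reactor_in = mix(fresh_feed, recycle)
reactor_out = yield_reactor(reactor_in, "kerogen",
                            {"oil": 0.6, "gas": 0.25, "char": 0.15})
vapour, solids = component_splitter(reactor_out, {"oil": 0.95, "gas": 1.0})
print("vapour product:", vapour)
print("solids stream:", solids)
```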
Detailed Design Phase
The detailed design phase leads inevitably to such outputs as the definitive engineering flow sheet as well as the piping and instrumentation (engineering line) diagrams. It also covers such project outputs as detailed specifications for procurement. Modeling practice at this phase often involves the creation of equipment- or unit-specific models, which might take significant time to develop. The models might be
integrated into larger software systems, and dynamics can often be a significant issue driven by concerns for startup, shutdown, emergency response and regulatory control. As well, unit and plant-wide optimization modeling may take place at this point in order to consider best operational modes. Often, steady-state assumptions move into continuous, dynamic considerations and then into hybrid (continuous-discrete) modeling environments.
Installation and Commissioning
The installation phase of process and product plants is a key area involving project planning and related models. Here, critical path models, dynamic resource allocation and control models play a vital role in this life-cycle phase.
Operations Phase
The operations phase includes such aspects as the construction and commissioning stages of process development. It clearly involves the day-to-day operations of the process under its intended operations policy over the useful life of the process or product. This operation period can vary widely depending on the industrial manufacturing sector being considered. Again it can be seen from the perspective of time and distance scales. Within the traditional process industries that make use of major natural resources such as oil, gas and mineral deposits, the time scale can be of the order of decades. In the consumer product area it can be of the order of months, where market forces demand quick time to market and flexibility of manufacture and delivery modes. The computer and electronics industries as well as the food sector provide well-known examples of these 'quick-to-market' sectors. Here, in the operations phase, modeling can focus on debottle-necking of existing processes for retrofit or incremental improvements. It can also address issues of effective supply chain design and operation under a dynamic market environment, and detailed risk modeling for improved design and operations as the external environment, such as government regulations, changes. The effect of changing resource characteristics can also lead to significant process modeling that assesses the impact of changing raw materials and product quality demands on the process. Modeling of routine plant maintenance through techniques such as critical path procedures or risk-based inspection (RBI) or maintenance (RBM) strategies is a key area of production operations. These require modeling of the system in terms of predicted risk of failure and the need for preventative maintenance procedures on varying time scales.
Decommissioning Phase
Most processes and products have a 'use by' date and inevitably come to a natural or, in some cases, dramatic end. Decommissioning of the process or the product and product line is now an important consideration in the life cycle. It can mean a very lengthy process of assessment and action. Much needs to be considered in the decommissioning phase, with a significant amount of work associated with the management of risk in achieving the outcomes.
In particular, the decontamination of plant for disposal may require specialized treatment and this can lead to in-depth modeling of the processes. In the case of defunct nuclear facilities the process of decommissioning can take decades, as seen from recent activities within Europe and elsewhere.
Remediation and Rehabilitation Phase
This is a phase of the life cycle which can involve significant financial resources, often in the past borne by government but now more likely by the operating companies if they remain financially viable. In many cases, specialized modeling and chemical experimentation are necessary to consider ways of achieving remediation of land and the environment. In many cases where mining is carried out, a remediation plan is activated as an integral part of the operations phase of the process. Modeling of mining operations and the remediation phase provides input to environmental management plans.
2.1.2 Process Modeling Within the Life Cycle
Modeling within the process life cycle is now an extremely important activity for all new industrial initiatives and expansions of existing facilities. However, what do we mean by the word 'modeling'? This seems an obvious question and for many there are simplistic answers. However, the modeling concept goes far beyond the simple idea of a set of equations to be solved within a flow-sheeting package or numerical solver. In the context of life-cycle modeling, we do well to consider the generic definition of Minsky [4], who defined a model in the following terms: "A model (M) for a system (S) and an experiment (E) is anything to which E can be applied to answer questions about S." As such, this definition captures a wide variety of models often used within the process life cycle, from physical models to mathematically based models. A nonexhaustive review of modeling applications and modeling forms or approaches is given in Table 2.1. This expands the life-cycle phases of Fig. 2.1 by introducing some intermediate activities often seen in industrial projects. Table 2.1 illustrates the diversity of modeling activities throughout the life cycle. There are many forms and approaches used in the phases, with significant reuse or integration of models, either directly or indirectly through data transfer from one model to another. The overriding challenge in this area is unity of models, data and documentation. The principal characteristics can be summarized as:
- a diversity of modeling goals throughout the life-cycle phases, encompassing such purposes as:
  - assessing market potential or response;
  - generating basic data or property relations for use in later phases;
  - estimating economic potential;
  - evaluating system dynamics;
  - predicting environmental impacts;
  - optimizing product and process performance;
  - designing products and processes (structure and units);
  - planning production cycles or routine maintenance;
  - improving viability, process performance or risk management factors;
  - enhancing inherently safer designs and operations;
- a diversity of model forms and approaches to address the modeling goals, such as:
  - social, economic, human factors and technical models;
  - mechanistic, empirical, stochastic and deterministic models;
  - physical and mathematical modeling approaches;
- a granularity in the model representations across the life cycle, typically increasing in detail as the life-cycle phases progress chronologically;
- a diversity of time and length scales being captured in the models, representing the multiscale nature of product and process engineering;
- the current independence of much of the life-cycle modeling that is undertaken, typically by a range of external consultants, in-house company groups and government agencies;
- a diversity of tools to accomplish the modeling tasks, including:
  - proprietary software products such as flow-sheeting packages;
  - purpose-built models that are essentially standalone items in languages such as C, Fortran or Java;
  - specific models in commonly available software such as MS Excel or Matlab;
- a diversity of solution approaches to the models in the life cycle, for purposes of prediction, design, estimation or identification, spanning computation times of seconds to weeks for large-scale CFD computations.
Table 2.1 Model use and characteristics for the process life cycle (process life-cycle phase; modeling applications; modeling forms or approaches).

Strategic planning. Modeling applications: basic economics; resource assessment. Modeling forms or approaches: purpose, goal and mission models; issue-based planning; scenario models; self-organizing models.

Research and development. Modeling applications: resource characterization; basic chemistry; reaction kinetics; catalyst activity and life; physicochemical behavior; pilot plant design and operation. Modeling forms or approaches: reaction systems models; catalyst deactivation models; PFR and CSTR reactor models; elementary flow-sheet packages; fluid-phase equilibria models; physical property models; molecular simulation; quantum chemistry models.

Initial process feasibility. Modeling applications: general mass and energy balances; alternate reaction routes; alternate process routes; input-output economic analysis; preliminary risk assessment. Modeling forms or approaches: flow-sheeting packages; semi-quantitative risk models; financial analysis models.

Conceptual design. Modeling applications: mass and energy balances; plant/site water balances; initial environmental impact; detailed risk assessment; economic modeling. Modeling forms or approaches: flow-sheeting packages; environmental impact models (air, noise, water, solid wastes); social impact assessment models; risk consequence and frequency models; computational fluid dynamics (CFD).

Detailed design. Modeling applications: detailed mass and energy balances; vessel design and specifications; sociotechnical risk assessment; risk management strategies; project management. Modeling forms or approaches: flow-sheeting packages; dynamic simulation for plant and units; CFD modeling and simulation; mechanical simulation (finite element methods and variants); 3D plant layout models; fire, explosion and toxic release models; fault tree and event tree models; air-shed models for dispersion of gases and particulates; noise models.

Commissioning. Modeling applications: startup procedures; shutdown procedures; emergency response. Modeling forms or approaches: Grafcet and ladder logic models; safety instrumented assessment models; risk assessment models.

Operations. Modeling applications: process optimization; process batch scheduling; supply chain design and optimization. Modeling forms or approaches: scheduling models; unit and plant-wide optimization models (LP, NLP, MILP, MINLP); queuing models; real-time expert system models; neural nets and variants; empirical models (ARMAX, BJ); maintenance models (CPM, RBI/M).

Retrofit. Modeling applications: debottle-necking studies; redesign approaches and optimal policies. Modeling forms or approaches: flow-sheeting packages; detailed dynamic simulation.

Remediation and restoration. Modeling applications: geotechnical studies; contaminant extraction options. Modeling forms or approaches: 3D physical extraction pilot plants; soil processing models for decontamination.
2.1.3 The Multiscale Nature of Modeling During the Life Cycle
As noted in Section 2.1.2, a wide range of time and length scales is involved in modeling across the life cycle. This applies across the life-cycle phases as well as within the individual phases. The rise in interest in multiscale modeling approaches and the integration and solution of composite models built from several partial models is driven primarily by product engineering, where the nano- or microscale characteristics are seen as vital to 'designer' products. Combined with the meso- and macroscales, typical of issues at the equipment and plant level, the emphasis on multiscale representations will continue to grow. The chapter in this current book on multiscale process modeling gives a comprehensive overview of this important area. Within the modeling practice across the life cycle, Marquardt et al. [5] have also mentioned its importance.
2.2 Industrial Practice and Demands in Life-Cycle Modeling
The following sections deal with some of the important issues facing those carrying out modeling at various stages of the process life cycle. They emphasize the fact that modeling is far more than just developing a set of equations to be solved by a simulation tool [6, 7].
2.2.1 Modeling Methodology and Workflow
One of the key underlying concepts is methodology within modeling. This is closely related to workflow concepts in carrying out any modeling activity. Figure 2.2 illustrates a particular modeling methodology [8] which possesses a generic character and is applicable to many occasions when modeling is performed. It shows how various stages of the modeling cycle use and refine other stages. There are seven steps shown in this workflow scheme, each having an important part to play. It is an iterative process which demands a clear understanding of each task and the need to specify conditions for terminating the modeling cycle.
- Goal set definition and decomposition. This refers to the initial phase of asking: what is the purpose of the model? Here, the key goals are established with regard to the model application area (control, design, optimization, etc.) and the desired outcomes from the modeling activity must be stated, leading to formal specifications. This includes a wide range of outcomes as mentioned in Section 2.1.3. This aspect of modeling is generally poorly done but is essential for establishing the termination conditions for the modeling cycle. It remains a complex, difficult area with little guidance or technique on how overall or canonical goals are decomposed
into subgoals, as represented in a goal tree or goal graph, and then subsequently used to define the subsystems.
- Model conceptualization. Here, the model form or multiple forms have been chosen and the conceptualization takes place. In mechanistic models this is related to defining balance volumes for mass, energy, momentum or population conservation, together with convective and diffusive streams connecting the balance volumes. Other objects and attributes within balance volumes relate to reaction, physical properties, spatial distribution and the like. In the case of empirical model building, a selection of potential model forms is initially needed. The selection can be based on physical insight or through the use of information criteria which seek to balance model complexity, usually the number of parameters, against the quality of the model fit to the data. This approach might cover completely black-box models with arbitrary structure or grey-box models that incorporate some predefined structure.
- Modeling data. This refers both to the type of data needed for calibration or parameter estimation and model validation, and to the physicochemical data necessary for the building of mechanistic models. Here, experimental design can play an important role, covering classic factorial designs through to such techniques as Latin hypercube sampling for Monte Carlo models.
- Model building and analysis. This brings us to the task of actually constructing the physical or mathematical representation based on the chosen approach. In most cases, significant use of computer-aided tools can assist in this task. Nevertheless, the construction of mathematical models can be a time-consuming, error-prone task. Analysis of the resultant model set is absolutely necessary for reasons of degrees of freedom, model index and potentially observability, controllability and identifiability.
- Model verification. This relates to the systematic construction of solution code, typically in the form of a computer program to solve the model. It requires such concepts as algorithm design, modular code construction and software verification tools. It seeks to ensure that the model solved is the model that was defined.
- Model solution. Solving the model can range from a trivial task taking a few seconds of computer time to weeks of execution time on even the largest, distributed computing devices. Appropriate numerical methods are essential for minimizing computation time as well as ensuring solutions that are credible.
- Model calibration and validation. The final phase to be tackled in most industrial modeling is the need to calibrate the model against good plant or process data. This is nontrivial, as 'good' data is often hard to obtain and requires significant effort and determination to get. It leads to key parameter estimates being obtained. Model validation then asks: is the model prediction accuracy adequate for the task? This is another area that requires perseverance and in-depth statistical analysis, something often glossed over in industrial modeling (a minimal calibration sketch follows this list).
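As a minimal illustration of the calibration and validation step, the sketch below fits an assumed first-order response model to one half of a synthetic data set and reports the prediction error on the held-out half. The model form, the data and the use of SciPy's curve_fit are illustrative choices, not part of the methodology of [8].

```python
# Minimal calibration/validation sketch (illustrative data and model form).
# Calibrate y = a*(1 - exp(-t/tau)) against one data subset, then check the
# prediction accuracy on a held-out validation subset.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau):
    return a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 30)
y_obs = model(t, 2.0, 3.0) + rng.normal(0.0, 0.05, t.size)  # synthetic 'plant' data

# split into calibration and validation subsets
t_cal, y_cal = t[::2], y_obs[::2]
t_val, y_val = t[1::2], y_obs[1::2]

params, cov = curve_fit(model, t_cal, y_cal, p0=[1.0, 1.0])
resid = y_val - model(t_val, *params)
rmse = np.sqrt(np.mean(resid ** 2))
print("estimated a, tau:", params)
print("parameter standard deviations:", np.sqrt(np.diag(cov)))
print("validation RMSE:", rmse)
```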
Figure 2.2 Modeling methodology and workflow concepts.
2.2.2 Modeling Goals and Model Types
In Section 2.1.2 we discussed the various activity or application models commonly seen in industrial modeling practice. These were based on model outcomes in specific application areas. Such models as reaction kinetic models can have many forms, ranging from linear to nonlinear empirical models. Physical property models can have an enormous range of model forms. However, we can consider a more fundamental classification of process-related models based on the characteristics of the models being developed and used in particular applications. Figure 2.3 provides a simple class diagram for a range of typical model forms, which include most of those seen in the life-cycle process. Other variants of these model types are possible. The basic classification relates to steady-state and dynamic models, with actual behavior being captured as continuous, discrete or hybrid (continuous-discrete). Further classifications lead to empirical, mechanistic, stochastic and deterministic models, with their various subclasses. Each of these models has a particular mathematical form that relates to the model type and which requires specific solution methods to be applied; in most cases this involves some form of numerical computation. Models are built for a purpose or 'goal' and that means they exist to answer questions about the reality they represent. As such, the idea of goal definition becomes an increasingly important idea that is not often explicitly stated or used in driving the modeling activities. There has been some work done in this area, using functional modeling approaches that are mainly driven by applications in fault diagnosis [9, 10]. Recent work [11] has been directed towards the concept of modeling goal development and evolution. This uses structured goal definitions to help drive the modeling with the intent of shortening the iterative nature of the modeling cycle and improving modeling outcomes as shown in Fig. 2.2.
2.2.3 Model Ingredients
Industrial modeling requires a number of key ingredients to be effective. The following highlights some of these ingredients and comments on them.
Assumptions
Key assumptions that relate to the conceptualization need to be recorded and then used to check consistency with the resultant model. It is vital that all assumptions relevant to the model development are available in the modeling life cycle. This will include such items as assumed balance volumes, mechanisms, details of species in the system or the lumping of species into larger classes for tractability of modeling. It will include issues related to key flows in the system, whether convective or diffusive as well as sources and sinks of energy and component masses.
Figure 2.3 Model class hierarchy for commonly used models (including fault tree, event tree, cause-consequence, Petri net and automata model forms).
Documentation
Documentation still remains one of the biggest challenges in industrial modeling. The adequacy of documentation in most modeling exercises is usually very poor, leading to difficulties in model reuse and much repetition of effort by future generations of engineers who need to revisit historical work. Decision making and arguments concerning the positions taken on various modeling decisions can be documented in structured ways such as the IBIS system [12-14]. This consists of several elements such as issues to be addressed, positions to be taken in addressing those issues, and arguments for and against the various positions. This is typically performed through hypertext and graphical descriptions. The generation of readable model descriptions and final reports as part of the modeling exercise is vital for future reuse. In general, reports are not stored as part of the modeling exercise but remain separate items. Better integration is necessary for reuse and utilization of efforts.
Data
As mentioned in Section 2.2.1, modeling data - either physicochemical or plant data - is a vital ingredient for modeling. It requires careful analysis and archiving with the other inputs and outputs of the various phases of the modeling cycle. This aspect of modeling is one of the biggest challenges, with little if any facility in modern CAPE software tools to adequately store validation and other data together with the model.
2.2.4 Support Technologies
Modeling requires support technologies. This is a huge area and what follows is a brief mention of some of the key items often encountered in life-cycle modeling.
Model Definition and Building
There is a plethora of model-building environments available for each life-cycle modeling phase. In the engineering area, many allow concepts and relationships to be defined at a higher conceptual level using GUIs. Such systems as ModeLLa, ModKit, ICAS-ModDev and SCHEMA are in this category [8]. Other environments such as gPROMS, ASPEN Custom Modeler, Modelica and Daesim allow direct model descriptions in terms of equations in various forms that can be stored as model members. Matlab provides facilities for equation solution of ordinary differential equations (ODEs) and differential-algebraic equations (DAEs). There are also dedicated toolboxes that cover specialized areas such as neural network modeling, partial differential equation systems and finite element applications.
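The kind of lumped dynamic model these environments solve can be sketched with a general-purpose ODE integrator. The example below, a constant-volume CSTR with a first-order reaction solved with SciPy, is purely illustrative and is not tied to any of the tools named above; all parameter values are invented.

```python
# Sketch of a small lumped dynamic model of the type these tools solve:
# a constant-volume CSTR with a first-order reaction A -> B.
import numpy as np
from scipy.integrate import solve_ivp

V = 2.0        # reactor volume, m3
q = 0.5        # volumetric feed rate, m3/min
cA_in = 1.0    # feed concentration of A, kmol/m3
k = 0.3        # first-order rate constant, 1/min

def cstr(t, y):
    cA, cB = y
    dcA = q / V * (cA_in - cA) - k * cA
    dcB = -q / V * cB + k * cA
    return [dcA, dcB]

sol = solve_ivp(cstr, (0.0, 30.0), [0.0, 0.0], t_eval=np.linspace(0, 30, 7))
for t, cA, cB in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:5.1f} min  cA = {cA:.3f}  cB = {cB:.3f}")
```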
Simulation
Again, simulators abound in most phases of the process life cycle. Process simulation systems such as ASPEN, PRO/II and HYSYS have dominated the petroleum and petrochemicals sectors, with many other similar products available that vary in their application areas and capabilities. Generic simulation tools such as gPROMS, ModKit, ICAS, Daesim Dynamics and other simulators are available for general solution of models in the form of differential-algebraic equations that may also be hybrid in nature. Several also handle partial differential equation systems and a few allow simulation of integro-differential systems. These tools need to be chosen carefully for the task being tackled. At the more fundamental chemistry level there is a wide range of tools for molecular simulation, quantum chemical calculations and the like. Other tools such as discrete element simulations can handle the dynamics of particulate systems and find wide use in minerals processing, bulk fertilizer and pharmaceutical applications. Environmental simulation tools are readily available for such applications as neutral, buoyant and heavy gas dispersion estimates as well as prediction of groundwater flows and river, estuarine and ocean impact assessments. Significant adaptation of finite element or finite volume methods has been applied to such application areas. Many are made available by national environmental protection agencies such as the US EPA.
Data Analysis
Again, there are numerous tools for analyzing large data sets from process plants. The key issues here are good visualization facilities, the ability to handle very large data sets (many millions or tens of millions of data points) and data reduction capabilities to handle multivariable data. The visualization of process data in real time through the use of Web browser tools across the complete corporation is an increasingly important development. In many multinational corporations, the monitoring and analysis of real-time data from their world-wide operations is now commonplace. Typical of the systems in place for data capture is Honeywell's Process History Database (PHD), which can be linked to other applications such as MS Excel to view archived data. Accompanying these issues is the ability to transform and manipulate data and to analyze its characteristics so that it is suitable for calibration and validation studies. Filtering, handling or reconstruction of missing data points, spectral analysis and time series analysis capabilities should be available to help tackle the data analysis problem.
2.2.5 Modeling Integration
Model integration is a major issue in industrial modeling practice. It is poorly understood and practiced in the manufacturing and process industries. Much duplication of effort is required to integrate models across the process-product life cycle. In particular, the inputs and results from particular models are often transferred inefficiently between modeling applications. For example, the predictions of gaseous discharges from multiple sources in a process flow sheet are often required in area source dispersion models. The linked models are often not within the same corporation, and even if they are, much manual effort is often expended in data transfer. The initiative leading to pDXi data exchange protocols based on XML has not penetrated the process and related industries very well, if at all. Some progress has been made in integrating disparate process models into proprietary software simulation systems through the use of the CAPE-OPEN and Global CAPE-OPEN standard interfaces. This work also addressed physical property routines and numerical solvers, and continues to be extended. The problem of data and model integration across the whole life cycle still remains an unmet challenge.
2.3 Applications of Modeling in the Process Life Cycle: Some Case Studies
In this section we illustrate some aspects of life-cycle modeling on a recent, large-scale industrial development of an alternative fuel source, based on oil shale. This section shows some of the extensive and diverse modeling that takes place
during the life cycle of such a process, and comments on the importance and challenges in such modeling.
2.3.1 Shale Oil Processing
Shale oil has a long history around the world as an alternate source of hydrocarbon-based fuels. Plants have operated in many countries, either as demonstration plants or for commercial production. Countries such as China, Estonia and Brazil currently produce shale oil. Several demonstration plants operated in the US, mainly in Colorado. In Australia oil shale was processed in the early 1840s to produce 'kerosene' from the kerogen content of the shale. In the 1930s shale oil was produced in crude, batch retorts which pyrolyzed shale that had been suitably mined and crushed. It was an expensive and inefficient process but provided alternative fuels in a time when normal petroleum fuels were in short supply. Much research and development work was done around the world during the 1970s to 1990s to enhance processing and recoverability of products as well as to understand the potential environmental impacts. Processing of oil shale into hydrocarbon products involves a combination of mining technologies, minerals processing and conventional hydrocarbon processing. Figure 2.4 shows a typical block diagram flow sheet representing a commercial operation located on the central coast of Queensland, near the city of Gladstone [15]. This shows the complex nature of the process, from mining and shale processing, which is heavily slanted towards solids processing technologies, through to liquid-vapor systems typical of petroleum processing.
2.3.1.1 Research Modeling
One of the key research issues in oil shale processing is the understanding of product yields under various conditions of pyrolysis. Other research issues relate to the drying characteristics of crushed oil shale prior to the retorting of shale in the processor. Much academic and industrial research has sought to address these issues [16-18].
In particular, models for the kinetics of oil shale pyrolysis, represented by nonlinear rate expressions, were important for reactor design [19]. These were based on extracting rate constants through parameter estimation techniques. Other work related to understanding the effect of particle size on retorting and drying.
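A common simplification in pyrolysis studies is a first-order decomposition of kerogen, for which a rate constant can be extracted from isothermal conversion data by nonlinear least squares. The sketch below illustrates that kind of parameter estimation with synthetic data; the first-order assumption and the numbers are illustrative and are not the specific kinetic model of the cited work.

```python
# Sketch: extracting a first-order pyrolysis rate constant from isothermal
# conversion data, x(t) = 1 - exp(-k*t).  Data and the first-order assumption
# are illustrative; the cited studies used their own nonlinear models.
import numpy as np
from scipy.optimize import curve_fit

def conversion(t, k):
    return 1.0 - np.exp(-k * t)

# synthetic isothermal retorting data: time (min) vs. kerogen conversion
t_data = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0])
x_data = np.array([0.0, 0.18, 0.39, 0.63, 0.87, 0.98])

(k_est,), cov = curve_fit(conversion, t_data, x_data, p0=[0.05])
print(f"estimated rate constant k = {k_est:.3f} 1/min "
      f"(std dev {np.sqrt(cov[0, 0]):.3f})")

# With k estimated at two temperatures, an Arrhenius activation energy follows
# from E = R * ln(k2/k1) / (1/T1 - 1/T2); here only the single fitted k is shown.
```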
2.3.1.2 Conceptual Design
Some selected aspects of conceptual design are mentioned here.
Preliminary Process Design
Modeling at the conceptual stage of the process was complemented by significant pilot plant studies that gave specific data from actual feedstock material, so that initial mass and energy balances were possible through the use of minerals processing and petrochemical flow-sheeting packages.
Figure 2.4 Overall flow sheet of oil shale processing (run-of-mine and crushed shale handling, oil shale processing and processed shale disposal, oil recovery, naphtha treatment and hydrotreatment, distillation, and product storage and export).
Targeted modeling of key items such as the Alberta Taciuk Processor (ATP) was needed to ensure a good design basis for such a large piece of rotating equipment. Initial modeling used simplified multiple, lumped-parameter models to represent the distributed nature of the processor operations; the key parameters, such as heat transfer coefficients and heats of combustion, were obtained from pilot plant measurements.
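A single lumped zone of the sort used in that early ATP modeling can be sketched as two coupled energy balances, one for the solids and one for the hot gas, linked by an overall UA term. All numbers below are placeholders in loosely consistent units; in practice they would be replaced by the pilot-plant values.

```python
# Sketch of one lumped, well-mixed zone: dynamic energy balances for solids
# and gas, coupled by a heat-transfer duty UA*(Tg - Ts).
# All parameter values are placeholders, not pilot-plant data.
from scipy.integrate import solve_ivp

m_s, cp_s = 40.0, 1.1       # solids holdup and heat capacity (arbitrary consistent units)
m_g, cp_g = 0.5, 1.05       # gas holdup and heat capacity
F_s, F_g = 0.2, 0.05        # solids and gas throughputs
Ts_in, Tg_in = 25.0, 750.0  # inlet temperatures, degC
UA = 0.15                   # overall heat-transfer coefficient times area

def zone(t, y):
    Ts, Tg = y
    q = UA * (Tg - Ts)                                    # gas -> solids duty
    dTs = (F_s * cp_s * (Ts_in - Ts) + q) / (m_s * cp_s)
    dTg = (F_g * cp_g * (Tg_in - Tg) - q) / (m_g * cp_g)
    return [dTs, dTg]

sol = solve_ivp(zone, (0.0, 600.0), [25.0, 25.0], t_eval=[0, 60, 300, 600])
for t, Ts, Tg in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:5.0f} s  T_solids = {Ts:6.1f} C  T_gas = {Tg:6.1f} C")
```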
Environmental Aspects
Oil shale processing environmental issues continue to be one of the priority areas, and as such a significant amount of modeling is required to answer questions related to environmental impacts. In particular, the geographical location of such plants within complex air sheds poses particular challenges as to the impact of gaseous emissions. Complex dispersion models that can handle multiple sources, such as the Industrial Source Complex version 3 (ISC3) [20], are needed to estimate the potential impacts of a wide range of gaseous species such as NO2 and SO2. Figure 2.5 shows the prediction of SO2 levels (micrograms per cubic meter) from the planned stage 2 process. These showed that expected ground-level concentrations were well within accepted guidelines. It also showed potential areas of concern, which were very dependent on the geographical and atmospheric characteristics of the area. Other modeling involved such areas as:
- leaching models for spent shale to ascertain extraction rates of specific chemical substances from percolation of rain water; this involved the development of one-dimensional partial differential equation models with appropriate constitutive relations for the extraction rates (a minimal sketch follows this list);
- permeation modeling of leachate into surrounding land and potentially into ground waters [21-23];
- total site water management modeling to help design and develop 'zero discharge' operating policies; this covers water segregation, reuse and the effects of atmospheric conditions such as cyclones, as well as sediment control issues close to the Great Barrier Reef Marine Park;
- premining and postmining flow modeling of principal creeks in the areas surrounding the operations.
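As a rough indication of the leaching-type model mentioned in the first item above, the sketch below solves a one-dimensional advection-dispersion-extraction equation by explicit finite differences. The governing equation form and all parameter values are illustrative assumptions, not those of the actual study.

```python
# Sketch of a 1D advection-dispersion-extraction model of the type used for
# spent-shale leaching studies: dc/dt = D*d2c/dx2 - v*dc/dx - k*c.
# Explicit finite differences; all parameters are illustrative.
import numpy as np

L, n = 1.0, 101                 # heap depth (m), grid points
dx = L / (n - 1)
D, v, k = 1e-6, 1e-5, 1e-5      # dispersion (m2/s), percolation velocity (m/s), extraction rate (1/s)
dt = 0.25 * dx**2 / D           # time step safely below the explicit stability limit
c = np.zeros(n)
c[0] = 1.0                      # normalized concentration at the surface boundary

t_end = 2.0e5                   # seconds of simulated percolation
for _ in range(int(t_end / dt)):
    d2c = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    dc = (c[2:] - c[:-2]) / (2 * dx)
    c[1:-1] += dt * (D * d2c - v * dc - k * c[1:-1])
    c[0] = 1.0                  # fixed inlet concentration
    c[-1] = c[-2]               # zero-gradient outlet

print("normalized concentration profile (depth m : c/c0):")
for i in range(0, n, 20):
    print(f"  {i * dx:4.2f} : {c[i]:.3f}")
```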
2.3.1.3 Detailed Engineering
Detailed engineering inevitably involves the design of complex process equipment. In the case of oil shale processing, the ATP is a very complex device with four operational zones carrying out functions of shale preheating, spent shale cooling, retorting of dried shale and combustion of spent shale for energy recovery purposes. The zones are maintained at different pressures for operational reasons. Figure 2.6 shows a schematic of the ATP vessel and Fig. 2.7 shows the retort zone end of the demonstration pilot plant vessel, which is approximately 50 m in length, 10 m in diameter and has a loaded weight of over 2500 tonnes.
Figure 2.5 Predicted SO2 levels for stage 2 process from complex dispersion modeling
Figure 2.6 ATP shale processor schematic (1 vapor tube; 2 combustion zone, 750 °C; 3 retort, 500 °C; 4 retort seal; 5 reheat zone, 100-250 °C; 6 cooling zone).
Modeling aspects of detailed process design involved:
- substantial flow-sheet modeling of the back-end process, which consists of fractionation, hydrogenation and product separation to produce a range of liquid and gaseous hydrocarbon products;
- detailed process modeling of the ATP vessel using steady-state models to compute the mass and energy balances; this involved the use of standalone programs to consider a range of scenarios for changing feed rates and pressure changes;
- finite element stress modeling of the structural aspects of the ATP to ascertain the adequacy of the internal mechanical design for vessel integrity.
Figure 2.7 ATP retort and combustion zones.
2.3.1.4 Operations Modeling
Operational phase modeling considers the following issues:
- Initial modeling of the ATP dynamics to consider the effect of feed flow rate changes and moisture into the ATP vessel. This allowed the development of an initial tool for improving operator training and considering alternate control strategies for both set-point changes and disturbance rejection. This considered conventional PID loops as well as the potential for model-based control strategies (see the sketch after this list).
- Empirical modeling through planned step tests on the ATP to establish appropriate models for model-based control applications.
- Improved flow-sheet modeling of the process to consider better operating policies for the hydrocarbon processing units of the plant.
- Development of a complex distributed parameter model of the preheat-cooling section of the ATP in order to answer questions about internal heat transfer design and potential hot gas bypassing. This involved the consideration of solids flow in rotating drums as a function of rotation speed and particle properties, the effect of particle size distributions on internal heat transfer, and the effect of pressure-driven flows in various flow zones within the device. A full distributed parameter model was developed to help answer the key questions.
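The conventional control-loop evaluation mentioned in the first item can be sketched as a discrete PI controller acting on a first-order-plus-dead-time process, the kind of empirical model typically identified from planned step tests. Process parameters and tuning below are invented for illustration.

```python
# Sketch: discrete PI control of a first-order-plus-dead-time (FOPDT) process,
# the type of empirical model usually fitted from step-test data.
# Gain, time constant, dead time and tuning are illustrative.
import numpy as np

K, tau, theta = 2.0, 50.0, 10.0     # FOPDT parameters (gain, s, s)
dt, t_end = 1.0, 600.0
Kc, Ti = 0.5, 60.0                  # PI tuning (illustrative)

n = int(t_end / dt)
delay = int(theta / dt)
y = np.zeros(n)                     # process output
u_hist = np.zeros(n)                # controller output history (for dead time)
sp = np.ones(n)                     # unit set-point change at t = 0
integral = 0.0

for i in range(1, n):
    e = sp[i - 1] - y[i - 1]
    integral += e * dt
    u = Kc * (e + integral / Ti)
    u_hist[i - 1] = u
    u_delayed = u_hist[i - 1 - delay] if i - 1 - delay >= 0 else 0.0
    # first-order process response to the delayed controller output
    y[i] = y[i - 1] + dt / tau * (K * u_delayed - y[i - 1])

for idx in (0, 60, 120, 300, 599):
    print(f"t = {idx * dt:5.0f} s  y = {y[idx]:.3f}")
```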
2.3.1.5 Risk Modeling
Risk modeling that includes both financial and environmental risk is a crucial factor in modern process design and operations. In the case of shale oil production, there are a number of hazardous materials and conditions in the process. A full quantitative risk assessment was needed, which involved modeling all major events such as low- and high-pressure releases of gases and liquids, pool and jet fires, as well as explosions. This led to estimates of iso-risk contours for potential fatalities as well as injury levels. Figure 2.8 shows predicted risk levels for radiation impacts at 4.7 and 23.0 kW m-2, which are key legislative limits for nearby residential and industrial land uses. In this case, risk levels were well within acceptance levels, since nearby industrial sites were located hundreds of meters away and residential areas were several kilometers away.
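The arithmetic behind iso-risk contours is a summation, over all modeled outcomes, of event frequency multiplied by the conditional probability of the harm level at the location of interest. The sketch below shows that summation for a single location with entirely invented event data.

```python
# Sketch of the individual-risk summation behind iso-risk contours:
# risk at a location = sum over outcomes of (event frequency per year)
#                      x (probability of fatality at that location).
# The event list and probabilities are entirely illustrative.
events = [
    # (description, frequency [1/yr], P(fatality) at the location considered)
    ("small flammable gas release, jet fire", 1.0e-3, 0.01),
    ("large liquid release, pool fire",       2.0e-4, 0.05),
    ("vapour cloud explosion",                5.0e-5, 0.20),
]

individual_risk = sum(freq * p_fat for _, freq, p_fat in events)
print(f"individual risk of fatality at this location: "
      f"{individual_risk:.2e} per year")

# Repeating the calculation over a grid of locations (with distance-dependent
# fatality probabilities) and contouring the result gives the iso-risk curves.
```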
2.3.1.6 Socioeconomic Impact Analysis
One of the key issues that is amenable to modeling is the impact that the proposed operations will have on social and economic aspects of the region. Regional economic impacts were measured using the Queensland Multi-Regional Input-Output Model, which takes into account linkages between regions. This provides a tool to consider alternative strategies for regional development. This type of modeling can provide useful data on regional effects throughout the life cycle of the operation, covering planning, conceptualization, construction and operational phases. The outputs can be linked to issues of employment levels, potential housing needs and the like.
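Regional input-output analysis of this kind rests on the Leontief relation x = (I - A)^-1 f, where A holds the inter-sector technical coefficients and f is the change in final demand. The sketch below is a generic two-sector calculation with invented coefficients; it is not the Queensland multi-regional model itself.

```python
# Sketch of a generic Leontief input-output calculation: given a matrix A of
# inter-sector technical coefficients and a final-demand change f, the total
# output required is x = (I - A)^-1 f.  The two-sector coefficients below are
# illustrative and unrelated to the Queensland multi-regional model.
import numpy as np

A = np.array([[0.15, 0.25],     # inputs of sector 1 per unit output of each sector
              [0.20, 0.10]])    # inputs of sector 2 per unit output of each sector
f = np.array([50.0, 30.0])      # change in final demand (e.g., $M/yr) by sector

x = np.linalg.solve(np.eye(2) - A, f)
multiplier = x.sum() / f.sum()  # simple aggregate output multiplier
print("total output required by sector:", np.round(x, 1))
print(f"aggregate output multiplier: {multiplier:.2f}")
```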
Figure 2.8 Thermal radiation risk model predictions.
2.3.2 Utility and Inventory Management Applications
Two applications that illustrate less traditional or less recognized modeling are utility management in a brewery and inventory management in a warehouse.
Brewery Utility Management
Beer production is a deceptively simple process involving brewing (basically extraction of sugars from malted grain), fermentation (partial biological conversion of sugars to alcohol), filtration and packaging. It is complicated by the range of products generated by variations in ingredients and in the brewing and fermentation processes, and by traditional processing in batches. A key economic driver to remain competitive is to reduce and manage the consumption of utility streams, about six in total. Utility management is based on the prediction of current consumption rates and the prediction of consumption over short periods into the future. Utility consumption during individual steps in the batch processing is modeled empirically by a piecewise-linear profile based on processing time and sometimes batch size. The model needs to know which processing steps are active and when the steps commenced; it then simply performs interpolations and summations based on simple utility flow sheets to predict utility rates at various real and virtual sensor locations. This information is then used by operations and technical personnel for monitoring and diagnosis.
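The interpolation-and-summation scheme described above can be sketched as follows: each active batch step carries an empirical piecewise-linear utility profile against time into the step, and the predicted site rate is the scaled sum over active steps. The profiles and step data below are invented for illustration.

```python
# Sketch of the brewery utility prediction described above: each active batch
# step has an empirical piecewise-linear steam-use profile versus time into
# the step; the site-wide rate is the (batch-size scaled) sum over active steps.
import numpy as np

# piecewise-linear steam-use profiles: (minutes into step, rate in kg/h)
profiles = {
    "wort boiling":    ([0, 10, 60, 90], [0, 1200, 1200, 200]),
    "mash heating":    ([0, 20, 45],     [0, 600, 0]),
    "CIP hot caustic": ([0, 5, 30, 35],  [0, 300, 300, 0]),
}

# currently active steps: (step name, minutes since step start, batch-size scale)
active_steps = [
    ("wort boiling", 35.0, 1.0),
    ("mash heating", 10.0, 0.8),
    ("CIP hot caustic", 32.0, 1.0),
]

def utility_rate(active):
    total = 0.0
    for name, elapsed, scale in active:
        t_pts, r_pts = profiles[name]
        total += scale * np.interp(elapsed, t_pts, r_pts)
    return total

print(f"predicted site steam demand: {utility_rate(active_steps):.0f} kg/h")
```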
Warehouse Inventory Management
Warehouses are also deceptively simple operations. The basic model is simply a dynamic mass balance that tracks inflows and outflows to predict inventories. A model validation is called a stocktake. Again there are many complicating factors: a large number of products, discrete inflows and outflows in terms of packs, pallets and truck loads of various sizes, spatial factors governing the placement of material in the warehouse, product 'use by' dates, real-time orders, and just-in-time delivery and manufacture. The economic drivers are minimizing inventories and wastage and increasing the ability to meet orders, often with very small lead times. The model must not only predict current and future inventories but also advise the warehouse staff on where products are to be found to meet orders, by careful tracking in the spatial domain as well as the time domain.
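The warehouse mass balance can be sketched as a running book inventory per product, updated by discrete receipts and dispatches and periodically reconciled against a physical count (the stocktake). The products and transactions below are invented.

```python
# Sketch of the warehouse dynamic mass balance: book inventory per product is
# updated by discrete receipts (+) and dispatches (-); a stocktake compares
# the book figure with a physical count.  All transactions are invented.
from collections import defaultdict

book = defaultdict(int)    # product -> book inventory (pallets)

transactions = [           # (product, quantity; + receipt, - dispatch)
    ("lager 24-pack", +120),
    ("lager 24-pack", -45),
    ("stout 6-pack",  +60),
    ("lager 24-pack", -30),
    ("stout 6-pack",  -12),
]
for product, qty in transactions:
    book[product] += qty

physical_count = {"lager 24-pack": 44, "stout 6-pack": 48}   # stocktake result

print("product              book  counted  discrepancy")
for product, counted in physical_count.items():
    diff = counted - book[product]
    print(f"{product:20s} {book[product]:5d} {counted:8d} {diff:+12d}")
```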
2.4 Challenges in Modeling Through the Life Cycle
This section outlines a number of challenges currently being addressed that help to tackle some of the more intractable aspects of life-cycle modeling for process and product systems. Most are to do with information management which is at the heart of these challenges.
2.4.1 Model Management
It is clear that during the life cycle of a process or manufacturing plant there are a large range of models and associated data that have been gathered and developed by many different people with an often enormous associated cost over the life cycle. While there are well developed procedures for the life-cycle management of computer software, little is done to manage models, resulting in much loss of information and knowledge and much duplication of effort. This often occurs over relatively short periods in the operations phase largely due to changes in personnel and computer systems, let alone between life-cycle phases which frequently occur within different organizations. In many cases there is much process and operating knowledge in models developed in the strategic planning, research and development and design phases that is never utilized in the operations phase. Recent research into the tracking and representation of design information has led to several systems that seek to tackle this issue. The recording and utilization of modeling assumptions [7, 8, 331 and the transfer of software life-cycle management to modeling may one day address the model management issue more fully.
2.4.2 Model Repositories and Reuse
There have been efforts to address the issue of model repositories. It still remains a major challenge in process modeling and even more so in sociotechnical modeling, which presents a far greater challenge due to its diversity of applications and model forms. It is especially a problem when modeling over various life-cycle phases is carried out by a wide range of organizations not necessarily part of the corporation. Many model repositories are simply files within a directory that were used to run a specific simulation similar to those in current flow-sheeting packages. More sophisticated repositories that are ‘simulator neutral’ are not very common. Amongst the attempts to produce such repositories are languages such as system for chemical engineering model assembly (SCHEMA) [24, 251, that seek to store models and model families in a natural language form that includes the underlying assumptions in the model as well as a description that can be used to generate simulation code suitable for a specific simulation tool. In this case, the advantage of not being tied to a specific simulation package is clear when industrial companies make corporate decisions to change simulation platforms. This constitutes a major problem in maintaining process models. Other approaches have been proposed and prototyped. These include the repository of a modeling environment (ROME) system [26], which enables neutral models to be stored in terms of fundamental modeling objects. The ROME system provides import and export capabilities for various application software. It thus provides a means of storing, retrieving and using models over the life cycle. The wider problem of handling model storage across organizations is still largely unaddressed and
unsolved. It is often made more difficult by privacy and nondisclosure issues related to specific modeling systems or models that have significant commercial value. Along with this concept is the idea of exchanging models between different modeling environments such as gPROMS, Modelica or AspenPlus using a model exchange language like CapeML [27]. This provides a neutral exchange mechanism for model use across application packages. This was part of the larger program of CAPE-OPEN and Global CAPE-OPEN [28] that seeks to address mobility and exchange of modeling across diverse application platforms by setting open standard interfaces for computer-aided engineering applications. A number of major software vendors such as Aspen Technology, PS Enterprise, Fantoft Process and academic members support the Global CAPE-OPEN initiative.
2.4.3 Data Representation and Use
Data is at the heart of modeling and data representation is crucial to effective model development and use. In this context, significant work has been done to address the structuring of data across the design cycle. Development of the conceptual life-cycle model (CLiP) [29-31] has provided a framework in chemical engineering for design and model development and reuse. The CLiP development, in theory, covers sociotechnical systems but is yet to be expanded and developed to a point where it can adequately cover the range of industrial modeling activities common to major industrial developments. It includes some concepts covering technical, social and material systems that act as upper-level metamodels to instances of these classes. One current development is the extension of the CLiP data model into an ontological representation called OntoCAPE [32], which seeks to put these concepts in the form of an ontology which can be used for reasoning about the domains covered by the concepts. This provides the possibility of building intelligent software agent systems that can help practitioners perform modeling and design tasks. Data exchange in the area of process engineering has also been of major concern, leading to such initiatives as the Process Data Exchange Institute (pDXi), which was initiated in 1989 by the American Institute of Chemical Engineers and numerous organizations within the US and Europe. It is, however, difficult to assess the actual usage of the standard. More recently, initiatives such as the Standard for the Exchange of Product Model Data (STEP) within the industrial automation and integration standard ISO 10303 will provide extensive specifications for chemical engineering related equipment, processes, assembly and design [34].
2.4.4 Documentation
Documentation in a corporation is a major challenge, given the enormous amounts of reports, figures, drawings, memos, letters, consulting documents and the like that are generated throughout the process or product life cycle. A number of large commercial systems exist to address this issue, such as Documentum [35], that provide enterprise content management (ECM). Recent developments provide collaborative workspaces that give facilities to share ideas and information within the corporation and beyond. Challenges still exist in being able to effectively link important documents to other technical systems such as hazard and risk registers or plant level systems, so that documents can be retrieved in a timely fashion for decision making purposes.
Acknowledgements
The authors would like to acknowledge the help given by Mr. Jim Schmidt, managing director (SPPD), and Southern Pacific Petroleum (Development) for permission to quote details of the shale oil process referred to within this chapter. We also acknowledge permission from Unilever and CUB Limited to quote on warehouse and utility applications.
References
1 British Standards, http://bsonline.techindex.co.uk, 2004
2 ICCA, International Council of Chemical Associations, www.icca-chem.org/, 2004
3 Rosselot K. S., Allen D. T. Life Cycle Concepts, Product Stewardship, and Green Engineering, in D. T. Allen and D. Shonnard (eds.) Green Engineering: Environmentally Conscious Design of Chemical Processes, ch. 13, Prentice Hall PTR, Upper Saddle River, NJ 2002
4 Minsky M. Matter, Minds and Models, in Semantic Information Processing, MIT Press, Boston, USA 1968
5 Marquardt W., von Wedel L., Bayer B. Perspectives on Lifecycle Process Modeling, FOCAPD, 5th International Conference on Computer-Aided Process Design, AIChE Symposium Series 323, 96 (2000) p. 192-214
6 Virkki-Hatakka T. et al. Modeling at Different Stages of Process Life Cycle, European Symposium on Computer-Aided Process Engineering (ESCAPE)-13, Elsevier Science, Amsterdam (2003) pp. 977-982
7 Foss B. A., Lohmann B., Marquardt W. A Field Study of the Industrial Modeling Process, J. Process Control 8(5-6) (1998) p. 325-338
8 Hangos K. M., Cameron I. T. Process Modelling and Model Analysis, Academic Press, London 2001
9 Lind M. Modeling Goals and Functions of Complex Industrial Plant, J. Appl. Artificial Intell. 8 (1994) p. 259-283
10 Modarres M., Cheon S. Function Centered Modeling of Engineering Systems Using the Goal Tree Success Tree Technique and Functional Primitives, Reliab. Eng. Syst. Safe. 64 (1999) p. 181-200
11 Cameron I. T., Fraga E. S., Bogle I. D. L. Goal Set Development and Evolution in the Conceptualization of Process Models, CAPE Centre, University of Queensland, Internal Report 2004
12 Rittel H., Kunz W. Issues as Elements of Information Systems, Working Paper 131, Institute of Urban and Regional Development, University of California, Berkeley, USA 1970
13 Bañares-Alcántara R., Lababidi H. M. S. Design Support Systems for Process Engineering, Comput. Chem. Eng. 19 (1995) p. 267-301
14 Bañares-Alcántara R., King J. M. P. Design Support Systems for Process Engineering, Comput. Chem. Eng. 21 (1997) p. 263-276
15 SPP-CPM Stuart Oil Shale Project, Draft Environmental Impact Statement, Southern Pacific Petroleum (Development) Pty Ltd, Sinclair Knight Merz, Australia 1999
16 Litster J. D., Bell P. R. F., Newell R. B., White E. T. The Role of Kinetics in Oil Shale Retorting, in Proceedings of CHEMECA 82, Sydney, Australia, August 1982, pp. 35-39
17 Do D. D., Litster J. D., Peshkof E., Newell R. B., Bell P. R. F. Pyrolysis of Queensland Oil Shale in a Fluidized Bed: Modeling and Experimental Studies, in Proceedings of the 1st Australian Workshop on Oil Shale, Lucas Heights, Australia, May 1983, pp. 131-134
18 Litster J. D., Rogers M. J., Newell R. B. Modeling Fluid Bed Drying and Retorting of Rundle Oil Shale, in Proceedings of the 2nd Australian Workshop on Oil Shale, Brisbane, Australia, December 1984, pp. 206-211
19 Finucane D., George J. H., Harris H. G. Perturbation Analysis of Second Order Effects in Kinetics of Oil Shale Pyrolysis, Fuel 56(1) (1977) p. 65-69
20 United States Environmental Protection Agency (USEPA), www.weblakes.com/lakeepal.html, 2004
21 Connell L. D., Bell P. R. Modeling Moisture Movement in Revegetating Waste Heaps: I. Finite Element Model for Liquid and Vapor Transport, Water Resources Res. 29(5) (1993) p. 1435-1443
22 Connell L. D., Bell P. R., Haverkamp R. Modeling Moisture Movement in Revegetating Waste Heaps: II. Application to Oil Shale Wastes, Water Resources Res. 29(5) (1993) p. 1445-1455
23 Syamsiah S., Krol A., Sly L., Bell P. R. B. Adsorption and Microbial Degradation of Organic Compounds in Oil-Shale Retort Water, Fuel 72(6) (1993) p. 855-861
24 Williams R. P. B., Keays R., McGahey S., Cameron I. T., Hangos K. M. SCHEMA: an Object Oriented Modeling Language for Continuous and Hybrid Process Models, Asia Pacific Conference on Chemical Engineering (APCChE), Paper #922, Christchurch, New Zealand 2002
25 McGahey S., Cameron I. T. Transformations in Model Families, ESCAPE-12, Comput. Chem. Eng., The Hague, The Netherlands, 26-29 May 2002
26 von Wedel L., Marquardt W. ROME: A Repository to Support the Integration of Models over the Lifecycle of Model-Based Engineering Processes, in S. Pierucci (ed.) European Symposium on Computer-Aided Process Engineering-10, Elsevier, Amsterdam (2000) pp. 535-540
27 von Wedel L. CapeML: A Model Exchange Language for Chemical Process Modeling, Tech Report LPT-2002-16, Lehrstuhl für Prozesstechnik, RWTH Aachen University, Germany 2002
28 http://zann.informatik.rwth-aachen.de:8080/opencms/opencms/COLANgamma/index.html
29 Bayer B. Conceptual Information Modeling for Computer-Aided Support of Chemical Process Design, Dissertation Nr. 787, Lehrstuhl für Prozesstechnik, RWTH Aachen University, Germany 2003
30 Bayer B., Krobb C., Marquardt W. Data Model for Design Data in Chemical Engineering Information Models, Tech Report LPT-2001-15, Lehrstuhl für Prozesstechnik, RWTH Aachen University, Germany 2002
31 Schneider R., Marquardt W. Information Technology Support in the Chemical Process Design Lifecycle, Chem. Eng. Sci. 57(10) (2002) p. 1763-1792
32 Yang A. et al. Principles and Informal Specification of OntoCAPE: COGENTS Project, Information Society Technologies (IST) Programme, IST-2001-34431, European Union 2003
33 Bogusch R., Lohmann B., Marquardt W. Computer-Aided Process Modeling with ModKit, Technical Report #8, RWTH Aachen University of Technology 1996
34 International Standards Organization (ISO), www.iso.org, 2004
35 Documentum Inc., www.documentum.com, 2004
Computer Aided Process and Product Engineering Luis Puigianer and Georges Heyen . Co. KGaA, Weinhein Copyright 02006 WILEY-VCH Verlag GmbH 8
3 Integration in Supply Chain Management
Luis Puigjaner and Antonio Espuña
3.1 Introduction
An introductory chapter (Section Three, Chapter 7) on the supply chain (SC) network has already presented the elementary principles and systematic methods of supply chain modeling and optimization. Here, the need for an integrated management of the SC is further emphasized and challenging solutions are presented. As seen, supply chain management (SCM) comprises the entire range of activities related to the exchange of information and materials between customers and suppliers involved in the execution of product and/or service orders in an extremely dynamic environment (Fig. 3.1).
Figure 3.1 Flow of supply chain management information and materials
Successful management of the supply chain requires direct visibility of the global results of a planning decision in order to include this global perspective. This requires significant integration across multiple dimensions of the planning problem for nonconventional manufacturing networks and multisite facilities over their entire supply chain [1]. Objectives such as resources management, minimum environmental impact, financial issues, robust operation and high responsiveness to continuous needs must be considered simultaneously, along with a number of operating and design constraints [2, 3]. Almost all the currently available SCM software suffers from several of the following demerits: product availability focus; reactive rather than proactive behavior; long lead times; uncertainty throughout; lack of flexibility in systems; performance measured functionally; poorly defined management processes; no real partnership; insufficient performance measurement [4]. To overcome these deficiencies and respond better to industrial demands in a more dynamic environment (greater uncertainty of demand, shorter product life cycles, financial and environmental issues, fewer warehouses, new cost/service balances, globalization, channel integration and so on), there is a need to explore new strategies for supply chain management. Moreover, an integrated solution is required for the next generation of SCM, given the number and complexity of the interactions among the main components of the global supply chain (Fig. 3.2). This new scenario requires departing from current approaches that consider supply chain optimization in a static way. The production-distribution-inventory systems now used in manufacturing companies are static information systems. Periodically (typically weekly or monthly), all data (demand forecasts, available machine capacity, current inventory levels, desired inventory levels, etc.) are collected and fed into a huge optimization system (typically some type of linear programming system). After hours of computation a company-wide plan is obtained.
Figure 3.2 Current supply chain planning components
This plan represents next week's (or month's) production schedule, the inventory levels at the various facilities and the distribution of the final goods. However, due to demand fluctuations or other unpredictable situations, the plan does not exactly match the company's best course of action in light of changing conditions and data. So, production and plant managers adjust the plan to the best of their ability, given the frequent lack of data and their inability to compute the "best" course of action. Real-time adjustments in view of changing data are made manually and ad hoc. This situation is widely recognized and has been discussed in recent specialized forums [5]. Therefore, new SCM solutions should have the following main characteristics: interoperability and scalability, with an open and flexible infrastructure; a Web-oriented interface; autonomy, with capabilities for self-organization and reconfiguration, coordination and negotiation, and with optimization and learning mechanisms, so as to evolve and adapt to the dynamic market environment; adaptive process modeling and rapid prototyping; integration of production planning and scheduling; and the capability to make accurate forecasts and to incorporate e-commerce and w-commerce. In this section, an open, modular, integrated solution is discussed that implements real-time supply chain optimization. The technology involved uses a network of cooperative and auto-associative software agents (smart agents) that constitute a decision support system for managing the whole supply chain in a real-time environment. This chapter is organized as follows: first, a review of recent approaches to SCM integration is presented and the requirements for the next generation of SCM integrated solutions are identified. Then, a specific environment is presented that encompasses the SCM characteristics identified above. Finally, the integration of negotiation, environmental, forecasting and financial decisions is reported as an example of the new technology that may lead to better, fully integrated, easier to use and more comprehensive tools for SCM. A brief description of the architecture and functionalities of the solution implemented is also given.
3.2 Current State of Supply Chain Management Integration
Supply chain management (SCM) has been extensively studied in recent years. Lee and Billington [7] consider inventory handling as the central part of supply chain management integration. This work considers three areas of potential conflict:
- problems related to information and management of the SC;
- operation-related problems;
- strategic and design problems.
Geoffrion and Powers [8] studied design aspects of the distribution network, focusing on the storage location problem. The integration of production and distribution in the SC is examined in the work of Erengunc et al. [9]. The authors point out the necessary tradeoff in SCM between flexibility and quality on the one hand and product cost on the other.
As the level of integration in supply chain management increases, the complexity of the resulting model restricts some approaches to small academic examples. Some representative methodologies for representing the supply chain are summarized below.
3.2.1 Methods Based on Deterministic Analytical Models
Heuristic methods were used by Williams [10] for the planning and distribution of supply chain networks. The objective is to find the optimum production plan that satisfies the demand at minimum cost. Different models are used and compared with dynamic programming. In a later work, Williams [11] developed a dynamic programming model to determine simultaneously the production and distribution lot sizes for each echelon in the chain. The inventory and operation costs associated with each node of the chain are minimized. A deterministic model was used by Ishii [12] to obtain the inventory levels and lead times associated with the solution of an integrated supply chain, considering a finite horizon and linear demand. Cohen and Lee [13] presented a mathematical programming formulation (mixed-integer nonlinear programming) that maximizes the net profit of manufacturing and distribution centers under management constraints (production resources) and logistics constraints (flexibility, availability and demand limitations). This work was later extended [14] to minimize fixed and transport costs along the chain subject to supply, capacity, assignment, demand and raw material constraints. A combination of mathematical programming and heuristic models is used by Newhart et al. [15] to minimize the number of products in inventory through the network. A second step investigates the minimum inventory required to absorb demand delays and fluctuations. Arntzen et al. [16] developed a "global" supply chain model (GSCM) formulated as a mixed-integer linear program to determine: (1) the number and location of distribution centers, (2) the assignment of clients to distribution centers, (3) the number of echelons in the SC, and (4) the assignment of products to production plants. The objective is to minimize the weighted sum of total costs (production, inventory, transport and other fixed costs) and activity days. The efficiency and response capacity of the SC can be improved by increasing its flexibility [17]. Here, flexibility is measured as the sum of the instantaneous differences between capacity and utilization of two types of resources: inventory and capacity resources. Given the resources necessary for manufacturing each product and the bill of materials (BOM) information, the transport and delivery plan for each product is obtained and optimum inventory levels are achieved. Camm et al. [18] developed an integer programming (IP) model to determine the best location for the distribution centers of Procter and Gamble and their proper assignment to clients grouped by zones. The model uses a simple transport model, and the allocation problem does not consider capacity constraints.
More recently, the design of multiechelon supply chain networks under demand uncertainty has been modeled mathematically as a mixed-integer linear programming optimization problem [19]. The decisions to be determined include the number, location and capacity of the warehouses and distribution centers to be set up, the transportation links that need to be established in the network, and the flows and production rates of materials. The objective is the minimization of the total annualized cost of the network, taking into account both infrastructure and operating costs.
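To make the type of formulation used in these network design studies concrete, the following minimal sketch sets up a single-product warehouse location and distribution model with PuLP. All data (candidate warehouses, fixed and transport costs, capacities, demands) are invented for illustration and are not taken from the works cited above, which handle many more echelons, products and constraints.

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

# Illustrative data (assumed, not from the cited studies)
warehouses = ["W1", "W2", "W3"]
customers = ["C1", "C2", "C3", "C4"]
fixed_cost = {"W1": 120.0, "W2": 90.0, "W3": 100.0}      # annualized cost of opening each warehouse
capacity = {"W1": 500.0, "W2": 300.0, "W3": 400.0}
demand = {"C1": 180.0, "C2": 120.0, "C3": 250.0, "C4": 150.0}
tc = {("W1", "C1"): 1.0, ("W1", "C2"): 2.5, ("W1", "C3"): 1.8, ("W1", "C4"): 3.0,
      ("W2", "C1"): 2.2, ("W2", "C2"): 1.1, ("W2", "C3"): 2.7, ("W2", "C4"): 1.6,
      ("W3", "C1"): 1.9, ("W3", "C2"): 2.0, ("W3", "C3"): 1.2, ("W3", "C4"): 2.1}

model = LpProblem("network_design", LpMinimize)
open_w = LpVariable.dicts("open", warehouses, cat=LpBinary)        # warehouse set up or not
flow = LpVariable.dicts("flow", [(w, c) for w in warehouses for c in customers], lowBound=0)

# Objective: infrastructure (fixed) costs plus transportation costs
model += (lpSum(fixed_cost[w] * open_w[w] for w in warehouses)
          + lpSum(tc[w, c] * flow[w, c] for w in warehouses for c in customers))

for c in customers:                                                # every customer demand is served
    model += lpSum(flow[w, c] for w in warehouses) == demand[c]
for w in warehouses:                                               # capacity available only if open
    model += lpSum(flow[w, c] for c in customers) <= capacity[w] * open_w[w]

model.solve()
print("total annualized cost:", value(model.objective))
print("warehouses opened:", [w for w in warehouses if open_w[w].value() > 0.5])

Replacing the single demand figure by scenario-indexed copies of the constraints is, in essence, how the stochastic variants discussed in the next subsection are constructed.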
3.2.2 Methods Based on Stochastic Models
A detailed presentation of the sources of uncertainty present in the supply chain and affecting its operational performance can be found in Davis [20]. In the same work a method is developed to treat the uncertainty associated with the supply chain of Hewlett-Packard. According to Davis [20], three different sources of uncertainty can be found in the SC: (1) suppliers, (2) production, and (3) clients. Chain suppliers are characterized by their performance and their response can be predicted. Uncertainty problems related to production can be addressed with reliability analysis and maintenance techniques. Finally, client demand uncertainty can be dealt with using specialized forecasting methods. Stochastic models incorporate uncertain aspects of the supply chain and focus on certain parameters relative to its operation. For instance, in the work of Cohen et al. [21] a model is developed to establish a materials supply policy for each echelon in the SC. Four submodels are developed based on the different costs for the control of materials, production, finished product storage and distribution. There are two probability distributions which are determined by the SC interactions, namely the demand for materials in the manufacturing plants and the client demand in the distribution centers. Svoronos and Zipkin [22] consider a multiechelon SC distribution and estimate the average inventory level and the number of unfilled orders for a given base level of inventory. With these approximations, the authors build an optimization model to determine the inventory base level that implies minimum cost. A mathematical programming model for a three-echelon SC (one product, one factory, one distribution center and a retailer) is developed by Pyke and Cohen [23]. The model minimizes the total cost subject to a service level constraint. A later work by Pyke and Cohen [24] considers the same network but with multiple types of products. Lee et al. [25] present a mathematical model that describes the "bullwhip" effect (the distortion of demand variance upstream along the SC). Although it is often impossible to know exactly the probability distribution functions of product demand, it will always be possible to specify a set of demand scenarios with a high probability of occurrence. Scenario-based planning permits the capture of uncertainty by defining a number of possible future scenarios [26]. Thus, the objective consists of finding solutions that behave satisfactorily under all scenarios.
Mobasheri et al. [27] describe a number of scenarios as possible states evolving from the actual state. The authors claim that this avoids forecasting, which is less reliable. The same approach is used to formulate and solve operational problems, such as the environmental impact along the chain [28, 29]. Midterm supply chain planning under demand uncertainty is addressed in a recent work with the objective of safeguarding against inventory depletion at the production sites and excessive shortages for the customer [30]. A chance constraint programming approach in conjunction with a two-stage stochastic programming methodology is utilized for capturing the tradeoff between customer demand satisfaction and production costs. The design of multiproduct, multiechelon supply chain networks under demand uncertainty is considered by Tsiakis et al. [31]. Compared to previous models, this model integrates three distinct echelons of the supply chain within a single mathematical formulation. Moreover, it takes into account the complexity introduced by the multiproduct nature of the production facilities, the economies of scale in transportation and the uncertainty inherent in the product demands. In a recent work [32], an optimization model is developed for the supply chain of a petrochemical company operating under uncertain demand and economic conditions. The proposed two-stage stochastic model succeeds in determining the optimum production volumes that maximize the volumes of products shipped for each scenario. However, the need for further investigation to study the dynamics of the petrochemical supply chain is recognized. A novel approach to increase supply chain competitiveness has been presented very recently [33]. The proposed strategy helps to coordinate the production/distribution tasks of the orders embedded in a SC by integrating the plant production scheduling with the scheduling of transport to the final markets. Uncertainty in the demand is considered and the problem is formulated as a two-stage stochastic optimization approach. The mathematical model looks for the detailed global schedule (production and transport) that maximizes the expected benefits.
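The two-stage, scenario-based idea that recurs in these works can be illustrated with a deliberately small sketch: a single first-stage production decision is fixed before the demand is known, and the second-stage recourse (sales, inventory holding, lost demand) is evaluated for each scenario; the expected profit then guides the choice. The prices, costs and scenarios below are assumptions, and real formulations solve the whole problem as one mathematical program rather than by enumeration.

# Two-stage, scenario-based production planning sketch (assumed data).
# First stage: choose the production amount before demand is observed.
# Second stage (recourse): sales, holding and shortage outcomes per scenario.

scenarios = [(0.25, 80.0), (0.50, 100.0), (0.25, 140.0)]   # (probability, demand)
price, prod_cost, holding_cost, shortage_cost = 10.0, 6.0, 1.0, 2.5

def expected_profit(production):
    total = 0.0
    for prob, demand in scenarios:
        sales = min(production, demand)
        leftover = production - sales            # unsold amount held in inventory
        shortfall = demand - sales               # unmet demand is penalized
        total += prob * (price * sales - prod_cost * production
                         - holding_cost * leftover - shortage_cost * shortfall)
    return total

# Enumerate candidate first-stage decisions (an LP/MILP would optimize this directly)
best = max(range(60, 161, 5), key=expected_profit)
print("best here-and-now production:", best,
      "expected profit:", round(expected_profit(best), 1))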
3.2.3 Methods Based on Economic Models
The current trend in advanced planning and scheduling (APS) tools is to incorporate tools to model and change the current position of the financial and process managers during complex, interconnected decision making in the chemical process industries. Cash management models have been considered in the supply chain following basically two stochastic approaches. Baumol's model [34] took an inventory approach assuming certainty: cash was treated similarly to a held inventory and payments were assumed to occur at a constant rate. On the contrary, the Miller and Orr cash management model [35] was based on the fact that perfect forecasts of cash are virtually impossible because the timing of inflows depends on customer payments. In consequence, lower and upper bounds on cash were calculated to create a safety stock. In an attempt to model client-provider behavior, Christy and Grout [36] developed a supply chain economic model based on game theory.
The integration of budgeting models into scheduling and planning models is also considered in a recent work [37]. A cash flow and budgeting model is coupled with an advanced scheduling and planning procedure within the decision-making process to increase revenues across the supply chain. A further step in integrating the levels of decision making in the SC is contemplated in the work of Badell et al. [38]. This work considers business decisions and their impact on SCM. It addresses the implementation of financial cross-functional links with the SC operation and investment activities at the factory level when scheduling and budgeting for short-term planning in batch processing industries. The target is to obtain tradeoff solutions that preserve as much as possible the profit and liquidity while satisfying customers.
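For reference, the two classical cash management rules mentioned above reduce to closed-form expressions; the sketch below implements the textbook forms of the Baumol and Miller-Orr policies. The formulas are the standard ones found in corporate finance texts and the numerical inputs are assumptions; the cited papers and the budgeting models discussed here add many refinements.

import math

def baumol_transfer(total_cash_need, transaction_cost, interest_rate):
    # Baumol (EOQ-like) optimal cash transfer size under certainty
    return math.sqrt(2.0 * transaction_cost * total_cash_need / interest_rate)

def miller_orr_policy(lower_limit, transaction_cost, daily_variance, daily_rate):
    # Miller-Orr return point and upper limit when net cash flows are random
    spread3 = (0.75 * transaction_cost * daily_variance / daily_rate) ** (1.0 / 3.0)
    return_point = lower_limit + spread3
    upper_limit = lower_limit + 3.0 * spread3
    return return_point, upper_limit

print("Baumol transfer size:", round(baumol_transfer(1_200_000, 50, 0.06)))
z, h = miller_orr_policy(lower_limit=20_000, transaction_cost=50,
                         daily_variance=6_000_000, daily_rate=0.0002)
print("Miller-Orr return point:", round(z), "upper limit:", round(h))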
3.2.4 Methods Based on Simulation Models
The fast development of new products and the increasing competitiveness of market agents have turned the SC system into a rapidly changing environment. Therefore, it becomes necessary to capture and characterize the dynamic behavior of enterprise systems and to develop systematic procedures for decision-making support under these circumstances. Although the interest in integrated dynamic approaches for SCM is recent, some studies were clearly reported during the last four decades. Forrester [39] performed dynamic analysis and simulation of industrial systems by means of discrete dynamic mass balances and linear and nonlinear delays in the distribution channels and manufacturing sites. Although this work contemplated small academic examples, it permitted the identification of the aforementioned demand amplification problem. Later, Towill [40] reported some efforts to control SCs based on a transfer function analysis and classical feed-forward control. A changing environment is contemplated in the work of Back et al. [41]. They propose a strategy to cater for the dynamics of the environment and the disturbances, as well as for the dynamics of the business operations. The same approach is used in the work of Perea-Lopez et al. [42], but taking a step forward by considering a consumer-driven operation. They analyze the impact of heuristic control laws on the performance of a SC integrated by multiproduct, multistage distribution networks and manufacturing sites. A more recent approach, model predictive control (MPC), applied to the supply chain problem was reported by Brown et al. [43]. A comparison between these two control strategies can be found in Mele et al. [44]. A different approach is presented in the work of Mele et al. [45]. Here, a dynamic approach for SCM based on the development of a discrete event-driven system model of the SC contemplating several entities is reported. The interaction between these entities is explored through simulation techniques. The results obtained provide information about the tradeoffs found in real systems and give valuable insight into SC dynamics. Thus, the proposed framework becomes a useful tool for decision-making support in real scenarios.
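A minimal discrete-time simulation conveys the demand amplification (bullwhip) behavior that these simulation and control studies analyze. The base-stock policy, the moving-average forecast and all the numbers below are assumptions chosen only to make the amplification visible; they do not reproduce any of the cited models.

import random
import statistics

random.seed(1)

def simulate_chain(periods=300, lead_time=2, window=5):
    # Retailer -> warehouse -> plant, each echelon using an order-up-to policy
    # based on a moving-average forecast of the demand it observes (backorders ignored)
    echelons = 3
    history = [[10.0] * window for _ in range(echelons)]      # demand observed by each echelon
    inventory = [60.0] * echelons
    pipeline = [[0.0] * lead_time for _ in range(echelons)]   # replenishments in transit
    orders = [[] for _ in range(echelons)]

    for _ in range(periods):
        incoming = max(0.0, random.gauss(10.0, 2.0))          # end-customer demand
        for e in range(echelons):
            inventory[e] += pipeline[e].pop(0)                # receive the shipment due this period
            inventory[e] -= incoming                          # serve the downstream demand
            history[e] = history[e][1:] + [incoming]
            forecast = sum(history[e]) / window
            target = forecast * (lead_time + 1) + 2.0 * statistics.pstdev(history[e])
            order = max(0.0, target - (inventory[e] + sum(pipeline[e])))
            pipeline[e].append(order)
            orders[e].append(order)
            incoming = order                                  # demand seen by the next echelon up

    return [round(statistics.pvariance(o[window:]), 1) for o in orders]

print("order variance, retailer -> plant:", simulate_chain())

The order variance typically grows from the retailer towards the plant, which is the amplification that the control-oriented studies cited above try to attenuate.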
Present analytic approaches to decision making have severe limitations when dealing with large amounts of computation, probabilities and nonanalytic knowledge. Thus, there is an increasing interest in combining decision theory with artificial intelligence tools. This combination is used to address important tasks such as planning, diagnosis and learning, and serves as the basis for the new generation of "intelligent" software known as normative systems. An emerging area is the utilization of multiagent systems, since decision making is the central task of artificial agents [46]. This chapter focuses on a multiagent viewpoint for SCM and design. The proposed approach is to consider each possible configuration and action advice as an independent agent provided with autonomous, interactive, cooperative, adaptive and proactive capabilities. This approach permits new levels of integration and additional functionalities in SCM (environmental issues, human factors, financial decisions), as will be unveiled in the next sections.
3.3 Agent-based Supply Chain Management Systems
Since SCM is essentially concerned with coherence among multiple, globally distributed decision makers, a multiagent modeling framework based on explicit communication between constituent agents (such as manufacturers, suppliers, retailers, and customers) seems very attractive. Agents can help to transform closed trading partner networks into open markets and extend such applications as production, distribution, and inventory management functions across the entire SC, spanning diverse organizations [46]. Agents are autonomous pieces of software that are designed to handle very specific tasks. In the case of SCs, where one has to deal with thousands of products, numerous requirements in production quality control, and many types of interactions, no single agent can be designed to handle this overall task; therefore, multiple specialized agents have to be designed to guide the SC in its entirety. Multiagent systems may be regarded as a group of agents interacting with one another to collectively achieve their goals. By drawing on other agents' knowledge and capabilities, agents can overcome their own limits of intelligence. In other words, knowledge is distributed among several intelligent entities, the agents. Autonomous agents and multiagent systems represent a new way of designing, analyzing and implementing complex software systems [47]. A multiagent system uses cooperative agents working towards a common goal. An agent is informed about the environment and/or can act on it. The agent has control of its own actions and internal state in a very flexible way, interacting when appropriate with other agents. Therefore, a multiagent system can emulate the behavior of distributed systems - like real-world distributed supply chains - at a logical level, thus providing a resource for control of the real physical distributed systems.
3.3.1 Multiagent System Architecture
A multiagent system (MASC) built on an open, distributed, flexible, collaborative, and self-organizing architecture has been recently proposed for SCM [48]. Retailers, warehouses, plants and raw material suppliers are modeled as a flexible network of cooperative agents, each performing one or more SC functions following a client-server paradigm in an object-oriented fashion. An agent must use all its knowledge about the SC to determine the most convenient values of each attribute that must be negotiated. According to the resulting set of interests and capabilities of the network, the agent translates the value of each attribute into a value of satisfaction. Since it may be expected that not all attributes are equally important for the agent, each attribute has a different weight according to the agent's scale of priorities. The total satisfaction that an agent obtains from a set of attributes is calculated taking into account the satisfaction given by each attribute and their respective weights. This final function, the utility function, gives an abstract value of the offers and counter-offers generated by both supplier and customer. Since the objects of negotiation may be interdependent, the tradeoff between each pair of attributes considered is defined in a compensation matrix. This matrix is the element that enables the negotiation of all attributes at the same time, making the whole process faster and more similar to a human negotiation. The steps of the negotiation are as follows: an agent receives a message from its opponent and evaluates how much this offer satisfies its expectations; from this initial vector of satisfaction, the agent generates an improved counter-offer using its compensation matrix; the agent can then generate several counter-offers with the same utility using its weights; the opponent carries out the same steps after evaluating all the counter-offers and choosing the one with the highest utility. The negotiation process finishes when an agent launches two consecutive offers that are nearly equal. Obviously, different negotiation policies can be considered and compared (a small numerical sketch of this utility-based exchange is given after Table 3.1). Two types of agents are to be distinguished: the physical agent and the management agent (Table 3.1). The physical agent represents the system's physical entities (distribution center, warehouse, plant, etc.) and is capable of simulating the behavior and decision making of the corresponding entity, while information handling between entities is carried out by the management agents. A central agent, which is a management agent, is also responsible for the overall network management and optimization.
Table 3.1 Classification of the different entities in the MASC

Agents          Modules         Users
Physical        Forecasting
Management      Scheduling
                ...
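The following sketch gives one possible numerical reading of the utility and compensation-matrix procedure just described: per-attribute satisfactions are combined with weights into a utility value, and a counter-offer is generated by shifting the attribute values according to a compensation matrix. The attribute set, the satisfaction functions, the weights and the matrix are all assumptions introduced here for illustration; the chapter does not specify their exact form.

import numpy as np

# Attributes of an offer: [unit price, delivery time (days), quantity shortfall]
ideal = np.array([8.0, 2.0, 0.0])      # best values from this agent's point of view (assumed)
worst = np.array([15.0, 10.0, 50.0])   # values giving zero satisfaction (assumed)
weights = np.array([0.5, 0.3, 0.2])    # relative importance of each attribute
# Compensation matrix: off-diagonal terms let a gain in one attribute offset another
compensation = np.eye(3) + 0.2 * (np.ones((3, 3)) - np.eye(3))

def satisfaction(offer):
    # Linear per-attribute satisfaction, clipped to [0, 1]
    return np.clip((worst - offer) / (worst - ideal), 0.0, 1.0)

def utility(offer):
    # Weighted sum of per-attribute satisfactions (the agent's utility function)
    return float(weights @ satisfaction(offer))

def counter_offer(received, step=0.15):
    # Move each attribute towards the ideal, trading improvements via the matrix
    sat = satisfaction(received)
    new_sat = np.clip(sat + step * (compensation @ (1.0 - sat)), 0.0, 1.0)
    return worst - new_sat * (worst - ideal)

received = np.array([13.0, 7.0, 20.0])            # offer arriving from the opponent
print("utility of the received offer:", round(utility(received), 3))
print("counter-offer to send back:", counter_offer(received).round(2))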
Figure 3.3 Communication types between entities
Module components (forecasting, scheduling, etc.) are information tools that act as servers for specific queries from other entities (agent, module, or user). These three types of entities define five types of communications, as illustrated in Fig. 3.3. The proposed MASC, which contemplates a central agent (CA) as management agent, offers a flexible architecture that allows real SCs with different policies of information control and decentralized decision making to be represented. The simplified structure of the MASC can be seen in Fig. 3.4. The architecture proposed contemplates physical agents at each node of the supply chain network (client, warehouse, factory), while a central management agent communicates with all the other agents. Other management agents are also considered that may be subagents of those already enumerated. This flexible architecture should permit an easy adaptation to represent any real SC with its own level of information sharing, from a centralized system to a wholly decentralized one. For instance, in a decentralized case every physical agent (manufacturers A and B, retailer) takes decisions on internal variables and negotiates with other physical agents within its own SC.
Figure 3.4 Basic multiagent system showing constituent entities (physical and management agents, modules and user)
Here, the central agent plays the role of information handling without decision-making capacity. Otherwise, when a centralized control system is contemplated, the central agent is the only one to have the overall SC information and to make decisions, while every physical agent sends and receives information to and from it. In this case, the central agent is provided with improvement/optimization algorithms. A fundamental aspect of this architecture is the traceability required for a proper SC operation: all transactions carried out through the SC must be clearly registered to facilitate reproduction of the results obtained by simulation of a particular scenario. The second important element of the framework is the agent modeling. A brief description of the different physical and management agents contemplated is given below.

Physical Agents
As mentioned previously, physical agents represent the system’s physical entities. Specifically, the following physical agents are considered:
- Client agent. The client agent initiates the system's information flow by placing an order with the central agent. Basically, the information transmitted relates to the product type, the amount required, the delivery date and the acceptable price. The central agent receives this information and transmits it to the potential suppliers (manufacturer/warehouse agents). Following its own internal logic, each potential supplier makes an offer to the central agent, which is transmitted to the client. The client's internal logic (amount of product, delivery date and price) selects a supplier and negotiates with it through the central agent. Final confirmation of the order by the supplier is received through the CA (Fig. 3.5); a minimal sketch of this exchange is given after this list.
Figure 3.5 Client-system interaction
- Warehouse agent. This agent models the handling of materials of different kinds (raw materials, intermediates, final products) to be distributed through the SC. Clients of the warehouse may be manufacturers, other warehouses and product end-users. The agent mechanism has already been described (client agent). The internal logic for making an offer considers the following issues: (1) the delivery date, which depends on distance, transport and preparation time; (2) the available amount of product, which depends on the stock and the eventual delivery time; and (3) the product selling price, which depends on the production and storage costs plus the warehouse's expected profit. The warehouse behaves similarly to the client. However, complex inventory policies must be modeled that have to include demand uncertainty. Moreover, since the inventory control policy will influence the cost of the downstream echelons in the SC, optimum negotiation at this point becomes essential.
- Manufacturer agent. This agent models the behavior of an actual manufacturing facility in the SC, producing intermediate and/or final products. It receives orders from clients/warehouses and makes an offer in terms of product amount, due date and price, which is calculated using a production scheduling module. Manufacturers also operate like clients regarding raw materials supply. Planning tools are used to determine the amount of raw material needed. Production scheduling models take into account the process type (continuous, batch, hybrid). Information provided by these models is used at the upper levels of manufacturing decision making (MRP, financial modules).
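As a minimal illustration of the order/offer exchange described for these physical agents, the sketch below defines hypothetical Order and Offer messages, a warehouse agent that quotes from its stock with a cost-plus price, and a central agent that forwards the order and returns the cheapest feasible offer. The class names, fields and selection rule are assumptions introduced for illustration, not the chapter's actual implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:                      # message a client sends to the central agent
    product: str
    amount: float
    due_date: int                 # latest acceptable delivery period
    max_price: float

@dataclass
class Offer:                      # reply produced by a potential supplier
    supplier: str
    amount: float
    due_date: int
    unit_price: float

class WarehouseAgent:
    def __init__(self, name, stock, unit_cost, prep_time):
        self.name, self.stock, self.unit_cost, self.prep_time = name, stock, unit_cost, prep_time

    def quote(self, order: Order) -> Offer:
        amount = min(order.amount, self.stock)                 # limited by the available stock
        return Offer(self.name, amount, self.prep_time, self.unit_cost * 1.15)   # cost plus margin

class CentralAgent:
    def __init__(self, suppliers):
        self.suppliers = suppliers

    def place_order(self, order: Order) -> Optional[Offer]:
        offers = [s.quote(order) for s in self.suppliers]
        feasible = [o for o in offers
                    if o.unit_price <= order.max_price and o.due_date <= order.due_date]
        # the client logic is condensed here to "cheapest feasible offer wins"
        return min(feasible, key=lambda o: o.unit_price, default=None)

ca = CentralAgent([WarehouseAgent("W1", 500, 9.0, 3), WarehouseAgent("W2", 200, 8.5, 5)])
print(ca.place_order(Order("P1", 300, due_date=4, max_price=11.0)))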
Management Agents
This type of agent does not represent a physical entity of the SC; rather, it simulates those aspects of the SC operation that are not necessarily related to a physical entity. These agents may optimize the overall performance of the SC by modeling specific parameters associated with the other agents.
- Central agent. Coordination of the agent network is achieved by means of the central agent. It essentially supervises and analyzes the information flow between the other agents. As a result of this information analysis, it may also take an active role by modifying adequate parameters in search of the optimum overall performance of the SC.
- Other agents. The architecture envisaged contemplates the existence of subagents operating within the agents already described. Namely, a manufacturer/warehouse agent contains coordinated subagents to simulate its real behavior. Typical examples of subagents are the sales/purchasing agents that simulate the corresponding departments in a factory. These subagents negotiate transactions between physical agents (client, warehouse, manufacturer agents) and perceive their consequences. They follow the client-supplier logic described before. The architecture developed may consider the "multiowner" case where one SC competes with another SC. In that case, each SC has a partial view of the whole situation and therefore cannot manipulate variables belonging to a SC of a different "owner". Thus, negotiation between SCs is necessary, the central agent of each SC being responsible for the interchain negotiation mechanism used to reach an agreement on multiple conditions (Fig. 3.6). Management agents are adaptive; namely, they are able to learn from the operation of the SC. In addition, they must be provided with appropriate tools (modules) for internal optimization (vertical integration) and external optimum negotiation (horizontal integration).
Figure 3.6 Interactions between two SCs through their central agents
Modules
Modules are not considered agents strictly speaking, but software tools needed to realize certain functionalities within the multiagent system. For instance, the warehouse physical agent's model of inventory control may require demand forecasting tools to estimate the amount of future supply. Therefore, the warehouse agent will have to interact with the forecasting module to achieve proper inventory control. The available modules are: forecasting, negotiation, planning and scheduling, financial, optimization (multiobjective), environmental and diagnosis. Other plug-in modules can be added in the future to contemplate further functionalities. The next sections focus on three of them (environmental, financial, negotiation), which provide a challenging insight into the level of integration achieved for the optimization of the whole SC performance.
3.4 Environmental Module
Environmental considerations in the SC are necessary because industrial products most often reach the client through a variety of steps that are subject to strict environmental regulations. Moreover, these requirements migrate upwards through the SC and create a need for a flow of environmental information. An adequate methodology to systematize this information and provide a vehicle for environmental impact minimization is life-cycle assessment (LCA). The LCA approach has been adapted for SC environmental assessment and improvement in this module [49]. The methodology used is summarized next. Let us consider the elementary SC shown in Fig. 3.7. It contains all the basic constituents of a generic SC in a simplified way, but although simple it permits the representation of a variety of scenarios.
Figure 3.7 Elementary supply chain representation utilized in the environmental module base case study
The assumptions made in this base case study give an insight into the characteristics of the model contained in the environmental module. It should be observed that a high degree of aggregation is assumed in this SC representation, so that the energy and material streams are reduced to a minimum. This assumption implies that this SC representation is the result of a detailed modeling of the individual agents in the network, for instance, the manufacturer agent. Now let us consider Pm1 as the product selected for LCA evaluation. This product leaves the factory as seen in Fig. 3.7. The purpose of this study is to obtain an eco-label for this product in terms of the environmental burden emissions associated with it, following the LCA methodology guidelines (goal and scope, inventory analysis, impact assessment, integration phases) applied to the SC system. The first phase of LCA identifies the functional unit (product or process). This "functionality" can always be expressed as an equivalent product amount (in kg or MJ, according to the nature of the product) that will facilitate later calculations. The system boundaries are indicated in Fig. 3.7 (although in some cases it will be necessary to go deeper inside these boundaries to perform internal calculations that find the environmental values for the global streams across the boundaries). Next, the inventory phase of manufacturing (the source block of product Pm1) takes place, where the data for the input streams Ps12 and Ps2, the emissions represented by Wm, and the data related to the products Pm1 and Pm2 are tabulated. Additionally, inputs and emissions downstream of the manufacturing agent (Wo, Wu and Ww) are also considered. A key issue in the inventory calculation is to establish the allocation policy for the environmental load associated with each product in each SC echelon. If the causal relationships between inputs, outputs and emissions are known with certainty, the inventory calculation can easily be done without the need for an allocation procedure. Otherwise, the following general expression can be used for allocation at each SC echelon:
where Pk represents the stream of product k and vk is the corresponding eco-vector associated with product k, W · vw is the waste stream weighted by its eco-vector, Fs · vs is an input stream multiplied by its eco-vector, and Pp · vp is the corresponding output stream weighted by its eco-vector vp. Finally, fs and fp are allocation factors that depend on the allocation policy (e.g., mass allocation, energy allocation). Allocation to the left-hand side of the chain, upstream of manufacturing (super supplier, supplier 1, supplier 2), is analyzed in the same way as for the manufacturing case. This procedure is called forward allocation (Fig. 3.8) because the environmental load is carried from left to right, that is, in the same direction as the material flow in the SC. Following the LCA philosophy, the environmental module also considers a backward allocation (Fig. 3.9), that is, in the opposite direction to the SC material flow. For instance, the manufacturer is also responsible for the environmental impact generated by its product after manufacturing, that is, along the other processes in which it participates, during its use, and finally, during the management and treatment of the generated residues. Recycle processes and streams are treated by considering that the associated environmental load is included in the supplier LCA assessment.
Figure 3.9 Backward allocation
The model (Fig. 3.7) cuts the Pr stream; thus Pr and Wr now include the inputs and emissions for the recycling process plus the inputs and emissions for supplier 1, respectively. The environmental load associated with stream Pr is considered to be zero.
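A small numerical sketch of the forward allocation step may help: the eco-vectors of the input and waste streams of one echelon (here the manufacturing block of Fig. 3.7) are combined and distributed over its products using mass-based allocation factors. The eco-vector components and the stream values are invented for illustration; the module itself supports other allocation policies (e.g., energy based).

import numpy as np

def mass_allocation(inputs, waste, product_masses):
    # Forward allocation at one SC echelon with mass-based factors.
    # inputs, waste: lists of (mass, eco_vector); product_masses: {name: mass}.
    # Returns the per-kg eco-vector and the total eco-vector carried by each product.
    load = sum(m * np.asarray(v, dtype=float) for m, v in inputs + waste)
    per_kg = load / sum(product_masses.values())
    return per_kg, {name: m * per_kg for name, m in product_masses.items()}

# Assumed eco-vector components: [kg CO2-eq, g SO2-eq] per kg of stream
v_Ps12 = [2.0, 10.0]       # input from supplier 1
v_Ps2 = [1.5, 20.0]        # input from supplier 2
v_Wm = [1.0, 50.0]         # manufacturing emissions expressed as an eco-vector

per_kg, totals = mass_allocation(inputs=[(80.0, v_Ps12), (40.0, v_Ps2)],
                                 waste=[(10.0, v_Wm)],
                                 product_masses={"Pm1": 100.0, "Pm2": 20.0})
print("eco-vector per kg of product:", per_kg.round(3))
print("total load allocated to Pm1:", totals["Pm1"].round(1))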
3.4.1 Implementation Considerations
The user asks the system for an eco-label (ecological card). This eco-label can be expressed as a set of environmental loads as well as a set of environmental impacts. The system offers a table to enter data. These data belong to the following categories: inputs, emissions, products and functional unit, all referring to the main production process. This table corresponds to the manufacturing block in Fig. 3.7. Moreover, the system offers another table to introduce the inputs and emissions associated with other processes, such as those described as backward allocation. According to the data entered, the system generates additional input data tables to be filled in by the user with the inputs, emissions, products and calculation basis for each new table. These new tables correspond to the blocks to the left of the manufacturing block in Fig. 3.7. Next, according to the kind of data entered in the tables, there are two types of calculation procedure. The tables whose inputs are all elementary flows must be calculated first; tables with some nonelementary inputs have to be calculated afterwards. It is important to maintain the correct precedence in such a way that the calculations follow the flowsheet from left to right. Finally, a table that contains all the information entered into the system is built. With this final table, the life cycle assessment calculations are made using data saved in an impact category table and an impact coefficients table. The Unified Modeling Language (UML) representation [50] has been used to build the environmental module. The use case diagram shown in Fig. 3.10 summarizes the functionality of the module.

Figure 3.10 Use case diagram showing the functionality of the environmental module
3.4.2 Industrial Testing
The environmental module has been tested in a real SC associated with automotive parts manufacturing. Specifically, the environmental impact associated with a certain component was evaluated and improvements were proposed to obtain eco-labeling of the product. Once the functional unit for the chosen product had been selected, the inventory analysis was carried out from the information collected on raw material consumption, emissions and product(s). In the impact assessment phase, impact indexes were calculated from the inventory analysis results for the following categories:
- global warming
- stratospheric ozone depletion
- eco-toxicological impact
- photochemical oxidant formation
- acidification
- eutrophication.
Then the allocation of the environmental load was carried out satisfactorily backwards and forwards along the supply chain. This permitted the maintenance of registers (eco-labeling) for each product. This register may give the manufacturer a substantial increase in market penetration. Moreover, the manufacturer can reduce the environmental impact of its products by selecting the "most ecological" supplier and/or modifying the process to reduce emissions. This could be done by incorporating other modules (planning, financial and optimization) in the final decision making.
3.5 Financial Module
The purpose of the financial module (FM) is to bridge the existing gap between supply chain financial decisions and production management by providing a common framework for integrated decision making that permits optimum cash management, thus avoiding the "blind", disaggregated financial/production decision making occurring in industrial practice.
The methodologies currently used in production planning/scheduling try to optimize some performance measure without consideration of cash availability. The output solution of the scheduling-planning model then has to be fitted to the finances in an iterative trial-and-error procedure [51]. This procedure (called the sequential procedure) usually incurs substantial debt and has to pledge receivables (a financial transaction of high cost to the manager). Moreover, the lack of synchronization between cash inflows and outflows results in cash balance fluctuations, which can be alleviated by considering the cash flows simultaneously with the production decisions. To achieve integration, the budgeting variables of liabilities and exogenous cash are calculated as a function of the production planning and scheduling variables. Namely, the liabilities in a specific time period are a function of the cost of purchasing raw materials, the cost of materials processing and the cost of having to purchase part of the final product from another supplier or another plant. The exogenous cash flow incurred in every time period is due to the sale of products. The detailed formulation can be found in Refs. [51, 52]. In summary, the proposed methodology contemplates (a minimal sketch of the weekly cash balance follows this list):
- production expenses during the week,
- an initial stock of raw materials and products,
- an initial working capital,
- a short-term financing source represented by a constrained open line of credit,
- production liabilities incurred in every weekly period due to the purchase of raw materials,
- exogenous cash flows due to the sale of products,
- a portfolio of marketable securities.
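The sketch below shows, in a highly simplified form, how such weekly cash-balance relations can be written as a linear program: exogenous inflows from product sales and production liabilities drive the cash position, a constrained credit line covers shortfalls, and a minimum cash level must be kept. All the numbers, the six-week horizon and the single credit instrument are assumptions; the formulation in Refs. [51, 52] is considerably richer (marketable securities, pledging of receivables, links to scheduling variables, etc.).

from pulp import LpProblem, LpMaximize, LpVariable, value

weeks = list(range(6))
sales_inflow = [40, 55, 30, 60, 45, 50]     # exogenous cash from product sales, k euro (assumed)
liabilities = [35, 50, 45, 40, 65, 35]      # raw material and processing payments, k euro (assumed)
credit_limit, min_cash, weekly_rate, initial_cash = 60.0, 5.0, 0.002, 20.0

prob = LpProblem("short_term_budget", LpMaximize)
cash = LpVariable.dicts("cash", weeks, lowBound=min_cash)
borrow = LpVariable.dicts("borrow", weeks, lowBound=0)
repay = LpVariable.dicts("repay", weeks, lowBound=0)
debt = LpVariable.dicts("debt", weeks, lowBound=0, upBound=credit_limit)

prob += cash[weeks[-1]] - debt[weeks[-1]]   # objective: final net cash position

for t in weeks:
    prev_cash = initial_cash if t == 0 else cash[t - 1]
    prev_debt = 0.0 if t == 0 else debt[t - 1]
    # weekly cash balance: operations plus credit line movements and interest
    prob += cash[t] == (prev_cash + sales_inflow[t] - liabilities[t]
                        + borrow[t] - repay[t] - weekly_rate * prev_debt)
    prob += debt[t] == prev_debt + borrow[t] - repay[t]

prob.solve()
print("weekly cash:", [round(value(cash[t]), 1) for t in weeks],
      "final debt:", round(value(debt[weeks[-1]]), 1))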
3.5.1 Financial Module Interaction with the Multiagent System
The financial module constitutes the supporting tool for financial decisions within the multiagent system framework, in coordination with the other decisions affecting the whole supply chain. This module permits coordination and integration of financial and operational decisions by exploiting the advantages offered by the multiagent system described previously. The structure proposed for the integration of the financial module with the MASC is shown in Fig. 3.11. The FM is used by the SC central agent to identify the best opportunities for investing in and financing the SC, as well as to evaluate the impact of operational decisions on the manufacturer's economy. The real supply chain is modeled and represented by the multiagent system. The central agent interacts with the FM in order to maximize the net profit for a given budget provided by the budgeting model, which uses the specific information given by the scheduling and planning module for two sets of time periods. The first set of time periods corresponds to the scheduling and planning period, while the second set goes beyond the end of the planning horizon up to one year of budgeting (see Fig. 3.12). It is important to note that the model incorporates a number of subjective constraints to allow for different profiles in financial risk management.
3.5.2 Testing Results in Industrial Scenarios
The benefits obtained by incorporating the financial module in the SC decision making have been assessed at different levels. First, the use of an integrated model coordinating financial and operational decisions was tested for a single manufacturer, specifically a plant producing five different products from two different raw materials.
Figure 3.12 Budgeting horizons and their link with the operative planning model (production planning horizon of three months, n = 13 weekly periods, followed by a second set of time periods up to one year)
Product switch-over basically depends on the nature of the substances involved in the preceding and following batches (precedence constraints). Cleaning times are constrained by the product sequence. Comparative results obtained from the application of a sequential approach and those achieved with the FM are shown in Fig. 3.13. It can be seen that the integrated solution incurs less debt and avoids having to pledge receivables, which means a 20 % saving for the firm. The second case study deals with the SC of a large fruit cooperative made up of raw material suppliers, manufacturing (fruit selection, cleaning, and packaging), warehouses, distribution centers and clients. Here a deeper level of integration in the SC management was contemplated. Since this supply chain is driven by the arrival of raw materials rather than by customer demand, a main objective was to use the forecasting module (FOREST) to estimate the raw material arrival times, thus reducing operational uncertainty. A second important objective was to integrate the financial and production aspects linked to cash flow management across the supply chain. Both modules were used on-line through the Web, thus achieving high visibility (customer on-line information) and improved service (increased customer satisfaction). The net results were an increase in sales of about 15 % and a reduction in stocks of about 20 % (a saving of two million euro per campaign).
Figure 3.13 Comparative results between the sequential approach (a) and the integrated approach (b): debt incurred, marketable securities and accumulated amount of pledged receivables in every weekly period (legend: marketable securities; debt; receivables pledged)
3.5.3 Negotiation Module
When adopting an agent-oriented view of computation, it is readily apparent that most problems require or involve multiple agents, as indicated before. Moreover, these agents need to interact with one another, either to achieve their individual objectives or to manage the dependencies that follow from being situated in a common environment. These interactions can vary from simple information interchanges, to requests for particular actions to be performed, and on to cooperation (working together to achieve a common objective) and coordination (arranging for related activities to be performed in a coherent manner). However, perhaps the most fundamental and powerful mechanism for managing interagent dependencies at run-time is negotiation - the process by which a group of agents comes to a mutually acceptable agreement. Automated negotiation among autonomous agents is needed when agents have conflicting objectives and a desire to cooperate. This typically occurs when agents have competing claims on scarce resources, not all of which can be satisfied simultaneously. These resources can be commodities, services, time, money, etc. Specifically, the main objective of the negotiation module developed is to enhance profitable partnerships in SCs. This goal is divided into the following steps:
- Identify the most profitable relationships and enhance them.
- Integrate supply contract negotiations into supply chain management.
- Evaluate different negotiation tactics according to the SC performance and partner behaviors.
- Develop learning techniques.
The proposed approach [53] takes into account the tradeoff between the quality of the offers made to customers, i.e., the level of satisfaction perceived by the client, and the expected profit to be achieved in the short-term operation of the SC. Therefore, a two-stage stochastic formulation is derived that considers the uncertainty associated with future demand, in order to compute a set of Pareto optimal solutions to the proposed problem. Each of these solutions comprises a SC schedule and a set of values for the parameters of the offers. Through comparison of the Pareto curve with the solution that would be obtained without negotiation, a set of offers representing contracts that are desirable from the supplier's perspective is obtained. This set of values may be offered by the supplier in order to reach an agreement with the customer during the negotiation procedure. This approach facilitates a rational negotiation, in the sense that it enables the negotiator to simultaneously process much more data related to production and transport plans and customer preferences, thus avoiding having to rely exclusively on the negotiator's beliefs and interests.
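The shape of such a profit versus satisfaction Pareto curve can be reproduced with a toy calculation: for a range of customer-satisfaction targets, the contract offer (quantity and price concession) is fixed in the first stage and the expected profit is evaluated over sampled demand scenarios in the second stage. The mapping from offer terms to satisfaction, the cost figures and the scenario distribution are all assumptions made here for illustration and are not the data behind Fig. 3.14.

import random
random.seed(0)

scenarios = [max(0.0, random.gauss(100.0, 25.0)) for _ in range(1000)]   # sampled spot demand
price, unit_cost, lost_sale_penalty, capacity = 12.0, 7.0, 2.0, 150.0

def expected_profit(contract_qty, discount):
    # First stage: commit contract_qty at a discounted price.
    # Second stage: serve the spot market with the remaining capacity in each scenario.
    profits = []
    for spot_demand in scenarios:
        spot_sales = min(capacity - contract_qty, spot_demand)
        lost = spot_demand - spot_sales
        profits.append(contract_qty * (price * (1.0 - discount) - unit_cost)
                       + spot_sales * (price - unit_cost) - lost * lost_sale_penalty)
    return sum(profits) / len(profits)

# Epsilon-constraint style sweep: each satisfaction target fixes the offer terms
for csat in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    qty = 60.0 + 60.0 * csat        # more generous contracted quantity -> higher satisfaction (assumed)
    discount = 0.15 * csat          # larger price concession -> higher satisfaction (assumed)
    print(f"CSat {csat:.0%}: expected profit {expected_profit(qty, discount):8.1f}")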
3.5.4 Motivating Example
Inspired by a real industrial case, a relatively simple linear SC is considered to illustrate the performance and results of the negotiation module. It entails a batch plant which produces three different products. The manufacturer has a warehouse (W) at which the products are stored when they leave the plant. As the plant has a limited capacity and the warehouse is imagined to be next to the factory, no transport is necessary. There is also a distribution center (DC) from which customers are served. Three occasional orders are met if the DC has some amount of the requested product in stock, provided that it is not planning to use that stock to satisfy contractual requirements. If an order has only been partially met, there is no penalization, but that particular sale cannot be carried out at a later point, when the merchandise reaches the DC. Moreover, the possibility of signing a contract with a customer is considered. The main objective of the proposed example is to observe how most of the activities executed by a SC can be integrated using the proposed negotiation model, rather than to study very complex structures involving many entities. The proposed approach provides a set of Pareto solutions to be used by the decision maker during the negotiation procedure (Fig. 3.14). There is a tradeoff between the expected profit and the quality of an offer sent to a customer, in this case the level of consumer satisfaction (CSat) attained by an offer, as well as a connection between customer relationship management (customer satisfaction) and production activities (schedules). For instance, for the stochastic Pareto solution the difference between the best-case and worst-case values at CSat = 80 % is approximately 450 m.u. (20 %), while for the deterministic solution this difference is equal to 1140 m.u. (55 %). Therefore, the stochastic treatment of the negotiation problem minimizes the impact of the uncertain environment by both increasing the expected profit and reducing the variability of the solution compared with the deterministic solution, which makes it very attractive from the perspective of the decision maker. Indeed, more complex storage policies, plant flexibility, larger numbers of entities in the SC network and so on can be addressed using the same approach. In addition, since future market behavior cannot be perfectly forecasted, a number of the parameters in the associated scheduling problem, such as product demands and prices, were considered to be uncertain parameters. The two-stage stochastic formulation developed has allowed this situation, commonly found in practice, to be handled properly, thus reducing the impact of the uncertainty on the profit achieved in short-term planning. The usefulness of signing contracts as a way of reducing uncertainty has also been shown by means of the aforementioned stochastic formulation. The proposed strategy represents a method for facilitating rational negotiation, in the sense that it enables the negotiator to process a far greater amount of production, transport planning and customer preference data simultaneously, and thus prevents beliefs and interests from being relied on exclusively.
Figure 3.14 Stochastic Pareto curve
3.6 Multiagent Architecture Implementation and Demonstration
The multiagent system framework has been implemented as a Web service. Each agent receives and transmits relevant information for the optimization of the SC. All agents depend on a central agent that coordinates information handling and that may modify a particular agent's decision should this be necessary for the negotiation optimization of the whole SC. Web services (agents) are programmed in C# using the tools of Visual Studio .NET by Microsoft, while XML under the SOAP protocol is used for communication between them. Each agent may use distributed modules (forecasting, planning and scheduling, optimization, environmental, financial and diagnosis) to support its activities.
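Although the actual implementation uses C# Web services, the structure of the XML messages exchanged between agents can be illustrated independently of that language; the sketch below builds and parses a hypothetical order message with Python's standard library. The element and attribute names are assumptions, not the project's actual schema.

import xml.etree.ElementTree as ET

def build_order_message(client, product, amount, due_date):
    # Serialize a hypothetical client order as XML (element names are assumed)
    order = ET.Element("Order", attrib={"client": client})
    ET.SubElement(order, "Product").text = product
    ET.SubElement(order, "Amount").text = str(amount)
    ET.SubElement(order, "DueDate").text = str(due_date)
    return ET.tostring(order, encoding="unicode")

def parse_order_message(payload):
    root = ET.fromstring(payload)
    return {"client": root.get("client"),
            "product": root.findtext("Product"),
            "amount": float(root.findtext("Amount")),
            "due_date": root.findtext("DueDate")}

message = build_order_message("RetailerA", "P1", 300, "2006-W12")
print(message)
print(parse_order_message(message))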
3.6.1 Manager Agent System
The manager agent system has a central agent that coordinates relevant information flow from/to the other agents in the network, and will eventually take decisions for
the whole SC optimization. In this way, the SC behavior can be accommodated to a full range of scenarios, from a decentralized operation to a fully centralized management where the director (central) agent has control over every individual SC activity. In general, the central agent performs the following functionalities:
- decision support,
- real-time information on SC activities,
- SC performance optimization,
- simulation and performance indicator calculation,
- client/supplier selection,
- graphic representation (PERT, Gantt).
These and other functionalities (forecasting, environmental assessment, financial assessment, SC retrofit, etc.) require the use of the appropriate module. Figure 3.15 shows the manager agent utility system. The main performance indicators and the expected improvement ratios are indicated in Table 3.2. Obviously, the expected ratio will depend on the original situation of the SC and the type of SC. The table should be continuously updated once intermediate objectives are achieved. A key component in the system architecture is the database, which must be made available to each agent locally. For instance, the basic design of the database of the central agent is shown in Fig. 3.16 for a specific scenario contemplating retailers of four different items (computers, bakery, books, and furniture).

Table 3.2 Main performance indicators and expected ratios

Elementary indicator                                      Ratio expected (min. to max. %)
I1 Modeling time reduction                                60-80 %
I2 Forecast accuracy                                      15-65 %
I3 Inventory reduction                                    20-85 %
I4 Means of production capacity utilization increase     5-60 %
I5 Cycle process time improvement                         15-70 %
I6 Supply chain costs reduction                           10-30 %
I7 Delivery performance improvement                       15-45 %

3.6.2 Graphic Interface

The graphic interface has a double objective. On the one hand, a graphic user interface (GUI) is needed for the real client that requires interaction with the multiagent system (specific demand implementation, requests for information). On the other hand, a graphic interface is needed for the SC manager. In this regard a client GUI is provided to perform the SC simulation. The SC client places a command through the Web application shown in Fig. 3.17, which enables him to communicate with the central agent; the central agent then updates the database accordingly.
Figure 3.15 UML representation of the manager agent utility system
Figure 3.16 Basic design of the database of the central agent for the specific scenario contemplating retailers of four different items (computers, bakery, books, and furniture)
Once the client signs up with the Web service and provides the requested information (Fig. 3.18), he has access to the services and information offered (Fig. 3.19).
Figure 3.17 System access Web page

Figure 3.18 Client sign-up form

Figure 3.19 Example of information offered to the client
The client agent may be interested in knowing how the real SC system behaves in response to a new demand. In this case, a simulation of the real client is realized that analyzes alternative possible scenarios. Therefore, the client agent is provided with:
- a demand generator supplied with different patterns (stochastic, probability distribution functions; see the sketch after this list),
- connectivity with the central agent to receive/transmit messages,
- a graphic user interface.
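A minimal sketch of such a demand generator is given below, assuming the stochastic patterns are simple draws from standard probability distributions; the distributions and parameter values are illustrative only, not those of the client agent described here.

```python
# Minimal demand-generator sketch; distribution choices and parameters are
# illustrative assumptions.
import random

def generate_demand(pattern: str, periods: int, **params) -> list[int]:
    """Return a non-negative demand series for the requested pattern."""
    demands = []
    for _ in range(periods):
        if pattern == "uniform":
            d = random.uniform(params.get("low", 50), params.get("high", 150))
        elif pattern == "normal":
            d = random.gauss(params.get("mean", 100), params.get("stdev", 20))
        elif pattern == "seasonal":
            base = random.gauss(params.get("mean", 100), params.get("stdev", 10))
            d = base * (1 + 0.3 * ((len(demands) % 12) < 6))  # crude high/low season
        else:
            raise ValueError(f"unknown demand pattern: {pattern}")
        demands.append(max(0, round(d)))
    return demands

weekly_orders = generate_demand("normal", periods=52, mean=120, stdev=25)
```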
The interfaces created for the rest of the agents can be seen in the following figures. The central agent interface is shown in Fig. 3.20. It collects all information from the database (on products, transport, warehouses, factories, environmental impact, financial data, etc.) and permits simulation (left panel) and optimization (right panel) of the SC; multiobjective optimization is carried out using any of the solvers offered in the center panel. The forecasting module interface is shown in Fig. 3.21, giving the demand forecast in terms of dates and amounts of specific products. The environmental module interface appears in Fig. 3.22. Here the left panel permits the input of the raw materials needed for each specific product, and the right panel has the commands to perform the life cycle analysis of the entire SC. Figure 3.23 shows the financial module interface. At the left, initial assets, liabilities and equity can be introduced, which appear optimized at the right after optimization under the constraints selected at the center (minimum cash, debt, interest rate, etc.). Finally, the negotiation agent interface can be observed in Fig. 3.24 for a certain product. It shows the evolution towards satisfaction for both customer and supplier, once the negotiation is initiated, in terms of quantity, price and delivery time.
Figure 3.20 The central agent interface

Figure 3.21 The forecasting module interface

Figure 3.24 The negotiation agent interface
The planning and scheduling module is not shown, since it is fully described elsewhere [54, 55]. The same can be said regarding the real-time monitoring and diagnosis module [56].

3.6.3 Demonstration
The multiagent system described has been tested in industrial scenarios. Some of these scenarios have been partially presented in previous sections of this chapter to show relevant components of the SC system. A global online demonstration through the Web took place recently at the CHEM Users Committee held in Lille, France [57]. There, operational, tactical and strategic activities were shown to cooperate successfully in the SC, from demand forecasting to diagnosis, control and retrofit considerations. The demonstration contemplates the whole SC of a cosmetics manufacturing group of enterprises with the following characteristics:
- multiproduct manufacturing plants located in Europe (Oviedo and Tarragona in Spain, and one in Italy), the United States (California and Florida) and Mexico (Queretaro),
- warehouses for final products,
- distribution centers from which customers are served,
- a transport system for distribution to retailers and clients.
The demonstration is initiated by forecasting, from historical data, the procurement needs for specific products at one of the warehouses in France. A robust demand is obtained using the forecasting module. Then, the following real-time sequence of decisions and activities is carried out by the multiagent system:
- The negotiation module is used to select the most satisfactory provider (the factory situated in Tarragona) for the specific products and to agree on the amounts, prices and due dates envisaged.
- The environmental impact is assessed for the complete life cycle of the products contemplated, by means of the environmental module described before.
- Financial evaluation is carried out, taking into account budgeting, cash flow and additional considerations provided by the financial module.
- The central agent collects all the preceding information and requests additional manufacturing data from the Tarragona plant. Simulation of the whole SC is then carried out, checking for feasibility. Finally, multiobjective optimization (profit, cash flow, debt, environmental impact, due dates) is performed. Optimum values are transmitted over the Web to the plant in Tarragona for manufacturing.
- The production planning and scheduling module calculates the optimal production schedule for the forecasted demand, which is automatically implemented in the plant.
- An incident occurs during plant operation (the reactor heating system breaks down). The monitoring and diagnosis module detects and isolates the fault. An alarm is issued and diagnosed. As a consequence, a rescheduling procedure takes place to find an alternative production route, which is again implemented in real time.
- The operator repairs the reactor, which comes back into operation. The monitoring system reacts by sending a new plan, which coincides with the original one since it was optimal, and plant operation resumes using the repaired reactor.
3.7 Concluding Remarks
The supply chain of a manufacturing enterprise is nowadays a world-wide network of suppliers, factories, warehouses, distribution centers and retailers through which raw materials are acquired, transformed and delivered to customers. In this sense, the whole supply chain can be considered as a dynamic virtual enterprise: through adequate management of its supply chain, the manager can find adequate solutions to cope with the dynamics of the production scenario, which includes drastic
and unexpected changes in the availability of materials or production resources, in market conditions, or even in politics. This chapter has presented recent advances in integrated solutions for online supply chain management. Specifically, an environment is presented that encompasses the SCM characteristics identified in a preliminary review of the state of the art. It reported the integration of negotiation, environmental, forecasting and financial decisions in a reactive mode as an example of a new technology that may lead to better, fully integrated, easier-to-use and more comprehensive tools for SCM. A brief description of the architecture and functionalities of the implemented solution has also been presented.
Acknowledgements
The authors wish to acknowledge support of this research work from the European Community (Contract No GIRD-CT-2001-004GG), the CICyT-MEC (project No DPI2003-0856), and the CIRIT-Generalitat de Catalunya (project No 1-353). The contribution of Fernando Mele, Gonzalo Guillen and Francisco Urbano, predoctoral students of the research group CEPIMA (Chemical Engineering Department, Universitat Politecnica de Catalunya), is also much appreciated.
References

1 Vidal C. J., Goetschalckx M. Strategic Production-Distribution Models: A Critical Review with Emphasis on Supply Chain Models. European Journal of Operational Research 98 (1997) p. 1-18
2 Applequist G. E., Pekny J. F., Reklaitis G. V. Economic Risk Management for Design and Planning of Chemical Manufacturing Supply Chains. Computers and Chemical Engineering 24 (2000) p. 2211-2222
3 Badell M., Romero J., Huertas R., Puigjaner L. Planning, Scheduling and Budgeting Value-Added Chains. Computers and Chemical Engineering 28 (2004) p. 45-61
4 Wu J., Cobzaru M., Ulieru M., Norrie D. SC-Web-CS: Supply Chain Web-Centric Systems. Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Banff, Canada (2000) pp. 501-507
5 Grossmann I. E., McDonald C. M. Foundations of Computer-Aided Process Operations: A View to the Future Integration of R&D, Manufacturing and the Global Supply Chain, CACHE Corp., Austin, Texas 2003
6 Wu J., Ulieru M., Cobzaru M., Norrie D. Agent-Based Supply Chain Management System: State-of-the-Art and Implementation Issues, in Proceedings of the European Symposium on Computer-Aided Process Engineering-14 (ESCAPE-14), Lisbon, Portugal, Elsevier, Amsterdam 2004
7 Lee H. L., Billington C. Managing Supply Chain Inventory: Pitfalls and Opportunities. Sloan Management Review, Spring (1992) p. 65-73
8 Geoffrion A. M., Powers R. F. Facility Location Analysis is Just the Beginning. Interfaces 10 (1980) p. 22-30
9 Erenguc S. S., Simpson N. C., Vakharia A. J. Integrated Production/Distribution Planning in Supply Chain: An Invited Review. European Journal of Operational Research 115 (1999) p. 219-236
10 Williams J. F. Heuristic Techniques for Simultaneous Scheduling of Production and Distribution in Multi-Echelon Structures: Theory and Empirical Comparisons. Management Science 27 (1981) p. 336-352
11 Williams J. F. A Hybrid Algorithm for Simultaneous Scheduling of Production and Distribution in Multi-Echelon Structures. Management Science 29 (1983) p. 77-92
12 Ishii K., Takahashi K., Muramatsu R. Integrated Production Inventory and Distribution Systems. International Journal of Production Research 26 (1988) p. 473-482
13 Cohen M. A., Lee H. L. Resource Deployment Analysis of Global Manufacturing and Distribution Networks. Journal of Manufacturing and Operations Management 2 (1989) p. 81-104
14 Cohen M. A., Moon S. Impact of Production Scale Economies, Manufacturing Complexity and Transportation Costs on Supply Chain Facility Networks. Journal of Manufacturing and Operations Management 3 (1990) p. 35-46
15 Newhart D. D., Stott K. L., Vasko F. J. Consolidating Product Sizes to Minimize Inventory Levels for a Multi-Stage Production and Distribution System. Journal of the Operational Research Society 44(7) (1993) p. 637-644
16 Arntzen B. C., Brown G. G., Harrison T. P., Trafton L. L. Global Supply Chain Management at Digital Equipment Corporation. Interfaces 25 (1995) p. 69-93
17 Voudouris V. T. Mathematical Programming Techniques to Debottleneck the Supply Chain of Fine Chemical Industries. Computers and Chemical Engineering 20(Suppl.) (1996) p. S1269-S1275
18 Camm J. D., Chatman F. A., Evans J. R., Sweeney D. J., Wegryn G. W. Blending OR/MS Judgement and GIS: Restructuring P&G's Supply Chain. Interfaces 27 (1997) p. 120-142
19 Papageorgiou L., Rotstein G., Shah N. Strategic Supply Chain Optimization for the Pharmaceutical Industries. Industrial and Engineering Chemistry Research 40 (2001) p. 275
20 Davis T. Effective Supply Chain Management. Sloan Management Review, Summer (1993) p. 35-46
21 Cohen M. A., Lee H. L. Integrated Analysis of Global Manufacturing and Distribution Systems: Models and Methods. Operations Research 36 (1988) p. 216-228
22 Svoronos A., Zipkin P. Evaluation of One-for-One Replenishment Policies for Multi-Echelon Inventory Systems. Management Science 37 (1991) p. 68-83
23 Pyke D. F., Cohen M. A. Performance Characteristics of Stochastic Integrated Production-Distribution Systems. European Journal of Operational Research 68 (1993) p. 23-48
24 Pyke D. F., Cohen M. A. Multi-Product Integrated Production-Distribution Systems. European Journal of Operational Research 74(1) (1994) p. 18-49
25 Lee H. L., Padmanabhan V., Whang S. Information Distortion in a Supply Chain: The Bullwhip Effect. Management Science 43 (1997) p. 546-558
26 Owen S. H., Daskin M. S. Strategic Facility Location: A Review. European Journal of Operational Research 111 (1998) p. 423-447
27 Mobasheri F., Orren L. H., Sioshansi F. P. Scenario Planning at Southern California Edison. Interfaces 19 (1984) p. 31-44
28 Mulvey J. M. Generating Scenarios for the Towers Perrin Investment System. Interfaces 26 (1996) p. 1-15
29 Jenkins L. Selecting Scenarios for Environmental Disaster Planning. European Journal of Operational Research 121 (1999) p. 275-286
30 Gupta A., Maranas C. D., McDonald C. M. Mid-Term Supply Chain Planning Under Demand Uncertainty: Customer Demand Satisfaction and Inventory Management. Computers and Chemical Engineering 24 (2000) p. 2613-2621
31 Tsiakis P., Shah N., Pantelides C. C. Design of Multi-Echelon Supply Chain Networks Under Demand Uncertainty. Industrial and Engineering Chemistry Research 40 (2001) p. 3585-3604
32 Lababidi H. M. S., Ahmed M. A., Alatiqi I. M., Al-Enzi A. F. Optimizing the Supply Chain of a Petrochemical Company Under Uncertain Operating and Economic Conditions. Industrial and Engineering Chemistry Research 43 (2004) p. 63-73
33 Guillen G., Bonfill A., Espuna A., Puigjaner L. Integrating Production and Transport Scheduling for Supply Chain Management Under Market Uncertainty, in Proceedings of the European Symposium on Computer-Aided Process Engineering-14 (ESCAPE-14), Lisbon, Portugal, Elsevier, Amsterdam 2004
34 Baumol W. J. The Transactions Demand for Cash: An Inventory Theoretic Approach. The Quarterly Journal of Economics 66 (1952) p. 545-556
35 Miller M. H., Orr R. A. A Model of the Demand for Money by Firms. The Quarterly Journal of Economics 80 (1966) p. 413-435
36 Christy D. P., Grout J. R. Safeguarding Supply Chain Relationships. International Journal of Production Economics 36 (1994) p. 233-242
37 Romero J., Badell M., Bagajewicz M., Puigjaner L. Integrating Budgeting Models into Scheduling and Planning Models for the Chemical Batch Industry. Industrial and Engineering Chemistry Research 42 (2003) p. 6125-6134
38 Badell M., Romero J., Puigjaner L. Joint Financial and Operating Scheduling/Planning in Industry, in Proceedings of the European Symposium on Computer-Aided Process Engineering-14 (ESCAPE-14), Lisbon, Portugal, Elsevier, Amsterdam 2004
39 Forrester J. W. Industrial Dynamics. MIT Press, Cambridge, MA 1961
40 Towill D. R. Industrial Dynamics Modeling of Supply Chains. Logistics Information Management 9(4) (1996) p. 43-56
41 Backx T., Bosgra O., Marquardt W. Towards Intentional Dynamics in Supply Chain Conscious Process Operations, in Pekny J. F. and Blau G. E. (Eds.) Proceedings of the Third International Conference on Foundations of Computer-Aided Process Operations, AIChE, New York (1998) p. 5
42 Perea-Lopez E., Grossmann I., Ydstie B. E., Tahmassebi T. Industrial and Engineering Chemistry Research 40 (2001) p. 3369-3383
43 Brown M. W., Rivera D. E., Carlyle W. M. et al. A Model Predictive Control Framework for Robust Management of Multi-Product, Multi-Echelon Demand Networks, in Proceedings of the 15th IFAC World Congress, Barcelona, Spain, 2002
44 Mele F. D., Forquera F., Rosso E., Basualdo M., Puigjaner L. A Comparison Between Chemical and Model Predictive Control Over a Supply Chain Dynamic Model, in Proceedings of the 9th Mediterranean Congress of Chemical Engineering, Expoquimia, Barcelona, Spain 2002
45 Mele F. D., Espuna A., Puigjaner L. Supply Chain Management Through a Combined Simulation-Optimisation Approach, in Proceedings of ESCAPE-15 (L. Puigjaner, A. Espuna, Eds.), Elsevier, Amsterdam (2005) p. 1405-1410
46 Garcia-Beltran C., Feritil S. Multi-Agent-Based Decision System for Process Reconfiguration, in Latin-American Control Conference, Guadalajara, Mexico, 2002
47 Lin J., You J. Smart Shopper: An Agent-Based Web Approach to Internet Shopping. IEEE Transactions on Fuzzy Systems 11 (2003) p. 226-237
48 Report 02-2: Final Specifications of the Supply Chain Multi-Agent Architecture. Project 1-303 (GICASA-D) 2003
49 Report D19: Environmental Impact Considerations Based on Life Cycle Analysis. Project GIRD-CT-2000-00318 2003
50 Muller P. A. Modelado de Objetos con UML. Ediciones Gestion 2000 S.A., Barcelona 1997
51 Romero J., Badell M., Bagajewicz M., Puigjaner L. Integrating Budgeting Models into Scheduling and Planning Models for the Chemical Industry. Industrial and Engineering Chemistry Research 42 (2003) p. 6125-6134
52 Badell M., Romero J., Huertas R., Puigjaner L. Planning, Scheduling and Budgeting Value-Added Chains. Computers and Chemical Engineering 28 (2004) p. 45-61
53 Guillen G., Pina C., Espuna A., Puigjaner L. Optimal Offer Proposal Policy in an Integrated Supply Chain Management Environment. Industrial and Engineering Chemistry Research 44 (2005) p. 7405-7419
54 Puigjaner L. Handling the Increasing Complexity of Detailed Batch Process Simulation and Optimization. Computers and Chemical Engineering 23(Suppl.) (1999) p. S929-S943
55 Arbiza M. I., Canton J., Espuna A., Puigjaner L. Objective-Based Schedule Selector: a Rescheduling Tool for Short-Term Plan Updating, in Barbosa-Povoa A. (Ed.) European Symposium on Computer-Aided Process Engineering-14, Lisbon, Portugal, CD-ROM 2004
56 Ruiz D., Benqlilou C., Nougues J. M., Puigjaner L. Proposal to Speed Up the Implementation of an Abnormal Situation Management in the Chemical Process Industry. Industrial and Engineering Chemistry Research 41 (2002) p. 817-824
57 Puigjaner L. Real-Time Optimization of Process Operations: An Integrated Solution Perspective. CHEM Users' Committee Seminar, Lille, France, 2004
4 Databases in the Field of Thermophysical Properties in Chemical Engineering
Richard Sass
4.1 Introduction
Process synthesis, design, and optimization, as well as detail engineering for chemical plants and equipment, depend heavily on the availability and reliability of thermophysical property data of the pure components and mixtures involved. To illustrate this fact we can analyze the needs of one of the essential process engineering operations, the separation of fluid mixtures. For the design of such a typical separation process, e.g., distillation, we require the thermodynamic properties of the mixture, in particular for a system that has two or more phases at a certain temperature or pressure: we require the equilibrium constants of all components in all phases. The quality of the data inside the data calculation modules is essential and can have extensive effects. Inaccurate data may lead to very expensive misjudgements on whether to proceed with a new process or a modification of it, or not to go ahead. Inadequate or unavailable data may cause a promising and profitable process to be delayed or, in the worst case, rejected, only because it was not properly modeled in a simulation. Another potential danger, partially generated by the marketing statements of simulation software producers, is that the credibility of the results of a thermophysical model calculation generated by computer software is very high, even if the result is wrong. The expert therefore has the duty to show that even the most sophisticated software will not automatically lead to the most cost-effective solution, or to effective energy savings, if it is not backed by an accurate database of physical and thermodynamic data.
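To recall why these data matter: for a low-pressure distillation, the equilibrium constant of component i is commonly written K_i = y_i/x_i = gamma_i * Psat_i(T) / P (modified Raoult's law), so the quality of the vapor pressure correlation and of the activity coefficient model enters every stage calculation directly. The following sketch illustrates this; the Antoine constants and the activity coefficient are made-up round numbers, not recommended data for any real system.

```python
# Illustrative only: Antoine constants and activity coefficient are placeholders.
def antoine_psat_bar(a: float, b: float, c: float, t_kelvin: float) -> float:
    """Vapor pressure from an Antoine-type correlation, log10(P/bar) = A - B/(T + C)."""
    return 10.0 ** (a - b / (t_kelvin + c))

def k_value(gamma: float, psat_bar: float, p_bar: float) -> float:
    """Modified Raoult's law: K_i = y_i / x_i = gamma_i * Psat_i / P."""
    return gamma * psat_bar / p_bar

t = 350.0          # K
p = 1.013          # bar, column pressure
psat_light = antoine_psat_bar(4.0, 1200.0, -50.0, t)
k_light = k_value(gamma=1.2, psat_bar=psat_light, p_bar=p)
print(f"Psat = {psat_light:.3f} bar, K = {k_light:.3f}")
```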
4.2 Overview of the Thermophysical Properties Needed for CAPE Calculations
Without access to a numerical database, and if the available literature and notes do not contain a value, the only possibility is to measure the property or to calculate it with a group contribution method or another estimation routine. The first alternative is expensive and time-consuming; the second one will in most cases produce data of unknown reliability, especially when molecules with two or more nonhydrocarbon functional groups in close proximity are involved. With the help of thermophysical databases containing experimental values for pure components and mixtures, this problem can be solved. A description of what data are needed and which types of data are available follows. The properties required for the design of a thermal or chemical process depend upon the specific case and the temperature, pressure and concentration range. A short overview of the data needed in the simulation and design of processes is given in Table 4.1.

Table 4.1 Important categories of property data

Property type | Specific properties
Phase equilibria | Boiling and melting points, vapor pressure, fugacity and activity coefficients, solubility (Henry's constants, Ostwald or Bunsen coefficients)
PVT behavior | Density, volume, compressibility, critical constants
Caloric properties | Specific heat, latent heat, enthalpy, entropy
Transport properties | Viscosity, thermal conductivity, ionic conductivity, diffusion coefficients
Boundary properties | Surface tension
Chemical equilibrium | Equilibrium constants, association/dissociation constants, enthalpies of formation, heat of reaction, Gibbs energy of formation, reaction rates
Acoustic | Velocity of sound
Optical | Refractive index, polarization
Safety characteristics | Flash point, explosion limits, autoignition temperature, minimum ignition energy, toxicity, maximum working place concentration
Molecular properties | Virial coefficients, binary interaction parameters, ion radius and volume
4.3 Sources of Thermophysical Data
For years, the most popular way to find thermophysical property data was to take a look inside favorite book collections, starting with the Handbook of Chemistry and Physics, up to the data collection handbooks issued by data producers such as DIPPR [1] and TRC [2], the Landolt-Börnstein [3] and the DECHEMA Chemistry Data Series [4]. Despite the inconvenience of using handbooks for data searches, a lot of users still appreciate the fast access to the data and, in comparison with the databases, the relatively moderate price. An overview compiled by the Merseburg University of Applied Sciences gives a list of available books and publications on thermophysical property data (www.fh-merseburg.de/PhysChem).
Nowadays, mainly under the pressure of having data available within a short time for calculations, and thanks to full-time access to networks, the easiest way to find data is through databases, available either as in-house versions or online via hosts or the World-Wide Web. Two types of collections and/or databases can be distinguished: bibliographical ones and numerical ones. A bibliographical collection or database is a literature source containing only references. Knowing the chemical species one needs data for, one can find literature references containing those data. Afterwards one has to go to the library and look up the different references to get the data. A numerical database or collection typically contains the literature references as well as the measurement data. The numerical data can be accessed and used directly. In some cases this approach is combined with a critical review and selection of the available data, so that only thermodynamically consistent and proven data are contained in the collection. The approach can also be combined with model parameter fitting and recommendation, so that end users only have to transfer the recommended parameters into their applications to implement a tested model with a defined reliability over all known measurement data points. In the following, a survey of the existing and still maintained collections and databases is given.
4.4 Examples of Databases for Thermophysical Properties
Due to the fact that dozens of sources for thermodynamic data are now available on the Web, only a few major providers are mentioned in this chapter. For a larger overview of what is available on the Web, a look at the pages, e.g., of the University of Illinois is recommended (http://tigger.uic.edu/~mansoori/Thermodynamic.Data.and.Property.html). A few examples of the largest and most famous databases are shown in Table 4.2. Three of these databases are described in the following.
Table 4.2 Provider list for thermophysical data

Producer | Database name | URL
DECHEMA | DETHERM | www.dechema.de/detherm-lang-en.html
DDBST | DDB | www.ddbst.de/new/Default.htm
NIST | Properties of fluids | http://properties.nist.gov/
NIST | Chemistry WebBook | http://webbook.nist.gov/chemistry/
IUPAC-NIST | Solubility Database | http://srdata.nist.gov/solubility/
K&K Associates | Thermal Resource Center | www.tak2000.com/
FIZ Chemie | INFOTHERM | www.fiz-chemie.de
 | | www.ceram.co.uk/thermet.html
 | Technical database | www.dnv.com/software/all/api/index.asp
MDL | CrossFire Beilstein | www.mdl.com/products/knowledge/crossfire-beilstein/
TPC, Academy of Science Russia | THERMAL | www.chem.ac.ru/Chemistry/Databases/THERMAL.en.html
AIChE | DIPPR | http://dippr.byu.edu/
G&P Engineering Software | MIXPROPS | www.gpengineeringsoft.com/pages/pdtmixprops.html
G&P Engineering Software | PHYPROPS | www.gpengineeringsoft.com/pages/pdtphysprops.html
Ecole Polytechnique de Montreal | | www.crct.polymtl.ca/fact/index.php
S. Ohe | Fundamental Physical Properties | http://data-books.com/bussei-e/bs-index.1
Prode | Prode Properties | www.prode.com/en/ppp.htm
NEL | PPDS | www.ppds.co.uk/Products/
 | | http://thermodata.online.fr
 | Database | http://chinweb.ipe.ac.cn/
4.4.1 NIST Chemistry WebBook
The NIST Chemistry WebBook provides access to data compiled and distributed by NIST under the Standard Reference Data Program [5]. The NIST Chemistry WebBook [6] contains:
- thermochemical data for over 7000 organic and small inorganic compounds: enthalpy of formation, enthalpy of combustion, heat capacity, entropy, phase transition enthalpies and temperatures, vapor pressure;
- reaction thermochemistry data for over 8000 reactions: enthalpy of reaction, free energy of reaction;
- IR spectra for over 16,000 compounds;
- mass spectra for over 15,000 compounds;
- UV/Vis spectra for over 1600 compounds;
- electronic and vibrational spectra for over 4500 compounds;
- constants of diatomic molecules (spectroscopic data) for over 600 compounds;
- ion energetics data for over 16,000 compounds: ionization energy, appearance energy, electron affinity, proton affinity, gas basicity, cluster ion binding energies;
- thermophysical property data for 34 fluids: density, specific volume, heat capacity at constant pressure (Cp), heat capacity at constant volume (Cv), enthalpy, internal energy, entropy, viscosity, thermal conductivity, Joule-Thomson coefficient, surface tension (saturation curve only), speed of sound.
The Chemistry WebBook can be searched for data on specific compounds based on name, chemical formula, CAS registry number, molecular weight, chemical structure, or selected ion energetics and spectral properties.
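Retrieval is page-oriented: compounds are served as HTML pages keyed, among other identifiers, by CAS registry number. The sketch below builds such a query; the cbook.cgi URL pattern is an assumption based on the public site at the time of writing and may have changed since.

```python
# Illustrative fetch of a NIST Chemistry WebBook page by CAS number.
# The URL pattern is an assumption and may differ from the current site.
import urllib.parse
import urllib.request

def webbook_url(cas_number: str, units: str = "SI") -> str:
    """Build a query URL for one compound, identified by its CAS registry number."""
    query = urllib.parse.urlencode({"ID": cas_number, "Units": units})
    return f"http://webbook.nist.gov/cgi/cbook.cgi?{query}"

def fetch_compound_page(cas_number: str) -> str:
    with urllib.request.urlopen(webbook_url(cas_number)) as response:
        return response.read().decode("utf-8", errors="replace")

print(webbook_url("7732-18-5"))   # water; printing the URL needs no network access
```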
4.4.2 DETHERM
The DETHERM [7] database provides thermophysical property data for about 24,000 pure compounds and 146,000 mixtures. DETHERM contains literature values, together with bibliographical information, descriptors and abstracts. At present, 5.2 million data sets are stored. DETHERM is a collection of data packages produced by well-known providers of thermophysical packages, unified under a common graphical user interface. The database files listed in Table 4.3 are part of DETHERM. An example of the current possibilities for presentation of the results is shown in Fig. 4.1.
Table 4.3 Content of DETHERM

Dortmunder Datenbank DDB (Prof. Gmehling, University of Oldenburg): phase equilibrium data
- Vapor-liquid equilibria
- Liquid-liquid equilibria
- Vapor-liquid equilibria of low boiling substances
- Activity coefficients at infinite dilution
- Gas solubilities
- Solid-liquid equilibria
- Azeotropic data
- Excess properties: excess enthalpies, excess heat capacities, excess volume
- Pure component properties: transport properties, vapor pressures, critical data, melting points, densities, caloric properties, others

Electrolyte data collection ELDAR (Prof. Barthel, University of Regensburg, LS Chemie IV)
- Caloric data
- Electrochemical properties
- Phase equilibrium data
- PVT properties
- Transport properties

Thermophysical database INFOTHERM (FIZ CHEMIE)
- PVT data
- Transport properties
- Surface properties
- Caloric properties
- Phase equilibrium data: vapor-liquid, gas-liquid, liquid-liquid and solid-liquid equilibria
- Pure component basic data

Thermophysical Parameter Database COMDOR (Leuna GmbH in cooperation with FIZ Chemie)
- Phase equilibria
- Excess enthalpies
- Transport and surface properties
- Caloric and acoustic data

Data Collection C-DATA (Institute for Chemical Technology, Prague)
- Twenty physicochemical properties for 593 pure components

Basic Database Böhlen BDBB (Sächsische Olefinwerke AG Böhlen, now DOW Chemical)
- Pure component database of the Sächsische Olefinwerke with chemical and physical basic data for 1126 pure substances (mainly for the fields of petroleum and coal chemistry)

Additional data (DECHEMA e.V.)
- Vapor pressures
- Transport properties: thermal conductivities, viscosities
- Caloric properties
- PVT data: PVT data, critical data
- Eutectic data
- Solubilities
- Diffusion coefficients
4.4.3 DIPPR Database [1]
The database of the Design Institute for Physical Property Data (DIPPR), a subsidiary of the American Institute of Chemical Engineers (AIChE) [8], mainly contains collections of pure component properties, but also data for selected properties of mixtures and the results of a project related to environmental, safety and health data. In total, the database holds data for 1700 compounds, covering mainly the components of primary interest to the process industries. The special focus of the DIPPR database is to provide reliable data for thermophysical properties, including the temperature dependency of the properties, which are approved by technical committees in which industrial experts are involved in the design of the database and in the evaluation of the data. Table 4.4 gives an overview of the content of the DIPPR database. The use of these databases is meanwhile a standard option in the preparation of a process design. A bigger difficulty is the absence of thermophysical data for newer processes involving electrolytes and solutions containing biomaterial. This specific topic is explained in the following sections.
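The temperature-dependent properties in Table 4.4 are stored as coefficients of correlation equations rather than as raw data points; a widely used DIPPR form for vapor pressure, for instance, is ln P = A + B/T + C ln T + D T^E. The sketch below evaluates such a correlation; the coefficients shown are invented placeholders, not DIPPR data.

```python
# Sketch of evaluating a DIPPR-style vapor pressure correlation
# ln(P/Pa) = A + B/T + C*ln(T) + D*T**E  -- the coefficients below are placeholders.
import math

def vapor_pressure_pa(a, b, c, d, e, t_kelvin):
    return math.exp(a + b / t_kelvin + c * math.log(t_kelvin) + d * t_kelvin ** e)

# Hypothetical coefficient set for some light organic compound:
coeffs = dict(a=60.0, b=-6000.0, c=-5.0, d=2.0e-17, e=6.0)
for t in (280.0, 320.0, 360.0):
    print(t, f"{vapor_pressure_pa(t_kelvin=t, **coeffs):.1f} Pa")
```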
Figure 4.1 Joint graphical display of two different LLE data sets in DETHERM for the system chlorobenzene/acetonitrile/water
Table 4.4 Properties in the DIPPR 801 Database [9]

Constant properties: property | Units
Acentric factor | -
Autoignition temperature | K
Dipole moment | C m
Absolute entropy of ideal gas at 298.15 K and 1 bar | J (kmol K)^-1
Lower flammability limit temperature | K
Upper flammability limit temperature | K
Lower flammability limit percent | vol % in air
Upper flammability limit percent | vol % in air
Flash point | K
Gibbs energy of formation for ideal gas at 298.15 K and 1 bar | J kmol^-1
Standard state Gibbs energy of formation at 298.15 K and 1 bar | J kmol^-1
Net standard state enthalpy of combustion at 298.15 K | J kmol^-1
Enthalpy of formation for ideal gas at 298.15 K | J kmol^-1
Enthalpy of fusion at melting point | J kmol^-1
Standard state enthalpy of formation at 298.15 K and 1 bar | J kmol^-1
Heat of sublimation | J kmol^-1
Liquid molar volume at 298.15 K | m^3 kmol^-1
Melting point at 1 atm | K
Molecular weight | kg kmol^-1
Normal boiling point | K
Parachor | -
Critical pressure | Pa
Radius of gyration | m
Refractive index | -
Solubility parameter at 298.15 K | (J m^-3)^1/2
Standard state absolute entropy at 298.15 K and 1 bar | J (kmol K)^-1
Critical temperature | K
Triple point pressure | Pa
Triple point temperature | K
Critical volume | m^3 kmol^-1
van der Waals area | m^2 kmol^-1
van der Waals reduced volume | m^3 kmol^-1
Critical compressibility factor | -

Temperature-dependent properties: property | Units
Heat capacity of ideal gas | J (kmol K)^-1
Heat capacity of liquid | J (kmol K)^-1
Heat capacity of solid | J (kmol K)^-1
Heat of vaporization | J kmol^-1
Liquid density | kmol m^-3
Second virial coefficient | m^3 kmol^-1
Solid density | kmol m^-3
Surface tension | N m^-1
Thermal conductivity of liquid | W (m K)^-1
Thermal conductivity of solid | W (m K)^-1
Thermal conductivity of vapor | W (m K)^-1
Vapor pressure of liquid | Pa
Vapor pressure of solid or sublimation pressure | Pa
Viscosity of liquid | Pa s
Viscosity of vapor | Pa s
4.5 Special Case and New Challenge: Data of Electrolyte Solutions
A much bigger challenge than these normal solutions is the modeling of electrolyte-containing solutions. The modeling of electrolyte solutions or, more generally speaking, of liquids containing fractions of electrolytes is still an exhausting task nowadays. Chemical and process engineers, for example, are nowadays able to model or even predict the vapor-liquid equilibrium, the density or the viscosity of a multicomponent mixture containing numerous different species with sufficient reliability. But if only traces of salt are contained in the mixture, nearly all models tend to fail. The modeling results, however, have a great impact on the design and construction of single chemical apparatus as well as of whole plants or production lines. Proper functioning can only be guaranteed on the basis of reliable results. Another area greatly influenced by electrolyte modeling is biochemical engineering. For example, nobody knows how to predict quantitatively the salting-out effect of proteins, crystallization processes of biomolecules, the influence of ions on nanoparticle formation, their size, morphology and crystal structure, zeolite synthesis and so on. But the development of new production processes in this intensively growing area requires accurate macroscopic physical property models capturing the underlying physics. In some cases there is a limited understanding of these mechanisms, but no real predictability. The chemical engineer developing new production processes, as well as the physical chemist developing models, therefore have a pressing need for access to reliable
thermophysical property data. Process as well as model development, whether predictive or merely interpolating, requires large amounts of reliable thermophysical property data for electrolytes and electrolyte solutions. Among the most important property types are:
- vapor-liquid equilibrium data,
- activity coefficients,
- osmotic coefficients,
- electrolyte and ionic conductivities,
- transference numbers,
- viscosities,
- densities,
- frequency-dependent permittivity data.
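To make the difficulty concrete: even the simplest electrolyte model, the Debye-Hückel limiting law log10 γ± = -A |z+ z-| √I with A ≈ 0.509 (kg/mol)^1/2 for water at 25 °C, is only valid at very high dilution, which is one reason why measured activity and osmotic coefficients remain indispensable. The following sketch evaluates the limiting law for a 1:1 salt and is illustrative only.

```python
# Debye-Hueckel limiting law for the mean ionic activity coefficient;
# valid only for very dilute solutions (roughly I < 0.01 mol/kg).
import math

A_DH = 0.509   # (kg/mol)**0.5, water at 25 degC, log10 basis

def mean_activity_coefficient(z_plus: int, z_minus: int, ionic_strength: float) -> float:
    log10_gamma = -A_DH * abs(z_plus * z_minus) * math.sqrt(ionic_strength)
    return 10.0 ** log10_gamma

# 0.005 molal NaCl: I = 0.005 mol/kg for a 1:1 electrolyte
print(mean_activity_coefficient(1, -1, 0.005))   # roughly 0.92
```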
How does one find such data?
4.5.1 Reliable Data Sources
Thermophysical property data for electrolytes and electrolyte solutions are measured by numerous researchers and scientists and are typically published in a large number of journals and publications. But people requiring such data will not search the primary publications, because this is too time-consuming. In most cases it is even impossible, because industrial users do not have immediate access to all the required literature. Instead, the preferred way is to check either a printed data collection or an electronic database for the components, mixtures and properties one needs. Such printed data collections or databases are typically compiled and/or maintained by individuals or groups having a well-known reputation in the field. They therefore have an overview of the primary literature publishing physical property data and are able to continuously add new data to their collections. In most cases these groups also use their own collections for model development. In the following pages, a survey of maintained databases for electrolyte properties is given.
4.6 Examples of Databases with Properties of Electrolyte Solutions

4.6.1 The ELDAR Database [10]
The Electrolyte Database Regensburg (ELDAR) is a numerical property database for electrolytes and electrolyte solutions. It contains data on pure substances and on aqueous as well as organic solutions. The data collection for ELDAR started in 1976 within the framework of the DECHEMA study [11] Research and Development for Saving the Raw Material Supply,
which was supported by the German Ministry for Research and Technology (BMFT). The work of this study led to the development of ELDAR in 1981. From the beginning up to now, ELDAR database development has been headed by the Institute of Physical and Theoretical Chemistry of the University of Regensburg. The database was designed as a literature reference, numerical data and also model database for fundamental electrochemical research, applied research and the design of production processes. The database is still maintained and has roughly doubled in size since its beginning. It contains data on more than 2000 electrolytes in more than 750 different solvents. Nowadays ELDAR contains approximately:
- 7400 literature references,
- 45,400 data tables,
- 595,000 data points.
ELDAR contains data on physical properties such as densities, dielectric coefficients, thermal expansion, compressibility, PVT data, state diagrams and critical data; thermodynamic properties such as solvation and dilution heats, phase transition values (enthalpies, entropies, Gibbs free energies), phase equilibrium data, solubility, vapor pressures, solvation data, standard and reference values, activities and activity coefficients, excess values, osmotic coefficients, specific heats, partial molar values and apparent partial molar values; and transport properties such as electrical conductivities, transference numbers, single ion conductivities, viscosities, thermal conductivities and diffusion coefficients. ELDAR is distributed as part of DECHEMA's numerical database for thermophysical property data, DETHERM. To access ELDAR one can therefore use several options:
- in-house client-server installation as part of the DETHERM database [7];
- Internet access using DETHERM ... on the Web [7];
- online access using the host STN International [12].
To get an overview of the available data, the Internet access option can be recommended, because the existence of data for a specific problem can be checked free of charge and even without registration.

4.6.2 The Electrolyte Data Collection
The Electrolyte Data Collection is a printed publication which is part of DECHEMA's Chemistry Data Series. It is published by Barthel and his coworkers from the University of Regensburg. The printed collection and the ELDAR database have complementary functions. The data books give a clear arrangement of selected recommended data for each property of an electrolyte solution. The electrolyte solutions are classified according to their solvents and solvent mixtures. All solution properties have been recalculated from the original measured
data with the help of compatible property equations. A typical page of the books contains the following for the described system:
- general solute and solvent parameters,
- fitted model parameter values,
- measured data together with deviations against the fit,
- a plot,
- literature references.
The Electrolyte Data Collection nowadays comprises 18 volumes and 9500 printed pages. Covered properties are:
- conductivities,
- transference numbers,
- limiting ionic conductivities,
- dielectric properties of water, aqueous and nonaqueous electrolyte solutions,
- viscosities of aqueous and nonaqueous electrolyte solutions.
4.6.3 ICV-SEP Data Bank for Electrolyte Solutions
The Engineering Research Center Phase Equilibria and Separation Processes (ICV-SEP) of the Technical University of Denmark (DTU) operates a data bank for electrolyte solutions [13]. It is a collection of scientific papers containing experimental data for aqueous solutions of electrolytes and/or nonelectrolytes, and also of theoretical papers related to electrolyte solutions. The database is a mixture between a literature reference database and a numerical database. Currently, references to more than 4000 papers are stored in the database. In addition, experimental data from around 2000 of these papers are stored electronically as well. Most of the experimental data concern aqueous solutions. Access to the literature reference database is free of charge but requires registration. Access to the numerical database is restricted to members of an industrial consortium supporting the work of ICV-SEP.
4.6.4 The Dortmund Database DDB [14]
The Dortmund Database (DDB), maintained by DDB Software and Separation Technology (DDBST) from the University of Oldenburg, is well known for its data collections in the areas of vapor-liquid equilibria and related properties. While the major part of the data collections deals with nonelectrolyte systems, two collections contain exclusively electrolyte data. They are focused on:
- vapor-liquid equilibria,
- gas solubilities.
The two collections together currently contain 3250 data sets. Access to these collections is possible either online using DETHERM on the Web or in-house using special software from DDBST or DECHEMA.
4.6.5 Closed Collections
In addition to the publicly available and still maintained collections described above, a number of older electrolyte data collections exist. Among them are, for example, the ELYS database, which was compiled by Lobo at the Department of Chemistry, University of Coimbra, Portugal, and the DIPPR 861 Electrolyte Database Project. However, these closed collections are typically no longer maintained and are also not publicly available. It is likely that the references and/or data published in these collections can also be found inside the aforementioned living collections.
4.7 A Glance at the Future of the Properties Databases
Most of the engineers in chemical companies trust in the power of their evaluation of the equations of state for the calculation of the optimal operating point. Nevertheless, the opinion that databases are of less importance these days is growing, mainly when budgetary elements come into consideration. The awareness of the importance of a correct process design is overridden by the consideration that a saving of one euro per kg for a product which costs 50 euros per kg is not very relevant. That is not the case for basic chemicals, where savings of the same order of magnitude represent 25 % of the total costs and 10 % of the energy used. Unfortunately, the production of these chemicals has today mainly been transferred to low-cost countries, i.e., it is not very relevant for research purposes. A lot of companies have outsourced their measurements, so that only a limited number of experts within companies maintain the knowledge for these activities. When we look at the challenge of finding new methods for the design of biological or polymer solutions, we must be sceptical about finding enough people who can combine future visions for models with the knowledge of what was done in the past. At a time of rising steel consumption in Chinese industry, where in an unexpected way a demand for coal energy has started again, it may be that property data will again rise in interest. From a governmental funding point of view, it is a good sign that new projects are coming up in order to find a new approach to building evaluated databases.
References
References 1 Design Institute for Physical Properties (DIPPR): www.aiche.org/dippr/,2006 2 TRC Thermodynamic Tables: www.trc.nist.gov/tables/trctables.htm.2006 3 Landolt-Bornstein:www.springeronline.com/ sgw/cda/frontpage/02Cl18S5%2Cl-lOl13-295856-00?2COO.html,2006 4 DECHEMA Chemistry Data Series: www.dechema.de/CDS-lang-en.html, 2006 5 NIST Standard Reference Data Program: www.nist.gov/srd/,2006 6 NIST Chemistry WebBook http://webbook.nist.gov,2006 7 DECHEMA DETHERM database: www.dechema.de/detheim-lang-en.htm1, 2006
8 AIChE: www.aiche.org,2006 9 DIPPR Project 801: httpc//dippr.byu.edu,2006 10 University of Regensburg: www.uniregensburg.de/Fakultaeten/nat-Fak-IVIPhysikalische-Chemie/Kunz/,2006
11 DECHEMA, Forschung und Entwicklung
zur Sicherung der Rohstofiersorgung. Programmstudie Chemische Technik Rohstoffe, Prozesse, Produkte, Vol. 6 , DECHEMA Deutsche Gesellschaft fur Chemisches Apparatewesen e.V., Frankfurt am Main, 1976 l2 STN: www.stn-international.de/, 2006 13 ICV-SEP Data bank for electrolyte solutions: www.ivc-sep.kt.dtu.dk/databank/databank.asp, 2006 14 Dortmund Database DDB: www.ddbst.de, 2006
5 Emergent Standards
Jean-Pierre Belaud and Bertrand Braunschweig
5.1 Introduction
Software standards in computer-aided process and product engineering are needed in order to facilitate application and software components interoperability. In the past, end-user organizations, software companies, governmental organizations and universities have spent hundreds of thousands, if not millions of euros, dollars and yens to develop bridges between software systems such as for transferring simulation data to an engineering database in order to provide the values for basic design: for integrating real time data coming from several process control systems into a common information network for the operators; for allowing a process simulation tool to use pure component data from a physical properties data bank for using a specialized unit operation simulation model within a commercial process simulation environment, etc. This question has been a subject of concern for years, as a source of unnecessary costs, delays and moreover of inconsistencies between data produced and consumed by different nonintegrated systems using different bases, different calculation principles, different units of measurements, running on different computers under different operating systems and written in different languages. This need in the domain of computer-aided process engineering has been described elsewhere; see, for example, Braunschweig and Gani (2002). Software standards remove this problem by providing the desired interoperability between software tools, platforms and databases. With appropriate machine-tomachine interface standards, using the best available tools together becomes a matter of plug-and-play,supposedly as easy as connecting USB devices or hi-fi systems’. Moreover, not only do these standards enable several software pieces available on your local PC to be put together, but they allow, thanks to the use of middleware, heterogeneous software modules available on your organizations’ intranet, or on the 1 Assuming that there is one commonly agreed stan-
dard and not several, e.g., see the problems of the
multiple standards for writable DVDs and the lack of interoperability that this multiplicity generates.
internet to interoperate, e.g., thanks to Web sewices technologies. Of course, such a facility has significant organizational, economic and technical consequences. We will briefly examine these consequences at the end of this chapter. However, our main focus will be on technologies,starting with a discussion on the concepts of openness and of open standards development. Then, we will examine some of the most significant operational standards in the domain of computer-aided process and product engineering, namely the CAPE-OPEN standard for process modeling tools, the OPC standard for process control systems. Following this, we will look at some of the current software interoperability technologies that we think will power future systems, i.e., XML and Web services technologies, leading to what is now called service-orientedarchitectures. Further on, we will shortly address standards for multiagent systems and the emerging Semantic Web standards, which should play a major role in the longer term, moving from syntactic to semantic interoperability of CAPE systems and services. We will conclude with a brief look at the organizational and economic consequences of the trend towards interoperability and standards. This chapter deals essentially with software-oriented standards, i.e., standards related to the use of one piece of software from within another piece of software. Data-oriented standards allowing to exchange data (from databases, files, etc.) between many software applications are only marginally addressed, e.g., in the POSC section.
5.1.1 Open Concepts
It is a clear fact that the emergence of the World-Wide Web relied on concepts of common development and usage. These concepts, called here open concepts, commonly encompass open standards, open computing, standardization processes and open software. In the first years of e-business, (open) standards were essential to the development of the Web, to e-commerce and to inter/intra-organizational integration. Standardized information technologies such as TCP/IP, HTTP, HTML, XML, CORBA-IIOP, Web services-SOAP, etc., achieve interaction and information exchange with external or internal, homogeneous or heterogeneous, and remote or nearby applications. These technologies are now core technologies of our networked environment. For the next generation of information systems and of computer technologies, open concepts should again play a key role for the emergent information technology (IT) standards introduced in Section 5.3. Heintzman (2003) gives a good introduction to open concepts for the domain of IT, through formal definitions and a brief history from the 1970s to the modern-day battle of openness, and addresses the commercial challenges of open projects from an IBM perspective. There is no reason why process engineering should escape from this trend, even if this field is a niche business and therefore more restricted and less global. Section 5.2 illustrates concrete technologies using open concepts in the field of CAPE. For example, CAPE-OPEN (CO) is a significant technology for the interoperability and integration of process engineering software components, allowing engineering based on off-the-shelf components.
5.1.2 Open Standards and Standardization Process
In order to develop modern software applications and systems, technology selection involves many criteria. One main issue is whether the technology is an (open) standard technology or a proprietary technology. Open standard technologies are freely distributed data models or software interfaces. They provide a basis for communication and common approaches and enable consistency (Fay 2003), resulting in improvements in development, investment and maintenance. Clearly, the common effort to develop an IT or a CAPE standard and its worldwide adoption by the community can be a source of cost reduction, because not only is the development cost shared but the investment is also expected to be more future-proof. Open standards are developed by software and/or business partners who collaborate within neutral organizations (such as W3C, OASIS, OMG, etc., for IT, and CO-LaN, POSC, etc., for process engineering) in accordance with a standardization process. Such organizations represent a new kind of actor, additional to the more traditional actors, i.e., academics, software/hardware services suppliers and end-user companies. In the information and communication industry, Warner (2003) calls this standardization process block-alliance in committee-based standard setting and contrasts it with block-alliance in market-based standard battles. The latter, which is beyond our scope, leads to de facto standards if the resulting technology successfully matches the market. However, both approaches are not so distinct, since a standardization process can be a means in a business strategy. For example, the Java platform and UML mix committee-based and market-based processes. If we consider the S-curve lifecycle of a simple technology, Sherif (2003) classifies technological innovations in terms of market innovation and of technological competencies into radical, platform, incremental and architectural innovations. Weiss and Cargill (1992) show the ideal relationship between these types of innovation and the timing of the standardization process, with the type of standards needed at each phase (Fig. 5.1). As an illustration, we would say that the CO standard is in the second phase: initial products are commercialized; CO technology is now well disseminated; there is a well-established organization releasing formal specifications; and development tools, a labeling process and promotion actions support the CO standard.

Figure 5.1 Timing of standards

5.1.3 Open Computing, Open Systems, and Open/Free Software
By extension of the open standards paradigm, building modern software solutions can be based on an open computing paradigm. Open computing means that there is a standardization of information exchange. The resulting open system is then a system whose characteristics comply with standards made available throughout the industry and that can therefore be connected to other systems complying with the same standards (IBM Glossary 2004). Open computing promises many benefits: flexibility/agility, integration capability, independence from software editors, reduced development cost and adoption of technological innovation. While always giving priority to the quality of the business models available in a specific CAPE tool, process engineers can now favor open CAPE systems, ensuring the exchange of information between CAPE solutions of distinct editors and thus making it possible to benefit from various fields of expertise. This communication can be done statically with data models or dynamically with application programming interfaces (APIs). Open computing in CAPE is illustrated in Section 5.2. The tools for application engineering or for software development can be open source software tools or commercial software tools. Heintzman (2003) identifies several types of projects for the development and management of open source software: academic projects (especially viewed as a new medium for collaboration, innovation promotion and dissemination), foundation projects (for base software such as Linux, Apache, Eclipse, Mozilla, etc.), middleware projects (advanced software such as JBoss, MySQL, etc.) and niche projects (very specific software available on the Internet²). Open source software projects in the CAPE field are not significant at present, but they could occur as academic or niche projects, the only known example at the time of writing being the Sim42 project (Sim42 Foundation 2004), which develops an open source chemical engineering simulator.

2 For example, SourceForge.net is the largest repository of open source software projects, with more than 118,000 projects at the beginning of 2006.
5.2 Current CAPE Standards
For several decades, experts and process engineers have concentrated on the creation, evolution and improvement of models of thermodynamic and physical properties, unit operations, numerical methods, etc. Thus many CAPE software solutions allowing a more or less rigorous representation were developed. Each one is unique and dependent on the know-how of its author or editor. In particular, in addition to the specific modeling activity, each one is characterized by selected computing technologies, i.e., supporting environment, implementation languages, persistence system, logical architecture, etc. This results in heterogeneity of the available solutions and an impossibility of exchanging information between the different tools. Dual bridges between certain tools exist, but this option remains proprietary and only operational for a limited number of combinations of tools. Now the demand of users of CAPE tools turns to open systems, ensuring process, model and data exchange with third-party tools. In the same way, process engineers wish to be able to integrate their know-how easily and thus to deploy a final solution specific to their needs from best-in-class software components. Open computing and its related IT and CAPE standards allow a user-centered modeling and simulation environment to be built from enterprise-internal components and selected off-the-shelf components. Several initiatives that promote a standard for process information exchange can be identified, according to two types of techniques³, data models and APIs:
- data models such as pdXML, energy estandards from POSC and Physical Property Data exchange from DECHEMA;
- APIs such as OPC from the OPC Foundation, the Physical Properties Package from IK-CAPE and CAPE-OPEN from the CO-LaN.
Open software architectures can now be exploited by the new generations of CAPE software solutions in order to provide better integration of enterprise process applications. As an illustration of this interest, Fieg et al. (1995), Mahalec (1998), Braunschweig et al. (2000), White (2000), Braunschweig and Gani (2002) and Belaud et al. (2002) discuss open computing and its resulting and expected benefits. The next sections introduce CAPE-OPEN, OPC and energy estandards.

5.2.1 CAPE-OPEN Standard for Modeling and Simulation
To solve problems, process engineers typically use a collection of in-house, commercial and/or academic software. Each user requires broader access to available information and models to fit the demand on the one hand, and has the constraint of matching the old and the new easily on the other hand. Information technologies play a predominant role in improving CAPE tools to support process engineers who
face these new challenges of interoperability. It is quite obvious that work is needed to develop and establish open systems for CAPE-related software. Development of open systems requires the establishment of open standards. The CAPE-OPEN standard, through which a host tool and any external tool can communicate, is the answer to this question, as it provides an open communication system for process simulation, allowing final users to employ various elements within any other. Specifically, since 1995, an international group of operating companies, software suppliers and academics has developed, through the CAPE-OPEN initiative, an open communication system for key simulation elements, and demonstrated its effectiveness on numerous examples. Through this it also promoted the adoption of the open system by the major providers and users of process simulation. The CAPE-OPEN standard (Belaud and Pons 2002, present version 1.0) consists of a technical architecture, interface specifications and implementation specifications. The technical architecture relies on modern development tools and up-to-date information technologies such as the object-oriented paradigm, the component-based approach, Web-enabled distributed architecture and middleware technology, and uses the Unified Modeling Language (UML) notation. The interface specifications identify a conceptual model, and the implementation specifications give the corresponding platform-specific model for COM and CORBA. The specifications cover major application areas, e.g., unit operations, thermodynamic and physical properties, numerical solvers, optimization, planning and scheduling, chemical reaction systems, etc.
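To make the component-based approach concrete, the sketch below shows, in illustrative Python pseudocode, how a host flowsheeting environment might drive a CAPE-OPEN-style unit operation. It is only a sketch: the real standard defines language-neutral COM/CORBA interface specifications, and the class, port and method names used here are simplified assumptions rather than the normative interfaces.

```python
# Illustrative only: simplified stand-ins for a CAPE-OPEN-style unit operation.
class MaterialPort:
    """Connection point through which a unit receives or delivers a material stream."""
    def __init__(self, name, direction):
        self.name = name
        self.direction = direction   # "inlet" or "outlet"
        self.material = None         # set by the host simulation environment

class IsomerizationReactor:
    """A unit operation component usable from any simulator that knows the agreed interface."""
    def __init__(self):
        self.ports = [MaterialPort("feed", "inlet"), MaterialPort("product", "outlet")]

    def validate(self):
        # Check that every inlet port has been connected before calculation.
        return all(p.material is not None for p in self.ports if p.direction == "inlet")

    def calculate(self):
        # A real component would query the material object for properties (through the
        # thermodynamic interfaces) and solve its own model; here a trivial placeholder.
        feed = self.ports[0].material
        self.ports[1].material = dict(feed)

# The host environment relies only on the agreed contract (ports, validate, calculate):
unit = IsomerizationReactor()
unit.ports[0].material = {"n-butane": 0.95, "isobutane": 0.05, "T_K": 420.0, "P_bar": 15.0}
if unit.validate():
    unit.calculate()
    print(unit.ports[1].material)
```

Because the simulator and the component share only this contract, components from different suppliers can be mixed within one flow sheet, which is precisely what the standard aims to enable.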
Figure 5.2 CAPE-OPEN version 1.0 specifications. The interface specifications are grouped into unit operations, thermodynamic and physical properties, numerical solvers, and other services such as parameter estimation, data reconciliation, planning and scheduling, identification, error handling, sequential-modular specific tools, utilities, and common types and undefined values.
CAPE-OPEN compliant software environments and components are now available on the market. Belaud et al. (2003) deal with the unit operation interface and show an example for a fixed-bed reactor for butane isomerization. The CAPE-OPEN standard is free of charge and is managed by the CO-LaN consortium (www.colan.org, and Pons et al. 2003), which gathers operating companies, software suppliers and academic institutes. In addition to publishing the standard specifications, CO-LaN provides tools for supporting the transition to CAPE-OPEN technology:

- migration tools, that is, software that automates the migration of existing components to CAPE-OPEN compliance;
- code examples for re-use;
- software testers that check compliance with the standard;
- guidelines and other helpful documents.
Recent announcements from software suppliers, end-users and research institutions demonstrate that CAPE-OPEN is increasingly accepted by the CAPE community. Its main technological benefits are:

- for suppliers: increased usage of CAPE tools and reduced development and integration costs;
- for users: "develop your expertise once, plug and run everywhere" and access to best-in-class solutions;
- for academics: improved dissemination of research results and better matching with industrial needs.
Organizations that adopt the CAPE-OPEN standard, and possibly become members of the CO-LaN, will be the first ones to harvest the benefits of open standard interfaces in process modeling and simulation.
5.2.2 Extensions to the CAPE-OPEN Standard
The 1.0 version of the CAPE-OPEN standard offers the interface specifications shown in Fig. 5.2. Details on these specifications are available elsewhere and on CO-LaN's Web site. Although addressing a broad range of applications of CAPE modeling and simulation, the specifications are subject to improvements and extensions. At the time of writing this chapter, two such projects are active:

- Improvement and refactoring of the thermodynamic and physical properties specifications. This work will eventually deliver version 1.1 of the specification, which should be restructured in a more logical way, better documented, and therefore easier to use.
- Extension of the unit operation (UO) specification. The UO CAPE-OPEN standard, in version 1.0, only addresses steady-state simulation; although several tests have shown that CAPE-OPEN unit operations can be used, with limitations, in
dynamic simulation, work is going on to provide a specification fully compliant with all possible uses in dynamic simulation. A new version of the UO standard will be released after sufficient testing in a number of dynamic process modeling environments.

The decision to launch a new improvement/extension project is taken by CO-LaN's board of directors following proposals presented by special interest groups or by CO-LaN members.

5.2.3 OPC for Process Control and Automation
Since 1996 the OPC Foundation (OPC Foundation 1998) has been a nonprofit organization which ensures the definition and the use of interfaces for applications in the control and automation of processes. It is dedicated to ensuring interoperability in automation by creating and maintaining open specifications that standardize the communication of acquired process data, alarm and event records, historical data, and batch data to multisupplier enterprise systems and between production devices. The vision of OPC is to be the foundation for interoperability for moving information vertically from the factory floor through the enterprise of multivendor systems, as well as providing interoperability between devices on different industrial networks from different vendors. The foundation gathers more than 300 members, suppliers and users of control systems, instrumentation, and process control systems. It is worth noting that Microsoft is a member and acts as a technology advisor. The OPC (OLE for process control) standard (Iwanitz and Lange 2002) is based on Microsoft OLE-ActiveX/(D)COM technology and standardizes the communication between OPC compliant data sources⁴ and OPC compliant applications⁵ through different connections (radio, serial, Ethernet and others) on different operating systems (Windows, Unix, VMS, DOS and others). Many specifications are available (a minimal client-side sketch of the first one follows the list):

- OPC Data Access provides access to real-time process data;
- OPC Historical Data Access is used to retrieve process data for analysis;
- OPC Alarms and Events is used to exchange and acknowledge process alarms and events;
- OPC Data eXchange defines how OPC servers exchange data with other OPC servers;
- OPC XML encapsulates process control data, making it available across all operating systems.
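The following sketch, in illustrative Python, shows the client-side pattern behind OPC Data Access: connect to a server and read tagged real-time values. The client class, server ProgID and tag name are hypothetical placeholders introduced for this chapter, not the API of any actual OPC library.

```python
import time

class OpcDataAccessClient:
    """Hypothetical stand-in for an OPC DA client: connect to a server and read items."""
    def __init__(self, server_progid):
        self.server_progid = server_progid
        self._connected = False

    def connect(self):
        # A real client would locate the server through its ProgID and create the COM object.
        self._connected = True

    def read(self, item_id):
        # A real read returns the current value together with a quality flag and a timestamp.
        if not self._connected:
            raise RuntimeError("not connected to an OPC server")
        return 42.0, "GOOD", time.time()   # placeholder values

client = OpcDataAccessClient("MyVendor.OPCServer.1")                # hypothetical ProgID
client.connect()
value, quality, stamp = client.read("Plant.Reactor1.Temperature")   # hypothetical tag
print(value, quality, stamp)
```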
As for CAPE-OPEN, the OPC Foundation provides several tools and technologies supporting application and migration to the OPC standard, including self-testing software.

4 Programmable logic controllers, distributed control systems, databases and other devices.
5 Human-machine interfaces, trending subsystems, alarm subsystems, spreadsheets, historians, enterprise resource planning systems, etc.
5.2.4 Energy estandards for Oil and Gas Processes
POSC is an international not-for-profit membership corporation. It unites industry people, issues and ideas to facilitate exploration and production information sharing and business process integration in the petroleum industry. Since 1990, membership has grown to over 100 companies. The membership includes world-wide representation of major and national oil companies, suppliers of petroleum exploration and production software and services, government agencies, computing and consulting companies, and research and academic institutions. POSC provides open specifications for information modeling, information management, and data and application integration over the life cycle. These specifications are gathered in the energy estandards project, which relies principally on XML technologies (DTD, XML Schema, etc.) for leveraging Internet technologies in the integration of oil and gas business processes. The set of standards is classified according to POSC areas: Internet data exchange standards, practical exploration and production standards, data management standards, standards usability, and application interoperability standards. For example, in the data management standards area, the Epicentre standard provides a logical data model for upstream information. Also, in the Internet data exchange standards area, ChemicalUsageML is a specification for the transfer of information about potential chemical hazards, and WellLogML is an XML DTD and an XML schema for well log data representation. These standards are not directly related to CAPE applications. However, the scope of POSC encompasses both underground applications (geology, geophysics, reservoir, drilling) and offshore applications (production, transportation). The second application area has many similarities with downstream areas such as petroleum refining, as it essentially involves the design, operation and monitoring of continuous processes. Some of the POSC projects, such as POSC-CAESAR, delivered technologies applicable to CAPE in general. Since these are data-oriented standards we do not address them in this chapter. Commonalities can also be found with a number of data modeling projects undertaken by the chemical engineering community, such as PI-STEP, PDXI or pdXML (Teague 2002 and Teague 2002b).
5.3 Emergent Information Technology Standards
Although not yet fully exploited by the CAPE community, a number of emergent IT standards will become important for our applications in the near future. Complementing some of the technologies presented in the previous section, these new IT standards support Internet-based computing and take advantage of Web technologies. We will first look at Web services together with their newly developed business standards, leading to service-oriented architectures; then we will go a step further
and introduce IT standards for multi-agent architectures and the recently published (at the time of writing this section, early 2004) Semantic Web standards.
5.3.1 Web Services and Business Standards
Web technologies are being used more and more for application-to-application communication. Before the twenty-first century, software suppliers and IT experts promised this interconnected world thanks to the technology of Web services. Web services propose a new paradigm for distributed computing (Bloomberg 2001) and are one of today's most advanced application integration solutions (Linthicum 2003). They help business applications to contact a service broker, to find and to integrate the service from the selected service provider. For example, during a simulation, the simulation environment, in need of an external thermodynamic service, contacts a UDDI directory in order to take advantage of a particular thermodynamic model (yellow-page function). Once the producer of such services (a company) is selected, the simulation environment recovers the signatures of all available services using the associated WSDL descriptions⁷. These phases of discovery and description can be carried out dynamically or statically during the development process. Then the simulation environment connects to the specific thermodynamic service and uses it with the SOAP⁸ communication protocol. This scenario can take place on the Internet or on company intranets or extranets; it uses a set of technologies, UDDI, WSDL and SOAP, proposed by the Web services community to ensure interworking and integration of Web services. However, even if the idea of Web services has generated too many promises⁹, Web services should be viewed for now as a part of a global enterprise software solution and not as a global technical solution. In a project, Web services can be used within a general architecture relying on Java EJB or on Microsoft's .NET framework. Many projects already utilize Web services, sometimes with nonstandard technologies, particularly for noncritical intranet applications. Even if Web services miss advanced functionalities, many advantages like lower integration costs, the re-use of legacy applications, the associated standardization processes and Web connectivity plead in favor of this new concept for software interoperability and integration (Manes 2003).

5.3.1.1 Definition
A Web service is a standardized concept of function invocation relying on Web protocols, independent of any technological platform (operating system, application server, programming language, database, and component model). Bearingpoint et al. (2003) focus on the evolution from software components to Web services and write: "a Web service is an autonomous and modular application component, whose interfaces can be published, sought and called through Internet open standards." We see the introduction of Web services as a move from component architectures towards Internet awareness, this context implying the use of associated technologies, i.e., HTTP and XML, and an e-business economic model. Current component technology based on EJB, .NET and CCM being not fully suitable, Web services provide a new middleware for providing functionality anywhere, anytime and to any device.

7 Web Service Description Language, somewhat equivalent to OMG's CORBA and to Microsoft's COM IDL.
8 Simple Object Access Protocol, known as the "piping" between Web services.
9 Early standards, security, orchestration, transaction, reliability, performance, ethics and economic models are the main concerns.

5.3.1.2 Key Principles

IBM and Microsoft's initial view of Web services, first published in 2000, identified
three kinds of roles (Fig. 5.3):

- A service provider publishes the availability of its services and responds to requests to use its services.
- A service broker registers and categorizes published service providers and offers search capabilities.
- A service requester uses service brokers to find a needed service and then employs that service.
These three roles make use of proposed standard technologies: UDDI from the OASIS consortium, and WSDL and SOAP from the World-Wide Web Consortium (W3C). UDDI acts as a directory of available services and service providers; WSDL is an XML vocabulary to describe service interfaces. SOAP is an XML-based transfer protocol that allows requests to be sent to services through HTTP. Further domain-specific technologies related to Web services are being developed, e.g., the following, proposed by the OASIS consortium, a consortium of companies interested in the development of e-business standards: ebXML, supported by Sun Microsystems, is a global framework for e-business data exchange; BPEL (formerly called BPEL4WS) is a proposed standard for the management and execution of business processes based on Web services; SAML aims at exchanging authentication and authorization information; WS-Reliable Messaging is for ensuring reliable message delivery for Web services; WS-Security aims at forming the necessary technical foundation for higher-level security services; etc. A recent glossary of technologies related to Web services, each one defined by only a few lines of text, is 16 pages long (Cutter Consortium 2003). Simply stated, the interface of a Web service is documented in a file written in WSDL, and the data transmission is carried out through HTTP with SOAP. SOAP can also be used to query UDDI for services. The functions defined within the interface can be implemented with any programming language and be deployed on any platform. In fact, any function can become a Web service if it can handle XML-based calls. The interoperability of Web services is similar to distributed architectures based on standard middleware such as CORBA, RMI or (D)COM, but Web services offer a loose coupling, a nonintrusive link between the provider and the requester,
Figure 5.3 Key principles of Web services: publish, find, bind and consume.
due to the loosely-coupled SOAP middleware. Bloomberg (2001) compares these different architectures. Oellermann (2002) discusses the creation of enterprise Web services with real business value. Basically he reminds us that a Web service must provide the user with a service and needs to offer a business value. The technically faultless but closed .NET "my services" project from Microsoft demonstrates that it is always challenging to convince final users. With Google Web API beta (2004), software developers can query the Google search engine using Web services technology. Indeed, the Google search engine has been available as a Web service since mid-2002. Search requests submit a query string and a set of parameters to the Google Web APIs service and receive in return a set of search results. A developer's kit provides documentation and example code (Java, C# and Visual Basic) for using this Web service from any platform that supports it.

5.3.1.3 SOAP: a Loosely-coupled Middleware Technology
HTML and HTTP act as a loosely-coupled middleware technology between the Web client (navigator) and the business logic layer (Web server). Around the year 2000 Microsoft and IBM proposed to use the XML data format over the Internet protocols: HTTP as transport layer and XML as encoding format now constitute the key underlying technologies for Web services. On top of these, SOAP (currently in version 1.2) was delivered in June 2003 as a lightweight protocol for exchange of information in a decentralized and distributed environment. SOAP can handle both the synchronous request/response pattern of RPC architectures and the asynchronous messages of messaging architectures. An example of a SOAP request message in a synchronous manner can be found in Google Web APIs beta (2004). A SOAP request is sent as an HTTP POST. The XML content consists of three main parts (a minimal example follows the list):

- The envelope defines the namespaces used.
- The header is an optional element for handling supplementary information such as authentication, transactions, etc.
- The body performs the RPC call, detailing the method name, its arguments and the service target.
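As a concrete illustration, the sketch below builds such an envelope and posts it over HTTP using only the Python standard library. The endpoint URL, XML namespace and operation (a thermodynamic property request, echoing the simulation scenario of Section 5.3.1) are hypothetical assumptions, not an existing service.

```python
import urllib.request

ENDPOINT = "http://thermo.example.org/services/properties"   # assumed, non-existent URL

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetBoilingPoint xmlns="http://thermo.example.org/schema">
      <component>ethanol</component>
      <pressurePa>101325</pressurePa>
    </GetBoilingPoint>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://thermo.example.org/schema/GetBoilingPoint"},
)
with urllib.request.urlopen(request) as response:   # the reply is again a SOAP envelope
    print(response.read().decode("utf-8"))
```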
Whereas OMG CORBA, Java RMI, Microsoft (D)COM and .NET Remoting try to adapt to the Web, SOAP middleware ensures a native connectivity with it, since it builds on HTTP, SMTP and FTP and exploits the XML Web-friendly data format. The many reasons for the success of SOAP are its native Web architecture compliance, its modular design, its simplicity and extensibility, its text-based model (in contrast to the binary and not self-describing CORBA, RMI, (D)COM and .NET protocols), its error handling mechanism, its ability to be the common messaging layer of Web services, its standardization process and its support from major software editors. With so many advantages for integration and interoperability one could expect a massive adoption by software solution architects. However, the deployment of Web services still remains limited. In addition to technical issues, three main reasons can be noted:

- Web services are associated with SOAP, WSDL and UDDI. The UDDI directory of Web services, launched in 2000 by IBM, Microsoft, Ariba, HP, Oracle, BEA and SAP, was operational at the end of 2001 with three functions (white, yellow and green pages). However, due to technical and commercial reasons, this world-wide repository, which meets an initial need (to allow occasional, interactive and direct interoperability) founded on the euphoria of the e-business years, does not match the requirements of enterprise systems. Entrusted to OASIS in 2002, UDDI version 3.0 proposes improvements, in particular for intranet applications.
- The simplicity and interoperability claimed by Web services are not so obvious. Different versions of SOAP and incompatibilities between editors' implementations are a source of difficulties, to such a degree that editors created the WS-I consortium to check implementations of Web services standards across platforms, applications, and programming languages.
- The concept was initially supported by a small group of editors (with Microsoft and IBM leading); now the "standards battle" (with BEA, IBM and Microsoft on one side and Iona, Oracle and Sun on the other side) and the multiplication of proposed standards weaken the message of Web services (Koch 2003).
5.3.1.4 Service-oriented Architecture
In order to better integrate the concept of Web services in enterprise systems, IT editors now propose the service-oriented architecture (SOA) approach (Sprott and Wilkes 2004). Beyond the marketing hype, a consensus is established on the concept of a service as an autonomous process, which communicates by messages within an architecture that identifies applications as services. This design is based on coarse-grained, loosely-coupled services interconnected by asynchronous or synchronous communication and XML-based standards. The definition and elements of SOA are not well established yet. Sessions (2003) wonders whether an SOA is (1) a collection of components over the Internet, (2) the next release of CORBA or (3) an architecture for publishing and finding services. An SOA is only an evolution of Web-distributed component-based architectures to make application integration easier, faster, cheaper and more flexible, improving
return on investment. In fact the main innovations are in the massive adoption of Web services¹² by the industry and in the use of the XML language to describe services, processes, security and exchanges of messages. This promises more future-proof IT projects than in the past. Despite the limitations of Web services, the technology now appears to be complementary to solutions based on a classic middleware bus, as well as to enterprise application integration solutions. Its loose coupling brings increased flexibility and facilitates the re-use of legacy systems. Moreover, Web services can be used as low-cost connectors between distinct technological platforms like COM, .NET and J2EE. The next release of Microsoft's Windows Vista operating system will include Indigo, a new interoperability technology based on Web services, for unifying Microsoft's proprietary communication modes; Abiteboul, research director at INRIA, estimates that Web services will represent, in the long run, the natural protocol for accessing information systems. Thus it seems that we are only at the start of Web services and SOA. Andrews (2004) predicts dramatic changes in the Web services market for 2006, and announces a new class of business applications called service-oriented business applications. The merging of Web, IT and object/component technologies to form SOA and Web services is announced as the next stage of evolution for e-business (knowing that grid computing and autonomic computing will add their contributions too, but this is another story). There is no doubt that the scientific field will get many benefits from this trend. As for CAPE, one can foresee several applications of SOA and Web services. However, it is certain that innovations will probably go beyond what is predictable at this stage of development. Here are a few examples:
- Sama et al. (2003) present a Web-based process engineering architecture where simulator components can be executed over the Web.
- Many front-end engineering companies share design data over communication networks. Access to this design data could be made easier through an SOA.
- Physical properties databases can be made available through Web services; a good example of such a service is Dechema's "DETHERM ... on the Web" on-line service (Westhaus 2004). This service is currently available through conventional technology (PHP requests on a database) and could be made into a Web service, therefore directly interoperable with other programs.
- In the long run, process engineering software could interoperate with equipment manufacturers' services, not only to develop better simulation models by using the manufacturer's specific unit operation model, but also to link into manufacturers' supply chains when moving into detailed design, procurement and commissioning.
As can be seen from these examples, the advent of service-oriented architectures brings many opportunities to the CAPE professional. Now let us move even further and come to semantic interoperability.

12 Even if an SOA does not imply the use of Web services technology, and vice versa.
5.3.2 W3C's Semantic Web Standards
The current World-Wide Web is very rich in terms of content, but is essentially syntactic or even lexical. Looking for information on the Web, using search engines, is done by finding groups of terms in the pages and in the documents, without taking into consideration the meaning of those terms. For example, using the most popular search engine, Google, to look for information about the ESCAPE-15 conference, the first page brings the results shown in Table 5.1. Thanks to the referencing work done by the conference organizers, the first hit is the conference's Web site. However, on the first page, together with the correct hit, Google reports a ski bag, a motor racing wheel, and a tour in New Zealand. One might wish to go to the "advanced search" page and specify that only Web sites about conferences should be returned. This is not possible, since Google does not allow this restriction. As a matter of fact, none of the most popular search engines currently used can restrict the search to a category of pages, as the semantics of the pages are unknown to them. Supported by the W3C, of which it is a priority action, many projects aim at developing the semantic level, where information is annotated by its meaning. A necessary stage is to define consensual representations of the terms and objects used in the applications; these consensual representations are called ontologies. These ontologies will be expressed in OWL (Ontology Web Language), which itself is based on XML and RDF (Resource Description Framework), a specialization of XML. Programs all over the world support this movement towards the semantization of information.

Table 5.1
ESCAPE-15 search with Google on 18 July 2004
ESCAPE 15: The ESCAPE (European Symposium on Computer Aided Process Engineering) series brings the latest innovations and achievements ... www.ub.es/escape15/escape15.htm

Thule Escape 15 Cubic Foot Rooftop Cargo Bag: Buy Thule Escape 15 Cubic Foot Rooftop Cargo Bag here, one of many top quality Ski Rooftop Storage products ... www.sportsensation.com/skiing/r/Ski-Rooftop-Storage/Thule-Escape-15-Cubic-Foot-Rooftop-Cargo-Bag-1330418.htm

Motegi Racing Escape, 15" Wheels 01-On: info.product-finder.net/motegi/Escape-15--Wheels-01-On154.html

Grand Escape 15 Days Auckland to Christchurch: This morning we journey across the Auckland Harbour Bridge traveling through small rural farming communities. Visit the Matakohe Pioneer Museum ... www.newzealandtours.net.nz/auckland/guided/akguid66x.html
In Europe, the EC provides strong support through the Information Society Technologies (IST) program. A few ontology development projects have taken chemical engineering as their application domain. A good definition of ontologies is provided in the Web Ontology Language Use Cases and Requirements document published by the W3C (2004):
Ontology defines the terms used to describe and represent an area of knowledge. Ontologies are used by people, databases, and applications that need to share domain information. Ontologies include computer-usable definitions of basic concepts in the domain and the relationships among them. They encode knowledge in a domain and also knowledge that spans domains. In this way, they make that knowledge reusable. The word ontology has been used to describe artifacts with different degrees of structure. These range from simple taxonomies to metadata schemes, to logical theories. The Semantic Web needs ontologies with a significant degree of structure. These need to specify descriptions for the following kinds of concepts: classes (general things) in the many domains of interest, the relationships that can exist among things, the properties (or attributes) those things may have.

The definition of ontologies is a multidisciplinary work, which requires competence (1) in the application area: processes, chemistry, environment, etc., and (2) in the modeling of knowledge into a form exploitable by machines. It is also an important stake for the actors of the field, who will use the standards defined to annotate and index their documents, their data and their codes, in order to facilitate semantic retrieval. Applications of the Semantic Web are many. The last section of this chapter presents an example of intelligent reconfiguration of process simulations using software agents. Before this, it is worth listing the main use cases selected by the W3C working group on the definition of OWL that guided its development before its official release as a standard (a small code sketch of the first use case follows the list):

- Web portals. A Web portal powered by ontologies will bring more relevant content by applying inferences on its content (e.g., a distillation column is a separation process, therefore information about distillation would be useful to readers interested in separation).
- Multimedia collections. Semantic annotation of large multimedia collections will help in the retrieval among these collections, e.g., a section of a video presentation about operating special equipment.
- Corporate Web site management. This is the same as above, with specific functionality for company personnel, such as finding competences among employee directories, etc.
- Design documentation. The problem of documenting designs has been identified in the chemical engineering field, as in other fields where design is a key phase; it is interesting to note that this problem has been outlined by the W3C as one which could most benefit from semantic annotations, allowing design chunks to be retrieved in a structured manner.
- Agents and services. Ontologies will be used by software agents to discover and analyze service offers and select the most relevant one; the next section presents such a system developed in the COGents EC-funded project.
- Ubiquitous computing. New information and technical systems will be configured at runtime by appropriate selections of services in unchoreographed ways, that is, in configurations which were not predicted at the time of setting up the services; annotation of ubiquitous services by ontologies will help in interoperating such combinations.
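As a minimal illustration of the Web-portal use case, the sketch below encodes the statement "a distillation column is a separation process" and uses it to retrieve annotated documents. It assumes the Python rdflib package; the namespace, class names and document identifier are illustrative and are not taken from any published ontology.

```python
from rdflib import Graph, Namespace, RDF, RDFS

ONTO = Namespace("http://example.org/process-ontology#")   # illustrative namespace
g = Graph()

# Terminology: a distillation column is a kind of separation process.
g.add((ONTO.SeparationProcess, RDF.type, RDFS.Class))
g.add((ONTO.DistillationColumn, RDFS.subClassOf, ONTO.SeparationProcess))

# Annotation of one document held by the portal.
g.add((ONTO.doc42, ONTO.describes, ONTO.DistillationColumn))

# Semantic retrieval: documents relevant to separation processes are those describing
# any class that is (transitively) a subclass of SeparationProcess.
relevant = [
    doc
    for doc, _, cls in g.triples((None, ONTO.describes, None))
    if ONTO.SeparationProcess in g.transitive_objects(cls, RDFS.subClassOf)
]
print(relevant)   # the distillation document is returned for a "separation" query
```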
5.3.3 Use of Ontologies by Software Agents
The IST COGents project developed an agent-based architecture for numerical simulation, with a concrete implementation in the process simulation domain relying on the CAPE-OPEN interoperability standard. The project, which lasted two years (April 2002-March 2004), proposed and implemented a framework, designed the OntoCAPE domain ontology of modeling knowledge, and demonstrated its benefits through case studies. COGents was funded by the European Community under the Information Society Technologies program, contract IST-2001-34431. As before, the CAPE-OPEN standard facilitates process simulation software interoperability and can be the foundation for Web services in this domain. The COGents project pushed the technology further: we used cognitive agents to support the dynamic and opportunistic interoperability of CAPE-OPEN compliant process modeling components over the Internet. The result is an environment which provides automatic access to best-of-breed CAPE tools when required, wherever situated. For this purpose the COGents project:

- defined a framework allowing simulation components to be distributed and referenced on the Internet and intranets;
- defined representations of requirements and services in the form of an ontology of process modeling, "OntoCAPE";
- designed facilities for supporting the dynamic matchmaking of modeling components;
- demonstrated the concepts through software prototypes and test cases.
The project was supported by case studies serving as examples: nylon-6 process modeling, and HDA process synthesis and simulation. The nylon-6 process case study poses challenges to the component set-up and configuration: the choice of how a simulation shall be performed depends on the availability of solvers and discretization methods. The HDA process has been used as a case study in process design, process optimization and heat exchanger network synthesis. The availability of published results provides a benchmark for the agent-based design and optimization tools. The architecture of the COGents framework is illustrated in Fig. 5.4. The extended functionality of COGents is provided by a multi-agent system (MAS), represented by the DIMA block in Fig. 5.4.
Figure 5.4 The COGents framework: DIMA agent platforms using the OntoCAPE ontology and accessing simulation components through CAPE-OPEN (CO) interfaces.
A MAS aims to model complex systems as collections of interactive entities called agents. Each agent is autonomous and proactive, and can interact with others and act upon its environment, applying its individual knowledge, skills, and other resources to accomplish goals. In COGents the key role of the MAS is to conduct negotiation mechanisms for composing the simulation during the design phase, as well as providing runtime facilities such as diagnostics and guidance to the users. The communication between individual agents is done with messages exchanged using an Agent Communication Language (ACL), whose content is expressed using the OntoCAPE ontology. DIMA is complemented with DARX, which provides a global naming and location service on a network. COGents integrates a security layer based on SSH, which provides strong authentication and secure communications over the Internet. The advantages of the agent-oriented approach are as follows:

- Openness. New agents can be dynamically and easily added and/or removed.
- Heterogeneity. The various components can be developed with different programming languages and can be executed on different platforms.
- Flexibility. Interactions between the various entities are not rigidly defined.
- Distribution/Mobility. The agents can be executed on a set of distributed machines and can move from one machine to another.
In COGents, agents are used to improve the dynamics of simulations and to facilitate the design and development of distributed large-scale simulations. These distributed interactive simulations are built from a set of independent simulation components linked together by a network. They provide rich adaptive simulations with agents that can interact with humans and with each other. As in any application where domain knowledge has to be explicitly represented, COGents calls for an ontology to support the knowledge representation and inter-agent communication. More specifically, this ontology of the process modeling
domain defines concepts indispensable for describing process modeling tasks, modeling strategies as well as software resources, and is the foundation of a matchmaking between the requirements of users (i.e., process engineers) and suitable software components. OntoCAPE supports reasoning for mapping users' requests into modeling strategies and for locating software resources to implement the identified modeling strategies. OntoCAPE was developed in DAML+OIL, a predecessor of the OWL language. More details on the COGents project, including full access to OntoCAPE, can be obtained from COGents (2004).
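The matchmaking idea can be conveyed with a deliberately simplified sketch. The taxonomy fragment, component names and capability descriptions below are hypothetical; they only indicate how ontology-based subsumption lets an agent map a requirement onto advertised software resources, not how COGents actually implements it.

```python
# A tiny taxonomy fragment, standing in for concepts that an ontology formalizes.
IS_A = {
    "activity-coefficient-model": "thermodynamic-model",
    "equation-of-state": "thermodynamic-model",
    "thermodynamic-model": "model",
}

def subsumes(general, specific):
    """True if 'specific' equals 'general' or is one of its descendants."""
    while specific is not None:
        if specific == general:
            return True
        specific = IS_A.get(specific)
    return False

# Capabilities advertised by registered components (hypothetical names).
advertised = {
    "PolarVLE-1.2": {"kind": "activity-coefficient-model", "phases": {"vapor", "liquid"}},
    "CubicEOS-3.0": {"kind": "equation-of-state", "phases": {"vapor", "liquid"}},
}

# A request expressed with the same ontology terms.
request = {"kind": "thermodynamic-model", "phases": {"vapor", "liquid"}}

matches = [
    name for name, cap in advertised.items()
    if subsumes(request["kind"], cap["kind"]) and request["phases"] <= cap["phases"]
]
print(matches)   # both advertised components satisfy this request
```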
5.4 Conclusion (Economic, Organizational, Technical, QA)
Interoperability standards such as CAPE-OPEN, OPC, Web services and the Semantic Web's OWL, supporting reference ontologies, open new opportunities for the process industries. Once these ideas gain wide acceptance by the process engineering community, we will find ourselves facing some very major changes in the ways process engineering software is designed, developed, marketed, distributed and used, for the mutual benefit of users and vendors. The market now has access to robust, reliable, commercial simulators that have standard software component interfaces. Process industries will be able to enjoy the lower cost and lower maintenance of commercial software, but this will be combined with an abundant flexibility. This combination will allow those companies to predict and manage process performance as never before. The number of potentially affected products is in the hundreds, due to the numerous application areas, components and suppliers. We will see many innovative combinations of process modeling components and services from large and small suppliers, used in opportunistic and changing ways depending on the modeling task at hand. This new collaboration framework is called "co-opetition", as defined by Brandenburger and Nalebuff (1996): "Business is cooperation when it comes to creating a pie and competition when it comes to dividing it up." Plug-and-play capacity stimulates the market and creates new opportunities that could never have happened before. New value nets will be created with one supplier being another supplier's competitor, and at the same time the supplier's complement, as assembling components (or Web services in an SOA) from several sources will provide more than just summing up the parts by operating them separately. Be prepared for further innovations and business benefits in process and product engineering, thanks to the increasing role of interoperability standards and to emerging information technologies.
Abbreviations
AIChE      American Institute of Chemical Engineers
API        Application Programming Interface
BPEL       Business Process Execution Language
BPEL4WS    Business Process Execution Language for Web Services
BPML       Business Process Markup Language
BPMI       Business Process Management Initiative
CAPE       Computer-aided process engineering
CCM        CORBA Component Model
CO         CAPE-OPEN
CO-LaN     CAPE-OPEN Laboratory Network
CORBA      Common Object Request Broker Architecture
(D)COM     (Distributed) Component Object Model
DTD        Document type definition
EAI        Enterprise Application Integration
ebXML      Electronic business XML
HTML       Hyper Text Markup Language
HTTP       Hyper Text Transfer Protocol
IDL        Interface Description Language
IIOP       Internet Inter-ORB Protocol
IS         Information system
IT         Information technologies
J2EE       Java 2 Platform Enterprise Edition
MAS        Multi-agent system
OASIS      Organization for the Advancement of Structured Information Standards
OLE        Object linking and embedding
OMG        Object Management Group
OPC        OLE for process control
OWL        Ontology Web Language
PDXI       Process Data Exchange Institute
pdXML      PlantData XML
RDF        Resource Description Framework
RPC        Remote procedure call
SAML       Security Assertions Markup Language
SOA        Service-oriented architecture
SOAP       Simple Object Access Protocol
SQL        Structured Query Language
UDDI       Universal Description, Discovery and Integration
UML        Unified Modeling Language
UO         Unit operation
W3C        World-Wide Web Consortium
WS-I       Web Services Interoperability Association
WSDL       Web Services Description Language
XML        Extensible Markup Language
Acknowledgements
The authors wish to express their thanks to colleagues of the CAPE-OPEN, Global CAPE-OPEN and COGents projects.
References

1 Andrews, W. (2004) Predicts 2004, Gartner's predictions, www3.gartner.com/research/spotlight/asset-55117-895.jsp
2 Bearingpoint, SAP and Sun Microsystems (2003) Livre blanc, Les services Web, Pourquoi? www.bearingpoint.fr/content/library/138-731.htm
3 Bloomberg, J. (2001) Web Services: A New Paradigm for Distributed Computing, The Rational Edge, September 2001, www-106.ibm.com/developerworks/rational/library/content/RationalEdge/archives/sep01.html
4 Belaud, J. P., Pons, M. (2002) Open Software Architecture for Process Simulation, Computer-Aided Chemical Engineering, 10, May 2002, Elsevier, Amsterdam, pp. 847-852
5 Belaud, J. P., Braunschweig, B. L., Halloran, M., Irons, K., Piñol, D., von Wedel, L. (2002) Processus de standardisation pour l'interopérabilité des composants logiciels de l'industrie des procédés, Système d'information, modélisation, optimisation, commande en génie des procédés, October 2002, Toulouse, France
6 Belaud, J. P., Roux, P., Pons, M. (2003) Opening Unit Operations for Process Engineering Software Solutions, AIDIC Conference Series, Vol. 6, AIDIC & Reed Business Information S.p.A., pp. 35-44
7 Brandenburger, A., Nalebuff, B. (1996) Co-opetition, Currency Doubleday, New York
8 Braunschweig, B. L., Britt, H., Pantelides, C. C., Sama, S. (2000) Process Modeling: the Promise of Open Software Architectures, Chemical Engineering Progress, September 2000, pp. 65-76
9 Braunschweig, B. L., Gani, R. (eds.) (2002) Software Architectures and Tools for Computer-Aided Process Engineering, Elsevier, Amsterdam
10 COGents (2004) COGents project Web site, www.cogents.org
11 Cutter Consortium (2003) Web Services Terminology, Web Services Strategies, Vol. 2, No. 12, December 2003
12 Fay, S. (2003) Standards and Re-Use, The Rational Edge, May 2003, www-106.ibm.com/developerworks/rational/library/2277.html
13 Fieg, G., Gutermuth, W., Kothe, W., Mayer, H. H., Nagel, S., Wendeler, H., Wozny, G. (1995) A Standard Interface for Use of Thermodynamics in Process Simulation, Computers and Chemical Engineering, Vol. 19, Suppl., pp. S317-S320
14 Google Web APIs Beta (2004) www.google.fr/apis/index.html
15 Heintzman, D. (2003) An Introduction to Open Computing, Open Standards, and Open Source, The Rational Edge, July 2003, www-106.ibm.com/developerworks/rational/library/content/RationalEdge/archives/july03.html
16 IBM Glossary (2004) Glossary of Computing Terms, www-306.ibm.com/ibm/terminology/goc/gocmain.htm
17 Koch, C. (2003) The Battle for Web Services, CIO Magazine, October 2003, www.cio.com/archive/100103/standards.html
18 Iwanitz, F., Lange, J. (2002) OPC: Fundamentals, Implementation and Application, Hüthig Fachverlag
19 Linthicum, D. S. (2003) Next Generation Application Integration: From Simple Information to Web Services, Addison Wesley, Boston, September 2003
20 Mahalec, V. (1998) Open System Architectures for Process Simulation and Optimization, AspenTech Speech, ESCAPE-8 Conference, Belgium, 25 May 1998
21 Manes, A. T. (2003) Web Services: A Manager's Guide, Addison Wesley, Boston, September 2003
22 Oellermann, W. (2002) Create Web Services with Business Value, .NET Magazine, November 2002, Vol. 2, Number 10, www.ftponline.com/wss/2002-11/magazine/features/wollermann/default.aspx
23 OPC Foundation (1998) OPC Technical Overview, www.opcfoundation.org/01-about/OPCOverview.pdf
24 Pons, M., Belaud, J. P., Banks, P., Irons, K., Merk, W. (2003) Missions of the CAPE-OPEN Laboratories Network, Proceedings of Foundations of Computer-Aided Process Operations 2003, Coral Springs, Florida
25 Sama, S., Piñol, D., Serra, M. (2003) Web-based Process Engineering, Petroleum Technology Quarterly, 2003
26 Sessions, R. (2003) What is a Service-Oriented Architecture (SOA)? ObjectWatch Newsletter, Number 45, October 2003, www.objectwatch.com/issue-45.htm
27 Sherif, M. H. (2003) When is Standardization Slow?, International Journal of IT Standards and Standardization Research, Vol. 1, Number 1, March 2003
28 Sim42 Foundation (2004) Simulator 42 Open Source Chemical Engineering Process Simulator, www.virtualmaterials.com/sim42
29 Sprott, D., Wilkes, L. (2004) Understanding Service-Oriented Architecture, Microsoft Architects Journal, EMEA edition, January 2004, www.thearchitectjournal.com/Journal/issue1/article2.html
30 Teague, T. L. (2002) Electronic Data Exchange Using PlantData XML, AIChE Spring National Meeting, 10-14 March 2002, New Orleans Riverside, New Orleans
31 Teague, T. L. (2002b) PlantData XML, Section 4.3 of Software Architectures and Tools for Computer-Aided Process Engineering, Elsevier, Amsterdam
32 W3C (2004) World-Wide Web Consortium, Web Ontology Language Use Cases and Requirements, www.w3.org/TR/2004/REC-webont-req-20040210/
33 Warner, A. G. (2003) Block Alliances in Formal Standard Setting Environments, International Journal of IT Standards and Standardization Research, Vol. 1, Number 1, March 2003
34 Weiss, M., Cargill, C. (1992) Consortia in the Standards Development Process, Journal of the American Society for Information Science, Vol. 43, Number 8, pp. 559-565
35 Westhaus, U. (2004) DETHERM ... on the Web, an On-line Service from DECHEMA, http://isystems.dechema.de/detherm/
36 White, M. (2000) Working Together: Collaborative Competition Creates New Markets, Cap Gemini Ernst & Young Center for Business Innovation E-journal, Issue 5, pp. 33-35, www.cbi.cgey.com/journal/issue5/index.html
Section 5 Applications
The previous sections of this book have shown how process systems engineering has developed methods and tools to address the increasing complexity of the process industries; it seeks to foster the development of new products and processes, to achieve optimal operation of complex equipment, and to help in the complex management of global enterprises. Section 5 illustrates some applications of CAPE techniques, and aims to demonstrate what their benefits are, their current limits, and their short and long-term perspectives.

The first chapter illustrates the issue of education and training: how to teach students to efficiently use very powerful tools, in order to better understand the concepts and to appreciate how the theory can be put into practice, while avoiding the dangers of misusing the software by merely pushing buttons to generate results. The applications covered deal mainly with process and product design, and also illustrate the concept of tool integration, since results must be carried over from one calculation step to the next.

The second chapter concentrates on model-based process operation. It illustrates various industrial applications of data validation, and shows how the use of more detailed models can improve the accuracy of estimating plant parameters. Use of thermodynamic constraints besides component and overall mass balances is illustrated. Examples are taken from a range of industries: oil refineries, chemicals, fertilizers, nuclear power plants. The main benefits are more reliable plant monitoring, the capability to operate closer to limits with a better efficiency, early detection of faults, and reduction of analytical and instrumentation costs.

The last chapter illustrates CAPE techniques applied to solving production-planning problems for a multiproduct plant. The goal is to optimize revenue by reacting swiftly to changes in product demand, market prices and feedstock availability. The uncertainty aspect is modeled by means of a stochastic approach. Multiple objectives are considered: either maximizing the expected value of the final profit over the planning period, or maximization of the first quartile of the profit (robust solution). All steps in the application of the method are illustrated by means of a case study taken from a food additives plant.

Thus, Section 5 illustrates the diversity of CAPE tools and methods, and shows examples of current practice in areas ranging from process design to plant operation and production planning under uncertainty.
1 Integrated Computer-aided Methods and Tools as Educational Modules
Rafiqul Gani and Jens Abildskov
1.1 Introduction
The CAPE community has been developing computer-aided methods and tools for several decades, and these days it is common practice in teaching as well as in industrial problem solving to use one or more pieces of currently available software. Students are trained to solve process-product engineering problems with state-of-the-art software, which they also later use during their professional career. Process simulators and their use in process design education have become standard in process design courses everywhere. While these tools are able to provide excellent training in the analysis of problems that are well defined and have sufficient information to completely solve the problem, it is questionable if they are also suitable for solving open-ended problems (Doherty et al. 2000), such as those related to process-product design. Also, use of these tools in process-product design encourages the inefficient trial-and-error solution approach as opposed to a systematic generate-and-test approach, where additional tools for synthesis and design may be used together with process simulators. As computer-aided design becomes more prevalent in the process industry, according to Finlayson and Rosendall (2000), it is essential that graduating engineers know the capabilities of the computer-aided systems that are available, as well as the scepticism needed to interpret the results wisely. At the same time, advances in computer-aided design and simulation tools and reduced computing costs have allowed new uses of computing in chemical engineering education. Indeed, chemical process industries remain one of the strongest segments of the world-wide economy due to the cost effectiveness of well-designed chemical processes as well as to innovative chemistry (Doherty et al. 2000). Process simulators (ASPEN+, PRO/II, gPROMS, ChemCad, ProSim, etc.) together with modeling and simulation software (Mathematica, Maple, MATLAB, etc.) have become standard computer-aided tools in chemical process design, process control
and chemical process-operation modeling. As industries face major new challenges because of increased global competition, greater regulatory measures, and uncertainties in prices for energy, raw materials and products, it becomes more and more important to consider integrated solution approaches. Similar to process integration, tools and/or problem integration implies the solution of more than one problem simultaneously or the use of more than one tool in the solution of the problem. Also, the introduction of new courses such as product design has led to new challenges in fields such as applied thermodynamics (Abildskov and Kontogeorgis 2004), which also requires new software. This chapter highlights the use of a systems viewpoint within an integrated approach to the solution of chemical engineering problems. As noted by Edgar and Rawlings (2004), the chemical engineer leverages knowledge of molecular processes across multiple length scales to synthesize and manipulate complex systems that encompass both processes and products. Several computer-aided educational modules that encourage the development of this viewpoint are presented in this chapter. First, a brief overview of the integrated approach to CAPE is given, followed by a short presentation of an integrated computer-aided system that has been used as a basis for the development of a number of computer-aided educational modules. Three examples of these educational modules are presented, together with references to where other modules can be found. In conclusion, uses of these modules in courses are discussed.
1.2 Integrated Approach to CAPE
An integrated approach to CAPE, also known as concurrent engineering, simply means the solution of two or more problems in a single step, for example, make decisions in the early stages of design that also select the control structure and guarantee acceptable environmental impact. In this way, it is similar to process integration where two or more operations are performed through a single operation, for example, a heat exchanger combining a cooling operation with a heating operation. Application of the integrated approach, however, also needs an integration of tools. As illustrated through Fig. 1.1, most CAPE problems (synthesis, design, and analysis) are multitask by nature and require a number of different tools. To achieve integration of tools, it is necessary to establish the workflow and data flow with respect to the solution steps and the tools that would be needed in each of the steps. Tools integration, therefore, avoids duplication of work while providing efficient data transfer from one tool to another. Through a computer-aided framework that includes a collection of tools (and their associated subtools such as databases, models, solvers, etc.) and allows access of the tools according to specific workflow and data flow, typical chemical engineering problems can be solved in an integrated manner. More details on tools integration can be found in Fraga et al. (2002).
Figure 1.1 Multidisciplinary tools for process-product design problems (methods for process integration and algorithms for tools integration need to be developed).
1.2.1 Integrated Computer-aided System
An integrated computer-aided system (ICAS) (Gani 2001) combines computational tools for modeling, simulation (including property prediction), synthesis/design, control and analysis in a single integrated system. These computational tools are presented as toolboxes. During the solution of a problem, the student moves from one toolbox to another in order to solve problems which require tools from more than one toolbox. Typically, problems in process synthesis, process design, and process control require the use of more than one tool. For example, in process design/synthesis, one option is to define the system input stream, to analyze the mixture (use of analysis tools), to generate flow sheet alternatives (synthesis/design tools), to evaluate the alternatives (simulation and analysis tools), and finally, to optimize the flow sheet (design tools). Each toolbox has a number of tools associated with it and is connected to the necessary tools from other toolboxes. Figure 1.2 illustrates the architecture of ICAS. ICAS has been developed specifically to solve problems in an integrated manner, and it can be used to develop educational modules for different types of product-process engineering problems. It is currently used to solve industrial problems as well as for research and teaching. In this chapter, only the teaching-related features will be highlighted. As shown in Fig. 1.2, ICAS consists of a simulator (with steady-state and dynamic simulation engines) having the same features as other process simulators. ICAS, however, also has tools that "add to the system" and "toolboxes" that help to solve some of the tasks typically found in different CAPE-related problems (for example, design/selection of solvents, synthesis of process flow sheets, environmental impact analysis, model parameter estimation, etc.). The "add to the system" helps to introduce new compounds into the database, new unit operation models into the simulation model library, and new property models into the property model library in an integrated manner and requiring no additional programming. Once all these additions are introduced to the system, all tools within ICAS will be able to use them. In this way, the "add to the system" and "toolboxes" help to define the problem,
Figure 1.2 The architecture of ICAS: multidisciplinary tools for process-product design problems solved together
Often, the problems are not defined correctly or consistently, resulting in failure of the numerical solver. The different features of ICAS help to guide the students into defining/formulating the problem correctly, so that the numerical solver does not fail if a solution for the formulated problem exists.
1.3 Educational Modules
Three computer-aided educational modules involving property prediction (suitable for a course on thermodynamics or product design), extractive distillation-based separation (suitable for courses on separation processes, distillation, or process design), and model derivation and solution (suitable for courses on modeling, simulation and/or numerical methods) are presented. The objective of these educational modules is to highlight the solution strategies for typical chemical engineering problems (traditional as well as new) where software may be used (with clearly defined objectives) in some or all of the solution steps (tasks). At the same time, it is emphasized that
the software is just an efficient calculator (that is, it provides answers when asked); it does not work as an engineer. In addition to this calculator service, the software plus the workflow and data flow provide insights that improve the solution efficiency (of the overall problem) and, therefore, the productivity of the user (student).

1.3.1 Computer-aided Property Estimation
This computer-aided teaching module introduces the students to the workflow, data flow and tools needed to perform phase equilibrium calculations (saturation point calculations and the generation of various types of phase diagrams: vapor-liquid, liquid-liquid or solid-liquid). The students learn the importance of the property model selection, the need for property databases, the need for additional property models, the model equations and, finally, the important calculation steps for solving the problem. In this way, the students are able to appreciate not only the property related calculations but also understand their influence on other problems, such as process simulation and design. Two problems are presented here:
- analyze the properties of a chemical called fentanyl;
- evaluate the binary mixture ethanol-water.
The first problem could easily come from the product analysis step of a chemical product design problem, while the second problem could come from a bioprocess (downstream separation of a fermentation product, a solvent-based separation by distillation, or even a solvent-based crystallization process). In order to progress further in the process-product design problem, the pure component properties as well as the mixture properties need to be evaluated.

1.3.1.1 Analysis of Fentanyl
Here, we wish to analyze the properties of fentanyl (CAS number 000437-38-7) in terms of its state at the normal conditions of temperature and pressure, whether it is toxic, and its solubility in water and other solvents. The pure component properties that would be needed are: the normal boiling point (Tb), the normal melting point (Tm), the heat of fusion (ΔHf), the heat of vaporization (ΔHvap), the vapor pressure (Psat), the Hildebrand solubility parameter (δs), the octanol-water partition coefficient (log Kow) and a measure of toxicity (LC50). The following steps (workflow) could be performed:
1. Check databases to find the properties of fentanyl (properties such as Tb, Tm, ΔHf, ΔHvap, Psat, δs, log Kow and LC50).
2. If the properties cannot be found in the databases, use a property estimation package.
   a. Generate the needed properties through a property model by giving the molecular structural information.
3. Analyze the properties (estimated or retrieved from the database).
   a. What is the state (solid, liquid or gas) at the normal condition of temperature and pressure?
   b. Is it a hazardous compound?
   c. Are there known solvents for fentanyl?
   d. How can the solubility of fentanyl in solvents be quickly checked?
The methods and tools that are needed to perform the workflow shown above are the following: a fairly large database of pure component properties, a software package for prediction of pure component properties (with its resident model parameter tables), a software package for solvent search, and a software package for solubility calculations (which requires property models for mixture properties as well as algorithms for saturation point calculations). ICAS provides all of the above in a single integrated system. Uses of ICAS (Gani 2001) for each of the above steps are highlighted below (information on all the ICAS tools can be found at www.capec.kt.dtu.dk/Software/ICAS-and-its-Tools/ or in Gani (2001)):

Step 1: A search of the CAPEC database (Nielsen et al. 2001) in ICAS finds fentanyl, but with only the molecular weight (336.48) and the normal melting point (360.65 K). This means all other properties need to be estimated. Databases in most process simulators will not have this compound or its properties.

Step 2: To generate the properties, the ProPred toolbox within ICAS is used. ProPred needs the molecular structure of the molecule (as a 2D drawing, as a 2D/3D mol-file or as a SMILES string). The CAPEC database has the SMILES string, from which ProPred is able to draw the molecule, identify the needed groups and estimate the properties (in the case of fentanyl, since all the group parameters were not available, it also needed to create the groups, which is an option available in ProPred). Table 1.1 gives the SMILES string for fentanyl, the 2D drawing of the molecule and the properties estimated by ProPred.

Step 3: Analyze fentanyl in terms of the generated properties.

Step 3a: At the normal condition of 300 K and 1 atm, fentanyl is a solid.

Step 3b: It is hazardous, as indicated by the -log(LC50) value. A high value (higher than 3) indicates a highly toxic compound. LC50 is the aqueous concentration causing 50% mortality in fathead minnow after 96 hours.

Step 3c: The CAPEC database does not list any known solvents for fentanyl. However, as the Hildebrand solubility parameter is known (δs at 298 K), it can be used to obtain some idea of which compounds could be good solvents. A search of the database for compounds having a similar δs (at 298 K) shows that hydrocarbons are likely to be good solvents, while fentanyl will have very low solubility in water.

Step 3d: To estimate the solubility, the needed properties can be seen from the following equation (the condition for solid-liquid equilibrium where only one compound exists in the solid phase):

1 = x_s γ_s exp[(ΔHf/(R Tm))·((Tm - T)/T)]        (1)
Table 1.1 SMILES string and properties of fentanyl estimated by ProPred

SMILES string     CCC(=O)N(C~CCCCC~)C~CCN(CCC~CCCCC~)CC~
Tb                703.47 K (estimated with created groups)
Tm                360.65 K
ΔHf               44.66 kJ mol-1 (estimated with created groups)
ΔHvap at 298 K    42.16 kJ mol-1
δs at 298 K       20.78 MPa1/2 (estimated with created groups)
log Kow           3.85
-log(LC50)        7.32
Psat at 300 K     fentanyl is solid at this temperature
where x_s is the saturation composition of solid s in solution, γ_s is the liquid activity coefficient of the solid compound in solution, R is the universal gas constant and T is the temperature at which the solubility is to be calculated. From Eq. (1), it becomes clear that to estimate the solubility of fentanyl in a solvent, we need to estimate its liquid activity coefficient in the solvent (for which a property model is necessary) as well as the heat of fusion and melting point of fentanyl. A quick estimate may be obtained by setting γ_s = 1 (assuming an ideal liquid). Note that if a liquid activity coefficient model such as UNIFAC (Hansen et al. 1991; Kang et al. 2002) is used, it will require the corresponding group interaction parameters, which for fentanyl are not available. Also, since γ_s depends on composition as well as temperature, an iterative solution technique is necessary. The SoluCalc toolbox in ICAS, which has been specially developed for estimating solid solubility in solvents, can be used for this purpose. Figure 1.3 shows the calculated fentanyl saturation curve in hexane (solvent). Note that the students have the option to directly calculate the temperature versus composition diagram through the ICAS utility toolbox, or to develop their own binary SLE phase diagram software using the property model (as a model object) from ICAS through modeling/simulation software (such as EXCEL, MATLAB, etc.).
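As a minimal illustration of Eq. (1), the ideal-solubility limit (γ_s = 1) can be evaluated with a few lines of code. The sketch below (Python is used here purely as an illustrative scripting environment) takes the heat of fusion and melting point from Table 1.1; in the real exercise the activity coefficient would come from a property model (e.g., UNIFAC) supplied by SoluCalc or an exported ICAS model object.

import math

# Data for fentanyl (from Table 1.1)
dHf = 44.66e3   # heat of fusion, J/mol (44.66 kJ/mol)
Tm  = 360.65    # normal melting point, K
R   = 8.314     # universal gas constant, J/(mol K)

def ideal_solubility(T):
    """Mole fraction solubility from Eq. (1) with gamma_s = 1 (ideal liquid)."""
    return math.exp(dHf / (R * Tm) * (T - Tm) / T)

for T in (280.0, 298.15, 320.0, 340.0):
    print(f"T = {T:6.2f} K   x_s(ideal) = {ideal_solubility(T):.4f}")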
Figure 1.3 Estimated saturation solubility curve for fentanyl in hexane
The above workflow could be repeated for any new chemical being considered as a product, provided the needed property variables can be identified and their values measured or estimated through appropriate property models. Solving this type of problem with process simulators is not efficient and in most cases probably not even possible. Having all the necessary tools available in an integrated manner saves time and provides valuable insights into the problem and its solution.

1.3.1.2 Evaluation of Ethanol-Water Mixture
Here, we wish to evaluate the ethanol-water mixture with respect to its separation from a biofermentation reactor. Depending on the mixture characteristics, different separation schemes may be generated. Here, however, we will first look at the vapor-liquid equilibrium and confirm that it is indeed a minimum boiling azeotrope. Then we will introduce a solvent, for example benzene or ethylene glycol, to the system and evaluate the ternary mixture (in terms of ternary azeotrope and liquid-liquid miscibility). The methods and tools needed to perform these tasks (calculations) are available in most commercial simulators, but they are not necessarily organized for an integrated approach. In this example, however, we will break down the problem into multiple tasks in order to understand the problem, to highlight the importance of property model selection and the need for accurate pure component properties (in this case, vapor pressure), as well as the associated workflow (calculation steps or tasks) and data flow. Since ethanol-water is a nonideal mixture for which vapor-liquid (and possibly vapor-liquid-liquid, when a ternary system is considered) equilibrium needs to be calculated, the properties, calculation methods and tools that are needed are linked to the equilibrium model used. That is, if we select the equilibrium model as a two-model gamma-phi type, we may select an activity coefficient model for the liquid
phase and the ideal gas model (equation of state) for the vapor phase. Neglecting the Poynting correction factor, the vapor-liquid equilibrium is represented by:

y_i P = x_i γ_i P_i^sat        (2)
In the above equation, y_i is the vapor phase composition of component i, x_i is the corresponding equilibrium liquid phase composition, γ_i is the liquid phase activity coefficient of component i, P_i^sat is the vapor pressure of component i at the equilibrium temperature and P is the corresponding system pressure. Since γ_i is a function of composition and temperature and P_i^sat is a function of temperature, an iterative solution scheme needs to be devised to obtain the equilibrium temperature and the corresponding vapor composition for a given liquid composition and pressure. Repeating the calculations for different values of the liquid composition within the limits zero to one, and keeping the pressure fixed at the original value, generates the entire T-xy phase diagram at that pressure. Now consider the following three options:
- Given models for γ_i (for example, UNIFAC) and P_i^sat (for example, the Antoine correlation), develop a computer program to generate the phase diagram (a minimal sketch of this option is given after this list). In principle, any modeling software (EXCEL, MATLAB, etc.) can be used to develop the software with the ICAS-supplied property model object.
- Given a program to calculate the saturation temperature and vapor composition for a specified liquid composition and pressure (and for a selected set of property models), repeat the calculations to generate the phase diagram. In principle, any modeling software (EXCEL, MATLAB, etc.) can be used to develop the software with the ICAS-supplied property-utility object.
- Given software with built-in models and calculation options, select the appropriate model and calculation option to generate the needed phase diagram. In principle, any process simulator and/or the ICAS utility toolbox may be used. If, however, the compounds are not ethanol and water, then the available options in the software need to be checked.
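A minimal sketch of the first option follows. The Antoine constants and the van Laar parameters used for the ethanol(1)-water(2) pair are approximate literature values, inserted only so that the sketch runs as a stand-alone script; in the actual exercise they would be replaced by the selected property models (e.g., UNIFAC and a database vapor pressure correlation) obtained from ICAS.

import math
from scipy.optimize import brentq

# Ethanol (1) - water (2) at a fixed pressure of 1 atm.
# Antoine constants (log10 P[mmHg] = A - B/(C + T[degC])) and van Laar parameters
# below are approximate literature values, used for illustration only.
ANTOINE = {1: (8.20417, 1642.89, 230.300),   # ethanol
           2: (8.07131, 1730.63, 233.426)}   # water
A12, A21 = 1.68, 0.92                        # assumed van Laar parameters
P = 760.0                                    # system pressure, mmHg

def psat(i, T):
    A, B, C = ANTOINE[i]
    return 10.0 ** (A - B / (C + (T - 273.15)))

def gammas(x1, T):
    # van Laar activity coefficient model (temperature-independent parameters assumed)
    x2 = 1.0 - x1
    g1 = math.exp(A12 * (A21 * x2 / (A12 * x1 + A21 * x2)) ** 2)
    g2 = math.exp(A21 * (A12 * x1 / (A12 * x1 + A21 * x2)) ** 2)
    return g1, g2

def bubble_T(x1):
    """Solve Eq. (2): x1*g1*Psat1 + x2*g2*Psat2 = P for the bubble temperature (K)."""
    def residual(T):
        g1, g2 = gammas(x1, T)
        return x1 * g1 * psat(1, T) + (1.0 - x1) * g2 * psat(2, T) - P
    T = brentq(residual, 300.0, 400.0)
    y1 = x1 * gammas(x1, T)[0] * psat(1, T) / P
    return T, y1

# Sweep the liquid composition to build the T-xy diagram at P = 1 atm
for x1 in [i / 20 for i in range(21)]:
    T, y1 = bubble_T(x1)
    print(f"x1 = {x1:.2f}   T = {T:6.2f} K   y1 = {y1:.3f}")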
All three options will give the required solution. While the first option will be time-consuming, the student will gain more insight into the needed workflow and data flow than with the last option, which will be efficient in terms of problem solution but will provide little insight. An interesting approach could be to get the students to use all options, that is, use the first option at the beginning and use the last option when they are experienced in phase equilibrium calculations. This example also provides insights on property model selection (assuming an ideal system, which is usually the default selection in many software packages, the azeotrope will not be found) and on the accuracy of the needed properties (that is, the predicted azeotrope location may be highly sensitive to the accuracy of the vapor pressure model). Also, the parameters of the liquid activity coefficient model are important, as they may give different values for the location of the azeotrope. Having analyzed the binary system, the next step is to analyze the mixture when a third component (for example, a solvent) is introduced. What happens to the azeotrope? Are there still a vapor and a liquid phase in equilibrium? If not, is there an
additional liquid phase? If yes, how should the phase compositions be calculated, and is there also a ternary azeotrope? As in the case of the binary mixture calculations, most commercial simulators provide options to perform the calculations so that the above questions can be answered. The important points, however, are the following: Have the correct property model selections been made? What are the workflow and data flow? Are the results acceptable? Again, by breaking down the problem into multiple tasks, the students will gain more insight into the solution of the problem. In terms of calculations, in addition to the vapor-liquid equilibrium, the liquid-liquid equilibrium also needs to be computed:

x_1i γ_1i = x_2i γ_2i        (3)
The subscripts 1 and 2 in the above equation indicate liquid phase 1 and liquid phase 2, respectively. The same liquid phase activity coefficient model will now be used in Eqs. (2) and (3). However, the liquid composition from Eq. (2) needs to be checked for phase stability, and if found unstable, Eqs. (2) and (3) will need to be solved simultaneously. The following calculation steps (workflow) could be used:
- Use Eq. (3) to identify any binary pair (there are three binary pairs in the ternary mixture) that splits into two liquid phases:
  - Check also whether this is the vapor-liquid azeotrope point (for a vapor-liquid-liquid system, one pair will satisfy this condition).
  - For the binary system showing both an azeotrope and a liquid-liquid phase split, add incremental amounts of the third component and perform the vapor-liquid-liquid calculations until there is only one liquid phase (note that for each calculation the pressure is fixed at a constant value, but the temperature is also calculated). The ethanol-water system with benzene as the solvent will show a vapor-liquid-liquid ternary system, with the benzene-water pair showing the binary liquid-liquid phase split.
- If none of the binary pairs splits into two liquid phases, there will only be a vapor in equilibrium with a liquid, and the calculations for the binary mixture can be repeated in the same way as before. The ethanol-water system with ethylene glycol as the solvent will show only a vapor-liquid system.
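Following the same idea, the liquid-liquid split of Eq. (3) for a binary pair can be located by solving the two isoactivity equations. The sketch below uses a one-parameter Margules model with an arbitrary value A = 2.5 (chosen only so that a phase split exists); in the exercise the same activity coefficient model and parameters as in Eq. (2) would be used for, e.g., the benzene-water pair.

import math
from scipy.optimize import fsolve

A = 2.5  # assumed two-suffix Margules constant (> 2, so that a liquid-liquid split exists)

def lngamma(x1):
    """ln(gamma1), ln(gamma2) for the two-suffix Margules model."""
    x2 = 1.0 - x1
    return A * x2 ** 2, A * x1 ** 2

def isoactivity(v):
    """Eq. (3) for both components: x_1i*gamma_1i - x_2i*gamma_2i = 0."""
    xa, xb = v  # mole fraction of component 1 in liquid phase 1 and liquid phase 2
    lg1a, lg2a = lngamma(xa)
    lg1b, lg2b = lngamma(xb)
    return [xa * math.exp(lg1a) - xb * math.exp(lg1b),
            (1.0 - xa) * math.exp(lg2a) - (1.0 - xb) * math.exp(lg2b)]

xa, xb = fsolve(isoactivity, [0.05, 0.95])   # initial guess away from the trivial solution xa = xb
print(f"phase 1: x1 = {xa:.3f}   phase 2: x1 = {xb:.3f}")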
The three options listed above for the binary mixture calculations can also be repeated now for the ternary mixture calculations. Again, initially, it is better for the students to develop their own calculation program but later, they can use standard software (process simulator and/or ICAS utility toolbox). Further extension to this problem could be considered by adding an inorganic salt to the ethanol-water mixture (to study the salting-in or salting-out effect). Here, again the workflow will basically remain the same but the data flow will be significantly different because of the different property models and their corresponding property parameters. The above modules will prepare the students to solve all types of phase equilibrium problems in a systematic way, even when not all data and/or property model parameters are available. One advantage of allowing the students to develop their
own software is that in the case of new compounds or systems, they will be better prepared to perform the necessary tasks. Note that since developing their software will not need much programming effort, they will actually concentrate on learning the calculations involved in each task.
1.3.2 Separation of Azeotropic Mixtures
The second computer-aided educational module introduces the students to aspects of design and simulation of solvent-based extractive distillation. The students use the phase diagrams that they have learned to generate. The main feature of this module is that it encourages the students to make the design decisions based on simple thermodynamic calculations rather than use the simulator on a trial-and-error basis to find the solution. That is, all the important design decisions are made through the generated phase diagrams and thermodynamic insights (for example, sequencing of distillation columns, selection of solvents, design of individual columns, etc.), which also generates an initial estimate for a detailed simulation of the process. In the final step, the initial estimate is passed to the simulator and the solution is obtained without too many iterations (by the rigorous model solver). Therefore, the student spends less time with the simulator and more time generating information (knowledge) that can be used for problem solution.

1.3.2.1 Problem Description
The problem that we will consider is the separation of a binary mixture of acetone and chloroform into high-purity products. As this binary mixture forms an azeotrope (to be verified), solvent-based extractive distillation is an option. Benzene has been reported as a suitable solvent. Benzene, however, is not acceptable for environmental, health and safety (EHS) reasons, and an alternative solvent needs to be found and verified.

1.3.2.2 Problem Solution
The following steps provide a solution to this binary mixture problem.

Step 1: Perform a mixture analysis, verify that an azeotrope exists and check its dependence on pressure. Make decisions on the choice of property model and calculation steps. An ideal system cannot be assumed, since the binary mixture forms an azeotrope. A VLE-based phase diagram (using Eq. (2)) needs to be generated. Figures 1.4a and 1.4b show the binary azeotropes as a function of pressure (the ICAS utility toolbox is used to generate these diagrams).

Step 2: Find solvents that perform as well as benzene but without the negative EHS properties of benzene. Using the ProCAMD tool in ICAS, a large number of candidate solvents is generated. The problem definition is as follows: find solvents that
Figure 1.4 Acetone-chloroform VLE calculated at 5 bar (a). Acetone-chloroform VLE calculated at 1 bar (b)
are acyclic organic compounds having 320 K < Tb < 420 K and Tm < 250 K, that are more selective to chloroform than to acetone (selectivity > 1.7), that are totally miscible with acetone-chloroform (therefore a vapor-liquid system) and that do not form azeotropes with either acetone or chloroform. The solution statistics from ProCAMD are shown in Fig. 1.5. It can be noted that 5614 candidate molecules were generated, out of which 59 satisfied all constraints. From these 59 molecular structures, 133 isomers were generated and a more refined property estimation was made to identify 111 compounds that satisfied all constraints.
Figure 1.5 Solution statistics from ProCAMD for the solvent selection/design task
Figure 1.6 Calculated distillation boundaries for the acetone-chloroform-solvent system (vertices: acetone 56.1 °C, chloroform 61.1 °C, 2-methylheptane 117.6 °C; azeotrope and feed from ICAS)
Note that, to identify the solvents, pure component properties (Tb, Tm) as well as phase equilibrium calculations (selectivity, azeotrope calculation and liquid miscibility) needed to be performed. Therefore, an integrated system capable of doing these steps automatically for the user is very useful for this type of problem. Two of the alternatives found are methyl n-pentyl ether and 2-methylheptane.

Step 3: Analyze the alternative solvent candidates in terms of distillation boundaries. For this step, the PDS tool in ICAS is used. PDS performs, among other calculations, distillation boundaries, residue curves and distillation column design for a specified ternary system (reacting or nonreacting). This problem is nonreacting, and the distillation boundaries calculated through PDS are shown in Fig. 1.6.
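The residue curves and distillation boundaries computed by PDS follow from integrating the residue curve equation dx_i/dξ = x_i - y_i from a chosen liquid composition. The sketch below shows only the structure of such a calculation; for simplicity it uses assumed constant relative volatilities, whereas the real acetone-chloroform-solvent map requires the bubble-point calculation of Eq. (2) with an activity coefficient model in place of equilibrium_y().

import numpy as np
from scipy.integrate import solve_ivp

# Constant relative volatilities keep the sketch self-contained; for the real system
# replace equilibrium_y() with a ternary bubble-point calculation based on Eq. (2).
alpha = np.array([1.8, 2.1, 1.0])   # assumed volatilities relative to the solvent

def equilibrium_y(x):
    return alpha * x / np.dot(alpha, x)

def residue_rhs(xi, x):
    # Residue curve equation: dx/dxi = x - y(x); x moves toward the heaviest boiler
    return x - equilibrium_y(x)

x0 = np.array([0.3, 0.3, 0.4])                     # starting liquid composition
sol = solve_ivp(residue_rhs, (0.0, 10.0), x0, max_step=0.1)
print(sol.y[:, -1])                                # composition at the end of the curve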
Step 4: Generate the process flow sheet and design the corresponding distillation columns. This can be done interactively. Design the first distillation column with the feed mixture and the fresh solvent added. The combination of the feed mixture and solvent places the total feed in the region where acetone can be obtained as the top product, and a mixture of mainly solvent and chloroform will be obtained as the bottom product.
Figure 1.7 Design of the distillation column that is consistent with the distillation boundaries (liquid and vapor compositions on the top and bottom trays; all temperatures displayed in Celsius)
This feed will then be sent to a second column, from where the chloroform will be obtained as a top product and the solvent will be recovered and recycled. PDS and ICAS-SIM (the steady state simulation engine) can be used interactively to design the first column (number of stages, feed location, product purity, reflux ratio, etc.), verified through steady state simulation. Then the second column is added and the procedure repeated, which also generates the total flow sheet. Figure 1.7 shows an output from PDS, highlighting the design calculations for the distillation column.

Step 5: Perform simulation and optimization to determine the optimal design of the separation process. In this case, all solvents can also be included in the same calculations, and the optimization problem will find the optimal flow rate for each of the solvents. The flow sheet used in the problem formulation and a sample simulation of the flow sheet are shown in Figs. 1.8a and 1.8b. This last step can be performed with the steady state simulation engine of ICAS or with any process simulator.
Figure 1.8a Generation of the flow sheet (synthesis and design) with PDS-ICAS (a)
ICAS has a direct interface to the PRO-II steady state simulator. This means that any flow sheet synthesized and designed can be directly simulated in PRO-II without adding any further information in PRO-II. Several variations of the problem are possible. For example, provide a binary azeotropic mixture that has a significant pressure dependence, so that the solvent-based separation can be compared with pressure-swing distillation. If a solvent that introduces a phase split is selected or specified, then at least one distillation column will have vapor-liquid-liquid phase equilibrium and the design calculations will have to be consistent with the resulting distillation boundaries. Also, a reacting system may be introduced. Basically, the tasks (steps in the workflow) shown above would be similar in all cases, but the data flow would be different because of the specific choices of the models used to perform the tasks. If the students are given the calculation steps (workflow) and a corresponding set of integrated tools, they will not only be able to solve these problems without too much difficulty but will also gain valuable insights with each solution step.
Figure 1.8b Verification by simulation of the generated process flow sheet (stream summary: temperatures, pressures, enthalpies, vapor/liquid fractions and component flow rates of acetone, chloroform and 2-methylheptane) (b)
1.3.3 Integrated Computer-aided Modeling
The third educational module deals with modeling issues. Here, the importance of model analysis before attempting to solve the model equations is emphasized, together with model reuse in an external modeling/simulation environment. The degrees of freedom, the ordering of equations and the method of solution are all interrelated, and through a computer-aided modeling toolbox the students are encouraged to use these features whenever they have to solve problems represented by a set of equations. As shown in Fig. 1.9, the objective is to transform the model equations into a program code that can be used by other simulation engines and/or solvers. At the same time, the programming effort should be a minimum. But, before going to the solution phase, the model equations must be thoroughly analyzed. Although the modeling/simulation problems in most cases can be solved through a number of existing programs (MATLAB, Maple, Mathematica, etc.), in the current example the use of ICAS-MOT is highlighted for the reasons given above.
Figure 1.9 Import of model equations into MOT and, after transformation and analysis, export to a process modeling component (or external simulation engine)
The use of MOT, a tool in ICAS, as an educational tool will be highlighted through a simple reactor modeling exercise.

1.3.3.1 Modeling Problem Description
The series reactions

2A --(k1A)--> B --(k2B)--> 3C        (4)
are catalyzed by H2SO4. All reactions are first order in the reactant concentration. The reactions are carried out in a semi-batch reactor that has an internal heat exchanger with UA = 35,000 cal h-1 K-1 and an ambient temperature of 298 K (see Fig. 1.10). Pure A enters at a concentration of 4 mol dm-3, a volumetric flow rate of 240 dm3 h-1 and a temperature of 305 K. Initially there is a total of 100 dm3 in the reactor, which contains 1.0 mol dm-3 of A and 1.0 mol dm-3 of the catalyst H2SO4. The reaction rate is independent of the catalyst concentration. The initial temperature of the reactor is 290 K. The objective of this exercise is to highlight the basic features of ICAS-MOT and, at the same time, the modeling steps needed to obtain the dynamic evolution of all concentrations in the semi-batch reactor for the given operating conditions.
Figure 1.10 Semi-batch reactor scheme
1.3.3.2 Description of the Mathematical Model

Mole Balances

dCA/dt = rA + (CA0 - CA)·v0/V        (5)

dCB/dt = rB - CB·v0/V        (6)

dCC/dt = rC - CC·v0/V        (7)

and the kinetic constants are Arrhenius-type:

k1A = k1A0·exp[(E1A/R)·(1/T1A0 - 1/T)]        (8)

k2B = k2B0·exp[(E2B/R)·(1/T2B0 - 1/T)]        (9)

The relative rates are obtained using the stoichiometry (liquid phase) for the reaction series (Eq. (4)): with the rate laws r1A = -k1A·CA and r2B = -k2B·CB, the stoichiometry gives r1B = -r1A/2 and r2C = -3·r2B.

So that the net rates are as follows:

rA = r1A = -k1A·CA
rB = k1A·CA/2 - k2B·CB
rC = 3·k2B·CB

The reactor volume, the molar feed rate of A and the mixture heat capacity are:

V = V0 + v0·t
FA0 = CA0·v0 = 4 mol dm-3 × 240 dm3 h-1 = 960 mol h-1
Cpmix = CA·CpA + CB·CpB + CC·CpC

Energy Balance

dT/dt = [UA·(Ta - T) - FA0·Σ(i=1..NC) θi·Cpi·(T - T0) + V·Σ(j=1..NR) ΔHRx,j·rj] / Σ(i=1..NC) Ni·Cpi        (21)

which is equivalent to

dT/dt = [UA·(Ta - T) - FA0·CpA·(T - T0) + (ΔHRx1A·r1A + ΔHRx2B·r2B)·V] / [(CA·CpA + CB·CpB + CC·CpC)·V + N_H2SO4·Cp_H2SO4]        (22)
Summarizing, the process model is given by four ordinary differential equations (Eqs. (5-7) and (22)) and fourteen algebraic equations (Eqs. (8-20); note that Eq. (14) is actually two equations). This differential-algebraic system can be solved simultaneously using ICAS-MOT. The data for this problem can be found in Table 1.2.

Table 1.2 Data for the differential-algebraic system
Variable    Value      Units            Description                                MoT-variable
CA0         4.0        mol dm-3         Initial concentration of compound A        CA0
CH2SO4      1.0        mol dm-3         Initial catalyst concentration             CH2SO40
v0          240.0      dm3 h-1          Initial flow rate                          v0
V0          100.0      dm3              Initial reactor volume                     V0
UA          35,000.0   cal h-1 K-1      Heat transfer coefficient                  UA
Ta          298.0      K                Ambient temperature                        Ta
T0          305.0      K                Inlet temperature                          T0
T1A         320        K                Reaction temperature (reaction 1)          T1A0
T2B         300        K                Reaction temperature (reaction 2)          T2B0
k1A         1.25       h-1              Kinetic reaction constant (reaction 1)     k1A0
k2B         0.08       h-1              Kinetic reaction constant (reaction 2)     k2B0
E1A         9500.0     cal mol-1        Activation energy (reaction 1)             E1A
E2B         7000.0     cal mol-1        Activation energy (reaction 2)             E2B
CpA         30.0       cal mol-1 K-1    Thermal heat capacity of compound A        CpA
CpB         60.0       cal mol-1 K-1    Thermal heat capacity of compound B        CpB
CpC         20.0       cal mol-1 K-1    Thermal heat capacity of compound C        CpC
CpH2SO4     35.0       cal mol-1 K-1    Thermal heat capacity of catalyst          CpH2SO4
ΔHRx1A      -6500.00   cal mol-1        Reaction enthalpy (reaction 1)             DHRx1A
ΔHRx2B      +8000.00   cal mol-1        Reaction enthalpy (reaction 2)             DHRx2B
R           1.987      cal mol-1 K-1    Universal gas constant                     R
1.3.3.3 Modeling Steps in MOT
The model developer does not need to write any programming code to enter the model equations. Models are entered (imported) as text files or XML files, which are then internally translated.

Step 1: Type the model equations in MOT or transfer a text file or an XML file. In Fig. 1.11, the model (Eqs. (4-22)) has been typed into MOT.
#*******************************************
#* Nonisothermal Multiple Reaction          *
#* CAPEC, Department of Chemical Engineering*
#* Technical University of Denmark          *
#* MSC, April, 2004                         *
#*******************************************
#The series reactions:
#        k1A        k2B
#   2A -------> B -------> 3C
#       (1)        (2)
#*************
#Solution
#*************
#Kinetic
k1A = k1A0*exp( (E1A/R)*(1/T1A0 - 1/T) )
k2B = k2B0*exp( (E2B/R)*(1/T2B0 - 1/T) )
#Rate Laws
r1A = -k1A*CA
r2B = -k2B*CB
rA = r1A
rB = k1A*CA/2 - k2B*CB
rC = 3*k2B*CB
#Reactor volume
V = V0 + v0*t
FA0 = CA0*v0
Cpmix = CA*CpA + CB*CpB + CC*CpC
#Mol balances
dCA = rA + (CA0 - CA)*v0/V
dCB = rB - CB*v0/V
dCC = rC - CC*v0/V
#Energy Balance
dT = (UA*(Ta-T) - FA0*CpA*(T-T0) + (DHRx1A*r1A + DHRx2B*r2B)*V)/(Cpmix*V + CH2SO40*V0*CpH2SO4)

Figure 1.11 MOT model
Figure 1.12 Incidence matrix (a). Equation partitioning and incidence matrix comparison (b)
Step 2: Model translation. MOT translates the imported model and lists all the equations and variables found in the translated model. It has built-in knowledge to distinguish between algebraic equations (explicit and implicit), ordinary differential equations and partial differential equations. The variables are classified as parameters, known, unknown (implicit), unknown (explicit), dependent and dependent prime (used for differential equations only). MOT automatically identifies the unknown (implicit), unknown (explicit) and dependent variables. The user needs to classify the known variables as either parameters (which could then be selected for model parameter estimation) or known variables (which could be used as design variables for optimization). Also, the user needs to link the dependent variables to the dependent primes (that is, in dy/dt, y is the dependent variable and dy is the dependent prime).
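The bookkeeping behind Steps 2 and 3 can be mimicked for a small model in a few lines: from each equation and the unknowns it contains, an incidence matrix is assembled and the degrees of freedom are counted before any solver is called. The equation and variable names below are hypothetical and serve only to illustrate the idea.

# Unknowns appearing in each (algebraic) equation of a small hypothetical model
equations = {
    "eq1_k1A":  {"k1A", "T"},
    "eq2_rate": {"r1A", "k1A", "CA"},
    "eq3_V":    {"V", "t"},
}
unknowns = sorted(set().union(*equations.values()))

# Incidence matrix: rows = equations, columns = unknowns
incidence = [[1 if v in vars_ else 0 for v in unknowns]
             for vars_ in equations.values()]

n_eq, n_var = len(equations), len(unknowns)
print("unknowns:", unknowns)
for name, row in zip(equations, incidence):
    print(f"{name:10s}", row)
print("degrees of freedom =", n_var - n_eq,
      "(these must be fixed as known variables/parameters before solving)")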
Step 3: Incidence matrix analysis. As the variables are assigned, MOT is able to generate a corresponding incidence matrix (equations are placed in rows and variables in columns) and to order the equations as near as possible to a lower triangular form. Also, unless the degrees of freedom are matched (that is, a square matrix of equations and unknown variables, with the known variables equal to the degrees of freedom), MOT does not allow the solver to be called. A sample of the incidence matrix is shown in Figs. 1.12a and 1.12b.

Step 5: Define the independent variable. In this case, the independent variable is time t. MOT is now able to find the 14 algebraic equations and 4 ordinary differential equations and is therefore ready to start the solver. However, before the solver is called, the initial values for the dependent variables need to be specified, together with the parameters and the known variables.

Step 6: Set variable values. Figure 1.13 shows the specified values for the variables.

Model Solution
Step 7: Select variables for output. The user may select the variables whose values are to be stored and visualized as the numerical solver progresses towards the solution.
Figure 1.13 Initial condition and values of known variables
Figure 1.14 Dynamic concentration evolution of component A (a) and component B (b)
Step 8: Select the solver. In this case, the dynamic option and forward integration with the BDF method are chosen, and an end-time of 1.5 hours may be specified.

Step 9: Simulation results. As the numerical solver integrates, the selected variable values from Step 7 will be shown in dynamic plots, and their values will be stored in files for later use. Figures 1.14a and 1.14b show two samples of the dynamic plots.
Step 10: Saving the MOT file for reuse as well as export to other simulation engines. When the user is satisfied with the model and its solution, the MOT file can be saved for use within the ICAS simulation engine, for use from EXCEL, or for use from any external simulation engine (with or without the CAPE-OPEN interface). Further expansions of the exercise include providing experimental data and regressing the kinetic model parameters, and process optimization. Other exercises may also be developed where the generated model object is used to simulate the reactor as part of a process flow sheet. The same procedure can also be repeated to generate model objects for new property models, kinetic models and unit operation models. Using these model objects and a simulation environment such as EXCEL, the students can develop their own process simulator. For process design tasks involving new chemical products, this option is very practical and useful, as the available process simulators do not have the chemicals and/or the models to handle them.
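As an illustration of the external-environment route mentioned in Step 10, the sketch below re-implements the model of Eqs. (5)-(22), with the data of Table 1.2, in a general scripting environment and integrates it with a BDF method over the 1.5 h horizon of Step 8. This is only a sketch of what an exported model object would replace; it assumes that initially only A and the catalyst are present.

import numpy as np
from scipy.integrate import solve_ivp

# Data from Table 1.2 (cal, mol, dm3, h, K)
CA0, CH2SO40 = 4.0, 1.0
v0, V0 = 240.0, 100.0
UA, Ta, T0 = 35000.0, 298.0, 305.0
k1A0, T1A0, E1A = 1.25, 320.0, 9500.0
k2B0, T2B0, E2B = 0.08, 300.0, 7000.0
CpA, CpB, CpC, CpH2SO4 = 30.0, 60.0, 20.0, 35.0
DHRx1A, DHRx2B, R = -6500.0, 8000.0, 1.987

def rhs(t, z):
    CA, CB, CC, T = z
    V = V0 + v0 * t                                            # reactor volume, Eq. (18)
    k1A = k1A0 * np.exp((E1A / R) * (1.0 / T1A0 - 1.0 / T))    # Eq. (8)
    k2B = k2B0 * np.exp((E2B / R) * (1.0 / T2B0 - 1.0 / T))    # Eq. (9)
    r1A, r2B = -k1A * CA, -k2B * CB
    rA, rB, rC = r1A, k1A * CA / 2.0 - k2B * CB, 3.0 * k2B * CB
    FA0 = CA0 * v0
    Cpmix = CA * CpA + CB * CpB + CC * CpC
    dCA = rA + (CA0 - CA) * v0 / V                             # Eq. (5)
    dCB = rB - CB * v0 / V                                     # Eq. (6)
    dCC = rC - CC * v0 / V                                     # Eq. (7)
    dT = (UA * (Ta - T) - FA0 * CpA * (T - T0)
          + (DHRx1A * r1A + DHRx2B * r2B) * V) / (Cpmix * V + CH2SO40 * V0 * CpH2SO4)  # Eq. (22)
    return [dCA, dCB, dCC, dT]

# Initial state: 1.0 mol/dm3 of A, no B or C, reactor at 290 K
sol = solve_ivp(rhs, (0.0, 1.5), [1.0, 0.0, 0.0, 290.0], method="BDF", max_step=0.01)
print("t = 1.5 h:  CA = %.3f  CB = %.3f  CC = %.3f  T = %.1f K" % tuple(sol.y[:, -1]))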
1.3.4 Other Educational Modules
A number of ICAS-based educational modules have been developed and can be downloaded from the following address: www.capec.kt.dtu.dk/Software/ICASTutorials/ICAS-Tutorials-Workshops. These tutorials cover problems related to:
- computer-aided property estimation
- computer-aided modeling
- computer-aided product design
- computer-aided separation process design
- computer-aided batch process modeling
- integrated computer-aided process engineering.
The objective of all these exercises is to highlight a systematic solution procedure and the use of an integrated set of methods and tools. Other useful information can be found in the document from the CACHE Corporation (2004) on Computing through the curriculum: an integrated approach for chemical engineering (www.che.utexas.edu/cache/newsletters/fa112003~computing.pdf). See also Strategies for creative problem solving by Fogler and LeBlanc (www.che.utexas.edu/cache/strategies.html); The frontiers in chemical engineering education (web.mit.edu/che-curriculum/); the EURECHA Web site (www.capec.kt.dtu.dk/eurecha/); and the EFCE working party on Education (www.efce.org/wpe.html).
1.4 Conclusion
The educational modules are documents containing the problem definition together with a detailed step-by-step solution strategy, where the calculations for each step are highlighted with the possible use of specific software (data flow and workflow). They can be used by the teacher to highlight an algorithm, methodology or technique (theory). They can also be used by the students to learn how to apply the theory (algorithms/methods) to solve problems. Through the educational modules, some of the important issues (including the danger of misuse) related to the use of computers in chemical engineering education have been highlighted. They have been prepared such that the user is in charge of the navigation and decisions, while the computer does the calculations, data transfer, code generation, etc., which it is supposed to perform very efficiently. One of the principal experiences from the use of the presented educational modules has been that once the students understood the main ideas and became familiar with the workflow and data flow, they were able to tackle a wide range of similar problems without much help and in a very short time. In this way, they also learn to appreciate that the computer-aided tools are there to help them, but they are the ones who need to make the right decisions and drive the use of the software in the appropriate direction. Finally, it was found that the students were able to appreciate the concepts better and were able to solve more challenging problems in a shorter time, resulting, thereby, in an increase in productivity. They were able to use the same methods and tools for problem solution in other courses as well. The feedback from the students has also helped to improve the software as well as the workflow and data flow of the educational modules. Finally, we would like to emphasize that software should not be used as a replacement for the process-product engineer; it should be used to do what it was designed for, with the user always in charge of directing it.
References

1 Abildskov J., Kontogeorgis G. M. Chemical product design: a new challenge of applied thermodynamics. Chemical Engineering Research and Design 82(A11) (2004) p. 1494-1504
2 Doherty M. F., Malone M. F., Huss R. S. Decision-making by design: experience with computer-aided active learning. AIChE Symposium Series 323(96) (2000) p. 163-175
3 Edgar T. F., Rawlings J. B. Frontiers of chemical engineering: the systems approach. DYCOPS Conference, Paper No. 206, Boston, MA, July 2004
4 Finlayson B. A., Rosendall B. M. Reactor transport models for design: how to teach students and practitioners to use the computer wisely. AIChE Symposium Series 323(96) (2000) p. 176-191
5 Fraga E. S., Gani R., Ponton J. W., Andrews R. Tools integration for computer-aided process engineering applications, in B. Braunschweig and R. Gani (eds.) Software Architectures and Tools for Computer-aided Process Engineering. CACE-11, Elsevier Science, Amsterdam (2002) pp. 485-514
6 Gani R. ICAS Documentations. CAPEC Internal Report, Technical University of Denmark, Lyngby, Denmark, 2001
7 Hansen H. K., Rasmussen P., Fredenslund A., Schiller M., Gmehling J. Vapor-liquid equilibria by UNIFAC group contribution. Revision and extension. Industrial Engineering Chemistry Research 30 (1991) p. 2352-2355
8 Kang J. W., Abildskov J., Gani R., Cobas J. Estimation of mixture properties from first- and second-order group contributions with UNIFAC models. Industrial Engineering Chemistry Research 41(13) (2002) p. 3260-3273
9 Nielsen T. L., Abildskov J., Harper P. M., Papaeconomou I., Gani R. The CAPEC database. Journal of Chemical Engineering Data 46 (2001) p. 1041-1044
2 Data Validation: a Technology for Intelligent Manufacturing
Boris Kalitventzeff, Georges Heyen, and Miguel Mateus
2.1 Introduction
This chapter is intended to progressively demonstrate the technical assets of data validation technology. Most of the technical features of the technology will be illustrated with specific process systems. Validation technology can be, and is, implemented in various industrial sectors: it covers chemical, petrochemical and refining process plants, thermal and nuclear power plants, and upstream oil and gas fields. Data validation is an extension of data reconciliation. Before demonstrating the technical assets of validation, the reconciliation concept will be reviewed.
2.2 Basic Aspects of Validation: Data Reconciliation
Data reconciliation (DR) is the first mathematical method that addressed the concept of data validation, for linear problems. It exploits information redundancy and (linear) conservation laws to extract accurate and reliable information from measurement data and from process knowledge. It allows the production of a single consistent set of data representing actual process operations, assuming the plant is operated at steady state. To understand the basic principles of data reconciliation, one must first recognize that plant measurements (including lab analyses) are not 100% error free. When using these measurements without correction to generate plant balances, one usually obtains inconsistencies in these balances. Some sources of error in the balances depend directly on the sensors themselves:
- intrinsic sensor accuracy
- sensor calibration
- sensor location.
A second source of error when calculating plant balances is the small variations in the plant operating conditions and the fact that samples and measurements are not exactly taken at the same time. Using time averages for plant data partly reduces this problem. However, lab analyses are usually carried out at a low frequency, and thus can seldom be averaged. Finally, one must also realize that in some parts of a plant too many measurements are available, whereas in other parts some measurements are missing and must be back-calculated from other measurements. As shown in detail in Section 3, Chapter 3 of this book, data reconciliation can be expressed mathematically as:
min over (x, y*) of  Σ_i [(y_i* - y_i) / σ_i]²

subject to  F(x, y*) = 0
            G(x, y*) ≥ 0

where
y_i*  is the reconciled value of measurement i,
y_i   is the measured value of measurement i,
x_j   is the unmeasured variable j,
σ_i   is the standard deviation of measurement i, defining its confidence interval,
F(x, y*) = 0  corresponds to the process equality constraints,
G(x, y*) ≥ 0  corresponds to the process inequality constraints,
[(y_i* - y_i)/σ_i]²  is called the penalty of measurement i.
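As a toy numerical illustration of this formulation, the sketch below reconciles three flow measurements linked by a single linear mass balance (stream 1 + stream 2 = stream 3); the measured values and the 3% standard deviations match the mixer example used later in this chapter, but only this one constraint is imposed here, so the result is not identical to a full flowsheet reconciliation.

import numpy as np
from scipy.optimize import minimize

# Measured flow rates around a simple mixer (stream 1 + stream 2 = stream 3)
# and their standard deviations (3 % of the measured value)
y = np.array([181.0, 885.0, 1016.0])
sigma = 0.03 * y

def penalty(ystar):
    # Sum of the penalties of all measurements (the DR objective function)
    return np.sum(((ystar - y) / sigma) ** 2)

constraints = ({"type": "eq", "fun": lambda ystar: ystar[0] + ystar[1] - ystar[2]},)
res = minimize(penalty, y, constraints=constraints)

print("reconciled flows:", np.round(res.x, 1))
print("penalty (f_obj): %.3f" % penalty(res.x))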
In early publications on DR, the equality constraints were considered linear. Thus, one obtains a quadratic formulation, where the Jacobian matrix of F is constant. It is a Gaussian regression problem: given a set (y, σ_y), the algorithm provides the x and y* vectors together with their standard deviations σ_y* (when computed). When inequality constraints were not considered, some values of y* or x could be negative, which has no physical meaning in chemical or mechanical processes, where most variables must be positive (e.g., pressure, flow rate, mole fraction). This was considered as a source of information, because one had to find which measurement was responsible for that negative value. Later on, simple inequalities (y* ≥ 0, x ≥ 0) were considered. When F or G is nonlinear, the DR problem can be solved by sequential linearization. The minimization problem is solved iteratively, using algorithms such as SQP (sequential quadratic programming). It is now possible in some commercial codes to calculate not only the reconciled values of the measurements (y*, σ_y*) but also the unmeasured state variables (x, σ_x) and some key performance indicators (KPIs) related to measured and unmeasured state variables (y*, x), as well as their uncertainty σ_KPI:
measurements y, a priori accuracies σ_y, first-principles models and statistical laws  →  VALIDATION  →  reconciled values y* (with σ_y*), unmeasured variables x (with σ_x) and KPIs (with σ_KPI)
2.2.1 Redundancy Analyses: Local/Overall
The level of redundancy is the number of measurements that are available beyond the absolute minimum needed to calculate the system. Three different cases can be encountered:
- If a system's redundancy is negative, then there is not enough information to determine the state of the system; additional measurements need to be introduced.
- A redundancy equal to zero means that the system is globally just calculable.
- Finally, if a system has a positive redundancy, DR can use it as a source of information to correct the measurements and increase their accuracy. In fact, each measurement is corrected as slightly as possible, but in such a way that the reconciled measurements match all the constraints of the process model.
However, overall redundancy is not enough: it must also be achieved at the local scale. Indeed, redundancy can be positive at the global scale but negative locally; consequently, information is lacking to completely describe the whole process. This point is illustrated with Fig. 2.1, based on a typical synthesis loop. Components A and B are introduced into the process feed and converted into component C in the reactor unit SYNTHES (2C = 3A + B). Afterwards, the product ABC is separated into three distinct streams: one is recycled upstream in the process, another represents a purge, and a third outlet stream contains only the compound C. Let us consider a process model restricted to mass balances. The measured variables are shown in Fig. 2.1. This simple process model presents a global redundancy level of 2 (20 equations for 18 unmeasured variables). However, the local redundancy of unit SEP-2 is equal to zero. If one of the measurements around this unit were missing, the global redundancy of the model would still be 1, but the local redundancy of unit SEP-2 would be -1. Therefore, the system would not be reconcilable until a supplementary measurement around the mentioned unit is provided.
Figure 2.1 Process flow diagram (PFD) of a synthesis loop, with measured and reconciled values
2.2.2 If Complementary Measurement(s) are Needed: Which One(s)?
If the available measurement set is not sufficient to calculate all the required process performance parameters, how does one propose an extra set from which complementary measurements can be chosen, such that the system becomes either just calculable or locally redundant (and necessarily globally redundant, as illustrated before)? Consider the previous example, but where the total flow rate measurement of stream "purge" is removed. Reconciliation software would then propose a set of variables from which possible complementary measurements ought to be chosen. Namely, the software would propose in this case a choice between the partial flow rates of compounds A and B in either stream "abc" or "purge", or the partial flow rates of compound C in either stream "abc" or "c-prod". If it is not possible to add any measurement to the system (because of economical constraints, for example), another way of avoiding negative redundancy is to aggregate some units in the model as a more global "black box" (that simply ensures that the global balances are satisfied). Less information will be obtained locally, but this may allow estimating the required KPIs.
2.2.3 Increased Accuracy on Measured Data: Why?
As explained before, data reconciliation is based on measurement redundancy. This
concept is not limited to replicate measurements of the same variable by separate sensors; it includes the concept of topological redundancy, where a single variable can be estimated in several independent ways, from separate sets of measurements. Therefore, a posteriori accuracy of validated data will be better than a priori accuracy of measured data. A priori and a posteriori means before and after consistency treatment, or in other words before and after validation and reconciliation.
Table 2.1 DR back-corrects measurements and increases their accuracy

Stream     Variable                 Units   Meas.    Meas. Acc.   Reconc.   Reconc. Acc.
AB-1       Flow rate                ton/d   1016.0   3.00 %       1042.8    1.64 %
AB-1       Partial flow rate (A)    ton/d   181.0    3.00 %       180.1     2.98 %
AB-1       Partial flow rate (B)    ton/d   885.0    3.00 %       862.8     2.00 %
RECYCLED   Flow rate                ton/d   30.0     3.00 %       30.0      3.00 %
AB-2       Partial flow rate (A)    ton/d   190.0    3.00 %       190.5     1.60 %
A-3        Partial flow rate (B)    ton/d   0.0      0.00 %       0.0       0.00 %
In the previous example, unit MIX-2 presented a level 2 redundancy. Indeed, for 5 equations and 9 variables (and thus 4 degrees of freedom) we have 6 measurements (6 - 4 = 2). Table 2.1 shows the a priori and the a posteriori accuracy of those measurements around unit MIX-2. Reconciled measurements are more accurate than raw data when measurement redundancy is available. But when no redundancy is available locally, no improvement can be expected. This is the case for the estimation of the recycled flow rate: the measured value is not corrected, and its accuracy is not improved. When a measurement is not corrected, that does not imply it can be trusted; this would only be the case if its standard deviation decreased.
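The redundancy count used above can be written down explicitly: the redundancy of a (sub)system is the number of measurements in excess of the degrees of freedom left by the balance equations, as in the following small helper.

def redundancy(n_equations, n_variables, n_measured):
    """Measurements available beyond the minimum needed to calculate the system."""
    degrees_of_freedom = n_variables - n_equations
    return n_measured - degrees_of_freedom

# Unit MIX-2 of Fig. 2.1: 5 equations, 9 variables, 6 measured variables
print(redundancy(n_equations=5, n_variables=9, n_measured=6))   # -> 2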
2.2.4 DR Avoids Error Propagation
Progress in automatic data collection has presented plant operators with a flood of data. Tools are needed to extract and fully exploit the relevant information it contains. Furthermore, most performance parameters are often not directly measured, but calculated from measured values. Thus, random errors in the measurements also propagate into the estimation of the KPIs. Data reconciliation, on the contrary, allows the state estimation and measurement correction problems to be addressed in a global way. As a result, validation technology avoids error propagation and provides the most likely estimate of the actual operating point of the process. Thus, the plant can be safely operated closer to its limits. An illustration of error propagation is given in Table 2.2 for the example considered in Fig. 2.1. The goal is to estimate the flow of component C in the process output. Because the raw measurements are not error free, the mass balance equation around mixer MIX-1 is not respected (fourth row of Table 2.2). Cases 1 to 3 show what happens when each of the three (process inlet) flow rates is manually corrected to close the mass balance, the flow rate of stream C being computed afterwards. In the last case DR is used to provide a consistent and accurate set of reconciled measurements. Indeed, Table 2.2 shows a balance value equal to zero. Note that the measurements may be considered as correct, since the reconciled values are inside their confidence limits.
Table 2.2 Error propagation

                      Measured   Accuracy   Case 1   Case 2   Case 3   Reconciled   Accuracy
A in (ton/d)          181.0      3.00 %     181      181      131      180.1        2.98 %
B in (ton/d)          885.0      3.00 %     885      835      885      862.8        2.00 %
AB in (ton/d)         1016.0     3.00 %     1066     1016     1016     1042.8       1.64 %
Balance in (ton/d)    -50.0      /          0        0        0        0            /
ABC purge (ton/d)     72.0       3.00 %     72       72       72       72.0         3.00 %
C out (ton/d)         /          /          994      944      944      959.9        1.80 %
Knowing that the standard deviation of the flow measurements is 3% of the measured value, one obtains for the outlet flow rate of compound C:
- with DR: a standard deviation equal to 1.80%, with an estimate of 960 ton/d;
- with manual correction: a spread of estimates equal to 5.03% (from 944 to 994 ton/d).
Thus, DR avoids error propagation and so provides more accurate computed parameters than those calculated by less rigorous or ad hoc correction modes. Plant engineers have to solve this type of problem whether or not they have the appropriate tools.

2.2.5 Process Measurements to be Exploited
Key performance indicators (KPIs) can be determined accurately by validation of process measurement data. They are very useful for many purposes, e.g., revamping, energy integration, improved follow-up of the plant, the possibility of working closer to specifications, detecting degradation of equipment performance, etc. A hydrogen plant process is used to illustrate the determination of accurate and reliable KPIs. Namely, this example concerns the steam-to-carbon ratio (S/C) in the steam reformer feed, which is one of the key control parameters in such plants. It allows controlling the conversion of methane to carbon oxide and hydrogen while avoiding carbon deposition on the catalyst. Two different cases were studied to compute this ratio:
- First, DR was not considered. The S/C ratio was calculated from the raw measurements of the flow rates and compositions of the process inlets (steam and natural gas) and of the recycled reforming gas.
- Afterwards, the same KPI was determined by means of DR.
Each of these two cases was reassessed considering a measurement error on the steam flow rate (e.g., due to a leak); namely, the steam flow rate is measured at either 72 ton/h or 78 ton/h. The results shown in Table 2.3 demonstrate that the uncertainty on the S/C ratio is reduced when data reconciliation is performed. Also, the reconciled S/C ratio is less sensitive to the flow rate measurement error, which is detected and corrected by data
reconciliation. Thus, reconciliation detects errors in the available measurements and yields accurate, consistent and complete estimates of measured as well as unmeasured process parameters. Furthermore, in industrial practice one must take a safety margin on the S/C ratio to avoid carbon deposition on the catalyst. With DR, safety margins can be thinner, steam consumption is reduced and, therefore, plant operation costs less.
Table 2.3 KPI computation

                 Without meas. errors           With meas. errors
                 S/C ratio    rel. error        S/C ratio    rel. error
without DR       3.545        4.24 %            3.840        4.24 %
with DR          3.514        3.52 %            3.673        3.53 %
Here a real industrial case encountered in a hydrogen plant, to which validation technology was applied, is described. In a hydrogen plant (operated by the ERE company), the feed gas composition was not monitored accurately; measurement errors were leading to an approximate knowledge of the steam/carbon ratio [2], the uncertainty being on the order of 30%. However, the hydrogen production efficiency and cost are strongly related to this ratio. Indeed, a low S/C ratio decreases energy consumption; therefore, a potential return of 500,000 euro per year had been identified. On the other hand, a low S/C ratio could lead to carbon deposition (see Fig. 2.2), entailing a risk of catalyst damage (a shutdown for replacement costs five million euro). With on-line validation software the steam/carbon ratio is nowadays determined with a precision of 1%. This allows operating at the optimal point where energy costs are mastered and carbon deposition is avoided. This example shows how validation software allows operation closer to the limits while taking care of safety constraints.
Figure 2.2  Profile of reformer reactor, as a function of the fraction down the reformer tube (courtesy of BP-ERE [3])
2.3 Specific Assets of Information Validation
Data validation is an extension of DR. In that case the set of corrected measurements and other calculated data respect linear and nonlinear constraints (mass, components and energy balances, reaction constraints as well as physical and chemical thermodynamic equilibrium constraints). Furthermore the technology includes data filtering, gross error detection/elimination, and it also provides the a posteriori accuracy of all the calculated data. Therefore, accurate and reliable KPIs are determined, as well as their accuracy. Moreover, validation software detects faulty sensors and pinpoints degradation of equipment performance (heat rate, compressor efficiency, etc.).
2.3.1 Accuracy of Nonmeasured but Calculated Data
Unmeasured variables of the system are calculated and their accuracy is quantified on the basis of the measurements that are related to them. Therefore, in addition to providing substitution values for failed instruments, data validation software also calculates values that are not directly measured. Validation acts as a set of "soft sensors" that are robust and accurate because they are based on the reconciled values of all the measurements. Typically, validation technology provides three times more calculated data (and their accuracy) than the number of effectively measured data. The benefits are undeniable: costly lab analyses can be avoided. For instance, on the chemical site of Wacker Chemie (Germany) an on-line implementation of validation software reduced the number of routine analyses by up to 40 % (see Fig. 2.3) [3]. Wacker considered validation as a revolutionary way for quality follow-up of their plants: fobj, the sum of weighted squares of measurement corrections, was checked for three years (see Fig. 2.4) [3]. The records showed a reduction of the objective function (fobj) from 30,000 to 1000, demonstrating a better quality of sensor tuning. Any increase of that validation criterion alerts operators to a possible plant upset.
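The soft-sensor idea can be illustrated with a minimal sketch: an unmeasured purge flow is first inferred from a balance of measured flows, and its uncertainty then shrinks when a redundant (even imprecise) direct measurement is reconciled with it. All numbers below are invented for illustration:

```python
# "Soft sensor" accuracy propagation: an unmeasured purge flow is inferred
# from measured flows; its uncertainty shrinks when an extra redundant
# measurement is taken into account.  Illustrative values only.
import numpy as np

F_in, s_in   = 120.0, 2.0      # measured inlet flow and std dev (t/h)
F_out, s_out = 112.0, 2.0      # measured outlet flow and std dev (t/h)

# Level 1: purge = F_in - F_out (inference from one balance only)
purge_1 = F_in - F_out
s_purge_1 = np.hypot(s_in, s_out)

# Level 2: an additional direct (but imprecise) purge measurement is available
purge_meas, s_meas = 9.0, 3.0
w = np.array([1 / s_purge_1**2, 1 / s_meas**2])
purge_2 = (w[0] * purge_1 + w[1] * purge_meas) / w.sum()
s_purge_2 = np.sqrt(1.0 / w.sum())

print(f"balance only   : {purge_1:.1f} +/- {s_purge_1:.1f} t/h")
print(f"with redundancy: {purge_2:.1f} +/- {s_purge_2:.1f} t/h")
```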
Figure 2.3  Reduction of lab analysis cost (courtesy of Wacker Chemie [3])
Figure 2.4  Sum of weighted squares of measurement corrections (courtesy of Wacker Chemie [3])
Furthermore, Wacker also follows the ratio χ²/fobj, based on the chi-square (χ²) statistical test. The chi-square test value depends on the number of redundancies of the system and on the statistical threshold of the test, typically 95 %. Active bounds are considered as adding new levels of redundancy. Two different cases are possible, depending on whether the ratio is higher or lower than 1:
• If χ²/fobj > 1: no presence of gross errors in the set of measurements is expected.
• If χ²/fobj ≤ 1: the presence of at least one gross error in the set of measurements is expected.
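A minimal sketch of this consistency check, assuming illustrative corrections, standard deviations, and redundancy level (not Wacker's actual figures), could look as follows:

```python
# Chi-square consistency check on the reconciliation objective (sketch).
import numpy as np
from scipy.stats import chi2

corrections = np.array([1.2, -0.4, 2.1, -0.8, 0.3])   # validated - measured
sigmas      = np.array([1.0,  0.5, 1.5,  1.0, 0.4])   # measurement std devs
redundancy  = 3                                        # redundant equations

f_obj = np.sum((corrections / sigmas) ** 2)            # weighted sum of squares
chi2_threshold = chi2.ppf(0.95, df=redundancy)         # 95 % threshold

ratio = chi2_threshold / f_obj
if ratio > 1.0:
    print(f"ratio = {ratio:.2f} > 1: no gross error suspected")
else:
    print(f"ratio = {ratio:.2f} <= 1: at least one gross error suspected")
```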
A data reconciliation result can only be exploited if the chi-square test is satisfied. Gross error detection and elimination is a feature of validation software that will be detailed later.

2.3.2 Key Performance Indicators and Their Accuracy
Key performance indicators (KPIs) are identified in the same way as nonmeasured state variables. Because measurement errors have been withdrawn from the set of reconciled data, the best possible estimate of the plant performance is delivered. Thus, KPIs can be accurately determined. Typical KPIs include:
• global plant efficiency
• yields
• steam/carbon ratio, oxygen/carbon ratio, H2/N2 ratio, etc.
• specific energy consumption
• specific energy cost
• equipment duty and efficiency
• catalyst activity, etc.
Table 2.4 shows the S/C ratio values and accuracy obtained using data validation technology. In the third case, thermodynamic constraints were taken into account. The KPI accuracy improves more with data validation than with data reconciliation. This is due to the fact that data validation considers all available process information (temperatures, pressures, chemical reactions, equilibrium constraints, etc.), the redundancy level being thus higher. Moreover, the S/C ratio is much less sensitive to measurement bias, as demonstrated by the introduction of a measurement error on the steam flow rate entering the reformer (see Table 2.4). The additional assets of data validation are described hereafter.
Table 2.4  S/C ratio

                         Without meas. errors          With meas. errors
                         S/C ratio    rel. error       S/C ratio    rel. error
without DR               3.545        4.24 %           3.840        4.24 %
with DR                  3.514        3.52 %           3.673        3.53 %
with data validation     3.423        0.63 %           3.432        0.63 %
2.3.3 Nonlinear Thermodynamic-based Data Validation

2.3.3.1 The Limitation of (Linear) Mass Balance-based Reconciliation
Most commercial data reconciliation packages are based on a linear solver and reconcile measurements on the basis of overall mass balances. Moreover, bounds on variables are seldom considered, meaning that negative flow rates or negative inventories can appear in the results. Additionally, mass balance-based systems only offer a low level of redundancy: at most one gets one level of redundancy around each node where all incoming and outgoing rates are measured. As a consequence, the improvement in data quality is low and the results are very sensitive to gross errors in the measurements. On the contrary, thermodynamic-based data validation software provides additional equations, consequently increasing the redundancy of the system and making it more accurate and less sensitive to measurement errors. At the same time, key performance indicators can be directly derived with a high level of accuracy and reliability. Of course, using thermodynamic properties has its drawback: most of the equations become nonlinear, making linear solvers useless. One must then use a nonlinear algorithm such as large-scale SQP-IP (sequential quadratic programming with interior point), which has been implemented to solve complex nonlinear data reconciliation problems.
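As an illustration of why a nonlinear solver is needed, the following sketch reconciles a small splitter with a bilinear component balance using SciPy's SLSQP routine. The stream data are invented, and the industrial tool referred to in the text uses its own large-scale SQP-IP solver, so this is only a toy analogue:

```python
# Sketch of nonlinear data reconciliation: minimize the weighted sum of
# squared corrections subject to balance constraints (one of them bilinear).
import numpy as np
from scipy.optimize import minimize

# measured: total flow F, split flows F1, F2 and mass fractions w, w1, w2
meas  = np.array([100.0, 58.0, 45.0, 0.50, 0.52, 0.46])
sigma = np.array([2.0, 1.5, 1.5, 0.01, 0.01, 0.01])

def objective(x):
    return np.sum(((x - meas) / sigma) ** 2)

cons = [
    {"type": "eq", "fun": lambda x: x[0] - x[1] - x[2]},                 # total mass balance
    {"type": "eq", "fun": lambda x: x[0]*x[3] - x[1]*x[4] - x[2]*x[5]},  # component balance (bilinear, nonlinear)
]
bounds = [(0.0, None)] * 3 + [(0.0, 1.0)] * 3   # no negative flows or fractions

res = minimize(objective, meas, method="SLSQP", bounds=bounds, constraints=cons)
print("reconciled values:", np.round(res.x, 3))
print("weighted squared corrections:", round(res.fun, 3))
```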
Figure 2.5  Compound balances influence on accuracy (Case 1: design data; Case 2: compound balance; Case 3: mass balance; measured versus validated values of the styrene and EB recoveries)
2.3.3.2 Example: Reconciliation of Two Distillation Columns
Two consecutive distillation columns are used to separate styrene (the final product) from unreacted ethylbenzene (EB), which is recycled to the reaction section (see Fig. 2.5). Case 1 presents the design mass (and compound) balance of the plant. Case 2 presents typical measured values with a significant bias on the flow rate of recycled EB (stream c4), as reconciled in a compound-based data reconciliation system. The bias is clearly identified (3.70) and corrected (3.32), so that the styrene and EB recoveries are accurately determined (87.86 ± 0.41 %). Case 3 then presents the same flow rates reconciled using a simple mass balance system, which is unable to detect the measurement error and therefore calculates a wrong recovery of EB and styrene. One can see that the accuracy of the computed recoveries is considerably better when performing a compound balance than with a simple mass balance (in this case, more than ten times better).
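The accuracy gain brought by compound balances can be sketched with the standard linear reconciliation result for constraints A x = 0, where the a posteriori covariance is V_post = V − V Aᵀ (A V Aᵀ)⁻¹ A V. The figures below are invented and only loosely inspired by the styrene/EB example; they show how adding component balances shrinks the uncertainty of the recycled EB flow:

```python
# Effect of adding compound balances on a posteriori accuracy (sketch).
import numpy as np

# variables: [S_feed, EB_feed, S_dist, EB_dist, S_bot, EB_bot] component flows (t/h)
sigma = np.array([0.5, 0.3, 0.5, 0.05, 0.05, 0.4])   # measurement standard deviations
V = np.diag(sigma**2)

A_total = np.array([[1., 1., -1., -1., -1., -1.]])   # single overall mass balance
A_comp  = np.array([[1., 0., -1., 0., -1., 0.],      # styrene balance
                    [0., 1., 0., -1., 0., -1.]])     # ethylbenzene balance

def posterior_std(A, V):
    # linear reconciliation around constraints A x = 0:
    # V_post = V - V A^T (A V A^T)^-1 A V
    G = A @ V @ A.T
    V_post = V - V @ A.T @ np.linalg.solve(G, A @ V)
    return np.sqrt(np.diag(V_post))

std_mass_only = posterior_std(A_total, V)
std_compound  = posterior_std(A_comp, V)

i = 5  # recycled EB flow (bottoms)
print(f"a posteriori std of recycled EB: mass balance only {std_mass_only[i]:.3f} t/h, "
      f"compound balances {std_compound[i]:.3f} t/h")
```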
2.3.4 Exploiting LV and LLV Equilibria as Source of Information
Variables describing the state of a process must be reconciled to verify consistency constraints representing basic laws of physics: dew point and boiling point constraints in condensers, evaporators, or distillation columns are a source of information exploited by thermodynamic-based validation software. The process of industrial ammonia production may be subdivided into three distinct parts: synthesis gas production, the compression section, and the ammonia synthesis loop. Process natural gas (PNG) and steam enter the primary reformer reactor after sulfur removal from the PNG. High-temperature and low-temperature shift sections follow the secondary reformer, where compressed air is also introduced. After the methanator section, synthesis gas is partially recycled upstream in the process and partially introduced into the hyper compressor section. Finally, the gas enters the ammonia synthesis loop. Figure 2.6 represents the ammonia synthesis loop process flow diagram (PFD), which can be considered as having an 8-shaped structure with a heat exchanger in the middle. The synthesis gas enters the hyper compressor together with the recycle gas; the outlet (process gas) is then cooled and partially condensed (106F) to recover ammonia. Afterwards, the gas is heated in a counter-current heat exchanger, goes to the reactor section, and then returns to the same heat exchanger (at lower pressure than the cold process gas) before closing the synthesis loop. The condenser temperature (see Table 2.5) reflects a compromise between the ammonia content and the flow rate of the gas entering the reactor section. Considering the condenser pressure as constant (158 bar) to simplify the following illustration, and with the condenser inlet composition and vapor flow rate specified, three different "what if" cases were studied (see Table 2.6).
Table 2.5  Condenser 106F measurements

                                    Raw measurements     Validated measurements
Condenser 106F
  T                                 -14 °C               -16.50 °C
  Vapor flow rate (Nm³ h⁻¹)         456,890              455,040
Reactor
  P                                 165 barg             165 barg
  T                                 185 °C               181 °C
  %mol NH3                          2.4                  2.829
  %mol inerts                       11.2                 11.07
First, the temperature was assumed equal to the measured temperature, -14 °C. In the second column, the temperature was taken equal to the validated value, -16.5 °C. Finally, the temperature was computed so that the ammonia content in the vapor phase is identical to the raw measurement, 2.40 %. A large amount of information can be extracted from the results:
• At 158 bar, the hydrogen solubility rises slightly with temperature.
• If the temperature is considered equal to the raw measurement (-14 °C), the estimated ammonia vapor composition is considerably different from the measurement (3.1 % instead of 2.4 %). This proves an inconsistency in the measurement set. On the contrary, the computed vapor flow rate seems closer to the measured value.
• In the second "what if" case, the validated data are reproduced.
• To reach the specified reactor inlet ammonia content (2.4 %), the temperature should be -20.8 °C instead of the measured -14 °C. The vapor flow rate therefore decreases.
This illustration shows the limitations of any partial "manual" validation. Why is validated data so important in this particular case? The "what if" computations show the size of the uncertainty of the different data. The more NH3 condensed in the condenser the better, but this has a direct cost: the energy spent in the cooling loop. How could any such compromise be optimized if only nonvalidated data were available? Does it make sense?
Table 2.6  LV equilibrium calculation results

T (°C)                          -14          -16.5        -20.8
Vapor fraction                  0.9586       0.9558       0.9517
%mol NH3 in vapor phase         3.10         2.83         2.40
Vapor flow rate (Nm³ h⁻¹)       456,330      455,039      453,049
%mol H2 in liquid phase         0.38         0.36         0.33
Liquid flow rate                14.95        15.93        17.44
2.3.5 Exploiting Reactions and Chemical Equilibria as Source of Information
This point can be illustrated with the same ammonia process described previously (see Fig. 2.6), in particular its reactor section. Ammonia is produced in a reactor with two adiabatic catalytic stages. The reactants are nitrogen and hydrogen, entering the reactor in a stoichiometric mixture. The ammonia formation reaction is exothermic and reversible; therefore, the gas leaving the first adiabatic stage is cooled before entering the second stage. Furthermore, the model considers a performance equation, consisting of the introduction, for both adiabatic stages, of a ΔTeq parameter, which takes the deviation from chemical equilibrium into account. Because the reaction is exothermic, ΔTeq will be positive. Thus, the important information that can be extracted from data validation, considering reactions and chemical equilibrium, are the performance parameters ΔTeq (see Table 2.7). The results pinpoint a closer approach to equilibrium in the first catalyst bed. In addition, it is possible to visualize the validated ammonia concentration profile together with the equilibrium curve and the plant measurements (see Fig. 2.7). The two vertical lines represent the measured inlet and outlet temperatures of the heat exchanger between the two catalyst beds. One cannot accept a measurement point above the equilibrium curve. This erroneous measurement set could not have been noticed in any other way than by exploiting reactions and chemical equilibria as an information source.
Table 2.7  Performance parameters

                         ΔTeq (°C)
First catalytic bed      6
Second catalytic bed     14
2.3.6 Exploiting Process Information
As explained before, data validation is based on measurement redundancy. The plant structure yields additional information, which is exploited to correct measurements. Consequently, considering a process at a global scale brings more accuracy to validated data than only taking into account a local section of the process. It is the same for the accuracy evolution of key performance indicators. Considering the same ammonia process as before, the H2/N2ratio in the synthesis loop was estimated in several ways. First, only a local section of the process was considered (the synthesis loop). Then, additional information of the plant was successively added until the whole process was taken into account. Results pinpoint a substantial reduction of the KPI inaccuracy when more and more process information is considered (see Fig. 2.8).
Figure 2.7  Synthesis reactor equilibrium curve, with raw measurements and validated data of the ammonia concentration profile
Figure 2.8  Evolution of the loop H2/N2 ratio accuracy: imprecision of the KPI according to the process information taken into account
It was previously demonstrated that validation technology avoids error propagation. In fact, data validation software propagates accuracy. This technology combines process information and raw measurement data. The more process information is taken into account, the more accurate and reliable the nonmeasured data (and thus the KPIs) will be.
2.3.7 Detection of Leaks
Validation technology points out sources of process performance degradation and helps to operate the plant closer to its ultimate performance. In particular, validation allows the detection of leaks. This can be illustrated by a practical case study related to the ammonia plant discussed previously, where a leak in the NH3 synthesis loop was discovered. It would hardly have been detected by tools other than validation technology. A Carbochim plant operated in Belgium at 90 % of nominal capacity; a retrofit was studied to restore the expected capacity. Validation revealed a leak in the heat exchanger in the middle of the 8-shaped synthesis loop (see Fig. 2.6). Thus, part of the process gas was cycling around from the compressor and condenser section to the heat exchanger and back to the compressor. That leak had not been suspected. It probably developed and increased smoothly, but the question is, how could it have been discovered in the absence of the appropriate tool? The plant was shut down to isolate the leaking tubes in the exchanger and, after restart, easily achieved the expected production rate without any costly additional investment.
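Conceptually, a leak shows up as a mass imbalance that is too large to be explained by measurement noise. A minimal sketch of such a test, with invented flows and uncertainties (not the Carbochim data), is:

```python
# Leak detection as a gross-error test on a loop mass balance (sketch).
import numpy as np
from scipy.stats import norm

F_in, s_in   = 230.0, 2.5     # gas entering the loop section (t/h) and std dev
F_out, s_out = 221.0, 2.5     # gas leaving the loop section (t/h) and std dev

imbalance = F_in - F_out
s_imb = np.hypot(s_in, s_out)     # std dev of the imbalance
z = imbalance / s_imb             # standardized residual

if abs(z) > norm.ppf(0.975):      # 95 % two-sided test
    print(f"imbalance {imbalance:.1f} t/h, z = {z:.2f}: leak (or gross error) suspected")
else:
    print(f"imbalance {imbalance:.1f} t/h, z = {z:.2f}: within measurement noise")
```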
2.4 Advanced Features of Validation Technology

2.4.1 Trivial Redundancy

Trivial redundancy cases are met when the validated value of a measured variable does not depend at all upon its measured value but is inferred directly from the model. This can occur in particular in L/V equilibrium drums, where complementary thermodynamic constraints must be respected. Indeed, if, e.g., the temperature, pressure, flow rate and composition of a condenser inlet stream were known together with the unit pressure drop, any complementary measurement (e.g., the outlet temperature) would be considered as a trivial redundancy. Proper validation software detects trivial measurements, which are then no longer considered as measured. As a consequence, their measurement accuracy will not affect the accuracy of the respective validated variable.

2.4.2 Gross Error Detection/Elimination
Gross errors are detected by means of a chi-square (χ²) statistical test, which has been explained previously in Section 2.3.1.

2.4.2.1 Detecting Gross Errors
The χ² statistical test enables the detection of gross errors in sets of measurements. The χ² value depends on the total number of redundancies of the system, active bounds being considered as adding new levels of redundancy, and on the statistical threshold of the test, typically 95 %. If the weighted sum of penalties is higher than the χ² threshold value, then there is a significant suspicion that gross errors exist. In such a case, all results obtained with that model are to be used with caution: validated values, identified performance factors, and their reconciled accuracy.

2.4.2.2 Eliminating Gross Errors: The Highest Impact Method
Identifying the actual source of the gross errors is not always trivial and requires a careful analysis of the results. The conventional technique (the highest penalty method) is to ignore the measurements for which the highest corrections are made. This method is known to be inadequate for detecting some gross errors, for example, when the corresponding measurement is specified with a high level of accuracy compared with the other measurements. In contrast, the highest impact method evaluates the impact on the total sum of penalties of removing each of the measurements in turn. This approach is in principle highly time-consuming and is therefore not used by most data validation packages. However, by means of a specific algorithm, this technique can be applied in a calculation time of the same order of magnitude as a single validation run.
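A toy version of the highest impact method can be sketched for a small linear flow network: each measurement is in turn given a negligible weight (which mimics removing it), the reconciliation objective is recomputed, and the measurement whose removal causes the largest drop is flagged. The network and data below are assumptions for illustration only:

```python
# Highest-impact gross-error search on three units in series (sketch).
import numpy as np

A = np.array([[1., -1., 0., 0.],       # unit 1 balance: F0 = F1
              [0.,  1., -1., 0.],      # unit 2 balance: F1 = F2
              [0.,  0.,  1., -1.]])    # unit 3 balance: F2 = F3
meas  = np.array([100.0, 101.0, 108.0, 100.0])   # F2 carries a gross error
sigma = np.array([1.0, 1.0, 1.0, 1.0])

def objective(sig):
    # closed-form weighted LS reconciliation objective for constraints A x = 0
    V = np.diag(sig**2)
    r = A @ meas
    return r @ np.linalg.solve(A @ V @ A.T, r)

f_all = objective(sigma)
impacts = []
for i in range(len(meas)):
    sig = sigma.copy()
    sig[i] = 1e3 * sigma[i]            # "remove" measurement i (weight -> ~0)
    impacts.append(f_all - objective(sig))

print("objective with all measurements:", round(f_all, 2))
print("impact of removing each measurement:", np.round(impacts, 2))
print("most suspect measurement index:", int(np.argmax(impacts)))
```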
2.4.3 How to Validate with Petroleum Fractions
The modeling of a refinery process, or a part of it, is always confronted with the complexity of petroleum and its products. Indeed, crudes and petroleum cuts are mixtures of a large number of chemical compounds, making it very difficult to model their properties without accurately knowing their composition. Therefore, it is common practice to model such streams by the well-known pseudo-component concept.

2.4.3.1 Concept
A pseudo-component is a hypothetical molecule characterized by its density and its boiling temperature. Those parameters are then used to estimate the other thermodynamic properties (such as critical properties or specific heat capacity) using empirical correlations as proposed, for example, by the American Petroleum Institute (API). According to the crude type and origin, different pseudo-components must be used to get an accurate representation. The usual way of characterizing petroleum fractions is to generate a defined mixture of pseudo-components, with given boiling points, having the same properties as the petroleum fraction. Namely, their composition and their density are identified in order to match all stream distillation curves and densities. The most common standards for distillation curves are true boiling point (TBP) and ASTM; each of them can be expressed on a weight basis or on a volume basis (see Fig. 2.9). Several petroleum cuts involved in a distillation process can be modeled as a data validation system involving:
Figure 2.9  Decomposition in pseudo-components based on a TBP curve of a gas oil
• as variables, the pseudo-component densities and their measured volume fractions in each stream;
• as equations, the mass balances of the distillation column for each pseudo-component;
• as measurements, the densities and TBP or ASTM curves of all connected streams.
On this basis, data validation will generate calculated distillation curves from measured TBP or ASTM data, as it identifies the density of each pseudo-component; this involves minimization of the weighted deviation between measured and calculated distillation points, under density constraints and mass and thermal balance constraints. The other thermodynamic properties of the pseudo-components are also estimated.
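A much-simplified sketch of the pseudo-component idea is to slice a measured TBP curve at fixed boiling-range cut points and read off the volume fraction of each slice. The validation software described above instead identifies fractions and densities by weighted least squares under balance constraints, so the code below (with an invented gas-oil curve) only illustrates the decomposition step:

```python
# Deriving pseudo-component volume fractions from a measured TBP curve by
# slicing it at fixed boiling-range cut points (illustrative data).
import numpy as np

vol_pct = np.array([0, 10, 30, 50, 70, 90, 100])          # measured TBP points (vol %)
T_meas  = np.array([150, 205, 248, 275, 300, 340, 370])   # corresponding temperatures, deg C

cut_T = np.array([150, 200, 250, 300, 350, 370])          # pseudo-component boundaries
cum_vol = np.interp(cut_T, T_meas, vol_pct)               # cumulative volume below each cut
fractions = np.diff(cum_vol) / 100.0                      # one fraction per pseudo-component
Tb = 0.5 * (cut_T[:-1] + cut_T[1:])                       # representative boiling points

for tb, f in zip(Tb, fractions):
    print(f"pseudo-component Tb = {tb:5.1f} C   vol fraction = {f:.3f}")
print("sum of fractions:", fractions.sum())
```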
2.4.3.2 Crude Oil Atmospheric Distillation Example

The following example concerns the modeling of a crude oil distillation unit (CDU), preceded by the preheating train (see Fig. 2.10) [4]. The crude oil is separated into six petroleum cuts: naphtha, jet, kerosene, gasoline, diesel and residue.
Figure 2.10  Preheat train and CDU
Measurements available to perform the modeling are:
• density and distillation curves (ASTM-D86) of the petroleum cuts,
• temperature, pressure and flow rates of the streams,
• design data of the exchangers.
These measurements are validated and the other thermodynamic properties of the pseudo-components are subsequently computed. Furthermore, with several sets of measurements taken over one year it was also possible to confirm fouling problems for the exchangers at the end of the preheating train: their heat transfer coefficient decreased by a factor of two after one year of operation. Thus, data validation uses a rigorous method that robustly integrates complex distillation systems. This forms a sound basis for the analysis of refinery performance and, for instance, of a retrofit potential.
2.4.4 Advanced Process Control Benefits from Working with Data Validation
Nowadays plants face a market where margins are under pressure due to global competition, more stringent environmental regulations, a higher demand for flexible operation and more severe safety requirements. Control techniques are required to increase those margins. Advanced process control (APC) systems can help optimize control to deal with those challenges [5]. The data validation technique enhances the quality of information, allowing APC systems to work more efficiently.
Figure 2.11  Data validation working together with the APC system
Data validation begins by checking raw information reliability and coherency. Some measurements could be erroneous and balances might not be closed. Data validation software uses input and output streams of raw measurements in order to provide one coherent and accurate data set. With data validation, APC systems are allowed to take actions on the process based on coherent and reliable measured and nonmeasured data. Validated data contain measurements, equipment parameters, KPIs, and many other nonmeasured but validated data. The a posteriori accuracy of measurements and KPIs is provided. When a dynamic model is tuned according to validated data, benefits are generated as early as the model design stage.

2.4.4.2 Benefits at Model Design Stage
The reduced dynamic model must be certified: dynamic model parameters are chosen and adjusted in order to produce results identical to the measurements (Δ = 0 in Fig. 2.11). The benefits of using validation techniques are twofold:
• Measurements, to which the dynamic model results are compared, are checked and corrected by data validation techniques. The measurements are much more reliable (they represent the actual process operation) and thus the model will be more reliable as well.
• Data validation technology reduces the number of principal directions needed to represent process variability, allowing the reduced dynamic model to represent the same level of variability using a model with a lower number of principal directions (see Fig. 2.12) [6].
Figure 2.13  KPI follow-up and control: validated KPI (%) and KPI based on raw measurements (%) versus run number
Figure 2.12 illustrates the number of principal directions (or components) necessary to represent the variability of a given system when the latter is based on validated data or on raw measurement data. Taking into account more principal components allows a higher fraction of the total process variability to be explained:
• When using raw measurements, a large number of components are needed to explain most of the process variability (the upper limit is the number of original variables, 186).
• When using the validated data sets, the number of significant principal components tends to a much lower number than the number of variables (the upper limit is the number of degrees of freedom of the data validation model); a small numerical sketch of this effect follows below.
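This effect can be reproduced qualitatively with a principal component (SVD) analysis of synthetic data: the same low-dimensional process variability is observed once with large measurement noise ("raw") and once with small residual noise ("validated"). The data-generation parameters below are assumptions chosen only to illustrate the trend:

```python
# Explained-variance comparison via PCA (SVD) on raw vs validated data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_vars, n_latent = 500, 20, 3

latent = rng.normal(size=(n_samples, n_latent))
mixing = rng.normal(size=(n_latent, n_vars))
clean  = latent @ mixing                                   # 3 true directions of variability
raw       = clean + 0.8 * rng.normal(size=clean.shape)     # noisy measurements
validated = clean + 0.05 * rng.normal(size=clean.shape)    # after reconciliation (assumed)

def explained_variance(X, k):
    X = X - X.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    return (s[:k]**2).sum() / (s**2).sum()

for k in (3, 5, 10):
    print(f"{k:2d} components: raw {explained_variance(raw, k):.2%}, "
          f"validated {explained_variance(validated, k):.2%}")
```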
This reduction in problem size allows the dynamic model to be smaller when it is based on validated data (accuracy increased and noise reduced). Control of the process is made easier and the computing demand is decreased. Furthermore, since the data validation technique enforces the strict verification of all mass and energy balance constraints, the use of this technology ensures that the principal components represent the proper process behavior.
2.4.4.3 Benefits at Operation Stage

Process control behavior can be very different depending on whether APC is working together with data validation or not. Figure 2.13 presents the evolution of a process yield (KPI) versus time (run) with and without data validation software:
• Without data validation, APC detects a KPI variation and tries to stabilize the process operation. Based on raw data with embedded errors, APC takes actions that risk being useless, resource-expensive, and even process-disturbing.
• With data validation, APC considers the actual process operation (validated data, both measured and nonmeasured, are used as inputs to APC). APC can now use all of its resources on optimization of the process rather than on merely stabilizing the operation.
2.5 Applications

2.5.1 On-line Process Performance Monitoring
The goal is to deliver on a periodic basis (typically every 10 to 60 minutes) a coherent heat and mass balance of a production unit. In addition to the compound balances, the laws of energy conservation are introduced in the form of heat balances. This more detailed modeling of the production unit allows validation software to work as an advanced process soft sensor and to determine reliable and accurate KPIs. Typical benefits are:
• access to unmeasured data, which are quantified and their accuracy determined;
• early detection of problems: sensor deviations and degradation of equipment performance are pinpointed;
• quality at process level: off-spec products are anticipated by carefully monitoring the process;
• work closer to specifications: as the accuracy of measurement data improves, the process can be safely operated closer to the limits (this feature is reported as being financially the most productive);
• decreased number of routine analyses (up to 40 % in chemical applications);
• reduced frequency of sensor calibration (only faulty sensors need to be calibrated).
Improvement of Product Selectivity in a BASF Plant
This example shows how the operation of a production unit at a BASF operating division of performance chemicals can be improved using data reconciliation [7]. Product C is produced by conversion of component A with component B using two reactors. Several undesired by-products are generated, thus selectivity has to be maximized. The process model took into account only component mass and atomic balances. Several data sets at different process conditions were validated and from those the selectivity of product C was calculated. The diagram in Fig. 2.14a (courtesy of BASF) shows this selectivity as a function of residence time in the first reactor, calculated from measured values; Fig. 2.14b shows the results from validated data. The selectivity calculated from crude data is widely spread and in some cases selectivity values of more than 100 % were obtained, which is meaningless. The corresponding unfeasible area is marked on the charts. One could estimate in this case that a residence time of about 45 minutes is enough to maximize selectivity. However, the selectivity based on reconciled data shows a clearer trend and does not
Figure 2.14  Nitrile selectivity as a function of residence time (min) in the reactor: (a) values calculated from crude data; (b) values calculated from reconciled data (courtesy of BASF [7])
exceed the 100 % boundary. One realizes that the residence time should be larger than the one estimated without data validation in order to achieve the optimal product selectivity (a residence time of about 48 minutes). This example (considering only a restricted part of a process) shows that the evaluation of selectivity is meaningful only on the basis of validated operational data, which leads to a safe interpretation of the measurements. By doing so, a selectivity close to 99 % can be obtained systematically, which is 2 % higher than the average figure obtained without data validation.
Figure 2.15  Evolution of the specific energy consumption as a function of the rate of back-reaction, and of the parameter that influences this rate (molar ratio water/methyl formate): crude data versus reconciled data (courtesy of BASF [7])
Reducing Energy Consumption in a Formic Acid Plant of BASF
A main problem in formic acid production is the undesired back-reaction of formic acid during distillation, which increases the specific energy consumption [7]. This is shown in the left diagram of Fig. 2.15, on the basis of measured values within a time interval of 6 days. BASF looked for process parameters that may influence the back-reaction, in order to decrease operating costs (specific energy consumption). One of them is the molar ratio of water to methyl formate, both educts of the formic acid synthesis. The diagram on the right shows the influence of this molar ratio on the rate of the unwanted back-reaction:
• Without data validation (raw data, black symbols), no influence is visible, only a cloud of data.
• Using validated values (grey symbols), a clear trend is visible, which means that reducing the molar ratio decreases the rate of back-reaction.
Both parameters could be correlated only through data validation. Due to these results the specific energy consumption could be reduced by 5 %. Data validation allows the most effective command variables for the control of a process to be determined. This study led to the discovery of which control variable had a dominant effect on the said rate of back-reaction, and consequently on the specific energy consumption.

Performance Monitoring at KKL Nuclear Power Plant
On-line implementation of validation software in the nuclear power plant (NPP) of Leibstadt - Switzerland (KKL) generated substantial benefits (two million USD per year) over the past 10 years. The priority of NPP operators is to run their plant as close as possible to the licensed reactor power in order to maximize the generator power. To meet this objective, plant operators must have the most reliable evaluation of the reactor power. The definition of this power is based on a heat balance using several
Figure 2.16  Operating closer to the limits: site feed water flow of the NPP Leibstadt, reconciled value versus measured value over days in operation (courtesy of Kernkraftwerk Leibstadt [8])
measured process parameters, among which the total feed water flow rate is the most critical value. On-line implementation of validation software in the NPP of Leibstadt - Switzerland (KKL) has quantified the deviation between the actual and the measured feed water flow rate (see Fig. 2.16). In Fig. 2.16, only one recalibration is illustrated. This was used to convince the legal authorities of the reliability of the implemented validation technique. Validation results were also compared to test runs. In agreement with the authorities in charge of the safety of NPPs, KKL nowadays recalibrates the measured flow rate based on the validated value as soon as a deviation becomes significant. This enables the power plant to work close to its maximum capacity throughout the whole year (1145 MW). Prevention of losses due to heat balance errors increased the plant output by 5 MW. In addition, the use of this technology also made the annual heat cycle testing obsolete and significantly reduced the cost of mechanical and instrumentation maintenance [8].

Performance Monitoring of Refinery Units at LOR (Lindsey Oil Refinery), U.K.
On-line validation software has been used at LOR for the performance monitoring of refinery units for several years. One set of applications concerns the follow-up of fouling of the heat exchangers of several preheat trains. The main goal of the application is to determine the appropriate amount of anti-fouling product in order to maintain adequate operation of the preheat trains and thus the energy efficiency of the plant.
Another set of applications concerns the follow-up of furnaces and power plant boilers. The goal here is to determine their energy efficiency with sufficient reliability and accuracy. Any inappropriate operation can easily be detected and corrected when necessary.

Performance Monitoring of PE Plant at Confreville, France
The application enables any deviation within the instrumentation to be detected and provides guidance to the operators for the recalibration of the on-line analyzers. In addition, it ensures that the on-line soft sensors remain valid by counter-checking the quality of the instrumentation on which they rely.
2.5.2 On-line Production Accounting

2.5.2.1 Description and Benefits
This solution aims at providing a clear view of the production accounting of a whole industrial site on a daily basis: a rigorous and automatic procedure for production accounting based on closed material balances. These material balances can be performed either:
• On a global mass balance basis: mass flow rates, in terms of tons entering and tons leaving each production unit, are reconciled to generate a coherent mass balance of the whole site. This approach is typically applied in refineries and covers the whole site including the tank farm.
• On a chemical compound basis: additional information is then required on the composition of the various streams and the reaction schemes. This approach is typically applied in chemical and petrochemical production plants.
Typical benefits are:
• Actual plant balances: closed balances are key elements for effective production accounting as much as for efficient performance monitoring.
• Decrease of unidentified losses and surpluses: abnormal conditions leading to losses and/or apparent surpluses are identified and can be corrected before they impact the economics of the plant.
Several real cases can be referred to, namely an adiponitrile plant and two refineries.

Production Accounting at ERE and Holborn Refineries
On-line validation software establishes the daily mass balance of the whole ERE refinery (BP refinery located at Lingen, Germany), covering about 150 tanks and about 50 production or blending units. Only a global mass balance (in tons) is made around each unit. The person in charge of the use (and maintenance) of the system spends about 30 minutes per day to generate all the validated reports and inputs for the production accounting. More recently the Holborn refinery in Germany has
installed a similar system, which also automatically detects abrupt changes in measured data, identifying possible changes in operation or instrumentation failures.

Production Accounting at Butachimie, France
In this application the modeling includes compound balances of each main piece of equipment of an adiponitrile production facility. Reconciled compound balances are provided on a daily basis. All main chemical compounds as well as the catalysts used in the system are rigorously tracked all over the process unit.
2.6 Conclusion
Data reconciliation and validation is nowadays a mature technology. However it is often confused with flow sheeting and process simulation. Still, much has to be done to inform engineers and managers who have not learned about this technology during their studies. We have tried to convey the importance of this technology, and the very high diversity of applications and benefits that it can provide for the process industry.
References

1 Belsim VALI 4 User's Guide, Belsim, Belgium, 2005.
2 BP-ERE, AN-ERE-01.pdf, available at: www.belsim.com, 2004.
3 Wacker, AN-Wacker-01.pdf, available at: www.belsim.com, 2002.
4 Delava, P., Maréchal, E., Vrielynck, B., Kalitventzeff, B., Modeling of a Crude Oil Distillation Unit in Terms of Data Reconciliation with ASTM or TBP Curves as Direct Input. Application: Crude Oil Preheating Train, Proceedings of the ESCAPE-9 Conference, Budapest, May 31-June 2, 1999, Computers and Chemical Engineering Suppl. (1999) pp. 17-20.
5 APC Systems, www.ipcos.be, 2005.
6 Amand, T., Heyen, G., Kalitventzeff, B., Plant Monitoring and Fault Detection: Synergy between Data Reconciliation and Principal Component Analysis. Computers and Chemical Engineering 25 (2001) pp. 501-507.
7 BASF, AN-BASF-02.pdf, available at: www.belsim.com, 2002.
8 Kernkraftwerk Leibstadt, AN-KKL-01.pdf, available at: www.belsim.com, 1995.
3 Facing Uncertainty in Demand by Cost-effective Manufacturing Flexibility

Petra Heijnen and Johan Grievink
3.1 Introduction
This chapter deals with a case of flexible production planning for a multiproduct plant to optimize expected proceeds from product sales when facing uncertainty in the demands for existing and emerging new products over the planning period. The manufacturing capacities of the plant (that is, the nominal production rates) for the existing and new products are futed by its design and these are not subject to adaptations by making changes to the plant. Hence, the flexibility refers exclusively to the planning problem and it is not coupled with a plant redesign. As the inherent uncertainty in customers’ demand forecasts is hard to defeat by a company, the industry’s specific capabilities with respect to responding rapidly to new and changing orders must be improved. New technologies are required, including tools that can swiftly convert customer orders into actual production and delivery actions. On the production side, this may require new planning technologies or new types of equipment that are, for example, dedicated to product families, rather than to individual products. Many companies need to use medium term planning in their product development and manufacturing processes in order to sustain the reliability of supply and the responsiveness to changing customer requirements. Flexibility is often referred to in operations and manufacturing research as the solution for dealing with swift changes in customer demands and requests for intime delivery (Bengtsson 2001). The concept has received even more attention with the upcoming of e-business in the chemical industry. The actual meaning, interpretation and consequences of “operating flexibility” are, however, not instantly clear for a particular case or company (Berry and Cooper 1999). A number of uncertainties may induce organisations to seek more flexible manufacturing systems. Common sources of uncertainties are depicted in Fig. 3.1.
Figure 3.1  Types of uncertainties (feed stock supply, process, and product demands)
On the input side, manufacturing systems have to deal with suppliers’ reliability with respect to feed stock supply, involving quantities (v), quality (q), cost (C), and with uncertainties in time (t). Secondly, process inherent uncertainties exist, concerning equipment availability (Tj, and modeling uncertainties. On the product demand side, the same types of uncertainties can be found for each product, involving demand (d), quality ( q ) ,and cost (C), and time (t). For the products a distinction is made regarding two different sales conditions. At the beginning of the planning period some sales contracts can be secured, under which the amounts ( y ) that can be manufactured and sold. The excess manufacturing capacity of the plant can be used to make the amounts (z),which will capture market opportunities during the planning period. It is noticed that the number of products (n) can change over time. This change reflects a trend towards diversification in many production markets. To achieve this diversification and to cope with shorter product life span, it seems preferable for manufacturing systems to have flexible resources. Extensive research has been done into the flexibility of (chemical) processes that are subject to uncertainties on the input side and with respect to the availability of the processing equipment, possibly influencing the feasible operating region of the plant (Bansal et al. 1998; Swaney and Grossmann 1985). Less research has been done, however, into flexibility that is characterized by the possibility to cope with changes in demand or product mix. The right way to respond to change is always system specific, and dependent on the system’s flexibility. Many approaches for dealing with uncertainties exist (CorrEa 1994). As this study concerns product mix variations and demand variations, the monitoring and forecasting technique was selected. The uncertainty aspect is modeled by means of a stochastic approach. In the development of a planning technique its applicability requires careful consideration. Firstly, the technique should be compatible with the work processes and the associated level of technical competence. Among others, this requires that the input and the output can be well understood and interpreted by those who will use it. Secondly, the cost of using the technique (time and money wise) should remain low. It would be very helpful to use input data that can be obtained without excessive efforts, while the results of the planning can be easily (re)producedwith small computational effort.
3.2 The Production Planning Problem
A case study was developed based on experiences at a company that makes various
types of food additives in a multiproduct batch plant. In this plant several groups of products are produced on a number of reactors. At the beginning of every new production period of one year, planning management agrees with the customers about the amount and price of products that will be produced to meet customer demands in the coming period. These agreements between company and customers are laid down in annual contracts. The demand for products that could be sold in these annual contracts is in general very large and the total capacity of the plant could have been sold out. However, planning management has strong indications that the demand for a new and very profitable product will increase during the coming production period and it could be very attractive to keep some of the capacity free for this newcomer on the market. Not only that, but also for the current products it could be quite profitable to not sell all capacity beforehand, since the price for which the products can be sold during the production period is in general significantly higher than before by contract price. Unfortunately, the demand for the products during the production period cannot be assured. Planning management would like to establish in the production planning how much of the current products they should sell in annual contracts and how much capacity they should leave open for every individual current product and for the new one in such a way that the final profit achieved at the end of the production period is as high as possible. The plant production capacity acts as a restriction on the total amount that can be produced. The next sections will introduce a simple probabilistic model for the product demands as well as a manufacturing capacity constraint (Section 3.3). The realised product sales are related to the corresponding profit over the planning period (Section 3.4). In order to optimize the manufacturing performance, two objective functions are chosen that take into account the distributive nature of the demands and product sales (Section 3.5). The first objective is the expected value of the final profit over the planning period. The second objective is a measure for the robustness of the planning; it involves maximization of the first quartile of the profit. The outcome of the modeling is a multiobjective, piecewise linear optimization problem (Section 3.6). Due to the discontinuities the problem is solved by means of a direct search method, the Nelder and Mead algorithm. The multiobjective problem is turned into two single objective problems. The solutions to these problems define the full range between maximum expected profit (with a high risk) and the robust profit (for a low risk scenario). This approach allows a production manager to take a preferred position between these two extremes. The result is a production planning and the associated profit. Each step in the model development is illustrated by its application to the case study of the food additives plant, taking base case values for model parameters. To be able to make a good evaluation of the risks, the sensitivity of the profit and the optimal planning are studied for small changes from the nominal model parameters
(Section 3.7). Finally, the implementation aspects of the proposed planning method are discussed (Section 3.8).
3.3 Mathematical Description of the Planning Problem
To solve this production planning problem we need to formulate it in a more formal way. Assume that the current product portfolio consists of n products that can be produced on several exchangeable units. The decision about which specific unit a product will be made on is established in the production schedule and is considered to be outside the scope of the production planning. In the production planning, the planners take the overall production capacity into consideration without allocating products to specific units. In the production planning the following decision variables should be established:
• the amount of the current products sold in annual contracts in ton per year: yi, i ∈ {1, 2, ..., n};
• the capacity left open for the current products and for the new one in ton per year: xi, i ∈ {1, 2, ..., n, n+1}.
The information that is needed to make these decisions consists of the following parameters:
• the profit that can be made with the production of one ton of a certain product, depending on the retail price and on the production costs, divided into:
  - the profit made on the current products sold in annual contracts in dollars per ton: σi, i ∈ {1, 2, ..., n};
  - the profit made on the amounts of the current products and of the new one sold during the production period in dollars per ton: ρi, i ∈ {1, 2, ..., n, n+1};
• the total production time available in hours per year: T;
• the production time needed to make the products in hours per ton: τi, i ∈ {1, 2, ..., n, n+1};
• the demand for the current products that can be sold in the annual contracts: δi, i ∈ {1, 2, ..., n}.
Since the total amount of product made during the production period cannot exceed the total available production time, the decision variables are restricted by:

Σ_{i=1}^{n} τi yi + Σ_{i=1}^{n+1} τi xi ≤ T        (1)
The assumption is made that the planners have enough and correct information to make a good estimation of the values of these parameters. Therefore, these parameters are assumed to be the deterministic factors in the planning problem. The remaining input is:
• the demand for the current products and for the new one during the production period in ton per year: di, i ∈ {1, 2, ..., n, n+1}.
The uncertainty of the demand during the production period is quite large and therefore these factors are assumed to have a stochastic nature. The assumption is made that the planners have enough information to indicate the minimum and maximum demand that can be expected and the mode of the demand, that is, the demand for which the probability density function is maximized. The demand for the products will therefore be modeled by a triangular distribution with the probability density function given in Eq. (2). This triangular form (see Fig. 3.2) corresponds with the shape used in fuzzy modeling.

f_{di}(x) = 2 (x - αi) / ((γi - αi)(βi - αi))   for αi ≤ x ≤ βi
f_{di}(x) = 2 (γi - x) / ((γi - αi)(γi - βi))   for βi ≤ x ≤ γi
f_{di}(x) = 0                                   otherwise        (2)

in which αi is the minimum, βi the mode and γi the maximum of the demand di for a certain product i.

Figure 3.2  The triangular probability density function of the demand

The expected demand for product i will then be:

E(di) = (αi + βi + γi) / 3 ,   i ∈ {1, 2, ..., n, n+1}.        (3)

The probability distribution of the demand di reads:

F_{di}(x) = (x - αi)² / ((γi - αi)(βi - αi))        for αi ≤ x ≤ βi
F_{di}(x) = 1 - (γi - x)² / ((γi - αi)(γi - βi))    for βi ≤ x ≤ γi        (4)
In Section 3.4 the mathematical description of the planning problem will be continued, but the generic problem will first be applied to a case study in a plant where various types of food additives were made.
3.3.1 Case Study in a Food Additives Plant
In a multiproduct multipurpose batch plant different food additives are produced on two reactors. The present portfolio consists of two product groups A and B. Having strong indications of a growing demand for a new product C, operations management wants to reevaluate the current product portfolio and the production planning for the coming year. The product groups A and B are manufactured on two exchangeable reactors. Planning management has estimated the production times based on the current annual operation plan. The total amount of available operating time for the reactors is determined by the available time in a year minus 15 % down and changeover time, resulting in 4625 hours for reactor 1 and 4390 hours for reactor 2. Together this results in a total production time for the coming year of T = 9015 hours. Table 3.1 shows the estimated values for all parameters in the planning problem. From these figures it is clear that the total production capacity could have been sold out in the annual contracts, since the demand for the product groups A and B is high enough. The new product C, however, is expected to be very profitable, and it would very likely be an unwise decision to sell out the total production capacity. Unfortunately, the demand for the new product C is not very certain.
Table 3.1  Estimated values for the planning parameters

                                            Product group A    Product group B    Product C
Production time in hours per ton            τA = 0.24          τB = 0.47          τC = 1.4
Contract profit in $ per ton                σA = 1478          σB = 897           -
Demand for contracts in ton per year        δA = 20 000        δB = 11 000        -
Profit in production period in $ per ton    ρA = 1534          ρB = 953           ρC = 3350
Minimum demand in ton per year              αA = 16 040        αB = 8350          αC = 0
Mode of demand in ton per year              βA = 17 550        βB = 8900          βC = 850
Maximum demand in ton per year              γA = 19 900        γB = 9150          γC = 1600
This case study will be continued in Section 3.4.1.
3.4 Modeling the Profit of the Production Planning
The criterion on which planning will be assessed is the total profit that is achieved after the production period when the production is executed in accordance with the production planning. For that, not only is the expected profit important, but also the
certainty that this profit will be achieved should be taken into account in the final decision. If a small deviation of the expected demand results in a much lower profit than expected, it could be safer to choose a more robust planning with a lower, but more certain, profit. Let zi, i ∈ {1, 2, ..., n, n+1} be the sold amounts of the products when the production period is finished. Together with the products that are sold before the production period in annual contracts, the final profit that will be made in this period equals:

P = Σ_{i=1}^{n} σi yi + Σ_{i=1}^{n+1} ρi zi
The amount of products sold during the production period will depend on the demand for these products and on the available production time. If the demand is lower than the amount that can be produced in the available production time, then the total demand can be satisfied. However, if the demand is larger than the available capacity, then only that amount of product can be made and sold. Therefore, the total amount of product sold during the production period will equal zi = min(di, xi), i ∈ {1, 2, ..., n, n+1}. The same holds for the amounts sold in annual contracts. The overall profit will then be

P(y1, ..., yn, x1, ..., xn, xn+1) = Σ_{i=1}^{n} σi min(δi, yi) + Σ_{i=1}^{n+1} ρi min(di, xi)        (5)
For fixed values of the decision variables, the maximum and minimum total profit that can be achieved depend on the planned amounts of the products in the production planning, and on the maximum, respectively minimum, demand for the products:

max P(y1, ..., yn, x1, ..., xn, xn+1) = Σ_{i=1}^{n} σi min(yi, δi) + Σ_{i=1}^{n+1} ρi min(xi, γi)

min P(y1, ..., yn, x1, ..., xn, xn+1) = Σ_{i=1}^{n} σi min(yi, δi) + Σ_{i=1}^{n+1} ρi min(xi, αi)        (6)
For fixed values of the decision variables the probability density of the final profit can now be derived from the probability density of the demand for the products during the production period, under the assumption that the demands for these products are mutually independent. In general, for a linear combination w = ax + by, where the stochastic variables x and y are independent, the density function of w reads (Papoulis 1965):

f_w(w) = (1 / |ab|) ∫_{-∞}^{+∞} f_x(u/a) f_y((w - u)/b) du        (7)
The final profit was defined by

P(y1, ..., yn, x1, ..., xn, xn+1) = Σ_{i=1}^{n} σi min(δi, yi) + Σ_{i=1}^{n+1} ρi min(di, xi).
Let pi, i ∈ {1, 2, ..., n, n+1}, be the profit made by selling the amount zi of product i during the production period, then pi = ρi zi, with a minimum of 0 and a maximum of ρi xi. Applying the general proposition (Eq. (7)) to the final profit P, the probability density function of P reads:

f_P(P) = (f_{p1} ∗ f_{p2} ∗ ... ∗ f_{pn+1}) (P - Σ_{i=1}^{n} σi min(δi, yi))        (8)

in which ∗ denotes the convolution of the densities of the individual profit terms pi.
If the actual demand di, i E (1, 2, ..., n, n + 1) is smaller than the planned capacity xi then the sold amount of product zi will equal the demand di. In that case, the probability density of zi will follow the probability density of the demand di. However, if
the actual demand di is larger than the planned capacity xi, then only the amount zi = xi will be produced and sold. The probability that this will happen is the probability of a demand larger than the planned capacity, that is, di ≥ xi (see Fig. 3.3).
Figure 3.3  The probability density function of the sold amount of product
By this observation, the probability density f_{zi}(zi), i ∈ {1, 2, ..., n, n+1}, of the sold amount of product zi satisfies

f_{zi}(zi) = f_{di}(zi) ,   0 < zi < xi ,   i ∈ {1, 2, ..., n, n+1}        (9)

complemented by a probability mass Pr(zi = xi) = Pr(di ≥ xi) at zi = xi.
Unfortunately, owing to the local discontinuities in the probability density function of the final profit, the integrals cannot be solved analytically. The derived theoretical results will now be applied to the case study described in Section 3.3.1.
Figure 3.4  Simulation of the final profit (in million $) for one possible production planning
3.4.1 Modeling the Profit for the Food Additives Plant
In the aforementioned case study the production planning should be made for two current product groups A and B and one new product C. For reasons of comprehensibility, the assumption is made that no products were sold before the production period, that is, y_A = y_B = 0. The total profit that can be made will now depend on the demand for the products during the production period and on the planned amounts of the different products, that is,

$$P(x_A, x_B, x_C) = 1534 \min(d_A, x_A) + 953 \min(d_B, x_B) + 3350 \min(d_C, x_C),$$

under the restriction that the total production time will not be exceeded, 0.24 x_A + 0.47 x_B + 1.4 x_C = 9015. The probability density function of the profit satisfies:

$$f_P(P) = \left(f_{\pi_A} * f_{\pi_B} * f_{\pi_C}\right)(P), \qquad \pi_A = 1534\, z_A,\; \pi_B = 953\, z_B,\; \pi_C = 3350\, z_C.$$
Although this probability density cannot be solved analytically, it can be simulated for fixed values of x_A, x_B, x_C by randomly picking a demand for the products A, B and C from their individual probability density functions. Figure 3.4 shows a probability histogram of the simulated profit for a production planning with x_A = 17,000, x_B = 9588, x_C = 950 ton per year. The sample size taken is 1000. The unequal distribution in the left tail is caused by the sample size and would not be present in the theoretical distribution. This histogram shows a very skewed
distribution to the left. This skewness is caused by the discontinuities in the function P(x_A, x_B, x_C). In Section 3.5.1 this case study will be continued.
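A minimal sketch of such a Monte-Carlo simulation is shown below. It assumes the triangular demand parameters (minimum, mode, maximum per product) listed later in Table 3.3 and NumPy's triangular sampler, so it is an illustration rather than the authors' original implementation.

```python
# Hedged sketch of the simulation behind Fig. 3.4: sample independent triangular
# demands, cap them by the planned amounts, and accumulate the profit.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000

# mutually independent triangular demands (min, mode, max) as in Table 3.3
d_A = rng.triangular(16040, 17550, 19900, n_samples)
d_B = rng.triangular(8350, 8900, 9150, n_samples)
d_C = rng.triangular(0, 850, 1600, n_samples)

x_A, x_B, x_C = 17000.0, 9588.0, 950.0          # the planning used for Fig. 3.4
profit = (1534 * np.minimum(d_A, x_A)
          + 953 * np.minimum(d_B, x_B)
          + 3350 * np.minimum(d_C, x_C))

print(profit.mean(), np.percentile(profit, 25))  # expected profit and first quartile
```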
3.5 Modeling the Objective Functions
The quality of a certain production planning will be assessed on the expected value of the profit that can be achieved with the planning. The expected value of the final profit satisfies:

$$E_P(y_1, \ldots, y_n, x_1, \ldots, x_n, x_{n+1}) = \sum_{i=1}^{n} \sigma_i \min(d_i, y_i) + \sum_{i=1}^{n+1} p_i\, E_{z_i}(x_i), \qquad \text{with } y_i \leq d_i,\; i \in \{1, 2, \ldots, n\} \tag{11}$$
From the probability density function f_{z_i}(z_i), i ∈ {1, 2, ..., n, n+1} of the amount of sold products, the expected value of the amount z_i can be determined by:

$$E_{z_i}(x_i) = \int_{\alpha_i}^{x_i} z\, f_{d_i}(z)\, dz + x_i \Pr(d_i \geq x_i) \tag{12}$$
There are four possibilities for the planned amount x_i in comparison to the expected demand d_i, i ∈ {1, 2, ..., n, n+1}. Remember that d_i was expected to lie between α_i and γ_i with mode β_i. Elaboration of Eq. (12) yields:
1. If x_i ≤ α_i, then E_{z_i}(x_i) = x_i.
2. If α_i ≤ x_i ≤ β_i, then $E_{z_i}(x_i) = x_i - \dfrac{(x_i - \alpha_i)^3}{3\,(\gamma_i - \alpha_i)(\beta_i - \alpha_i)}$.
3. If β_i < x_i ≤ γ_i, then $E_{z_i}(x_i) = \dfrac{\alpha_i + \beta_i + \gamma_i}{3} - \dfrac{(\gamma_i - x_i)^3}{3\,(\gamma_i - \alpha_i)(\gamma_i - \beta_i)}$.
4. If x_i > γ_i, then $E_{z_i}(x_i) = \dfrac{\alpha_i + \beta_i + \gamma_i}{3}$.
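A compact sketch of these four cases is given below; the function name is an assumption, and the expressions are simply the cases listed above.

```python
# Hedged sketch of Eq. (12): expected sold amount E[z_i] = E[min(d_i, x_i)] for a
# triangular demand with minimum alpha, mode beta and maximum gamma.
def expected_sold(x, alpha, beta, gamma):
    if x <= alpha:                       # case 1: plan below the minimum demand
        return x
    if x <= beta:                        # case 2: plan on the rising branch of the density
        return x - (x - alpha) ** 3 / (3 * (gamma - alpha) * (beta - alpha))
    if x <= gamma:                       # case 3: plan on the falling branch of the density
        return (alpha + beta + gamma) / 3 - (gamma - x) ** 3 / (3 * (gamma - alpha) * (gamma - beta))
    return (alpha + beta + gamma) / 3    # case 4: plan above the maximum demand

# e.g. expected_sold(18000, 16040, 17550, 19900) gives about 17,578 ton for product A
```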
Unfortunately, the expected value of the profit that can be achieved with a certain production planning does not guarantee that this profit will be achieved in reality. Due to the skewed density function, for most choices of the production planning the median of the profit will be higher than the expected value of the profit. This
means that with a probability of more than 50% the real profit will be higher than the expected value, and with a probability of less than 50% the real profit will be lower than the expected value. As a consequence, the average deviation below the expected profit will be larger than the average deviation above the expected profit. As a measure for the robustness of the planning, the first quartile Q_P(y_i, x_i) of the profit is chosen. The probability that the real profit will be lower than this first quartile equals 25%. Under the assumption that the demands for the different products are mutually independent, the first quartile of the total profit will be a linear combination of the first quartiles of the sold amounts of products z_i, i ∈ {1, 2, ..., n, n+1} and can be written as:

$$Q_P(y_1, \ldots, y_n, x_1, \ldots, x_n, x_{n+1}) = \sum_{i=1}^{n} \sigma_i \min(d_i, y_i) + \sum_{i=1}^{n+1} p_i\, Q_{z_i}(x_i)$$
The first quartile of the sold product, Q_{z_i}(x_i), will equal the first quartile of the demand d_i if the planned amount x_i is larger than this quartile; otherwise it will equal x_i:

$$Q_{z_i}(x_i) = \min\!\left(Q_{25}(d_i),\, x_i\right), \qquad i \in \{1, 2, \ldots, n, n+1\} \tag{14}$$

As long as the first quartile Q_{25}(d_i) is smaller than the mode β_i of the demand, i.e., β_i ≥ 0.75 α_i + 0.25 γ_i, then

$$Q_{25}(d_i) = \alpha_i + \sqrt{0.25\,(\gamma_i - \alpha_i)(\beta_i - \alpha_i)}.$$

If the first quartile Q_{25}(d_i) is larger than the mode β_i of the demand, i.e., β_i < 0.75 α_i + 0.25 γ_i, then

$$Q_{25}(d_i) = \gamma_i - \sqrt{0.75\,(\gamma_i - \alpha_i)(\gamma_i - \beta_i)}.$$
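A small helper for these quartiles is sketched below; it assumes the standard triangular-distribution quantile (scipy.stats.triang would give the same result), and the printed value can be checked against the 17,247 ton used for product A in the case study.

```python
# Hedged sketch: first quartile of a triangular demand and of the sold amount
# z_i = min(d_i, x_i), as used in Eq. (14).
from math import sqrt

def triangular_quantile(p, alpha, beta, gamma):
    """p-quantile of a triangular distribution (min alpha, mode beta, max gamma)."""
    if p <= (beta - alpha) / (gamma - alpha):        # quantile lies below the mode
        return alpha + sqrt(p * (gamma - alpha) * (beta - alpha))
    return gamma - sqrt((1 - p) * (gamma - alpha) * (gamma - beta))

def sold_first_quartile(x, alpha, beta, gamma):
    return min(triangular_quantile(0.25, alpha, beta, gamma), x)

# print(sold_first_quartile(18000, 16040, 17550, 19900))   # about 17,247 ton for product A
```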
3.5.1 Modeling the Objective Functions of the Food Additives Plant
For the case study described in Section 3.4.1, the objective functions can now be modeled. The expected value of the final profit satisfies:

$$E_P(x_A, x_B, x_C) = 1534\, E_{z_A}(x_A) + 953\, E_{z_B}(x_B) + 3350\, E_{z_C}(x_C).$$
Figure 3.5 The expected amount of sold products A, B and C for a certain production planning
Figure 3.6 The expected profit for different choices of the production planning
Figure 3.5 shows the functions E_{z_A}(x_A), E_{z_B}(x_B) and E_{z_C}(x_C), respectively. The vertical lines indicate the different parts of the piecewise functions, that is, x_i < α_i, α_i ≤ x_i ≤ β_i, β_i < x_i ≤ γ_i, x_i > γ_i, i ∈ {A, B, C}. Figure 3.6 shows the three-dimensional plot and the contour plot of E_P(x_A, x_B, x_C) with the restriction that the total production time is filled, but not exceeded, that is,

$$x_B = \frac{9015 - 0.24\, x_A - 1.4\, x_C}{0.47} \tag{18}$$

The figures show that there is one production planning that leads to a maximum value for the expected profit E_P(x_A, x_B, x_C). A rough estimate can already be made from the contour plot: in this production planning around 18,200 tons will be planned of product A and around 500 tons of product C, which will leave capacity for about 8400 tons of product B. The second objective function is the first quartile of the total profit, that is,
$$Q_P(x_A, x_B, x_C) = 1534 \min(17247, x_A) + 953 \min(8682, x_B) + 3350 \min(583, x_C).$$
Figure 3.7 shows the three-dimensional plot and the contour plot of Q_P(x_A, x_B, x_C), again with the restriction that the total production time is filled, but not exceeded (Eq. (18)).
These figures also show that there is one production planning that leads to a maximum value for the first quartile Q_P(x_A, x_B, x_C) of the final profit. Again a rough estimate can be made from the contour plot. In this production planning around 17,200 tons will be planned of product A and around 600 tons of product C, which will leave capacity for about 8600 tons of product B.
Figure 3.7 The first quartile of the profit for different choices of the production planning
3.6 Solving the Optimization Problem
The production planning problem is translated into a multicriteria piecewise linear optimization problem. The problem, however, will not be solved as a multicriteria problem, since the objectives are the extremes of one scale, from an uncertain but high profit to a more certain but low profit. For planning management willing to run a higher risk for a higher expected profit, the most profitable planning, corresponding to the maximum expected profit, may be the right choice. For planning management not willing to run any risk, the most robust planning, that is, the one with the highest first quartile, will be a more certain choice, although the expected profit will be much lower in that case. For every nuance of profitability at a certain risk, a production planning in between those two extremes can be found. The optimization problem is as follows: determine y_1, ..., y_n, x_1, ..., x_n, x_{n+1} for which

$$E_P(y_1, \ldots, y_n, x_1, \ldots, x_n, x_{n+1}) = \sum_{i=1}^{n} \sigma_i\, y_i + \sum_{i=1}^{n+1} p_i\, E_{z_i}(x_i)$$

or

$$Q_P(y_1, \ldots, y_n, x_1, \ldots, x_n, x_{n+1}) = \sum_{i=1}^{n} \sigma_i \min(d_i, y_i) + \sum_{i=1}^{n+1} p_i\, Q_{z_i}(x_i)$$

is maximized, subject to

$$T = \sum_{i=1}^{n} t_i\, y_i + \sum_{i=1}^{n+1} t_i\, x_i.$$
Due to the piecewise character of the objective functions, common gradient methods for optimization cannot be used. Therefore a direct-search method is chosen: the simplex method of Nelder and Mead (Nelder 1965). The Nelder-Mead algorithm is mentioned in many textbooks, but very seldom explained in detail, which is why a short description of the working method is given here (Fig. 3.8). If there are n decision variables in the optimization problem, the Nelder-Mead algorithm starts by choosing n + 1 points arbitrarily. For simplicity, assume that there are two decision variables. Then there are three starting points (P1, P2, P3), which together form a triangle. This triangle is called the simplex. If the objective function is to be maximized, the point with the smallest value (P1) is reflected through the middle of the opposite side, in the expectation that this will lead to a better value for the objective function. There are four different possibilities for how the search continues:
1. If the objective value of the new point (P4) lies between the best and the worst values of the other points (P2, P3), then P4 is accepted as a new starting point and the new simplex is (P2, P3, P4) (Fig. 3.8a).
2. If the objective value of the new point (P4) is better than all others, then the point is drawn out even further, twice as far from the reflecting point as P4. The resulting point P5 forms the new simplex with P2 and P3 (Fig. 3.8b).
3. If the objective value of the new point (P4) is worse than all others but better than the original (P1), then a new point P6 is evaluated half as far from the reflecting point as P4 (Fig. 3.8c). Again there are two possibilities:
   a. If P6 is worse than all others, then the whole simplex is shrunk by half towards the best point in the simplex. The new simplex is then (P2, P3', P6') (Fig. 3.8d).
   b. Otherwise, the new simplex is (P2, P3, P6).
4. If the objective value of the new point (P4) is worse than P1, then a new point P7 is defined halfway between the reflecting point and P1 itself. The new simplex will then be (P2, P3, P7) (Fig. 3.8e).
The new simplex is now used as starting point and the same procedure is performed until the best point and the second best point differ by less than a fixed value ε. For both objective functions the Nelder-Mead algorithm can thus be used to find the production planning with the highest expected profit and the production planning with the highest first quartile, that is, the most robust planning. The planners are provided with information about the robustness and the expected profit of the different planning options, on which they can base their final choice. In the next section, the Nelder-Mead algorithm is applied to the objective functions of the case study.
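As an illustration (not the authors' implementation), the search for the maximum expected profit in the case study can be sketched with SciPy's Nelder-Mead routine, eliminating x_B through the production-time constraint and reusing the expected_sold helper sketched in Section 3.5:

```python
# Hedged sketch: Nelder-Mead maximisation of the expected profit of the case
# study. x_B is eliminated via 0.24 x_A + 0.47 x_B + 1.4 x_C = 9015, so the
# simplex search runs over (x_A, x_C) only, as in Fig. 3.9.
from scipy.optimize import minimize

def negative_expected_profit(v):
    x_A, x_C = v
    x_B = (9015 - 0.24 * x_A - 1.4 * x_C) / 0.47
    if min(x_A, x_B, x_C) < 0:           # crude penalty to keep the simplex feasible
        return 1e12
    ep = (1534 * expected_sold(x_A, 16040, 17550, 19900)
          + 953 * expected_sold(x_B, 8350, 8900, 9150)
          + 3350 * expected_sold(x_C, 0, 850, 1600))
    return -ep                           # minimise the negative to maximise E_P

result = minimize(negative_expected_profit, x0=[17000.0, 800.0],
                  method="Nelder-Mead", options={"xatol": 1.0, "fatol": 1.0})
# result.x should end up near (18221, 481); compare Table 3.2
```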
Figure 3.8 The Nelder-Mead algorithm
3.6.1 Solving the Optimization Problem in the Case Study
For the case study the production planning problem is translated into the following optimization problem: determine x_A, x_B, x_C for which

$$E_P(x_A, x_B, x_C) = 1534\, E_{z_A}(x_A) + 953\, E_{z_B}(x_B) + 3350\, E_{z_C}(x_C)$$

or

$$Q_P(x_A, x_B, x_C) = 1534 \min(17247, x_A) + 953 \min(8682, x_B) + 3350 \min(583, x_C)$$

is maximized, subject to

$$x_B = \frac{9015 - 0.24\, x_A - 1.4\, x_C}{0.47}.$$
Figure 3.9 The Nelder-Mead algorithm applied to determine the maximum expected profit
Figure 3.9 shows, for the decision variables x_A and x_C, the contour plot of the expected profit E_P(x_A, x_B, x_C) with the simplices resulting from the Nelder-Mead algorithm. The optimal planning found is the planning in which 18,221 tons for product A, 8445 tons for product B and 481 tons for product C are planned (Table 3.2). The expected profit for this planning equals E_P(18221, 8445, 481) = 3.67 · 10^7 dollars. The first quartile for this planning, that is, the robustness of the planning, equals Q_P(18221, 8445, 481) = 3.61 · 10^7 dollars. The most robust planning, that is, the one with the highest first quartile, is found by applying the Nelder-Mead algorithm to Q_P(x_A, x_B, x_C) = 1534 min(17247, x_A) + 953 min(8682, x_B) + 3350 min(583, x_C), subject to x_B = (9015 − 0.24 x_A − 1.4 x_C)/0.47.
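For reference, the first-quartile objective can be evaluated directly for both plannings of Table 3.2; a minimal check, using the demand quartiles of 17,247, 8682 and 583 ton quoted above, is sketched below.

```python
# Hedged sketch: robustness objective Q_P of the case study for a given planning.
def robust_profit(x_A, x_B, x_C):
    return (1534 * min(17247, x_A)
            + 953 * min(8682, x_B)
            + 3350 * min(583, x_C))

# print(robust_profit(18221, 8445, 481))   # about 3.61e7 $ (planning with maximum expected profit)
# print(robust_profit(17248, 8647, 579))   # about 3.66e7 $ (most robust planning)
```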
The company itself should now decide whether to speculate on a higher expected profit or to prefer a lower but more certain profit. The diagram in Fig. 3.10 below can serve as an informative tool for this decision.
Table 3.2 Results of the optimization

Optimization            E_P(x_A, x_B, x_C)    Q_P(x_A, x_B, x_C)    x_A      x_B      x_C
                        (million dollars)     (million dollars)     (ton)    (ton)    (ton)
Max. expected profit    36.7                  36.1                  18221    8445     481
Max. robustness         36.3                  36.6                  17248    8647     579
Figure 3.10 The profit levels for planned amounts of product C (x-axis: planned new product C in tons)
In the planning with the maximum expected profit, 481 tons were planned for product C. In the planning with the maximum 25% limit of the profit, 579 tons were planned for product C. Figure 3.10 shows, for the interesting region of planned amounts of product C, that is, between 400 and 700 tons, four different profit levels:
- Profit level 1 is the maximum expected profit that can be achieved with the planned amount of product C, assuming that the remaining capacity is optimally divided over the product groups A and B.
- Profit level 2 is the 25% profit limit if the planning with the maximum expected profit (see profit level 1) is implemented.
- Profit level 3 is the maximum 25% profit limit that can be achieved with the planned amount of product C, assuming that the remaining capacity is optimally divided over the product groups A and B.
- Profit level 4 is the expected profit if the planning with the maximum 25% profit limit (see profit level 3) is implemented.
Assume that the company, based on the information from Fig. 3.10, decides to plan 579 tons of product C. This seems to be a profitable but not too risky choice. Compared to the planning with, for example, x_C = 481, the maximum expected profit is a bit lower, but all other profit levels are very high. It will now depend on the choice of the planned amounts of products in the groups A and B whether a higher profit with more uncertainty or a lower profit with less uncertainty can be expected. However, the 25% profit limit gives no information about how the profit is distributed below this limit. To give more information on how low the profit could be, Fig. 3.11 shows, for 579 tons planned for product C, the probability distribution of the profit for different amounts planned for product groups A and B.
From Fig. 3.11 it is clear that the more product A is planned for, the more profit can be expected, but the more uncertainty exists as to whether this profit will be achieved. For instance, if the company decides to plan 18,000 tons for products from product group A, then the maximum profit it can achieve is about US $37.5 million and there is a probability of 60% that this profit will not be achieved. There is even a probability of 25% that the profit will be lower than US $36.3 million.
3.7 Sensitivity Analysis of the Optimization
The planners settle several parameters on which the determination of the optimal planning is based. Some deviation from the expected values will lead, after completion of the production period, to profit results that differ from what was expected. To be able to make a good evaluation of the risks, the sensitivity of the profit and of the optimal planning will be studied for small deviations from the expected values of the following parameters in the objective functions:
1. the profit p_i, i ∈ {1, 2, ..., n, n+1} made on the sold amounts of the current products and the new one during the production period;
2. the minimum α_i, the mode β_i and the maximum γ_i of the demand d_i, i ∈ {1, 2, ..., n, n+1} of all products during the production period.
Figure 3.11 Probability distributions of the profit for different amounts of product A (curves for x_A = 18,500, 17,500, 17,250 and 17,000 ton; x-axis: cumulative probability)
Also the sensitivity of the solution to the parameters in the production time constraint will be studied:
3. the total production time T available per year;
4. the production time t_i, i ∈ {1, 2, ..., n, n+1} that is needed to make one ton of product.

Let (y^OPT, x^OPT) be the optimal planning with respect to the maximum expected profit, and let E_P^OPT = E_P(y^OPT, x^OPT) be the maximum value of the expected profit. The effect on the optimal value E_P^OPT of a small change in, for example, the parameter p_1 is given by ∂E_P^OPT/∂p_1, the absolute sensitivity coefficient of p_1. The same can be done for all other parameters and for both objective functions. The stepwise character of the objective functions is in this case not a problem for the differentiation, since the discontinuities of the functions lie only in the decision variables, not in the parameters. For the parameters, the objective functions are continuous and therefore differentiable. The size of the absolute sensitivity coefficients depends on the scale on which the parameters are measured. To make them comparable they are scaled by p_1/E_P^OPT, which yields the relative sensitivity coefficients, for example (∂E_P^OPT/∂p_1)(p_1/E_P^OPT) for the parameter p_1. The other uncertain parameters influence the only constraint in the problem:

$$\sum_{i=1}^{n} t_i\, y_i + \sum_{i=1}^{n+1} t_i\, x_i = T.$$

Changes in the parameters t_i or in T will cause the optimal planning to be no longer feasible. In practice, a decrease of t_i, i ∈ {1, 2, ..., n, n+1} with respect to the expected values, or an increase of T with respect to its expected value, will not cause any problem. The planned amounts of products can still be made and it will even be possible to make a larger amount of product than was planned, although it is not guaranteed that this extra amount can be sold as well. Information is then needed to determine for which product the extra production time should be used to make as much profit as possible. An increase of the values of t_i, i ∈ {1, 2, ..., n, n+1} or a decrease of the overall production time T, however, will cause the optimal planning to be unachievable, and information is needed on at the expense of which product the reduction of production time should be found to keep the profit as high as possible. In general, Lagrange multipliers can be used to investigate the influence of changes in the right-hand side of the constraint, but due to the piecewise character of the objective functions the Lagrange multiplier cannot be calculated analytically for an arbitrary T. The sensitivity analysis will be illustrated on the basis of the case study from Section 3.6.1.
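A small numerical illustration of such a relative sensitivity coefficient is sketched below: a central finite difference around the expected value of p_A, reusing the expected_sold helper from Section 3.5, for the planning chosen in Section 3.7.1. The function name and step size are assumptions.

```python
# Hedged sketch: relative sensitivity of the expected profit with respect to the
# per-ton margin p_A = 1534 $/ton, for the fixed planning (18000, 8265, 579) ton.
def expected_profit_for(p_A, planning=(18000.0, 8265.0, 579.0)):
    x_A, x_B, x_C = planning
    return (p_A * expected_sold(x_A, 16040, 17550, 19900)
            + 953 * expected_sold(x_B, 8350, 8900, 9150)
            + 3350 * expected_sold(x_C, 0, 850, 1600))

p_A, h = 1534.0, 1.0
dEP_dpA = (expected_profit_for(p_A + h) - expected_profit_for(p_A - h)) / (2 * h)
relative_sensitivity = dEP_dpA * p_A / expected_profit_for(p_A)
print(relative_sensitivity)   # about 0.74, cf. Table 3.3
```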
3.7.1 Sensitivity Analysis in the Case Study
Assume that the company, based on the information given in Figs. 3.10 and 3.11, has decided to plan 18,000 tons for product group A, 8265 tons for product group B and 579 tons of product C. The expected profit for this planning equals US $36.6 million and there is a probability of 25% that the real profit is lower than US $36.3 million. To study the sensitivity of the results for small changes in the profit parameters p_A, p_B, p_C and the demand parameters α_A, β_A, γ_A, α_B, β_B, γ_B, α_C, β_C, γ_C, the relative sensitivity coefficients are calculated in the neighbourhood of the expected values of these parameters, as presented in Table 3.3.

Table 3.3 Sensitivity of the profit for different planning parameters

                                Expected profit              25% probability limit
Parameter    Expected value     Abs. sens.    Rel. sens.     Abs. sens.    Rel. sens.
p_A          1534 $ per ton     17,578        0.74           17,247        0.73
p_B           953 $ per ton      8265         0.22            8265         0.22
p_C          3350 $ per ton       531         0.05             579         0.05
α_A          16,040 ton           411         0.18               0         0
β_A          17,550 ton           347         0.17               0         0
γ_A          19,900 ton           166         0.09               0         0
α_B            8350 ton             0         0                  0         0
β_B            8900 ton             0         0                  0         0
γ_B            9150 ton             0         0                  0         0
α_C               0 ton           539         0                  0         0
β_C             850 ton           188         0.00               0         0
γ_C            1600 ton           100         0.00               0         0
The expected profit and the first quartile of the profit are most sensitive to the profit that can be made with the current product A, due to the high amounts of this product planned to be made and sold according to expectations. Furthermore, the profit is sensitive to changes in the profit that can be made with one ton of product B, and to changes in the demand of product A. Small changes in the other parameters have hardly any influence on the profit that can be made with the chosen planning. A change in the total production time will also influence the profit that can be achieved with the implementation of the chosen planning. Figure 3.12 shows the change in expected profit, respectively 25% profit limit, if the increase or decrease in production time T is totally covered by an increase, respectively a decrease, in the planned amounts of product group A, B or product C. Figure 3.12 shows clearly that a decrease in the total production time should never be covered at the expense of product group A, but should be found in a smaller amount of product group B or product C. On the other hand, when the production time is higher than expected, the extra time should be used to produce more of product C, although the differences are not so large.
Figure 3.12 The changes in profit for a smaller or larger production time (x-axis: change in T in hours)
The robustness of the planning will not change for a small decrease in total production time if less of product A is made. However, a large decrease should go at the expense of the other products. An increase of the production time should likewise be used to make more of product group B or product C. Figure 3.13 shows the change in expected profit, respectively 25% profit limit, if the increase or decrease in the time t_A needed to produce one ton of product A is totally covered by an increase, respectively a decrease, in the planned amounts of product group A, B or product C. Figure 3.13 shows that an increase of the production time needed to make one ton of product A can best be covered by making less of product group B or product C, although to keep the same robustness it is better to make less of product A. For a decrease, it is best to make more of product C. An increase or decrease of the production time needed to make one ton of product group B or of product C gives more or less the same results as for product group A.
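A small sketch of how one of the curves in Fig. 3.12 could be generated is given below, assuming the extra or missing hours are absorbed entirely by product group B and reusing the expected_sold helper from Section 3.5; the function name and the choice of which product absorbs the change are assumptions.

```python
# Hedged sketch: expected profit when the total production time changes by
# delta_T hours and the slack is taken up by product group B alone
# (0.47 h/ton for B), around the planning (18000, 8265, 579) ton.
def expected_profit_with_T_change(delta_T, x_A=18000.0, x_B=8265.0, x_C=579.0):
    x_B_new = x_B + delta_T / 0.47
    return (1534 * expected_sold(x_A, 16040, 17550, 19900)
            + 953 * expected_sold(x_B_new, 8350, 8900, 9150)
            + 3350 * expected_sold(x_C, 0, 850, 1600))

# e.g. the expected-profit change for 100 extra hours:
# expected_profit_with_T_change(100) - expected_profit_with_T_change(0)
```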
Figure 3.13 The changes in profit for a smaller or larger time per ton for product A (x-axis: change in t_A in hours/ton)
3.8 Implementation of the Optimization of the Production Planning
In the development of the method the focus was on the practical use for the planners in a multiproduct plant. The users of the method should not be bothered with the mathematical background of the method, which from their point of view can be considered as a black box. The emphasis should be on the information that has to be acquired from them as an input for the method, and on the results obtained from this information, to be presented in a comprehensible and useful way. Knowledge of the transformation from the input into the output can increase the confidence in the results, but is not required to be able to use the planning method (see Fig. 3.14). The information from the planners, needed as an input for the method, should consist of:
Figure 3.14 Implementation of the planning method
- the profits made on the current products sold in annual contracts, in dollars per ton;
- the profits made on sold amounts of the current products and of new ones during the production period, in dollars per ton;
- the total production time available, in hours per year;
- the production times needed to make the products, in hours per ton;
- the demand for the current products that can be sold in the annual contracts;
- the demand for the current products and for the new ones during the production period, in ton per year, described by:
  - the minimum demand in ton per year;
  - the mode of the demand in ton per year;
  - the maximum demand in ton per year.
Table 3.1 is an example of such input information. The results for the planners, produced by the planning method, consist of:
- results of the optimization, showing the optimal planning with respect to the maximum expected profit and the optimal planning with respect to the maximum 25% profit limit (Table 3.2);
- the profit levels for planned amounts of new products, showing the maximum expected profit and its corresponding 25% profit limit, and the maximum 25% limit and its corresponding expected profit, for different choices of free capacity for the new product(s); if more than one new product is taken into consideration, the aggregate free capacity for these new products is shown on the x-axis (Fig. 3.10);
- probability distributions of the profit for different planned amounts of the products, showing the total probability distribution of a certain planning; for practical use, it should be easy to change the chosen amounts for all products to assess the effect of the changes on the profit distribution (Fig. 3.11);
- sensitivity of the profit for the different planning parameters, showing for a chosen planning which parameters really influence the profit and therefore require a good estimation of the expected value (Table 3.3);
- the changes in profit for a smaller or larger production time (Fig. 3.12);
- the changes in profit for a smaller or larger time per ton for the products, showing which adaptation to the planning should be made if the real values of the production times differ from the expected ones (Fig. 3.13).
When the results are presented in such a way that the planners have full insight into the consequences of a chosen planning, the method will serve as a valuable decision support tool.
3.9 Conclusions and Final Remarks
A production planning method has been presented for a multiproduct manufacturing plant, which optimizes the profit under uncertainties in product demands. In the method these uncertainties are modeled by means of a simple triangular probability distribution, which is easy to specify. The optimization goal can be formulated either as a maximum expected profit or as a robust profit (the first quartile of the profit) to lower the risk. Due to discontinuities in the probability distribution function of the sold products, a direct-search optimization technique, Nelder-Mead, must be applied rather than a gradient-based optimization. The development and the application of the method have been illustrated by means of a case study taken from a food additives plant. The method is considered practical because the required input data for the demand and process models and for the profit function are easy for the users of the method to obtain, while the output information facilitates the interpretation of the sensitivities of the optimized production planning in terms of common economic and product demand specification parameters. The method should be accessible to plant production management rather than only to planning experts specialized in operations research.
References

Bansal, V., Perkins, J. D., Pistikopoulos, E. N. Flexibility analysis and design of dynamic processes with stochastic parameters. Comp. Chem. Eng. 22(Suppl.) (1998) p. S817-S820
Bengtsson, J. Manufacturing flexibility and real options: a review. Int. J. Prod. Ec. 74 (2001) p. 213-224
Berry, W. L., Cooper, M. C. Manufacturing flexibility: methods for measuring the impact of product variety on performance in process industries. J. Op. Mgmt. 17 (1999) p. 163-178
Corrêa, H. L. Managing unplanned change in the automotive industry. Dissertation, University of São Paulo / Warwick Business School, Avebury, Aldershot 1994
Nelder, J. A., Mead, R. A simplex method for function minimization. Comput. J. 7 (1965) p. 308-313
Papoulis, A. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, Boston 1965
Swaney, R. E., Grossmann, I. E. An index for operational flexibility in chemical process design. AIChE J. 31(4) (1985) p. 621-641
Indices
Authors' Index

Abildskov, Jens V-1*
Alva-Argaez, Alberto II-2
Belaud, Jean-Pierre IV-5
Bogle, I. David L. II-4
Braunschweig, Bertrand IV-5
Cameron, Ian T. I-6, IV-2
Dua, Vivek III-5
Engell, Sebastian III-4
Espuña, Antonio IV-3
Fernholz, Gregor III-4
Buzzi-Ferraris, Guido I-1
Gani, Rafiqul IV-1, V-1
Gao, Weihua III-4
Georgiadis, Michael C. I-4, II-1, III-1
Gerbaud, Vincent I-3
Gernaey, Krist V. I-7
Grievink, Johan V-3
Hangos, Katalin M. I-5, I-6
Heijnen, Petra V-3
Heyen, Georges III-3, V-2
Ingram, Gordon D. I-6
Jørgensen, Sten Bay I-2, I-7
Joulia, Xavier I-3
Kalitventzeff, Boris II-3, III-3, V-2
Kikkinides, Eustathios S. I-4
Kokossis, Antonis II-2
Kostoglou, Margaritis I-4
Kraslawski, Andrzej II-5
Lakner, Rozália I-5
Lim, Young-il I-2
Lind, Morten I-7
Linke, Patrick II-2
Manca, Davide I-1
Maréchal, François II-3
Mateus, Miguel V-2
Newell, Robert B. IV-2
Papageorgiou, Lazaros G. III-7
Perkins, John D. III-5
Pistikopoulos, Efstratios N. II-1, III-5
Proios, Petros II-1
Puigjaner, Luis III-6, IV-3
Romero, Javier III-6
Sass, Richard IV-4
Shah, Nilay III-2
Toumi, Abdelaziz III-4
Tsiakis, Panagiotis III-1
Ydstie, B. Erik II-4

*(Section-Chapter)
Subject Index a abstraction 226 f acceptance criterion 117 f f accuracy - data validation 801 ff, 808 f - distributed dynamic models 38 ff - simultaneous integration 207 - stencil methods 43, 47 acetic acidlmethyl acetate process 544 acetone-chloroform mixtures 785 ff acetone-cyclohexane system 127 acetone-water system 129 achieve relation 255 acidification 71 1 acoustic databases 734 action-instrumental artisan model 249 active pharmaceutical ingredients (API) 405, 434, 649f
active tablet components 435 activity coefficients 125 ff, 781 activity-based costs (ABC) 464 adaptive grids method 40 adaptive mesh refinement (AMR) 35 ff, 68 ff, 74 ff
adaptive stencil methods 42,45 ff adaptive supply chains capabilities 702 adsorption - batch chromatography 552 - distributed dynamic models 75 - gas separation 3, 11, 138-147 - model-based control 567 - separation systems 137 ff advanced numerical methods 68 advanced planning and scheduling systems (APS) 476, 622, 700 advanced process combinatorics 473 Advanced Process Control (APC) 820 advection 99 advisory system 612, 615 f aeroderivative duty gas turbines 336 affine functions 583 f agent-based supply chain management 632, 697 K 702 ff
agglomeration 75, 87, 90ff aggregate time period (ATP) 455 aggregation - decomposition techniques 447,454 f f - distillation 278, 287 - granulation process 191 agitated reactors 307 agrochemicals 649 air conditioning 328 ff air preheating 355 Alberta Taciuk Processor (ATP) 684 alcohols 649 algebraic systems 2, 15-34, 175, 201 algorithm implementation 530 alkane groups 127ff allocation - product scheduling 488 ff - resource planning 448, 457, 460 - supply-chain management 624, 637, 708 allolactose, lac operon 231 Altshuller method (TRIZ) 428 aluminum process, carbothermic 399 amino acid chains 228 ammonia synthesis 534 ff, 812 ff analogies methods 428 Anderson molecular modeling 117 Andrecovich- Westerberg model 273 anisotropic united atoms force fields 123 Apache standards 752 application programming interface (API) standards 752 applications 174, 485, 771 -854 - chemical product-process design 659 - distributed dynamic models 75 ff - embedded integration framework 210 - flexible recipe model 612 - frameworks 209 - 214 - life cycle modeling 681 ff - molecular modeling 125 ff - multiscale processes 203 - product scheduling 506 ff - simultaneous integration 207 r-arabinose ( A M ) regulatory networks 242
arc length 72 ARMAX package 679 aromatics groups 130 Arrhenius temperature dependence 79 ff, 85 Arrhenius-type kinetic constants 792 artisan model of instrumental action 249 ARXmodel 567 Aspen Custom Modeler (ACM) 545 Aspen packages 473, 638 AspenPlus - educational modules 775 - equipment design 390 - life cycle modeling 680, 691 - multiscale process modeling 214 asphalt production 457 assumption retrieval 181, 678 asymmetric traveling salesman problem 490 asymptotic solutions 162 asynchronous agent-based team 507 atmospheric distillation unit (CDU) 323 attainable region method 306 AUA4 force fields group 131f auction method, iterative 636 augmented Lagrangean relaxation 466 ff, 470 auto-associated software agents 697 autocorrelation coefficients 116 automatic differentiation 39 f automation OPC standards 755 autonomic chemical plants 223 autonomous agents 702 auxiliary functions 24, 31, 159 average values - data reconciliation 521 f - molecular modeling 112 ff - supply-chain inventory 634f Avogadro number 126 azeotropes - distillation 278ff - educational modules 782 ff - methyl acetate process 544
b backdiffusion effects 142 backward allocation 709 backward differentiation formula (BDF) 36 ff, 57 ff balance composite curves 345 f balance equations 172- 182 - chemical product-process design 659 - educational modules 778 - equipment design 388 - multiscale process modeling 198 - process monitoring 520 balance volumes 172 ff, 180ff - multiscale process modeling 198 - partial models 201
Bardenpho process 314 batch chromatography 552 ff, 562-568 batch correction procedure 611 batch crystallization 159 batch processes 498, 591 -620 batch production 448,458ff batch reactor model equations 406 batch scheduling 673 batch splitting 487 Bayesian networks 238, 594 beads model 122 benzene 388, 785 benzyl alcohol production 606 beta-factor 566 beta-galactoside permease 231, 256 f bibliographical therrnophysical databases 735 bid selection problem 466 bifurcations 237 bi-Iangmuir function 556ff bill of materials (BOM) 698 binary interaction parameters 127 binder - granulation 196 - tablet formulation 435 biochemical processes (BioPro) 75 ff, 408 biochemical product-process design 656 biogas outlet temperatures 355 biological regulatory framework 250 biological systems/biomolecules 224ff Biot number 80 biotransformation processes 224 ff black-box model - flexible recipes 606 - life cycles 676 - multiscale processes 199 blending - data validation 827+ - product development 432 - product scheduling 484 - resource planning 448, 456 block grid structures 69 blow-down steps 140, 148 boilers 337, 370ff boiling point 649, 734 Boltzmann factor 114, 127 bond molecular models 122 Boolean matrix 29 f Boolean networks 238 Boolean variables 277, 290f bottom-up approaches 194, 197f boundary conditions 172 ff - distillation 281 - distributed dynamic models 36, 54, 66 ff, 77 - gas separation 147 - meshrefinement 70 - molecular modeling 108 ff
Subject Index I 8 5 9
- mdtiscale processes 198 - partial models 201 - separation systems 139 f - thermophysical databases 734 bounds 30 bovine somatropin (BST) 409 Box- Jenkins model 679 brainstorming product development 427 branch and bound technique - distillation 282 - hybrid processes 594 - product scheduling 495 ff - resource planning 467 ff - supply-chain management 637 branch-and-cut enumeration 488 Brdyi model 557 ff breakage - crystallization processes 162 - distributed dynamic models 75, 87, 90 ff brewery utility management 689 bromopropyl compound amination 405 Brownian motion 108 Broyden condition 28 f, 559 bubble columns 307 bubble curve, hexane/cyclohexane system 128 Buckingham potential 123 budgetinghorizons 714 buffer times 499 building blocks - chemical product-processes 660 - distillation 277 building transformations 177 bulk chemicals 648 bullwhip effect 699 Burger’s equation 73 business strategies 421 butanoic acid chlorination 308 butyronitrile 131 Buzzi-Ferraris property 19, 25, 29 bypassing strategies 305 byproducts 297, 327
C
Caballero-Grossmann model 277 Calderbanl- Moo-Young correlation 85 calibration 182 ff calibration - data validation 801 - equipment design 391 - life cycle modeling 676 - multiscale process modeling 194 caloric databases 734 campaign production 448,458ff, 493 f Cannizarro reaction 606 canonical models 179 canonical NVT 113
capacities - chemical product-processes 657 - cost-effectivemanufacturing 831 ff - planning 541 - product scheduling 489, 504 - resource planning 450 CAPEC database 780 CAPE-OPEN - standards 750ff - life cycle modeling 680 f, 691 - multiscale process modeling 214 - resource planning 476 carbon catabolite repression 233 f, 256 ff carbon dioxide balance 96 ff carbon ratio 806 ff carbothermic reactors 399 f carboxyl groups 130 carnosic acid manufacture 652 Carnot efficiency 342 cascade model 331 case studies - equipment design 393 ff - life cycle modeling 681 ff - cost-effective manufacturing 834 ff, 844 ff case-based reasoning systems (CBR) 423,434 ff cash flow 695, 712 catabolite repression 233 ff, 256 ff catalysts - ammonia formation 814 - design 434, 655 - educational modules 791 - flexible recipe model 601 - intensification 300 - life cycle modeling 673 - utility systems 328 catalytic slurry bed 84 cause effect analysis 199 cell compartments 237 ff cell division 75 cell number variation 97 cell population - biochemical process design 409 - distributed dynamic models 75, 92 ff cell state control 224, 244 central agents 705 f, 727 central dogma of biology 228 f centrifuges 411 f chain rule, distributed dynamic models 40, 63 Chang method 61 ff, 65 ff chaos properties, regulatory networks 237 character string pattern 531 charge conservation 384 ChemCad modules 775 Chemical Abstracts Service (CAS) 419 chemical engineering, thermophysical databases 733 - 748
860
I
Subject Index
chemical engineering plant cost index (CEPCI) 335 f chemical equilibria 36, 649, 814 chemical industrial plants 327 ff chemical phenomena 301 f, 422 chemical potential 384 chemical product-process design, integrated 75ff, 647-668 chemical vapor deposition (CVD) 195, 203 chemistry - computer-theoretical 108ff - intensification 302 - quantum models 121 ff chemistry WebBook, thermophysical databases 736 ff CHEMKIN software 214 chi-square test 528, 809 f, 817 f chlorination, butanoic acid 308 chloroform 649 chloroform-acetone mixture separation 785 ff chocolate couverture, house of quality 426 Cholesky algorithm 22 CHP schemes 349 chromatographic separation 6, 552 chromatography 35,75 ff chromosomes 532 classification - integration methods 204 - life cycles 679 - linking frameworks 205 f - multiscale process modeling 199 - partial models 198ff cleaning 328 client agents 705, 722 clinical trial outcomes 462 closed-loopcontrol 542 coagulation 150 ff,155 ff coal outlet temperatures 355 coalescence 196 concurrent beds 307 COGents standards 766 COLaN standards 751 ff cold streams 329 ff,334 ff, 370 collision frequency 153 collocation method - crystallization processes 155 - distributed dynamic models 52 - intensification 315 columns - batch chromatography 552 - distillation 270-295 - intensification 307 - methyl acetate process 544 COM standards 754, 758 COMBO system 598 combustion 328, 351 ff, 686
commissioning 669 ff common grid approach 490 communicative actions 250 compensated disturbances 612, 616 complementary slackness 582 complex column sequences 285-296 complex multiphase reactor 393 ff complex separation systems 137- 170 complexity - batch chromatography 552 - chemical product-process design 656 - product scheduling 489 - regulatory networks 229ff, 236ff component integration 7 component-based hierarchical explorative pro. cess simulator (CHEOPS) 214 components 171 ff composite curves 331 compositions 171 compositions - multiscale process modeling 197 - utility systems 345 f, 371 ff, 375 compound balances 811 ff, 827 compound selection 660 compressed gases 328 compression - ammonia synthesis 536 - refrigeration cycles 359 computer theoretical chemistry 108 ff computer-aided educational modules 9 computer-aided equipment/process design 383-418 computer-aided flow sheet design (CAFD) 662 computer-aided integration, utility systems 327-382 computer-aided intensification methods 297- 326 computer-aided mixture-blend design (CAMbD) 649 ff, 653 ff computer-aided modeling/simulation 11-264 computer-aided molecular design (CAMD) 422, 649 ff, 653 ff computer-aided process modeling (CAPM) 1ff, 181 computer-aided process operation 443 -642 computer-aided process/product design 1ff, 265-442 computer-aided production engineering (CAPE) 1, 5, 643-770 - see also: CAPE, CAPE-OPEN computer-integrated manufacturing (CIM) 1ff computational fluid dynamics (CFD) 11, 35, 98-106 - equipment design 383 - life cycle modeling 673 - parallel integration framework 212 - simultaneous integration 207
Subject Index I861
concentration dynamics, regulatory networks 233 concentration profiles - ammonia formation 815 - model-based control 568 conceptual design - life cydes (Clip) 669, 676 ff, 691 - multiscale process modeling 203 - shale oil processing 682 condensers - complex multiphase reactor 394 - data validation 812 - distillation 283, 291 condition number 24 conditioning 519 conductive media simulation 401 configuration - interaction methods 121 ff - nonazeotropic mixtures 286 conjoint analysis 424 connection relation 255 conservation balances 176ff conservation element/solution element (CE/SE) method 39, 57, 61 ff conservation laws - distributed dynamic models 43, 62 - equipment design 384, 388 ff consistency 183 constitutive equations 173-182 - multiscale process modeling 191ff, 202 ff constraint logic programming (CLP) - product scheduling 500 ff - resource planning 469 - supply-chain management 627 constraint propagation methods 30 - chemical product-process design 659 - cost-effective manufacturing 831 f - data validation 802 - educational modules 778 - flexible recipe model 613 - hybrid processes 594, 601 ff - model-based control 558ff - process monitoring 520ff, 529ff - product scheduling 483, 489f, 496f - real-time optimization 581 ff - resource planning 456 - supply chain management 714 - utility systems 350, 365 f construction projects 465 ff consumer requirements 5, 424 contaminant profiles 322ff, 375 ff continuation methods 31 f continuity, boundaries 54 continuous multiscale process modeling 199 continuous time discretization 490, 497 continuous time representation 594
control hybrid processes 599 - model-based 541 -576 - multiscale process modeling 194 - resource planning 453 - transcriptional regulation 235 f controllability - complex multiphase reactor 397 - dynamical properties 185 - life cycles 676 - product development 431 convection - cell population dynamics 93 - complex multiphase reactor 401 - distributed dynamic models 35, 43, 59, 75 convergence 17 f, 25 - material flows 448 - real-time optimization 589 - separation systems 142 conversion systems 329 ff convex hull disjunction formulation 277, 284 f cooling requirements 330, 370 coordinate transformation 177 CORBA-IIOP standards 750, 754f Corporate Web site management 764 correction model - distributed dynamic models 60 - flexible recipes 610 correlations - molecular modeling 114 ff - multiscale process modeling 204 - utility systems 335 cosmetics 438 costs - chemical product-process design 648 f - correlations 335 - data validation 810 - effective manufacturing 829-854 - flexible recipe model 603 ff - model-based control 548, 566 - nonlinear functions 361 - process monitoring 537 - real-time optimization 580 - resource planning 450 - supply-chain management 623 - utility systems 334ff Coulombic interactions 109, 121 f, 124f counter currents - carbothermic aluminum process 400 - utility systems 330 - intensification 307 - separation systems 142 coupling - multiscale processmodeling 189 - regulatory networks 244 Courant- Friedrichs - L e y (CFL) number 59
-
862
I
Subject lndex
covariance matrix 521 CPLEX solver 469 CPU conditions 28 CPU time - distributed dynamic models 51 - water systems 324 cradle-to-the-graveprocess 667 Crank- Nicolson central difference scheme 61 creative templates 430 critical region 579 ff, 588 ff crude desalting 323 crude oil distillation - datavalidation 819 - inventory management 457 - real-time optimization 580 crystal fragmentation 150ff crystal growth - distributed dynamic models 75, 87 ff - separation systems 150 ff crystal size distribution (CSD) 87, 152 crystallization 13, 159 - chemical product-process design 653 - distributed dynamic models 35 - separation systems 149ff customers demands - cost-effective manufacturing 829 - product development 430 - supply-chain management 621, 630f customers service level (CSL) 622, 631, 635 cutoff distance 124 cycle periods, batch chromatography 552 cyclic adenosine monophosphate (CAMP) 234 cyclic material flows 448 cyclic steady state (CSS) - model-based control 565, 570 - separation systems 139
d Daesim dynamics 680 Damkohler number 80,85, 387 Danckwert's boundary condition 36ff, 66, 77, 86 D AM standards 766 data analysis - chemical product-process design 660 - life cycle modeling 681 - multiscale process modeling 194 - product development 423,438 ff data handbooks 735 ff data quality 182 data reconciliation 6, 517-540, 802ff, 81Off data validation 801-828 data validation - computer-aided integration 329 - process monitoring 519 ff, 524 f - utility systems 330
databases - educational modules 779 - thermophysical properlies 733- 748 Datacon package 527 debottle-necking 303,674 decay rates, regulatory networks 240 decentralized decision making 704 DECHEMA databases 735 ff, 744ff, 753 decommissioning 669 ff decomposition techniques - fluid bed reactor 414 - life cycle modeling 675 - product scheduling 495 ff - resource planning 447, 454 ff, 469 ff - structural 184 decoupling frameworks 205 f default values 181 definition phase, product development 421 ff deformation rate, granulation 196 degradation 502, 816 delay differential equations (DDEs) 231 f delta function 164 demand data 451 - cost-effective manufacturing 829-854 - expected 833 - manager agent system 723 - product scheduling 484 - supply-chain management 621, 626 - water usage 376 demonstration, multiagent system 726 dense overlapping regions (DOR) 243 densities, thermophysical databases 734 density functional theory (DFT) 121ff, 130f desalter 323 design - chemical product-processes 648, 657 - complex multiphase reactor 399 f - documentation standards 764 - equipment 390 - life cycle modeling 669 - product development 421,424ff, 431 - supply-chain management 624 - utility systems 367f design institute for physical property data (DIPPR) 735, 739ff desorption 139147 deterministic methods - life cycle modeling 674, 679 - multiscale process modeling 199 - product scheduling 505 - resource planning 469 - supply chain management 698 f DETHERM thermophysical databases 736, 762 diabatic distillation column 394 DICOPT MINLP solver 276 Diesel engines 336, 343
Subject lndex I863
diethyl ether 649 difference schemes 60 differential equations - partial models 201 - regulatory networks 239 ff differential index 177, 184 f differential-algebraicequation (DAE) 2, 13- 34, 171ff - educational modules 778 - life cycle modeling 680 - model-based control 545 ff diffusion - computational fluid dynamics 80, 99 - distributed dynamic models 35 f, 75 - gas separation 145 - intensification 311 diffusion coefficients - distributed dynamic models 59 - molecular modeling 114 - thermophysical databases 734 dilution loss 93 DIMA block standards 765 dimension analysis 183 dipolar interactions 121 Dirac delta function 164 Dirichlet’s boundary condition 36 discontinuities 30 discretization - crystallization processes 156ff - distributed dynamic models 35 ff - fixed-bed reactors 83 - hybrid processes 591 ff - multiscale process modeling 199 - partial differential equations 55 - time grids 488, 490 ff, 494 ff discrimination process models 171- 188 disintegrant 435 disjunctions 277 dispersion - crystallization processes 155, 161 - distributed dynamic models 36 - shale oil processing 684 displacements 119 dissipation 59 distance, linear systems 18 distillation 4, 17, 27, 32 - chloroform-acetone mixture 787 ff - columns 17f, 27, 32 - data validation 811, 818f - equipment design 388 ff, 393 ff - intensification 297, 310 - model-based control 543 ff - molecular modeling 109 - process synthesis 269- 296 - separation systems 138ff - solvents 787
distributed dynamic models 35- 106 distributed parameter multiscale process modeling 199 distribution centres 623 f, 696ff, 717 disturbances - feed compositions 397 - flexible recipe corrections 610ff, 616 - methyl acetate process 550 - model-based control 543 f divergent material flows 448 divided differences method 45 dividing wall column 286 DNA replication 224, 228 ff Documentum package 679, 692 domain relationship 206 Dorfi-Drury method 71 Dortmunder database (DDB) 736ff, 745 drum design 196 Dufort-Frankel method 59,67 duty gas turbines 336 dyes 437 dynamic data reconciliation 517, 524ff dynamic models - environment 696 - life cycles 678 - plants 541ff dynamic population balance 75, 87, 92 dynamic simulation - equipment design 390 - process operation scheduling 501 dynamical properties 185
e Eastman process 297 f echelons supply chain 699 ff, 708 f Eclipse standards 500, 752 eco-label (ecological card) 710 e-commerce 697. 750 economic lot scheduling 491 economic models - life cycle modeling 672 - supply chain management 700 eco-toxicological impact 711 edge effects 110 educational modules 9, 775 -800 effect modeling and optimization (EMO) 349 ff efficiency - data validation 809 - utility systems 335, 362 effluents 328 eigenvalues 38 ELDAR, thermophysical databases 738, 743 ff electrode heating 403 electrodeposition 203 electrolyte solution data bases 742 ff electrostatic interactions 123, 131
864
I
Subject Index
elementary functions, regulatory networks 251 f embedded integration framework 210 embedding relations 252 emergency response 673 emissions - utility systems 328 - water use 374 empirical model building 181, 199, 674 encapsulation 653 energetic-interaction-relatedphenomena 109 energy balances 173, 180ff energy balances - chemical product-process design 654 - distributed dynamic models 36 - educational modules 793 - equipment design 384 - process monitoring 535 energy consumption - datavalidation 810 - distillation 270 - formic acid plant 825 - utility systems 328-382 energy dissipation 150 f energy estandards 753, 757 ff energy recovery 330 energy recycling 80 engineering - life cycle modeling 669 ff - shale oil processing 684 enterprise content management (ECM) 692 enterprise resource planning (ERP) 463 ff, 472 ff enthalpy - balance equations 18 - thermophysical databases 734 - utility systems 340 ff entrainer selection 281 entropy - complex multiphase reactor 395 - equipment design 384 ff - thermophysical databases 734 enumeration - distillation 271 - product scheduling 488 f - resource planning 459, 467ff environmental impact - chemical product-process design 649 - life cycles 668, 672, 680 - manager agent module 707, 725 - shale oil processing 684 - supply chain management 696, 710, 727 environment-health -safety (EHS) 785 enzyme activity 568 equation construction procedure 172 equation partitioning 795
equation weights 18 f equation-oriented packages 392 equidistribution principle 71 equilibrium - adsorption 144 - chemical product-process design 649 - data validation 814 - ethanol-water mixture 782 - thermodynamics 384ff equipartition of entropy production (EDF) 389, 395 f equipment - costs 335ff - design 383-418 - failure 505, 517 - process intensification 299 ff errors - adaptive mesh refinement 68 - covariancematrix 521 - data validation 801 ff, 805 ff - distributed dynamic models 37, 51 - model-based control 560, 564f - molecular modeling 115 - multiscale process modeling 216 - process monitoring 518 ff - UNIQUACmodel 131 Escherichia coli - biochemical process design 409 - regulatory networks 224-235, 242ff essentially nonoscillatory (ENO) schemes 42-47, 57ff, 71ff esterification, methyl acetate process 544 ethanol oxidation 94 ethanol-water system 129, 782 ethylbenzene separation 811 ethylene dichloride (EDC) fraction 398 ethylene glycol production 312 Euclidean norm 19, 25 eukaryotes 237 Euler central difference method 60 ff Euler discretization 156f Euler equation 100 EURECHA Web site 798 European Symposium on Computer-Aided Process Engineering (ESCAPE) 763 eutrophication 711 event operation network (EON) 595 ff event tree 673, 679 evolutionary algorithms 487, 597 Ewald summation 124 exact solution approaches 469 exchange rates 623 exergy analysis 341 ff exergy losses 362 ff, 395 exhaust gas flow 98 exothermic behavior 146
Subject Index I865
experimental design - model-based control 569 - product development 422, 432 ff experts systems 778 explicit time discretization 58, 60 f extended Kalman filter 525 extensible markup language (XML) 750, 757 extension conditions 181 extensive quantitives 174, 387 extracellular glucose 229 extracts, separation systems 140 f
f Factory Planner software 473 FACT thermophysical databases 736 factorization 24 failure - methyl acetate process 550 - process monitoring 517 - product scheduling 505 fault diagnostics 251, 673, 678f feasibility studies 520, 531 feed compositions 270, 397, 680 feed flow rates 275, 282 feedback control 541 - 576 feedforward loop 243 feeding strategies 305 FEMLAB multiphysics module 401 Fenske-Gilliland- Underwood short cut model 283f fentanyl analysis 779 ff fermentation 318, 408 field molecular orbital concept 122 fill rate maximization 624 fillers 435 filtering 519 financial analysis models 673, 69G financial module 711, 725 finishing 651 finite difference methods (FDM) - batch chromatography 563 - distributed dynamic models 41, 54, 86 finite element methods (FEM) - complex multiphase reactor 401 - crystallization processes 154 - distributed dynamic models 41, 51 ff - life cycles 680 - stress modeling 686 finite intermediate storage (FIS) - hybrid processes 593 - product scheduling 485 finite volume method (FVM) 41, 55 ff first principals model 234 Fischer-Tropsch synthesis 83 fitness function 533 fixed-bed gas separation 143f
fixed-bed reactors 35, 75, 79ff fixed-grid method 40 fied-point homotopy 31 fixed-stencil approximation 42 ff flexible cost-effectivemanufacturing 829-854 flexible environment modeling 594 ff flexible recipe model 597-613 Floudas optimization 274, 305 f flow rates - batch chromatography 552 - complex multiphase reactor 396 - distributed dynamic models 49 - real-time optimization 580 - refrigeration cycles 359 - regulatory networks 256 - resource planning 448 - utility systems 340ff flow sheets - chemical product-process design 651, 661 - distillation 284 - educational modules 778 - life cycle modeling 672 - shale oil processing 683 fluctuations, molecular 108, 113 ff flue-gas flow 352 FLUENT software 214 fluid bed reactor 413 ff fluid concentration propagation 78 flux terms, gas separation 145 food additives plant 834 ff force equipartition 389 force fields l l O f f , 114ff, 123ff forecast management 696, 703 ff forecasting module (FOREST) 714 forecasting techniques 830 formalization, regulatory networks 251 formic acid plant 825 formulation - process monitoring 520 - product development 432 forward allocation 709 Fourier transform 114 fractionation 686 fragmentation lSOff, 160 ff frameworks - integrated chemical product-process design 658 - multiscale process modeling 205 ff, 215 f - regulatory networks 249 free software standards 752 freedom degrees 157, 183 f frequency response approximation 548 freshwater consumption 320 fmit cooperative 714 fuel additives 436 fuel cells 338
866
I
Subject
Index
fuel consumption 334 ff,352 f fuel oil production 457 fuel outlet temperatures 355 fully discretized methods 35 ff, 58 fully thermally coupled column sequences 288 functional analysis 251 f functional B-splines 163 functional equivalence 172 functional genomics 224, 236f functionalities 599 function- property-composition relations 420 ff Furzeland method 71 fuzzy modeling - cost-effective manufacturing 833 - resource planning 452 - supply-chain management 637
g
gain matrix 558, 562 galactoside permease 231 Galerkin residuals 155 ff, 163 gamma distribution 165 gamma-phi equilibrium model 782 Gantt charts 447, 608 gas concentration factor 85 gas constant 95 gas engines 336 gas permeation 314 gas phase, Fischer-Tropsch synthesis 84 gas processes, eStandards 757 f gas separation 3, 11, 138-148 gas turbines 336, 351, 373 gas-liquid-liquid reactors 311 gas-oil systems 457 gasoline blending 456 gasoline data 580 Gauss divergence 62 Gauss integration 166 Gauss law 115 Gaussian waves 50 Gear-like algorithms 116 Gebhard- Seinfeld collocation 155 gene interactions 224 gene population 532 gene transcription 228, 233ff general flexible recipe algorithm 602 f general purpose solutions 483 general rate model (GRM) 553 generalized disjunctive programming (GDP) 277 generalized Maxwell- Stefan (GMS) equation 145 generic activity-based product development 421 ff generic algorithms - investment cost function 361 - life cycle modeling 680
- process monitoring 528, 532
- product development 423 ff, 438 - product scheduling 487, 503 - resource planning 468 ff generic manipulation 223 ff genericity tests 131 genetic code, lac operon 229 genome sequencing 224 genome-wide molecular interactions 246 geographical information system (GIS) 626 Gibbs ensemble Monte Carlo 119 ff, 125 f Gibbs model 386 Gibbs-Duhem relation 125 f Gilliland-Fenske-Underwood method 276, 283 ff Gill-Murray criterion 25 global supply chain management (GSCM) 696 ff global warming 710 glucose - fermentation/oxidation 94 - regulatory networks 229 f, 258 glycols 649 Goffman framework 249 gPROMS package - educational modules 775 - equipment design 392 - gas separation 148 - life cycle modeling 680, 691 - model-based control 545 f - multiscale process modeling 214 gradient methods 20 ff, 26, 39, 557 ff gradual model enrichment 197 grafcet logic models 673 grand composite curves 331-346, 375 granulation 3, 189-197, 408 graphic user interface (GUI) 719 ff, 724 ff graphical representations - intensification 305, 310 f - resource planning 469 - utility systems 333 ff, 339 ff - water usage 377 - water-pinch concept 320 ff Green's theorem 56, 62 Green-Kubo formulas 116 grey-box model - life cycles 676 - multiscale process modeling 199 grid methods - crystallization processes 156 - distributed dynamic models 40, 52, 69 - hybrid point timing 594 grinding processes 13, 151 ff, 160 ff gross errors 809 f, 817 - data validation 803, 809 - process monitoring 518 ff, 528
Grossmann model 277 Grossmann-Pinto method 490 ff group contribution methods 662 growth, crystallization 150 ff, 155 ff
h Habermas approach 249 f HAD process standards 765 Hamiltonian operator 121 f Hangos-Cameron model 172 ff, 390 hardware, process intensification 299 ff Hartree-Fock method 128 heat balances - educational modules 792 - gas separation 146 - utility systems 330f, 335 f heat cascades 331 heat exchange - ammonia synthesis 536 - distillation 270 - educational modules 791 - granulation process 191 - intensification 297, 300 f heat exchange network (HEN) 276, 328 f, 349 ff heat integrated column sequences 279 ff heat pumps 337, 360, 372 heat recovery boilers 337 heating requirements 370 heating system failure 550 heat-power combination 329 heavy duty gas turbines 336 height equivalent to theoretical plate (HETP) value 545 Hendry- Hughes technique 274 Henry coefficients 569, 734 Henry’s law 95 heptanone 649 Hermite polynomials 155 f, 165 Hessian matrix 23 f heuristic methods - distribution planning 624 - product scheduling 484, 487 ff - supply chain management 698 f hexanone 649 hidden components 174 hierarchical approaches - hybrid processes 594 - product scheduling 481 - resource planning 459 high-temperature reaction zone 400 high-throughout experimentation (HTE) 433 Hildebrandt solubility 649, 779 Hill coefficient 240 Hill-Ng procedure 163 f Hoffmann’s number 59 ff homogeneous azeotropic separation 281 f
homotopy 31 Honeywell‘s database 681 horizon methods - data reconciliation 526 - flexible recipe model 605 - model-based control 542 - real-time optimization 586 - resource planning 454, 457 - supply-chain management 628, 633 f, 714 horizon-averaged finished product (SKU) inventory 634 hot streams 329& 334ff, 352& 370 house of quality 425 Huang-Russell approach 71 Huckel calculations 121 human factors - life cycles 674 - supply chains 702 hybrid methods 7, 16, 27 ff hybrid methods - embedded integration framework 211 - life cycle modeling 671 - multiscale processes 199 - product development 438 - product scheduling 483,498 ff,506 f - real-time optimization 584 - resource planning 469 - see also: Powell method hybrid multizonal/CFD modeling 160 hybrid processes 591 -620 hybrid separation 137, 297, 301, 314-321 hydrocarbon-based fuels 682, 686 hydrocarbons 323 hydrogen bonding interactions 123 hydrogen catalytic oxidation 79 hydrogen energy balances 84 hydrogen plant process 806 f hydrogen production 414 hydrogen recovery 316 hydrogen sulfide 323 hydrogenation 686 hydrotreating system (HDS) 323 hydroxyl groups 130 hyperplanes 386 hypertext markup language (HTML) standards 750 hypertext transfer protocol (HTTP) standards 750 HYSIS package 680
i
i2 packages 473, 638 - Factory Planner 473 IBIS system 679 ICAS package 680 ICV-SEV electrolyte solution data 745
ideal adsorption solution theory (IAST) 144 f ideal gas law 143, 545 ideality, product development 429 identification problem 248, 676 ill-conditioned approximations 16, 24, 73 ILOG solver 469 implementation - cost-effective manufacturing 851 - supply chain management 710 implicit constraints 594 implicit enumeration approaches 467 implicit time discretization 60 improvement algorithms 471 incidence matrix 795 inclusion body (IB) 408, 410 ff incorporation techniques 280 incremental assumption-driven models 29, 181 independence constraint quality 582 individual resource grid 490 inducer exclusion 233 ff, 258 ff industrial applications - process intensification 302 - product scheduling 506 ff - supply chain management 710 ff industrial source complex version 3 (ISC3) 684 inequality constraints 174 information flows, multiscale processes 204 ff Information Society Technologies (IST) standards 764 information technology standards 758 INFOTHERM thermophysical databases 736 ff infrastructure - life cycle modeling 667 - supply chains 697 initial conditions 171 ff - cell population dynamics 96 - distributed dynamic models 36 f, 49 f - educational modules 796 - partial models 201 - regulatory networks 255 initialization - flexible recipe model 602 f - mixed-integer linear program 587 ff injection period 552 inlet temperatures 332 integer cut 360 integral/partial differential/algebraic equations (IPDAE) 161 f, 592 integrality gap 494 integrated chemical product-process design 647-668 integrated composite curves 345 f integrated computer-aided system (ICAS) 777-800 integrated supply chain management 695-732
integrated system optimization and parameter estimation (ISOPE) 557 ff integrated system production planning (SIPP) 456 integration methods - crystallization processes 158 - flexible recipe model 605 - multiscale process modeling 196, 202 - ODE 38 - production/resource planning 453 - supply chain management 695-732 - utility systems 327-382 integropartial differential equations (IPDEs) 87 intelligent manufacturing, data validation 801-828 intensive variables 384 ff interactions, molecular 108, 121 ff, 127 interactive multiscale frameworks 205 f interface - environmental module 725 - financial module 725 - negotiation agent 727 intermediates 448 ff, 485 interoperability, supply chains 697 interpolation - crystallization processes 165 - distributed dynamic models 43, 53, 70 interpretation frameworks, regulatory networks 248 ff introduction strategy 461 inventory-replenishing dynamics 622, 630 f, 710 investment costs - generic function 361 - resource planning 450 - utility systems 334 investment decision calendar 628 ISA S88 framework 598 Ising method 132 ISO 10303 691 ISO 14000/15288 668 isobaric/thermal distillation 274 isobar-isothermal NPT 113 isopropyl acetate 649 iso-risk contours 687 iterative methods 20 ff, 64 - model-based control 557 ff - multiscale process modeling 197 - supply-chain management 636 IUPS software 214 Jacobian matrices 2, 15, 19-31 - data reconciliation 523 ff, 529 ff, 802 - distributed dynamic models 38 ff, 64 f, 96
j
Jacob-Monod model 229 ff Java platform standards 751 JBoss standards 752 job shops 485 ff Jonsdottir-Rasmussen-Fredenslund method 128
k
Kalman filter 525 Keesman quadratic model 606 kernel functions 197 kerosene data 580 ketones 649 ketones/alkanes system 127 key assumptions 678 key performance indicators (KPI) 802-827 kinetic cell population rates 94 kinetic constants 792 knowledge-based methods 422, 434 ff, 503 Kotler concept 419 ff Kronecker factor 530 Kumar-Ramkrishna discretization 156 ff, 164 ff k-ε turbulence model 401
l
lac operon (lactose) 225-235, 256 ff
ladder logic models 673 Lagrange decomposition 455 ff, 466 ff 470 ff Lagrange multipliers - cost-effective manufacturing 848 - crystallization processes 155 f - data reconciliation 521 ff, 529 ff - distributed dynamic models 53 - model-based control 558 ff - real-time optimization 582 f Laguerre polynomials 155 f, 165 Langmuir isotherm 144 ff, 553 ff large-scale algebraic systems 2, 15-34 large-scale process modeling 190 large-scale simultaneous integration 207 Lax-Wendroff scheme 59, 67 LCA evaluation 708 leaching models 684 leaks detection 816 ff lean burn configurations 336 Leapfrog scheme 58 f, 67 least-impact heuristic schedules 503 Legendre polynomials 52 Lengeler model 244, 256 length scale models 190c 203 length scale models - chemical product-processes 655 - life cycle modeling 674
Lennard-Jones functions 123, 130 f Levenberg-Marquardt method 24 ff life cycle modeling 8, 667-694 - chemical product-process design 647 - product development 420 - supply chain management 707 f lignite 355 Lim-Jorgensen method 65 linear data reconciliation 810 linear discrete time system 577 ff linear driving force approximation (LDF) 146 linear independence constraint qualification 582 linear multiscale process modeling 199 linear programming 489, 531 linearization, piecewise 361, 842 links - hybrid processes 594 f - process monitoring 528 ff - see also: constraints Linux standards 752 liquefied petroleum production 457 liquid phase, Fischer-Tropsch synthesis 84 liquid streams 270, 328 ff liquid-liquid mass exchanger 311 liquid-vapor equilibria 812 list splitting technique 274 location-allocation problem 450, 624 logical checking 183 lognormal distribution 165 long-term planning 207, 449 ff, 502 ff Lorentz-Berthelot rule 124 LQ factorization 25 LU factorization 530 lubricants 435 Ludzack-Ettinger process 314 lumped parameter multiscale processes 199
m
MacCormack method 60, 67 macroscale multiscale process modeling 204 ff maintenance costs 334 f make-to-order/stock 484 management agent system 703, 706 ff, 718 manufacturing process - life cycles 667-694 - supply chains 696 manufacturing resource planning (MRP-II) 463 ff Manugistics packages 474, 638 mapping regulatory networks 236 ff market potentials 672 Markov chain 679 Marshall Swift index tables 335 mass balances - chemical product-process design 654 - crystallization processes 154, 159 ff
data validation 810, 827 distillation 274 - equipment design 384 - gas separation 143 f - utility systems 335 mass exchange network (MEN) 276, 311, 321 mass recycling 80 mass transfer - granulation 191, 196 - intensification 304, 311 - model-based control 569 master recipes - flexible 615 - hybrid processes 598 - real-time optimization 589 material balances - distributed dynamic models 36 - product scheduling 483,489 f - resource planning 450 material data - hybrid processes 597 - product scheduling 484, 493 - resource planning 448 material requirement planning (MRP) 463 ff, 497 mathematical educational modules 775 mathematical models - batch chromatography 552 - cost-effective manufacturing 832 - equipment design 384 - life cycle modeling 672 ff mathematical programming - product scheduling 484-500 - resource planning 456 - utility systems 349 f MATLAB - educational modules 775, 781 ff - flexible recipe model 614 - life cycle modeling 674 maximum product yield 405 Maxwell- Stefan surface diffusivities 145 mean variance 407 means-end analysis 227 ff,251 ff, 255 ff measurement system 5, SlSf, 528, 556 measurements optimization 801, 808 mechanical simulation 673 mechanical vapor recompression (MVR) 360 f, 366 mechanistic models 198ff mediation regulatory relation 255 melting point 649, 734, 779 membrane - compartments 143 - distillation 297, 314ff - separation systems 3, 137, 142 ff - surface effects 144 -
merit function 17 ff MESH column model 276ff mesh refinement 68 ff messenger RNA (mRNA) 228ff, 236 f, 240 ff metabolic cell reactions 94 metaheuristc approaches 469 ff, 484, 487 ff methods of characteristics (MOC) 88 ff, 96 methods of lines (MOL) 37, 40ff, 67f methods of moments 154f, 157ff methyl acetate process 297f, 544-551 methylisobutyl ketone 649 Metropolis sampling 117 microbial culture processes 35, 75 ff microbial systems 247 ff microcanonical NVE 114 microcapsule encapsulation 653 microelectronic industries 413 microorganisms 3, 223-264 microreaction technology 300 microscale multiscale process modeling 206 middle-out strategies 195 middleware standards 760 minimum energy requirement (MER) 328-382 minimum exergy losses 395 minimum temperature difference 331 mining operation 672 Minsky model 672 mixed-integer linear programming (MILP) - educational modules 778 - hybrid processes 594 - life cycle modeling 673 - real-time optimization 586 - refrigeration cycles 360, 364ff - resource planning 449 ff, 469 ff - supply-chain management 624, 636 ff mixed-integer nonlinear programming (MINLP) - resource planning 448 ff,469 ff - distillation 274f, 290ff - educational modules 778 - electrode heating 403 - intensification 305ff, 310ff, 315f, 320ff - life cycle modeling 673 - product scheduling 483, 489,497 f, 500 ff - separation systems 141 - supply chain management 698 f - utility systems 333ff mixed-integer programming (MIP) 271 mixed-integer quadratic program (MIQP) 586 mixed-logical dynamics (MLD) optimization 584 mixer superstructures 321 MIXPROPS thermophysical databases 736 mixture design 648 ff model integration - batch chromatography 554ff
chemical product-process design 657 ff equipment design 390 f - life cycle modeling 678 - multiscale process modeling 189, 193, 197 model tuning/discrimination 171-188 Mode1.L.a package 680 model-based control 541 -576 model-based predictive control (MPC) 577 ff, 631 model-based statistical methods 517 ff ModelEnterprise package 499 Modelica package 680, 691 modeling 171-188 - computer-aided 11-264 - equipment design 390 f - lac operon 232ff - life cycle modeling 675 - molecular 2, 1 3 ,, 107-136 - multiscale process modeling 195 ff - partial models 202 - supply chains 712 modeling functions of microbial systems (MFM) 254ff ModKit package 680 modules - educational 775-800 - environmental 707 - financial 711, 725 - supply chain management 703 f, 707 modulons. regulatory networks 241, 244 ff mole balances 792 molecular dynamics 115, 204 molecular electrostatic potential (MEP) analysis 131 molecular interactions 246 molecular mechanics 110, 122 f molecular modeling 2, 13,. 107-136 molecular properties 734 molecule structure design 648 ff, 661 molten aluminum 399 moments methods 154ff, 163ff momentum transfer 191 monetary units (ME) 349 monitoring 517-540 monitoring/forecasting techniques 830 monodimensional approach 25 monolithic reactors 297 Monte Carlo methods - crystallization processes 154, 163 - life cycle modeling 679 - molecular modeling 108 - product scheduling 505 morphological analysis 428 MoT/ICAS modules 790ff mother-cell division death term 93 moving finite difference (MFD) 71 -
moving finite element (MFE) 71 moving grid methods 35 ff, 68-75 Mozilla standards 752 MS Excel 674 Mulliken population analysis 131 multiagent systems (MAS) standards 702 f, 712, 718 f, 765 multicomponent distillation 272 multicomponent molecular systems 124 multicriteria piecewise linearization 842 multidimensional optimization 23 multidisciplinary tools, process-product design 777 multidomain framework 211 multiechelon supply-chain 700 multienterprise supply-chain management 635 multifunctional heat exchangers 300 multilevel flow modeling (MFM) 251 ff, 257 f multimedia collection standards 764 multiobjective generic algorithms (MOGA) 471 multiparametric quadratic programming (mpQP) 582, 586 ff multiperiod problems - location-allocation 624 - product scheduling 504 - resource planning 459 - utility systems 367 f multiphase reactor equipment 393 ff multiphase systems 130 multiple crew strategies 468 multiple sensors 532 multiple-input multiple-output (MIMO) block architecture 242, 820 multiprocess plants 329 f multiproduct batch plant 831 multiproduct plants - cost-effective manufacturing 829 - product scheduling 485 ff - supply-chain management 621 multipurpose batch processes 609 multipurpose equipment 448 multipurpose plants 485 ff, 493 ff multipurpose supply-chain management 621 multiresource generalized assignment problem (MRGP) 471 multiscale capacity planning 462 multiscale modeling 3, 189-222 - chemical product-process design 655 f - life cycle modeling 675 - molecular modeling 107 f, 111 ff multisite planning - integer optimization 627 - product scheduling 481 ff - supply-chain management 621, 628 ff, 696
multiskill strategies 468 multistage production 448 multistage stochastic programming 505 multizonal computational fluid dynamics 100 f MySQL standards 752
n Nash-type objective functions 637 natural frameworks, regulatory networks 249 natural gas 355, 812 Navier - Stokes equation 100, 389 negated regulatory conditions 255 negotiation module 703-716, 726ff Nelder-Mead algorithm 831, 843 ff net present values (NPV) 450, 461 network regulatory motifs 241 ff,246 f network representations 392 network superstructures see: superstructures Neumann’s boundary condition 36 neural networks - life cycle modeling 674, 680 - model-based control 567 - product development 423,436ff new product development (NPD) 421 ff,427 ff, 460 new-born cell birth term 93 Newton homotopy 32 Newton methods 20-30 - distributed dynamic models 39, 64 - separation systems 142 Newtonian fluids 209 Newton-Raphson method 523 NIST chemistry WebBook 736 ff nitrile groups 131f, 824 no intermediate storage (NIS) 485, 593 noise 542, 560, 673 nominal optimization 405 nonazeotropic mixture distillation 286 nonconvex optimization model 637 nonisothermal systems 308 noniterative CE/SE method 65 nonlinear cost functions, utility systems 361 nonlinear equation systems (NLS) 15-34 nonlinear isotherms 562 f nonlinear model predictive control (NMPC) 6, 542 E 567ff nonlinear multiscale process modeling 199 nonlinear Nash-type objective functions 637 nonlinear ODE-based models 246 nonlinear optimization strategies 365 f nonlinear programming (NLP) - data reconciliation 523 ff - distillation 287 - intensification 305, 315 f - resource planning 456 nonlinear-based control 545
Nose-Hoover method 117 nuclear power plants 825 nucleation - crystallization processes 149 ff - distributed dynamic models 75, 87 ff - granulation 196 numerical methods - crystallization processes 154 ff - data reconciliation 523 - life cycle modeling 672 - molecular modeling 108, 115 ff - multiscale process modeling 203, 214 - partial differential equations 37 ff numerical standard deviation 108 numerical thermophysical databases 735 Nusselt number 387 Nylon-6 process 765
o
OASIS standards 751 f, 759 object management group (OMG) standards 751 objective functions - cost-effective manufacturing 831-849 - exergymodel 362 - model-based control 557ff - real-time optimization 583 f - supply-chain management 637 observable variables 185, 524f, 531 ff, 676 octanol-water partition coefficient, fentanyl 779 offline initialization 610 offline optimization 541,585 oil processing - data validation 826 - eStandards 756f - life cycle modeling 682 ff oilfields 456 f oleic acid methyl ester removal 649 one-way coupling 207 online monitoring 823 online parameter adaptation 542, 556, 568 ff online production accounting 827 online scheduling 482 ontology web language (OWL) 767 ontological representation (OntoCAPE) 691, 765 OPC process /control system standards 750ff open standards 750 ff operation conditions - chemical product-processes 654 - data validation 802, 822 - flexible recipe model 601 ff, 613 - hybrid processes 594 - life cycle modeling 669 ff - multiscale processes 195
process monitoring 517, 531 f supply-chain management 630 operation costs - resource planning 450 - supply-chain management 623 - utility systems 334 operation modeling - shale oil processing 687 - supply chain management 714 operational planning, gas fields 458 operons 244f optical databases 734 optimization - biochemical processes 409 ff - complex multiphase reactor 405 - costs 361, 842f - crystallizers 159 - data reconciliation 526, 529ff - distillation 271 - intensification 305, 320 f - life cycles 675 - membrane-based gas separation 143f - model-based control 541 ff - product scheduling 483,490f - real-time 577-590 - supply-chain management 621-642 - utility systems 333 f, 345 ff, 369 f ordinary differential equations (ODE) 15-34 - computational fluid dynamics 101 - crystallization processes 155 f, 158, 165 f - data reconciliation 526 - distributed dynamic models 36 ff - educational modules 778 - life cycle modeling 680 - process models 171, 184 - regulatory networks 238 f organic Rankine cycles 344 orthogonal collocation 51, 315 orthogonal systems 18 oscillatory cell population behavior 75 oscillatory yeast rates 94 outlet temperatures 355 overall modeling - biochemical processes 410 - masses 173 - multiscale processes 195 -
P
paper manufacturing 508 parallel multiscale process modeling 204- 216 parameters 182 - continuation method 32 - molecular modeling 110 - multiscale process modeling 217 - partial models 201 - real-time control 578-585
- recommended 735 - shale oil processing 682 Pareto curves 319, 716 f partial consistency 183 partial differential algebraic equations (PDAEs) 2, 13-106 partial differential equations (PDEs) 173 ff - complex multiphase reactor 402 - computational fluid dynamics 35-106 - distributed dynamic models 35 - large-scale algebraic systems 2, 13-34 - regulatory networks 238 partial models 196 ff, 200 ff partial pressure 142 particle size distribution (PSD) - crystallization processes 153 ff, 161 ff, 164 f - distributed dynamic models 87 particle surface effects 144 partitioning 15 - educational modules 795 - molecular modeling 114, 117 ff partnerships, supply chains 716 PDXI data exchange 681 pdXML standards 753, 757 peak demand period 452 Peclet number 80, 85 penalty composite curves 373 penalty function 532, 802 penalty parameters 548 pentane/hexane system 127 performance criterion 602, 606 performance indicators 518, 802-827 permeate compartment 147 permeation modeling 684 permutation schedule 487 perturbations - flexible recipe model 614 - model-based control 560 ff pesticides 653 Petlyuk columns 282, 286 Petri-nets 631, 679 petroleum fractions 192, 818 petroleum supply chains 457 Petrov-Galerkin method 155 pharmaceuticals - chemical product-process design 648 - product development 432-437 - supply chain 460, 632 phase boundary 311 phase diagrams 778 ff phase equilibrium 8, 734 - educational modules 779 - methyl acetate process 545 - molecular modeling 109, 119 ff phase stability 386 phosphoenolpyruvate (PEP) 234
phosphoric acid fuel cells 338 phosphotransferase (PTS) 234 photochemical oxidant formation 711 photovoltaic industries 413, 466 physical agents 703 ff physical models - distillation 271ff - life cycles 672ff physical phenomena 17 - distributed dynamic models 75 - equipment design 383 - hybrid processes 592 - intensification 301 - product development 422 physical properties 171 - molecular modeling 107-136 - process monitoring 530 PHYSPROPS, thermophysical databases 736 phytochemical manufacturing 650 piecewise afine systems (PWA) 584 piecewise linearization 361, 842 pilot plants 303, 508 pinch point analysis - complex multiphase reactor 396 - intensification 320 ff, 328 ff - utility systems 331, 361 - water use 374 Pinto- Grossmann method 490 ff planning techniques - cost-effective manufacturing 830-852 - experimental product development 432 - resources 452 - supply-chain management 624 plant design 303 plant management 10 - product scheduling 504 - resource planning 452 - supply-chain management 621 plant measurements 801 plant simulation 592 plant structure 815 plug flow grinding process 161 poly(ether urethane urea) membrane 317 polymer composites 438 polymerase enzymes 228 polysulfone membranes 315 population balance equation (PBE) - computational fluid dynamics 100 - crystallization processes 151f, 160 f - distributed dynamic models 75, 87 ff, 92 ff - fluid bed reactor 414 population generation 532 porcine somatropin (pST) 409 POSC standards 753, 757 powder feed rate, granulation 196 Powell method 16,27f, 39
Powell's dogleg algorithm 523 power market, resource planning 465 ff Poynting correction factor 782 precedence constraints 714 precipitation 152 precision - data validation 807 - process monitoring 531 prediction - distributed dynamic models 60 - flexible recipes model 610 - real-time control 577, 586 preferential sampling 118 prefractionator 286 preparative chromatography 35, pressure 171 - distillation 283 - equipment design 384 - flexible recipe model GO1 - gas separation 148 - molecular modeling 113 pressure drop - gas separation 147 - methyl acetate process 545 pressure-swing adsorption (PSA) separation 13, 137ff, 789 pricing optimization 636, 639 principal component analysis (PCA) 520, 527, 821 prize collecting salesman problem 490 proactive capabilities 702 probability demand function 833 ff, 852 probability density, molecular modeling 113f probability of stock-outs (PSO) 631 ff, 635 ProCAMD modules 785 f process and materials network (PMN) 594 process control OPC standards 755 process design 383-418 - chemical products 648 - decision chain 527 - integrated 647-668 process flow diagrams (PFD) - chemical product-process design 651 - data validation 804, 812 process history database (PHD) 681 process intensification 4, 297-326 process life cycle modeling 667-694 process models - computational fluid dynamics 98- 106 - flexible recipe model 613 - hybrid processes 592, 597 - resource planning 448 ff - scheduling 482 ff, 501 - separation systems 137 - utility systems 329 process monitoring 5, 517-540
process simulation - educational modules 775 - hybrid processes 592 - see also: simulation process solvent systems 317 ff process synthesis - intensification 302 ff - separation 269-296 process system enterprise (PSE) 474 process-molecule synthesis supermodel 318 f producer-product relation 255 product demand see: demands product development 4, 419-442 product engineering 189 f product portfolio 462 product scheduling 481-516 product selectivity 823 product specifications - chemical process design 647 ff - flexible recipe model 602 ff - multiscale process modeling 190 - needs 657 product testing 431 product yield 405 production accounting 827 production life cycle 669 ff production planning - cost-effective manufacturing 830-852 - flexible recipe model 603 - resource planning 472 production profiles 450 production recipes 483 production scheduling 5 production time 832 production-distribution-inventory systems 696 productivity, biochemical processes 412 product-oriented methods 431, 650 ff, 661 product-process design, integrated 647-668 profile-based approach 308 profit profiles - cost-effective manufacturing 831-852 - expected 716, 838 - real-time optimization 580 - supply-chain management 624 PRO-II simulator 680, 775, 789 f prokaryotic organisms 224, 228 ff, 237 ff, 244 f Propred module 780 property relations 173 f - educational modules 779 f - multiscale process modeling 198 propionitrile 131 PROSYN-MINLP synthesizer 276 protein networks 223-264 protein-DNA interactions 224, 247
protein-protein interactions 224, 237, 247 proteins 409 proton exchange membranes 338 pseudocomponent concept 818 purge separation 141 purification - batch chromatography 552 ff, 568 - biochemical process design 409 ff - carbothermic aluminum process 400 - chemical product-process design 651 - fluid bed reactor 413 f PVT behavior, thermophysical databases 734 pyrolysis 682
q
quadratic programming problems 582 qualitative differential equations (QDEs) 238 quality - chemical product-process design 647 ff - equipment design 384 - hybrid processes 597, 601 ff - thermophysical databases 733 quality function deployment (QFD) 424ff quantitative performance measurements 623 quantum models 110, 121ff,673 quasi-Newton family 2, 13, 23, 27ff quaternary separation 272 queuing models 673 quick-to-market 671
r
radiation risk 688 raffinates 139 ff rain water percolation 684 Randolph-Larson model 151 f random noise 527 random number generation 115 ff Rankine cycles 344 raw materials 327, 597 Rayleigh number 387 reactant conversion 607, 649 reaction rates 173 - educational modules 792 - model-based control 569 reaction system models 673 reaction transfer models 304 reactive distillation process 543 ff reactive scheduling 482, 503 ff reactive separations 301, 309 ff reactive simulated moving bed (SBM) 565 reactor/mass exchanger (RMX) 311, 315 reactors - educational modules 791 - intensification 311 reactor-separator-recycle process network synthesis 310
real models 562 real physical distributed systems control 702 real-time adjustments 697 real-time control 518 real-time environment 697 real-time expert system 674 real-time optimization (RTO) 577-590 real-time scheduling 503 reboilers 270 ff, 283-291, 394 receding horizon 586 recipe-based representations 501, 591-615 recommended parameters 735 reconciliation tools 330 reconfiguration, supply chains 697 recovery - batch chromatography 562 - carbothermic aluminum process 401 - chemical product-processes 651 rectifier 282, 311 recycling 79, 305, 668 redundancy analysis - data validation 803 ff, 815 ff - process monitoring 517, 519 ff - real-time optimization 582 refineries - blending 579 - data validation 826 - resource planning 448 ff, 456 ff refinery and petrochemical modeling system (RPMS) 456 reflux ratio 548 refrigeration - computer-aided integration 369 - product development 437 - utility systems 359, 370 regression analysis - complex multiphase reactor 407 - distillation 285 - life cycle modeling 679 - molecular modeling 126 regulatory control 525, 568 regulatory microorganism networks 223-264 regulons 244 f rehabilitation phase 672 relative concentration dynamics 233 remediation 669 ff repartitioning 452 report generation 533 repository of modeling environment (ROME) 690 repressilators 239 ff repressor gene, lac operon 231 f, 258 ff requirement-parameter translation 424 ff rescheduling strategy 610 f research modeling 682 residuals 19
- crystallization processes 163 distributed dynamic models 55 real-time optimization 580 residues curves 787 resource planning 447-480 - constraint frameworks 467, 483 f, 489 f - cost-effective manufacturing 830 - decomposition methods 498 - hybrid processes 592 - life cycle modeling 673 - product development 429 - supply chain management 696 resource-task network (RTN) 454, 496 f, 592 responsiveness 398, 696 restricted matches 364, 373 retentate compartment 147 retiming strategy 611 retrofit 674 reuse models 690 reverse engineering 117ff, 226 ff Reynolds number 387 ribosome-binding site 241 risk-based management (RBM) - life cycle modeling 671 - shale oil processing 687 - supply-chain 626 RNA polymerase enzymes 228 ff rolling horizon algorithm 454,457 rubber mixtures 436 rule-based methods - chemical product-process design 661 - product development 423,438 ff - product scheduling 488 Runge-Kutta method 37ff, 96 run-time deviations 610 -
s
Saccharomyces cerevisiae 93, 224, 237 f safety - chemical product-process design 649 - data validation 807 - thermophysical databases 734 safety-health-environment (SHE) concept 667 sales maximization 624, 831 salts-water systems 323 sampling 111, 115 ff SAP packages 474, 638 satisfaction level 716 saturation composition 781 scale identification 193, 697 scale invariance homotopy 32 scaling laws 387 scenario-based approaches 451 scheduling techniques 481-516 - flexible recipe model 603 f, 609 f, 614
- life cycle modeling 673
- resource planning 452
Schrodinger equation lloff, 121ff Schubert formula 29 search techniques - intensification 310 - product scheduling 484 segmentation methods 154ff, 162ff, 431 self consistent field molecular orbital concept 122 self-diffusion 116 self-organization 697, 703 semantic networks 439 Semantic Web standards 9, 750& 763-769 semibatch reactive distillation process 543 ff semibatch reactor 791 semiconductor fabrication 203, 508 semidiscretized methods 35, semiempirical methods 121 sensitivity analysis - cost-effective manufacturing 847 ff - process monitoring 517, 524ff, 535 - real-time optimization 580 - supply-chain management 627 sensor network optimization 518, 528, 531 f, 801 sensors 110 separation 137-170 separation - chemical product-process design 660 - chromatographic 552 - costs 566 - product scheduling 484 - synthesis 269-296 separator-centrifuge settling area 411 sequential approach 448, 715 sequential quadratic programming (SQP) 523 ff, 531 ff,802, 810f serial integration framework 208 serial multiscale process modeling 204 f series reactions 791 ff service-oriented architecture (SOA) standards 761 set-point perturbation finite difference method (FDPN) 563 set-point tracking - flexible recipe model 602 - methyl acetate process 549 - model-based control 559, 564 set-up times 448 seven-step procedure 172, 182 f, 193 shadow compartment concept 311 shale oil processing 682 ff shared intermediate storage (SIS) 485 sharp split assumptions 272 ff shock transition 45
shortcut models 271, 274, 283 f short-term scheduling 493 short-term scheduling - flexible recipe model 602 - uncertainties 451, 502 shut-down resource planning 450 side rectifier 282, 286 side stripper column 282 signal converters 518 signaling molecules 223 ff signal-oriented modeling 244 ff signed directed graph (SDG) 185 silane feed 414 silicon 413f simple distillation columns sequences 272-286 simple object access protocol (SOAP), Web services 750, 758 ff,763 ff simplex method 843 simplified solutions 177 - distillation 271ff - model-based control 545 f - multiscale process modeling 207 ff simulated annealing - intensification 310, 315 - product scheduling 487 ff, 506 f - resource planning 469 ff simulated moving bed (SBM) process 565 simulation - computer-aided 11-264, 329 - hybrid processes 592 - life cycle modeling 673, 680 - model-based control 568 - supply-chain management 631, 701 ff simultaneous approaches 195, 207, 661 single-input multiple-output (SIMO) block architecture 242 single-level mathematical formulation 459 single-multiphase reactors 307 single-phase flow model 402 single-site scheduling 483, 492 single-stage refrigeration cycle 359 singular Jacobi matrix 24 site recipes 598 size factor 595 size reduction 287 size-shape relation 387 slack real-time optimization 582 slack resource iterative auction approach 636 slurry bubble column reactor (SBCR) 35, 75, 83 ff smart agents 697 smelting zone 400 SMILES strings 780 smoothness 195 SO2 levels 684
social frameworks 249 socioeconomic impact analysis 687 sociotechnical risk assessment 667, 673 soft sensors 808 software packages 9 - data reconciliation 527 - multiscale process modeling 214ff - process intensification 299 ff - product scheduling 483,499 ff - resource planning 472 ff - standards 765 - supply-chain management 637 f solar cell production 414 solid conversion 401 solid oxide fuel cells 338 solid phase 84 solubilities 649 SoluCalc toolbox 781 solution methods - crystallization processes 154ff, 162ff - distributed dynamic models 70 - multiscale process modeling 193, 203ff, 215 solvents - design 649 ff, 653 ff - utility systems 328 solver modules 778, 782, 786 source terms - computational fluid dynamics 99 - multiscale process modeling 197 space segmentation 431 sparse systems 28 spatial discretization 37, 41, 48 spatial finite volume method 58 spatial step size 38 specifications - chemical products 647 ff, 651 ff - equipment design 383 - see also: product specifications spinning disk reactor (SDR) 297ff splitter superstructures 272 ff, 321 spring model 122 SQP package 406 stability - distributed dynamic models 50 - dynamical properties 185 standard deviation - data validation 802, 806 - molecular modeling 108, 112 - process monitoring 531 f, 535 standard for exchange of product model data (STEP) 691 standards 749-770 - partial model ingredients 201 Stanton number 85 STAR-CD software 214 state operator network (SON) 276, 289
state transitions 216 state-of-the-art control 585 state-sequence network (SSN) 499 state-space representation 581 state-task network (STN) - hybrid processes 592 - product scheduling 493 ff - resource planning 464 statistical error - molecular modeling 115 - see also: error statistical laws 803 statistical thermodynamics 108 f, 112 ff steady-state distillation 17 steady-state properties 194 steady-state systems 521 f, 678 steam networks 356-372 steep moving fronts 64, 75 steepest descent method 20 ff - see also: gradient method steering relation 255 stencil methods 42 stiffness problem 15, 41, 60, 73 stochastic methods - crystallization processes 159, 163 f - intensification 307, 310 f - life cycle modeling 674, 679 - multiscale process modeling 199 - supply chain management 699 f stochastic programming - product scheduling 487, 505 - resource planning 451 stochastic uncertainty model 406 stock imperatives 483 stock-outs 631 f stoichiometric equations 18 stop criteria 30 storage - product scheduling 485 - resource planning 448 - shale oil processing 683 strategic planning - intensification 305 - life cycle modeling 669 - multiscale process modeling 216 stratospheric ozone depletion 710 stream flows - chemical product-processes 654 - utility systems 329 ff stretching 122 stripping 287, 311 structural analysis - computational properties 183 - distillation 276 structural decomposition 184 structural plant-model 542
structures 171 ff - catalyst intensification 300 - dynamical properties 185 - process design 383 - regulatory networks 226 f styrene separation 811 substitution method 20 sulfuric acid 791 Sum-Sandler method 128 superposition principle 199 supersaturation profile 159 superscheduling problem 454 superstructure methods - complex columns 287 - distillation 273-296 - intensification 305, 312, 321 f - steam network 358 supplier's reliability 830 supply-chain management (SCM) 8, 621-642 - capacities 473 - integration 695-732 - life cycle modeling 673 - product scheduling 481 ff support technologies 680 supporting hyperplanes 386 surface potentials 131 surface segmentation 431 surface tension 734 surfactants 653 sustainability 667 f synchronization 518 synectics 427 syntactical verification methods 183 synthetic regulatory networks 226 ff, 239 ff system for chemical engineering model assembly (SCHEMA) 680, 690 systems biology markup language (SBML) 245
t tablet formulation 435 Tabu search 311,469ff tactical resource planning 462 ff target modeling - chemical product-process design 657 ff - shale oil processing 684 tasks - hybrid processes 592 - product scheduling 484, 493 ff tax regimes 623 Taylor series 21, 27 - distributed dynamic models 59, 65 - molecular modeling 116 TCP/IP standards 750 tearing 15
temperature profiles 171 temperature profiles - carbothermic aluminum process 400 - complex multiphase reactor 396 - equipment design 384 - flexible recipe model 601, 607 - intensification 308 - molecular modeling 108 - process monitoring 519 - utility systems 332, 340-366 temperature-swing adsorption (TSA) separation 139 temporal stepsize 38 termination criterion 561 ternary column sequencing 273, 280f testing - product development 431 - resource planning 461 - supply chain management 710 - see also: applications thermal coupling 270, 285, 290 f THERMAL databases 736 thermal radiation 688 thermal/PV batteries 467 THERMODATA databases 736 thermodynamics - data validation 810 f - distillation 284 - distributed dynamic models 35 - equipment design 384 ff - heat pumps 337 - methyl acetate process 545 - molecular modeling 111 ff - multiscale processes 189 - process monitoring 530 - utility systems 330ff, 350 thermoeconomic models 333 thermophysical databases 733 -748 thermophysical properties 8 Thiele modulus 209 thiol groups 130 Thompson - King model 275 three-point backward (TPB) method 44 time discretization - product scheduling 490 ff - distributed dynamic models 58 - resource pIanning 450, 458 time horizon - data reconciliation 526 - model-based control 543 ff - multiscale process modeling 190ff - product scheduling 484 - supply-chain management 628 time integrators 38 time-based decomposition approaches 470 time-colored Petri-nets 631
time-scale models 215 - chemical product-process design 655 - life cycle modeling 674 time-to-market 461 timing standards 752 tolerance values 563 toolboxes see: educational / modules / software packages top-down approaches 194 torsion 122 total annualized cost (TAC) - distillation 272 ff, 282 ff - water systems 323 total site integration 363 toxic release models 673 trading structure 629, 716 transcriptional regulation 228, 232 ff transcriptional repressors 239 f transfer coefficients 114, 173, 198 transfer prices 623 transfer times 505 transformations - algebraic 176f - integration framework 208 transient operations 35 transition probability 117 translation models - educational modules 795 - regulatory networks 228 transport mechanisms 175, 282 transport properties 116 - equipment design 387 - thermophysical databases 734 transportation 623 trapezoidal rules 96 tray cascades 270 tray-by-traymodel, distillation 287 TRC thermophysical databases 735 ff triangle distribution 833 triplet assumption variable relation keyword 175 TRIZ method 423,428ff, 439 trouble-shooting 303 true boiling point (TBP) 818 trusted solutions 216 tuning process models 171-188 turbulence model 401 two-phase flow models 401 ff U
UDDI standards 758 uncertainty - clinical trial outcomes 462 - complex multiphase reactor 404 ff - cost-effective manufacturing 829-854 - data validation 806
- process monitoring 535
- resource planning 448, 451 ff - supply chain management 396, 399 ff, 622, 626, 639 - product scheduling 483, 502 ff UNIFAC activity coefficients 781 ff unified modeling language (UML) standards 710 f, 720, 751, 754 ff uniform sampling 118 UNIQUAC model 125 ff unique assignment case 493 unit operation (UO) specifications, standards 755 unit-based approach (UBS) 308 united atoms force fields 123 f unit-to-task allocation 460, 488, 493 unlimited intermediate storage (UIS) 485, 593 upwind schemes 43 f, 80 USTI method 429 utility systems - computer-aided integration 327-382 - life cycle modeling 689
V
vacuum distillation unit (VDU) 323 vacuum-swing adsorption (VSA) separation 139 validation 181 - equipment design 391 - life cycle modeling 676 - multiscale process modeling 216 - process monitoring 517 ff - UNIQUAC model 125 ff - Vali package 527 van der Waals repulsion 109, 124 f, 130 f vapor behavior - distillation 270 - methyl acetate process 545 - molecular modeling 110 - UNIQUAC model 130 vapor recovery reactor (VRR) 400 vapor-liquid equilibria - crystallization processes 158 - Dortmund database 745 - molecular modeling 108 ff vapor-liquid-liquid systems 311 variables - process monitoring 521, 524 - partial models 201 variance, molecular modeling 114 variational multiscale process modeling 205 vector spaces 384 Verdict package 499 verification - life cycle modeling 676 - multiscale process modeling 194, 216
- PDS-ICAS 790 - process models 171-188 Verlet algorithm 116 vessel design 673, 686 vibration phenomena 122 Villadsen-Michelsen collocation 52 vinyl chloride monomer (VCM) purification 394 viscosity 80, 99, 114 volatilities 270 f von Wright actions 253
W
W3C standards 751 ff, 763 ff waiting times 604, 611 warehouse agent 689, 705, 717 waste products - scheduling 508 - intensification 297 - utility systems 328 water management 684 water pinch analysis 320 ff water systems 319 f, 328 ff, 374 ff water/ethanol system 129 w-commerce 697 Web portals standards 764 Web services 9 Web services - SOAP standards 750, 758 ff, 763 ff Web thermophysical databases 735 ff Web-oriented interfaces 697 weight matrix 18 f, 521, 529 f weighted residuals 163 weighted stencil methods (WENO)
- distributed dynamic models 42, 46 ff, 71 ff, 81, 91 - fixed-bed reactors 81 well-conditioned differential equations 16 wet-etching 508 White-Ydstie model 414 Wilkinson algorithm 455 f Wilson activity coefficient 126 ff Wilson equations 545 ff workflow - chemical product-process design 660 - life cycle modeling 675 ff working fluids 372 World-Wide Web standards 750 ff
X
XML (extensible markup language) 680, 750, 757 XPRESS-MP solver 469
Y
Yeomans-Grossmann model 278 yield stress 196 yields 809, 822 Yildirim- Mackey model 230 ff Young's modulus 196
z zeolite membranes 137f, 146ff zero integration error 207, 216 zero-wait (ZW) mode 458ff, 485,491 f zero-wait intermediate storage 593