The Greening of Petroleum Operations
Scrivener Publishing
3 Winter Street, Suite 3
Salem, MA 01970

Scrivener Publishing Collections Editors: James E. R. Couper, Richard Erdlac, Pradip Khaladkar, Norman Lieberman, W. Kent Muhlbauer, S. A. Sherif, Ken Dragoon, Rafiq Islam, Vitthal Kulkarni, Peter Martin, Andrew Y. C. Nee, James G. Speight

Publishers at Scrivener: Martin Scrivener ([email protected]) and Phillip Carmical ([email protected])
M.R. Islam, A.B. Chhetri, and M.M. Khan
Dalhousie University

Scrivener
Wiley
Copyright © 2010 by Scrivener Publishing LLC. All rights reserved. Co-published by John Wiley & Sons, Inc., Hoboken, New Jersey, and Scrivener Publishing LLC, Salem, Massachusetts. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com. For more information about Scrivener products please visit www.scrivenerpublishing.com.

Cover design by Russell Richardson.

Library of Congress Cataloging-in-Publication Data:

ISBN: 978-0-470-62590-3

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
This book is dedicated to the memory of two giants of sustainable petroleum engineering, Dr. Sara Thomas, PERL, Edmonton, Canada, and Prof. T.F. Yen, University of Southern California, both of whom passed away in recent years.
Contents

Foreword

1 Introduction
  1.1 The Science of Change: How Will Our Epoch Be Remembered?
  1.2 Are Natural Resources Finite and Human Needs Infinite?
  1.3 The Standard of Sustainable Engineering
  1.4 Can Nature Be Treated as If It Were Static?
  1.5 Can Human Intervention Affect Long-term Sustainability of Nature?
  1.6 Can an Energy Source Be Isolated from Matter?
  1.7 Is It Possible That Air, Water, and Earth Became Our Enemy?
  1.8 The Difference Between Sustainable and Unsustainable Products
  1.9 Can We Compare Diamonds with Enriched Uranium?
  1.10 Is Zero-waste an Absurd Concept?
  1.11 How Can We Determine Whether Natural Energy Sources Last Forever?
  1.12 Can Doing Good Be Bad Business?
  1.13 Greening of Petroleum Operations: A Fiction?

2 A Delinearized History of Civilization and the Science of Matter and Energy
  2.1 Introduction
  2.2 Fundamental Misconceptions of the Modern Age
    2.2.1 Chemicals are Chemicals and Energy is Energy
    2.2.2 If You Cannot See It, It Does Not Exist
    2.2.3 Simulation Equals Emulation
    2.2.4 Whatever Works is True
  2.3 The Science of Intangibles
  2.4 The Science of Matter and Energy
    2.4.1 The European Knowledge Trail in Mass and Energy
    2.4.2 Delinearized History of Mass and Energy Management in the Middle East
    2.4.3 Accounting
    2.4.4 Fundamental Science and Engineering
  2.5 Paradigm Shift in Scientific and Engineering Calculations
  2.6 Summary and Conclusions

3 Fundamentals of Mass and Energy Balance
  3.1 Introduction
  3.2 The Difference Between a Natural Process and an Engineered Process
  3.3 The Measurement Conundrum of the Phenomenon and its Observer
    3.3.1 Background
    3.3.2 Galileo's Experimental Program: An Early Example of the Nature-Science Approach
  3.4 Implications of Einstein's Theory of Relativity on Newtonian Mechanics
  3.5 Newton's First Assumption
  3.6 First Level of Rectification of Newton's First Assumption
  3.7 Second Level of Rectification of Newton's First Assumption
  3.8 Fundamental Assumptions of Electromagnetic Theory
  3.9 Aims of Modeling Natural Phenomena
  3.10 Challenges of Modeling Sustainable Petroleum Operations
  3.11 Implications of a Knowledge-based Sustainability Analysis
    3.11.1 A General Case
    3.11.2 Impact of Global Warming Analysis
  3.12 Concluding Remarks

4 A True Sustainability Criterion and Its Implications
  4.1 Introduction
  4.2 Importance of the Sustainability Criterion
  4.3 The Criterion: The Switch that Determines the Direction at a Bifurcation Point
    4.3.1 Some Applications of the Criterion
  4.4 Current Practices in Petroleum Engineering
    4.4.1 Petroleum Operations Phases
    4.4.2 Problems in Technological Development
  4.5 Development of a Sustainable Model
  4.6 Violation of Characteristic Time
  4.7 Observation of Nature: Importance of Intangibles
  4.8 Analogy of Physical Phenomena
  4.9 Intangible Cause to Tangible Consequence
  4.10 Removable Discontinuities: Phases and Renewability of Materials
  4.11 Rebalancing Mass and Energy
  4.12 Energy: The Current Model
    4.12.1 Supplements of Mass Balance Equation
  4.13 Tools Needed for Sustainable Petroleum Operations
  4.14 Conditions of Sustainability
  4.15 Sustainability Indicators
  4.16 Assessing the Overall Performance of a Process
  4.17 Inherent Features of a Comprehensive Criterion

5 Scientific Characterization of Global Energy Sources
  5.1 Introduction
  5.2 Global Energy Scenario
  5.3 Solar Energy
  5.4 Hydropower
  5.5 Ocean Thermal, Wave, and Tidal Energy
  5.6 Wind Energy
  5.7 Bio-energy
  5.8 Fuelwood
  5.9 Bioethanol
  5.10 Biodiesel
  5.11 Nuclear Power
  5.12 Geothermal Energy
  5.13 Hydrogen Energy
  5.14 Carbon Dioxide and Global Warming
  5.15 Nuclear Energy and Global Warming
  5.16 Impact of Energy Technology and Policy
  5.17 Energy Demand in Emerging Economies
  5.18 Conventional Global Energy Model
  5.19 Renewable vs. Non-renewable: No Boundary as Such
  5.20 Knowledge-based Global Energy Model
  5.21 Concluding Remarks

6 Scientific Characterization of Light and Light Sources
  6.1 Introduction
  6.2 Natural Light Source: The Sun
    6.2.1 Sun Composition
    6.2.2 Sun Microstructure
  6.3 Artificial Light Sources
  6.4 Pathways of Light
    6.4.1 Natural Light
    6.4.2 Artificial Light
  6.5 Light Energy Model
  6.6 Spectral Analysis of Light
    6.6.1 Measured and Planck's Model Light Spectra
    6.6.2 Natural and Artificial Light Spectra
  6.7 Effect of Lamp Coating on Light Spectra
  6.8 Effect of Eyeglasses and Sunglasses on Light Spectra
  6.9 Concluding Remarks

7 The Science of Global Warming
  7.1 Introduction
  7.2 Historical Development
    7.2.1 Pre-industrial
    7.2.2 Industrial Age
    7.2.3 Age of Petroleum
  7.3 Current Status of Greenhouse Gas Emissions
  7.4 Comments on Copenhagen Summit
    7.4.1 Copenhagen Summit: The Political Implication
    7.4.2 The Copenhagen 'Agreement'
  7.5 Classification of CO2
  7.6 The Role of Water in Global Warming
  7.7 Characterization of Energy Sources
  7.8 The Kyoto Protocol
  7.9 Sustainable Energy Development
  7.10 Zero Waste Energy Systems
  7.11 Reversing Global Warming: The Role of Technology Development
  7.12 Deconstructing the Myth of Global Warming and Cooling
  7.13 Concluding Remarks

8 Diverging Fates of Sustainable and Unsustainable Products
  8.1 Introduction
  8.2 Chemical Composition of Polyurethane Fiber
  8.3 Biochemical Composition of Wool
  8.4 Pathways of Polyurethane
  8.5 Pathways of Wool
  8.6 Degradation of Polyurethane
  8.7 Degradation of Wools
  8.8 Recycling Polyurethane Waste
  8.9 Unsustainable Technologies
  8.10 Toxic Compounds from Plastic
  8.11 Environmental Impacts Issues
  8.12 How Much is Known?
  8.13 Concluding Remarks

9 Scientific Difference Between Sustainable and Unsustainable Processes
  9.1 Introduction
    9.1.1 Paraffin Wax and Beeswax
    9.1.2 Synthetic Plastic and Natural Plastic
  9.2 Physical Properties of Beeswax and Paraffin Wax
    9.2.1 Paraffin Wax
    9.2.2 Beeswax
  9.3 Microstructures of Beeswax and Paraffin Wax
  9.4 Structural Analysis of Paraffin Wax and Beeswax
  9.5 Response to Uniaxial Compression
  9.6 Ultrasonic Tests on Beeswax and Paraffin Wax
  9.7 Natural Plastic and Synthetic Plastic
  9.8 Plastic Pathway from Crude Oil
  9.9 Theoretical Comparison Between Nylon and Silk
  9.10 Theoretical Comparison Between Synthetic Rubber and Latex (Natural Rubber)
  9.11 Concluding Remarks

10 Comparison of Various Energy Production Schemes
  10.1 Introduction
  10.2 Inherent Features of a Comprehensive Criterion
  10.3 The Need for a Multidimensional Study
  10.4 Assessing the Overall Performance of a Process
  10.5 Global Efficiency of Solar Energy to Electricity Conversion
    10.5.1 Photovoltaic Cells
    10.5.2 Battery Life Cycle in PV System
    10.5.3 Compact Fluorescent Lamp
    10.5.4 Global Efficiency of Direct Solar Application
    10.5.5 Combined-Cycle Technology
    10.5.6 Hydroelectricity to Electric Stove
  10.6 Global Efficiency of Biomass Energy
  10.7 Global Efficiency of Nuclear Power
  10.8 Discussion
  10.9 Concluding Remarks

11 The Zero-Waste Concept and its Application to Petroleum Engineering
  11.1 Introduction
  11.2 Petroleum Refining
    11.2.1 Zero-waste Refining Process
  11.3 Zero Waste in Product Life Cycle (Transportation, Use, and End-of-Life)
  11.4 No-Flaring Technique
    11.4.1 Separation of Solid-Liquid
    11.4.2 Separation of Liquid-Liquid
    11.4.3 Separation of Gas-Gas
    11.4.4 Overall Plan

12 Sustainable Refining and Gas Processing
  12.1 Introduction
    12.1.1 Refining
    12.1.2 Natural Gas Processing
  12.2 Pathways of Crude Oil Formation
  12.3 Pathways of Crude Oil Refining
  12.4 Additives in Oil Refining and Their Functions
    12.4.1 Platinum
    12.4.2 Cadmium
    12.4.3 Lead
  12.5 Emissions from Oil Refining Activities
  12.6 Degradation of Crude and Refined Oil
  12.7 Pathways of Natural Gas Processing
  12.8 Oil and Condensate Removal from Gas Streams
  12.9 Water Removal from Gas Streams
    12.9.1 Glycol Dehydration
    12.9.2 Solid-Desiccant Dehydration
  12.10 Separation of Natural Gas Liquids
    12.10.1 The Absorption Method
    12.10.2 The Membrane Separation
    12.10.3 The Cryogenic Expansion Process
  12.11 Sulfur and Carbon Dioxide Removal
    12.11.1 Use of Membrane for Gas Processing
    12.11.2 Nitrogen and Helium Removal
  12.12 Problems in Natural Gas Processing
    12.12.1 Pathways of Glycols and Their Toxicity
    12.12.2 Pathways of Amines and Their Toxicity
    12.12.3 Toxicity of Polymer Membranes
  12.13 Innovative Solutions for Natural Gas Processing
    12.13.1 Clay as a Glycol Substitute for Water Vapor Absorption
    12.13.2 Removal of CO2 Using Brine and Ammonia
    12.13.3 CO2 Capture Using Regenerable Dry Sorbents
    12.13.4 CO2 Capture Using Oxides and Silicates of Magnesium
    12.13.5 H2S Removal Techniques
  12.14 Concluding Remarks

13 Flow Assurance in Petroleum Fluids
  13.1 Introduction
    13.1.1 Hydrate Problems
    13.1.2 Corrosion Problems in the Petroleum Industry
  13.2 The Prevention of Hydrate Formation
    13.2.1 Thermodynamic Inhibitors
    13.2.2 Low Dosage Hydrate Inhibitors
    13.2.3 Kinetic Hydrate Inhibitors
    13.2.4 Antiagglomerants (AA)
  13.3 Problems with the Gas-processing Chemicals
  13.4 Pathways of Chemical Additives
    13.4.1 Ethylene Glycols (EG)
    13.4.2 Methanol
    13.4.3 Methyl Ethanol Amine (MEA)
    13.4.4 Di-ethanol Amine (DEA)
    13.4.5 Triethanolamine (TEA)
  13.5 Sustainable Alternatives to Conventional Techniques for Hydrate Prevention
    13.5.1 Sustainable Chemical Approach
    13.5.2 Biological Approach
    13.5.3 Direct Heating Using a Natural Heat Source
  13.6 Mechanism of Microbially Induced Corrosion
  13.7 Sustainable Approach to Corrosion Prevention
  13.8 Asphaltene Problems and Sustainable Mitigation
    13.8.1 Bacterial Solutions for Asphaltene and Wax Damage Prevention

14 Sustainable Enhanced Oil Recovery
  14.1 Introduction
  14.2 Chemical Flooding Agents
    14.2.1 Toxicity of the Synthetic Alkalis
    14.2.2 Alkalinity in Wood Ashes
    14.2.3 Characterization of Maple Wood Ash Producing the Alkalinity
    14.2.4 Alkalinity of Maple Wood Ash Extracted Solution
    14.2.5 Feasibility Test of a Maple Wood Ash Extracted Solution for EOR Applications
    14.2.6 Interfacial Tension (IFT) Equivalence
    14.2.7 Environmental Sustainability of Wood Ash Usage
    14.2.8 The Use of Soap Nuts for Alkali Extraction
  14.3 Rendering CO2 Injection Sustainable
    14.3.1 Miscible CO2 Injection
    14.3.2 Immiscible CO2 Injection
    14.3.3 EOR Through Greenhouse Gas Injection
    14.3.4 Sour Gas Injection for EOR
    14.3.5 Viscous Fingering
    14.3.6 Design of Existing EOR Projects
    14.3.7 Concluding Remarks
  14.4 A Novel Microbial Technique
    14.4.1 Introduction
    14.4.2 Some Results
    14.4.3 Concluding Remarks
  14.5 Humanizing EOR Practices

15 The Knowledge Economics
  15.1 Introduction
  15.2 The Economics of Sustainable Engineering
    15.2.1 Insufficiency of Current Models: The Analogy of the Colony Collapse Disorder
    15.2.2 Insufficiency of Energy Economics Theories
    15.2.3 Jevons' Paradox
    15.2.4 The "Marginal Revolution" as a Legacy of Utilitarian Philosophy
    15.2.5 What is Anti-nature About Current Modes of Economic Development?
    15.2.6 The Problem with Taxing (Carbon Tax or Otherwise)
  15.3 The New Synthesis
    15.3.1 Understanding the History of Reversals of Fortune
    15.3.2 True Sustainability is Conforming with Nature
    15.3.3 Knowledge for Whom?
    15.3.4 The Knowledge Dimension and How Disinformation is Distilled
  15.4 A Case of Zero-waste Engineering
    15.4.1 Economic Evaluation of Key Units of Zero-waste Scheme
    15.4.2 A New Approach to Energy Characterization
    15.4.3 Final Words

16 Deconstruction of Engineering Myths Prevalent in the Energy Sector
  16.1 Introduction
    16.1.1 How Leeches Fell out of Favor
    16.1.2 When Did Carbon Become the Enemy?
  16.2 The Sustainable Biofuel Fantasy
    16.2.1 Current Myths Regarding Biofuel
    16.2.2 Problems with Biodiesel Sources
    16.2.3 The Current Process of Biodiesel Production
  16.3 "Clean" Nuclear Energy
    16.3.1 Energy Demand in Emerging Economies and Nuclear Power
    16.3.2 Nuclear Research Reactors
    16.3.3 Global Estimated Uranium Resources
    16.3.4 Nuclear Reactor Technologies
    16.3.5 Sustainability of Nuclear Energy
    16.3.6 Global Efficiency of Nuclear Energy
    16.3.7 Energy from Nuclear Fusion

17 Greening of Petroleum Operations
  17.1 Introduction
  17.2 Issues in Petroleum Operations
  17.3 Pathway Analysis of Crude and Refined Oil and Gas
  17.4 Critical Evaluation of Current Petroleum Practices
  17.5 Management
  17.6 Current Practices in Exploration, Drilling, and Production
  17.7 Challenges in Waste Management
  17.8 Problems in Transportation Operations
  17.9 Greening of Petroleum Operations
    17.9.1 Effective Separation of Solid from Liquid, Gas from Liquid, and Gas from Gas
    17.9.2 Natural Substitutes for Gas Processing Chemicals (Glycol and Amines)
    17.9.3 Membranes and Absorbents
    17.9.4 A Novel Desalination Technique
    17.9.5 A Novel Refining Technique
    17.9.6 Use of Solid Acid Catalyst for Alkylation
    17.9.7 Use of Nature-based or Non-toxic Catalyst
    17.9.8 Use of Bacteria to Break Down Heavier Hydrocarbons
    17.9.9 Zero-waste Approach
    17.9.10 Use of Cleaner Crude Oil
    17.9.11 Use of Gravity Separation Systems
  17.10 Concluding Remarks

18 Conclusion
  18.1 Introduction
  18.2 The HSS®A® (Honey → Sugar → Saccharin® → Aspartame®) Pathway
  18.3 HSS®A® Pathway in Energy Management
  18.4 The Conclusions

Appendix 1 Origin of Atomic Theory as Viewed by the European Scientists
Appendix 2 Nobel Prize in Physics (2008) Given for Discovering Breakdown of Symmetry
References and Bibliography
Index
Foreword

Civilization is defined by energy policies which, in turn, are subject to partisan politics: one side of the political aisle supports petroleum production and use, while the other supports the injection of various alternate energy sources. The argument can be prolonged ad nauseam, depending upon the interests of those voicing the pros and cons. However, this book shows, with scientific arguments, that there is indeed a sustainable solution to petroleum production and operations.

Moreover, the book is about scientific change and proposes that the science of change be equated with the science of sustainability. To achieve this, the book draws critical distinctions between the outcomes of natural processes and the outcomes of engineered processes, distinctions that conventional scientific discourse and work have either missed or dismissed. A comprehensive theory is presented that answers many of the questions that remain unanswered with the current engineering tools. The book goes on to show that if most of the misconceptions had been addressed, the contradictions of our modern age would not have come to do us harm and deprive us of the sustainable lifestyle that has become the hallmark of our current civilization. This involves the deconstruction of a series of engineering myths that have been deeply rooted in the energy sector, with the potential for powerful impacts on modern civilization.

The book is well written and will cause scientists and engineers to think, and to look before they leap onto the latest bandwagon of myth, circumstantial evidence, and preference for a particular partisan funding source.

James G. Speight, Ph.D., D.Sc.
1 Introduction
1.1 The Science of Change: How Will Our Epoch Be Remembered?

Energy policies have defined our modern civilization. Politicizing energy policies is nothing new, but bipartisan bickering is new for the Information Age. The overwhelming theme is "change" (similar to the term "paradigm shift"), and both sides of the "change" debate remain convinced that the other party is promoting a flat-earth theory. One side supports petroleum production and usage, and the other side supports the injection of various "other" energy sources, including nuclear, wind, solar, etc.

This creates consequences for scientific study. The petroleum industry faces the temptation of siding with the group that promotes petroleum production and continued usage with only cosmetic change on the energy consumption side, namely in the form of "energy saving" utilities. The other side, of course, has a vested interest in opposing this move and in spending heavily on infrastructure development using renewable energy sources. Both sides seem to agree on one thing: there is no sustainable solution to the energy crisis, and the best we can do is to minimize the economic and environmental downfall. This book shows, with
scientific arguments, that both sides are wrong and that there is indeed a sustainable solution to petroleum production and operations. With the proposed schemes, not only would the decline of the economic and environmental conditions be arrested, but one could improve both these conditions, launching our civilization onto an entirely new path.

This book is about scientific change that we can believe in. It is not about repeating the same doctrinal lines that got us into this modern-day "technological disaster" mode (in the words of Nobel Laureate chemist Robert Curl). The science of true change is equated with the science of sustainability. This change is invoked by introducing both a natural source and a natural pathway. This book summarizes an essential, critical distinction between the outcomes of natural processes and the outcomes of engineered processes that conventional science discourse and work have either missed or dismissed. In contrast to what defines a change in a natural process, the outcomes of engineered processes can change if there is a change in only the source or only along the pathway, and there may be no net change in outcome if changes at the source cancel out changes along the pathway or vice versa.

Today, the entire focus has been on the source (crude oil in petroleum engineering), and the role of the pathway has been completely misunderstood or deliberately ignored. Numerous schemes are being presented as sustainable alternatives: "sustainable" because the source has been replaced with another source while keeping the process intact. This mode of cognition has been a very typical philosophy for approximately the last 900 years and has many applications in other disciplines, including mathematics (the theory of chaos). This book deconstructs this philosophy and presents a truly scientific analysis that involves both the source and the pathway. As a result, all the analyses are consistent with the first premise, and no question remains unanswered.

1.2 Are Natural Resources Finite and Human Needs Infinite?
Over a decade ago, Lawrence Lerner, Professor Emeritus in Physics and Astronomy at the University of Chicago, was asked to evaluate how Darwin's theory of evolution was being taught in each state of the United States (Lerner 2000). In addition to his attempt to find a
standard in K-12 teaching, he made some startling revelations. His recommendations created controversy, and many suggested that he was promoting "bad science" in the name of "good science." However, no one singled out another aspect of his findings. He observed that "some Native American tribes consider that their ancestors have lived in the traditional tribal territories forever." He then equated "forever" with "infinity" and continued, stating, "Just as the fundamentalist creationists underestimate the age of the earth by a factor of a million or so, the Black Muslims overestimate by a thousand-fold and the Indians are off by a factor of infinity" (Lerner 2005).

This confusion between "forever" and "infinity" is not new in modern European culture. In the words of Albert Einstein, "There are two things that are infinite, human stupidity and the Universe, and I am not so sure about the Universe." Even though the word "infinity" emerges from the Latin word infinitas, meaning "unboundedness," for centuries this word has been applied in situations in which it promotes absurd concepts. In Arabic, the equivalent word means "never-ending." Similar words exist in Sanskrit, and those words are never used in mathematical terms as a number. The use of infinity to enumerate something (e.g., an infinite number of solutions) is considered absurd in these cultures.

Nature is infinite - in the sense of being all-encompassing within a closed system. Somewhat paradoxically, nature as a system is closed in the sense of being self-closing. This self-closure property has two aspects. First, everything in a natural environment is used. Absent anthropogenic interventions, conditions of net waste or net surplus would not persist for any meaningful period of time. Second, nature's closure system operates without benefit of, or dependence upon, any internal or external boundaries. Because of this infinite dimension, we may deem nature - considered in net terms as a system overall - to be perfectly balanced. Of course, within any arbitrarily selected finite time period, any part of a natural system may appear out of balance. However, to look at nature's system without acknowledging all the subtle dependencies that operate at any given moment introduces a bias that distorts any conclusion asserted on the basis of such a narrow approach.

From where do the imbalance and unsustainability that seem so ubiquitously manifest in the atmosphere, the soil, and the oceans originate? As the "most intelligent creation of nature," humans were expected to at least stay out of the natural ecosystem. Einstein might have had doubts about human intelligence or the infinite nature of
the Universe, but human history tells us that human beings have always managed to rely on the infinite nature of nature. From the Central American Mayans to the Egyptian Pharaohs, from the Chinese Hans to the Mannaeans of Persia, and from the Edomites of the Petra Valley to the Indus Valley civilization of the Asian subcontinent, all managed to remain in harmony with nature. They were not necessarily free from practices that we would no longer consider acceptable (Pharaohs sacrificed humans to accompany the dead royal for the resurrection day), but they did not produce a single gram of an inherently anti-nature product, such as DDT. In modern times, we have managed to award a Nobel Prize (in medicine) for that invention.

Chapter 2 examines how our ancestors dealt with energy needs and the knowledge they possessed that is absent in today's world. Whatever technology these ancient civilizations lacked that many might look for today, our ancestors were careful not to develop technologies that might undo or otherwise threaten the perceived balance of nature that, today, seems desirable and worth emulating. Nature remains and will remain truly sustainable.
1.3 The Standard of Sustainable Engineering
From the beginning of the 20th century, about four generations ago, until the Great Depression, about three generations back, alcohol was placed under Prohibition in the United States, even for medicinal purposes. Today, the most toxic and addictive form of alcohol is not only permitted but is promoted as part of a reputedly "refined" lifestyle. Only about four to six generations ago, in the mid- to late 19th century, interracial marriages and marriages between cousins were forbidden (some still are, e.g., Paul and Spencer 2008), women and African-Americans did not have the right to vote in elections, and women (after marriage) and slaves (after sale) were required to change their surname and identity. In many parts of rural Quebec, well into the 20th century, women were required to replace all their teeth with a denture as a gift to the groom. Today, as part of the reaction to the extreme backwardness of these reactionary social practices, same-sex marriage is allowed in Canada, much of the USA, and Europe. Marriage among siblings is even allowed in some "enlightened" parts of Europe, and changing one's surname has become a sign of backwardness.

Although the religious establishment's various sanctions surrounding these relationships, not to mention the status of these
various relations themselves, have actually changed very little, a vast propaganda was loosed nonetheless upon the world proclaiming the alleged modernization of all human and social relations represented by such "reversals." However, all that has "changed" is the standard as to what is acceptable.

Similarly, about one to two generations ago, organic food was still the most abundant and most affordable food. Then along came the notorious "Green Revolution," fostered mainly in Third World countries by U.S.-based agribusiness interests often acting through governments. "Productivity" reportedly doubled and tripled in less than a decade. Today, organic food in general costs three times (a 200% increase) more than non-organic food, and in the process the actual quality of the food has declined. Yet the standard had been shifted again, rendering possible an extensive widening of profit margins in the most powerfully positioned sectors of food production and distribution.

When, where, and how does such a reversal in the trend of quality and pricing start, like the reversal in the trend of the quality of certain social relations and the value placed on them? In either case, investigating and establishing true sustainability entails a deep analysis of the entire matter of what constitutes a standard, what social forces are in a position to shift standards, how the process of rewriting standards operates, and where and when people may intervene to empower themselves and put an end to being victimized in such processes.

Chapter 2 discusses and discloses the problem inherent in the standards that we use today - standards or ideals that are not natural. Chapter 3 explains that by forcing a non-natural standard or ideal into all engineering calculations, all subsequent conclusions are falsified.

Nature exists in a state of dynamic balance both in space and time. In its attempts to comprehend fundamental changes of state within a natural environment, the conventional science of tangibles hits a wall, especially when it comes to the treatment of time's role at such bifurcation points. Why? Such a situation follows from the fact that the actual rate at which time unfolds at such points within a natural process is itself part of that process. This means that time cannot be treated as varying independently of that process. Very much akin to the problem of standards falling under the dictate of special, usually private, interests, the mathematics used by the science of tangibles becomes hijacked. For centuries, this mathematics warned its users about the falsehoods that will arise when differentiating a discontinuous function as though it were continuous,
or integrating over a region of space or time that is discontinuous as though it were continuous. In practice, meanwhile, pragmatism has often prevailed, and the reality of a natural system's output often bearing little or no relationship to what the theoretical mathematical model predicted is treated as an allowable source of error. However, what could be expected to eventuate if the standpoint of the science of intangibles, based on a conception of nature's system as one that is perfect (in the sense of complete and self-contained), were adopted instead? Many of the howling contradictions that emerge from retaining the conventional science of tangibles in areas where its modeling assumptions no longer apply would turn out to be removable paradoxes.

The conundrum arises in the first place mainly (and/or only) because the tangible aspects of any phenomenon do not go beyond a very small element in space, i.e., Δs → 0, and an even smaller element in time, i.e., Δt = 0 (meaning time t = "right now"). Within the space-time of a purely mathematical universe, Newton's calculus gives reliable answers concerning the derivative of a function based on taking the limit of the difference quotient of the function as the change in any selected variable of the said function approaches zero. Regardless of that fact, however, is the underlying first assumption correct? That is to say, what fit may be expected between the continuity of processes in a natural system and the continuity of the mathematical space-time that undergirds whether we can even speak of a function's derivative? In general, the results in the mathematical reality and the natural reality do not match, at least not without "fudging" of some kind. Is it reasonable to consign such a mismatch to "error," or is something else at work here? The authors believe the incompatibility has a deeper source, namely an insurmountable incompatibility between the continuum of mathematical space-time and the essentially dynamic balance of natural systems.

Since the dawn of the Industrial Revolution, the only models used and developed continually have been based on what we characterize in this work as the science of tangibles. This book reviews the outcome of these models as manifested in the area of energy management. The prejudicial components of "steady state" based analysis and assumptions have begun to emerge in their true light, mostly as an unintended byproduct of the rise of the Information Age. From this perspective, it becomes possible to clarify how and why modeling anything in nature in terms of a steady state has become unsustainable.
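To see the difference-quotient issue concretely, recall the textbook definition that Newton's calculus relies on (a standard formula, reproduced here for illustration, not an equation original to this book):

$$ f'(t) = \lim_{\Delta t \to 0} \frac{f(t + \Delta t) - f(t)}{\Delta t} $$

The limit exists only where $f$ is continuous and smooth. For a function with a jump, such as the Heaviside step $H(t)$ ($H = 0$ for $t < 0$, $H = 1$ for $t \geq 0$), the limit fails at $t = 0$; differentiating such a function as though it were continuous is precisely the falsehood the mathematics warns about.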
The unexpected fallout that we are ascribing to the emergence of the Information Age is simply this: in light of the undreamt-of expansion in the capacity to gather, store, and manipulate unprecedented quantities of data on anything, the science of tangibles that developed out of the European Renaissance, calling itself "New Science," has turned out to be a double-edged sword. All its models are based on the short term - so short that they practically eliminate the time dimension (equivalent to assigning Δt = 0). However, these models are promoted as "steady state" models with the assertion that, as Δt approaches ∞, a steady state is reached. This syllogism is based on two false premises: (1) that there is such a state as a steady state and (2) that nature is never in balance.

Proceeding according to a perspective that accepts and embraces the inherent overall dynamic balance of natural systems as given, it soon emerges that all these models are inherently flawed and are primarily responsible for transforming truth into falsehood. That is because their continued promotion obscures key differences between the real (natural) and the artificial (created by violating natural processes). Models based on the steady state have been developed and promoted by all the great names of natural and social science over the last 400 years, from Sir Isaac Newton and Lord Kelvin to the economist John Maynard Keynes. However, although presented as the only acceptable bridging transition from natural science to engineering, all such models are in fact freighted with the enormous baggage of a Eurocentric cultural bias.

A most glaring feature of technological development derived on the basis of this "steady state" bridging transition from theory to practice has been its denaturing of how time actually operates, reducing the meaningful sense of time to whatever exists "right now." Thus, for example, in medical science this has strengthened the tendency to treat symptoms first and worry about understanding how disease actually works later. In economic development, it amounts to increasing wasteful habits in order to increase GDP. In business, it amounts to maximizing quarterly income even if it means resorting to corruption. In psychology, it means maximizing pleasure and minimizing pain (both in the short term). In politics, it amounts to obliterating the history of a nation or a society. In mathematics, it means obsession with numbers and exact (and unique) solutions. In technology, it means promoting comfort at the expense of long-term damage. In philosophy, it means positivism, behaviorism, and materialism. In religion, it means obsession with ritual and
short-term gains. This steady state doesn't exist anywhere and contradicts the fundamental traits of nature, which is inherently dynamic. When it was recognized that steady states were non-existent, a time function was introduced in practically all analysis, this time function being such that, as t → ∞, the aphenomenal steady state emerged. That should have triggered an investigation into the validity of the time function. Instead, it was taken as proof that the Universe is progressively moving toward a state of heat death, an aphenomenal concept promoted by Kelvin.

Chapter 3 presents a comprehensive theory that can answer all the questions that remain unanswered with the current engineering tools. This theory combines mass and energy balance to demonstrate that mass and energy cannot be treated in isolation if one is to develop sustainable energy management schemes. The theory exposes the shortcomings of New Science on this score and is a powerful tool for deconstructing key spurious concepts, such as the following. The concept that "if you cannot see it, it doesn't exist" denies all aspects of intangibles, yet it forms the basis of environmental and medical science. The concept that "chemicals are chemicals," originally promoted by Linus Pauling, a two-time Nobel Laureate, assumes that the pathway of a chemical doesn't matter and is used in the entire pharmaceutical, chemical, and agricultural industry. Numerous physicists, including Einstein, Rutherford, and Fermi, believed the notion that "heat is heat," which was originally inspired by Planck.

With the above formulation, two important fallacies at the core of currently unsustainable engineering practice are removed: (1) the fallacy that human need is infinite and (2) the fallacy that natural resources are finite. These notions were not only accepted, but they were presented as the only knowledge. Yet they clearly violate the fundamental traits of nature. If nature is perfect, it is balanced. It is inherently sustainable. However, it does not follow that it can be artificially sustained. If human beings are the best creation of nature, it cannot be that the sustainability of nature is being threatened by human activities, unless these activities are based on flawed science.

In Chapter 3, the core reasons behind this apparent imperfection of nature are analyzed, and the science that prompted the fallacious conclusions is deconstructed. It is shown that within the core of current engineering design practices lies a fundamentally flawed notion of the ideal and the standard. This ideal is mentioned as the first premise of Newton's work,
followed by the first premise of Lord Kelvin. Both used a first premise that is aphenomenal. In philosophical terms, this first premise is equivalent to saying that nature is imperfect and is degrading to a lower quality as time progresses, and that in order to remove this degradation we must "engineer" nature. Chapter 3 shows that, by making a paradigm shift (starting from the first premise), if "nature is perfect" is used as the first premise, the resulting model can answer all the questions that are posed in the wake of the environmental consciousness of the Information Age. This approach has been long sought-after but has not been implemented until now (Yen 2007).
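As an illustration of the aphenomenal time function criticized above (a sketch on our part, not an equation taken from this book), consider the generic transient model routinely used in engineering analysis:

$$ y(t) = y_{ss}\left(1 - e^{-t/\tau}\right), \qquad \lim_{t \to \infty} y(t) = y_{ss} $$

The exponential form is chosen precisely so that the "steady state" value $y_{ss}$ emerges as $t \to \infty$; the conclusion is built into the premise, which is exactly the circularity the authors label aphenomenal.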
1.4 Can Nature Be Treated as If It Were Static?
The most significant contribution of the previous section is the recognition that the time function is the most important dimension in engineering design. If this is the case, one must then ask over what duration one should observe the effect of a given engineering design. It is of utmost importance to make sure that this duration is long enough to preserve the direction of the changes invoked by the design. Philosophically, this is equivalent to saying that short-term decisions cannot be based on an observation that is not absolutely true, or on something that would be proven false in the course of time. This notion is linked to the concept of sustainability in Chapter 4.

The term "sustainable" has become a buzzword in today's technology development. Commonly, the use of this term infers that the process is acceptable for a certain duration of time. True sustainability cannot be a matter of arbitrary definition, nor can it be a matter of a policy objective lacking any prospect of physical achievement in reality. In this book, a scientific criterion for determining sustainability is presented. The foundation of this criterion is "time-testedness." In Chapter 4, a detailed analysis of different features of sustainability is presented in order to understand the importance of using the concept of sustainability in every technology development model. A truly sustainable process conforms to natural phenomena both in its source, or its root, and in its process, or pathway.

However, as applied to resource engineering nominally intent on preserving a healthy natural environment, the science of tangibles
has frequently given rise to one-sided notions of sustainability. For example, a phenomenal root, such as natural gas supplies, is addressed with principal attention focused on whether there will be a sufficient supply over some finite projected duration of time. Such a bias does not consider which uses of the said resource should be expected to continue into the future, and for how long into the future. For example, should natural gas production in the Canadian province of Alberta be sustainable to the end of being used for (1) feedstock for heavy-oil upgraders, (2) export to residential and commercial markets in the U.S., (3) a principal supply for Canadian residential and commercial markets, or (4) some combination of two or more of the above?

Of course, absent any other considerations, sufficient supply to meet future demand can only be calculated by assuming that current demand continues into the future, including its current rate of growth, and hence is utterly aphenomenal. Both the origin and destiny of this source of natural gas and the mechanics by which it will enter the market are matters of speculation and not science. However, consciousness of this problem is obscured by retrofitting a pathway that might attain all potential projected demands targeted for this resource. Inconvenient facts, such as the likelihood that nuclear power is being strongly pushed to replace Alberta's natural gas as the fuel for future upgraders in situ at the tar sands, are not taken into account. Whatever the pathway, it is computed in accordance with a speculated prediction and, hence, is utterly aphenomenal. Meanwhile, whether that pathway is achievable or even desirable, given current engineering practices, is neither asked nor answered. The initial big "plus" supposedly in natural gas' favor was that it is "cleaner" than other petroleum-based sources. Given the quantity of highly toxic amines and glycols that must be added in order to make it commercially competitive for supplying residential markets, however, this aspect of its supposed sustainability would seem to raise more uncomfortable questions.

In addition, the natural criterion alluded to above means that true long-term considerations of humans should include the entire ecosystem. Some have called this inclusion the "humanization of the environment" and have put this phenomenon forward as a pre-condition to true sustainability (Zatzman and Islam 2007). The inclusion of the entire ecosystem is only meaningful when the natural pathway for every component of the technology is followed. Only such a design can assure both short-term (tangible) and long-term (intangible) benefits.
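One hedged way to formalize the "time-testedness" requirement sketched above (our illustrative notation; the book develops its full criterion in Chapter 4) is to demand that the net benefit $B$ of a process remain non-decreasing as time tends to infinity, rather than over a finite project horizon:

$$ \text{sustainable} \quad \Longleftrightarrow \quad \frac{dB}{dt} \geq 0 \;\; \text{as} \;\; t \to \infty $$

Any scheme evaluated only over a finite $\Delta t$ can pass a short-term test while violating this limit, which is how an apparently beneficial technology can fail the long-term test.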
1.5 Can Human Intervention Affect Long-term Sustainability of Nature?
It is commonly said that a set of data does not reach statistical significance unless several cycles are considered. For instance, if the life cycle of a forest is 100 years, a statistical study should cover at least several centuries. One must then ask what the duration of the human life cycle has been since humans started to live in communities. Past experience has shown that putting 10,000 years on the age of Adam (the alleged first human) was as dangerous as calling the earth flat. Recent findings show that it is not unreasonable that humans have lived as a society for over a million years. What, then, should be the statistically significant time period for studying the impact of human activities?

In Chapter 4, the focus lies on developing a sustainability criterion that is valid for time tending toward infinity. This criterion would be valid for both the tangible (very large number) and intangible (no-end) meanings of the word infinity. If this criterion is the true scientific criterion, then there should be no discontinuity between so-called renewable and non-renewable natural resources. In fact, the characterization should be re-cast on the basis of sustainable and non-sustainable. Only after this initial characterization is done can comparisons among various energy sources be made. Chapter 5 offers this characterization. It is shown in Chapter 5 that sustainable energy sources can be rendered fully sustainable (including the refining and emission capture), making them even more environmentally appealing in the short term (the long term being already covered by the fact that they meet the sustainability criterion).

In order to characterize various energy sources, the concept of global efficiency is introduced. This efficiency automatically shows that the unsustainable technologies are also the least efficient ones. It is shown that crude oil and natural gas are compatible with organic processes that are known to produce no harmful oxidation products.
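As a minimal sketch of how such a global efficiency can be computed (our illustration; the stage names and numbers below are hypothetical placeholders, not data from this book), the idea is to multiply the efficiencies of every stage along the entire pathway from source to end use instead of quoting one device's efficiency in isolation:

```python
# Global efficiency as the product of stage efficiencies along a whole
# energy pathway (illustrative sketch; all values are made-up placeholders).

def global_efficiency(stages):
    """Multiply the per-stage efficiencies (each in 0..1) of a pathway."""
    eta = 1.0
    for _name, stage_eta in stages:
        eta *= stage_eta
    return eta

# Hypothetical pathway: fuel extraction -> power plant -> grid -> appliance.
pathway = [
    ("extraction and processing", 0.90),
    ("thermal power plant",       0.35),
    ("transmission grid",         0.92),
    ("end-use appliance",         0.80),
]

print(f"global efficiency = {global_efficiency(pathway):.1%}")
# An appliance that looks "80% efficient" in isolation delivers under 25%
# of the source energy once every stage of the chain is accounted for.
```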
1.6 Can an Energy Source Be Isolated from Matter?

The most important feature of human beings is their ability to think (Homo sapiens literally means "the thinking man"). If one simply
thinks about this feature, one realizes that thinking is necessary to decide between two processes. In that sense, our brain is just like a computer, the option always being 0 or 1. The history of philosophy supports this cognition process, as evidenced by Aristotle's principle of the excluded middle. Even though the dogmatic application of this principle to define enemies with tangible features (the "with us or against us" syndrome) drew severe criticism from people of conscience, the principle was indeed instrumental throughout history in discerning between right and wrong, true and false, and real and artificial. This is not just a powerful tool. It is also an essential and sufficient decision-making tool because it provides one with the fundamental basis of "go" or "no go." It has been known as Al-furqan (the criterion) in Arabic and has been used throughout Qur'anic history as the most important tool for decision makers and revolutionaries (Zatzman and Islam 2007b).

Aristotle considered the speed of light to be infinite. The father of modern optics, Ibn Al-Haytham (also known as Al-Hazen), realized that this theory does not meet the fundamental logical requirement that a light source be an integral part of the light particles. Using this logic, he concluded that the speed of light must be finite. Many centuries later, Satyendra Nath Bose (1894-1974) supported Al-Hazen's theory and added that the speed of light must be a function of the density of the medium. Even though he did not mention anything about the light source being isolated from the light particles, he did not oppose Al-Hazen's postulate. To this day, Bose's theory has been considered a hallmark of material research (see recent Nobel prizes in physics that refer to Bose-Einstein theory), but somehow the source has been isolated from the light particles. This was convenient for promoting artificial light as being equivalent to real light (sunlight).

Chapter 6 characterizes light from various sources, based on the nature of those sources. It shows that, with such scientific characterization, it becomes evident why sunlight is the essence of life and artificial light (dubbed "white light") is the essence of death. The problems associated with artificial lights, ranging from depression and breast cancer to myopia (Chhetri and Islam 2008), are explained in terms of the characterization done in that chapter. It is shown that a natural light source is a necessary condition of sustainability. However, it is not sufficient, as the process of converting the energy source into light must not itself be unsustainable.
1.7 Is It Possible That Air, Water, and Earth Became Our Enemy?
For thousands of years of known history, air, water, fire, and earth matter were considered the greatest assets available for the sustainability of human civilization. By the same token, humans (earthlings; the English "earth" comes from the Old English eorthe and the Old German erda, and is very close to the Arabic الأرض (al-ardha), meaning "the natural habitat" of humans) were the greatest assets that a family, a tribe, or a community could have. Land was the measure of worldly success, whereas organic earth contents, produce, and derivatives (e.g., crops, vegetation, domestic animals) were the symbol of productivity. Ever since the reign of King David (CNN 2008a), non-organic minerals have also joined the value-added possessions of humans. Never in history were these natural resources considered liabilities. The natural value addition was as follows:

Air → Land → Water → Organic earth matter → Non-organic earth matter

In this path, air represents the most essential (for human survival) and abundant natural resource available. Without air, a human cannot survive beyond a few minutes. Land comes next because humans, who have the same composition as earth matter, need to connect with the earth for survival. This is followed by water, without which humans cannot survive beyond a few days. Water is also needed for all the organic matter that is needed for human survival. Organic matter is needed for food, and without it humans cannot survive beyond weeks. Non-organic earth matter is the least abundant (readily available), but it is also non-essential for human survival.

In the quest for survival and the betterment of our communities, the human race discovered fire. Even though the sun is the root energy source available to the earth, fire was not first discovered from solar energy. Instead, it was from natural wood, which itself was a naturally processed source of energy and matter. Nature did not act on energy and matter separately, and both energy and mass are conserved with inherent interdependency. No civilization had the illusion that energy and matter could be created. They all acted on the premise
that energy and matter are interrelated. The discovery of coal as an energy source was another progression in human civilization. There was no question, however, that the direct burning of coal was detrimental to the environment. Coal was just like green wood, except that more energy was packed into a certain volume of coal than into the same volume of green wood. Then came the petroleum era. Once again, a naturally processed source of energy was found that was much more efficient than coal. Even though petroleum fluids have been in use for millennia, the use of these fluids for burning and producing heat is relatively new and is a product of modern times.

There is no logical or scientific reasoning behind the notion that the emissions of petroleum products are harmful while the same from wood are not. Yet that has been the biggest source of controversy in the scientific community over the last four decades. Ironically, the scientists who promoted the notion that "chemicals are chemicals," meaning carbon dioxide is independent of the source or the pathway, are the same ones who became the most ardent proponents of the "carbon dioxide from petroleum is evil" mantra. How could this be? If carbon dioxide is the essence of photosynthesis, which is essential for the survival of plants, and those plants are needed for sustaining the entire ecosystem, how could the same carbon dioxide be held responsible for "destroying the planet"? The same group promotes nuclear energy as "clean" energy, considers genetically modified, chemical fertilizer- and pesticide-infested crop derivatives processed through toxic means to be "renewable," and proclaims that electricity collected with toxic silicon photovoltaics and stored with even more toxic batteries (all to be utilized through the most toxic "white light") is sustainable. In the past, the same logic was used in the "I can't believe it's not butter" culture, which saw the dominance of artificial fat (trans fat) over real fat (saturated fat).

Chapter 7 demystifies the above doctrinal philosophy that has perplexed the entire world, led by scientists who have shown little appetite for solving the puzzle, preferring instead to remain stuck in the Einstein box. This chapter discusses how the build-up of carbon dioxide in the atmosphere results in irreversible climate change and presents theories that show why such build-up has to do with the type of CO2 that is emitted. For the first time, carbon dioxide is characterized based on various criteria, such as its origin, the pathway it travels, and its isotope number. In this chapter, the current status of greenhouse gas emissions from various anthropogenic activities is summarized. The role of water in global warming is
also discussed. Various energy sources are classified based on their global efficiencies. The assumptions and implementation mechanisms of the Kyoto Protocol are critically reviewed. Also, a series of sustainable technologies that produce natural CO2, which does not contribute to global warming, is presented.
1.8 The Difference Between Sustainable and Unsustainable Products
The history of the modern age tells us that we cannot trust the doctrinal definition of truth and falsehood. This book is an attempt to describe fundamental principles from an entirely scientific standpoint. It is established in this book that truth is based on a true first premise, followed by the natural process of cognition. If this is the case for the thought process, or the abstract notion of truth, what is the case for products?

Based on the explicit first premise that natural = real, Chapter 8 establishes that a sustainable product is an outcome of (1) a natural source and (2) a natural process. If this is the case, then the sustainability criterion (established in Chapter 3) requires that a sustainable product emit products that continue to be beneficial to the environment. If a sustainable product is exposed to an unsustainable medium (e.g., microwave heating), the product reacts with the medium to minimize the damage done by that medium. An unsustainable product, on the other hand, continues to emit harmful products even if it is exposed to a sustainable environment (e.g., sunlight and natural atmosphere), and it emits much more toxic products if exposed to an unsustainable medium (e.g., microwave). This chapter establishes a protocol that can answer why no amount of unsustainable product or process can be allowed as standard practice in sustainable engineering.

Chapter 9 moves the argument one step further and uses two sets of products to highlight the scientific difference between sustainable and unsustainable products. It is shown that the only feature common between sustainable and unsustainable products lasts only an infinitesimally short period of time, i.e., Δt → 0. This chapter serves as a verification of the theory advanced in this book that any decision-making process based on short-term observation, i.e., the most tangible features, will be inherently wrong. It provides one with an engineering protocol for discerning between sustainable and unsustainable products.
1.9 Can We Compare Diamonds with Enriched Uranium?
Engineers are always charged with the task of comparing one scheme with another in order to help decide on a scheme that would be ideal for an application. It is commonly understood that a single-criterion analysis (e.g., fuel efficiency for energy management) will be inherently skewed because other factors (e.g., environmental concerns, economic factors, and social issues) are not considered. The same engineers are also told that they must be linear thinkers (the line promoted even in the engineering classroom being "engineers love straight lines"). This inherent contradiction is very common in post-Renaissance science and engineering.

Chapter 10 demystifies the characterization principle involved in energy management. Various energy systems are characterized based on (1) their sustainability and (2) their efficiency. This characterization removes the paradox of attempting to compare diamonds (the source being carbon) and enriched uranium (the source being uranium ore), a comparison that would inherently show diamonds to be a less efficient energy source than enriched uranium. The paradox is removed by including the time function (the essence of intangibles), which clearly shows that a sustainable energy source is also the most efficient one. This characterization is an application of the single sustainability criterion of Chapter 3.

After the energy systems are classified as sustainable or unsustainable, energy sources are ranked under different categories. Among the sustainable ones, the classification led to improvements of design in order to achieve even better performance (in terms of immediate benefits, the long-term benefits being embedded in the sustainability criterion). For the unsustainable energy systems, it is shown how the long-term environmental impacts snowball into truly disastrous outcomes. This relates to the "tipping point" that many environmental pundits have talked about but that, until now, has not been introduced with a scientific basis.
1.10 Is Zero-waste an Absurd Concept?
Lord Kelvin's theory leads to this engineering cognition: you move from point A to point B, then from point B to point C, then back to point A. Because you came back to point A, you have not done any work.
However, what if a person has actually done work (W) and spent energy (Q) to make the travel? Modern thermodynamics asserts that the claim of work is absurd and that no useful work has been done. This is the engineering equivalent of stripping the worker's conscious participation of any value. Rather than finding the cause of this confusion, which is as easy as saying that the time function should be included in the movement and that you have actually traveled from point A (time 1) to point A (time 2), what has been introduced is this: because you didn't do any useful work, W, the energy that you have spent, Q, is actually 0. The above example is not allegorical; it is real, and anyone who has attempted to design an engineering system using conventional thermodynamics principles would understand this. That is why any attempt to include real work (as opposed to useful work) or real heat (as opposed to heat to produce useful work) would blow up the engineering calculations with divisions by zero all over the place. This is equivalent to how economic calculations blow up if the interest rate is set equal to zero.¹

Truly sustainable engineering systems require the use of zero-waste. This, however, would make it impossible to move further in engineering design using the conventional tools that have no tolerance for zero-waste (similar to a zero interest rate in economic models). This is why an entire chapter (Chapter 11) is dedicated to showing how petroleum engineering design can be done in the zero-waste mode. The scientific definition of a zero-waste scheme is followed by an example of zero-waste design with detailed calculations showing how such a scheme can be formulated. Following this, various stages of petroleum engineering are discussed in light of the zero-waste scheme.

1. The standard practice of financial organizations using software is to enter a small number, e.g., a 0.1% interest rate, to simulate a 0% rate.
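The footnote's point about zero interest rates can be made concrete with a minimal sketch (an illustrative example, not from the book): the standard annuity-payment formula divides by zero at exactly r = 0, so financial software substitutes a small rate such as 0.1% instead of handling the limit.

```python
# A minimal sketch (illustrative, not the authors' calculation) of why
# economic software cannot tolerate a true zero interest rate: the
# closed-form annuity formula P*r / (1 - (1 + r)**-n) evaluates to 0/0
# at r = 0, so the software enters a tiny rate (e.g., 0.1%) instead.
def annuity_payment(principal, rate, n_periods):
    if rate == 0.0:
        # The limit of the formula as rate -> 0 is simply principal / n.
        return principal / n_periods
    return principal * rate / (1.0 - (1.0 + rate) ** -n_periods)

print(annuity_payment(120_000, 0.001 / 12, 120))  # the 0.1% "simulated zero"
print(annuity_payment(120_000, 0.0, 120))         # the true zero-rate limit
```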
1.11 How Can We Determine Whether Natural Energy Sources Last Forever?

This would not be a valid question in the previous epochs of human civilization. Everyone knew and believed that nature was infinite and would continue to sustain itself. In the modern age, we have been told that human needs are infinite, humans are liabilities,
and nature is finite. We are also told that carbon is the enemy and enriched uranium is the friend. We are told that carbon dioxide is the biggest reason our atmosphere is in crisis (yet carbon dioxide is the essence of photosynthesis) and that organic matter is the biggest reason our water is polluted (yet organic matter is the essence of life). If we seriously address the questions asked in the previous sections, the question posed in this section becomes a matter of engineering details. These engineering details are provided in Chapters 12 through 14. Chapter 12 identifies root causes of unsustainability in refining and gas processing schemes. It is shown that refining, as practiced in modern times, remains the most important reason behind the toxic outcome of petroleum product utilization, for natural gas, liquid petroleum, and solid final products (e.g., plastics) alike. Alternatives are proposed so that refining is done in such a way that the refined products retain real value yet do not lose their original environmental sustainability (as in crude oil). Chapter 13 identifies root causes of unsustainability in fluid transport processes. It is shown that the main causes of such unsustainability arise from the use of artificial chemicals in combating corrosion, hydrate formation, wax deposits, asphaltene deposition, etc. The pathway analysis of these chemicals shows clearly how detrimental they are to the environment. For each of these chemicals, alternative chemicals are proposed that do not suffer from the shortcomings of the artificial chemicals yet prevent the flow assurance problems that prompted their usage in the first place. Chapter 14 proposes a host of sustainable enhanced oil recovery (EOR) techniques. Historically, EOR techniques have been considered the wave of the future and are believed to be a major source of increasing petroleum production in the upcoming decades. However, to date all EOR techniques adopt unsustainable practices, and if these practices are not rendered sustainable, an otherwise sustainable recovery scheme will become unsustainable. Earlier chapters establish the criteria for rendering such schemes sustainable, and Chapter 14 shows how this most promising aspect of petroleum recovery can become sustainable.
1.12 Can Doing Good Be Bad Business?
Chapter 15 removes all the above paradoxes and establishes the economics of sustainable engineering. This chapter shows that
doing good is actually good business, and it deconstructs all the models that violate this time-honored first premise. This chapter brings back a pricing system that honors real value, replacing the artificial pricing system that has dogged the petroleum industry for so long. Despite the length of this chapter, it doesn't cover all the details, because that would be beyond the scope of this book. However, it provides enough detail that decision makers can proceed with sufficient economic backing. After all, engineering is all about practical applications of science. No practical application can take place without financial details. This is true even for sustainable engineering.
1.13 Greening of Petroleum Operations: A Fiction?
This book is all about a true paradigm shift - a paradigm shift from ignorance to knowledge. A true paradigm shift amounts to a revolution because it challenges every concept, every first premise, and every process. No revolution can take place if false perceptions and misconceptions persist. Chapter 2 begins by highlighting the misconceptions that have become synonymous with the modern age. In Chapter 16, the outcomes of those misconceptions are deconstructed. It shows that Enron should never have been promoted as the most creative energy management company of our time, just as DDT should never have been called the "miracle powder," and that we didn't have to wait for decades to find out what false claims were made. Most importantly, this chapter shows that we must not fall prey to the same scheme again. Chapter 16 discusses how, if the misconceptions of Chapter 2 were addressed, the contradictions of modern times that have come to deprive us of a sustainable lifestyle - a deprivation that has become the hallmark of our current civilization - would disappear. In particular, this chapter presents and deconstructs a series of engineering myths that have been deeply rooted in the energy sector with an overwhelming impact on modern civilization. Once the mythical drivers of our energy sector are removed, it becomes self-evident that we have achieved a complete reversal of slogans - a true paradigm shift. Chapter 17 summarizes all the changes that would take place if the sustainable schemes were implemented. Reversing global warming would just be icing on the
cake. The true accomplishment would be the reversal of the pathway to knowledge, consistent with the scientific meaning of homo sapiens in Latin or insan in Arabic.² If we believe that "humans are the best creation of nature," then the expectations of this book are neither unreal nor unrealistic.
2. The Arabic word insan is derived from a root that means sight, senses, knowledge, science, discovery by the senses, and responsible behavior. It also means the one who can comprehend the truth and the one who can comprehend the creation, or nature.
2 A Delinearized History of Civilization and the Science of Matter and Energy

2.1 Introduction
The term "sustainable" cannot be a matter of definition. If it were so, the greening of petroleum operations would simply mean painting gas stations green, which has actually been done (Khan and Islam 2007). In order that "sustainability" or "green" status be deemed real and provided with a scientific basis, there are criteria to be set forth and met. Nowhere is this need clearer or more crucial than in the characterization of energy sources. Petroleum fuels being the principal driver of today's energy needs, one must establish the role of petroleum fluids in the overall energy picture. One must not forget that petroleum products (crude oil and gas) are natural resources and they cannot be inherently unsustainable, unless there is an inherent problem with nature. Here it is crucial to understand nature and natural phenomena without any preconceived bias. If we have to rely on our scientists who promoted New Science theories and laws, we won't make much headway. Lord Kelvin, whose "laws" are a must for modern day engineering design, believed that the earth is progressively moving toward a worse status that would eventually lead to the "heat death" of 21
the habitat of the "best creation of God." So, if Kelvin were correct, we are progressively moving toward a greater energy crisis, and indeed we need to worry about how to fight this "natural" death of our planet. Kelvin also believed flying an airplane was an absurd idea, so absurd that he didn't care to be a member of the aeronautical club. Anyone would agree that it is not unreasonable to question this assertion of Lord Kelvin, but the moment one talks about the environment progressively improving, if left alone (by humans, of course), many scientists break out in utter contempt and invoke all kinds of arguments of doctrinal fervor. How, then, do these scientists explain that, if the earth is progressively dying, life evolved from non-biological materials and eventually a very sophisticated creature called homo sapiens (thinking man) came to exist? Their only argument becomes the one that has worked for all religions: "you have to believe." All of a sudden, it becomes a matter of faith, and all the contradictions that arise from that assertion of Lord Kelvin become paradoxes that we mere humans are not supposed to understand. Today, the Internet is filled with claims that Kelvin is actually a god, and there is even a society that worships him. This line of argument cannot be scientific. The new talk is that of hydrogen fuel, and the world is now obsessed with getting rid of carbon. Hydrogen fuel is attractive because it is not carbon. The slogan is so overpowering that a number of universities have opened up "hydrogen chairs" in order to advance humanity out of the grip of carbon. In 2005, the President of Shell Canada talked about hydrogen being the wave of the future. But what hydrogen are we talking about? Could it be the hydrogen that is present in hydrocarbons, after we get rid of the carbon, of course? This question would be discarded as "silly." Everyone should know he meant hydrogen as in fuel cells, hydrogen as in the dissociation of ultra-pure water, and so on. This is the hydrogen, one would argue, that produces clean water as a byproduct, and nothing but water. As petroleum engineers, we are supposed to marvel at how nature produces water infested with countless chemicals, many of which are not even identifiable. No one dares question whether this is possible, let alone beneficial. Until such a question is raised and actually investigated, however, any claim can be made. After all, we have taken for granted the idea that "if it cannot be seen, it does not exist" (Zatzman and Islam 2007). An even more convincing statement would be, "If the Establishment says so, it exists." What progress has been made, and on what pathways, in rolling hydrogen out as
the energy source of the future? Apart from the fact that produced water will invariably contain toxic residue of the catalysts that are being used, especially since high-temperature systems are involved, one must also worry about where the hydrogen will come from. Shell invested billions of dollars in answering this question, and the answer is "water." Thankfully, water doesn't have any carbon, so nothing will be "dirty," if we can just separate hydrogen from its ever-so-hard-to-break bonds with oxygen. Visualize this, then: We are breaking water so we can produce hydrogen, so it can combine with oxygen, to produce water. In the meantime, we produce a lot of energy. Is this a miracle or what? It's less a miracle than a magic trick. It is a miracle as long as the science that is used is the same as that which called DDT a miracle powder. How could such a process be sold with a straight face? What is the efficiency of such a process? It turns out the efficiency is quite high, but only so long as the focus is restricted to the hydrogen and oxygen reaction in a cocoon completely isolated from the rest of the world, a status that doesn't exist in nature. If one analyzes the efficiency using a new definition that includes more components than just those on which current discussion has so tightly focused, the efficiency indeed becomes very low (Khan et al. 2006). However, low efficiency is not the only price to pay for such an anti-nature technology. Recent findings of Caltech researchers indicate that the focus on creating hydrogen fuel will be disastrous - far worse than the crisis created by the "oil addiction." Can we, must we, wait for another few decades to verify this claim? Then comes the energy panacea - wind energy, photovoltaics, electric cars. No one argues that they are anything but renewable and perfectly clean, right? Wrong. If one considers what is involved in producing and utilizing these "clean" energy sources, it becomes clear they are neither clean nor productive. Consider the analysis depicted in Figure 2.1. This figure shows how the solar panel itself is both toxic (emitting silicon dioxide continuously) and inefficient (efficiency around 15%). As the photovoltaic panel charges the battery, efficiency goes down and toxicity is further increased. When the electricity reaches the utilization stage (the light source), a triple deterioration occurs: 1) fluorescent bulbs contain mercury vapor, 2) efficiency drops further, and 3) the light that is emitted is utterly toxic to the brain (for details, see Chhetri and Islam 2008).
Figure 2.1 Environmental unsustainability of photovoltaics. [Figure: a chain from solar panel (η₁ = 15%, toxic silica over the life cycle) to battery (toxic and heavy metals) to fluorescent bulb (hazardous Hg); the global efficiency η₁η₂η₃ << η₁, i.e., significantly lower.]
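The multiplicative logic behind the "global efficiency" of Figure 2.1 can be shown with a minimal sketch. The panel efficiency (15%) is the figure cited in the text; the battery and bulb efficiencies below are assumed, illustrative values, not the authors' numbers:

```python
# A minimal sketch (assumed stage efficiencies) of the global-efficiency
# argument of Figure 2.1: chaining the stages multiplies their
# efficiencies, driving the overall figure far below the panel's 15%.
stages = {
    "solar panel (eta1)": 0.15,       # panel efficiency cited in the text
    "battery (eta2)": 0.80,           # assumed round-trip storage efficiency
    "fluorescent bulb (eta3)": 0.10,  # assumed electricity-to-light efficiency
}

global_eta = 1.0
for name, eta in stages.items():
    global_eta *= eta
    print(f"after {name}: cumulative efficiency = {global_eta:.3f}")
# Cumulative: 0.15 -> 0.12 -> 0.012, i.e., roughly 1% of incident sunlight
# ends up as light, before counting the toxic by-products of each stage.
```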
The latest buzz is renewable energy. The worldwide production of biodiesel in 2005 was reported to be 2.2 billion gallons (Martinot 2005). Even though this is less than 7% of that year's total diesel need of the USA, it is considered significant because it is a move in the right direction. Still, only a few anti-globalization activists raised the question of what wisdom could be behind converting food into fuel, particularly in view of the global food shortage in 2007 and the global financial crisis in 2008, and few scientists considered biofuel unsustainable. Chhetri and Islam (in press) analyzed the biofuel production process using biodiesel as an example. They found biodiesel to be anything but "green" for the following reasons: 1) the source is either infested with chemical fertilizers and pesticides or genetically modified; 2) the process of making biodiesel is not the one proposed by Mr. Diesel, who died before his dream of using vegetable oil to produce combustible fuel came true; and 3) by using a catalytic converter, the emitted CO2 would be further contaminated, and its quality would not be much different from that emitted by non-biodiesel. This would nullify practically all the reasons behind investing in biodiesel. The following figure (Figure 2.2) shows a schematic of how the various functional groups of a source (Item 1 above) are linked to one another, accounting for an overall balance of the ecosystem. This figure shows clearly that it is impossible to "engineer" nature and expect to keep track of its impacts. This is particularly true because of the special features of nature that do not allow any finite boundary to be drawn. Even though mass balance, the most fundamental engineering equation, is based on a finite volume enclosed inside a boundary, this first premise of the mass balance equation is clearly aphenomenal.
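For reference, the control-volume mass balance alluded to above can be stated in its standard engineering form (the textbook equation, not the authors' notation):

\[
\frac{dm_{\mathrm{cv}}}{dt} \;=\; \sum_{\mathrm{in}} \dot{m} \;-\; \sum_{\mathrm{out}} \dot{m}
\]

where \(m_{\mathrm{cv}}\) is the mass inside the control volume and \(\dot{m}\) denotes the mass flow rates crossing its boundary. The equation is exact only if a closed boundary can be drawn and every crossing stream accounted for - precisely the finite-boundary first premise that the text calls aphenomenal for a continuous nature.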
Figure 2.2 The role of genetically engineered insecticidal Bt-corn in altering the ecosystem (redrawn from Losey et al. 2004).
This is an important issue because, if one ignores the fact that nature is continuous, meaning that a particular segment can never be isolated, then no engineering design would deem any of these processes unsustainable, even if the definition of sustainability is a strictly scientific one. Zatzman and Islam recognized that an "artificial" object, though it comes into reality by its mere presence, behaves totally differently than the object it was supposedly emulating (2007a). This would explain why vitamin C acts differently depending on its origin (e.g., organic or synthetic), as does every other artificial product, including antibiotics (Chhetri et al. 2007; Chhetri and Islam 2007). However, not uncharacteristically, a number of papers have appeared that deny any connection between cause and effect, such as Dively (2007) in the context of genetic engineering and colony collapse disorder. This is entirely expected. The nature of natural phenomena is such that there cannot be a single cause behind a phenomenon. As a consequence, each cause is a suspect, but with New Science the suspicion can be buried within statistical uncertainties, depending on which agency is paying for the study. This is beyond the conventionally accepted conflict between corporate profit and public access to information (Makeig 2002). It is rather about creating disinformation in order to increase profit margins, as noted earlier (Lähateenmäkia et al. 2002). Because economics is the driver of modern engineering, the short term is the guiding principle behind all engineering calculations. The focus on the short term poses a serious problem in terms of scientific investigation. New Science says that there is no need, or room, for intangibles
unless one can verify their presence and role with some experimental program - experimental meaning controlled conditions, probably in a laboratory, with experiments designed through the same science that one has set out to "prove." In contrast, Khan and Islam (2007a, 2007b) argued that the science of tangibles so far has not been able to account for the disastrous outcomes of numerous modern technologies. In the same way that scientists cannot determine the cause of global warming with a science that assumes all molecules are identical, thereby making it impossible to distinguish between organic CO2 and industrial CO2, scientists cannot determine the cause of diabetes unless there is a paradigm shift that distinguishes between sucrose in honey and sucrose in Aspartame (Chhetri and Islam 2007). Have we progressed in terms of knowledge in the last few centuries? Why is it, then, that we still don't know how a dead body can be preserved without using toxic chemicals, as was done for mummies in the days of ancient Egypt? Of course, it is said that after Newton everything changed and we evolved into our modern-day civilized status. Which of Newton's laws was necessary to design the structural marvels of the pyramids? Which part of Ohm's law was used to run the fountains of the Taj Mahal? Whence came the Freon that must have been used to run the air conditioners of Persian palaces? What engineering design criteria were used to supply running water in Cordova? Which ISO standard was used to design the gardens of Babylon? One of the worst weapons of disinformation has been the notion that things can be compared on the basis of a single dimension. The most progress that this disinformation machine is willing to accept is the extension to three spatial dimensions. If the time dimension is not included as a continuous function, the extension to 3D or even 4D (e.g., using either a discrete or a continuous temporal function) cannot distinguish between a man caressing his child or hitting her. It also cannot distinguish between a dancing person and a sleeping person, depending on the frequency of the temporal function (see the sketch below). If that is preposterous in a social setting, imagine not being able to distinguish between the process of mummifying and modern-day "preservation," marble and cement, coral and concrete, mother's milk and baby formula, a beeswax candle and a paraffin wax candle, wood and plastic, silk and polyester, or honey and sugar. How can we call ourselves civilized if our science makes it more difficult to discern between DDT and snake venom, fluorescent light and sunlight, lightning energy and nuclear energy?
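The remark about the frequency of the temporal function is, in signal-processing terms, the aliasing problem. A minimal sketch (a hypothetical illustration, not the authors' example): if an oscillating motion is sampled exactly once per period, the samples are indistinguishable from no motion at all.

```python
# A minimal sketch of aliasing: sampling a moving ("dancing") signal too
# coarsely makes it indistinguishable from a static ("sleeping") one.
import math

def position(t, freq_hz=1.0):
    return math.sin(2 * math.pi * freq_hz * t)  # oscillating "dancer"

# Sample exactly once per oscillation period: every sample sees the same value.
samples = [position(t) for t in range(5)]  # t = 0, 1, 2, 3, 4 seconds
print(samples)  # ~[0.0, 0.0, 0.0, 0.0, 0.0] -- looks like no motion at all
```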
2.2 Fundamental Misconceptions of the Modern Age
2.2.1 Chemicals are Chemicals and Energy is Energy

Paul Hermann Müller was credited with inventing dichlorodiphenyltrichloroethane (DDT) and awarded a Nobel Prize in medicine and physiology. This marks the beginning of a chemical addiction that has paralyzed modern society. In Müller's invention, there was no distinction between natural DDT and synthetic DDT; the thrilling news was that synthetic DDT could be mass-produced whereas natural DDT could not. (Even the most devoutly religious scientist would not question why this mantra would make God a far less efficient creator than humans.) This very old misconception of "chemicals are chemicals," premised on the useless and actually harmful fact that a natural substance and its synthesized non-natural "equivalent" have similar tangible features at Δt = 0 (an equilibrium state that exists nowhere in nature at any time), got new life. What happened with DDT and many other products of high-tech chemical engineering was that they were developed as part of the so-called "peaceful competition" between differing social systems. U.S. industry patented all the artificial shortcuts. Soviet industry started copying and replicating so as not to lose place in the competition, even though the peoples of the USSR had developed natural alternative pesticides that were perfectly safe. For example, malaria was eliminated in the southern parts of the former Soviet Union in the 1920s by spreading unrefined crude oil on the swamps in which the malaria-carrying mosquitoes bred (Brecht 1947). Noting the accumulation of DDT in the environment by the early 1960s, the American writer Rachel Carson famously condemned its potential to kill off bird life and bring about a "silent spring." But she missed the even greater long-term damage inflicted on scientific thinking and the environment by this "peaceful" competition - a disaster that could follow the "thermonuclear holocaust," which the "peaceful" competition was supposed to avoid. Synthetic chemicals were acceptable because, no matter which individual chemical constituent one chose, it existed in nature. The fact that almost none of the synthetic combinations of these otherwise naturally-occurring individual elements had ever existed in nature was ignored. A top scientist with DuPont claimed that dioxin, the most toxic product emitted from polyvinyl chloride (PVC), existed in nature; therefore,
synthetic PVC should not be harmful. The mere fact that something exists in nature tells us nothing about the mode of its existence. This is quite crucial when it is remembered that synthetic products are used up and dumped as waste in the environment without any consideration being given, at the time their introduction is being planned, to the consequences of their possible persistence or accumulation in the environment. All synthetic products "exist" (as long as the focus is on the most tangible aspect) in nature in a timeframe in which Δt = 0. That is the mode of their existence, and that is precisely where the problem lies. With this mode, one can justify the use of formaldehyde in beauty products, anti-oxidants in health products, dioxins in baby bottles, and bleaches in toothpaste, all the way up to every pharmaceutical product promoted today. Of course, the same focus, based on New Science, is applied to processes as well as products. This very old misconception, dating back to Aristotle's time (as will be discussed later in this chapter), got a new life inspired by the work of Linus Pauling (see Fig. 2.3), the two-time Nobel Prize winner in Chemistry and Peace, who justified using artificial products because artificial and natural both have similar tangible features (at Δt = 0). He considered an organic source of vitamin C a "waste of money" because artificial vitamin C has the same ascorbic acid content and is cheaper. By 2003, it became evident that Pauling was wrong when organic vitamin C was found to prevent cancer, as opposed to the artificial one that would actually induce cancer. (Both Pauling, who promoted vitamin C therapy, and his wife died of cancer.) However, the actual reason why natural vitamin C acts so differently from artificial vitamin C remains a mystery (Baillie-Hamilton 2004). This very finding underlines the need to identify the shortcomings of New Science. Even the Linus Pauling Institute now considers Pauling's stance on artificial vitamin C erroneous. However, the old justification for introducing synthetic materials continues. Synthetic chemicals are considered acceptable because, no matter which individual chemical constituent one chooses, it exists in nature. The fact that almost none of the synthetic combinations of these otherwise naturally-occurring individual elements had ever existed in nature is ignored. Figure 2.4 shows artificial material on the left and natural material on the right. With conventional analysis, they are both high in carbon and hydrogen. In fact, their bulk composition is approximately the same. New Science discards non-bulk materials and ignores the dynamic nature of atomic or subatomic particles. The result of this
Figure 2.3 Linus Pauling didn't see the difference between natural and artificial vitamin C.
Figure 2.4 Artificial materials may appear more sophisticated and cleaner than natural material, but that should not be a criterion for determining sustainability.
Figure 2.5 Artificial light can only insult the environment, while natural light can sustain the environment.
analysis automatically makes further analysis inherently biased. With that analysis, if one burned the plastic material on the left, it would produce the same waste as the one on the right. Yet the one on the left will continuously emit dioxin into nature, while the one on the right will emit materials necessary for renewing and sustaining life. Of course, it is common knowledge that the wood stove is good for barbecue or blackened Cajun chicken, and that after every forest fire a renewal of the ecosystem occurs. Not known in New Science, however, is that we have no mechanism to establish the long-term fate of carbon or carbon dioxide emitted from the wood stove as compared with artificial heat sources. As long as atoms are all the same and only bulk composition is considered, there will be no distinction between organic food, chemically infested food, and genetically modified food, or between sugar and honey, coke and water, snake venom and pesticide,
extra-virgin olive oil and ultra-refined olive oil, stone-ground flour and steel-ground flour, free-range eggs and farm eggs, white bread and whole-wheat bread, white rice and brown rice, real leather and artificial leather, farm fish and ocean fish - and the list truly goes on. While this is contrary to any common sense, no one seems to question the logic behind selecting the atom as a unit of mass and assigning uniform characteristics to every atom. Yet this is the most important reasoning behind the misconception that chemicals are chemicals. Just as assuming that atoms are rigid spheres that can be treated as the fundamental building block created confusion regarding the quality of matter, assuming photons to be the fundamental unit of light has created confusion regarding how nature uses light to produce mass (Fig. 2.5). Because New Science considers energy as energy, irrespective of the source and in violation of the fundamental continuity in nature, there is no way to distinguish between light from the sun and light from the fluorescent light bulb. We don't even have the ability to distinguish between microwave-oven heat and woodstove heat. It is known, however, that the microwave can destroy 97% of the flavonoids of certain vegetables. The current techniques do not allow us to include the role of flavonoids in any meaningful way, similar to the role of catalysts, and certainly do not explain why microwaves would do so much damage when the heating level is the same as boiling on an electric stove (Chhetri and Islam 2008). Similarly, nuclear fission in an atomic weapon is considered to be the same as what is going on inside the sun. The common saying is that "there are trillions of nuclear bombs going off every second inside the sun." In other words, "energy is energy." This is the misconception, shared by nuclear physicists including Nobel laureates Enrico Fermi and Ernest Rutherford, that served to rationalize nuclear energy as clean and efficient in contrast to "dirty," "toxic," and even "expensive" fossil fuel. The theory developed by Albert Einstein, who is credited with discovering the theory that led to the invention of the atomic bomb, spoke of the dynamic nature of mass and the continuous transition of mass into energy, which could have been used to show clearly that "chemicals are not chemicals" and "energy is not energy." Its abuse as a prop for aphenomenal notions about the artificial - synthesized processes and the output of industry being an improvement upon and even a repairing of nature - violates two deeply fundamental principles: 1) everything in nature is dynamic, and 2) nature harbors no system
or sub-system that can be considered "closed" and/or otherwise isolated from the rest of the environment. This is only conceivable where Δt = 0. Based on this misconception, Dr. Müller was awarded a Nobel Prize in 1948. Sixty years later, even though Dr. Müller's first premise was false, we continue to award Nobel Prizes to those who base their studies on the same false premise. In 2007 (awarded in January of 2008), the Nobel Prize in medicine was offered to three researchers for their "discovery of the principle for introducing specific gene modifications in mice by the use of embryonic stem cells." What is the first premise of this discovery? Professor Stephen O'Rahilly of the University of Cambridge said, "The development of the gene targeting technology in mice has had a profound influence on medical research...Thanks to this technology we have a much better understanding of the function of specific genes in pathways of the whole organism and a greater ability to predict whether drugs acting on those pathways are likely to have beneficial effects in disease." No one asks why only beneficial effects are to be anticipated from the introduction of "drugs acting on those pathways." When did intervention in nature, at this level of ignorance about the pathway, ever yield a beneficial result? Can one example be cited from the history of the world since the Renaissance? We have learned nothing from Dr. Müller's infamous "miracle powder," also called DDT. In 2009, the Chemistry Nobel Prize was awarded to three scientists who discovered the green fluorescent protein (GFP). While discovering something that occurs naturally is a lofty goal and has been considered worth pursuing since pre-historic times, New Science does not allow the use of this knowledge for anything other than making money, without regard to the long-term impact or the validity of the assumptions behind the application. This Nobel Prize-winning technology is being put to work by implanting these proteins in other animals, humans included. Two immediate applications are 1) the monitoring of brain cells of Alzheimer's patients, and 2) use as a signal to monitor other things (including crops infested with disease) that need to be interfered with. Both of these are money-making ventures, and both are based on false premises. For instance, the first application assumes that the implantation (or mutation) of these "foreign" proteins will not alter the natural course of the brain cells (affected by Alzheimer's or not). So, what will be monitored is not what would have taken place; it is rather what is going to happen after the
implant is in place. The two pathways are not and cannot be identical. A more in-depth study, not something that would be allowed to grow out of New Science, would show that this line of application is similar to the use of the CT scan for detecting cancer, which is at least 50 times more damaging than the X-ray and is prone to causing cancer itself (Brenner and Hall 2007). Where does money play a role here? Consider that the study of using CT scans that "revolutionized" the detection of lung cancer was funded by tobacco companies, the worst perpetrators of cancer in the modern age. Going back in time, in 1979 Nobel Prizes were awarded to Hounsfield and Cormack for CT scan technology. It would take us thirty years to discover that this Nobel Prize-winning technology actually causes many of the cancers it proclaims to detect.
2.2.2 If You Cannot See it, it Does Not Exist
It is well known that the mantra "dilution is the solution to pollution" has governed the environmental policies of the modern age. Rather than addressing the cause and removing the source of pollution, this policy has enabled operators to "minimize" impact by simply diluting the concentration. This is the reason Freon was the fluid of choice until it was discovered that, even with its "negligible" level of emission into the environment, it created a huge hole in the ozone layer of the atmosphere, a layer that had remained there undisturbed for billions of years prior to this manmade insult. How could this be foreseen, when no thermodynamic calculations allow for the accounting of "negligible" leaks? If the leak is considered negligible at a fundamental level, it doesn't matter how much Freon is used worldwide; it will still amount to a negligible quantity. So, when we did discover that Freon was causing such a huge impact on the atmospheric system, one could have learned the lesson and realized that the problem with Freon was that such an artificial material didn't exist in nature, and that is why, even at a "negligible" level, it acts like a cancer cell that can wreak havoc on billions of cells in a healthy body. That lesson was not learned, and instead of refraining from artificial fluids, more refrigeration fluids were engineered that had only one good quality: they were not Freon. Because these fluids are not Freon, they must be acceptable to the ecosystem. This line of thinking is very similar to accepting nuclear energy because it is not carbon.
On the regulation side, this misconception plays out in lowering the "acceptable level" of a contaminant. Rather than saying a certain chemical should not be produced because it is inherently unsustainable, the attack is always on the concentration, as though lowering the concentration (meaning diluting) will solve the problem - playing into the hands of the same mantra, "dilution is the solution to pollution." Even the most militant environmental activists will admit that the use of toxic chemicals is more hazardous if the concentration is high and the reaction rate is accelerated (through combustion, for example). The entire chemical industry became engaged in developing catalysts that are inherently toxic and anti-nature through purposeful denaturing. The use of catalysts, which are always very toxic because they are truly denatured or concentrated, was justified by saying that catalysts by definition cannot do any harm because they only help the reaction and do not participate in it. The excuse becomes, "We are replacing the enzyme with the catalyst." Enzymes allegedly don't participate in the reaction, either. The case in point is the 2005 Nobel Prize in Chemistry, awarded to three scientists. Yves Chauvin explained how metal compounds can act as catalysts in "organic synthesis." Richard Schrock was the first to produce an efficient metal-compound catalyst, and Robert Grubbs developed an "even better catalyst" that is "stable" in the atmosphere. This invention is claimed to create processes and products that are "more efficient," "simpler to use," "environmentally friendlier," "smarter," and less hazardous. Most importantly, the entire invention is called a great step forward for "green chemistry." Another example relates to PVC. The defenders of PVC often state that just because there is chlorine in PVC doesn't mean PVC is bad. After all, they argue, chlorine is bad as an element, but PVC is not an element; it is a compound. Here, two implicit spurious assumptions are invoked: 1) chlorine can and does exist as an element, and 2) the toxicity of chlorine arises from its being able to exist as an element. This misconception, combined with "chemicals are chemicals," makes up an entire aphenomenal process of "purifying" through concentration. This "purification" scheme is used everywhere from oil refining to uranium enrichment. The truth, however, is that if the natural state of a matter or the characteristic time of a process is violated, the process becomes anti-nature. For instance, chemical fertilizers and synthetic pesticides can increase the yield of a crop, but that crop would not be the crop that would provide
nutrition similar to that of one produced with organic fertilizer and natural pesticides. H2S is essential for human brain activities, yet concentrated H2S can kill. Water is essential to life, yet "purified" water can be very toxic and can leach out minerals rather than nourish living bodies. Many chemical reactions are thought to be valid or operative only in some particular range of temperature and pressure. This is thought so because we consider it so, because we cannot detect the products in the range outside of the one we claim to be operational. Temperature and pressure are themselves neither matter nor energy, and therefore are not deemed to be participating in the reaction. Yet the reaction itself cannot even take place absent these "conditions of state" in definite thresholds. It is just as logical to separate the act of shooting someone from the intention of the individual who aimed the gun and pressed the trigger. Temperature and pressure are but calibrated (i.e., tangible) measures of indirect or oblique, i.e., intangible, indicators of energy (heat, in the case of temperature) or mass (per unit area, in the case of pressure). Instead of acknowledging that these things are obviously involved in the reaction in some way that is different from the ways in which the tangible components of the reaction are involved, we simply exclude them on the grounds of this intangibility. It is the same story in mathematics. Data output from some processes is mapped to some function-like curve that includes discontinuities where data could not be obtained. Calculus teaches that one cannot differentiate across the discontinuous bits because "the function doesn't exist there." So, instead of figuring out how to treat, as part of one and the same phenomenon, both the continuous areas and the discontinuities that have been detected, the derivative of the function obtained from actual observation, taken only over the intervals in which the function "exists," is treated as definitive - clearly a fundamentally dishonest procedure.
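The calculus point can be seen numerically. A minimal sketch (an illustrative example, not from the book): one-sided difference quotients at a jump discontinuity never settle on a single derivative, so conventional calculus simply excludes the point.

```python
# A minimal sketch (illustrative, not the authors' example) of the point
# above: a function with a jump discontinuity at x = 0.
def f(x):
    return x if x < 0 else x + 1.0  # jump of height 1 at x = 0

def slope(x0, h):
    return (f(x0 + h) - f(x0)) / h  # one-sided difference quotient

for h in (1e-1, 1e-3, 1e-6):
    print(h, slope(0.0, h), slope(0.0, -h))
# The right-hand quotient stays at 1.0, but the left-hand quotient blows up
# as h -> 0 because it straddles the jump; no single derivative exists at
# x = 0, so conventional calculus simply excludes the point from analysis.
```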
2.2.3 Simulation Equals Emulation
We proclaim that we emulate nature. It is stated that engineering is based on natural science. The invention of toxic chemicals and anti-nature processes is called chemical engineering. People think sugar is a concentrated sweetener, just like honey, so the process must be the same. "We are emulating nature" is the first thought, but then - because honey cannot be mass-produced and sugar can be - we
proceeded to "improve" nature. This itself will take a life of its own. The entire chemical and pharmaceutical industry engage in denaturing first, then reconstructing to maximize productivity and tangible features of the product, intensifying the process that converts real to artificial, always maintaining that we are doing nothing new, just improving on what nature had been doing for millions of years. As evidence, snapshots of simulation data are produced, ignoring the entire pathway (the time function) other than a section At —> 0. There are countless simulators and simulations. Routinely, they disregard or try to linearize or smooth over the non-linear changesof-state. The interest and focus is entirely on what can be taken from nature. This immediately renders such simulators and simulations useless for planning the long-term. Simulators and simulations start by focusing attention only on some structure or function of interest, which they then abstract. This process of abstraction negates any consideration of the natural-whole in which the structure or function of interest resides, or operates, or thrives. Anything constructed or implemented according to the norms of such an abstraction must inevitably leave a mess behind. Emulations and emulators, on the other hand, start with what is available in nature and how to sustain that, rather than what they would like to take from nature regardless of the mess this may leave behind. A great confusion about this distinction is widespread. For example, people speak and write how the computer "emulates" the computational process of an educated human - yet another abstraction in which principles of the computer's design mimic an abstraction of the principles of how computation is supposed to work. But, first of all, from the computer side of this scenario, the physical computer circuitry that is implementing this abstraction of principles operates, in reality, according to the carefully-engineered limitations of semiconducting materials that have been selected and refined to meet certain output criteria of the power supply and gate logic on circuit boards inside the computer. Secondly, from the computation side, the human brain as a biochemical complex incorporating matter-that-thinks does not actually compute according to the abstract principles of this process. It might most generously be called an abstraction being analogized to another abstraction, but it certainly cannot be properly described as emulation of any kind. Emulation requires establishing a 1:1 correspondence between operation "X" of the computer and operation "Xb" of the human
brain. Yet there exists no such correspondence. Brain research is very far from being capable of establishing such a thing. We have already found many examples where the two processes are exactly the reverse of one another, from the chip overheating whereas the brain cools, to the division operator in the brain compared to the addition operator in the computer. The case in point for this misconception is the Nobel Prize in Physics awarded in 2008 for discovering that symmetry breaks down at the subatomic level. It went to Yoichiro Nambu of the Enrico Fermi Institute, University of Chicago, "for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics" and to Makoto Kobayashi of the High Energy Accelerator Research Organization (KEK) in Tsukuba, Japan, and Toshihide Maskawa of the Yukawa Institute for Theoretical Physics (YITP) at Kyoto University "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature." This discovery implies that there is symmetry to begin with. This is simply untrue, because there is no symmetry anywhere in nature. Symmetry is a concept that was promoted by Aristotle, who defined beauty by the level of symmetry - a concept that continues to drive the European sense of beauty (BBC, 2006). This fascination is not scientific and emerges from an aphenomenal first premise that has been discussed elsewhere (Zatzman and Islam, 2007a).
2.2.4 Whatever Works is True
The essence of the pragmatic approach is, "whatever works is true." This fundamentally aphenomenal principle is the driver of the short-term approach. Any short-term approach is aphenomenal and always perverts the process, even if every once in a while a conclusion appears to be correct. The problem in this approach is in the definition of "work." If one can define and portray anything as working even for a temporary period, anything becomes true during that period of time. DDT was considered to be the miracle powder for decades. For the last 8 years of its existence, in which it shed conventional business management of pipelines in favor of swapping "future," i.e., non-existent, electricity supplies across "deregulated" jurisdictions, Enron was the most creative energy company. The evil of the pragmatic approach, which
should have been detected from the beginning, has a cohort: the public-opinion survey. If people facing a white wall are surveyed while blindfolded, and the survey result is used to sell the white wall as a blackboard, with careful omission of the fact that they were blindfolded, that is a case of disinformation. This is exactly what has happened in the modern media. Sapiro et al. (2007) examined how the pragmatic approach has been used to systematically misinform. They explained how polling is routinely launched as a purportedly scientific approach to analyzing public opinion and trends within it. Their work brings out the manipulation of public opinion carried on under the guise of sampling how many believe something whose truth or falsehood has not first been established anywhere. The article elaborates some of the more unsavory and insidious uses to which the assumption that all opinions are randomly distributed has been put, in order to demonstrate the grave corruption that anti-conscious application of statistical reasoning can inflict on proper scientific method. If ninety-nine people out of one hundred agree with the statement that this volume will fall to the floor faster than a feather because of the difference in mass between the volume and the feather, that will not make the statement true, even though the volume will, indeed, reach the floor much sooner than the feather. Raising provocative questions of how, when, and where statistical methods should be applied in the summarizing of research data, so as to eliminate what is false and leave behind only what is true, the article closes on just the right, somewhat mixed, note. What Sapiro et al. deconstructed applies to scientific (hard science) studies. In fact, the most frequent application of statistical methods these days is in the area of applied science and engineering. Routinely, theories are formulated that have no phenomenal basis, and experiments are conducted to support the theory. If there is any discrepancy, it is said to have come from "experimental error," which implicitly suggests that the theory is the truth and the experiment is the illusion. With the above misconceptions, we have come to a point where every technology developed on their basis has broken the promise that was given when it was introduced. This is evident from Table 2.1.
Table 2.1 Analysis of "breakthrough" technologies (Chhetri and Islam 2008).

Product | Promise (knowledge at t = "right now") | Current knowledge (closer to reality)
Microwave oven | Instant cooking (bursting with nutrition) | 97% of the nutrients destroyed; produces dioxin from baby bottles
Fluorescent light (white light) | Simulates the sunlight and can eliminate "cabin fever" | Causes skin cancer
Prozac | 80% effective in reducing depression | Increases suicidal behavior
Anti-oxidants | Reduces aging symptoms | Causes lung cancer
Vioxx | Best drug for arthritis pain; no side effect | Increases the chance of heart attack
Coke | Refreshing; revitalizing | Dehydrates; used as a pesticide in India
Trans fat | Should replace saturated fats, incl. high-fiber diets | Primary source of obesity and asthma
Simulated wood, plastic gloss | Improves the appearance of wood | Contains formaldehyde that causes Alzheimer's
Cell phone | Empowers; keeps people connected | Causes brain cancer; decreases sperm count among men
Chemical hair colors | Keeps people looking young; gives appeal | Used for torturing people; causes severe depression
Chemical fertilizer | Increases crop yield; makes soil fertile | Harmful to crops; soil damaged
Chocolate and "refined" sweets | Increases human body volume, increasing appeal | Increases obesity epidemic and related diseases
Pesticides, MTBE | Improves performance | Damages the ecosystem
Desalination | Purifies water | Necessary minerals removed
Wood paint/varnish | Improves durability | Numerous toxic chemicals released
Leather technology | Won't wrinkle; more durable | Toxic chemicals
Freon, aerosol, etc. | Replaced ammonia, which was "corrosive" | Global harms immeasurable and should be discarded
2.3 The Science of Intangibles
Intangibles are those that are not quantifiable or verifiable with our modern counting techniques. Mathematically, the intangible is the fourth dimension, or the time dimension. However, this should not be confused with time as a discretized variable (e.g., digital time); intangibles refer to the continuous function of time. Every time function will also have a source, which is the origin of the function and is itself intangible. For any human activity, the source of any action is the intention. This is truly intangible because no one else can know the intention of a person. Even though it has long been recognized in the modern justice system that intention must be established prior to determining accountability, little consideration is given to the intangible in other disciplines, such as science and engineering or even social science. Another aspect of the time function is the dependence on other elements. Intangibles are equivalent to having infinite dimensions. The number is infinite because the time function is continuous, and the interaction of all other elements, each with its own time history, must be considered. It literally approaches infinity because nature has no boundary, and whatever happens to one entity must have an effect on everything else. In the post-Renaissance era, while progress was made in breaking free from doctrinal philosophy toward New Science, most intangible considerations were discarded as pseudoscience or metaphysics beyond the scope of any engineering consideration. It can be argued that the lack of consideration of intangibles in the modern age is deliberate, due to the focus on the short term. In the words of John Maynard Keynes, who believed that historical time had nothing to do with establishing the truth or falsehood of economic doctrine, "In the long run, we are all dead" (cited by Zatzman and Islam 2007). The notion of intangibles was at the core of various civilizations, such as the Indian, Chinese, Egyptian, Babylonian, and others, for several millennia. Thousands of years ago, Indian philosophers commented on the role of time as a space (or dimension) in unraveling the truth, as the essential component of knowledge (Zatzman and Islam 2007a). The phrase used by Ancient Indian philosophers was that the world reveals itself. Scientifically, it would mean that time is the dimension in which all the other dimensions completely unfold, so that truth becomes continuously known to humans, who use science (as in critical thinking). Another very well-known principle
from Ancient India is the connection among Chetna (inspiration), dharma (inherent property), karma (deeds arising from Chetna), and chakra (wheel, symbolizing the closed loop of a sustainable lifestyle). Each of these concepts scientifically bears intangible meanings that cannot be expressed with a conventional European mathematical approach (Joseph, 2000). Only recently, Ketata et al. recognized this fact and introduced a series of mathematical tools that can utilize the concept of a meaningful zero and infinity in computational methods (2006a, 2006b, 2006c, 2006d). These ancient principles contain some of the most useful hints, extending far back into the oldest known human civilizations, of true sustainability as a state of affairs requiring the involvement of infinite time as a condition of maintaining a correct analysis as well as ensuring positive pro-social conclusions (Khan and Islam 2007b). Moving from Ancient India to Ancient China, the Chinese philosophers provided some very useful insight into very similar principles of sustainability and knowledge. The well-known statement of Confucius (551-479 B.C.), although rarely connected to science, relates the unraveling of the truth to creating balance - "Strive for balance that remains quiet within." For Confucius, balance had the essential condition of being "quiet within." This idea is the essence of intangibles in the "knowledge" sense (Zatzman and Islam 2007b). In the Qur'an (the first and only version compiled in the mid-7th century), humans' time on earth and time in nature are all part of one vast expanse of time. This position is entirely consistent with the notion that the world reveals itself. In terms of the role of intentions,
Figure 2.6 An olive oil press (millennia-old technology) versus a modern refinery: the former produces no toxins, yet we blame CO2 for global warming rather than looking at the process.
the most famous saying of The Prophet - the very first cited in Bukhari's collection of the hadiths - is that every deed is based on intentions (Hadiths of The Prophet, 2007). A review of human history reveals that what is commonly cast or understood as "the perpetual conflict between good and evil" has in fact always been about opposing intentions. What is good has always been characterized by an intention to serve a larger community, thus benefiting itself in the long term, while what is evil has been characterized by an intention to serve a self-interest in the short term, thus hurting itself in the long term. What was known in Ancient India as the purpose of life (serving humanity) is promoted in the Qur'an as serving a self-interest in the long term. Because nature itself is such that any act of serving others leads to serving the self in the long term, it is conceivable that all acts of serving others in fact amount to self-interest in the long term. In terms of balance, the Qur'an promoted the notion of qadar (as in faqaddarahu, meaning "thereby proportioned him" (80:19, the Qur'an)), meaning proportionate or balanced in space as well as in time. The Qur'an is also specific about the beginning and the end of human life. There is a notion widespread in the Western world that the monotheistic premises of each of the three Abrahamic religions - Judaism, Christianity, and Islam - point to broad but unstated other cultural common ground. The historical record suggests such has not been the case, however, when it comes to certain fundamental premises of the outlook on, and approaches taken to, science and scientific method. The position of mainstream Greek, i.e., Eurocentric, philosophy on the key question of the nature of the existence of the world external to any human observer is that everything is either A or not A. That is Aristotle's law of the excluded middle, which assumes time t = "right now" (Zatzman and Islam 2007b). Scientifically, this assumption is the beginning of what would be termed steady-state models, for which Δt approaches 0. This model is devoid of the time component, a spurious state even if a time-dependent term is added in order to render the model "dynamic" (Abou-Kassem et al. 2007). Aristotle's model finds its own root in Ancient Greek philosophy (or mythology), which assumes that "time begins when the Chaos of the Void ended" (Islam 2005b). Quite similar to Aristotle's law of the excluded middle, the original philosophy also disconnected both the time function and human intention by invoking the assumption that "the gods can interrupt human intention at any time or place."
The assertion that the gods can interrupt human intention at any time or place essentially eliminates any relationship between individual human acts and a sense of responsibility. This particular aspect was discussed in detail by Zatzman and Islam (2007a), who identified the time function and intention as the most important factors in conducting scientific research. Their argument will be presented later in this section. The minority position of Greek philosophy, put forward by Heraclitus, was that matter is essentially atomic and that, at such a level, everything is in endless flux. Mainstream Greek philosophy of Heraclitus' own time buried his views because of their subversive implication that nature is essentially chaotic. Such an inference threatened the Greek mainstream view that chaos was the void that had preceded the coming into existence of the world, and that a natural order came into existence putting an end to chaos. What Heraclitus had produced was in fact a most precise description of what the human observer actually perceives of the world. However, he did not account for time at all, so changes in nature at this atomic level incorporated no particular direction or intention. In the last half of the 18th century, John Dalton reasserted the atomic view of matter, albeit now stripped of Heraclitus' metaphysical discussion and explanations. Newton's laws of motion dominated the scientific discourse of his day, so Dalton rationalized this modernized atomic view with Newton's object masses, and we end up with matter composed of atoms rendered as spherical balls in three-dimensional space, continuously in motion throughout three-dimensional space, and with time considered an independent variable. This line of research seals any hope of incorporating time as a continuous function, which would effectively make the process infinite-dimensional. Zatzman and Islam (2007b) have offered an extensive review of Aristotle's philosophy and provided a scientific explanation of why that philosophy is equivalent to launching the science of tangibles. In economic life, tangible goods and services and their circulation provide the vehicles whereby intentions become, and define, actions. Locked inside those tangible goods and services, inaccessible to direct observation or measurement, are intangible relations - among the producers of the goods and services and between the producer and nature - whose extent, cooperativeness, antagonism, and other characteristic features are also framed and bounded by intentions at another level, in which the differing interests of producers and their employers are mutually engaged. In economic terms, Zatzman and
Islam (2007b) identified two sources of distortion in this process: 1) the linearization of complex societal non-linear dependencies (functions and relationships) through the introduction of the theories of marginal utility (MU); and 2) the assumption that lines in the plane intersect as long as they are not parallel, i.e., as long as the equation relationships they are supposed to represent are not redundant. The first source removes very important information pertaining to social interactions, and the second source enables the use of the equal sign, where everything to its left is equated to everything to its right. Equated quantities can not only be manipulated but also interchanged, according to logic as sound as Aristotle's (who first propounded it), which says that two quantities each equal to a third quantity must themselves be equal to one another or, symbolically, that "A = C" and "B = C" imply that "A = B." The scientific implications of this logic will be discussed in the latter part of this section. Here, in a philosophical sense, the introduction of this logic led to the development of a "solution." As further arguments are built on this "solution," it soon becomes "the solution," as all relevant information is removed during the introduction of the aphenomenal process. This would lead to the emergence of "equilibrium," "steady state," and various other notions in all branches of New Science. It would not be noticeable to ordinary people that these are not natural systems, and anyone who questions the existence of such processes is marginalized as a "conspiracy theorist," a "pseudo-scientist," or with numerous other derogatory designations. This line of thinking would explain why practically all scientists up until Newton had tremendous difficulty with the religious establishment in Europe. In the post-Renaissance world, the collision between scientists and the religious establishment was erased not because the Establishment became pro-science, but more likely because the New Scientists became equally obsessed with tangibles, devoid of the time function as well as intention (Zatzman and Islam 2007a). Theoretically, both of these groups subscribed to the same set of misconceptions or aphenomenal bases that launched technology development in the post-Renaissance era. Avoiding discussion of any theological nature, Zatzman and Islam (2007a) nevertheless managed to challenge the first premise. Rather than basing the first premise on the truth a la Averröes, they stressed the importance of individual acts. Each action would have three components: origin (intention), pathway, and consequence (end). Averröes talked about origin being the truth; they talked
about intention that is real. How can an intention be real or false? They equate real with natural. Their work outlines fundamental features of nature and shows there can be only two options: natural (true) or artificial (false). The paper shows that Aristotle's logic of anything being "either A or not A" is useful only to discern between true (real) and false (artificial). In order to ensure that the end is real, the paper introduces the recently developed criterion of Khan (2006) and Khan and Islam (2007b): if something is convergent when time is extended to infinity, the end is assured to be real. In fact, if this criterion is used, one can be spared from questioning the "intention" of an action. If in any doubt, one should simply investigate where the activity will end up as time, t, goes to infinity. This absence of discussion of whatever happened to the tangible-intangible nexus involved at each stage of any of these developments is not merely accidental or a random fact in the world. It flows directly from a Eurocentric bias that pervades, well beyond Europe and North America, the gathering and summation of scientific knowledge everywhere. Certainly, it is by no means a property inherent - either in technology or in the norms and demands of the scientific method per se, or even within historical development - that time is considered so intangible as to merit being either ignored as a fourth dimension or conflated with tangible space as something varying independently of any process underway within any or all dimensions of three-dimensional space. Recently, Mustafiz et al. (2007) identified the need to include a continuous time function as the starting point of acquiring knowledge. According to them, the knowledge dimension does not get launched unless time as a continuous function is introduced (Fig. 2.7). They further show that the knowledge dimension is not only possible but necessary. The knowledge dimension is conditioned not only by the quantity of information gathered in the process of conducting research, but also by the depth of that research, i.e., the intensity of one's participation in finding things out. In and of themselves, the facts of nature's existence and of our existence within it neither guarantee nor demonstrate our consciousness of either, or the extent of that consciousness. Our perceptual apparatus enables us to record a large number of discrete items of data about the surrounding environment. Much of this information we organize naturally and unconsciously. The rest we organize according to the level to which we have trained and/or come to use our own brains. Hence, neither can it be affirmed that we arrive at knowledge directly or merely through perception, nor can
[Figure 2.7 sketch: knowledge (upward) versus ignorance (downward) plotted against time, contrasting the Averröes model and the 4D model (phenomenal basis) with the Thomas Aquinas model (aphenomenal basis).]
Figure 2.7 Logically, a phenomenal basis is required as the first condition of sustainable technology development. This foundation can be the truth as the origin of any inspiration, or it can be "true intention," which is the essence of intangibles (Zatzman and Islam 2007a; Mustafiz et al. 2007).
we affirm being in possession at any point in time of a reliable proof or guarantee that our knowledge of anything in nature is complete. Historically, what the Thomas Aquinas model did to European philosophy is the same as what Newton's model did to New Science. The next section examines Newton's models. Here, it suffices to say that Newton's approach was not any different from the approach of Thomas Aquinas or even Aristotle. One exception among scientists in Europe was Albert Einstein, who introduced the notion of time as the fourth dimension. However, no one followed up on this aspect of Einstein's work, and it was considered that the addition of a time term in Newton's so-called steady-state models would suffice. Mustafiz recognized the need to include the time dimension as a continuous function and set the stage for modeling the science of intangibles (Abou-Kassem et al. 2007). Table 2.2 summarizes the historical development in terms of the scientific criterion, origin, pathway, and consequences of the principal cultural approaches to reckoning and reconciling the tangible-intangible nexus. Without time as the fourth dimension, all models become simulators, focusing on very short-term aspects of natural phenomena (Islam, 2007). In order for these models to be valid in emulating phenomena in the long term, the fourth dimension must be included. In order for a process to be knowledge-based (a precondition for
Table 2.2 Criterion, origin, pathway, and end of scientific methods in some of the leading civilizations of world history.

| People | Criterion | Origin | Pathway | End |
|---|---|---|---|---|
| Zatzman and Islam (2007) | Δt → ∞ | Intention | f(t) | Consequences |
| Khan (2006) | Δt → ∞ | Intention | Natural | Sustainability |
| Zatzman and Islam (2007a) | Δt → ∞ | Intention | Natural | Natural (Δt → ∞ used to validate intention) |
| Einstein | t as 4th-D | "God does not play dice..." | Natural | N/A |
| Newton | Δt → 0 | "External force" (1st law) | No difference between natural & artificial | Universe will run down like a clock |
| Aquinas | Bible | Acceptance of Divine Order | All knowledge & truth reside in God; choice resides with Man | Heaven and Hell |
| Averröes | Al-Furqan (meaning The Criterion, title of Chapter 25 of the Qur'an; stands for the Qur'an) | Intention (first hadith) | Amal saliha (good deed, depending on good intention) | Accomplished (as in Muflehoon, 2:5), Good (+∞); Losers (as in Khasheroon, 58:19), Evil (−∞) |
| Aristotle | A or not A (Δt = 0) | Natural law | Natural or artificial agency | Ευδαιμονία (Eudaimonia, tr. "happiness," actually closer to "Man in harmony with the universe") |
| Ancient India | Serving others; "world reveals itself" | Inspiration (Chetna) | Karma (deed with inspiration, chetna) | Karma, salvation through merger with the Creator |
| Ancient Greek (pre-Socratics) | t begins when the chaos of the void ended | The gods can interrupt human intention at any time or place | N/A | N/A |
| Ancient China (Confucius) | N/A | Kindness | N/A | Quiet balance (intangible?) |
Table 2.3 Typical features of natural processes as compared to the claims of artificial processes.

Features of Nature and Natural Materials

| Feature no. | Feature |
|---|---|
| 1 | Complex |
| 2 | Chaotic |
| 3 | Unpredictable |
| 4 | Unique (every component is different), i.e., forms may appear similar or even "self-similar," but their contents alter with the passage of time |
| 5 | Productive |
| 6 | Non-symmetric, i.e., forms may appear similar or even "self-similar," but their contents alter with the passage of time |
| 7 | Non-uniform, i.e., forms may appear similar or even "self-similar," but their contents alter with the passage of time |
| 8 | Heterogeneous, diverse, i.e., forms may appear similar or even "self-similar," but their contents alter with the passage of time |
| 9 | Internal |
| 10 | Anisotropic |
| 11 | Bottom-up |
| 12 | Multifunctional |
| 13 | Dynamic |
| 14 | Irreversible |
| 15 | Open system |
| 16 | True |
| 17 | Self-healing |
| 18 | Nonlinear |
| 19 | Multi-dimensional |
| 20 | Infinite degree of freedom |
| 21 | Non-trainable |
| 22 | Infinite |
| 23 | Intangible |
| 24 | Open |
| 25 | Flexible |

Source: Adapted from Khan and Islam 2007b
emulation), the first premise of the model must be real, i.e., existent in nature. The models of New Science do not fulfill this condition. Indeed, most of the laws and theories related to mass and energy balance violate some natural traits in their first premise. These first premises are listed in Table 2.4.
Table 2.4 How the natural features are violated in the first premise of various laws and theories of the science of tangibles.

| Law or theory | First premise | Features violated (see Table 2.3) |
|---|---|---|
| Conservation of mass | Nothing can be created or destroyed | None |
| Lavoisier's deduction | Perfect seal | 15 |
| Phlogiston theory | Phlogiston exists | 16 |
| Theory of relativity | Everything (including time) is a function of time (concept) | None (concept) |
| Theory of relativity | Maxwell's theory (mathematical derivation) | 6, 7, 25 (mathematical derivation) |
| E = mc² | Mass of an object is constant | 13 |
| E = mc² | Speed of light is constant | 13 |
| E = mc² | Nothing else contributes to E | 14, 19, 20, 24 |
| Planck's theory | Nature continuously degrading to heat death | 5, 17, 22 |
| Charles's law | Fixed mass (closed system), ideal gas, constant pressure | 24, 3, 7 |
| Boyle's law | A fixed mass (closed system) of ideal gas at fixed temperature | 24, 3, 7 |
| Kelvin's | Kelvin temperature scale is derived from the Carnot cycle and based on the properties of ideal gas | 3, 8, 14, 15 |
| Thermodynamics, 1st law | Energy conservation (the first law of thermodynamics is no longer valid when a relationship between mass and energy exists) | None |
| Thermodynamics, 2nd law | Based on the Carnot cycle, which is operable under the assumptions of ideal gas (imaginary volume), reversible process, adiabatic process (closed system) | 3, 8, 14, 15 |
| Thermodynamics, 0th law | Thermal equilibrium | 10, 15 |
| Poiseuille | Incompressible uniform viscous liquid (Newtonian fluid) in a rigid, non-capillary, straight pipe | 25, 7 |
| Bernoulli | No energy loss to the surroundings, no transition between mass and energy | 15 |
| Newton's 1st law | A body can be at rest and can have a constant velocity | Non-steady state, 13 |
| Newton's 2nd law | Mass of an object is constant; force is proportional to acceleration; external force exists | 13, 18 |
| Newton's 3rd law | The action and reaction are equal | 3 |
| Newton's viscosity law | Uniform flow, constant viscosity | 7, 13 |
| Newton's calculus | Limit Δt → 0 | 22 |

Source: Adapted from Zatzman et al. 2008
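For reference, the first premises behind Boyle's and Charles's laws in the table can be written out explicitly. This is a standard rendering added editorially, not part of the original table:

$$\text{Boyle: } PV = k_1 \ (\text{fixed mass, ideal gas, constant } T); \qquad \text{Charles: } \frac{V}{T} = k_2 \ (\text{fixed mass, ideal gas, constant } P).$$

Both are limiting cases of the ideal gas law, PV = nRT, which already presumes a closed system of identical, non-interacting molecules; that is, features 3, 7, and 24 of Table 2.3 are surrendered before any measurement is made.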
These violations mean that such laws and theories weaken considerably, or worse, implode, if applied as universal laws and theories. They can be applied only under certain fixed conditions that pertain to "idealized," yet non-existent, conditions in nature. For example, the laws of motion developed by Newton cannot explain the chaotic motion of nature because their assumptions contradict the reality of nature. The experimental validity of Newton's laws of motion is limited to describing instantaneous, macroscopic, and tangible phenomena; microscopic and intangible phenomena are ignored. Classical dynamics, as represented by Newton's laws of motion, emphasizes fixed and unique initial conditions, stability, and equilibrium of a body in motion (Ketata et al. 2007). With the laws and theories of Table 2.4, it is not possible to make a distinction between the products of the following engineering processes, nor can the same theories be called upon to reverse them:

Wood → plastic
Glass → PVC
Cotton → polyester
Natural fiber → synthetic fiber
Clay → cement
Molasses → sugar
Sugar → sugar-free sweeteners
Fermented flower extract → perfume
Water filter (hubble-bubble) → cigarette filter
Graphite, clay → chalk
Chalk → marker
Vegetable paint → plastic paint
Natural marble → artificial marble
Clay tile → ceramic tile
Ceramic tile → vinyl and plastic
Wool → polyester
Silk → synthetic
Bone → hard plastic
Organic fertilizer → chemical fertilizer
Adaptation → bioengineering

Consider also the pictures below, which show designs that are the products of technologies as much as millennia old. None of them used artificial materials or modern criteria of environmental
Figure 2.8 Millennia-old technologies were aesthetically superior and, at the same time, truly sustainable in both material and design.
sustainability, yet each of them fulfills the true sustainability criteria (Khan and Islam 2007a). Not too long ago, every technology used real (natural) materials and processes that did not violate the natural traits of matter (Table 2.3, Fig. 2.8). Could it be that modern science is actually a disinformation machine, carefully doctored to obscure the difference between real and artificial, truly sustainable and inherently unsustainable? Zatzman (2007) examined this aspect of scientific progress. He argued that the "chemicals are chemicals" mantra is not promoted out of ignorance, but rather out of necessity - the necessity to uphold the aphenomenal model that is incapable of existing or coexisting with knowledge, the truth. He showed how this mantra is the driver behind the aphenomenality of mass production, i.e., the notion that "more" must be "better" simply because it is more. If this mass is not the same mass that exists in nature, the implosive nature of the entire post-Renaissance model of New Science and the Industrial Revolution becomes very clear. Ironically, the same New Science that had no problem with Einstein's theories, all of which support "mass is not mass" and "heat is not heat" and recognize the dependence on source and pathway, had tremendous
problems with the notion that white light from a fluorescent lamp is not the same as white light from the sun, or that vitamin C from an organic orange is not the same as vitamin C from a pharmaceutical plant. Most importantly, it did not see the difference between industrial CO2 and organic CO2, blaming modern-day global warming on carbon, the essence of organic matter. Rather than trying to discover the science behind these pathways, industries instead introduced more aphenomenal products that created even darker opacity and further obscured the difference between reality and the truth. The roller coaster was set in motion, spiraling down, bringing mankind to such a status that even a clear champion of New Science, the Chemistry Nobel Laureate Robert Curl, called it a "technological disaster."
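Before moving on, the Δt → ∞ criterion invoked throughout this section (Khan 2006; Khan and Islam 2007b) can be paraphrased computationally. The sketch below is purely illustrative: the two impact functions are hypothetical placeholders, and the finite "horizon" is only a numerical stand-in for the limit as t goes to infinity.

```python
# Minimal sketch of the "time extended to infinity" criterion:
# judge a process by where it ends up as t grows without bound,
# not by its short-term (delta-t -> 0) behavior.

def natural_process(t):
    # Hypothetical cumulative impact that levels off (convergent).
    return 1.0 - 1.0 / (1.0 + t)

def artificial_process(t):
    # Hypothetical cumulative impact that grows without bound (divergent).
    return 0.01 * t

def is_sustainable(process, horizon=1e9, tolerance=1e-6):
    """Crude numerical proxy for the t -> infinity limit: compare the
    process at two very large times and ask whether it has effectively
    stopped changing (i.e., whether it converges)."""
    return abs(process(10 * horizon) - process(horizon)) < tolerance

print("natural process convergent?  ", is_sustainable(natural_process))      # True
print("artificial process convergent?", is_sustainable(artificial_process))  # False
```

Nothing in the criterion itself dictates how such impact functions are obtained; the point is only that the test is applied at the far end of the time axis rather than at the instant of observation.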
2.4 The Science of Matter and Energy

The existence of essential matter was known to all civilizations from prehistoric times. Artifacts exist from thousands of years ago that show knowledge of engineering, dealing with materials needed for survival. This includes knowledge of the carriage with wheels. It is also reported that quite sophisticated technology was present in the days of the pharaohs (some 6,000 years ago) and the Thamud, who created an entire city by carving into hard rock (similar to Petra of Jordan) - a technology that is unthinkable today. The pharaohs, whose masterpiece of civil engineering technology took the
Figure 2.9 The use of the wheel and carriage may date back as much as 9,000 years (picture taken at the Tripoli Museum).
form of the pyramids, showed craftsmanship unmatched even today. Similarly, the chemical technology employed by the pharaoh's engineers was far more sophisticated, and definitely more sustainable, than ours. We know that the chemicals that were used to mummify are both non-toxic and extremely efficient. Today's technology, which uses various known toxins, is not remotely as efficient and is definitely harmful to the environment (e.g., the chemicals used in Lenin's mausoleum in Moscow or the Pope's mausoleum in the Vatican). However, we have little information as to how those engineers designed their technological marvels. All one can say is that the Stone Age and Bronze Age do not fit within the suggested 10,000-year time frame of human existence. The current era of full social order can be dated back to King David, who conquered Jerusalem some 3,000 years ago and established an empire that would last some 1,300 years. This would be the first known society with remarkable law and order. He was also known to have mastered material processing as the metal mines were being discovered. Only recently, an Israeli research team discovered the earliest-known Hebrew text on a shard of pottery that dates to the time of King David. The following was reported by CNN: "Professor Yosef Garfinkel of the Hebrew University of Jerusalem says the inscribed pottery shard was found during excavations of a fortress from the 10th century B.C. Archaeologists have yet to decipher the text, but initial interpretation indicates it formed part of a letter and contains the roots of the words 'judge,' 'slave,' and 'king,' according to the
Figure 2.10 The shard contains five lines of text divided by black lines (left picture); 100 water pots similar to the one shown here were found at the same site (the right picture is from the Tripoli Museum).
university. That may indicate it was a legal text, which archaeologists say would provide insights into the full order King David had created."
The report also states that the text was clearly written by a trained scribe: "The shard was discovered at the Elah Fortress in Khirbet Qeiyafa, about 20 miles southwest of Jerusalem. Because the ostracon is similar to those found in other Israelite settlements, and because no pig bones were found at the site, archaeologists say the site was likely part of the Kingdom of Judea. Jewish dietary laws forbid the eating of pork." Among the artifacts found at the site are more than 100 jar handles bearing distinct impressions that may indicate a link to royal vessels, the university said. It is widely believed that the first set of Dead Sea Scrolls was discovered in 1947 by a Bedouin shepherd who ventured into a cave in the Judean Desert in search of a lost sheep or goat. The texts, written on crumbling parchment and papyrus, were found wrapped in linen inside earthenware jars. These discoveries show that a society of order and discipline was in place some 3,000 years ago and that its people indeed had the ability to write and to preserve their records with sustainable technologies. David's son, Solomon, would continue the expansion of the empire with very significant developments in science and engineering. Only recently, Solomon's copper mine was reportedly discovered. On October 28, the Discovery Channel reported the following: "The fictional King Solomon's mines held a treasure of gold and diamonds, but archaeologists say the real mines may have
Figure 2.11 Solomon's mine was not a myth in the Middle East; it was a fact based on the Qur'an.
supplied the ancient king with copper. Researchers led by Thomas Levy of the University of California, San Diego, and Mohammad Najjar of Jordan's Friends of Archaeology discovered a copper-production center in southern Jordan that dates to the 10th century B.C., the time of Solomon's reign. The discovery occurred at Khirbat en-Nahas, which means 'ruins of copper' in Arabic." This mine was not a myth or fiction in the Middle East. Even the name of the location means "ruins of copper," and for at least the last 1,400 years people have known it to be the location of the copper mine. The reign of Solomon has been cited in the Qur'an a number of times, all in light of his knowledge of many aspects of science and technology that have been forgotten in the modern age. Consider the state of the mine depicted above and compare it with today's mining technology. Below is a picture of the infamous Cape Breton tar pond of Nova Scotia. After some 100 years of mining, an entire cesspool of environmental disaster has been created, equivalent to any of the Superfund sites of the United States. Over 100 million dollars have been spent just to assess the extent of the damage, while not a single drop of the tar pond has been cleaned. In the meantime, the mining industry is also in ruins, rendering the Cape Breton community the poorest in Nova Scotia, one of the poorest provinces of Canada. Even the recent discovery of a gas reserve in Nova Scotia did not lift its economy, and the local community does not even get to use the gas that is extracted offshore. Contrast this with the Judean community of Solomon that thrived for a millennium after his death.
Figure 2.12 The infamous tar pond of Cape Breton, Nova Scotia.
Solomon expanded the empire to control a wider area of the old world of three continents. Its political and military power declined when the Israelite kingdom split into two regions, one in the north and the other in the south. The northern region vanished within two centuries, whereas the southern region, Judea, continued for another millennium. During the 1,000 years of Judean rule, this superior culture expanded to surrounding regions and exerted great influence over other civilizations and cultures. Persians, Greeks, Carthaginians, and even Indians learned from Judea. Subsequent political weakening did not stop other peoples from learning from this society until its total destruction by the Romans in 70 A.D. In the modern age, dominated by European culture, Judea's knowledge is credited only to the Greeks. There are three problems with this accreditation. First, it does not acknowledge the vast annals of Judean knowledge that directly benefited and enriched the knowledge of the Greeks. This creates bias against the source of the Judean knowledge, which was shared with both the East and the West and was preserved in its original form for many centuries, being revived and utilized during the Islamic era (7th-20th century A.D.) in a vast region extending from the Far East to central Europe. Second, the pre-existing knowledge of the cultures of the East, extending from Persia to China and India, which had superior social infrastructures and access to ancient knowledge predating Solomon, is ignored or marginalized. Third, all the work of the ancient Greeks that came to Europe was actually the result of a gigantic translation effort by Arab scholars, some of whom resided in Spain, which was the epicenter of knowledge for 400 years under Arab rule (Ibn Rushd, or Averroes, being the most noted one, who translated most of Aristotle's work). In fact, these documents were first made available in Arabic, later translated into Latin, and finally rendered in modern Greek. Through such third-hand translation, ancient Greek knowledge has been severely distorted, as is evident from the discussion below.
2.4.1 The European Knowledge Trail in Mass and Energy
2.4.1.1 Introduction

Why should we study history, particularly in the context of technology development? Is history useful for increasing our knowledge?
The issue here is not whether new knowledge accumulates on the basis of using earlier established findings, with the entire body of knowledge then being passed on to later generations. The real issue is, on what basis does an individual investigator cognize the existing state of knowledge? If the individual investigator cognizes the existing state of knowledge on the basis of his or her own re-investigation of the bigger picture surrounding his or her field of interest, that would be a conscious approach, one which shows that the investigator is operating according to conscience. If, on the other hand, one accepts as given the so-called conclusions reached up to now by others, a problem arises: what were the pathways by which those earlier conclusions were reached? An investigator who declines to investigate those pathways is negating conscience, and such negating of conscience is not a good thing for anyone to undertake. The fact is, however, that for a long time there were external or surrounding conditions exerting an undue or improper influence on this front. What if, for example, there exists an authority (like the Church of Rome during the European Middle Ages) that claims to have superior knowledge, certifying particular conclusions while at the same time banishing all thinking or writing that leads to any other conclusions? Then the individual's scientific investigation and reporting will be colored and influenced by the looming threat of censorship and/or the actual exercise of that censorship. (The latter could occur at the cost of one's career.) Against this, mere interest on the part of the investigator in finding something out, or mere curiosity, won't be enough. The investigator has to be driven by some particular consciousness of the importance for humanity. Of course, the Church agrees, but insists only that one have the Church's conscience. This would account for Galileo's resorting to defensive maneuvers, claiming he was not out to disprove Scripture - a tactic of conceding a small lie in order to be able to continue nailing down a larger, more important truth. Why mix such hypocrisy into such matters? He did it because it had worked for other investigators in the past. What was new in Galileo's case was the decision of the Church of that time not to permit him that private space in which to maneuver, in order to make of him an example with which to threaten less-talented researchers coming after him. The worst we can say against Galileo after that point is that, once an investigator (in order to get along in life) goes along with this, he or she destroys some
part of his or her usefulness as an investigator. This destruction is even more meaningful because it is likely to change the direction of the investigator's conscience pathway, for example, leading him or her to pursue money instead of the truth. The historical movement in this material illustrates the importance of retaining the earliest and most ancient knowledge. However, it leaves open the question of what was actually authoritative about earlier knowledge for later generations. The unstated but key point is that the authority was vested in the unchanging character of the key conclusions. That is to say, this authority was never vested in the integrity and depth of probing by earlier investigators and investigations into all the various pathways and possibilities. In medieval Europe, the resort to experimental methods did not arise on the basis of rejecting or breaking with Church authority. Rather, it was justified by a Christian-theological argument along the following lines: 1) knowledge of God is what makes humans right-thinking and good and capable of having their souls saved in eternity; 2) this knowledge should be accessible wherever humans live and work; and 3) the means should be at hand for any right-thinking individual to verify the truth or eliminate the error in their knowledge. These "means" were then formulated as the starting point of what became the "scientific method." As a result (combining here the absence of any sovereign authority for the scientific investigator's conscience with the Christian-theological justification for certain methods of investigation that might not appear to have been provided by any previously existing authority), even with scientific methods such as experiments, the conscience of an investigator who separated his or her responsibility for the truth from the claims of Church authority, but without opposing or rebelling against that authority, could not ensure that his or her investigation could or would increase knowledge of the truth. There is another feature that is crucial regarding the consequences of vesting authority in a central knowledge-certifier. For thousands of years, Indian mathematics excelled in increasing knowledge, yet for millennia nobody outside the villages or small surrounding territories knew about its findings because there did not exist any notion of publishing results and findings for others. Contrast this with the enormous propaganda ascribing so many of the advancements in the New Science of tangibles to the system that emerged of scholarly publication and dissemination of fellow
researchers' findings and results. This development is largely ascribed to "learning the lessons" of the burning of the libraries of Constantinople in 1453, which deprived Western civilization of so much ancient learning. The issue is publication, yet the issue is not just publication. Rather, it is the basis on which publication of new findings and research takes place. Our point here is that publication will serve to advance knowledge in rapid and great strides if and only if authority is vested in the integrity and depth of probing by earlier investigators and investigations into all the various pathways and possibilities. Otherwise, this societal necessity and usefulness of publication becomes readily and easily subverted by the culture of patents, the exclusivity of "intellectual property," or what might be described today as "monopoly right." If and only if we put first the matter of the actual conduct of scientific investigations and the politics attached to that conduct, meaning the ways and means by which new results are enabled to build humanity's store of knowledge, then and only then can we hope to reconstruct the actual line of development. With actual knowledge of this line of development, for any given case, we can then proceed to critique, isolate, and eliminate the thinking and underlying ideological outlook that keep scientific work and its contents traveling down the wrong path on some given problem or question. The issue is not just to oppose the Establishment in theory or in words. The issue is rather to oppose the Establishment in practice, beginning with vesting authority regarding matters of science and the present state of knowledge in the integrity and depth of probing by earlier investigators and investigations to date into all the various pathways and possibilities of a given subject matter.

2.4.1.2 Characterization of Matter and Energy
Around 450 B.C., the Greek philosopher Empedocles characterized all matter into earth, air, fire, and water. Note that the word "earth" here implies clay material or dirt, not the planet Earth. The word "earth" (as a human habitat) originates from the Arabic word Ardh, the root meaning of which is the habitat of the human race, the children of Adam, lower status, etc. Earth in Arabic is not a planet, as there are other words for planet. Similarly, the sun is not a star; it is precisely the one body that sustains all the energy needs of the earth. The word "air" is Hawa in Arabic, as in the atmosphere. Note
that "air" is not the same as oxygen (or even a certain percentage of oxygen, nitrogen, carbon dioxide, etc.). It is the invisible component of the atmosphere that surrounds the earth. Air must contain all organic emissions from the earth for it to be "full of life." It cannot be reconstituted artificially. The term "fire" is naar in Arabic, which refers to real fire, as when wood is burned and both heat and light are produced. The word has the same root as light (noor), which, however, has a broader meaning. For instance, moonlight is called noor, whereas sunlight (direct light) is called adha'a. In Arabic, there is a different word for lightning (during a thunderstorm, for instance). In all, the characterization credited to Empedocles and known to modern Europe is in conformance with the criterion of phenomena outlined in the previous section. It does not violate any of the fundamental properties of nature listed in Table 2.3. In fact, this characterization has the following strengths: 1) the definitions are real, meaning they have a phenomenal first premise; 2) it recognizes the continuity in nature, including that between matter and energy; and 3) it captures the essence of a natural lifestyle. With this characterization, nuclear energy would not emerge as an energy source, and fluorescent light would not qualify as natural light. In fact, with this characterization, none of the technologies listed in Table 2.1, all of which are unsustainable and implosive, would come into existence. In the context of the characterization of matter, the concept of a fundamental substance was introduced by another Greek philosopher, Leucippus, who lived around 478 B.C. Even though his original work was not accessible even to the Arabs who brought the annals of ancient Greek knowledge to the modern age, his student Democritus (420 B.C.) documented Leucippus' work, which was later translated into Arabic, then into Latin, followed by modern Greek and other contemporary European languages. That work contained the word "atom" (ατομος in Greek), perpetrated as a fundamental unit of matter. This word created some discussion amongst Arab scientists some 900 years ago. They understood the meaning to be "undivided." This is different from the conventional meaning, "indivisible," used in Europe in the post-Renaissance era. This would be consistent with Arab scholars because they would not assign any property (such as indivisibility) that ran the risk of being proven false (which is the case for the conventional meaning of atom). Their acceptance of the word atom was again in conformance with the criteria listed in Table 2.2 and the fundamental traits of nature listed in Table 2.3. An atom
was not considered to be indivisible, or identical, or uniform, or to have any of the other properties commonly asserted in contemporary atomic theory. In fact, the fundamental notion of creating an aphenomenal basis or unit is strictly a European one. Arab annals of knowledge in the Islamic era, beginning in the 7th century, have no such tradition (Zatzman, 2007). This is not to say they did not know how to measure. On the contrary, they had yardsticks that were available to everyone. Consider that the unit of time was the blink of an eye (tarfa) for the small scale and the bushel of grains for the medium scale (the time required to prepare a bushel of grains is useful to someone who mills grains using manual stone grinders). As for the unit of matter, the dust particle offers a formidable unit that is both practical and tractable (the Arabic word dharra means the dust particles that are visible when a window is opened to let sunlight into a room - this word is erroneously translated as "atom"). Using this principle, Khan et al. (2008) introduced the avalanche theory, which builds on snowflakes as the unit of matter. Heraclitus (540 B.C.) argued that all matter was in flux and vulnerable to change regardless of its apparent solidity. This is obviously a more profound view, even though, like Democritus, he lacked any special lab facilities to investigate this insight further, or otherwise to look into what the actual structure of atomic matter would be. As it turned out, the theory of Heraclitus was rejected by the Greek philosophers of his time. A further discussion follows. The less elaborate "atomic theory" described by Democritus had the notion of atoms being in perpetual motion in a void. While the state of being in constant motion (perpetual should not mean uniform or constant speed) is in conformance with natural traits, a void is not something that is phenomenal. In Arabic, the closest word to describe void is cipher (the origin of the word decipher, meaning removing the zeros or the fillers), which means empty (this word, which has been in Arabic for over 1,400 years, was not used in the Qur'an). For instance, a hand or a bowl can be empty because it has no visible content in it, but this would never imply it has nothing in it (for instance, it must have air). The association of cipher with zero was made much later, when Arabs discovered the role of zero from Indian mathematicians. One very useful application of the zero was in its role as a filler. That alone made the counting system take a giant leap forward. However, this zero (or cipher, or sunya in Sanskrit) never implies nothingness. In Sanskrit, Maha Sunya (Great Zero) refers
to outer space, which is anything but void, as in nothingness. Similarly, the equivalent Arabic word is As-sama'a, which stands for everything above the earth, including the seven layers of stars in the entire universe (in the conventional astronomical sense). In ancient Greek culture, however, void refers to the original status of the Universe, which was thought to be filled with nothingness. This status is further confused with the state of chaos (Χάος), another Greek term that has void as its root. The word chaos does not exist in the Qur'an, which asserts a universal order that would not allow any state of chaos, since chaos would signal the loss of control of the supreme authority. It is not clear what notion Leucippus had regarding the nature of atomic particles, but from the outset, if he meant a particle (undivided) that is in perpetual motion, it would not be in conflict with the fundamental nature of natural objects. This notion would put everything in a state of flux. Mainstream Greek philosophy would view this negatively for its subversive implication that nature is essentially chaotic. Such an inference threatened the Greek mainstream view that chaos was the void that had preceded the coming into existence of the world, and that a natural order came into existence putting an end to chaos. As stated earlier, this confusion arises from misunderstanding the origin of the Universe. Even though contemporary Greek scholars rejected this view, this notion (nature is dynamic) was accepted by Arab scholars, who did not see it as a conflict with natural order. In fact, their vision of the Universe is that everything is in motion and there is no chaos. Often, they referred to a verse of the Qur'an (36:38) that actually speaks of the sun as a constantly moving object, moving not just haphazardly but in a precisely predetermined direction, assuring universal order. Another intriguing point made by Democritus is that the feel and taste of a substance is a function of the ατομος of the substance acting on the ατομος of our sense organs. This theory, advanced over a thousand years before the Alchemists' revolutionary work on modern science, was correct in the sense that it supports the fundamental traits of nature. The suggestion that everything that comes into contact contributes to an exchange of ατομος would have stopped us from making toxic chemicals in the belief that they are either inert (totally isolated from the system of interest) or that their concentration is so low that the leaching can be neglected. It would have prevented us from seeing the headlines that we see every day. This theory could have revolutionized chemical engineering 1,000 years before the Alchemists (at least in Europe, as the Egyptians were already much advanced in
chemical engineering some 6,000 years ago). This theory, however, was rejected by Aristotle (384-322 B.C.), who became the most powerful and famous of the Greek scientific philosophers. Instead, Aristotle adopted and developed Empedocles's ideas of elemental substances, which were originally well founded. While Aristotle took the fundamental concept of fire, water, earth, and air being the fundamental ingredients of all matter, he added qualitative parameters, such as hot, moist, cold, and dry, as shown in Figure 2.13. This figure characterizes matter and energy in four elements but makes them a function of composition alone, meaning one can move from water (cold and moist) to fire (hot and dry) by merely changing the composition of the various elements. Similarly, by changing the properties one can introduce changes in composition. This description is the first known steady-state model of the kind listed in Table 2.2. Nature, however, is not at steady state, and that is why this depiction is inherently flawed. In addition, the phase diagram itself has a symmetry imposed on it that is absent in nature. The Arab scientists did not pick up on this theory of Aristotle, even though many other aspects of Aristotle's philosophy were adapted after careful scrutiny, including the famous law of the excluded middle. Democritus is indeed most often cited as the source of the atomic theory of matter, but there is a strong argument that what he had in mind was a highly idealized notion, not anything based on actual material structure. For the Greeks, symmetry was believed to be good and was largely achieved by geometric rearrangement of (usually) two-dimensional space. There is an ambiguity as to whether Greek atomists thought
Figure 2.13 Aristotle's four-element phase diagram (steady-state): fire, air, water, and earth arranged between the qualities hot, dry, moist, and cold.
of atoms as anything other than an infinite spatial subdivision of matter. Heraclitus' major achievement, which unfortunately also marginalized him among the other thinkers of his time, was his incorporation of a notion of the effects of time as a duration of some kind, as some other kind of space in which everything played itself out. Mainstream Greek philosophy following Plato was rigidly opposed to assigning any such role to time when it came to establishing what they called the essence of things. Plato and his school held that all matter consisted of physical representations of ideal forms. The task of philosophy was to comprehend these ideal forms in their essence. That essence was what the vast majority of Greek philosophers understood as "ideas." Both Democritus and Heraclitus followed the main lines of Greek thought in accepting ideas as being something purer than immediate perception. These "ideas" had their purest form within human consciousness. In effect, although the Greeks never quite put it like this, the material world as we would understand it was deemed a function of our consciousness of ideas about the forms. The Greek philosophers were deeply divided over whether matter as an idea had to have any particular physical existence; physical existence was something assigned largely to plants and animals. For both philosophers, the atom they had in mind was more a fundamental idea than a starting point of actual material structure. Of all the leading ancient Greek thinkers, it was Aristotle who, in his lifelong wrestling with how to reconcile generally accepted notions and ideas about the world with that which could be observed beyond the surface of immediate reality in the short term, came closest to grasping the real world as both a material reality outside us and a source of ideas. He himself never fully resolved the contradictions within his own position. However, he tended to side with the material evidence of the world outside us over conjectures lacking an evidentiary or factual basis. European literature is silent on the scientific progress made by Arabs and other Muslim scientists, who made spectacular progress in many aspects of science, ranging from architectural mathematics and astronomy to evolution theory and medicine. Much of this was preserved, but by methods that precluded or did not include general or widespread publication. Thus, there could well have been almost as much total reliable knowledge 1,400 years ago as today, but creative people's access to that mass of reliable knowledge
would have been far narrower. Only recently was it discovered that Islamic scholars were doing mathematics some 1,000 years ago of the same order as that thought to have been discovered in the 1970s (Lu and Steinhardt 2007), with the difference that our mathematics can only track symmetry, something that does not exist in nature (Zatzman and Islam 2007a). Knowledge is definitely not confined to the modern age. Recently, a three-dimensional PET scan of a relic known as the "Antikythera Mechanism" demonstrated that it was actually a universal navigational computing device, with the difference that our current-day versions rely on GPS, tracked and maintained by satellite (Freeth et al. 2006). Only recently did Ketata et al. (2007a) recognize that anciently based, nonlinear counting techniques, such as the abacus, are far superior to linear computing. Even in the field of medicine, one would be shocked to find out what Ibn Sina ("Avicenna") said about nature - that nature is the source of all cure, with the proviso that the qualities given by nature in the source material remain intact. For example, some of the most advanced pharmaceuticals used to "treat" cancer are so engineered during mass production that all the power to actually cure, and not merely "treat" (i.e., delay the onset or progress of symptoms), is stripped away (Cragg and Newman 2001). Therefore, there are examples from history showing that knowledge is directly linked with intangibles and, in fact, that only when intangibles are included does science lead to knowledge (Vaziri et al. 2007). Science and technology as we see them today would return to Europe in the 16th century. However, much earlier than that, Thomas Aquinas (1225-1274 A.D.) adopted the logic of Averröes (derived from Aristotle's work), an Arab philosopher of Spain whom Thomas Aquinas admired and who was affectionately called "The Interpreter." Thomas Aquinas, whose fascination with Aristotle was well known, introduced to Europe the logic of the Creator, His Book, and this Book being the source of all knowledge, with a simple yet highly consequential modification. He would color the (only) Creator as God and define the collection of Catholic Church documentation on what eventuated in the neighborhood of Jerusalem some millennium earlier as the only communication of God to mankind, hence the title Bible - the (only) Book. If Aristotle was the one who introduced the notion of removing intention and the time function from all philosophical discourse, Thomas Aquinas is the one who legitimized the concept and introduced
68
THE GREENING OF PETROLEUM OPERATIONS
this as the only science (as in the only process for gaining knowledge). Zatzman and Islam (2007a) noted this divergence in pathways. All European scientists after Thomas Aquinas' time would actually have a Christian root. Recently, Chhetri and Islam (2008) tabulated a list of various scientists and their religious roots, all of which pointed to the contention that Thomas Aquinas made a profound impact on them. Historically, challenging the first premise, where the divergence is set, has become such a taboo that there is no documented case of anyone challenging it and surviving the wrath of the Establishment. Even challenging some of the cursory premises has been hazardous, as demonstrated by Galileo. Today, we continue to avoid challenging the first premise, and even in the Information Age it continues to be hazardous, if not fatal, to challenge the first premise or secondary premises. It has been possible to keep this modus operandi because new laws have been passed to protect "freedom of religion" and, of late, "freedom of speech." For special-interest groups, this opens a Pandora's box for creating "us vs. them," the "clash of civilizations," and every aphenomenal model now in evidence (Zatzman and Islam 2007b). Even though astronomers, alchemists, and scholars of many other disciplines were active experimental scientists in other parts of the world for millennia, experimental science in continental Europe began only in the seventeenth century. Sir Francis Bacon (1561-1626) emphasized that experiments should be planned and the results carefully recorded so they could be repeated and verified. Again, there was no recognition of time as a dependent variable and a continuous function. The work of Sir Isaac Newton (1643-1727) marks the most profound impact on modern European science and technology. Historically, what Thomas Aquinas' model did to European philosophy is the same as what Newton's model did to New Science. Various aspects of Newton's laws of motion, gravity, and light propagation have recently been reviewed by Zatzman et al. (2008a, 2008b). Some of those discussions will be presented in subsequent chapters of this book. Here, it suffices to indicate that Newton's laws suffered from the lack of a real first premise (see Table 2.4). With the exception of Einstein, every scientist took Newton's model as the ideal and developed new models based on it, adding only factors thought to be relevant because experimental data did not match theoretical predictions.
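The "constant mass" premise flagged in Table 2.4 can be made explicit with a worked equation (standard mechanics, added here as an editorial illustration). Newton's second law, stated generally, acts on momentum, and the familiar F = ma survives only after the mass-variation term is discarded:

$$F = \frac{d(mv)}{dt} = m\frac{dv}{dt} + v\frac{dm}{dt} \quad\xrightarrow{\;dm/dt\,=\,0\;}\quad F = ma.$$

Dropping the v(dm/dt) term is harmless for a cannonball over a few seconds, but as a first premise it excludes every process in which a body exchanges mass continuously with its surroundings, which is the normal condition of natural systems (Table 2.3, feature 13).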
Boyle (1627-1691), an experimentalist, recognized the existence of constant motion in gas particles ("corpuscles," in his word) - the same idea that Heraclitus proposed over 2,000 years before Boyle and that was rejected by Aristotle and subsequent followers. While this recognition was in conformance with the natural traits of matter (Table 2.3), his belief that the particles are uniform and rigid is in stark contradiction to the real nature of matter. This fundamentally incorrect notion of matter continues to dominate kinetic molecular theory. In the last half of the 18th century, John Dalton (1766-1844) reasserted the atomic view of matter, albeit now stripped of Heraclitus' metaphysical discussion and explanations. Newton's laws of motion dominated the scientific discourse of his day, so Dalton rationalized this modernized atomic view with Newton's object masses, and we end up with matter composed of atoms rendered as spherical balls in three-dimensional space, continuously in motion throughout three-dimensional space, within time considered as an independent variable. This line of research seals any hope of incorporating time as a continuous function, which would effectively make the process infinite-dimensional. The essential observations of Dalton are as follows:

1) Elements are composed of atoms (themselves being unbreakable).
2) All atoms of a given element have identical properties, and those properties differ from those of other elements.
3) Compounds are formed when atoms of different elements combine with one another in small whole numbers. (This one emerges from the previous assumption that atoms are unbreakable.)
4) The relative numbers and kinds of atoms are constant in a given compound. (This one asserts steady state, in contrast to the notion of kinetic models.)

Figure 2.14 shows Dalton's depiction of molecular structure. Note that in this figure, No. 28 denotes carbon dioxide. This depiction is fundamentally flawed because all four premises listed above are aphenomenal. This representation amounts to Aristotle's depiction of matter and energy, both of which are unique functions of
Figure 2.14 Depiction of Dalton's atomic symbols.
composition and devoid of the time function (Figure 2.13). It should also be noted that, to this day, this is the same model used in all disciplines of New Science. Consider Figure 2.15, which shows the depiction of non-organic molecules as well as of DNA. This fundamentally flawed model of matter was used as the basis for subsequent developments in chemical engineering. In subsequent Europe-based studies, research in the physico-chemical properties of matter was distinctly separate from research in energy and light. Even though Newton put forward theories for both mass and energy, subsequent research followed different tracks, some focusing
on chemistry, others on physics, astronomy, and numerous other branches of New Science.

Figure 2.15 Symmetry and uniformity continue to be the main traits of today's scientific models (DNA model, left; molecular model, right).

The law of conservation of mass has been known to be true for thousands of years. In 450 B.C., Anaxagoras said, "Wrongly do the Greeks suppose that aught begins or ceases to be; for nothing comes into being or is destroyed; but all is an aggregation or secretion of preexisting things; so that all becoming might more correctly be called becoming mixed, and all corruption, becoming separate." When Arabs translated this work from old Greek into Arabic, they had no problem with this statement of Anaxagoras. In fact, they were inspired by Qur'anic verses that clearly state that the Universe was created out of nothing and that, ever since its creation, all has been a matter of phase transition, as no new matter or energy is created. However, in modern scientific literature, Antoine Laurent Lavoisier (1743-1794) is credited with discovering the law of conservation of mass. Lavoisier's first premise was that "mass cannot be created or destroyed." This assumption does not violate any of the features of nature. However, his famous experiment had some assumptions embedded in it. When he conducted his experiments, he assumed that the container was sealed perfectly. This assumption violates the fundamental tenet of nature that no isolated chamber can
be created (Table 2.3). Rather than recognizing the aphenomenality of the assumption that a perfect seal can be created, he "verified" his first premise (the law of conservation of mass) "within experimental error." The error is not in the experiment, which remains real (hence, true) at all times; it lies instead in the first premise that a perfect seal had been created. By avoiding confronting this premise and by introducing a different criterion (e.g., experimental error), which is aphenomenal and, hence, non-verifiable, Lavoisier invoked a European prejudice linked to the pragmatic approach, namely that "whatever works is true." This leads to linking measurement error to the outcome. What could Lavoisier have done with the knowledge of his time to link this to intangibles? For instance, had he allowed for some leakage from the container, modern-day air conditioner design might likewise account for how much Freon leaks to the atmosphere. Lavoisier nevertheless faced extreme resistance from scientists who were still firm believers in the phlogiston theory. (In Greek, phlogios means "fiery.") A German physician, alchemist, adventurer, and professor of medicine named Johann Joachim Becher (1635-1682) first promoted this theory. The theory held that a substance, named phlogiston, exists within combustible bodies. When burnt (energy added), this substance was thought to be released so the body could achieve its "true" state. This theory enjoyed the support of mainstream European scientists for nearly 100 years. One of the proponents of this theory was Robert Boyle, the scientist who would gain fame for relating the pressure of a gas to its volume. Mikhail Vasilyevich Lomonosov (Михаил Васильевич Ломоносов) (1711-1765) was a Russian scientist, writer, and polymath who made important contributions to literature, education, and science. He wrote in his diary, "Today I made an experiment in hermetic glass vessels in order to determine whether the mass of metals increases from the action of pure heat. The experiment demonstrated that the famous Robert Boyle was deluded, for without access of air from outside, the mass of the burnt metal remains the same." Ever since the work of Lavoisier, the steady-state model of mass balance has been employed in all segments of chemistry and chemical engineering. These works focused on defining symbols, identifying new elements, and classifying them. Current chemical symbols (formulas) are derived from the suggestions of Jöns Berzelius (1779-1848). He used oxygen as the standard reference for atomic mass (O = 16.00 AMU). In contrast to Dalton's assertion
that water had a formula of HO, Berzelius showed it to be H₂O. For Dalton, all atoms had a valence of one, which made the atomic mass of oxygen 8. The consideration of mass as independent of time forced all chemical models to be steady, or non-dynamic. More importantly, this model was embedded into the definition of time, coupling mass and energy in an intricate fashion that obscured the reality even from experts, as the following analysis shows. In an attempt to standardize distance as part of a universal measurement system, the French in the 1770s defined the meter as follows: one meter is 1/10,000,000 of the distance from the North Pole to the Equator (along the meridian through Paris). They also discussed the spurious arrangement of introducing the unit of time as the second. It was not until 1832 that the concept of the second was attached to this arrangement. The original definition was 1 second = 1 mean solar day/86,400. As late as 1960, the ephemeris second, defined as a fraction of the tropical year, officially became part of the new SI system. It was soon recognized that both the mean solar day and the mean tropical year vary, albeit slightly, and a more "precise" unit (the apparent assertion being that more precise means closer to the truth) was introduced in 1967. It was defined as the duration of 9,192,631,770 cycles of the vibration of the cesium 133 atom. The assumption here is that the vibration of the cesium 133 atom is exact, this assumption being the basis of the atomic clock. Only recently has it been revealed that this assumption is not correct, creating an added source of error in the entire evaluation of the speed of light. On the other hand, if a purely scientific approach is taken, one realizes that the true speed of light is neither constant nor the highest achievable speed. Clayton and Moffat discussed the phenomenon of variable light speed (1999). Also, Schewe and Stein discussed the possibility of a very low speed of light (1999). In 1998, the research group of Lene Hau showed that the speed of light can be brought down to as low as 61 km/hour (17 m/s) by manipulating the energy level of the medium (Hau et al. 1999). Two years later, the same research group reported the near halting of light (Liu et al. 2001). The work of Bajcsy et al. falls under the same category, except that they identified the tiny mirror-like behavior of the media rather than simply a low energy level (2003). More recent work on the subject deals with controlling light rather than observing its natural behavior (Ginsberg et al. 2007).
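The arithmetic behind the successive definitions of the second discussed above can be laid out in a few lines of Python (a minimal sketch; the figure used for the present-day mean solar day is approximate, and the variable names are ours):

    # The 1832-style definition: the second as a fixed fraction of the mean solar day.
    NOMINAL_DAY_S = 86_400                      # 24 h x 60 min x 60 s, by definition

    # The 1967 definition: the second as a count of cesium-133 vibrations.
    CS133_CYCLES_PER_S = 9_192_631_770

    # The mean solar day is not constant; at present it runs roughly 1-2 ms long.
    observed_day_s = 86_400.002                 # approximate present-day value
    drift_ms = (observed_day_s - NOMINAL_DAY_S) * 1000
    print(f"Cesium cycles in one nominal day: {CS133_CYCLES_PER_S * NOMINAL_DAY_S:,}")
    print(f"Mean-solar-day drift: about {drift_ms:.1f} ms per day")

Fixing the unit by decree does not make the underlying natural cycle any less variable - which is precisely the point at issue.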
Abou-Kassem et al. used the arguments provided by previous physicists and constructed the graph shown in Figure 2.16 (2008). It is clear from this graph that "speed of light," "vacuum," "unit of time," and "unit of distance" are arbitrarily set constants that do not change the true nature of nature, which remains continuously dynamic. Note that media density can be converted into media energy only if a continuous transition between energy and mass is considered. This transition was known even to Democritus and accepted by Aristotle. Such a transition, however, is rarely talked about in the context of engineering (Khan et al. 2007). This graph also reveals that once definitions and assertions have been accepted at face value and are no longer subject to further scrutiny, the possibility of increasing knowledge (as in coming closer to discovering the truth about nature) is diminished. Finally, this graph confirms that Aristotle's notion of infinite speed, which was rejected by Arab scientists/philosophers, would be applicable only if the media density were zero - a scientifically absurd condition, because it would mean the space is void, with no matter present whatsoever. This is the state that the Ancient Greeks accepted as the condition prior to the creation of the Universe. It is probable that the speed of light would be infinite in a void, but the presence of light would simply mean the void is now filled with matter - unless the assumption is that light carries no matter with it, yet another absurdity.

Figure 2.16 Speed of light ("true speed of light") as a function of media density (redrawn from Abou-Kassem et al. 2007).

Albert Einstein came up with a number of theories, none of which are considered laws. The most notable was the theory of relativity. Unlike the theories of other European scientists of modern times, this theory recognized the true nature of nature and does not have
the first premise that violates any fundamental feature of nature. Ironically, the very first scientific article that mentioned relativity after Einstein was by Walter Kaufmann, who "conclusively" refuted the theory of relativity. Even though this "conclusive" refutation did not last very long, one point continues to obscure scientific studies, which is the expectation that something can be "proven." This is a fundamental misconception, as outlined by Zatzman and Islam (2007a). The correct statement in any scientific research should involve discussion of the premise on which the research is based. The first premise represents the one fundamental intangible of a thought process. If the first premise is not true, because it violates fundamental feature(s) of nature, the entire deduction process is corrupted and no new knowledge can emerge from the deduction. Einstein's equally famous theory is more directly involved with mass conservation. He derived E = mc² using the first premise of Planck (1901). Einstein's formulation was the first attempt by European scientists to connect energy with mass. However, in addition to the aphenomenal premises of Planck, this famous equation has its own premises that are aphenomenal (see Table 2.3). This equation remains popular and is considered useful (in a pragmatic sense) for a range of applications, including nuclear energy. For instance, it is quickly deduced from this equation that 100 kJ is equivalent to approximately 10⁻⁹ gram of mass. Because no attention is given to the source of the matter or the pathway, the information regarding these two important intangibles is wiped out from the science of tangibles. The fact that a great amount of energy is released from a nuclear bomb is then taken as evidence that the theory is correct. By accepting this at face value (heat as the one-dimensional criterion), heat from nuclear energy, electrical energy, electromagnetic irradiation, fossil fuel burning, wood burning, or solar energy becomes identical. This has tremendous implications for economics, which is the driver of modern engineering.
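The arithmetic behind the 100 kJ figure follows directly from E = mc²; a minimal sketch in Python (variable names ours):

    # Mass equivalent of 100 kJ via E = m * c**2, i.e., m = E / c**2.
    c = 299_792_458.0                # speed of light in vacuum, m/s
    E = 100e3                        # 100 kJ, expressed in joules
    m_grams = (E / c**2) * 1000      # kilograms converted to grams
    print(f"{m_grams:.2e} g")        # ~1.11e-09 g, on the order of 10^-9 gram

Note that this computation, true to the science of tangibles, says nothing about where the mass came from or by what pathway the energy is released - the two intangibles discussed above.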
2.4.2
Delinearized History of Mass and Energy Management in the Middle East
At the Petroleum Development Oman (PDO) Planetarium, Dr. Marwan Shwaiki recounted for us an arrestingly delinearized history of the Arab contribution to world scientific and technical culture. What follows is our distillation of some of the main outlines.
Human civilization is synonymous with working with nature. For thousands of years of known history, humans have marveled at using mathematics to design technologies that created the basis for sustaining life on this planet. In this design, the natural system was used as a model. For thousands of years, the sun was recognized as the source of energy needed to sustain life. For thousands of years, improvements were made over natural systems without violating natural principles of sustainability. The length of a shadow was used by ancient civilizations in the Middle East to regulate the flow of water for irrigation - a process still in use in some places, known as the falaj system. At night, stars and other celestial bodies were used to ascertain water flow. This is old, but by no means obsolete, technology. In fact, this technology is far superior to the irrigation implemented in the modern age, which relies on deep-water exploitation. For thousands of years of known history, stars were used to navigate. It was no illusion, even for those who believed in myths and legends. Stars and celestial bodies are dynamic. This dynamic nature nourished poetry and other imaginings about these natural illuminated bodies for thousands of years. As far as we know from recorded history, these stories began with the Babylonians. Babylonian civilization is credited with dividing the heavenly bodies into 12 groups, known as the Zodiac. The Babylonians are also credited with the sexagesimal principle of dividing the circle into 360 degrees and each degree into 60 minutes. They are not, however, responsible for creating the confusion between the units of time (second and minute) and of space (Zatzman 2007b). Their vision was more set on the time domain. The Babylonians had noticed that the sun returned to its original location among the stars once every 365 days. They named this length of time a "year." They also noticed that the moon made almost 12 revolutions during that period. Therefore, they divided the year into 12 parts, each of which was named a "month." Hence, the Babylonians were the first to conceive of the divisions of the astronomical clock. Along came Egyptian civilization, which followed the path opened by the Babylonians. They understood even in those days that the sun is not just a star and the earth is not just a planet. In a continuous advancement of knowledge, they added more constellations to those already identified by the Babylonians. They divided the sky into 36 groups starting with the brightest star, Sirius. They believed (on the basis of their own calculations) that the sun took
10 days to cross over each of the 36 constellations. That was what they were proposing thousands of years before the Gregorian calendar fixed the number of days at some 365. Remarkably, this latter fixation would actually violate natural laws; in any event, the Egyptians had no part in it. The Gregorian "solution" was larded with a Eurocentric bias, one that solved the problem of the days that failed to add up by simply wiping out eleven days. (Unix users can see this for themselves by issuing the command "cal 1752" in a terminal session: in Britain's 1752 adoption of the Gregorian calendar, September 2 was followed directly by September 14.) It was the Greeks — some of whom, e.g., Ptolemy, traveled to Egypt to gather knowledge — who brought the total number of constellations to 48. This was a remarkable achievement. Even after thousands more years of civilization and the discovery of constellations in the southern sky, the total number of constellations was declared to be 88 in 1930. Of course, the Greek version of the same knowledge contained many myths and legends, but it always portrayed the eternal conflict between good and evil, between ugly and beautiful, and between right and wrong. The emergence of Islam in the Arabian Peninsula catapulted Arabs to gather knowledge on a scale and at a pace unprecedented in its time. Even before this, they were less concerned with constellations as groups of stars and far more focused on individual stars and using them effectively to navigate. (Not by accident, star constellations' names are of Greek origin, while the names of individual stars are mostly of Arabic origin.) In the modern astronomical atlas, some 200 of the 400 brightest stars bear names of Arabic origin. Arabs, just like ancient Indians, also gave particular importance to the moon. Based on the movement of the moon among the stars, the Arabs divided the sky and its stars into 28 sections, naming them manazil, meaning the "mansions of the moon." The moon is "hosted" in each mansion for a day and a night. Thus, the pre-Islamic Arabs based their calendar on the moon, although they noted the accumulating differences between the solar and lunar calendars. They also had many myths surrounding the sun, the moon, and the stars. While Greek myths focused on kings and gods, Arab myths were more focused on individuals and families. Prehistoric Indians and Chinese assumed that the Earth had the shape of a shell borne by four huge elephants standing on a gigantic turtle. Similarly, some of the inhabitants of Asia Minor envisioned that the Earth was in the form of a huge disk carried by three gigantic whales floating on the water. The ancient inhabitants of
Africa believed that the sun set into a "lower world" every evening and that huge elephants pushed it back all night in order for it to rise the next morning. Even the ancient Egyptians imagined the sky in the shape of a huge woman surrounding the Earth, decorated from the inside with the stars. This was in sharp contrast to the ancient Greek belief that the stars were part of a huge sphere. Ptolemy refined the ancient Greek knowledge of astronomy by imagining a large sphere with the stars located on its outer surface. He thought that all the planets known at the time - Mercury, Venus, Mars, Jupiter, and Saturn - were revolving within this huge sphere, together with the sun and the moon. The ancient Greeks, including Aristotle, assumed that the orbits of these celestial bodies were perfectly circular and that the bodies would keep revolving forever. For Aristotle, such perfection was manifested in symmetric arrangements. His followers continue to use this model. Scientifically speaking, the spherical model is no different from the huge-elephants-on-a-gigantic-turtle model and the rest. What developed over the centuries following Ptolemy is a Eurocentric bias holding that any model the Greeks proposed is inherently superior to the models proposed by ancient Indians, Africans, or Chinese. In the bigger picture, however, we now know that the pathways of celestial bodies are non-symmetric and dynamic. Only with this non-symmetric model can one explain the retrograde motion of the planets - a phenomenon that most ancient civilizations had noticed. Eurocentric views, however, would continue to promote a single theory that saw the Earth as the center of the Universe. In Ptolemy's words, "During its rotation around the Earth, a planet also rotates in a small circle. On return to its orbit, it appears to us as if it is going back to the west." This assertion, albeit false, explained the observation of retrograde motion. Because it explained a phenomenon, it was taken to be true - the essence of the pragmatic approach - leading to the belief that the Earth is indeed the center of the Universe, a belief that would dominate the Eurocentric world for over a thousand years. The knowledge gathered about astronomy by the ancient Chinese and Indians was both extensive and profound. The Chinese were particularly proficient in recording astronomical incidents. The Indians excelled in calculations and had established important astronomical observatories. It was the Arabs of the post-Islamic Renaissance who would lead the world for many centuries, setting an example of how to benefit from the knowledge of previous civilizations. Underlying
this synthesizing capacity was a strong motive to seek the truth about everything. Among other reasons, an important one was that every practicing Muslim was required to offer formal prayer five times a day, all timings relating to the position of the sun on the horizon. They were also required to fast one month of the year and to offer pilgrimage to Mecca once in a lifetime, no matter how far away they resided, as long as they could afford the trip. Most importantly, they were motivated by the hadith of the Prophet, which clearly outlined, "It is obligatory for every Muslim man and woman to seek knowledge through science (as in process)." This was a significant point of departure, diverging sharply from the Hellenized conception that would form the basis of what later became "Western Civilization" at the end of the European Middle Ages. Greek thinking from its earliest forms associated the passage of time not with the unfolding of new knowledge about a phenomenon, but rather with decay and the onset of increasing disorder. Its conceptions of the Ideal, of the Forms, etc., are all entire and complete unto themselves, standing outside Time, truth being identified with a point at which everything stands still. (Even today, conventional models, based on the "New Science" of tangibles that has unfolded since the 17th century, disclose their debt to these Greek models by virtue of their obsession with the steady state, which is considered the "reference-point" from which to discuss many physical phenomena, as though there were such a state anywhere in nature.) Implicitly, on the basis of such a standpoint, consciousness and knowledge exist in the "here-and-now," after the past and before the future unfolds. (Again, today, conventional scientific models treat time as the independent variable, in which one may go forward or backward, whereas time in nature cannot be made to go backward even if a process is reversible.) All this has a significant but rarely articulated consequence for how nature and its truths would be cognized. According to this arrangement, the individual's knowledge of the truth at any given moment, frozen outside of time, is co-extensive with whatever is being observed, noted, studied, etc. The Islamic view diverged sharply by distinguishing between belief, knowledge (i.e., some conscious awareness of the truth), and truth (or actuality). In this arrangement, the individual's knowledge of the truth or of nature is always fragmentary and also time-dependent. Furthermore, how, whether, or even where
knowledge is gathered cannot be subordinated to the individual's present state of belief(s), desires, or prejudices. In the Islamic view, a person seeking knowledge of the truth cannot be biased against the source of knowledge, be it in the form of geographical location or the tangible status of a people. Muslims felt compelled to become what we would term "scientists," or independent thinkers, each person deriving inspiration from the Quran and the hadith of the Prophet Muhammad. Hence, they had no difficulty gaining knowledge from the experience of their predecessors in different fields of science and mathematics. They were solely responsible for bringing back the writings of Aristotle, Ptolemy, and the Indian Brahmagupta in the same breath. None of these ancient scholars was a role model for the Muslim scholars; they were simply predecessors whose knowledge the Muslims did not want to squander. They started the greatest translation campaign in the history of mankind, converting the written works of previous civilizations into Arabic. In due course, they had gained all prior knowledge of astronomy, and that enabled them to become the world leaders in that field of science for five successive centuries. Even their political leaders were fond of science and knowledge. One remarkable pioneer of knowledge was Caliph Al-Mamoon, one of the Abbasite rulers. Centuries before Europeans debated whether the Earth was flat, Al-Mamoon and his scholars already knew the earth was spherical (although, significantly, not in the European perfect-sphere sense), and they wanted to find out the circumference of the Earth. Al-Mamoon sent out two highly competent scientific expeditions. Working independently, they were to measure the circumference of the Earth. The first expedition went to Sinjar, a very flat desert in Iraq. At a certain point, on latitude 35 degrees north, they fixed a post into the ground and tied a rope to it. Then they started to walk carefully northwards in order to make the North Pole appear one degree higher in the sky. Each time the end of the rope was reached, the expedition fixed another post and stretched another rope from it until their destination was reached - latitude 36 degrees north. They recorded the total length of the ropes and returned to the original starting point at 35 degrees north. From there, they repeated the experiment, heading south this time. They continued walking and stretching ropes between posts until the North Pole dropped in the sky by one degree, when they reached the latitude of 34 degrees.
The second of Al-Mamoon's expeditions did the same thing, but in the Kufa desert. When they had finished the task, both expeditions returned to Al-Mamoon and reported the total length of rope used for measuring the length of one degree of the Earth's circumference. Taking the average of all the expeditions, the length of one degree amounted to 56.6 Arabic miles. The Arabic mile is equal to 1,973 meters. Therefore, according to the measurements made by the two expeditions, the Earth's circumference is equal to 40,252 kilometers. How does this compare with the circumference of the Earth as we know it today? The modern figure, measured through the equator, is 40,075 kilometers - a difference of less than 200 km. Contrast that with the debate over whether the Earth was flat, still taking place in Europe many centuries later. Another important aspect is that this was the first time in known history that a state sponsored fundamental research. The motive of Caliph Mamoon was not to capture more land, and history shows that these rulers were not the recipients of any tax. In fact, all rulers paid zakat, the obligatory charity, on the wealth they possessed, with the entire amount going to the poor. Also, the judicial system was separate from the administration. The judicial system was always in the hands of the "most righteous" rather than the most "powerful." In fact, during the entire Ottoman period, even the state language was not Arabic. Arabic was the language of science (see Figure 2.17). For administration, Turkish was the language for communication with headquarters, and local languages were used for local communication.
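The expeditions' arithmetic can be retraced in a few lines of Python (a minimal sketch; the reported 56.6 is treated here as a rounding of 56.67 Arabic miles per degree, which reproduces the quoted 40,252 km):

    # Al-Mamoon's estimate: 360 degrees times the measured length of one degree.
    ARABIC_MILE_M = 1973               # meters, as given in the text
    MILES_PER_DEGREE = 56.67           # reported as 56.6 Arabic miles
    circumference_km = 360 * MILES_PER_DEGREE * ARABIC_MILE_M / 1000
    print(f"Ninth-century estimate: {circumference_km:,.0f} km")   # ~40,252 km
    print("Modern equatorial value: 40,075 km")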
Figure 2.17 During the Islamic era, science meant no restriction on knowledge gathering. (The figure reproduces a modern headline asking, "Did Islamic scientists discover evolutionary theory before Darwin?")
This attitude is starkly different from what we encountered in Europe. In the 16th century, Copernicus stated, "The Earth is not located in the center of the universe but the sun is. The earth and the planets rotate around the Sun." The Church that Galileo served his entire life could not tolerate this simple observation of the truth. Galileo saw the earth moving and could not reconcile that with any dogma prohibiting him from stating what he knew to be the truth. In his words, "O people! Beware that your Earth, which you think stationary, is in fact rotating. We are living on a great pendulum." He discovered the four great moons of Jupiter. He was the inventor of the clock pendulum and the "Laws of Motion." The Church could not bear Galileo's boldness, and he was put on trial. Confronted with such tyranny, Galileo, who by then was old and weak, yielded and temporarily changed his mind. But as he walked out of the court, he stamped his feet in anger, saying, "But you are still rotating, Earth!" This was the beginning of the New Science that dominates the world to this day. Galileo marks the eureka moment of western "science": science, it is said, had finally broken out of the grip of the Church and was therefore free from the bias that had a chokehold on clear thinking. This is, unfortunately, yet another misconception. The science that was unleashed after Galileo remains the science of tangibles. With this science, the earth is no longer flat or at steady state, but it is still not the science of knowledge (Islam 2007). The same mindset has led many scientists to reject Darwin's theory - a topic that was actually handled by Muslim scholars centuries ago (see Figure 2.17). Consider also the case of the Earth itself. Ibn Kordathyah, an Arab scientist, mentioned in his early book Al-Masalik wal-Mamalik, written in the 800s, that the earth is not flat. So, what shape did he think the earth was? He used the word baidh (or baidha). In the modern Europe-dominated world, it is translated as "elliptical." In reality, an ellipse is an aphenomenal shape, meaning it does not exist anywhere in nature. The true meaning of this word is "ostrich's egg" (or its nest), which, obviously, is not elliptical (Fig. 2.18). The inspiration of Ibn Kordathyah came from the Qur'an (Chapter 79, verse 30). The Ideal in Islamic culture is the Qur'an (Zatzman and Islam 2007). Contrast this with western "science," for which the starting point would be the outline circumference of a circle rendered as an ellipse that has "degenerated" into some kind of ovoid. The egg is then elaborated as an extrusion into 3-D of a particular case or class of a non-spherical, somewhat ellipsoidal circumference. Why not just start with the egg itself, instead of with circles and ellipses? Eggs are real, actual. We can know all their properties directly, including everything important about the strength and resilience of the egg's shape as a container for its particular contents, without having to assume some simple ideal and then extrapolate everything about it and the egg from abstractions that exist solely in someone's imagination. Going in the other direction, on the other hand, is the much richer scientific path. Once we have explored real eggs and generalized everything we find out, we can anticipate meaningfully what will happen in the relations between the forms of other exterior surfaces found in nature and their interior contents. What we see here is a difference in attitude between the standpoints maintained pre- and post-Thomas Aquinas, the father of Europe-centric philosophy. Before his time, truth was bound up
Figure 2.18 An ostrich egg was the shape used to describe the earth in the 9th century.
with knowledge and could be augmented by subsequent inquiry. After that point, on the other hand, the correctness or quality of knowledge was rendered a function of its conformity with the experience or theories of the elite (called "laws"). Before, personal experience was just "personal." After, the experience of the elite became a commodity that could be purchased as a source of knowledge. Before, the source of knowledge was individual endeavor, research, and critical thinking. After, it became dogma, blind faith, and the power of external (aphenomenal) forces. After Thomas Aquinas, few Europeans engaged in increasing knowledge per se. If they did, they were severely persecuted. Copernicus (1473-1543) is just one example. What was his charge? That the Earth moves around a stationary sun. It was not complete knowledge (it is important to note that "complete" knowledge is anti-knowledge), but it was knowledge in the right direction. His theory contradicted that of Ptolemy and, in general, the Catholic Church. Yet Wikipedia writes this about him: "While the heliocentric theory had been formulated by Greek, Indian and Muslim savants centuries before Copernicus, his reiteration that the sun — rather than the Earth — is at the center of the solar system is considered among the most important landmarks in the history of modern science." (Website 1) While there is some recognition here that Copernicus's knowledge was not new knowledge, it did not prevent European scientists from making statements that would sanctify Copernicus. Goethe, for instance, wrote: "Of all discoveries and opinions, none may have exerted a greater effect on the human spirit than the doctrine of Copernicus. The world had scarcely become known as round and complete in itself when it was asked to waive the tremendous privilege of being the center of the universe. Never, perhaps, was a greater demand made on mankind — for by this admission so many things vanished in mist and smoke! What became of our Eden, our world of innocence, piety and poetry; the testimony of the senses; the conviction of a poetic — religious faith? No wonder his contemporaries did not wish to let all this go and offered every possible resistance to a doctrine which in its converts authorized and demanded a freedom of view and greatness of thought so far unknown, indeed not even dreamed of." (Website 1)
In the above statement, there are three items to note:

1) there is no reference to Copernicus' knowledge being prior knowledge;
2) there is no comment on what the problem was with Copernicus' theory; and
3) there is no explanation as to why religious fanatics continued to stifle knowledge, or how to handle them in the future.

What would be the knowledge-based approach here? To begin with, ask whether the theory contradicts the truth. European scholars did not ask this question. They compared it with words in the Holy Bible — a standard whose authenticity, impossible to establish unambiguously, was itself subject to interpretation. When we question whether something is true, we cannot simply define the truth as we wish. We have to state clearly the standard measure of that truth. For Muslim scientists prior to the European Renaissance, the Qur'an formed the standard. Here is the relevant passage from Chapter 36 (verses 38-40) of the Qur'an, addressing the matters of whether the sun is "stationary," whether the earth stands at the center of the solar system, and whether the moon is a planet:

وَالشَّمْسُ تَجْرِي لِمُسْتَقَرٍّ لَّهَا ذَلِكَ تَقْدِيرُ الْعَزِيزِ الْعَلِيمِ (38) وَالْقَمَرَ قَدَّرْنَاهُ مَنَازِلَ حَتَّى عَادَ كَالْعُرْجُونِ الْقَدِيمِ (39) لَا الشَّمْسُ يَنبَغِي لَهَا أَن تُدْرِكَ الْقَمَرَ وَلَا اللَّيْلُ سَابِقُ النَّهَارِ وَكُلٌّ فِي فَلَكٍ يَسْبَحُونَ (40)

One possible translation is, "And the sun runs on its fixed course for a term (appointed). That is the Decree (the word comes from qadr, as in 'proportioned' or 'balanced') of the All Mighty (Al-Aziz) and the All Knowing (Al-Aleem, the root word being ilm, or science). And the moon, we have measured (or 'proportioned,' again from the root word qadr) for it locations (literally, 'mansions') till it returns like the old dried curved date stalk. It is not for the sun to overtake the moon, nor does the night outstrip the day. They all float, each in an orbit." When did we find out that the sun is not stationary? What is its speed, and what does the solar orbit look like? See Table 2.5. With 20/20 hindsight, many write these days that the speed of the sun could have been predicted using Newton's law. What is missing in this assertion is the presumption that Newton's law is absolute and that all hypotheses behind Newton's gravitational law are absolutely true. In addition, it assumes that we know exactly how the gravitational attractions are imparted from the various celestial bodies — a proposition that stands "over the moon."
Table 2.5 Speed of the Sun's movement as noted in recent articles.

Bibliographic Entry | Result (with surrounding text) | Standardized Result

Chaisson, Eric, & McMillan, Steve. Astronomy Today. New Jersey: Prentice-Hall, 1993: 533. | "Measurements of gas velocities in the solar neighborhood show that the sun, and everything in its vicinity, orbits the galactic center at a speed of about 220 km/s...." | 220 km/s

"Milky Way Galaxy." The New Encyclopaedia Britannica. 15th ed. Chicago: Encyclopaedia Britannica, 1998: 131. | "The Sun, which is located relatively far from the nucleus, moves at an estimated speed of about 225 km per second (140 miles per second) in a nearly circular orbit." | 225 km/s

Goldsmith, Donald. The Astronomers. New York: St. Martin's Press, 1991: 39. | "If the solar system ... were not moving in orbit around the center, we would fall straight in toward it, arriving a hundred million years from now. But because we do move (at about 150 miles per second) along a nearly circular path ...." | 240 km/s

Norton, Arthur P. Norton's Star Atlas. New York: Longman Scientific & Technical, 1978: 92. | "... the sun's neighborhood, including the Sun itself, are moving around the centre of our Galaxy in approximately circular orbits with velocities of the order of 250 km/s." | 250 km/s

Recer, Paul (Associated Press). "Radio Astronomers Measure Sun's Orbit Around Milky Way." Houston Chronicle, 1 June 1990. | "Using a radio telescope system that measures celestial distances 500 times more accurately than the Hubble Space Telescope, astronomers plotted the motion of the Milky Way and found that the sun and its family of planets were orbiting the galaxy at about 135 miles per second." "The sun circles the Milky Way at a speed of about 486,000 miles per hour." | 217 km/s
Along came Galileo (1564-1642). Today, he is considered the "father of modern astronomy," the "father of modern physics," and the "father of science." As usual, the Church found reasons to ask Galileo to stop promoting his ideas. However, Galileo really was not a "rebel." He remained submissive to the Church and never challenged the original dogma of the Church that promoted the aphenomenal model. Consider the following quotations (Website 2): Psalm 93:1, Psalm 96:10, and 1 Chronicles 16:30 state that "the world is firmly established, it cannot be moved." Psalm 104:5 says, "[the LORD] set the earth on its foundations; it can never be moved." Ecclesiastes 1:5 states that "the sun rises and the sun sets, and hurries back to where it rises." Galileo defended heliocentrism and claimed it was not contrary to those Scripture passages. He took Augustine's position that not every passage of Scripture should be taken literally, particularly when the scripture in question is a book of poetry and songs and not a book of instructions or history. The writers of the Scripture wrote from the perspective of the terrestrial world, and from that vantage point the
sun does rise and set. In fact, it is the earth's rotation that gives the impression of the sun's motion across the sky. Galileo's trouble with the Establishment did not come from contradicting Aristotle's principles. For instance, Galileo contradicted Aristotle's notions that the moon is a perfect sphere and that heavy objects fall faster than lighter ones in direct proportion to their weight. Amazingly, both the Establishment and Galileo continued to be enamored with Aristotle while fighting with each other. Could the original premise that Aristotle worked from be the same as that of the Church, as well as Galileo's? Why did he not rebel against this first premise? Galileo's contributions to technology, as the inventor of geometric and military compasses suitable for use by gunners and surveyors, are notable. There, even Aristotle would agree, was indeed τέχνη (techne), or "useful knowledge" — useful to the Establishment, of course. The most remarkable technological developments in the Middle East occurred between the 8th and 18th centuries. During that time, Islamic philosophy as outlined in the Qur'an did include intangibles, such as intention and the time function. The modern-day view holds that knowledge and solutions developed from and within nature might be either good or neutral (zero net impact) in their effects, or bad, all depending on how developed and correct our initial information and assumptions are. The view of science in the period of Islam's rise was rather different. It was that, since nature is an integrated whole in which humanity also has its roles, any knowledge and solutions developed according to how nature actually works will be ipso facto positive for humanity. Nature possesses an inbuilt positive intention of which people have to become conscious in order to develop knowledge and solutions that enhance nature. On the other hand, any knowledge or solutions developed by taking away from nature or moving away from nature would be unsustainable. This unsustainability would mark such knowledge and solutions as inherently anti-nature. Inventions from that era continue to amaze us today (Lu and Steinhardt 2006). Recently, Al-Hassani has documented 1,001 inventions of that era (2006). On reviewing those technologies, one discovers that none of them was unsustainable. We contend it was so because those technologies had a fundamentally phenomenal basis. Recall that a phenomenal basis refers to intentions that are in conformance with natural law and pathways that emulate
nature. Following is a brief summary of some of the known facts from that era.
2.4.3
Accounting
The following is a brief summary recounting some of the major contributions of Middle Eastern scholars. For detailed information, readers are directed to Kline (1972), Struik (1967), and Logan (1986). Original accounting in the Islamic era was introduced to calculate obligatory charity and inheritance (particularly of orphans) as prescribed by the Qur'an. Because any contract had to be written down with witnesses, it was obligatory for scholars of that time to come up with a numbering system. The currently used numbering system emerged from the Babylonians about 4,000 years ago. Their system was base 60, or perhaps a combination of base 60 and base 10, and was a positional or place-value system; that is, the relative position of a digit enters into determining its value. In our system we multiply by successive powers of 10 as we move to the left; the Babylonians used powers of 60. Some argue that this system emerged from the sundial, which had 360 degrees on it. All the Arabic numerals we use today are ideograms created by Abu Ja'far Muhammad ibn Musa al-Khowarizmi (c. 778-c. 850). Using abacus notations, he developed the manuscript decimal system. These numerals are a scientific display of the number of angles created. There are some ambiguities as to how exactly the numerals of his time looked, but it is certain that he introduced both the numerals (including zero) and the modern positional system of counting. The same person wrote the book titled Hisab al-jabr wal-muqabala, written in Baghdad in about 825 A.D. The title has been translated to mean "science of restoration (or reunion) and opposition" or "science of transposition and cancellation," and "The Book of Completion and Cancellation" or "The Book of Restoration and Balancing." The book essentially outlines restoration (jabr) and the cancellation of equivalent terms (the actual word, muqabalah, means "comparison") from the two sides of an equation. The equality sign (=) in Arabic represents natural balance or optimization, including intangibles. The first operation, jabr, is used in the step where x - 2 = 12 becomes x = 14. The left side of the first equation, where x is lessened by 2, is "restored" or "completed" back to x in the second equation. Muqabalah takes us from
x + y = y + 7 to x = 7 by "cancelling" or "balancing" the two sides of the equation. Today's word algebra is a Latin variant of the Arabic word al-jabr. At the outset, this seems to be exactly the same as any "equation" one would use in algebra or a chemical equation, but it is not so. Take, for instance, the simple reaction, oxygen + hydrogen → water. It would be written in modern form as follows:

2H₂ + O₂ = 2H₂O    (2.1)
However, the above equality symbol is illegitimate, because the elemental forms of oxygen and hydrogen cannot be equal to the compound form on the right-hand side. The most one can say about the two sides is that they are equivalent, but even that is not quite correct. This sign (=) in the original Arabic denotes equivalence. For the above equation to express an equivalence, the following elements must be added:

2H₂ + O₂ + Σ = 2H₂O + ΣO + ΔE(H, O, Σ)    (2.2)
Here, the symbol ΔE represents energy, which is itself a function of (H, O, Σ), where Σ is the summation of all other matter present. This would be the minimum requirement for the legitimate equivalency represented by the equal sign. This simple accounting system keeps track of the pathway of all transformations and is equivalent to keeping track of the time function. This is necessary because both matter and energy are conserved and all matter is dynamic - a fact that was known even to Greek philosophers, dating back to 500 B.C. Arabs also merged the notion of zero into its use as a filler in accounting. The Arabic term cipher (used by Arabs at least before 570 A.D.) means empty (not void). For instance, there could be a cipher hand, which would mean there is nothing visible, but it would not mean there is void (or nothingness, as in the Greek word Χάος, which represents void). This is in conformance with the Sanskrit word sunya, which also means empty space. Indians were advanced in the sense that they were already using the positional and decimal system, using zero. When Arabs adopted that notion, they called it cipher. Because the language of science remained Arabic for some 900 years from the 7th century, most scientific and philosophical work of Muslim (not all Arab) scholars had to be translated into Latin before it was accessible to modern Europe. The Medieval Latin version of cipher became "ciphra." The Latin entered Middle
English as "siphre," which eventually became "cypher" in British English and "cipher" in American English. Even now, integers are sometimes referred to as "cyphers" in English, though the usage is not common in American English. With "ciphra" taking on a more general meaning, a word derived from it, the Medieval Latin "zephirum" or "zepharino," came to be used to denote zero. This word eventually entered English as "zero." Interestingly, in Medieval Europe some communities banned the positional number system. The bankers of Florence, for example, were forbidden in 1299 to use Indian-Arabic numerals. Instead, they had to use Roman numerals. Thus, the more convenient Hindu-Arabic numbers had to be used secretly. As a result, "ciphra" came to mean a secret code, a usage that continues in English. Of course, resolving such a code is "deciphering" - a very popular word in modern English (Peterson, 1998).
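Al-Khowarizmi's two operations, jabr and muqabalah, can be retraced symbolically; the following is a minimal sketch using the sympy library (the variable names are ours):

    from sympy import symbols, Eq

    x, y = symbols("x y")

    # Jabr ("restoration"): x - 2 = 12 is restored to x = 14
    # by completing the diminished side, i.e., adding 2 to both sides.
    eq1 = Eq(x - 2, 12)
    restored = Eq(eq1.lhs + 2, eq1.rhs + 2)
    print(restored)    # Eq(x, 14)

    # Muqabalah ("balancing"): x + y = y + 7 becomes x = 7
    # by cancelling the equivalent term y from both sides.
    eq2 = Eq(x + y, y + 7)
    balanced = Eq(eq2.lhs - y, eq2.rhs - y)
    print(balanced)    # Eq(x, 7)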
2.4.4
Fundamental Science and Engineering
Ibn al-Haitham (Alhacen) is known as the father of modern optics. Using an early experimental scientific method in his Book of Optics, he discovered that light has a finite speed. This contrasts with Aristotle's belief that the speed of light is infinite. While Al-Haitham and his contemporary, the Persian Muslim philosopher and physicist Avicenna, demonstrated that light has a finite speed, they did not seek a constant speed; they were content with the finite-speed theory. The notion of a constant speed is a European one and emerges from an aphenomenal first premise. In Avicenna's translated words, "if the perception of light is due to the emission of some sort of particles by a luminous source, the speed of light must be finite." These "particles" were not atoms or even photons. They were simply emissions from the source of illumination. Using this argument, along with the notion of equivalence (Equation 2.2), let us consider the best-known energy equation of today:

C + O₂ + Σ = CO₂ + ΣO + ΔE(C, O, Σ)    (2.3)
Here, the rationale behind including ΣO is that all matter oxidizes, owing to the nature of matter. If one considers Table 2.3 as the basis for describing natural material, this becomes obvious. As before, ΔE represents energy, which is a function of (C, O, Σ), where Σ is the summation of all other matter present. The above equation conserves both the matter and the energy balance - a concept familiar to
human civilization for millennia. With this formulation, there would be no confusion between CO₂ coming from a toxic source and CO₂ coming from a natural source (a distinction sketched in the fragment following the list below). For that matter, white light from a toxic source and white light from the sun would not be the same either. See Figure 2.19: spectral analysis shows that optical toxicity can arise, but this is never depicted in the modern-day chemical or physical accounting system. This is also true of oil refining technologies (Fig. 2.20). Wikipedia has some coverage of this "first scientist" of our time. Following are some details:

• Main interests: Anatomy, Astronomy, Engineering, Mathematics, Mechanics, Medicine, Optics, Ophthalmology, Philosophy, Physics, Psychology, Science
• Notable ideas: Pioneer in optics, scientific method, experimental science, experimental physics, experimental psychology, visual perception, phenomenology, analytic geometry, non-Ptolemaic astronomy, celestial mechanics
• Works: Book of Optics, Doubts Concerning Ptolemy, On the Configuration of the World, The Model of the Motions, Treatise on Light, Treatise on Place
• Influenced: Khayyam, al-Khazini, Averroes, Roger Bacon, Witelo, Pecham, Farisi, Theodoric, Gersonides, Alfonso, von Peuerbach, Taqi al-Din, Risner, Clavius, Kepler, John Wallis, Saccheri
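As a minimal sketch of what the extra accounting in Equation 2.3 implies (a purely illustrative Python fragment; the class and field names are ours), a quantity of CO₂ can be tagged with its source and pathway, so that a mass-only balance can no longer collapse two different histories into one "identical" product:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TrackedMass:
        species: str      # e.g., "CO2"
        grams: float      # the tangible: the mass itself
        source: str       # an intangible: where the matter came from
        pathway: tuple    # an intangible: the history of transformations

    # Same species, same mass - but different sources and pathways:
    natural = TrackedMass("CO2", 44.0, "wood burning",
                          ("tree", "open-air combustion"))
    industrial = TrackedMass("CO2", 44.0, "refinery flare",
                             ("crude oil", "catalytic processing", "flaring"))

    print(natural.grams == industrial.grams)   # True: the science of tangibles
    print(natural == industrial)               # False: the pathway is retained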
Figure 2.19 Spectral analysis (over the visible range, roughly 400-700 nm) shows that artificial white light is toxic (Chhetri and Islam 2008).
Figure 2.20 For many centuries, olive oil was refined all over the Middle East. This technology, which produced bio-kerosene, was truly sustainable, yet today it is forgotten.
A Moon crater is named after him, as is an asteroid (59239 Alhazen, discovered Feb. 7, 1999). In addition to being a nature scientist and philosopher, Ibn al-Haitham was also an experimentalist (in today's terms) and even an engineer. He actually developed an extensive plan to build a dam on the river Nile but later realized it was not sustainable. He also invented numerous gadgets, including the pinhole camera. Working on many topics of science and philosophy was very common in those days in the Middle East. This scholarship was markedly different in two respects. First, the intention was to do long-term good for society. Second, when these scholars undertook a research task, they did not allow any dogma or false premise to interfere with their cognition process. They would freely access the knowledge of other civilizations (e.g., Greek, Indian, Chinese) and yet would not take it for granted. For instance, both Aristotle's and Ptolemy's works were extensively translated by Arab scholars, yet Ibn al-Haitham and others neither took them at face value nor rejected them outright. They carefully filtered the information and rejected notions that could not pass a reality check against contemporary knowledge. This would explain why such explosive growth in science and technology took place in so short a span of time. Here are some examples. Ibn Nafis first recorded observations on pulmonary blood circulation, a theory attributed to William Harvey 300 years later. Much later still came Poiseuille's blood flow model, which is essentially a linearized form of Ibn Nafis' model.
Abbas ibn Firnas made the first attempt at human flight in the 9th century using adjustable wings covered with feathers. A thousand years later, the Wright brothers would attempt to fly, except that their flying machine was neither adjustable nor an emulation of birds. Leonardo da Vinci, who was inspired by Muslim scientists, is credited with designing the first flying machine. Zheng He, the Chinese Muslim admiral, used refined technologies to construct fleets of massive non-metal vessels five centuries ago. To this day, this technological marvel remains unsurpassed. The Ottomans (the Muslim regime that ruled a vast region spanning Europe, Asia, and Africa) were known for their excellence in naval architecture and their powerful naval fleet, and they continued to be so recognized until the fall of the Ottoman Empire in the early 20th century. Avicenna remains the most important name in medical science, with one clear distinction: he never believed in mass-producing "nature-simulated" products, a practice that has actually proved that artificial products do not work. Similarly, when Muslim scientists invented shampoo, they used olive oil and wood ash, without any artificial or toxic addition to the product. The word "alkali" comes from Arabic, and the word conserves the meaning of wood ash; synthetically made alkali would not qualify as "alkali" in Arabic. The most visible contribution to engineering was in the areas of architecture and building design (Figs. 2.21 through 2.23). Their marks are visible in many sites, ranging from the architectural icon
Figure 2.21 A building in Iran that adopts natural air conditioning.
Figure 2.22 Arrows showing how natural airflow and heating/cooling occur.
Figure 2.23 Solar heating is used to cool the air, using water as a refrigerant.
of St. Paul's Cathedral in London, UK, and the horseshoe arches and gothic ribs of Al-Hamra in Granada, Spain, to the Taj Mahal in India. Many forget that buildings in those days needed neither Newton's mechanics nor Kelvin's thermodynamics, nor ISO 900x standards, to become time-defying marvels of engineering design. They had natural designs that eliminated dependence on artificial fluids or electricity. The structures themselves served multiple purposes, similar to what takes place in nature. These designs conformed to natural traits, as listed in Table 2.3. For example, consider the building in Iran shown in Figure 2.21. Figure 2.22 is a schematic of the airflow, showing how the design itself creates natural air conditioning.
2.5 Paradigm Shift in Scientific and Engineering Calculations

The first problem with the laws and theories of New Science has been their aphenomenal basis. This means that the first premise is not real. As mentioned earlier, coupling the fundamental units of time and space compounds this problem. In modern science, several mitigating factors were at work. From the Bernoullis at the end of the 1600s and into the mid-1700s, the method emerged of considering physical mass, space, and time as homogeneous at some abstract level and hence divisible into identical units. This trend had a positive starting point with a number of French mathematicians who were seeking ways to apply Newton's calculus to analyzing change within physical phenomena generally (and not just problems of motion). This became something of a back-filling operation, intended to maneuver around the troublesome problem of nailing down Leibniz's extremely embarrassing differentials, especially all those dx's in integration formulas. In England, by the last third of the 1700s, the philosophy of Utilitarianism (often described as "the greatest good for the greatest number") had spread widely among engineers and others involved in the new industrial professions of the budding Industrial Revolution. Underlying this thinking is the notion that the individual is, at some abstract level, the unitary representative of the whole. Of course, this means all individuals at that level are identical, i.e., they have lost any individuality relative to one another and are merely common units of the whole. Generalizations of this kind made it trivially easy to put a single identifying number on anything, including many things for which a single identifying number would be truly inappropriate, anti-nature, misleadingly incomplete, and false. For example, "momentum" is one of the physical quantities that is conserved throughout nature — not possible to create or destroy — and being able to draw a proper momentum balance is part of analyzing and discussing any natural phenomenon. But if we want to compare momentum, the "single identifying number" approach limits and reduces us to comparing the numerical magnitude of the speed component of the respective velocities. Some units, like that of force, are entirely concocted to render certain equations dimensionally consistent, like the constant of proportionality that allows us to retain Newton's Second Law, which
computes force as the product of mass and acceleration - essentially more back-filling. So the atom indeed turns out to be quite breakable, but breakable into what? Quarks, photons - all of them unitized. That is what quantized means, but this is then "qualified" (i.e., rendered as opaque as possible) with elaborate hedges about exact time or place being a function of statistical probability. The time function, being the dependent continuous function and having an aphenomenal basis (the essence of the unit), would falsify subsequent engineering calculations. Zatzman analyzed the merits (or demerits) of the currently used time standard (2008). The time unit known as "the second," the standard unit of time measurement in both the Anglo-American and SI systems, can be multiplied or divided infinitely, but that is the only real aspect of this unit. Recent research has inserted some strikingly modern resonances into much ancient wisdom that has long been patronized yet largely disregarded as insufficiently scientific for modern man. Everyone knows, for example, that the "second" is a real unit of time, whereas the phrase "blink of an eye" (an actual time unit that has been used in many cultures for expressing short-term events) is considered poetic but relatively fanciful. The latest research has established what happens, or more precisely what does not happen, in the blink of an eye. The visual cortex of the brain resets, and when the lens of the retina reopens to the outside world, the information it takes in is new and not a continuation of the information it took in before the eye-blink. Hence, visual perception, long assumed to be essentially continuous for the brain regardless of the blink, is in fact discontinuous. Therefore, the natural unit of time is the blink of an eye: the shortest unit of time in which the perceptual apparatus of the individual cognizes the continuity of the external world. Consider yet another natural unit of time, this time for a factory worker. If he is told that after he packs 100 boxes of a certain product he can go for a coffee break, this is a real unit, and it is immediately connected to direct productivity and hence economic growth. This manner of characterizing time with real units is both convenient and productive for an individual. Of course, this was known in many cultures for the longest time, even though the science behind this wisdom was not legitimized through standardization. What better way to measure time than the rise and setting of the sun, when a farmer knows he must finish sowing seeds during daylight? This method is fundamentally sustainable.
If it is accepted that standards ought to have a natural basis, then the notion that a standard must be strictly quantifiable and measurable, or capable of calibration in order to be "objective" and hence acceptable, is more a matter for Kabbalists, Triskaidekaphobes, or some other numerological cult than for engineers or serious students of science. The giveaway is the obsession with "objectivity," which is intimately bound up yet again with "disinterestedness." A natural basis must be applied where the phenomenon actually is, as it actually is. The reason is simply that a loss of information must occur in applying standards developed for some macro level to a micro/nano level. The determination of a ceiling or a floor for a phenomenon is the very essence of any process of defining a nature-based standard. A ceiling or floor is a region in space-time, but it is not a number. We can readily distinguish effects above a floor or below a ceiling from effects below the floor or above the ceiling. However, from a reference frame based on time t = "right now," there is no way we can know or appreciate what 60 Hz or 9 ppb ± 0.3% is going to mean over time and space within nature. Underlying the thinking that a standard can be reduced to a number is some notion of equilibrium-points, represented by a "real (sic) number," enjoying some sort of existence anywhere in nature. This assumption is unwarranted. The "ceiling/floor" approach that characterizes naturally based standards makes no such assumption. The "ceiling/floor" approach incorporates everything attached to and surrounding the phenomenon in its natural state. Standards based on nature must be as four-dimensional as nature itself, whereas a point on a real-number line is a one-dimensional standard. Scientifically speaking, this alone should guarantee that, when it comes to the real, i.e., natural, world, such a standard must be as useless as it is meaningless. The notion of using some exact number as a "standard" is put forward as being objective and not dependent on any subjective factor, but it is inherently aphenomenal. Assurances that the standard would undergo only "changes in light of new knowledge" (or higher-precision measurement technology) do nothing to reduce this aphenomenality. For example, we have all heard this concession from the religious Establishment that the moment we can detect effects of low-level radiation with better detection equipment, etc., then and only then will it be acceptable to rewrite the standards. Again the idea is that we just move the standard to the new equilibrium-point on the spectrum.
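The contrast being drawn here can be caricatured in a few lines of code (a minimal sketch; the function names and numbers are hypothetical, not taken from any actual regulation):

```python
# A minimal sketch of the contrast drawn above; names and numbers are
# hypothetical, not taken from any actual standard.

def meets_point_standard(observed, limit):
    """One-dimensional standard: the judgment collapses to one number."""
    return observed <= limit

def within_natural_band(observed, floor, ceiling):
    """Ceiling/floor standard: the judgment is a region, bounded by what
    the phenomenon does in its natural state."""
    return floor <= observed <= ceiling

print(meets_point_standard(8.9, 9.0))       # True: passes by a hair
print(within_natural_band(8.9, 0.2, 3.5))   # False: outside the natural band
```

Even this is only a caricature: a pair of numbers is still not the four-dimensional region the text calls for, but it at least makes explicit the difference between a point test and a region test.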
Let's say we knew at this time what the true background level is of naturally occurring radiation from uranium ore in the earth before it is extracted and processed. Let's say we were able to account definitively for the separate and distinct effects of background radiation in the atmosphere in the aftermath of all the various kinds of nuclear testing and bombings since 1945, plus all radioactive effects of nuclear power plants, uranium mining and processing, and any other anthropogenically induced source. Is it likely that the consequences of radiation not augmented by human intervention are not anywhere near as dangerous as these anthropogenically induced sources? There is another deeply rooted error that attaches to any and every aphenomenal pathway. As a result of being subordinated by our practices, nature becomes marginalized in our thinking to the point that we develop and propagate standards that are utterly alien to how anything in nature actually works. The very notion of elaborating a standard that will "keep" us safe itself requires asking the first most obvious and necessary question - are we in fact safe in the present environment in general, or are we menaced by it, and if so, from what direction(s)? Because it has become so "natural" for us to look at this matter from the standpoint of t = 0, the future beyond t + Δt does not arise even as a consideration. Hence, this first most obvious and necessary question will not even be posed. If we have set out a standard that was not based on nature to begin with, changes to that standard over time will not improve its usefulness or the processes it is supposed to regulate. Just ask the question, why are we beset in the first place by so many standards and so many amendments to them? Here's the rub - there is no natural standard that could justify any aphenomenal pathway. The presence and further proliferation of so many one-dimensional standards, then, must be acknowledged for what they actually are. They are a function of the very presence and further proliferation of so many aphenomenal pathways in the first place. Any given standard may seem to protect the relative positions into the foreseeable future of competing short-term interests of the present. Its defenders/regulators may be truly disinterested in theory or practice about maintaining and enforcing it. Regardless, the setting of any standard that is "objective" only in the sense that its key criteria accord with something that enjoys no actual existence in nature (and therefore possesses no characteristic non-zero time/existence
of its own) can never protect the interests of nature or humanity in the long term. Standards based on natural phenomena as they are, and not as eternal or constant under all circumstances, would provide an elegant solution and response. For too long, there has been this discourse about "standards" and the necessity to measure mass according to some bar kept in an evacuated bell jar somewhere in Paris, or to measure the passage of time with the highest precision according to some clock on a computer in Washington. Now they admit the movement of the cesium atoms for the clock was a bit off and that the bar in Sèvres has lost weight over the last couple of centuries. If, for instance, the unit of time, the second, were replaced by the blink of an eye, all calculations would have to change. Each person would have an independent conclusion based on his/her own characteristic time unit. This characteristic time unit would lead to characteristic solutions — the only ones that are natural. Any solution that is imposed by others is external and, hence, aphenomenal. This would result in one solution per person and would honor the fundamental trait of nature, which is uniqueness. There are no two items that are identical. When manufacturers claim "nature-identical flavors," they are simply making a false statement. The false is aphenomenal, or anti-nature. There must be a general line of solution that incorporates all individually characteristic forms of solution. This solution-set is infinite. Hence, even though we are talking about a general line or direction of solution, it is not going to produce a unique one-size-fits-all answer. The individual has to verify the general line of solution for him/herself. This is where the individually characteristic part must emerge. The notion of a unit comes from the need for an unbreakable point of departure. The word "atom" indeed means "unbreakable" in Greek. Atoms are not "unbreakable" to begin with. The "second," on the other hand, is a unit that is not only breakable; people take pride in the fact that it can be broken indefinitely. The unit of time thus violates the fundamental notion of a unit. Unless the unit problem is solved, standards cannot be selected, because aphenomenal units will lead to aphenomenal standards, much like Aristotle's notion of "beauty" being that of symmetry, while after several millennia we now give a Nobel Prize to someone who discovers that "symmetry" breaks down (Nobel Prize in Physics 2008).
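The "one solution per person" claim can be made concrete with a toy computation (a minimal sketch; every number below is invented): converting one and the same motion into each observer's own "blink" unit yields a different, individually characteristic result for each observer.

```python
# Hypothetical numbers: the same motion, expressed in each observer's own
# characteristic time unit, gives each observer a unique result.
distance_m = 100.0
elapsed_s = 20.0  # the same event, clocked in standardized seconds

blink_duration_s = {"observer_A": 0.30, "observer_B": 0.35, "observer_C": 0.42}
for name, blink in blink_duration_s.items():
    elapsed_blinks = elapsed_s / blink            # duration in personal units
    print(name, round(distance_m / elapsed_blinks, 3), "m per blink")
```

Nothing is lost between the three accounts; they are simply individually characteristic, which is exactly what the general line of solution described above would have to accommodate.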
2.6 Summary and Conclusions
Today, everyone is convinced that the current mode of energy management is not sustainable. At no other time in human history has civilization been so incapacitated by such a "helpful" thought. Our information age is a product of our past. Time is a continuous function. How did we come to such a defeatist mindset? If natural resources were sustainable for so many millennia, through so many civilizations, what did we do differently from our ancestors that we must conclude that natural resources are no longer sustainable? Where did we fail in our engineering design that, no matter what we produce or process, we end up with a technology that can only break its promises and ruin our hope? This chapter attempts to answer those questions. This chapter reviews fundamental theories of mass and energy. Theories dating back millennia are considered and recast in light of modern theories and laws. It is discovered that there is a fundamental departure in the modern era from sustainable engineering to unsustainable engineering. The science that has been used in modern times rests on theories with aphenomenal (unreal) first premises. Building on such a first premise makes it impossible to identify the source of spurious results obtained after many steps of cognition. By comparing with fundamental traits of nature, this chapter identifies exactly where modern theories have departed from being a science of knowledge to a science of commercialization at a cost. The most notable victim of this disinformation is the science of mass and energy - the core subject of this book. Historical time in social development and characteristic time in natural processes each exist, and operate, objectively and independently of our will or even our perception. They are certainly not perceived as such by humans living in the present. We cognize these phenomena, and their objectivity, only in the process of summing up matters on the basis of looking back from the vantage point of the present. We may idealize the arc of change, development, and/or motion of a process. This idealization can be as tractable or complex as we desire, with a view to being reproducible in experiments of various kinds. What weight is to be assigned, however, to any conclusions drawn from analysis of this idealization and how it works? Can those conclusions apply to what is actually happening in the objective, social, or natural process? The nub of this problem is that the
input-state and output-state of an actual process can be readily simulated in any such idealization or its experimental reproduction. The actual pathway, meaning how matters actually proceeded from input to output, is very likely another matter entirely, however. When it comes to things that are human-engineered — the fashioning of some process or product, for example — the pathway of the natural version may not seem or even be particularly important. But the pragmatic result of simulating an idealization cannot be confused with actual understanding of the science of how the natural process works. Essentially, that idealization takes the form of a first assumption. The most dangerous first assumptions seem the most innocent. Consider, for example, the notion of the speed of light taken as a constant in a vacuum. Where in nature is there a vacuum? Since no such location is known to exist anywhere in nature, if the speed of light is observed to vary, i.e., not be constant, does this mean any observed non-constant character can be ascribed to the absence of a vacuum, so therefore the original definition remains valid? Or, does it mean rather that we need better measuring instruments? This notion of the speed of light being constant in a vacuum has been retrofitted to make it possible to bridge various gaps in our knowledge of actually observed phenomena. It is an example of an idealization. By fitting a "logically necessary" pathway of steps between input and output, however, on the basis of applying conclusions generated by an idealization of some social or natural process to the social or natural process itself, it becomes trivially easy to create the appearance of a smooth and gradual development or evolution from one intermediate state to another intermediate state. In such linearizing and smoothing, some loss of information, perhaps even a great deal, necessarily occurs. Above all, however, what is being passed off as a scientific explanation of phenomena is in fact an aphenomenal construction on the actual social or natural process. This aphenomenal modeling of reality closes all loops, bridges, and gaps with fictions of various kinds. One necessary corrective to this hopeless course should rely instead on the closest possible observation of input-state (i.e., historical origin), pathway and output-state (i.e., the present condition, as distinct from a projection) of the actual social or natural process, starting with the present, meaning that current output-state. Whatever has been clearly established and whatever still remains incompletely understood are then summed up. A process of elimination
is launched. This is based on abstracting absence to advance a hypothesis that both might account for whatever gaps remain in the observer's knowledge and is possible to test. The observer plans out some intervention(s) that can establish in practice whether the hypothesized bridging of the gap in knowledge indeed accounts for what's been "missing." All processes explained up until now rather simplistically, only insofar as their change, development, and motion conformed to known laws of social or natural development, can be reviewed by these same methods, and their conventional explanations can be replaced with these essentially "delinearized" histories.
3 Fundamentals of Mass and Energy Balance

3.1 Introduction
Balances in nature are all dynamic. They must be, for if they were not, balance as we actually find it in nature could not be restored after being disturbed. Unfortunately, as an outgrowth of the linearizations that are developed and applied throughout existing aphenomenal models of nature, scientists and engineers become myopic. A widespread presumption has emerged that assumes balance is a point of steady-state equilibrium and that the retention of balance takes place mechanically: a given steady state is perturbed, chaos ensues, and a new steady-state equilibrium emerges. If one attempted to publish an article with detailed 500-ms by 500-ms recordings of data about a performer observed holding out his empty hat and a handkerchief, tossing the handkerchief into the hat, shaking them, and finally pulling out a rabbit, should the author be applauded as a careful scientific observer of the phenomena unfolding before his eyes or dismissed and mocked as a thorough dupe of the magician? Those who defend the observation of cycles of "steady state," perturbation, chaos, and "new steady state" as the best evidence we can collect regarding any natural balance are operating under an
unwarranted assumption - that some displacement(s) in space and time which correlate with some or any stage(s) of organic growth and development must register somehow, somewhere. This is a defect not only in the quantity of the data collected but especially in the quality of the data collected. However, certain qualities in the reasoning that is applied to summarize the available data tend to obscure this latter defect from view. Only if one already knows the actual internal signatures of the stages of organic development and growth, and finds some displacement(s) in space and time external to the organism that correspond, is it reasonable to assert the correlation. But to reason in the other direction - "I saw this external sign; the internal development must therefore be thus" - is only acceptable if one is certain from the outset that no other correlation is possible because all other possibilities in the field of available observations have been accounted for. When it comes to sorting out natural processes, the quality of the reasoning is no substitute for a clear and warranted understanding of what is actually going on within the process, as well as in its relations to phenomena that surround it and may interact with it. Inductive reasoning from the evidence of a cause to a conclusion about an effect is always safe, although limited in its generality, whereas deductive reasoning from observed effects backwards to their causes is seductively breathtaking in its generality but justifiable only within a fully defined field of observed causes and effects. The ongoing value of deductive reasoning lies in the questions that require further investigation, while the ongoing value of inductive reasoning ought to be its demonstration of how modest and limited our understanding presently is.
3.2 The Difference Between a Natural Process and an Engineered Process

For purposes of study and investigation, any process may be reduced to an indefinitely extended sequence of intervals of any length. All such processes may then be considered "equal." However, as a result of this first-order abstraction, a process in which the observer can definitely be separated from the subject matter under observation may be treated no differently than a process in which the observer cannot necessarily be separated, or considered truly separate, from the subject matter under observation. In other
words, to the extent that both involve physical or chemical laws in some manner, a natural process (a process unfolding in the natural or social environment) and an engineered process (something in a laboratory or under similarly controlled conditions) may be treated the same insofar as they involve the same physical-chemical laws, governing equations, etc. There are reasons for wishing to be able to treat a natural process and an engineered process the same. This can lend universal authority to scientific generalizations of results obtained from experiments, e.g., in a laboratory. However, this is where many complications emerge regarding how science is cognized among researchers on the one hand, and how those involved with implementing the findings of research cognize science on the other. Furthermore, in the literature of the history of science, and especially of experimental and research methods, there are sharp differences of approach between what happens with such cognition in the social sciences and what happens with such cognition in the engineering and so-called "hard" sciences. In either case, these differences and complications are shaped by the ambient intellectual culture. How the science is cognized is partly a function of discourse, terminology, and rhetoric, and partly a function of the attitudes of researchers in each of these fields toward the purposes and significance of their work for the larger society. Zatzman et al. (2007a) examined these issues at considerable length.
3.3 The Measurement Conundrum of the Phenomenon and its Observer

3.3.1 Background
The single most consequential activity of any scientific work is probably that which falls under the rubric of "measurement." There is a broad awareness among educated people in the general public of the endless and unresolved debates among social scientists over what it is that they are actually counting and measuring. This awareness is stimulated by a sense that, because the phenomena they examine cannot be exactly reproduced and experiments can only examine narrowly selected pieces of the social reality of interest, the social sciences are scientific in a very different way than the engineering and "hard" sciences. This outlook conditions and frames much of the discussion of measurement issues in the social science literature up to our present day.
During the 1920s and 1930s, when the social sciences in North America were being converted into professions based on programs of post-graduate-level academic formation, many dimensions of these problems were being discussed fully and frankly in the literature. In a memorable 1931 paper, Prof. Charles A. Ellwood, who led in professionalizing sociology, chaired the American Sociological Association in 1924, and produced (before his death in 1946) more than 150 articles and standard textbooks in the field that sold millions of copies, weighed in strongly on these matters: A simple illustration may help to show the essence of scientific reasoning or thinking. Suppose a boy goes out to hunt rabbits on a winter morning after a fresh fall of snow. He sees rabbit tracks in the fresh snow leading toward a brush pile. He examines the snow carefully on all sides of the brush pile and finds no rabbit tracks leading away from it. Therefore he concludes that the rabbit is still in the brush pile. Now, such a conclusion is a valid scientific conclusion if there is nothing in the boy's experience to contradict it, and it illustrates the nature of scientific reasoning. As a matter of fact, this is the way in which the great conclusions of all sciences have been reached - all the facts of experience are seen to point in one direction and to one conclusion. Thus the theory of organic evolution has been accepted by biological scientists because all the facts point in that direction - no facts are known which are clearly against this conclusion. Organic evolution is regarded as an established scientific fact, not because it has been demonstrated by observation or by methods of measurement, but rather because all known facts point to that conclusion. This simple illustration shows that what we call scientific method is nothing but an extension and refinement of common sense, and that it always involves reasoning and the interpretation of the facts of experience. It rests upon sound logic and a common-sense attitude toward human experience. But the hyper-scientists of our day deny this and say that science rests not upon reasoning (which cannot be trusted), but upon observation, methods of measurement, and the use of instruments of precision. Before the boy concluded that there was a rabbit in the brush pile, they say, he should have gotten an x-ray machine to see if the rabbit was really there, if his conclusion is to be scientific; or at least he should have scared bunny from his hiding place and photographed him; or perhaps he should have gotten some instrument of measurement, and measured carefully the
tracks in the snow and then compared the measurements with standard models of rabbit's feet and hare's feet to determine whether it was a rabbit, a hare, or some other animal hiding in the brush pile. Thus in effect does the hyper-scientist contrast the methods of science with those of common sense. Now, it cannot be denied that methods of measurement, the use of instruments of precision, and the exact observation of results of experiment are useful in rendering our knowledge more exact. It is, therefore, desirable that they be employed whenever and wherever they can be employed. But the question remains, in what fields of knowledge can these methods be successfully employed? No doubt the fields in which they are employed will be gradually extended, and all seekers after exact knowledge will welcome such an extension of methods of precision. However, our world is sadly in need of reliable knowledge in many fields, whether it is quantitatively exact or not, and it is obvious that in many fields quantitative exactness is not possible, probably never will be possible, and even if we had it, would probably not be of much more help to us than more inexact forms of knowledge. It is worthy of note that even in many of the so-called natural sciences quantitatively exact methods play a very subordinate role. Thus in biology such methods played an insignificant part in the discovery and formation of the theory of organic evolution. (16) A discussion of this kind is largely absent in current literature of the engineering and "hard" sciences and is symptomatic of the near-universal conditioning of how narrowly science is cognized in these fields. Many articles in this journal, for example, have repeatedly isolated the "chemicals are chemicals" fetish — the insistent denial of what is perfectly obvious to actual common sense, namely, that what happens to chemicals and their combinations in test tubes in a laboratory cannot be matched 1:1 with what happens to the same combinations and proportions of different elements in the human body or anywhere else in the natural environment. During the 20th century and continuing today in all scientific and engineering fields pursued in universities and industry throughout North America and Europe, the dominant paradigm has been that of pragmatism — the truth is whatever works. This outlook has tended to discount, or place at a lower level, purely "scientific" work, meaning experimental or analytical-mathematical work that
produces hypotheses and/or various theoretical explanations and generalizations to account for phenomena or test what the researcher thinks he or she cognizes about phenomena. The pressure has been for some time to, first, make something work and, second, to explain the science of why it works later, if ever. This begs the question, "Is whatever has been successfully engineered actually the truth, or are we all being taken for a ride?" The first problem in deconstructing this matter is the easy assumption that technology, i.e., engineering, is simply "applied science" — a notion that defends engineering and pragmatism against any theory and any authority for scientific knowledge as more fundamental than practical. Ronald Kline is a science historian whose works address various aspects of the relationship of formal science to technologies. In a 1995 article, he points out: A fruitful approach, pioneered by Edwin Layton in the 1970s, has been to investigate such "engineering sciences" as hydraulics, strength of materials, thermodynamics, and aeronautics. Although these fields depended in varying degrees on prior work in physics and chemistry, Layton argued that the groups that created such knowledge established relatively autonomous engineering disciplines modeled on the practices of the scientific community. (195) Kline does go on to note that "several historians have shown that it is often difficult to distinguish between science and technology in industrial research laboratories; others have described an influence flowing in the opposite direction - from technology to science - in such areas as instrumentation, thermodynamics, electromagnetism, and semiconductor theory." Furthermore, he points out that although a "large body of literature has discredited the simple applied-science interpretation of technology - at least among historians and sociologists of science and technology - little attention has been paid to the history of this view and why it (and similar beliefs) has [sic] been so pervasive in American culture. Few, in other words, have [examined]... how and why historical actors described the relationship between science and technology the way they did and to consider what this may tell us about the past." It is important to note, however, that all this still leaves the question of how the relationship of engineering rules of thumb (by which the findings of science are implemented
in some technological form or other) might most usefully apply to the source findings from "prior work in physics and chemistry." One of the legacies of the pragmatic approach is that any divergence between predictions about, and the reality of, actual outcomes of a process is usually treated as evidence of some shortcoming in the practice or procedure of intervention in the process. This usually leapfrogs any consideration of the possibility that the divergence might actually be a sign of inadequacies in the theoretical understanding and/or the data adduced in support of that understanding. While it may be simple in the case of an engineered process to isolate the presence of an observer and confine the process of improvement or correction to altering how an external intervention in the process is carried out, matters are somewhat different when it comes to improving a demonstrably flawed understanding of some natural process. The observer's reference frame could be part of the problem, not to mention the presence of apparently erratic, singular, or episodic epiphenomena that are possible signs of some unknown sub-process(es). This is one of the greatest sources of confusion often seen in handling so-called data scatter, presumed data 'error,' or anomalies generated from the natural version of a phenomenon that has been studied and rendered theoretically according to outcomes observed in controlled laboratory conditions. Professor Herbert Dingle (1950), more than half a century ago, nicely encapsulated some of what we have uncovered here: Surprising as it may seem, physicists thoroughly conversant with the ideas of relativity, and well able to perform the necessary operations and calculations which the theory of relativity demands, no sooner begin to write of the theory of measurement than they automatically relapse into the philosophical outlook of the nineteenth century and produce a system of thought wholly at variance with their practice. (6) He goes on to build the argument thus: It is generally supposed that a measurement is the determination of the magnitude of some inherent property of a body. In order to discover this magnitude we first choose a sample of the property and call it a 'unit'. This choice is more or less arbitrary and is usually determined chiefly by considerations of convenience. The process of measurement then consists of finding out how
many times the unit is contained in the object of measurement. I have, of course, omitted many details and provisos, for I am not criticising the thoroughness with which the matter has been treated but the fundamental ideas in terms of which the whole process is conceived and expressed. That being understood, the brief statement I have given will be accepted, I think, as a faithful account of the way in which the subject of measurement is almost invariably approached by those who seek to understand its basic principles. Now it is obvious that this is in no sense an 'operational' approach. 'Bodies' are assumed, having 'properties' which have 'magnitudes'. All that 'exists', so to speak, before we begin to measure. Our measurement in each case is simply a determination of the magnitude in terms of our unit, and there is in principle no limit to the number of different ways in which we might make the determination. Each of them - each 'method of measurement', as we call it - may be completely different from any other; as operations they may have no resemblance to one another; nevertheless they all determine the magnitude of the same property and, if correctly performed, must give the same result by necessity because they are all measurements of the same independent thing. (6-7)
Then Prof. Dingle gives some simple but arresting examples: Suppose we make a measurement - say, that which is usually described as the measurement of the length of a rod, AB. We obtain a certain result - say, 3. This means, according to the traditional view, that the length of the rod is three times the length of the standard unit rod with which it is compared. According to the operational view, it means that the result of performing a particular operation on the rod is 3. Now suppose we repeat the measurement the next day, and obtain the result, 4. On the operational view, what we have learnt is unambiguous. The length of the rod has changed, because 'the length of the rod' is the name we give to the result of performing that particular operation, and this result has changed from 3 to 4. On the traditional view, however, we are in a dilemma, because we do not know which has changed, the rod measured or the standard unit; a change in the length of either would give the observed result. Of course, in practice there would be no dispute; the measurements of several other rods with the same standard, before and after the supposed change, would be compared, and if they all showed a proportionate change it would be decided that the standard
had changed, whereas if the other rods gave the same values on both occasions, the change would be ascribed to the rod AB; if neither of these results was obtained, then both AB and the standard would be held to have changed. If an objector pointed out that this only made the adopted explanation highly probable but not certain, he would be thought a quibbler, and the practical scientist would (until recently, quite properly) take no notice of him. (8-9) The "operational view" is the standpoint according to which the reference frame of the observer is a matter of indifference. The conventional view, by way of contrast, assumes that the observer's standard(s) of measurement corresponds to actual properties of the object of observation. However, when the physical frame of reference in which observations are made is transformed, the assumption about the reference frame of the observer that is built into the "operational view" becomes dysfunctional. Such a change also transforms the reference frame of the observer, making it impossible to dismiss or ignore: But with the wider scope of modern science he can no longer be ignored. Suppose, instead of the length of the rod AB, we take the distance of an extra-galactic nebula, N. Then we do, in effect, find that of two successive measurements, the second is the larger. This means, on the traditional view, that the ratio of the distance of the nebula to the length of a terrestrial standard rod is increasing. But is the nebula getting farther away or is the terrestrial rod shrinking? Our earlier test now fails us. In the first place, we cannot decide which terrestrial rod we are talking about, because precise measurements show that our various standards are changing with respect to one another faster than any one of them is changing with respect to the distance of the nebula, so that the nebula may be receding with respect to one and approaching with respect to another. But ignore that: let us suppose that on some grounds or other we have made a particular choice of a terrestrial standard with respect to which the nebula is getting more distant. Then how do we know whether it is 'really' getting more distant or the standard is 'really' shrinking? If we make the test by measuring the distances of other nebulae we must ascribe the change to the rod, whereas if we make it by measuring other 'rigid' terrestrial objects we shall get no consistent result at all. We ought, therefore, to say that the
probabilities favour the shrinking of the rod. Actually we do not; we say the universe is expanding. But essentially the position is completely ambiguous. As long as the transformation of the frame of reference of the phenomenon is taken properly into account, the operational view will overcome ambiguity. However, the comparison of what was altered by means of the preceding transformation serves to establish that there is no such thing as an absolute measure of anything in physical reality: Let us look at another aspect of the matter. On the earth we use various methods of finding the distance from a point C to a point D: consider, for simplicity, only two of them - the so-called 'direct' method of laying measuring rods end to end to cover the distance, and the indirect method of 'triangulation' by which we measure only a conveniently short distance directly and find the larger one by then measuring angles and making calculations. On the earth these two methods give identical results, after unavoidable 'experimental errors' have been allowed for, and of course we explain this, as I have said, by regarding these two processes as alternative methods of measuring the same thing. On the operational view there are two different operations yielding distinct quantities, the 'distance' and the 'remoteness', let us say, of D from C, and our result tells us that, to a high degree of approximation, the distance is equal to the remoteness. Now let us extend this to the distant parts of the universe. Then we find in effect that the 'direct' method and the triangulation method no longer give equal results. (Of course they cannot be applied in their simple forms, but processes which, according to the traditional view, are equivalent to them, show that this is what we must suppose.) On the view, then, that there is an actual distance which our operations are meant to discover - which, if either, gives the 'right' distance, direct measurement or triangulation? There is no possible way of knowing. Those who hold to the direct method must say that triangulation goes wrong because it employs Euclidean geometry whereas space must be non-Euclidean; the correct geometry would bring the triangulation method into line with the other, and the geometry which is correct is then deduced so as to achieve just this result. Those who hold to triangulation, on the other hand, must say that space is pervaded by a field of force which distorts the
measuring rods, and again the strength of this field of force is deduced so as to bring the two measurements into agreement. But both these statements are arbitrary. There is no independent test of the character of space, so that if there is a true distance of the nebula we cannot know what it is. On the operational view there is no ambiguity at all; we have simply discovered that distance and remoteness are not exactly equal, but only approximately so, and then we proceed to express the relation between them. The nature-science approach takes what Prof. Dingle has outlined to the final stage of considering nature four-dimensional. If time is taken as a characteristic measure of any natural phenomenon and not just of the vastnesses of distance in cosmic space, then the observer's frame of reference can be ignored if and only if it is identical to that of the phenomenon of interest. Otherwise, and most of the time, it must be taken into account. That means, however, that the assumption that time varies independently of the phenomena unfolding within it may have to be relaxed. Instead, any and all possible (as well as actual) non-linear dependencies need to be identified and taken explicitly into account. At the time Prof. Dingle's paper appeared in 1950, such a conclusion was neither appealing nor practicable. The capabilities of modern computing methods since then, however, have reduced previously insuperable computational tasks to almost routine procedures. In mentioning the "operational view," Prof. Dingle has produced a hint of something entirely unexpected regarding the problem-space and solution-space of reality according to Einstein's relativistic principles. Those who have been comfortably inhabiting a three-dimensional sense of reality have nothing to worry about. For them, time is just an independent variable. In four-dimensional reality, however, it is entirely possible that the formulation of a problem may never completely circumscribe how we could proceed to operationalize its solution. In other words, we have to anticipate multiple possible, valid solutions to one and the same problem formulation. It is not a 1:1 relationship, so linear methods that generate a unique solution will be inappropriate. The other possibilities are one problem formulation with multiple solutions (1:many) or multiple formulations of the problem having one or more solutions in common (many:1, or many:many). In modelers' language, we will very likely need to
solve non-linear equation descriptions of the relevant phenomena by non-linear methods, and we may still never know if or when we have all possible solutions. (This matter of solving non-linear equations with non-linear methods is developed further at §§4-5.) At this moment, it is important to establish what operationalizing the solution of a problem — especially multiple solutions — could mean or look like. Then, contrast this to how the scenario of one problem with multiple solutions is atomized and converted into a multiplicity of problems based on different sub-portions of data and data-slices from which any overall coherence has become lost.
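The 1:many relationship can be made concrete with a minimal sketch (a deliberately simple cubic, not an equation from this book): one and the same non-linear problem formulation, attacked by the same non-linear method, legitimately yields different solutions depending on the starting point, and the method itself never announces when the whole solution set has been found.

```python
# One non-linear problem formulation, f(x) = x**3 - x = 0, has three
# valid solutions. Newton's method lands on different roots from
# different starting points, and nothing in the iteration signals
# when the full solution set {-1, 0, 1} has been exhausted.

def f(x):
    return x ** 3 - x

def df(x):
    return 3.0 * x ** 2 - 1.0

def newton(x, steps=100, tol=1e-12):
    """Plain Newton iteration; returns a root, or None on failure."""
    for _ in range(steps):
        d = df(x)
        if d == 0.0:
            return None
        step = f(x) / d
        x -= step
        if abs(step) < tol:
            return x
    return None

roots = set()
for guess in (-2.0, -0.6, 0.1, 0.6, 2.0):
    root = newton(guess)
    if root is not None:
        roots.add(round(root, 9))
print(sorted(roots))  # three distinct answers to one and the same problem
```

Multiplying the starting points raises confidence, but never certainty, that the solution set is exhausted, which is exactly the situation described above for four-dimensional problem formulations.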
3.3.2 Galileo's Experimental Program: An Early Example of the Nature-Science Approach

Although science and knowledge are not possible without data measurements, neither science nor knowledge is reducible to data, techniques of measurement, or methods of data analysis. Before operationalizing any meaningful solution(s) to a research problem, there is a crucial prior step - the stage of "experimental design," of formulating the operational steps or the discrete sub-portions of the problematic, in which our knowledge is incomplete. For this, there is no magic formula or guaranteed road to success. This is where the true art of the research scientist comes into play. In adopting the standpoint and approach of nature-science, in which actual phenomena in nature or society provide the starting-point, the different roles of the investigator as participant on the one hand and as observer on the other can be clearly delineated as part of the research program. Galileo's experiments with freely falling bodies are well known and widely discussed in the literature of theoretical physics, the history of science and technology, the history of the conflicts between science and religion, and a number of other areas. There is a widespread consensus about his principled stance in defense of the findings in his own research, coupled with some ongoing disputes as to how consistently he could defend principles against the pressures of the Inquisition. In a departure from that path in the literature, rather than adding or passing judgment on Galileo's conduct vis-à-vis the Church authorities, this paper addresses the backbone of Galileo's
stand, namely, his conviction that his method was a more reliable guide to finding the truth of the nature of freely falling bodies than any guesswork by Aristotle, a Greek who had been dead for two millennia. The standpoint adopted here is that Galileo's research program represents an early model of the nature-science approach — the first by a European, in any event. Its "correction" by those who came after him, on the other hand, represents a corruption of his method by the mandates of "New Science," mandates whereby subsequent investigators would become preoccupied with tangible evidences to the point of excluding other considerations. The following information about Galileo's approach to experimental methods and the controversy that continued around it, taken from the summary from the Wikipedia entry "Two New Sciences," summarizes the most significant developments on which the claim of the first European proponent of the nature-science approach is based: The Discourses and Mathematical Demonstrations Relating to Two New Sciences (Discorsi e dimostrazioni matematiche, intorno a due nuove scienze, 1638) was Galileo's final book and a sort of scientific testament covering much of his work in physics over the preceding thirty years. Unlike the Dialogue Concerning the Two Chief World Systems (1632), which led to Galileo's condemnation by the Inquisition following a heresy trial, it could not be published with a license from the Inquisition. After the failure of attempts to publish the work in France, Germany, or Poland, it was picked up by Lowys Elsevier in Leiden, The Netherlands, where the writ of the Inquisition was of little account. The same three men as in the Dialogue carry on the discussion, but they have changed. Simplicio, in particular, is no longer the stubborn and rather dense Aristotelian; to some extent he represents the thinking of Galileo's early years, as Sagredo represents his middle period. Salviati remains the spokesman for Galileo. Galileo was the first to formulate the equation for the displacement s of a falling object, which starts from rest, under the influence of gravity for a time t: s = ½gt²
He (Salviati speaks here) used a wood molding, "12 cubits long, half a cubit wide and three finger-breadths thick" as a
ramp with a straight, smooth, polished groove to study rolling balls ("a hard, smooth and very round bronze ball"). He lined the groove with "parchment, also smooth and polished as possible". He inclined the ramp at various angles, effectively slowing down the acceleration enough so that he could measure the elapsed time. He would let the ball roll a known distance down the ramp, and used a water clock to measure the time taken to move the known distance. This clock was "a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results." (Website 2a) It is critical to add that, instead of clocking standardized "seconds" or minutes, this method of time measurement calibrates one natural motion by means of another natural duration. The water clock mechanism described above was engineered to provide laminar flow of the water during the experiments, thus providing a constant flow of water for the durations of the experiments. In particular, Galileo ensured that the vat of water was large enough to provide a uniform jet of water. Galileo's experimental setup to measure the literal flow of time, in order to describe the motion of a ball, was palpable enough and persuasive enough to found the sciences of mechanics and kinematics. (ibid.) Although Galileo's procedure founded "time" in physics, in particular, on the basis of uniformity of flow in a given interval, this would later be generalized as the notion of a linear flow of time. Einstein would later overthrow this notion with regard to the vastnesses of space in the universe, and this is what the nature-science approach proposes to correct in all investigations of processes unfolding in the natural environment of the earth.
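The content of the time-squared law can be checked with the same unit-free logic (a minimal sketch; the numbers are illustrative, not Galileo's): if s = ½gt², the distances covered in successive equal time intervals must stand in the ratios 1 : 3 : 5 : 7 ..., whatever natural duration is chosen as the unit.

```python
# If displacement from rest obeys s = (1/2) * g * t**2, the distances
# covered in successive equal time intervals follow the odd-number rule
# 1 : 3 : 5 : 7 ..., a result Galileo could verify by weighing water,
# with no standardized "second" anywhere in the procedure.

g = 9.81  # m/s^2; any constant acceleration yields the same ratios

def displacement(t):
    return 0.5 * g * t ** 2

unit = 0.7  # one arbitrary natural time unit; its size cancels out below
intervals = [displacement((k + 1) * unit) - displacement(k * unit)
             for k in range(5)]
print([round(d / intervals[0], 6) for d in intervals])
# [1.0, 3.0, 5.0, 7.0, 9.0]
```

The choice of unit cancels in the ratios, which is the point of the passage above: calibrating one natural motion against another natural duration suffices.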
The law of falling bodies was discovered in 1599. But in the 20th century some authorities challenged the reality of Galileo's experiments, in particular the distinguished French historian of science Alexandre Koyré. The experiments reported in Two New Sciences to determine the law of acceleration of falling bodies, for instance, required accurate measurements of time, which appeared to be impossible with the technology of 1600. According to Koyré, the law was arrived at deductively, and the experiments were merely illustrative thought experiments. Later research, however, has validated the experiments. The experiments on falling bodies (actually rolling balls) were replicated using the methods described by Galileo, and the precision of the results was consistent with Galileo's report. Later research into Galileo's unpublished working papers from as early as 1604 clearly showed the reality of the experiments and even indicated the particular results that led to the time-squared law. (ibid.) Of interest here is the substance of Koyré's challenge - that the time it would take objects to fall to the ground from the top of the Tower of Pisa could never have been measured precisely enough in Galileo's day to justify his conclusion. Of course, subsequent experimental verification of Galileo's conclusions settles the specific question, but Koyré's objection is important here for another reason. What if, instead of following Galileo's carefully framed test, there was a series of increasingly precise measurements of exactly how long it took various masses in free fall to reach the ground from the same height? The greater the precision, the more these incredibly small differences would be magnified. One could hypothesize that air resistance accounted for the very small differences, but how could that assertion then be positively demonstrated? If modern statistical methods had been strictly applied to analyzing the data generated by such research, magnificent correlations might be demonstrated. None of these correlations, however, would point conclusively to the uniformity of acceleration of the speed at which these freely falling objects descend over any other explanation. As long as the focus remained on increasing the precision of measurement, the necessity to drop Aristotle's explanation entirely (that objects fall freely at speeds proportional to their mass) would never be established unambiguously.
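A hypothetical sketch of such a falsified research program makes the trap visible (names and numbers are invented for illustration): fall time is, to first order, independent of the mass or density of the object, yet polynomials of ever-higher degree fitted to noisy timings will report ever "better" correlations without ever testing the false first premise.

```python
# Hypothetical illustration: measured fall times that in truth do not
# depend on density at all (only noise from drag, release, and timing),
# fitted with polynomials of increasing degree. The fit statistic climbs
# with every added parameter, yet the premise behind the model, namely
# that fall time is a function of density, is never examined.
import numpy as np

rng = np.random.default_rng(1)
density = np.linspace(1.0, 20.0, 40)          # arbitrary units
fall_time = 1.43 + rng.normal(0.0, 0.01, 40)  # ~constant (seconds) plus noise

for degree in (1, 2, 3, 4, 5):
    coeffs = np.polyfit(density, fall_time, degree)
    resid = fall_time - np.polyval(coeffs, density)
    r2 = 1.0 - resid.var() / fall_time.var()
    print(degree, round(r2, 4))  # R^2 creeps upward with each new term
```

With enough terms the correlation can be made as "magnificent" as desired; what no added term can do is distinguish the aphenomenal premise from Galileo's correct one.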
Figure 3.1 Δt of falling object versus density, with linear, piece-wise linear, and non-linear fits. If the model were true, then the theory would be verified. So, what if the model were false from the outset, betrayed by retention of a first assumption that was not more carefully scrutinized? With more observations and data, this modeling exercise could carry on indefinitely, e.g., a 5-parameter univariate nonlinear "function" like y = ax + bx² + cx³ + px⁴ + qx⁵ that continues towards ever higher degrees of "precision."
Figure 3.1 gives one possible representation of how the data resulting from such a falsified experimental approach would reinforce this conclusion. Galileo's act of publishing the Discorsi in 1638 as the main summary of the most important part of his life's work was not an ordinary act of defiance. It was his affirmation before the entire world that he had stuck to his original research program and never took the easy way out. The nature-science approach settles the question of whether one is on the right path to begin with, and that is Galileo's primary accomplishment. Today the time has arrived to take matters to the next stage. In order to capture and distinguish real causes and real effects as a norm in all fields of scientific and engineering research, it is necessary to apply what Albert Einstein elaborated from examining the extremities of space, namely that time is a fourth dimension, to the observation of all phenomena, including those that are natural/social and those that are more deliberately engineered. Proponents of the "New Science" of tangibles have long insisted that Galileo is truly their founding father. In fact, their premise has been that the collection and sifting of the immediate evidence of nature and its processes is the sum total of reliable scientific knowledge.
Without conscious investigation of nature and critical questioning of conclusions that no longer fit the available evidence, how does science advance? This contribution is acknowledged by recognizing Galileo as the first European practitioner of nature-science. Although science and knowledge are bound to give rise to data and measurement, the reverse is not necessarily true. In the scramble to produce models that generate information about phenomena with increasing specificity and precision, it is easy to lose sight of this elementary insight and identify advances in data gathering and management with advances in scientific understanding. The coherence of research efforts undertaken in all fields has come under threat from this quarter, conventional research methods and approaches have come into question, and novel alternatives excite growing interest. As the review of the "engineering approach" serves to reconfirm, what decides the usefulness of any mathematical modeling tool is not the precision of the tool itself, but rather the observer's standpoint toward the real process unfolding in nature. In fact, this ultimately limits or determines its actual usefulness. As Galileo's example clearly shows, regardless of the known value of improved precision of measurement, greater precision cannot overcome problems that arise from having embarked on the wrong path to begin with. The same goes for how precisely, i.e., narrowly, one specifies the reference frame of an observer or what is being observed. Conventional methods of modeling have long since passed the point of specifying so narrowly that the general context of the reality surrounding the observer or the phenomena of interest is excluded. The apparent short-term gain in specificity may end up being vastly overcompensated by such things as the impossibility of attaching any meaningful physical interpretation to elaborately computed mathematical results and, with that, the subsequent realization that a research effort may have begun by looking in the wrong direction in the first place.
3.4 Implications of Einstein's Theory of Relativity on Newtonian Mechanics

The underlying problems that philosophy of science addressed before World War II were linked to deciding what weight to give Einstein's disturbance of the Newtonian worldview of "mechanism." There was considerable concern over the principle of relativity and how
far some of its new concepts might be taken, e.g., where the idea of time as a fourth dimension might be taken next. Alongside this came pressing concerns after World War II to eliminate the powerful influence of the advances in science coming out of the Soviet Union. These concerns were based on a paradigm shift that was universally rejected in Western countries but nevertheless produced powerful results in many engineering fields throughout the Soviet system that could not be ignored. The notion on which all scientific research in the Soviet Union came to be based was that the core of scientific theory-building depended on sorting out the necessary and sufficient conditions that account for observed phenomena. In the West, this was allowed only for mathematics. Introducing this notion into investigations of, and practical engineering interventions in, natural processes was dismissed as "excessive determinism." Instead, an entire discussion of "simplicity" was developed as a diversionary discourse. This discourse had an aim. It was intended first and foremost to avoid dealing with the entire matter of necessary and sufficient conditions. This approach reduced the process of selecting the more correct, or most correct, theoretical explanation accounting for one's data to the investigator's personal psychology of preference for "the simple" over "the complex." This was put forward in opposition to the notion that a scientific investigator might want to account for what is in line with, or counter to, his, her, or others' observations of the same phenomenon (Ackermann 1961; Bunge 1962; Chalmers 1973b; Feuer 1957, 1959; Goodman 1961; Quine 1937; Rudner 1961; Schlesinger 1959, 1961). As presented in Chapter 1 in the context of historical development, the notion that nature is dynamic has been known for at least two and a half millennia. However, Einstein's revelation of the existence of the fourth dimension was a shock because all scientific and engineering models used in the post-Renaissance world were steady-state models. The analysis presented in Chapter 1 shows that the acceptance of these models, without consideration of their existence in the real world, led to subsequent models that remain aphenomenal. In this, it is important to note that an aphenomenal hypothesis will entirely obscure the actual pathway of some natural phenomena that anyone with a correct starting point would set out to study. Unless this is recognized, resorting to curve-fitting and/or other retrofitting of data to highly linearized preconceptions and assumptions concerning the elementary physics of the phenomenon in its natural
environment will not validate the aphenomenal model. Zatzman et al. (2008a, 2008b) re-asserted this stance in a recent series of articles on Newtonian mechanics. They highlighted the following:

1. All natural phenomena at all stages and phases are four-dimensional.
2. All models based on conceiving the time factor as "the independent variable" rather than as a dimension in its own right are aphenomenal and bound to fail.
3. Any engineering design based on these aphenomenal bases can only simulate nature and never emulate it.
4. Our present-day "technological disaster" is essentially a litany of the inevitable failure that aphenomenally theorized engineering, beyond the shortest of short terms, will always produce.

With the revolutionary paper of Albert Einstein, the understanding of light was placed on a fully scientific basis for the first time as a form of matter that radiates as an energy wave (Website 2b). To establish this it was necessary to breach the wall in human thought that was created as a result of following the misconceived starting-point that "light" was the opposite of "dark." How had an everyday notion become such an obstacle to scientific understanding? Investigation unexpectedly disclosed the culprit, lurking in what was universally acknowledged, for the three hundred and twenty-seven years preceding Einstein's paper, to represent one of the most revolutionary scientific advances of all time, namely, the affirmation by Isaac Newton that "motion" was the opposite of "rest" (Website 3). Nowhere does Newton's mechanism explicitly deny that motion is the very mode of existence of all matter. However, his system crucially fails to affirm this as its first principle. Instead, his First Law of Motion affirms that a body in motion remains in motion (disregarding drag effects due to friction) and an object at rest remains at rest unless acted upon by an external force. The corollary flowing immediately from this law is not that motion is the mode of existence of all matter, but only that motion is the opposite of rest. In Newton's day, no one doubted that the earth and everything in the heavens had been created according to a single all-encompassing clock. Among Christians (mainly Catholics and Protestants during the Reformation), all that was disputed at this time was whether the clock was set in motion by a divine creator and then carried on
heedless of human whims, or whether humans, by their individual moral choices, could influence certain operations of this clock. What are the implications, however, of affirming motion as the mode of matter's very existence? The most important implication is that any notions regarding the universe as a mechanism operating according to a single all-encompassing clock lose all coherence. Newton's great antagonist, Bishop George Berkeley, excoriated Newton's failure to close the door to such heretically anti-Christian views (Stock 1776). In the same moment that the unquestioning belief in a single all-encompassing clock is suspended or displaced, it becomes critical to affirm the existence of, and a role for, the observer's frame of reference. Newton, his contemporaries, and other men and women of science before Einstein's theory of relativity were certain that Newton's Laws of Motion defined the mechanism governing force of any kind, motion of any type, as well as any of their possible interactions. What we now understand is that, in fact, Newton's laws of motion actually defined only the possibilities for a given system of forces, one that would moreover appear "conservative" only because any observer of such a system was assumed to stand outside it. To seventeenth-century European contemporaries of Isaac Newton, this matter of frame of reference was of no moment whatsoever. For them, the flora, humans, and non-human fauna of the known world had never been anywhere other than where they currently were. Nor was it generally known or understood that people lived in human social collectives other than those already known since "ancient times," an era beginning in western Asia some finite but unknown number of years before the birth of Jesus Christ. There was no reason to wrestle with the prospect of any other frame of reference either in historical time or in spaces elsewhere in the universe. Only by means of subsequent research, of a type and on a scale that could not have been undertaken in Newton's day, did it become possible to establish that in any space-time coordinates anywhere in the universe, from its outermost cosmic reaches to the innermost sub-atomic locale, mass, energy, and momentum would be and must be conserved regardless of the observer's frame of reference. In a universe defined by a single clock and common reference frame, three principal physical states of matter — vapor, solid, and liquid — could be readily distinguished. (By introducing externally applied energy, bounded by powerful electro-magnetic force fields, and still without having to re-adjust any other prevailing assumptions about a single clock and common reference-frame, a fourth
highly transient plasma state could be further distinguished.) Overall, motion could and would still be distinguished as the opposite of rest. What, however, can be said to connect matter in a vapor, solid, or liquid state to what happens to a state of matter at either sub-atomic or cosmic spatial scales? These are the regions in which the conservation of matter, energy, and momentum must still be accounted for. However, in these regions, matter cannot possibly be defined as being at rest without introducing more intractable paradoxes and contradictions. It is only when motion is recognized as the mode of existence of all matter in any state that these paradoxes become removable.
3.5 Newton's First Assumption
Broadly speaking, it is widely accepted that Newton's system, based on his three laws of motion accounting for the proximate physical reality in which humans live on this Earth coupled with the elaboration of the principle of universal gravitation to account for motion in the heavens of space beyond this Earth, makes no special axiomatic assumptions about physical reality outside what any human being can observe and verify. For example, Newton considers velocity, v, as the rate at which a mass displaces its position in space, s, relative to the time duration, t, of the motion of the said mass. That is:

$$v = \frac{\partial s}{\partial t} \qquad (3.1)$$
This is no longer a formula for the average velocity, measured by dividing the net displacement in the same direction as the motion impelling the mass by the total amount of time that the mass was in motion on that path. This formula posits something quite new, actually enabling us to determine the instantaneous velocity at any point along the mass's path while it is still in motion. The velocity that can be determined by the formula given in Equation 3.1 above is highly peculiar. It presupposes two things. First, it presupposes that the displacement of an object can be derived relative to the duration of its motion in space. Newton appears to cover that base already by defining this situation as one of what he calls "uniform motion." Secondly, what exactly is the
time duration of the sort of motion Newton is setting out to explain and account for? It is the period in which the object's state of rest is disturbed, or some portion thereof. This means the uniformity of the motion is not the central or key feature. Rather, the key is the assumption in the first place that motion is the opposite of rest. In his First Law, Newton posits motion as the disturbance of a state of rest. The definition of velocity as a rate of change in spatial displacement relative to some time duration means that the end of any given motion is either the resumption of a new state of rest or the starting-point of another motion that continues the disturbance of the initial state of rest. Furthermore, only to an observer external to the mass under observation can motion appear to be the disturbance of a state of rest and a state of rest appear to be the absence or termination of motion. Meanwhile, within nature, is anything ever at rest? The struggle to answer this question exposes the conundrum implicit in the Newtonian system: everything "works" and all systems of forces are "conservative" if and only if the observer stands outside the reference frame in which a phenomenon is observed. In Newton's mechanics, motion is associated not with matter-as-such, but only with force externally applied. Inertia, on the other hand, is definitely ascribed to mass. Friction is considered only as a force equal and opposite to that which has impelled some mass into motion. Friction in fact exists at the molecular level as well as at all other scales, and it is not a force externally applied. It is a property of matter itself. It follows that motion must be associated fundamentally not with force(s) applied to matter, but rather with matter itself. Although Newton nowhere denies this possibility, his First Law clearly suggests that going into motion and ceasing to be in motion are equally functions of some application of force external to the matter in motion; motion is important relative to some rest or equilibrium condition. Examination of developments in European science and what prominent historians of this era of New Science have had to say about Newton's mathematical treatment of physical problems compels the conclusion that the failure to ascribe motion to matter in general is implicit in, and built into, Newton's very approach to these problems (Cohen 1995; Grabiner 2004). For example, Grabiner (2004) explains Newton's approach thus: ...Newton first separated problems into their mathematical and physical aspects. A simplified or idealized set of physical
assumptions was then treated entirely as a mathematical system. Then the consequences of these idealized assumptions were deduced by applying sophisticated mathematical techniques. But since the mathematical system was chosen to duplicate the idealized physical system, all the propositions deduced in the mathematical system could now be compared with the data of experiment and observation. Perhaps the mathematical system was too simple, or perhaps it was too general and a choice had to be made. Anyway, the system was tested against experience. And then — this is crucial — the test against experience often required modifying the original system. Further mathematical deductions and comparisons with nature would then ensue... What makes this approach non-trivial is the sophistication of the mathematics and the repeated improvement of the process. It is sophisticated mathematics, not only a series of experiments or observations, that links a mathematically describable law to a set of causal conditions. (842) What is this initial "simplified or idealized set of physical assumptions" but the isolation from its surrounding environment of the phenomenon of interest? Immediately — before the assumptions are even tested against any mathematical approximation — this must narrow the investigative focus in a way that is bound to impose some loss of connected information of unknown significance. No amount of the "sophistication of the mathematics" can overcome such an insufficiency. On the contrary, the very "sophistication of the mathematics" can be so exciting as to blind the investigator's awareness of any other possible connections. No doubt this produces some answer, and no doubt "the sophistication of the mathematics" renders the approach "non-trivial" as well. However, nowhere in this is there any guarantee that the answer will be either physically meaningful or correct. In a Newtonian physical system, however, the logic and standard of proof is the following: if a phenomenon can be cognized by everyone, e.g., the motion of an object or mass, and if some mathematical demonstration is developed that confirms the hypothesis of a law purporting to account for the said phenomenon, then the law is considered to have been verified. Is this about science in the sense of establishing knowledge of the truth by exposing and eliminating error, or is it about something else? The key to solving this problem is to answer the question, is the scientific authority of Newton's approach "knowledge-based," or is it based on something else? In discussing the career
of the 18th-century Scottish mathematician Colin Maclaurin, who discovered the Maclaurin series, a special case of the more general Taylor series used extensively throughout all fields of engineering and applied science, Grabiner hints at an answer when she writes, "Maclaurin's career illustrates and embodies the way mathematics and mathematicians, building on the historical prestige of geometry and Newtonianism, were understood to exemplify certainty and objectivity during the eighteenth century. Using the Newtonian style invokes for your endeavor, whatever your endeavor is, all the authority of Newton... The key word here is 'authority.' Maclaurin helped establish that..." (2004, 841). Unlike the conditions attached to the publication of Galileo's and Copernicus' works, Newton's struggle was at no time a fight over the authority of the Church — not even with the Church of England, a Protestant Church very much at odds with the Roman Catholic Vatican on various Christian theological doctrines. Grabiner notes, for example, that Newton argued that the Earth was not perfectly spherical because the forces of its own rotation led to flattening at its poles over time. Note how artfully this line of argument dodges questioning the basis of the religious authorities' long-standing assertion that the Earth had to be spherical in order to fit with "God's plan." Grabiner's descriptions of Maclaurin's career-long coat-tailing on Newton's reputation make more than clear that the issue became whether the scientists and mathematicians could govern themselves under their own secular priesthood, one that would no longer be accountable before any theological censor but that would prove to be no less ruthless in attacking any outsider challenging their authority. This is the meaning of Maclaurin's efforts to refute and discredit Bishop George Berkeley's serious questioning of Newton's calculus (Grabiner 2004). Berkeley and those who agreed with his criticisms were ridiculed for daring to challenge Newton the Great. During the 18th century, Maclaurin, and his example to others, ensured that the authority of Newtonian mathematics directly replaced the previous authority — either that of Christian scripture or the arbitrary exercise of monarchical power — across a wide range of commercial and other fields. A knowledge-based approach to science explains and accounts for actual phenomena not mainly or only in themselves, but also in relation to other phenomena and especially those characteristically associated with the phenomenon of interest. Then, how knowledge-based was Newton's mathematization of physics as a form of science?
Simply by virtue of how the investigator has isolated the phenomenon, e.g., looked at the motion, but only the motion, of one or more tangible temporally finite object-masses, the phenomenon may appear cognizable by all. When it comes, however, to establishing anything scientifically valid about actual phenomena characteristic of some part of the natural environment, this act of isolation is the starting-point of great mischief. For any phenomenon considered in its natural or characteristic environment, the basis (or bases) of any change of the phenomenon is/are internal to the phenomenon. The conditions in which that change may manifest are all external to the phenomenon. However, when only some particular part of the phenomenon is isolated, what has actually happened? First, almost "by definition" so to speak, some or any information about the root-source of the phenomenon and about its pathway up to the isolated point or phase is already discounted. As a result, consideration of any conditions of change external to the phenomenon has become marginalized. What weight should then be assigned to any mathematical demonstration of the supposed law(s) of operation of the phenomenon thus isolated? Among other things, such demonstrations are substituted for any physical evidence surrounding the phenomenon in its characteristic state-of-nature, which would serve to corroborate the likeliest answer(s) to the question of what constitutes the internal basis (or bases) of change(s) within the phenomenon. Such mathematical demonstrations effectively marginalize any consideration of an internal basis (or bases). In other words, isolating only the tangible and accessible portion of a phenomenon for observation and mathematical generalization transforms what was a phenomenon, characteristic of some portion of the natural environment, into an aphenomenon. Whereas the basis of any change in a real, natural, characteristic phenomenon is internal and its conditions of change are external, neither any external conditions of change nor any idea of what might be the internal basis of change for the phenomenon attaches to the aphenomenon. One alternative approach to the problem of motion encapsulated in Newton's First Law would be to consider "rest" as a relative or transient condition, rather than as something fundamental that gives way to motion only as the result of disturbance by some external force. (Newton's Second Law, often summarized by his F = ma equation-relationship, represents in effect a dynamic case of his First Law.) Newton's schema has made it simple to take for granted the idea of rest and static equilibria in all manner of physical
situations. Clearly, however, by the very same token according to which motion should not be viewed as "absence" or disturbance of "rest," no equilibrium state should be considered static, permanent, or anything other than transitional. Is it not absurd, not to mention embarrassingly elementary yet no less necessary, to ask how any "steady-state" reproducible under laboratory conditions — conditions which are always controlled and selected — could ever be taken as definitive of what would occur "in the field," or in the phenomenon's native environment within nature? Yet an excessive focus on reproducing some measurement — a measurement, moreover, that would approximate the prediction of a governing equation developed from an idealized model — seems to have obscured this fundamental absurdity of infinitely reproducing an equilibrium state. This was precisely the point at which the "authority" of Newton's mathematics could become a source of scientific disinformation. As long as the existence of uniform motion (First Law) or constantly accelerated (Second Law) motion is taken for granted, one loses the ability to see the role of this disinformation. On the other hand, mathematically speaking, Newton's Third Law says that $\sum F = 0$, i.e., that the algebraic sum of all the forces acting on some object-mass "at rest" is zero. Physically speaking, however, "= 0" does not mean there are no forces acting. Rather, "=" means that there is something of a dynamic equilibrium in effect between what is represented on the left-hand and right-hand sides of this expression. One dynamic state of affairs may have given way to another, differently dynamic state of affairs, but does this "rest" truly constitute absence of motion? That is the first level at which disinformation may enter the picture. No single mathematical statement can answer this question or encompass the answer to this question in any particular case. Mathematically speaking, in general:

LHS: expression (simple or complex) = RHS: value (number or function)

In the sciences of physics and chemistry and the engineering associated with their processes, that same "=" sign, which often translates into some sort of balance between a process (or processes) described in one state on the left-hand side and an altered condition, state of matter, or energy on the right-hand side, is also
used to describe data measurements (expressed as a number on the right-hand side) of various states and conditions of matter or energy (described symbolically on the left-hand side). The equivalence operator as some kind of balance, however, is meaningful in a different way than the same equivalence operator in a statement of a numerical threshold reached by some measurement process. Confusing these "=" signs is another potential source of scientific disinformation. The problem is not simply one of notational conventions; "the sophistication of the mathematics" becomes a starting point for sophistries of various kinds.
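To make the first of these two readings of "=" concrete, the following minimal sketch (our illustration, with entirely hypothetical values) shows the "=" of dynamic balance: the algebraic sum of the forces on an object-mass "at rest" is zero even though real, nonzero forces are acting throughout.

```python
# Hypothetical forces, in newtons, acting on a 10 kg object-mass "at rest":
# its weight pulling down and an equal-magnitude support force pushing up.
forces_N = [-9.81 * 10.0, +9.81 * 10.0]

net_force = sum(forces_N)
print(net_force == 0.0)                  # True: the algebraic sum is zero...
print(any(f != 0.0 for f in forces_N))   # ...yet nonzero forces act throughout
```

The "= 0" here reports a balance between ongoing actions, not their absence, which is precisely the distinction that blurs when the same operator is also used to record a measured value.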
3.6 First Level of Rectification of Newton's First Assumption
On the one hand, in a Newtonian mechanical system, time-duration remains tied to the motion of a mass inside a referential frame of active forces. On the other hand, momentum is preserved at all scales, from the most cosmic to the nano- and inter-atomic level, and at none of these scales can time or motion stop. Although Newton posited gravitation as a universally acting force, we now know that electromagnetic forces predominate in matter at the nano- or inter-atomic level. Electromagnetic forces, like frictional forces, can exist and persist without ever having been externally applied. Reasoning thus "by exhaustion," Newton's Three Laws of Motion plus the principle of universal gravitation are actually special cases of "something else." That "something else" is far more general, like the universal preservation of mass-energy balance and conservation of momentum. The connecting glue of this balance that we call nature is that motion is the mode of existence of all matter. This is what renders time a characteristic of matter within the overall context of mass-energy-momentum conservation. By considering motion as the mode of existence of all matter, it also becomes possible at last to treat time, consistently, as a true fourth dimension and no longer as merely the independent variable. In other words, time ceases to be mainly or only a derivative of some spatial displacement of matter. Also, if time is characteristic of matter (rather than characteristic particularly or only of its spatial displacement), then transformation of matter's characteristic scale must entail similarly transforming the scale on which time and its roles are accounted.
From this, it follows as well that whatever we use to measure time cannot be defined as fixed, rigid, or constant, e.g., a standard like the "second," or selected or imposed without regard to the reference-frame of the phenomenon under observation. It is possible to assign physical meaning to ds/dt (velocity), so long as — and only so long as — time is associated with matter only indirectly, e.g., in reference to a spatial displacement rather than directly to matter. However, it seemed impossible and unthinkable to assign any physical meaning to dt/ds. Only with Einstein's conception of relativity does this become possible. However, Einstein's work confines this possibility to applications at vast distances in space measurable in large numbers of light-years. Comparing the scale of everyday reality accessible to human cognition — a terrestrial scale, so to speak — to the nano-scale or any similarly atomic-molecular scale is not unlike comparing terrestrial scales of space and time to thousands of light-years removed in space and time. It would therefore seem no less necessary to look at time as a fourth dimension in all natural processes. Of course, without also transforming the present arbitrary definitions of space and time elaborated in terms of a terrestrial scale, the result would be, at best, no more informative than retaining the existing Newtonian schema and, at worst, utterly incoherent. Subdividing conventional notions of space or time to the milli-, micro-, or nano-scale has been unable to tell us anything meaningful about the relative ranking of the importance of certain phenomena at these scales (including, in some cases, their disappearance). These phenomena are common and well known on non-terrestrial scales but rarely seen and less understood on the terrestrial scale. Relative to the terrestrial scale, for example, the electron appears to be practically without mass. It does possess what is called "charge," however, and this feature has consequences at the atomic and subatomic scale that disappear from view at the terrestrial scale. Einstein's tremendous insight was that, at certain scales, time becomes a spatial measurement, while quantum theory's richest idea was that, at certain other scales, space becomes a temporal measurement. However, the effort to explain the mechanics of the quantum scale in terrestrially meaningful terms led to a statistical interpretation and a mode of explanation that seemed to displace any role for natural laws of operation that would account for what happens in nature at that scale. Reacting against this, Einstein famously expostulated that "God does not play dice with the world." This comment has been widely interpreted as a bias
against statistical modes of analysis and interpretation in general. Our standpoint, however, is different. The significance here of these areas of contention among scientists is not about whether any one of these positions is more or less correct. Rather, the significance is that the assertion that all of nature, at any scale, is quintessentially four-dimensional accords with, and does not contradict, profoundly different and even opposing observations of, and assertions about, similar phenomena at very different scales. There may be any number of ways to account for this, including mathematical theories of chaos and fractal dimension. For example, between qualitatively distinct scales of natural phenomena, there may emerge one or more interfaces characterized by some degree of mathematical chaos and multiple fractal dimensions. Statements of that order are a matter of speculation today and research tomorrow. Asserting the four-dimensionality of all nature, on the other hand, escapes any possibility of 0 mass, 0 energy, or 0 momentum. Simultaneously, this bars the way to absurdities like a mass-energy balance that could take the form of 0 mass coupled with infinite energy or 0 energy coupled with infinite mass. Nature up to now has mainly been characterized as flora, fauna, and the various processes that sustain their existence, plus a storehouse of other elements that play different roles in various circumstances and have emerged over time periods measured on geological and intergalactic scales. According to the standpoint advanced in this chapter, nature, physically speaking, is space-time completely filled with matter, energy, and momentum. These possess a temporal metric, which is characteristic of the scale of the matter under observation. Speaking from the vantage point of the current state of scientific knowledge, it seems highly unlikely that any such temporal metric could be constant for all physically possible frames of reference.
3.7 Second Level of Rectification of Newton's First Assumption

Clarification of gaps in Newton's system makes it possible to stipulate what motion is and is not. However, this still leaves open the matter of time. If time is considered mainly as the duration of motion arising from force(s) externally applied to matter, then it must cease when an object is "at rest." Newton's claim in his First Law of Motion, that an object in motion remains in (uniform) motion until acted upon
by some external force, appears at first to suggest that, theoretically, time is physically continual. It is mathematically continuous but only as the independent variable, and according to Equation 3.1, velocity v becomes undefined if time-duration t becomes 0. On the other hand, if motion ceases — in the sense of ds, the rate of spatial displacement, going to 0 — then velocity must be 0. What has then happened, however, to time? Where in nature can time be said to either stop or come to an end? If Newton's mechanism is accepted as the central story, then many natural phenomena have been operating as special exceptions to Newtonian principles. While this seems highly unlikely, its very unlikelihood does not point to any way out of the conundrum. This is where momentum, p, and — more importantly — its "conservation," come into play. In classically Newtonian terms:

$$p = mv = m\frac{\partial s}{\partial t} \qquad (3.2)$$

Hence,

$$\frac{dp}{dt} = \frac{dm}{dt}\frac{ds}{dt} + m\frac{d^2 s}{dt^2} \qquad (3.3)$$
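The product-rule decomposition in Equation 3.3 can be checked symbolically. The short sketch below (ours, using the sympy library, with hypothetical mass and displacement functions) confirms that the two sides agree identically:

```python
import sympy as sp

t = sp.symbols("t", positive=True)
m = 10 - t                    # hypothetical mass decreasing with time
s = t**2                      # hypothetical spatial displacement
p = m * sp.diff(s, t)         # momentum p = m * ds/dt, per Equation 3.2

lhs = sp.diff(p, t)                                          # dp/dt
rhs = sp.diff(m, t) * sp.diff(s, t) + m * sp.diff(s, t, 2)   # Equation 3.3
print(sp.simplify(lhs - rhs))                                # 0: identical
```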
If the time it takes for a mass to move through a certain distance is shortening significantly as it moves, then the mass must be accelerating. An extreme shortening of this time corresponds, therefore, to a proportionately large increase in acceleration. However, if the principle of conservation of momentum is not to be violated, either (a) the rate of increase for this rapidly accelerating mass is comparable to the increase in acceleration, in which case the mass itself will appear relatively constant and unaffected; (b) mass will be increasing, which suggests the increase in momentum will be greater than even that of the mass's acceleration; or (c) mass must diminish with the passage of time, which implies that any tendency for the momentum to increase also decays with the passage of time. The rate of change of momentum (dp/dt) is proportional to the acceleration (the rate of change in velocity, expressed as d²s/dt²) experienced by the matter in motion. It is proportional as well to the rate of change in mass with respect to time (dm/dt). If the rate of change in momentum approaches the acceleration undergone by the mass in question, i.e., if dp/dt → d²s/dt², then the change in
mass is small enough to be neglected. On the other hand, a substantial rate of increase in the momentum of a moving mass on any scale much larger than its acceleration involves a correspondingly substantial increase in mass. The analytical standpoint expressed in Equations 3.2 and 3.3 above works satisfactorily for matter in general, as well as for Newton's highly specific and peculiar notion of matter in the form of discrete object-masses. Of course, here it is easy to miss the "catch." The "catch" is the very assumption in the first place that matter is an aggregation of individual object-masses. While this may be true at some empirical level on a terrestrial scale — say, 10 balls of lead shot or a cubic liter of wood sub-divided into exactly 1,000 one-cm by one-cm by one-cm cubes of wood — it turns out, in fact, to be a definition that addresses only some finite number of properties of specific forms of matter that also happen to be tangible and, hence, accessible to us on a terrestrial scale. Once again, generalizing what may only be a special case — before it has been established whether the phenomenon is a unique case, a special but broad case, or a characteristic case — begets all manner of mischief. To appreciate the implications of this point, consider what happens when an attempt is made to apply these principles to object-masses of different orders and/or vastly different scales but within the same reference-frame. Consider the snowflake — a highly typical piece of natural mass. Compared to the mass of an avalanche of which it may come to form a part, the mass of any individual component snowflake is negligible. Negligible as it may seem, however, it is not zero. Furthermore, the accumulation of snowflakes in an avalanche's mass of snow means that the cumulative mass of snowflakes is heading towards something very substantial, infinitely larger than that of any single snowflake. To grasp what happens for momentum to be conserved between two discrete states, consider the starting-point p = mv. Clearly, in this case, that would mean that in order for momentum to be conserved,
$$p_{avalanche} = \sum_{snowflakes} p_{snowflake} \qquad (3.4)$$

which means

$$m_{avalanche}\,v_{avalanche} = \sum_{snowflakes} m_{snowflake}\,v_{snowflake} \qquad (3.5)$$
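A back-of-the-envelope rendering of Equation 3.5, with entirely hypothetical numbers, shows how momenta that are individually negligible sum to something that is anything but:

```python
n_snowflakes = 10**9   # assumed count of snowflakes entrained in the slide
m_snowflake = 3e-6     # assumed snowflake mass in kg: milligram-scale, yet not zero
v_common = 20.0        # assumed common downslope speed in m/s

# Right-hand side of Equation 3.5, with a shared speed for every snowflake:
p_avalanche = n_snowflakes * m_snowflake * v_common
print(p_avalanche)     # 60000.0 kg*m/s: a decidedly non-negligible momentum
```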
On a terrestrial scale, an avalanche is a readily observed physical phenomenon. At its moment of maximum (destructive) impact, an avalanche looks like a train wreck unfolding in very slow motion. However, what about the energy released in the avalanche? Of this we can only directly see the effect, or footprint, and another aphenomenal absurdity pops out: an infinitude of snowflakes, each of negligible mass, has somehow imparted a massive release of energy. This is a serious accounting problem; not only momentum, but mass and energy as well, are conserved throughout the universe. This equation is equivalent to formulations attributed to Avicenna as well as Ibn-Haithan (Equations 2.3 and 2.4), who both recognized that any form of energy must be associated with a source. Philosophically, this was also seen by Aristotle and later confirmed and extended by Averroes, whose model permeated to modern Europe through the work of Thomas Aquinas (Chhetri and Islam 2008). The same principle of conservation of momentum enables us to "see" what must happen when an electron (or electrons) bombards a nucleus at a very high speed. Now we are no longer observing or operating at terrestrial scale. Once again, however, the explanation conventionally given is that since electrons have no mass, the energy released by the nuclear bombardment must have been latent and entirely potential, stored within the nucleus. Clearly, then, in accounting for what happens in nature (as distinct from a highly useful toolset for designing and engineering certain phenomena involving the special subclass of matter represented by Newton's object-masses), Newton's central model of the object-mass is insufficient. Is it even necessary? Tellingly on this score, the instant it is recognized that there is no transmission of energy without matter, all the paradoxes we have just elaborated are removable. Hence, we may conclude that, for properly understanding and being able to emulate nature on all scales, mass-energy balance and the conservation of momentum are necessary and sufficient. On the other hand, neither constancy of mass, nor constancy of the speed of light, nor even uniformity in the passage and measure of time is necessary or sufficient. In summary, the above analysis overcomes several intangibles that are not accounted for in conventional analysis. It includes a) a source of particles and energy, b) particles that are not visible or measurable with conventional means, and c) tracking of particles based on their sources (the continuous time function).
3.8 Fundamental Assumptions of Electromagnetic Theory

Once the confining requirement that phenomena be terrestrially tangible and accessible to our perception is removed, it quickly becomes evident that the appearance of energy radiating in "free space" — electromagnetic phenomena such as light energy, for instance — is an appearance only. (As for the transmission of any form of energy at a constant speed through a vacuum, this may signal some powerful drug-taking on the observer's part, but otherwise it would seem to be a physical impossibility since nature has no such thing as a vacuum.) Nowhere in nature can there be such a thing as energy without mass or mass without energy. Otherwise, the conservation of mass and energy both come into question. Matter at the electronic, inter-atomic, inter-molecular level cannot be dismissed as inconsequential by virtue of its extremely tiny amounts of mass. Electromagnetic phenomena would appear to demonstrate that whatever may be lacking in particulate mass at this level is made up for by electron velocity, or the rate at which these charged particles displace space. The inter-atomic forces among the molecules of a classically Newtonian object-mass sitting "at rest," so to speak, must be at least a match for, if not considerably stronger than, gravitational forces. Otherwise the object-mass would simply dissolve upon reaching a state of rest as molecular matter is pulled by gravity towards the center of the earth. The fact that, in the absence of any magnetic field, either applied or ambient, "Newtonian" object-masses seem electrically neutral means only that they manifest no net charge. Lacking the Bohr model or other subsequent quantum models of matter at the electronic level, nineteenth-century experimenters and theorists of electricity had no concept of matter at the electronic level comprising extremely large numbers of very tiny charged particles. They struggled instead to reconcile what was available to their limited means for observing electrical phenomena with Newtonian mechanics. Information as to what was at the input of an electrical flow and what was measured or observed at an output point of this flow was available for their observation. The actual composition of this flow, however, remained utterly mysterious. No amount of observation or careful measurement of this flow could bring anybody closer to discovering or hypothesizing the electronic character of matter, let
alone bring anybody closer to discovering or hypothesizing that the manner in which electricity flowed was a function of this fundamental electronic character. As discussed in the first part of this chapter, absent Galileo's careful deconstruction of the Aristotelian notion that objects falling freely reached the earth at different times dependent on their mass, the fundamental fact of the existence and operation of gravitational attraction would have been missed. In the absence of an alternative hypothesis, Newton's mechanics were assumed to apply to electrical flow. Among the leading developers in European science, there were many disputes about the theory of electrical phenomena, their experimental verification, or both, but these were all within the camp of what might be broadly called "Newtonianism." Before Maxwell there had been those, such as Ampere, Oersted, and Berzelius, who proposed to model electrical phenomena as a Newtonian kind of action-at-a-distance (Mundy 1989). It was their line of thought that inspired Faraday's experimental program at the start of his career towards the end of the 1810s, as Sir Humphry Davy's assistant in the laboratories of The Royal Institution. That line of thought also ended up raising questions whose answers, sought and obtained by Faraday, ultimately refuted such explanations of current flow. Faraday's experiments showed that there were other electrical effects that did not operate at a distance and that there could not be two distinct kinds of electricity or "electrical fluids." Maxwell adopted a compromise position that electricity manifested the characteristics of an "incompressible fluid" but was not itself a fluid: "The approach which was rejected outright was that of 'purely mathematical' description, devoid of 'physical conceptions'; such an approach, Maxwell felt, would turn out to be unfruitful. More favorably viewed, and chosen for immediate use in 1856, was the method of physical analogy. Physical analogies were not only more physical and suggestive than purely mathematical formulae; they were also less constraining than physical hypotheses. Their use, then, constituted a desirable middle way, and Maxwell proceeded to treat electric fields, magnetic fields, and electric currents each by analogy with the flow of an incompressible fluid through resisting media. There was no suggestion here that in an actual electric field, for example, there was some fluid-flow process going on; rather, an analogy was drawn between the two different physical situations, the electric field and the
fluid flow, so that with appropriate changes of the names of the variables the same equations could be applied to both" (Siegel 1975, 365). Meanwhile, although Maxwell became the head of the Cavendish Laboratory, the world-famous research center at Cambridge University, neither he nor his students would ever undertake any directed program of experiments to establish what electricity itself might be (Simpson 1966). Instead, they remained supremely confident that systematic reapplication of Newtonian principles to all new data forthcoming regarding electrical effects would systematically yield whatever electricity was not. The common understanding among engineers is that Maxwell's equations of electromagnetism established the notion that light is an electromagnetic phenomenon. Broadly speaking this is true, but Maxwell had a highly peculiar notion of what constituted an electromagnetic phenomenon. First and foremost, it was a theoretical exercise not based on any actual experimental or observational program of his own regarding any electrical or electromagnetic phenomena at all. Secondly — and most tellingly in this regard — when Maxwell's equations are examined more closely, his original version includes an accounting for something he calls "displacement current" whose existence he never experimentally verified (Simpson 1966, 413; Bork 1963, 857; Chalmers 1973a, 479). Furthermore, the version of Maxwell's equations in general use was actually modified by Hertz; this was the version on which Einstein relied. Some historical background helps illuminate what was going on within this development before, during, and following Maxwell's elaboration of his equations. Notably, Maxwell seemed to have felt no compelling need to further establish, for his own work, what electricity might be (Bork 1967). As a strong proponent of the experimental findings of Michael Faraday, he felt no need to "reinvent the wheel." Faraday's brilliance lay in his design and execution of experimental programs that systematically eliminated false or unwarranted inferences from the growing body of knowledge of electrical phenomena one by one (Williams 1965). Maxwell saw a need to furnish Faraday's work with a mathematical basis so that the theoretical coherence of mankind's knowledge in this field could be presented with the same elegance as the rest of the physics of that time, relying on a foundation of Newtonian
mechanics: "Maxwell's objective was to establish Faraday's theory on a surer physical basis by transforming it into a mechanical theory of a mechanical aether, that is, an aether whose behavior is governed by the principles of Newtonian mechanics" (Chalmers 1973b, 469). One of Faraday's biographers has questioned whether he had a general theory about electrical phenomena as opposed to experimentally demonstrable explanations of specific electrical phenomena, many of them linked (Williams 1965). Notwithstanding that issue, however, Maxwell firmly accepted the existence of "a mechanical aether" as something required for fitting a Newtonian theoretical framework in order to render existing knowledge of electromagnetic phenomena coherent. What is known, but not well understood, is the degree to which the general dependence among scientists on a mathematical, i.e., aphenomenal, framing of a natural phenomenon like electricity — a phenomenon not normally readily accessible in complete form to the five senses — exercised so much influence over those scientists whose sense of physics was not initially trained in the Newtonian mold. During the 1870s, one of the most important developers of insights opened by Maxwell's work, for example, was Hermann von Helmholtz. Helmholtz came to physics via physiology, in which he had become interested in the electrical phenomena of the human body. The task of "squaring the circle," so to speak, fell to Helmholtz. He reconciled Maxwell's equations with the "action-at-a-distance" theories of Ampere and of Weber especially, who formulated an equation in 1847 predicting dielectric effects of charged particles as a form of electrical potential. In order to keep the analytical result consistent with the appropriate physical interpretations of observed, known phenomena of open and closed circuits, charged particles, dielectrics, and conductors, Helmholtz was compelled to retain the existence of "the aether." However, his analysis set the stage for his student, Hertz, to predict and extrapolate the source of electromagnetic waves propagating in "empty space" — a shorthand for a space in which "the aether" did not seem to play any role (Woodruff 1968). What is much less well known is that Maxwell's mentor, Faraday, rejected the assumption that such an "aether" existed. He maintained this position, albeit unpublished, for decades. One of his biographers reproduced for the first time in print a manuscript from the eight volume folio of Faraday's diary entitled "The Hypothetical Ether," which establishes, in the drily understated words of his biographer,
that "the ether quite obviously did not enjoy much favor in Faraday's eyes" (Williams 1965, 455). This was long before the famous Michelson-Morley experiment failed to measure its "drift" and placed the asserted existence of the aether in question among other men of science (Williams 1965; Holton 1969). The real shocker in all this, moreover, is the fundamental incoherence that Maxwell ended up introducing into his theoretical rendering of electromagnetic phenomena. Maxwell was struggling to remain theoretically in conformity with a Newtonian mechanical schema: "The major predictions of Maxwell's electromagnetic theory, namely, the propagation of electromagnetic effects in time and an electromagnetic theory of light, were made possible by Maxwell's introduction of a displacement current" (Chalmers 1973a, 171). The "major predictions" were correct; the justification, namely displacement current, was false. In the first part of this chapter, an absurdity was deliberately extrapolated of scientists and engineers. The example showed that in the absence of Galileo's point of departure, 21 st century research would only refine the precision of demonstrations proving Aristotle's assumption about the relationship of mass to rate of free fall for heavier-than-air object-masses. The history of the issues in dispute among experimenters and theorists of electrical phenomena, before the emergence of modern atomic theory, serves to illustrate, with factual events and not imaginary projections, the same difficulty that seized the development of scientific investigation in the shadow of Newtonian "authority." This is something seen repeatedly in what we have identified elsewhere as part of aphenomenal modeling of scientific explanations for natural phenomena (Zatzman and Islam 2007). What Maxwell in effect erected was the following false syllogism: • Any deformation of matter in space, including wavelike action, must fulfill requirements of Newtonian mechanism. • Postulating electrical flow as a displacement due to electromagnetic waves propagating in space at the speed of light and causing mechanical deformation of an "aether" across vast distances anywhere in space fulfills this requirement. • Therefore, electromagnetic waves must propagate anywhere in space at the speed of light.
In the 19th century, at a time when models of matter on an electronic scale were sketchy to non-existent, Newton's mechanics — developed for object-masses on a terrestrial scale — were assumed to apply. Once the "mechanical aether" was found not to exist, however, light and other electromagnetic phenomena as forms of energy became separated from the presence of matter. Einstein disposed of the "mechanical aether" also without the benefit of more fully developed modern atomic theory. He instead retained the aphenomenal idea that light energy could travel through a vacuum, i.e., in the complete absence of matter. Meanwhile, practical — that is, physically meaningful — interpretations of modern atomic theory itself today persist in retaining a number of aphenomenal assumptions that make it difficult to design experiments that could fully verify, or falsify, Einstein's general relativity theory. Hermann Weyl, one of Einstein's close collaborators in elaborating relativistic mathematical models with meaningful physical interpretations, summarized the problem with stunning clarity in 1944, in an article suggestively entitled "How Far Can One Get With a Linear Field Theory of Gravitation in Flat Space-Time?" He wrote, "Our present theory, Maxwell + Einstein, with its inorganic juxtaposition of electromagnetism and gravitation, cannot be the last word. Such juxtaposition may be tolerable for the linear approximation (L) but not in the final generally relativistic theory" (Weyl 1944, 602). As an example of the persistence of aphenomenal models in many areas of practical importance, consider the assertion of a scale of compressibility-incompressibility: matter is classified as either purely and entirely incompressible (also known as solid), slightly compressible (also known as liquid), or compressible (also known as gas). In other words, one and the same matter is considered to possess three broad degrees of compressibility. This counterposes the idea that matter could exist characteristically mostly in a solid, liquid, or vapor state, but that between each of these states there is some non-linear point of bifurcation. Before such a bifurcation point, matter is in one state, and after that point it is distinctly in another state. The underlying basis of the compressibility-scale reasoning is not hard to spot. A little reflection uncovers the notion of discrete bits of matter, conceived of as spherical nuclei (consisting of neutrons and positively charged discrete masses) orbited by negatively charged, much smaller, faster-moving balls called electrons.
Newton's laws of motion are insufficient for determining where in space, at any point in time, any piece of electronic-scale matter actually is. A statistical version of mechanics, known as quantum theory, has been developed instead to assert probable locations, taking into account additional effects that do not normally arise with object-masses on a terrestrial scale, such as spin and charge. It is a system whose content is radically modified from that of Newtonian mechanism, but whose form resembles a Newtonian system of planets and satellite orbiting bodies. These arrangements are often pictured as electron or atom "clouds" purely for illustrative purposes. Mathematically, these are treated as spherical balls, interatomic forces are computed in terms of spherical balls, porosity at the molecular level is similarly computed according to the spherical-balls model, and so on. Just as the curl and divergence in Maxwell's famous equations of electromagnetism purport to describe Newtonian mechanical deformations of a surface arising from externally acting forces, the aphenomenal assumption of a spherical-balls model underpins the compressibility-incompressibility scale already mentioned. The mathematics to deal with spheres and other idealized shapes and surfaces always produces some final answer for any given set of initial or boundary conditions. It prevails in part because the mathematics needed to deal with "messy," vague things, such as clouds, is far less simple. It is the same story when it comes to dealing with linearized progression on a compressibility scale as opposed to dealing with non-linear points of bifurcation between different phases of matter. All the retrofitting introduced into the modern electronic theory cannot hide the obvious. The act of deciding "yes or no" or "true or false" is the fundamental purpose of model building. However, as a result of adding so many exceptions to the rule, the logical discourse has become either corrupted or rendered meaningless. For example, there is no longer any fundamental unit of mass. Instead we have the atom, the electron, the quark, the photon, etc. The idea is to present mass "relativistically," so to speak, but the upshot is that it becomes possible to continue to present the propagation of light energy in the absence of mass. What was discussed above, regarding conservation of momentum, hints at the alternative. This alternative is neither to finesse mass into a riot of sub-atomic fragments nor to get rid of it, but rather to develop a mathematics that can work with the reality in which mass undergoes dynamic change.
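One familiar setting in which mass already undergoes explicit dynamic change is variable-mass motion, e.g., a body expelling part of its own mass. The sketch below (our illustration with hypothetical figures, not a construction taken from the text) integrates the dm/dt term that Equation 3.3 makes visible and checks the result against the closed-form Tsiolkovsky relation Δv = u ln(m₀/m₁):

```python
import math

# Variable-mass motion: m(t) * dv/dt = u * (-dm/dt), with exhaust speed u fixed
# and no gravity or drag. All numbers are hypothetical.
m, v = 100.0, 0.0   # initial mass (kg) and speed (m/s)
u = 2000.0          # assumed speed of expelled mass relative to the body (m/s)
dm_dt = -0.5        # assumed constant mass-loss rate (kg/s)
dt = 1e-3           # integration step (s)

while m > 50.0:     # integrate until half the mass has been expelled
    v += u * (-dm_dt) / m * dt
    m += dm_dt * dt

print(round(v, 1), round(u * math.log(100.0 / 50.0), 1))  # numeric vs. closed form
```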
Outside those scales in which Newton either actually experimented (e.g., object-masses) or about which he was able to summarize actual observational data (e.g., planetary motion), the efforts to retain and re-apply a Newtonian discourse and metaphor at all scales continue to produce models, analysis, and equations that repeatedly and continually diverge from what is actually observed. For example, mass and energy are analyzed and modeled separately and in isolation. In nature, mass and energy are inseparable. The conservation of each is a function of this physical (phenomenal) inseparability. Hence, within nature, there is some time function that is characteristic of mass. Once mass and energy are conceptually separated and isolated, however, this time function disappears from view. This time function accounts for the difference between artificial, synthesized versions of natural products or phenomena on the one hand, and the original or actual version of the material or phenomenon in its characteristic natural environment on the other. It is why the results of applying organic and chemical fertilizers (their pathways) are not and can never be the same, why fluorescent light will not sustain the photosynthesis that sunlight very naturally produces and sustains, and why the anti-bacterial action of olive oil and that of antibiotics cannot be the same. Wherever these artificial substitutes and so-called "equivalents" are applied, the results are truly magical — in the full sense of the word, since magic is utterly fictional. It is easy to dismiss such discourse as "blue sky" and impractical. Such conclusions have been the reaction to a number of obstacles that have appeared over the years. One obstacle in adapting engineering calculations to account for natural time functions and their consequences has been the computational burden of working out largely non-linear problems by non-linear methods. These are systems in which multiple solutions will necessarily proliferate. However, modern computing systems have removed most of the practical difficulties of solving such systems. Another obstacle is that time measurement itself appears to have been solved long ago, insofar as the entire issue of natural time functions has been finessed by the introduction of artificial clocks. These have been introduced in all fields of scientific and engineering research work, and they have varying degrees of sophistication. All of them measure time in precisely equal units (seconds, or portions thereof). Since Galileo's brilliant demonstration (using a natural clock) of how heavier-than-air objects in free fall reach the earth at the same time regardless of their mass, scientists have largely lost sight of the meaning,
purpose, and necessity of using such natural time measurement to clock natural phenomena (Zatzman et al. 2008). Another obstacle is that many natural time functions have acquired undeserved reputations as being subjective and fanciful. That is what has happened, for example, to the concept of "the blink of an eye." Yet contemporary research is establishing that "the blink of an eye" is the most natural time unit (Zatzman 2008). Another obstacle is the trust that has been (mis)placed in conventional Newtonian time functions. Classically, the Newtonian calculus allows the use of a Δt of arbitrary length — something that causes no problems in an idealized mathematical space. However, any Δt that is longer than the span in which significant intermediate occurrences naturally occur out in the real world, i.e., in the field, is bound to miss crucially important moments, such as the passage of a process through a bifurcation point to a new state. In petroleum engineering, an entire multi-million-dollar sub-industry of retrofitting and history matching has come into existence to assess and take decisions regarding the divergence of output conditions in the field from the predictions of engineering calculations. If, on the other hand, the scientifically desirable natural time function were the basis of such engineering calculations in the first place, the need for most, if not all, history-matching exercises would disappear. This is a source of highly practical, large savings on production costs.
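A toy illustration of the Δt problem (hypothetical process and numbers, our sketch): a brief excursion to a new state that occurs between two samples is invisible to a clock whose arbitrary Δt is wider than the event itself.

```python
import numpy as np

# A process that briefly jumps to a new state between t = 0.6 and t = 0.8.
def state(t):
    return np.where((t > 0.6) & (t < 0.8), 1.0, 0.0)

coarse = state(np.arange(0.0, 2.0, 0.5))   # arbitrary, wide delta-t
fine = state(np.arange(0.0, 2.0, 0.05))    # delta-t sized to the process

print(coarse.max())   # 0.0: the excursion is never observed
print(fine.max())     # 1.0: a finer, process-aware clock catches it
```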
3.9 Aims of Modeling Natural Phenomena
The inventor of the Hamming code, one of the signal developments in the early days of information theory, liked to point out in his lectures on numerical analysis that "the purpose of computing is insight, not numbers" (Hamming 1984). Similarly, we can say that the aim in modeling natural phenomena is direction (or, in more strictly mathematical-engineering terms, the gradient). That is, this aim is not and cannot be some precise quantity. Three comments help elaborate this point. First, with nature being the ultimate dynamical system, no quantity, however precisely measured, at time t₀ will be the same at time t₀ + Δt, no matter how infinitesimally small we set the value of that Δt. Secondly, in nature, matter in different forms at very different scales interacts continually, and the relative weight or balance of very different forces (intermolecular forces, interatomic forces of attraction and
repulsion, and gravitational forces of attraction) cannot be predicted in advance. Since nature operates to enable and sustain life forms, however, it is inherently reasonable to confine and restrict our consideration to three classes of substances that are relevant to the maintenance or disruption of biological processes. Thirdly, at the same time, none of the forces potentially or actually acting on matter in nature can be dismissed as negligible, no matter how "small" their magnitude. It follows that it is far more consequential for a practically useful nature model to be able to indicate the gradient/trend of the production, conversion, or toxic accumulation of natural biomass, natural non-biomass, and synthetic sources of biomass, respectively. As already discussed earlier, generalizing the results for physical phenomena observed at one scale to fit all other scales has created something of an illusion, one reinforced moreover by the calculus developed by Newton. That analytical toolset included an assumption that any mathematical extension, x, might be infinitely subdivided into an infinite quantity of Δx-es, which would later be (re-)integrated back into some new whole quantity. However, if the scales of actual phenomena of interest are arbitrarily mixed, leapfrogged, or otherwise ignored, then what works in physical reality may cease to agree with what works in mathematics. Consider in this connection the extremely simple equation:

y = 5 (3.6)

Taking the derivative of this expression with respect to an independent variable x yields:

dy/dx = 0 (3.7)

To recover the originating function, we perform:

∫ dy = c (3.8)

Physically speaking, Equation (3.8) amounts to asserting that "something" of indefinite magnitude, designated as c (it could be "5" as a special case, with proper boundaries or conditions, but it could well be anything else), has been obtained as the result of
integrating Equation 3.7, which itself had an output magnitude of 0, i.e., nothing. This is scientifically absurd. Philosophically, even Shakespeare's aging and crazed King Lear recognized that "nothing will come of nothing: speak again" (Shakespeare 1608). The next problem associated with this analysis is that the pathway is obscured, opening the possibility of misrepresenting the original whole. For instance, a black (or any other color) pixel within a white wall will falsely create an entirely black (or any other color corresponding to the pixel) wall if integrated without restoring the nearby pixels that were part of the original white wall. This would happen even though, mathematically, no error has been committed. This example serves to show the need for including all known information in space as well as in time. Mathematically, this can be expressed as:

∫∫ m v = constant, with the integration running over all time (t = 0 to ∞) and over all elements of space (s = 1 to ∞) (3.9)
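The pixel example can be made concrete with a few lines of code. The sketch below is a minimal illustration added here, not the authors' code; the array size and pixel values are arbitrary assumptions. It differentiates a "wall" of pixel values and then re-integrates from a single sampled pixel, showing that the reconstruction depends entirely on which pixel supplies the constant of integration, i.e., the pathway information is lost.

import numpy as np

# A "wall" of 10 pixels: all white (value 1.0) except one black pixel (0.0).
wall = np.ones(10)
wall[4] = 0.0

# Differentiation keeps only local changes; the absolute level is discarded.
d = np.diff(wall)

def reintegrate(c, d):
    # Recover a wall from its gradient d, taking c as the constant of integration.
    return np.concatenate(([c], c + np.cumsum(d)))

print(reintegrate(wall[0], d))  # starts from a white pixel: the true wall is recovered
print(reintegrate(wall[4], d))  # starts from the black pixel: a falsely "black" wall

Both reconstructions satisfy the same differential relation, so mathematically no error has been committed; yet the second misrepresents the whole wall (and even produces an impossible negative pixel), exactly the falsification described above.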
The aim of a useful nature model can be neither to account for some "steady state," an impossibility anywhere in nature, nor to validate a mechanical sub-universe operating according to the criteria of an observer external to the process under observation. Dynamic balances of mass, energy, and momentum imply conditions that will give rise to multiple solutions, at least with the currently available mathematical tools. When it comes to nature, a portion of the space-time continuum in which real physical boundary conditions are largely absent, a mathematics that requires Δt → 0 is clearly inappropriate. What is needed is a set of non-linear algebraic equations that incorporate all relevant components (unknowns and other variables) involved in any of the critical balances that must be preserved by any natural system.
3.10 Challenges of Modeling Sustainable Petroleum Operations

Recently, Khan and Islam (2007a, 2007b) outlined the requirements for rendering fossil fuel production sustainable. This scientific study shows step by step how various operations, ranging from exploration to fuel processing, can be performed in such a manner that the resulting products will not be toxic to the environment. However, modeling such a
process is a challenge because the conventional characterization of matter makes no provision for separating sustainable operations from unsustainable ones. This description is consistent with Einstein's revolutionary relativity theory, but it does not rely on Maxwell's equations as the starting point. The resulting equation is shown to be continuous in time, thereby allowing transition from mass to energy. As a result, a single governing equation emerges. This equation is solved for a number of cases and is shown to be successful in discerning between various natural and artificial sources of mass and energy. With this equation, the distinction between chemical and organic fertilizers, microwave and wood-stove heating, and sunlight and fluorescent light can be drawn with unprecedented clarity. Such analysis would not be possible with conventional techniques. Finally, analysis results are shown for a number of energy- and material-related prospects. The key to the sustainability of a system lies within its energy balance. Khan and others recast the combined energy-mass balance equation in the form of Equation 3.9. Dynamic balances of mass, energy, and momentum imply conditions that will give rise to multiple solutions, at least with the currently available mathematical tools. In this context, Equation 3.9 is of utmost importance. It can be used to define any process to which the following classical mass balance applies:

Qin = Qout + Qacc (3.10)
In the above classical mass balance equation, Qin expresses inflowing matter, Qacc represents accumulating matter, and Qout represents outflowing matter. Qacc will contain all terms related to dispersion/diffusion, adsorption/desorption, and chemical reactions. This equation must include all available information regarding the inflowing matter, e.g., its sources and pathways, the vessel materials, catalysts, and others. In this equation, a distinction must be made among various forms of matter based on their source and pathway. Three categories are proposed: 1) biomass (BM); 2) convertible non-biomass (CNB); and 3) non-convertible non-biomass (NCNB). Biomass is any living object. Even though dead matter is also conventionally called biomass, we avoid that denomination, as it is difficult to discern scientifically when matter ceases to be biomass after death. Convertible non-biomass (CNB) is matter that, through natural processes, will be converted into biomass. For example, a dead tree is converted into methane after microbial action, the
methane is naturally broken down into carbon dioxide, and plants utilize this carbon dioxide in the presence of sunlight to produce biomass. Finally, non-convertible non-biomass (NCNB) is matter that emerges from human intervention. Such matter does not exist in nature, and its existence can only be considered artificial. For instance, synthetic plastics (e.g., polyurethane) may have compositions similar to natural polymers (e.g., human hair, leather), but they are brought into existence through a very different process than that of natural matter. Similar examples can be cited for all synthetic chemicals, ranging from pharmaceutical products to household cookware. This denomination makes it possible to keep track of the source and pathway of matter. The principal hypothesis of this denomination is that all matter naturally present on Earth is either BM or CNB, with the following balance:

Matter from natural source + CNB1 = BM + CNB2 (3.11)
The quality of CNB2 is different from, or superior to, that of CNB1 in the sense that CNB2 has undergone one extra step of natural processing. If nature is continuously moving toward a better environment (as represented by the transition from a barren Earth to a green Earth), CNB2 quality has to be superior to CNB1 quality. Similarly, when matter from natural energy sources comes in contact with BM, the following equation can be written:

Matter from natural source + B1M = B2M + CNB (3.12)
Applications of this equation can be cited from the biological sciences. When sunlight comes in contact with retinal cells, vital chemical reactions take place that result in the nourishment of the nervous system, among others (Chhetri and Islam 2008a). In these mass transfers, chemical reactions take place entirely differently depending on the light source, evidence of which has been reported in numerous publications (Lim and Land 2007). Similarly, sunlight is also essential for the formation of vitamin D, which in turn is essential for numerous physiological activities. In the above equation, vitamin D would fall under B2M. This vitamin D is not to be confused with synthetic vitamin D, the latter being the product of an artificial process. It is important to note that all products on the right-hand side have greater value than the ones on the left-hand side.
This is the inherent nature of natural processing, a scheme that continuously improves the quality of the environment and is the essence of sustainable technology development. The following equation shows how energy from NCNB will react with various types of matter:

Matter from unnatural source + B1M = NCNB2 (3.13)
An example of the above equation can be cited from biochemical applications. For instance, if artificially generated UV comes in contact with bacteria, the resulting bacterial mass would fall under the category of NCNB, stopping further value addition by nature. Similarly, if bacteria are destroyed with a synthetic antibiotic (pharmaceutical product, pesticide, etc.), the resulting product will not be conducive to value addition through natural processes, and instead becomes a trigger for further deterioration and insult to the environment.

Matter from unnatural source + CNB1 = NCNB3 (3.14)
An example of the above equation can also be cited from biochemical applications. The NCNB2, which is created artificially, reacts with CNB1 (such as N₂ or O₂) and forms NCNB3. The transformation is in a negative direction, meaning the product is more harmful than it was earlier. Similarly, the following equation can be written:

Matter from unnatural source + NCNB1 = NCNB2 (3.15)
An example of this equation is that sunlight leads to photosynthesis in plants, converting natural non-biomass (CNB) into biomass (BM), whereas fluorescent lighting would freeze that process and never convert natural non-biomass into biomass.
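To make the bookkeeping implied by Equations 3.10-3.15 concrete, the following sketch is an illustrative toy added here, not the authors' code; the class names, the simplified rule, and the example streams are all assumptions. It tracks matter by source and pathway rather than by composition alone, so that two chemically similar streams can still be distinguished.

from dataclasses import dataclass, field

# Categories proposed in the text: biomass (BM), convertible non-biomass (CNB),
# and non-convertible non-biomass (NCNB).
BM, CNB, NCNB = "BM", "CNB", "NCNB"

@dataclass
class Matter:
    name: str
    category: str                                  # BM, CNB, or NCNB
    pathway: list = field(default_factory=list)    # processing history

def process(matter: Matter, source_is_natural: bool, step: str) -> Matter:
    # Simplified version of Equations 3.11-3.15: natural processing keeps
    # matter in the BM/CNB cycle (quality improves); any unnatural step
    # converts the output irreversibly to NCNB.
    if source_is_natural and matter.category in (BM, CNB):
        new_cat = matter.category      # stays in the natural cycle (3.11, 3.12)
    else:
        new_cat = NCNB                 # falls out of the cycle (3.13-3.15)
    return Matter(matter.name, new_cat, matter.pathway + [step])

# Two CO2 streams with identical composition but different pathways:
wood_co2 = process(Matter("CO2", CNB, ["charcoal"]), True, "clay-burner combustion")
ref_co2 = process(Matter("CO2", CNB, ["crude oil"]), False, "refining + catalytic burner")
print(wood_co2.category, wood_co2.pathway)  # CNB:  remains usable by the ecosystem
print(ref_co2.category, ref_co2.pathway)    # NCNB: tracked as harmful

The design point is simply that the category label travels with the pathway history, which is exactly the information a conventional composition-only balance discards.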
3.11 Implications of a Knowledge-based Sustainability Analysis

The principles of the knowledge-based model proposed here are restricted to those of mass (or material) balance, energy balance, and momentum balance. For instance, in a non-isothermal model, the first step is to resolve the energy balance based on temperature as the driver for some given time period, the duration of which
has to do with the characteristic time of the process or phenomenon. Following the example of the engineering approach employed by Abou-Kassem (2007) and Abou-Kassem et al. (2006), the available temperature data are distributed block-wise over the designated time period of interest. Temperature being the driver of the bulk process of interest, a momentum balance may then be derived, with velocity supplied by local speeds for all known particles. This is a system that manifests the phenomena of thermal diffusion, thermal convection, and thermal conduction without spatial boundaries, while nonetheless giving rise to the "mass" component. The key to the system's sustainability lies with its energy balance. Here is where natural sources of biomass and non-biomass must be distinguished from non-natural, non-characteristic, industrially synthesized sources of non-biomass.
3.11.1 A General Case
Figure 3.2 envisions the environment of a natural process as a bioreactor that does not and will not enable conversion of synthetic non-biomass into biomass. The key problem of mass balance in this process, as in the entire natural environment of the Earth as a whole, is set out in Figure 3.3, in which the accumulation rate of synthetic non-biomass continually threatens to overwhelm the natural capacities of the environment to use or absorb such material. In evaluating Equation 3.10, it is desirable to know all the contents of the inflowing matter. However, it is highly unlikely that all the contents will be known, even on a macroscopic level. In the absence of a technology that would identify the detailed contents, it is important to know the pathway of the process in order to have an idea of the source of impurities. For instance, if de-ionized water is used in a system, one would know that its composition has been affected by the process of de-ionization. Similar rules apply to products of organic sources, etc. If we consider a combustion reaction (coal, for instance) in a burner, the bulk output will likely be CO₂. However, this CO₂ will be associated with a number of trace chemicals (impurities) depending on the process it passes through. Because Equation 3.10 includes all known chemicals (e.g., from the source, absorption/desorption products, and catalytic reaction products), it is possible to track matter in terms of CNB and NCNB products. Automatically, this analysis will lead to differentiation of CO₂ in terms of pathway and the composition of the environment, the basic requirement of Equation 3.11.
Figure 3.2 Sustainable pathway for material substance in the environment. (Schematic: CO₂ and CH₄ cycle among plants, soil/sand, and a bioreactor in which microbes convert matter to biomass; plastic remains non-biomass.)

Figure 3.3 Transitions of natural and synthetic materials as time t → ∞: biomass and natural non-biomass (convertible to biomass, e.g., by sunlight) follow an upward pathway, whereas DDT, Freon, and plastic (synthetic non-biomass, inconvertible to biomass) do not.

Figure 3.4 Divergent results from natural and artificial processing as time t → ∞: convertible CO₂ trends toward the useful; non-convertible CO₂ trends toward the harmful.
According to Equation 3.11, charcoal combustion in a burner made of clay will release CO₂ along with the natural impurities of the charcoal and of the materials of the burner itself. Different phenomena can be expected from a nickel-plated burner with an exhaust pipe made of copper. Anytime CO₂ is accompanied by CNB matter, it can be characterized as beneficial to the environment. This is shown in the positive slope of Figure 3.3. On the other hand, when CO₂ is accompanied by NCNB matter, it is considered harmful to the environment, as it is not readily acceptable to the ecosystem. For instance, the exhaust of the Cu- or Ni-plated burner (with catalysts) will include chemicals, e.g., nickel, copper from the pipe, and trace chemicals from the catalysts, besides the bulk CO₂, because of adsorption/desorption, catalyst chemistry, etc. These trace chemicals fall under the category of NCNB and cannot be utilized by plants (the negative slope in Figure 3.3). This figure clearly shows that the upward-slope case is sustainable, as it forms an integral component of the ecosystem. With the conventional mass balance approach, the bifurcation graph of Figure 3.3 would be incorrectly represented by a single graph, incapable of discerning between different qualities of CO₂, because the information regarding quality (trace chemicals) is lost in the balance equation. Only recently has the work of Sorokhtin et al. (2007) demonstrated that without such a distinction there cannot be any
scientific link between global warming and fossil fuel production and utilization. In solving Equation 3.10, one will encounter a set of non-linear equations. These equations cannot be linearized. Recently, Moussavizadegan et al. (2007) proposed a method for solving such non-linear equations. The principle is to cast the governing equation in engineering formulation, as outlined by Abou-Kassem et al. (2006), whose principles were further elaborated in Abou-Kassem (2007). The non-linear algebraic equations can then be solved in multiple-solution mode. Moussavizadegan (2007) recently solved such an equation to contemporary, professionally acceptable standards of computational efficiency. The result is pictured in Figure 3.5.
Figure 3.5 The solution behavior manifested by just two non-linear bivariate equations, x⁴ + x³y + 0.2y⁴ − 15x − 3 = 0 and 2x⁴ − y⁴ − 10y + 3 = 0, suggests that a "cloud" of solutions would emerge.
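The multiple-solution behavior in Figure 3.5 can be reproduced numerically. The sketch below is a minimal illustration added here, not the authors' code; the use of scipy.optimize.fsolve and the particular grid of starting guesses are assumptions. It launches a root finder from many initial points and collects the distinct real roots of the two equations quoted in the caption.

import numpy as np
from scipy.optimize import fsolve

def system(v):
    # The two bivariate equations from the Figure 3.5 caption.
    x, y = v
    return [x**4 + x**3 * y + 0.2 * y**4 - 15.0 * x - 3.0,
            2.0 * x**4 - y**4 - 10.0 * y + 3.0]

roots = set()
# Sweep a grid of initial guesses; different guesses may converge to different roots.
for x0 in np.linspace(-4, 4, 17):
    for y0 in np.linspace(-4, 4, 17):
        sol, info, ier, msg = fsolve(system, [x0, y0], full_output=True)
        if ier == 1:  # converged
            roots.add((round(sol[0], 6), round(sol[1], 6)))

for r in sorted(roots):
    print(r)  # several distinct (x, y) pairs emerge, not a single "answer"

The point of the exercise matches the figure: a non-linear system of this kind has no unique solution to report, only a set of solutions whose overall trend must be interpreted.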
3.11.2 Impact of Global Warming Analysis
In light of the analysis in the preceding section, consider the problem we encounter in evaluating global warming and its cause, as considered by Chhetri and Islam (2008b). Total energy consumption in 2004 was equivalent to approximately 200 million barrels of oil per day, or about 14.5 terawatts, over 85% of which came from fossil fuels (Service 2005). Globally, about 30 billion tons of CO₂ is produced annually from fossil fuels, which include oil, coal, and natural gas (EIA 2004). The industrial CO₂ produced from fossil fuel burning is considered solely responsible for the current global warming and climate change problems (Chhetri and Islam 2007). Hence, burning fossil fuels is not considered a sustainable option. However, this "sole responsibility" is not backed by science (in the absence of our analysis above). The confusion emerges from the fact that conventional analysis does not distinguish between CO₂ from natural processes (e.g., oxidation in natural systems, including breathing) and CO₂ emissions that come from industrial or man-made devices. This confusion leads to the argument that man-made activities cannot be responsible for global warming. For instance, Chilingar and Khilyuk (2007) argued that the emission of greenhouse gases by the burning of fossil fuels is not responsible for global warming and, hence, is not unsustainable. In their analysis, the amount of greenhouse gases generated through human activities is scientifically insignificant compared to the vast amount generated through natural activities. The factor they do not consider, however, is that greenhouse gases tainted through human activities (e.g., synthetic chemicals) are not readily recyclable in the ecosystem. This means that when "refined" oil comes in contact with natural oxygen, it produces chemicals classified as non-convertible non-biomass (NCNB); see Equations 3.13-3.15. At present, for every barrel of crude oil, approximately 15% additives are added (California Energy Commission 2004). These additives, with current practices, are all synthetic and/or engineered materials that are highly toxic to the environment. With this "volume gain," the distribution shown in Table 3.1 is achieved. Each of these products is subject to oxidation, either through combustion or through low-temperature oxidation, which is a continuous process. Toward the bottom of the table, the oxidation rate decreases but the heavy metal content increases, leaving each product equally vulnerable to oxidation.
Table 3.1 Petroleum products yielded from one barrel of crude oil in California (from California Energy Commission, 2004).

Product | Percent of Total
Finished Motor Gasoline | 51.4%
Distillate Fuel Oil | 15.3%
Jet Fuel | 12.3%
Still Gas | 5.4%
Marketable Coke | 5.0%
Residual Fuel Oil | 3.3%
Liquefied Refinery Gas | 2.8%
Asphalt and Road Oil | 1.7%
Other Refined Products | 1.5%
Lubricants | 0.9%
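As a quick arithmetic check on the "volume gain" described above, the short sketch below is illustrative only: the 42-gallon barrel is the standard figure, while treating the 15% additives as a uniform volume gain across all products is a simplifying assumption made here.

# Product volumes from one 42-gallon barrel of crude, assuming the ~15%
# additive "volume gain" quoted in the text applies uniformly.
BARREL_GAL = 42.0
yields = {  # percentages from Table 3.1
    "Finished Motor Gasoline": 51.4, "Distillate Fuel Oil": 15.3,
    "Jet Fuel": 12.3, "Still Gas": 5.4, "Marketable Coke": 5.0,
    "Residual Fuel Oil": 3.3, "Liquefied Refinery Gas": 2.8,
    "Asphalt and Road Oil": 1.7, "Other Refined Products": 1.5,
    "Lubricants": 0.9,
}
total_out = BARREL_GAL * 1.15  # roughly 48.3 gallons of products per barrel
for product, pct in yields.items():
    print(f"{product}: {total_out * pct / 100:.1f} gal")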
The immediate consequence of this conversion through refining is that one barrel of naturally occurring crude oil (convertible non-biomass, CNB) is converted into 1.15 barrels of potential non-convertible non-biomass (NCNB) that will continue to produce more volumes of toxic components as it oxidizes, either through combustion or through slow oxidation. Refining is by and large the process that produces NCNB, similar to the process described in Equations 3.12-3.15. The pathways of oil refining illustrate that the refining process utilizes toxic catalysts and chemicals, and that the emission from burning the oil also becomes extremely toxic. Figure 3.6 shows the pathway of oil refining. During the cracking of the hydrocarbon molecules, different types of acid catalysts are used, along with high heat and pressure. Breaking hydrocarbon molecules with heat is known as thermal cracking. During alkylation, sulfuric acid, hydrogen fluoride, aluminum chloride, and platinum are used as catalysts. Platinum, nickel, tungsten, palladium, and other catalysts are used during hydroprocessing. In distillation, high heat and pressure drive the process. As an example, just from the oxidation of the carbon component, 1 kg of carbon, which was convertible non-biomass, would turn into 3.667 kg of carbon dioxide (if completely burnt) that is now no longer acceptable to the ecosystem, due to the presence of the non-natural additives.
Figure 3.6 Pathway of the oil refining process: crude oil enters a distillation column (boiler, super-heated steam); cracking, thermal or catalytic (heat, pressure, acid catalysts); alkylation (H₂SO₄, HF, AlCl₃, Al₂O₃, Pt, etc. as catalysts); hydroprocessing (platinum, nickel, tungsten, palladium); and distillation and other methods (high heat/pressure).
Of course, when crude oil is converted, each of its numerous components would turn into such non-convertible non-biomass. Many of these components are not accounted for or even known, let alone subjected to a scientific estimation of their consequences. Hence, the sustainable option is either to use natural catalysts and chemicals during refining or to design a vehicle that runs directly on crude oil based on its natural properties. The same principle applies to natural gas processing (Chhetri and Islam 2008).
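The 3.667 figure quoted above follows from simple combustion stoichiometry; a worked check added here, using the integer atomic masses C = 12 and O = 16:

\[
\mathrm{C} + \mathrm{O_2} \rightarrow \mathrm{CO_2}, \qquad
\frac{m_{\mathrm{CO_2}}}{m_{\mathrm{C}}} = \frac{44\ \mathrm{g/mol}}{12\ \mathrm{g/mol}} \approx 3.667
\]

so 1 kg of carbon, completely burnt, yields about 3.667 kg of CO₂, and every additional kilogram of refined, additive-tainted carbon oxidized adds correspondingly more pathway-tainted CO₂.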
3.12 Concluding Remarks

It has long been established that Einstein's work on relativity displaced and overcame known limitations in the applicability of Newton's laws of motion at certain physical scales. Less considered, however, has been another concurrent fact. The implications of certain pieces of Einstein's corrections of Newton, especially the role of time functions, opened up a much larger question: perhaps the Newtonian mechanism and accounting for motion, by way of the laws of motion and universal gravitation, would have to be adjusted with regard to natural phenomena in general at other scales.
Careful analysis suggests that conservation of mass, energy, and momentum is necessary and sufficient in accounting for natural phenomena at every scale, whereas the laws of motion are actually special cases whose unifying, underlying assumption, that there can be matter at rest anywhere in the universe, is aphenomenal. The sense or assumption that it is necessary to fit all physical cases to the Newtonian schema seems to have led scientists and engineers on a merry chase, particularly in dealing with phenomena observed at the molecular, atomic, electronic, and (today) nano scales. The history of what actually happened with Faraday's experimental program, Maxwell's equations, and Einstein's generalization of Maxwell by means of Lorentz's transformations illustrates the straitjacket in which much applied science was placed as a result of insisting that all observed phenomena fit Newton's schema, rendering these phenomena aphenomenally, such that the logic of the chain of causation became twisted into syllogisms expressing a correct conclusion, the phenomenon, as the result of two or more falsified premises. The nature-science standpoint provides a way out of this impenetrable darkness created by the endless addition of seemingly infinite layers of opacity. We have no case anywhere in nature where the principle of conservation of mass, energy, or momentum has been violated. The truly scientific way forward for modern engineering and scientific research, then, would seem to lie on the path of finding the actual pathway of a phenomenon from its root or source to some output point by investigating the mass balance, the energy balance, the mass-energy balance, and the momentum balance of the phenomenon. This seems a fitting 21st-century response to the famous and, for its time, justified lament from giants of physical science such as Pierre Duhem, a leading opponent of Maxwell's method. Drawing an important distinction between modeling and theory, Duhem pointed out, even after Einstein's first relativity papers had appeared, that in many cases scientists would employ models (using Newtonian laws) of physical reality that were "neither an explanation nor a rational classification of physical laws, but a model constructed not for the satisfaction of reason, but for the pleasure of the imagination" (Duhem 1914, 81; Ariew and Barker 1986). The engineering approach has profound implications both for modeling phenomena and for the mathematics that are used to analyze and synthesize the models.
4 A True Sustainability Criterion and Its Implications

4.1 Introduction
"Sustainability" is a concept that has become a buzzword in today's technology development. Commonly, the use of this term infers that the process is acceptable for a period of time. True sustainability cannot be a matter of definition. In this chapter, a scientific criterion for determining sustainability is presented. In this chapter, a detailed analysis of different features of sustainability is presented in order to understand the importance of using the concept of sustainability in every technology development model. As seen in previous chapters, the only true model of sustainability is nature. A truly sustainable process conforms to the natural phenomena, both in source and process. Scientifically, this means that true long-term considerations of humans should include the entire ecosystem. Some have called this inclusion "humanization of the environment'" and put this phenomenon as a pre-condition to true sustainability (Zatzman and Islam 2007). The inclusion of the entire ecosystem is only meaningful when the natural pathway for every 159
component of the technology is followed. Only such a design can assure both short-term (tangible) and long-term (intangible) benefits. Tangibles relate to the short term and are very limited in space, whereas intangibles relate either to the long term or to other elements of the current time frame. Therefore, a focus on tangibles will continue to obscure long-term consequences, which will not be uncovered until intangible properties are properly analyzed and included. Recently, Chhetri and Islam (2008) established that taking a long-term approach reverses the outcome that emerges from a short-term approach. This distinction is made in relation to the energy efficiency of various energy sources. By focusing on heating value alone, one arrives at a ranking that diverges into what is observed as the global warming phenomenon. If a long-term approach were taken instead, none of the previously perpetrated technologies would be considered "efficient," and they would long since have been replaced with truly efficient (global-efficiency-wise) technologies, avoiding the current energy crisis. This chapter emphasizes intangibles, due to their inherent importance, and shows how tangibles should link with intangibles. This opens up an understanding of the relationship between intangible and tangible scales, from microscopic to macroscopic properties. It has long been accepted that nature is self-sufficient and complete, rendering it the true teacher of how to develop sustainable technologies. From the standpoint of human intention, this self-sufficiency and completeness is actually a standard for declaring nature perfect. "Perfect" here, however, does not mean that nature is in one fixed, unchanging state. On the contrary, nature has the capacity to evolve and sustain itself, which makes it such an excellent teacher. This perfection makes it possible and necessary for humanity to learn from nature, not to fix nature but to improve its own condition and prospects within nature in all periods and for any timescale. The significance of emulating nature is subtle but crucial: technological or other development undertaken within the natural environment only for a limited, short term must necessarily, sooner or later, end up violating something fundamental or characteristic within nature. Understanding the effect of intangibles and the relations of intangibles to tangibles is important for reaching appropriate decisions affecting the welfare of society and of nature as well. A number of aspects of natural phenomena have been discussed here to uncover the relationship between intangibles and tangibles. The
target of this study is to provide a strong basis for the sustainability model. The combined mass and energy balance equation has provided the necessary and sufficient support for the role of intangibles in developing sustainable technologies.
4.2 Importance of the Sustainability Criterion
Few would disagree that we have made progress as a human race in the post-Renaissance world. Empowered with New Science, led by Galileo and later championed by Newton, modern engineering is credited with having revolutionized our lifestyle. Yet, centuries after those revolutionary moves of the New Science pioneers, modern-day champions (e.g., Nobel Laureate chemist Robert Curl) find this mode of technology development more akin to "technological disaster" than technological marvel. Today's engineering, driven by economic models that have been criticized as inherently unsustainable by Nobel Laureates with notable economic theories (Joseph Stiglitz, Paul Krugman, Muhammad Yunus), has positioned itself as environmentally unsustainable and socially unacceptable. Figure 4.1 shows the nature of the problem that human habitats are facing today. All value indices that would imply improvement of human social status have declined, whereas per capita energy consumption has increased. This figure clearly marks a problem of directionality in technology development. If increasing per capita energy consumption is synonymous with economic growth (in line with the modern-day definition of gross domestic product, GDP), then further economic growth can only mean an even worse decline in environmental status.
Figure 4.1 Rising costs and declining values during the post-Renaissance world: population, per capita energy consumption, environmental pollution, and stress/waste rise, while the state of the environment, natural resources, quality of life, and social integration decline.
This is not merely a technological problem; it is also a fundamental problem with the outlook of our modern society and how we have evolved in the modern age leading up to the Information Age. Table 4.1 summarizes the energy consumed by various man-made activities that have been synonymous with social progress (in the modern age) but have been the main reason why we are facing the current global crisis. The current model is based on conforming to regulations and reacting to events. It is reactionary because it is only reactive, not fundamentally proactive. Conforming to regulations and rules that may not be based on any sustainable foundation can only increase long-term instability. Martin Luther King, Jr. famously pointed out, "We should never forget that everything Adolf Hitler did in Germany was 'legal.'" Environmental regulations and technology standards are such that fundamental misconceptions are embedded in them; they follow no natural laws. A regulation that violates natural law has no chance of establishing a sustainable environment. What was "good" and "bad" law for Martin Luther King, Jr. is actually sustainable (hence, true) law and false (hence, implosive) law, respectively. With today's regulations, crude oil is considered to be toxic and undesirable in a water stream, whereas the most toxic additives are not. For instance, a popular slogan in the environmental industry has been, "Dilution is the solution to pollution." This is based on all three misconceptions that were discussed in Chapter 2, yet all environmental regulations rest on this principle. The tangible aspect, such as the concentration, is considered, but not the intangible aspect, such as the nature of the chemical or its source. Hence, "safe" practices initiated on this basis are bound to be quite unsafe in the long run. Addressing environmental impacts is not a matter of minimizing waste or increasing remedial activities, but of humanizing the environment. This requires the elimination of toxic waste altogether, and even non-toxic waste should be recycled 100%. It involves not adding any anti-nature chemical to begin with, and then making sure each produced material is recycled, often with value addition. A zero-waste process has 100% global efficiency attached to it. If a process emulates nature, such high efficiency is inevitable. This process is the equivalent of greening petroleum technologies. In this mode, no one will attempt to clean water with toxic glycols, remove CO₂ with toxic amines, or use toxic plastic paints in order to be more "green." No one will inject synthetic and expensive chemicals for enhanced oil recovery (EOR).
Table 4.1 Energy consumed through various human activities.

Activity | Btu | Calories
A match | 1 | 252
An apple | 400 | 100,800
Making a cup of coffee | 500 | 126,000
Stick of dynamite | 2,000 | 504,000
Loaf of bread | 5,100 | 1,285,200
Pound of wood | 6,000 | 1,512,000
Running a TV for 100 hours | 28,000 | 7,056,000
Gallon of gasoline | 125,000 | 31,500,000
20 days cooking on a gas stove | 1,000,000 | 252,000,000
Food for one person for a year | 3,500,000 | 882,000,000
Apollo 17's trip to the moon | 5,600,000,000 | 1,411,200,000,000
Hiroshima atomic bomb | 80,000,000,000 | 20,160,000,000,000
1,000 transatlantic jet flights | 250,000,000,000 | 63,000,000,000,000
United States in 1999 | 97,000,000,000,000,000 | 24,444,000,000,000,000,000
Instead, one would settle for waste materials or naturally occurring materials that are abundantly available and pose no threat to the ecosystem. The role of a scientific sustainability criterion is similar to that of the bifurcation point shown in Figure 4.2. This figure shows the importance of the first criterion. The solid circles represent a natural (true) first premise, whereas the hollow circles represent an aphenomenal (false) first premise. The thicker solid lines represent scientific steps that would increase overall benefit to the whole system. At every phenomenal node, spurious suggestions will emerge from an aphenomenal root (e.g., bad faith, bottom-line-driven, myopic models), as represented by the dashed thick lines. However, if the first premise is sustainable, no node will appear ahead of the spurious suggestions; every logical step will lead to sustainable options. The thinner solid lines represent choices that emerge from aphenomenal or false sustainability criteria. At every aphenomenal node, there will be aphenomenal solutions, but each will lead to further compounding of the problem, radically increasing environmental costs in the long run.
Figure 4.2 The role of the first premise (criterion) in determining the pathways to sustainable/beneficial and implosive/harmful developments over successive logical steps.
This process can be characterized as the antithesis of science (as a process). At every aphenomenal node, anytime a phenomenal solution is proposed, it is deemed spurious because it opposes the first criterion of the implosive model. Consequently, these solutions are rejected. This is the inherent conflict between sustainable and unsustainable starting points. In the implosive mode, phenomenal plans have no future prospect, as shown by the absence of a node. Albert Einstein famously said, "The thinking that got you into the problem is not going to get you out." Figure 4.2 shows the two different thought processes that dictate diverging pathways. Once launched on the unsustainable pathway, there cannot be any solution other than to return to the first bifurcation point and re-launch in the direction of the sustainable pathway. This logic applies equally to technology development and to economics.
4.3 The Criterion: The Switch that Determines the Direction at a Bifurcation Point

It has long been understood that the decision-making process involves asking "yes" or "no" questions. Usually, such a question is thought
to be posed at the end of a thought process or logical train. It is less understood that the "yes" or "no" question cannot lead to a correct answer if the original logical train did not start with a correct first premise and if the full time domain (defining the logical train) is not fully considered. Consider the following question: Is whole wheat bread better than white bread? One cannot answer this question without knowledge of the bread's history. For instance, for organic flour (without chemical fertilizer, genetic alteration, pesticide, metallic grinder, or artificial heat sources), whole wheat bread is better. However, if the flour is not organic, then more questions need to be asked in order first to determine the degree of insult caused to the natural process, and then to determine what the composition of the whole wheat would be if all ingredients (including trace elements from the grinder, chemical fertilizers, pesticide, heat source, and others) were considered. In this analysis, one must include all elements in space at a given time. For instance, if trace elements from pesticide or a metallic grinder are neglected, the answer will be falsified. In this particular case, whole wheat non-organic bread is worse than white non-organic bread, but it will not be shown as such if one doesn't include all elements in time and space (mass and energy). Summing up these two points, one must consider the full extent of time from the start of a process (including the logical train), and one must include all elements in space (for both mass and energy sources), in line with the theory advanced by Khan et al. (2008). Each of these considerations will involve a question regarding the diversion of the process from a natural process. In the end, anything that is natural is sustainable and, therefore, good. Let's rephrase the question: Q: Is whole wheat bread better than white bread? The conventional answer sought would be either yes or no, true or false. However, without a proper criterion for determining true or false, this question cannot be answered. In order to search for the knowledge-based answer to this question, the following question must be asked: Qk: Are both the white bread and the whole wheat bread organic? If the answer is yes, then the answer to Q is yes. If the answer is no, then the following knowledge-based question has to be asked: Qk1: Are both non-organic? If the answer is yes, then the answer to Q becomes no, meaning whole wheat bread is not better than white
bread. If the answer to Qk1 is no, then another knowledge-based question has to be asked: Qk2: Is the white bread organic? If the answer is yes, then the answer to Q becomes no, meaning whole wheat non-organic bread is not better than white organic bread. If the answer to Qk2 is no, then the answer to Q is yes, meaning whole wheat organic bread is better than white non-organic bread. In the above analysis, the definition of "organic" has been left to the imagination of the reader. However, it must be stated that 100% scientifically organic cannot be achieved. Scientifically, organic means something that has no anti-conscious intervention of human beings. Obviously, nature being continuous in space and time, there is no possibility of having a 100% organic product. However, this should not stop one from searching for the true answer to a question. At the very least, this line of analysis will raise new questions that should be answered with more research, if deemed necessary. For this particular question, Q, we have only presented the mass balance aspect. For instance, organic bread also means that it is baked in a clay oven with natural fuel. Now, what happens if this energy balance is not respected? This poses another series of questions; let's call them energy-related questions, QE. Such a question must be asked at the beginning, before asking the question Qk. QE: Are both the whole wheat and non-whole wheat breads organically baked? If the answer is yes, then the previous analysis stands. If the answer is no, then the following knowledge-seeking question must be asked: QEK: Are both breads baked non-organically (e.g., with electricity, microwave, processed fuel, recombined charcoal, or a steel stove)? If the answer is yes, then the previous analysis stands. If the answer is no, then it is a matter for more research. To date, we do not have enough research to show how whole wheat flour would react with non-organic energy sources as compared to white flour. It is clear from the above analysis that we come across many knowledge-seeking questions, and each question marks a bifurcation point. At each bifurcation point, the question to ask is, "Is the process natural?" The time frame to investigate is many times the characteristic time of the process. For environmental sustainability,
the characteristic time of the process is the duration of the human species' existence. This can easily be taken to infinity, as originally proposed by Khan and Islam (2007). The process, then, involves taking the limit of a process as time goes to infinity. If the process is still sustainable in that limit, it can be considered a natural process and is good for the environment. Otherwise, the process is unnatural and, therefore, unsustainable. This analysis shows that the most important role of the time dimension is in setting the direction. In Figure 4.2, we see that a real starting point leads to knowledge, whereas an unreal starting point leads to prejudice. If the time dimension is not considered on a continuous ("continual" is not enough) basis, even the logical steps cannot be traced back in order for one to verify the first premise. It is not enough to back up a few steps; one must back up to the first premise that led to the bifurcation between sustainable and implosive pathways. Zatzman et al. (2008) have recently highlighted the need for such considerations of the time domain in order to utilize the time dimension as a switch. It turns out that, with such considerations, scientists cannot determine the cause of global warming using a science that assumes all molecules are identical, which makes it impossible to distinguish between organic CO₂ and industrial CO₂. Similarly, scientists cannot determine the cause of diabetes unless there is a paradigm shift that distinguishes between sucrose in honey and sucrose in Aspartame® (Chhetri and Islam 2007).
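The question tree in the bread example above is mechanical enough to write down as code. The sketch below is a minimal illustration of that logic, added here; the function name and the boolean-input framing are assumptions, not the authors'. It reproduces the Q/Qk/Qk1/Qk2 branches for the mass balance aspect and the QE/QEK gate for the energy balance aspect.

def whole_wheat_better(ww_organic: bool, white_organic: bool,
                       ww_baked_organically: bool, white_baked_organically: bool):
    # Returns True/False for Q ("is whole wheat better than white?"),
    # or None where the text says the matter requires more research.
    # QE/QEK: the energy-balance gate is asked first. If both breads are baked
    # the same way (both organically, or both non-organically), the mass-balance
    # analysis stands; a mixed case is a matter of more research.
    if ww_baked_organically != white_baked_organically:
        return None
    # Qk: are both organic?
    if ww_organic and white_organic:
        return True           # whole wheat organic bread is better
    # Qk1: are both non-organic?
    if not ww_organic and not white_organic:
        return False          # whole wheat non-organic is worse
    # Qk2: is the white bread organic? (exactly one of the two is organic here)
    if white_organic:
        return False          # white organic beats whole wheat non-organic
    return True               # whole wheat organic beats white non-organic

print(whole_wheat_better(True, True, True, True))    # True
print(whole_wheat_better(False, False, True, True))  # False
print(whole_wheat_better(True, False, True, False))  # None: more research needed

The point of the exercise is that the answer is a function of source and pathway at every bifurcation point, not a property of the bread's visible composition alone.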
4.3.1 Some Applications of the Criterion
The same logic indicates that, unless the science includes intangibles, the cause(s) of global warming cannot be determined either. What remain uncharted are the role of pathways and the passage of time, something that cannot be followed meaningfully under lab-controlled conditions, in transforming the internal basis of changes in certain natural phenomena of interest. One example has been given by Khan and Islam (2007b) in the context of the use of catalysts. Tangible science says catalysts play no role in the chemical reaction equation because they do not appear in the result or outcome. No mass balance accounts for the mass of catalyst lost during a reaction, and no chemical equation accounts for what happens to the "lost" catalyst molecules when they combine with the products under extremely unnatural conditions. By using the science of tangibles, one can argue that the following patents represent a technological
breakthrough (El-Shoubary et al. 2003). This patented technology separates Hg from a contaminated gas stream using CuCl₂ as the main catalyst. At high temperature, CuCl₂ reacts with Hg to form a Cu-Hg amalgam. The process is effective when combined with fire-resistant Teflon membranes.

1. Patent #6,841,513 - "Adsorption powder containing cupric chloride." Jan 11, 2005.
2. Patent #6,589,318 - "Adsorption powder for removing mercury from high temperature, high moisture stream." July 8, 2003.
3. Patent #6,582,497 - "Adsorption powder for removing mercury from high temperature high moisture gas stream." June 24, 2003.
4. Patent #6,558,642 - "Method of adsorbing metals and organic compounds from vaporous streams." May 6, 2003.
5. Patent #6,533,842 - "Adsorption powder for removing mercury from high temperature, high moisture gas stream." March 18, 2003.
6. Patent #6,524,371 - "Process for adsorption of mercury from gaseous streams." Feb 25, 2003.

This high level of recognition for the technology is to be expected. After all, what happens to Teflon at high temperature and what happens to the Cu-Hg amalgam is a matter of the long term, or at least of time beyond the "time of interest." (Khan (2006) describes this as "time = right now.") However, if a longer time frame is used for the analysis and a bigger area is considered for the mass balance, it becomes clear that the same process has actually added more waste to the environment in the form of dioxins released from the Teflon and the Cu-Hg. The dioxins from both would be in a more harmful state than their original states in the Teflon, CuCl₂, and gas stream, respectively. In the efficiency calculation, nearly 90% efficiency is reported within the reactor. This figure makes the process look very attractive. However, if the efficiency calculation is conducted over the entire system in which the heater resides, the efficiency drops drastically. In addition, merely by including more elements, the conversion of Hg in a natural gas
stream and of Cu in CuCl₂ solution into Cu-Hg sludge, as well as the addition of chlorine to the effluent gas, poses the difficult question of what has been accomplished overall. Another example can be given from the chemical reactions involving honey and Aspartame®. In the science of tangibles, the following reactions take place:

Honey + O₂ → Energy + CO₂ + Water

Aspartame® + O₂ → Energy + CO₂ + Water

In fact, a calorie-conscious person would consider Aspartame® a better alternative to honey, as the energy produced by Aspartame® is much less than that of honey for the same weight burnt. An entirely different picture emerges if all components of honey and Aspartame® are included. In that case, the actual compositions of the water produced are very different in the two cases. However, this difference cannot be observed if the pathway is cut off from the analysis and if the analysis is performed within an arbitrarily set confine. Similar to confining the time domain to the "time of interest," or time = right now, this confinement in space perverts the process of scientific investigation. Every product emerging from the oxidation of an artificial substance will come with long-term consequences for the environment. These consequences cannot be included within the science of tangibles. Zatzman and Islam (2007) detailed the transitions in commercial product development and argued that this transition amounts to an increased focus on tangibles in order to increase the profit margin in the short term. The quality degradation is obvious, but the reason behind such technology development is quite murky. At present, the science of tangibles is totally incapable of lifting the fog from this mode of technology development. A third example involves natural and artificial vitamin C. Let's use the example of the lemon and vitamin C. It has been known since antiquity, in cultures ranging from the Far East to Africa, that the lemon has both culinary and medicinal functions. However, in European literature there is a confusion that certain fruits (e.g., the orange in the following example) are only for pleasure (culinary) while others are for medicinal applications (e.g., the lemon). Apart from this type of misconception, the point to note is that lemon was known to cure scurvy, a condition that arises from lack of vitamin
C. So, lemons can cure scurvy: premise number one. Reference to this premise is made in old literature. For instance, the following site states (Anonymous, 2008):

"An Italian Jesuit and full professor in Rome, Ferrari was an incredible linguist and broad scholar... A lover of flowers, he authored four volumes on the culture of flowers (1632) illustrated by some of the same engravers as was Hesperides. He was the "first writer to collect all the evidence available on the location of the garden of the Hesperiden and on the stealing of the Apples." Ferrari described numerous medicinal preparations based upon citrus blossoms or fruits. He notes that the orange is usually eaten for pleasure alone; the lemon, citron and pomegranate as medicine. He mentions only in passing using citrus to cure scurvy since his frame of reference is the Mediterranean world in which this disease was not a problem."

Wikipedia discusses this same premise and states, "In 1747, James Lind's experiments on seamen suffering from scurvy involved adding Vitamin C to their diets through lemon juice" (Wikipedia, 2008). Now, with that first premise, if one researches what the composition of a lemon is, one readily encounters the following types of comments. As an example, note this statement on one website: "I am a chemist and I know that lemon juice is 94% water, 5% citric acid, and 1% unidentifiable chemicals." Of course, other chemists would use more scientific terms to describe the 1% "unidentified chemicals." For instance, the website of the Centre national de la recherche scientifique of France, http://cat.inist.fr/, states:

"This interstock grafting technique does not increase the flavonoid content of the lemon juice. Regarding the individual flavonoids, the 6,8-di-C-glucosyl diosmetin was the most affected flavonoid by the type of rootstock used. The interstock used is able to alter the individual quantitative flavonoid order of eriocitrin, diosmin, and hesperidin. In addition, the HPLC-ESI/MSn analyses provided the identification of two new flavonoids in the lemon juice: Quercetin 3-O-rutinoside-7-O-glucoside and chrysoeriol 6,8-di-C-glucoside (stellarin-2). The occurrence of apigenin 6,8-di-C-glucoside (vicenin-2), eriodictyol 7-O-rutinoside, 6,8-di-C-glucosyl diosmetin, hesperetin 7-O-rutinoside, homoeriodictyol 7-O-rutinoside and diosmetin 7-O-rutinoside was also confirmed in lemon juice by this technique."
The entire exercise involves determining the composition using a steady-state model, meaning that the composition does not change with time. In addition, this line of science assumes that only composition (and not the dynamics of matter) matters, much like the model Aristotle used some 2,500 years ago. One immediate "useful" conclusion of this is that 5% of lemon juice is ascorbic acid. With this isolated, aphenomenal first premise, one can easily proceed to commercialization by producing ascorbic acid with techniques that are immediately proven to be more efficient and, therefore, more economical. This is because, when ascorbic acid is manufactured, the pills have a much higher concentration of ascorbic acid than lemon juice ordinarily would have. In addition, the solid materials used to manufacture vitamin C pills are cheaper than lemons and are definitely more efficient in terms of preservation and commercialization. After all, no lemon will last for a year, whereas vitamin C pills will. Overall, if vitamin C is ascorbic acid, then manufactured vitamin C can be marketed at a much lower price than vitamin C from lemons. Now, if a clinical test can show that the manufactured vitamin C indeed cures scurvy, no one can argue that real lemons are needed, and they would obviously be a waste of money. This is the same argument put forward by Nobel Laureate chemist Linus Pauling, who considered synthetic vitamin C identical to natural vitamin C and warned that higher-priced "natural" products are a "waste of money." Some thirty years later, we now know that synthetic vitamin C causes cancer while natural vitamin C prevents it (Chhetri and Islam 2008). How could this outcome be predicted with science? By removing the fundamental misconception that lemon juice is merely 5% ascorbic acid, independent of its source or pathway. The above false conclusions, derived through conventional New Science, could be avoided by using a criterion that distinguishes between the real and the artificial. This can be done using the science of intangibles, which includes all phenomena that occur naturally, irrespective of what might be detectable. For the use of catalysts, for instance, it can be said that if the reaction cannot take place without the catalyst, the catalyst clearly plays a role. Just because, at a given time (e.g., time = right now), the amount of catalyst loss cannot be measured, it does not follow that the loss (or a role of the catalyst) does not exist. The loss of catalyst is real, even though one cannot measure it with current measurement techniques. The science of
intangibles does not wait for the time when one can "prove" that catalysts are active. Because nature is continuous (without a boundary in time and space), its considerations are not focused on a confined "control" volume. For the science of tangibles, on the other hand, the absence of the catalyst's molecules in the reaction products means that one will not find the catalyst's role there. The science of tangibles says that if you can't find it in the reaction product, it doesn't count. The science of intangibles says that obviously it counts, but, just as obviously, not in the same way as what is measurable in the tangible mass balance. This shows that the existing conventional science of tangibles is incomplete. To the extent that it remains incomplete, on the basis of disregarding or discounting qualitative contributions that cannot yet be quantified in currently meaningful ways, this kind of science is bound to become an accumulating source of errors.
4.4 Current Practices in Petroleum Engineering
In a very short historical time (relative to the history of the environment), the oil and gas industry has become one of the world's largest economic sectors, a powerful globalizing force with far-reaching impacts on the entire planet that humans share with the rest of the natural world. Decades of continuous growth in oil and gas operations have changed, and in some places transformed, the natural environment and the way humans have traditionally organized themselves. The petroleum sector draws huge public attention due to its environmental consequences. All stages of oil and gas operations generate a variety of solid, liquid, and gaseous wastes (Currie and Isaacs 2005; Wenger et al. 2004; Khan and Islam 2003a; Veil 2002; de Groot 1996; Wiese et al. 2001; Rezende et al. 2002; Holdway 2002). The different phases of petroleum operations and their associated problems are discussed in the following sections.
4.4.1 Petroleum Operations Phases

Petroleum operations generate different types of wastes. Broadly, these can be categorized as drilling wastes, human-generated wastes, and other industrial wastes. There are also accidental discharges, for example via air emissions, oil spills, chemical spills, and blowouts.
During the drilling of an exploratory well, several hundred tons of drilling mud and cuttings are commonly discharged into the marine environment. And, though an exploratory activity such as seismic exploration does not release wastes, it nevertheless has a potential negative impact (Cranford et al. 2003; Patin 1999). According to reports (SECL 2002; Patin 1999), seismic shooting kills plankton, including eggs and larvae of many fish and shellfish species, as well as juveniles that are very close to the airguns. The most important sub-lethal effect on adult organisms exposed to chronic waste discharges, from both ecological and fisheries perspectives, is the impairment of growth and reproduction (GESAMP 1993; Patin 1999). Growth and reproduction are generally considered to be the most important sub-lethal effects of chronic contaminant exposure (Cranford et al. 2003). Seabirds aggregate around oil drilling platforms and rigs in above-average numbers due to night lighting, flaring, food, and other visual cues. Bird mortality has been documented due to impact on the structure, oiling, and incineration by the flare (Wiese et al. 2001; see Fig. 4.3). Khan and Islam (2005) reported that a large quantity of water is discharged during petroleum production, primarily produced water, which includes fluid injected during drilling operations as well as connate water of high salinity. They also reported that produced water contains various contaminants, including trace elements and metals from formations through which the water passed during drilling, as well as additives and lubricants necessary for proper operation.
Figure 4.3 Flaring from an oil refinery.
Water is typically treated prior to discharge, although historically this was not the case (Ahnell and O'Leary 1997). Based on the geological formation of a well, different types of drilling fluids are used. The composition and toxicity of these drilling fluids are highly variable, depending on their formulation. Water is used as the base fluid for roughly 85% of drilling operations internationally, and the remaining 15% predominantly use oil (Reis 1996). Spills make up a proportionately small component of aquatic discharges (Liu 1993). CO2 emissions are one of the most pressing issues in the hydrocarbon sector. There are direct emissions from production sites, through flaring and the burning of fossil fuels. For example, during exploration and production, emissions take place due to controlled venting and/or flaring and the use of fuel. Based on 1994 British Petroleum figures, it is reported that emissions by mass were 25% volatile organic compounds (VOCs), 22% CH4, 33% NOx, 2% SOx, 17% CO, and 1% particulate matter. Data on CO2 are not provided (Ahnell and O'Leary 1997). To date, flaring has been considered a standard production and refining technique, even though it wastes a huge amount of a valuable resource through burning. The air emissions during petroleum processing are primarily due to uncontrolled volatilization and combustion of petroleum products in the modification of end products to meet consumer demand (Ahnell and O'Leary 1997). Oils, greases, sulphides, ammonia, phenols, suspended solids, and chemical oxygen demand (COD) are the common discharges into water during refining (Ahnell and O'Leary 1997). Natural gas processing generally involves the removal of natural gas liquids (NGLs), water vapor, inert gases, CO2, and hydrogen sulphide (H2S). The by-products from processing include CO2 and H2S (Natural Resources Canada 2002a). The oil sector contributes a major portion of CO2 emissions. Figure 4.4 presents the world's historical and projected CO2 emissions from different sectors. About 29 billion tons of CO2 are released into the air every year by human activities, and 23 billion tons come from industry and the burning of fossil fuels (IPCC 2001; Jean-Baptiste and Ducroux 2003), which is why this sector is blamed for global warming. The question is how might these problems best be solved? Is there any possible solution?
Figure 4.4 World CO2 emissions by oil, coal, and natural gas, 1970-2025 (EIA 2004).
Throughout the life cycle of petroleum operations, there are accidental discharges, e.g., via air emissions, oil spills, chemical spills, and blowouts. Figure 4.5 shows the annual total amount of oil released into the marine environment. Crude oil is one of the major toxic elements released into the marine environment by the oil industry. On average, 15 million gallons of crude oil are released yearly from offshore oil and gas operations into the marine environment. In total, 700 million gallons of oil are discharged from other sources into the sea (United States Coast Guard 1990; Khan and Islam 2004). Other sources of oil release are routine maintenance of shipping, domestic/urban runoff, "up in smoke" losses, and natural seepage. The cases mentioned above are only a few examples of the current technology development mode in the petroleum sector. It is hard to find a single technology that does not have such problems. In addition to the use of technologies that are unsustainable, the corporate management process is based on a structure that resists sustainability. Generally, corporate policy is oriented towards gaining monetary benefits without producing anything (Zatzman and Islam 2006b). This model imploded spectacularly in the aftermath of the fall of the world energy giant, Enron, in December 2001 (Deakin and Konzelmann 2004; Zatzman and Islam 2006).
Figure 4.5 Annual total amount of oil release in the marine environment (Oil in the Sea 2003).
Post-Enron events, including the crisis that afflicted WorldCom, indicate that practically all corporate structures are based on the Enron model (Zatzman and Islam 2005). It is clear from the above discussion that there are enormous environmental impacts from current petroleum operations; yet, with high market demand and technological advancement in exploration and development, petroleum operations have spread all around the world and even into remote and deeper oceans (Wenger et al. 2004; Pinder 2001). Due to the limited supply of onshore oil and gas reserves, and the fact that these reserves have already been exploited for a long time, there is increasing pressure to explore and exploit offshore reserves. As a result of declining onshore reserves, offshore oil and gas operations have increased dramatically within the last two decades (Pinder 2001). This phenomenon is already evident in many parts of the world. For example, the gas reserves on the Scotian Shelf, Canada, that were deemed unfeasible in the 1970s are found to be economically attractive at present (Khan and Islam 2006).
4.4.2 Problems in Technological Development

The technologies promoted in the post-Industrial Revolution era are based on the aphenomenal model (Islam 2005). This model is a gross linearization of nature ("nature" in this context includes humanity in its social nature). This model assumes that whatever appears at
Δt = 0 (or time = "right now") represents the actual phenomenon. This is clearly an absurdity. How can there be such a thing as a natural phenomenon without a characteristic duration and/or frequency? When it comes to what defines a phenomenon as truly natural, time, in one form or another, is of the essence. The essence of the modern technology development scheme is the use of linearization, or reduction of dimensions, in all applications. Linearization has provided a set of techniques for solving equations that are generated from mathematical representations of observed physical laws - physical laws that were adduced correctly, and whose mathematical representations as symbolic algebra have proven frequently illustrative, meaningful, and often highly suggestive. However, linearization has made the solutions inherently incorrect. This is because any solution for t = "right now" represents only an image of the real solution, inherently opposed to the original solution. Because this model does not have a real basis, any approach that focuses on the short term may take the wrong path. Contrary to common perception, this path does not intersect the true path at any point in time other than t = "right now." The divergence begins right from the outset. Any natural phenomenon or product always travels an irreversible pathway that is never emulated by the currently used aphenomenal model of technology development. Because, by definition, nature is non-linear and "chaotic" (Gleick 1987), any linearized model merely represents the image of nature at a time, t = "right now," from which their pathways diverge. It is safe to state that all modern engineering solutions (all are linearized) are anti-nature. Accordingly, a black box was created for every technology promoted (Figure 4.6). This formulation of a black box helped keep "outsiders" ignorant of the linearization process that produced spurious solutions for every problem solved. The model itself has nothing to do with knowledge. In a typical repetitive mode, the output (B) is modified by adjusting input (A). The input itself is modified by redirecting
[Figure: (A) Input (Supply) → [Black box] → (B) Output (Product), with a feedback path (C) from output back to input.]
Figure 4.6 Classical "engineering" notion (redrawn from Islam 2005a).
(B). This is the essence of the so-called "feedback" mode that has become very popular in our day. Even in this mode, nonlinearity may arise as efforts are made to include a real object in the black box. This nonlinearity is expected. Even a man-made machine will generate chaotic behavior that becomes evident only if we have the means of detecting changes over the dominant frequency range of the operation. We need to improve our knowledge of the process. Before claiming to emulate nature, we must implement a process that allows us to observe nature (Figure 4.7). Research based on observing nature is the only way to avoid spurious solutions due to linearization or the elimination of a dimension. Sustainable development is characterized by certain criteria. The time criterion is the main factor in achieving sustainability in technological development. However, in the present definition of sustainability, a clear time direction is missing. To better understand sustainability, we can say that there is only one alternative to sustainability, namely, unsustainability. Unsustainability involves a time dimension, but it rarely implies an immediate existential threat. Existence is threatened only in the distant future, perhaps too far away to be properly recognized. Even if a threat is understood, it may not cause much concern now, but its effect will accumulate over the wider time scale. This problem is depicted in Figure 4.8. In Figure 4.8, the impact of the wider time scale is shown, where A and B are two different development activities that are undertaken in a certain time period. According to the conventional environmental impact assessment (EIA), or sustainability assessment process, each project has insignificant impacts on the environment on the short time scale. However, their cumulative impacts will be much
[Figure: (A) Input (Supply) → [Nature/Environment] → (B) Output (Product), with {Research} observing the nature/environment box.]
Figure 4.7 Research based on observing nature intersects the classical "engineering" notion (redrawn from Islam 2005a).
[Figure: environmental impact (vertical axis) plotted against time (horizontal axis), on a scale running from 0 through week, month, year, life, and era, to infinity.]
Figure 4.8 Cumulative effects of activities A and B within different temporal periods.
higher and will persist over a longer time scale. The cumulative impacts of these two activities (A and B) are shown as a dark line.
4.5
Development of a Sustainable Model
The sustainability model developed by Khan and Islam (2007a) provides the basis for the direction of sustainable technology. According to this model, a process is sustainable if and only if it travels a path that is beneficial for an infinite span of time. Otherwise, the process must diverge in a direction that is not beneficial in the long run. Pro-nature technology is the long-term solution. Anti-nature solutions come from schemas that comprehend, analyze, or plan for change on the basis of approaches in which time-changes, Δt, are examined only as they approach 0 (zero) - approaches designed or selected as being good for time t = "right now" (equivalent to the idea of Δt → 0). Of course, in nature, time "stops" nowhere, and there is no such thing as steady state. Hence, regardless of the self-evident tangibility of the technologies themselves, the "reality" in which they are supposed to function usefully is non-existent, or "aphenomenal," and cannot be placed on the graph (Figure 4.9). "Good" technology can be developed if and only if it travels a path that is beneficial for an infinite span of time. In Figure 4.9, this concept is incorporated in the notion of
180
THE GREENING OF PETROLEUM OPERATIONS
Figure 4.9 Direction of sustainability (redrawn from Khan and Islam 2007a).
"time tending to Infinity," which (among other things) implies also that time-changes, instead of approaching 0 (zero), could instead approach Infinity, i.e., Δί —» °°. In this study, the term "perception" has been introduced, and it is important at the beginning of any process. Perception varies person to person. It is very subjective and there is no way to prove if a perception is true or wrong, or its effect is immediate. Perception is completely one's personal opinion developed from one's experience without appropriate knowledge. That is why perception cannot be used as the base of the model. However, if perception is used in the model, the model would look like as follows (Figure 4.10).
4.6 Violation of Characteristic Time

Another problem with current technology is that it violates natural, characteristic time. Characteristic time is similar to the natural life cycle of any living being. However, characteristic time does not include any modification of life-cycle time due to non-natural human intervention. For instance, the life span of an unconfined natural chicken can be up to 10 years, yet table fowls or broilers
Figure 4.10 Direction of sustainability (modified from Khan and Islam 2007a).
reach adult size and are slaughtered at six weeks of age (PAD 2006). The characteristic time for broiler chickens has been violated through human intervention. This study emphasizes characteristic time because of its pro-nature definition. Anything found in nature, grown and obtained naturally, has been optimized as a function of time. However, anything produced either by internal (genetic) intervention or by external intervention (chemical fertilizers along with pesticides) is guaranteed to have gone through imbalances. These imbalances are often justified in order to obtain short-term tangible benefits, traded off against intangible benefits that are more important. In the long run, such systems can never produce long-term good.
4.7
Observation of Nature: Importance of Intangibles
Nature is observed and recorded only in tangible aspects detectable with current technologies. Accordingly, much of what could only be taking place as a result of intangible, but very active, orderliness
within nature is considered "disorder" according to the tangible standard. The greatest confusion is created when this misapprehension is then labeled "chaotic" and its energy balance on this basis portrayed as headed towards "heat death," or "entropy," or the complete dissipation of any further possibility of extracting "useful work." Reality is quite different. In nature, there is not a single entity that is linear, symmetric, or homogeneous. On Earth, there isn't a single process that is steady or even periodic. Natural processes are chaotic, but not in the sense of being arbitrary or inherently tending towards entropy. Rather, they are chaotic in the sense that what is essentially orderly and characteristic only unfolds with the passage of time, within the cycle or frequency that is characteristic of the given process at a particular point. What the process looks like at that point is neither precisely predictable before that point, nor precisely reconstructible or reproducible after that point. The path of such a process is defined as chaotic on the basis of its being aperiodic, non-linear, and non-arbitrary. Nature is chaotic. However, the laws of motion developed by Newton cannot explain the chaotic motion of nature, due to assumptions that contradict the reality of nature. The experimental validity of Newton's laws of motion is limited to describing instantaneous, macroscopic, and tangible phenomena; microscopic and intangible phenomena are ignored. Classical dynamics, as represented by Newton's laws of motion, emphasizes fixed and unique initial conditions, stability, and equilibrium of a body in motion (Ketata et al. 2007a). However, the fundamental assumption of constant mass is by itself enough to bring Newton's laws of motion into conflict with reality. Ketata et al. (2007a) formulated the following relation to describe a body in continuous motion in one dimension:
F = [(6t + 2) + (3t² + 2t + 1)²] c e^U

where F is the force on the body, U = t³ + t² + t + 1, and c is a constant. The above relation demonstrates that the mass of a body in motion depends on time, whether F varies over time or not.
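The relation above is reconstructed from a garbled original; the reconstruction assumes the reading m(t) = c·e^U with velocity v = dU/dt, in which case the bracketed factor follows from differentiating the momentum. A short sympy sketch (ours, not from Ketata et al.) verifies the identity under that assumption:

```python
# Check (under our assumed reading) that F = d(m*v)/dt with m = c*exp(U) and
# v = dU/dt reproduces [(6t + 2) + (3t**2 + 2t + 1)**2] * c * exp(U).
import sympy as sp

t, c = sp.symbols('t c', positive=True)
U = t**3 + t**2 + t + 1
m = c * sp.exp(U)      # time-dependent mass (assumed reading)
v = sp.diff(U, t)      # velocity: 3t**2 + 2t + 1
F = sp.diff(m * v, t)  # force as the rate of change of momentum

claimed = ((6*t + 2) + (3*t**2 + 2*t + 1)**2) * c * sp.exp(U)
print(sp.simplify(F - claimed))  # prints 0: the identity holds
```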
This time dependence of mass directly contradicts the first law of motion. Similarly, the acceleration of a body in motion is not proportional to the force acting on the body, because mass is not constant - a contradiction of the second law of motion. Here it is found that time is the biggest issue, which, in fact, dictates the correctness of Newton's laws of motion. Considering only instantaneous time (Δt → 0), Newton's laws of motion are experimentally valid with some error. However, considering an infinite time span (Δt → ∞), the laws are not applicable. That is why sustainable technologies, which include short-term to long-term benefits, cannot be explained by Newton's laws. To overcome this difficulty, it is necessary to break out of "Δt → 0" in order to include intangibles, which is the essence of pro-nature technology development. In terms of the well-known laws of conservation of mass (m), energy (E), and momentum (p), the overall balance, B, within nature may be defined as a function of all of them:

B = f(m, E, p) (Equation 4.2)
The perfection without stasis that is nature means everything that remains in balance within it is constantly improving with time. That is:

dB/dt ≥ 0 (Equation 4.3)

If the proposed process has all concerned elements, such that each element follows this pathway, none of the remaining elements of the mass balance will present any difficulty. Because the final product is considered as time extends to infinity, the positive ("> 0") direction is assured. Pro-nature technology, which is non-linear, increases its orderliness on a path that converges at infinity, after providing maximum benefits over the intervening time. This is achievable only to the extent that such technologies employ processes as they operate within nature. They use materials whose internal chemistry has been refined entirely within the natural environment and whose subsequent processing has added nothing else from nature in any manner other than in its characteristic form. Any and every other technology is anti-nature. The worst among them are self-consciously linear, "increasing" order artificially by means of successive superpositions that supposedly take side-effects and negative consequences into
account as they are detected. This enables the delivery of maximum power, or efficiency, for an extremely short term. It does so without regard to coherence or overall sustainability, and at the cost of detrimental consequences carrying on long after the "great advances" of the original anti-nature technology have dissipated. Further disinformation lies in declaring the resulting product "affordable," "inexpensive," "necessary," and other self-serving and utterly false attributes, while accounting for only very short-term costs. Any product that is anti-nature would turn out to be prohibitively costly if long-term costs were included. A case in point is tobacco technology. In Nova Scotia alone, 1,300 patients die each year of cancer emerging directly from smoking (Islam 2003). These deaths cost us 60 billion dollars in body parts alone. How expensive should cigarettes be? If intangibles are included in any economic analysis, a picture very different from what is conventionally portrayed will emerge (Zatzman and Islam 2007). Any linearized model can be limited or unlimited, depending on the characteristics of the process (Figure 4.11). The "limited linearized model" has two important characteristics - more tangible features than intangible, and a finite, limited amount of disorder or imbalance. Because only linearized models are man-made, nature has time to react to the disorder created by this limited model, and it may, therefore, be surmised that such models are unlikely to cause irreparable damage. With more intangible features than tangible and an unlimited degree of disorder, or imbalance, the unlimited linearized model is characterized by long-term effects that are little understood but far more damaging. Contemporary policy-making processes help conceal a great deal of actual or potential imbalance from immediate view or detection - a classic problem with introducing new pharmaceuticals, for example. Since a drug has to pass the test of not showing allergic reactions, many drugs make it into the market after being "tweaked" to delay the onset of what are euphemistically called "contra-indications." An elaborate and tremendously expensive process of clinical trials is unfolded to mask such "tweaking," mobilizing the most heavily invested shareholders of these giant companies to resist anything that would delay the opportunity to recoup their investment in the marketplace. The growing incidence of suicide among consumers of Prozac® and other SSRI-type anti-depressant drugs, and of heart-disease "complications" among consumers of
[Figure: order/balance (upward axis) and disorder/imbalance (downward axis) plotted against time; the non-linear (pro-nature) pathway rises, the linearized (limited) pathway falls to a bounded degree, and the linearized (unlimited) pathway falls without bound.]
Figure 4.11 Pathway of nature and anti-nature (modified from Khan and Islam 2006).
"Cox-2" type drugs for relief from chronic pain are evidence of the consequences of the unlimited linearized model and of how much more difficult any prevention of such consequences is (Miralai 2006). In forms of concentrations, unlimited pertains to intangible. Here is another example of how the unlimited linearized model delays the appearance of symptoms. If food is left outside, in 2 to 3 days it will cause food poisoning, which provokes diarrhea. However, if the food is placed in artificial refrigeration, the food will retain some appearance of "freshness" even after several weeks, although its quality will be much worse than the "rotten" food that was left outside. Another more exotic but non-industrial example can be seen in the reaction to snake venom. The initial reaction is immediate. If the victim survives, there is no long-term negative consequence. Used as a natural source or input to a naturally-based process, snake venom possesses numerous long-term benefits and is known for its anti-depressed nature. Repositioning cost-benefit analysis away from short-term considerations, such as the cheapness of synthesized substitutes, to the more fundamental tangible/intangible criterion of long-term costs and benefits, the following summary emerges: tangible losses are very limited, but intangible losses are not.
4.8
Analogy of Physical Phenomena
Mathematicians continue to struggle with the two entities "0" and "∞," whose full meanings and consequences continue to mystify (Ketata et al. 2006a, 2006b). However, these two entities are most important when intangible issues are counted, as the following simple analogy from well-known physical phenomena (Figure 4.12) can demonstrate. As "size," i.e., space occupied (surface area or volume) per unit mass, goes down, the quantity of such forms of matter goes up. This quantity approaches infinity as the space occupied per unit mass heads towards zero. However, according to the Law of Conservation of Mass and Energy, mass can neither be created nor destroyed; it can only transform from one form to another. This contradiction was resolved in the early 20th century, when it was shown that as mass decreased, its quantity could increase as particles of mass were converted into quanta of energy. Infinity means that a quantity is too large to count exactly, but that it enjoys practical existence. Zero, on the other hand, conventionally denotes non-existence, posing another paradox that is nonetheless removable when the intangible aspect is considered. Something that is infinite in number is present everywhere but has no size. As Figure 4.12 shows, mass turns into energy at the extreme end and loses
[Figure: number (vertical axis) plotted against size/mass (horizontal axis); from smallest and most numerous to largest and fewest: photon, quark, electron, proton, atom, molecule, particle, planet.]
Figure 4.12 Relation of size/mass to number.
"size" — a transition of the tangible into the intangible. This also signifies that the number of intangibles is much more than that of tangibles. We can measure the tangible properties, but it is difficult to measure the intangible. Yet, inability to measure the intangible hardly demonstrates non-existence. Happiness, sorrow, etc., are all clearly intangible, and they possess no tangible properties whatsoever, no matter how tangible their causes. As Figure 4.12 suggests, the scale of the intangible is potentially far more consequential than that of the tangible.
4.9 Intangible Cause to Tangible Consequence

Short-term intangible effects are difficult to understand, but consideration of the treatment procedures employed by homeopaths may serve to elucidate them. The most characteristic principle of homeopathy is that the potency of a remedy can be enhanced by dilution, an inconsistency with the known laws of chemistry (Homeopathy 2006). In some cases, the dilution is so high that it is extremely unlikely that even one molecule of the original solution would be present in the remedy. As there is no detectable mechanism for this, the effect of the molecule cannot always be understood, and that is why homeopathy still remains controversial to the modern science of tangibles. However, the trace ingredient of a dilution is not always ignorable. Recently, Rey (2003) studied the thermoluminescence of ultra-high dilutions of lithium chloride and sodium chloride and found the emitted light specific to the original salts initially dissolved. The dilution was beyond Avogadro's number (~6.0 × 10²³ molecules per mole), but its effect was visible. In other words, when the concentration of a substance descends below the detection level, it cannot be ignored, as its effects remain present. This is where greater care needs to be taken in addressing the harmful potential of chemicals in low concentrations. Lowering the concentration cannot escape the difficulty - a significant consideration when it comes to managing toxicity. Relying on low concentration as any guarantee of safety defeats the purpose when the detection threshold used to regulate what is "safe" is higher than the lowest concentrations at which these toxins may be occurring or accumulating in the environment. Although the science that will identify the accumulation of effects from toxic concentrations before they reach the threshold of regulatory detection remains to be established, the point is already clear. Tangible effects may proceed from causes that can remain intangible for some unknown period of time.
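The scale of such dilutions is easy to check with arithmetic. A small sketch (ours; the starting quantity is hypothetical) computes the expected number of solute molecules surviving a centesimal serial dilution of the kind homeopathy uses:

```python
# Expected molecules after a "30C" remedy: thirty successive 1:100 dilutions.
AVOGADRO = 6.022e23        # molecules per mole
moles_start = 0.1          # assumed: 0.1 mol of salt in the mother tincture
dilution_factor = 100      # one centesimal (C) step = 1:100

molecules = moles_start * AVOGADRO
for _ in range(30):
    molecules /= dilution_factor

print(f"expected molecules left: {molecules:.1e}")  # ~6e-38, i.e., none
```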
Mobile phones are considered one of the biggest inventions of modern life for communication. Until recently, warnings about mobile phone use were limited to human brain damage from non-natural electromagnetic frequencies. An official Finnish study found that people who used the phones for more than 10 years were 40 percent more likely to get a brain tumor on the same side as they held the handset (Lean and Shawcross 2007). Recently, however, it has been observed that mobile frequencies also cause serious problems for other living beings in nature, beings that are very important for the balance of the ecological system. An abrupt disappearance of the bees that pollinate crops has been noticed, especially in the USA as well as in some countries of Europe (Lean and Shawcross 2007). The plausible explanation for this disappearance is that radiation from mobile phones interferes with bees' navigation systems, preventing the famously home-loving species from finding their way back to their hives. Most of the world's crops depend on pollination by bees, which is why a massive food shortage has been anticipated should these bees become extinct. Albert Einstein once said that if bees disappeared, "man would have only four years of life left" (Lean and Shawcross 2007). This is how a non-natural hi-tech instrument can pose tangible effects in the long run due to intangible causes.
4.10 Removable Discontinuities: Phases and Renewability of Materials

By introducing time spans of examination unrelated to anything characteristic of the phenomenon being observed in nature, discontinuities appear. These discontinuities are entirely removable, but they appear to the observer as finite limits of the phenomenon, and as a result, the possibility that they are removable is not even considered. This is particularly problematic when it comes to phase transitions of matter and the renewability or non-renewability of energy. The transition between solid, liquid, and gas is in reality continuous, but the analytical tools formulated in classical physics are
anything but continuous. Each P-V-T model applies to only one phase and one composition, and there is no single P-V-T model that is applicable to all phases (Cismondi and Mollerup 2005). Is this an accident? Microscopic and intangible features of phase transitions have not been taken into account, and as a result of limiting the field of analysis to macroscopic, tangible features, modeling becomes limited to one phase and one composition at a time. When it comes to energy, everyone has learned that it comes in two forms - renewable and nonrenewable. If a natural process is being employed, however, everything must be "renewable" by definition, in the sense that, according to the Law of Conservation of Energy, energy can be neither created nor destroyed. Only the selection of the time-frame misleads the observer into confounding what is accessible in that finite span with the idea that energy is therefore running out. The dead plant material that becomes petroleum and gas trapped underground in a reservoir is being added to continually. However, the rate at which it is extracted has been set according to an intention that has nothing to do with the optimal timeframe in which the organic source material could be renewed. Thus, "non-renewability" is not any kind of absolute fact of nature. On the contrary, it amounts to a declaration that the pathway on which the natural source has been harnessed is anti-nature.
4.11
Rebalancing Mass and Energy
Mass and energy balance, inspected in depth, discloses intention as the most important parameter and the sole feature that renders the individual accountable to, and within, nature. This has serious consequences for the black box approach of conventional engineering, because a key assumption of the black box approach stands in contradiction to one of the key corollaries of the most fundamental principle of all, the Law of Conservation of Matter. Conventionally, the mass balance equation is represented as "mass-in equals mass-out" (Figure 4.13). In fact, however, this is only possible if there is no leak anywhere and no mass can flow into the system from any other point, thereby rendering the entire analysis a function of tangible, measurable quantities - a "science" of tangibles only.
[Figure: Known mass in → [Known accumulation] → Known mass out.]
Figure 4.13 Conventional mass balance equation incorporating only tangibles.
The mass conservation theory indicates that the total mass is constant. It can be expressed as follows:

Σ (i = 0 to ∞) m_i = Constant (Equation 4.4)

where m is mass and i counts contributions from 0 to ∞. In a true sense, this mass balance encompasses mass from the very macroscopic to the microscopic, from the detectable to the undetectable - in other words, from tangible to intangible. Therefore, the true statement should be as illustrated in Figure 4.14:

"Known mass-in" + "Unknown mass-in" = "Known mass-out" + "Unknown mass-out" + "Known accumulation" + "Unknown accumulation" (Equation 4.5)
The unknowns can be considered intangible, yet they are essential to include in the analysis, as they incorporate long-term elements as well as elements of the current timeframe; a minimal numeric sketch of this extended balance follows Figure 4.14.
[Figure: Known mass in and Unknown mass in → [Known accumulation; Unknown accumulation] → Known mass out and Unknown mass out.]
Figure 4.14 Mass-balance equation incorporating tangibles and intangibles.
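The sketch below (ours, with hypothetical numbers) shows what Equation 4.5 implies in practice: every unmeasured term is lumped into a residual, and the black-box habit is simply to set that residual to zero rather than report it:

```python
# Known in + Unknown in = Known out + Unknown out + Known acc. + Unknown acc.
known_mass_in = 1000.0       # kg, measured feed (hypothetical)
known_mass_out = 940.0       # kg, measured products
known_accumulation = 55.0    # kg, measured inventory change

# Net contribution of all unknown (intangible) terms - reported, not ignored:
residual = known_mass_in - known_mass_out - known_accumulation
print(f"net unknowns (unknown_in - unknown_out - unknown_acc): {residual} kg")
```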
In nature, the deepening and broadening of order is continually observed - many pathways, circuits, and parts of networks are partly or even completely repeated, and the overall balance is further enhanced. Does this actually happen as arbitrarily as conventionally assumed? A little thought suggests this must take place principally in response to human activities and the response of the environment to these activities and their consequences. Nature itself has long established its immediate and resilient dominion over every activity and process of everything in its environment, and there is no other species that can drive nature into such modes of response. In the absence of the human presence, nature would not be provoked into having to increase its order and balance, and everything would function in "zero net waste" mode. An important corollary of the Law of Conservation of Mass is that no mass can be considered in isolation from the rest of the universe. Yet the black box model clearly requires such an impossibility. However, since human ingenuity can select the time frame in which such a falsified "reality" appears to hold, the model of the black box can be substituted for reality, and the messy business of having to take intangibles into account is foreclosed once and for all.
4.12
Energy: The Current Model
A number of theories have been developed over the past centuries to define energy and its characteristics. However, none of these theories is sufficient to describe energy properly. All of them are based on idealized assumptions that have never existed in practice. Consequently, the existing model of energy and its relation to other quantities cannot be accepted with confidence. For instance, the second law of thermodynamics depends on Carnot's cycle in classical thermodynamics, and none of the assumptions of Carnot's cycle exist in reality. The definitions of the ideal gas, the reversible process, and the adiabatic process used in describing Carnot's cycle are imaginary. In 1905, Einstein came up with his famous equation, E = mc², which shows an equivalence between energy (E) and relativistic mass (m) in direct proportion to the square of the speed of light in a vacuum (c²). However, the assumption of constant mass and the concept of a vacuum do not exist in reality. Moreover, this theory was developed on the basis of Planck's constant, which was derived from black-body radiation. A perfectly black body does not even exist in reality.
Therefore, the development of every theory has depended on a series of assumptions that do not exist in reality.
4.12.1 Supplements of the Mass Balance Equation

For whatever else remains unaccounted, the energy balance equation supplements the mass balance equation, which in its conventional form necessarily falls short of explaining the functionality of nature coherently as a closed system. For any time, the energy balance equation can be written as:

Σ (i = 1 to ∞) a_i = Constant (Equation 4.6)

where a is the activity equivalent to potential energy. In the above equation, only potential energy is taken into account. Total potential energy, however, must include all forms of activity, and once again a large number of intangible forms of activity, e.g., the activity of molecular and smaller forms of matter, cannot be "seen" and accounted for in this energy balance. The presence of human activity introduces the possibility of other potentials that continually upset the energy balance in nature. There is overall balance, but some energy forms, e.g., electricity (whether from combustion or nuclear sources), which would not exist as a source of useful work except for human intervention, continually threaten to push this into a state of imbalance. In the definition of activity, both time and space are included. The long term is defined by time reaching to infinity. The "zero waste" condition is represented by space reaching infinity. There is an intention behind each action, and each action plays an important role in creating overall mass and energy balance. The role of intention is not to create a basis for prosecution or the enforcement of certain regulations. Rather, it is to provide the individual with a guideline. If the product, or the process, is not making things better with time, it is fighting nature - a fight that cannot be won and is not sustainable. Intention is a quick test that will eliminate the rigorous process of testing feasibility, long-term impact, etc. Only with "good" intentions can things improve with time. After that, other calculations can be made to see how fast the improvement will take place.
In clarifying the intangibility of an action or a process, with reference to the curve of Figure 4.11, the equation has a constant, which is actually an infinite series:

a = Σ (i = 0 to ∞) a_i = a_0 + a_1 + a_2 + a_3 + ... (Equation 4.7)

If each term of Equation 4.6 converges, it will have a positive sign, which indicates intangibility; hence, the effect of each term becomes important for measuring intangibility overall. On this path, it should also become possible to analyze the effect of any one action and its implications for sustainability overall. It can be inferred that man-made activities alone are not enough to change the overall course of nature. Failure until now, however, to account for the intangible sources of mass and energy has brought about a state of affairs in which, depending on the intention attached to an intervention, the mass-energy balance can either be restored and maintained over the long term, or increasingly threatened and compromised in the short term. In the authors' view, it would be better to develop the habit of investigating nature and the possibilities it offers to humanity's present and future by considering time t at all scales, reaching to infinity. This requires eliminating the habit of resorting to time scales that appear to serve an immediate ulterior interest in the short term, but that in fact have nothing to do with the natural phenomena of interest, and therefore lead to something that will be anti-nature in the long term and the short term. The main obstacle to discussing and positioning human intentions within the overall approach to the Laws of Conservation of Mass, Energy, and Momentum stems from notions of the so-called "heat death" of the universe predicted in the 19th century by Lord Kelvin and enshrined in his Second Law of Thermodynamics. In fact, this idea, that the natural order must "run down" due to entropy, eliminating all sources of "useful work," naively assigns a permanent and decisive role to negative intentions in particular, without formally fixing or defining any role whatsoever for human intentions in general. Whether failures arise out of the black box approach to the mass-balance equation or out of the unaccounted, missing potential energy sources in the energy-balance equation, failures in the short term become especially consequential when made by those who defend the status quo to justify anti-nature "responses," the kind well described as typical examples of "the roller coaster of the Information Age" (Islam 2003).
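As a toy illustration of the convergence test on the series of Equation 4.7, the following sketch (ours, with hypothetical activity terms) accumulates partial sums and checks that they settle toward a positive limit:

```python
# Partial sums of a hypothetical activity series a_i = 2.0 * 0.5**i.
def partial_sums(term, n):
    s, out = 0.0, []
    for i in range(n):
        s += term(i)
        out.append(s)
    return out

sums = partial_sums(lambda i: 2.0 * 0.5**i, 50)
print(f"partial sum after 50 terms: {sums[-1]:.6f}")  # ~4.0: convergent, positive
```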
4.13 Tools Needed for Sustainable Petroleum Operations

Sustainability can be assessed only if technology emulates nature. In nature, all functions and techniques are inherently sustainable, efficient, and functional for an unlimited time period. In other words, as far as natural processes are concerned, "time tends to infinity." This can be expressed as t, or for that matter Δt, → ∞. By following the same path as the functions inherent in nature, an inherently sustainable technology can be developed (Khan and Islam 2005b). The "time criterion" is a defining factor in the sustainability and virtually infinite durability of natural functions. Figure 4.11 shows the direction of nature-based, inherently sustainable technology, contrasted with an unsustainable technology. The path of sustainable technology is one of long-term durability and environmentally wholesome impact, while unsustainable technology is marked by Δt approaching 0. Presently, the most commonly used theme in technology development is to select technologies that are good for t = "right now," or Δt = 0. In reality, such models are devoid of any real basis (termed "aphenomenal" by Khan et al. 2005) and should not be applied in technology development if we seek sustainability for economic, social, and environmental purposes. In terms of sustainable technology development, considering pure time (or time tending to infinity) raises thorny ethical questions. This "time-tested" technology will be good for nature and good for human beings. The main principle of this technology is to work with, rather than against, natural processes. It would not work against nature or ecological functions. All natural ecological functions are truly sustainable in this long-term sense. We can take a simple example of an ecosystem technology (natural ecological function) to understand how it is time-tested (Figure 4.15). In nature, all plants produce glucose (organic energy) by utilizing sunlight, CO2, and soil nutrients. This organic energy is then transferred to the next higher level of organisms, which are small animals (zooplankton). The next higher (trophic) level of organisms (higher predators) utilizes that energy. After the death of all these organisms, their body masses decompose into soil nutrients, which plants again take up, keeping the organic energy looping (Figure 4.15). This natural production process never malfunctions and remains constant for an infinite time. It can be defined as a time-tested technique.
Figure 4.15 Inherently sustainable natural food production cycle.
This time-tested concept can apply equally to technology development. New technology should be functional for an infinite time. This is the only way it can achieve true sustainability (Figure 4.16). This idea forms the new assessment framework that is developed and shown in Figures 4.16 and 4.17. The triangular sign of sustainability in Figure 4.16 is considered the most stable sign. This triangle is formed by the different criteria that represent stable sustainability in technology development. Any new technology can be evaluated and assessed by using this model. There are two selection levels - the primary level and the secondary level. A technology must fulfill the primary selection criterion, "time," before being taken to the secondary level of selection. For a simulation test, we imagine that a new technology is developed to produce a product named "Ever-Rigid." This product is non-corrosive, non-destructive, and highly durable. The "Ever-Rigid" technology can be tested using the proposed model to determine whether it is truly sustainable or not. The first step of the model is to find out whether the "Ever-Rigid" technology is "time-tested." If the technology is not durable over infinite time, it is rejected as an
Figure 4.16 Pictorial view of the major elements of sustainability in technology development.
unsustainable technology and is not considered for further testing. For, according to the model, time is the primary criterion for the selection of any technology. If the "Ever-Rigid" technology is acceptable with respect to this time criterion, then it may be taken through the next process to be assessed according to a set of secondary criteria. The initial set of secondary criteria analyzes environmental variants. If the technology passes this stage, it goes to the next step. If it is not acceptable with regard to environmental factors, then it might be rejected, or further improvements might be suggested to its design. After the environmental evaluation, the next two steps involve technological, economic, and societal variant analyses, each of which follows a pathway similar to that used to assess environmental suitability. At these stages too, either improvement of the technology will be required or the technology might be rejected as unsustainable. A minimal sketch of this screening logic is given below.
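The sketch below (ours; the criterion names and the pass/fail interface are illustrative assumptions, not the authors' actual tool) encodes the screening order just described:

```python
# Minimal sketch of the "Ever-Rigid" screening: a primary time-criterion
# gate, then secondary gates in the order the text gives them.
def is_sustainable(tech: dict) -> bool:
    if not tech["time_tested"]:       # primary gate: durable as t -> infinity
        return False                  # rejected outright, no further testing
    # secondary gates, evaluated in sequence
    for criterion in ("environmental", "technological", "economic", "societal"):
        if not tech[criterion]:
            return False              # reject, or send back for redesign
    return True

ever_rigid = {"time_tested": True, "environmental": False,
              "technological": True, "economic": True, "societal": True}
print(is_sustainable(ever_rigid))     # False: fails the environmental gate
```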
4.14 Conditions of Sustainability

In order to determine whether a technology used in petroleum operations is inherently sustainable, an evaluation method is needed. This evaluation method should be based on principles of true sustainability, which are defined and shown in the form of a flowchart in Figure 4.17. Based on this newly developed method, a practical
tool is proposed, as shown in Figure 4.18. In this evaluation method, for the sake of sustainability, the total natural resources should be conserved across the whole technological process.
Figure 4.17 Proposed sustainable technology flowchart (modified from Khan and Islam 2005a).
Figure 4.18 Proposed sustainable technology flowchart (after Khan et al. 2006a).
Also, waste produced in the process of using the technology should be within the assimilative capacity of the ecosystem likely to be affected. This means that intra- and inter-generational ownership equity of the natural resources on which the technology depends must be ascertained (Daly 1999). Daly (1999) points out that all inputs to an economic process, such as the use of energy, water, air, etc., are
from natural ecology, and all the wastes of the economic process are sunk back into it. In other words, energy from the ecological system is used as throughput in the economic process, and emissions from the process are given back to ecology. In this sense, an economic system is a subsystem of ecology, and, therefore, the total natural capital should be constant or increasing. Man-made capital and environmental capital are complementary, but they are not substitutable. As such, any energy system should be considered sustainable only if it is socially responsible, economically attractive, and environmentally healthy (Islam 2005c). To consider a petroleum operations system sustainable, it should fulfill basic environmental, social, economic, and technological criteria (Pokharel et al. 2003, 2006; Khan and Islam 2005b, 2005c; Khan et al. 2005, 2006a). In this study, the following criteria are taken into consideration:

Natural (environmental) capital (Cn) + economic capital (Ce) + social capital (Cs) ≥ constant for all time horizons, i.e.,

(Cn + Ce + Cs)_t ≥ constant for any time t, provided that

dCn/dt ≥ 0, dCe/dt ≥ 0, and dCs/dt ≥ 0.
These conditions are shown in flowchart format in Figure 4.18. In the proposed model, a technology is only "truly sustainable" if it fulfills the time criterion. Other important criteria that it must also fulfill relate to environmental, social, and economic factors, as shown in Figure 4.18. A numeric sketch of the capital condition is given below.
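The sketch below (ours; the yearly series are hypothetical) checks the capital condition stated above - a non-decreasing total with each capital's time derivative non-negative:

```python
# Check (Cn + Ce + Cs)_t >= constant with dCn/dt, dCe/dt, dCs/dt >= 0.
def satisfies_condition(Cn, Ce, Cs):
    totals = [n + e + s for n, e, s in zip(Cn, Ce, Cs)]
    total_ok = all(t >= totals[0] for t in totals)   # total never falls
    growth_ok = all(series[i + 1] >= series[i]       # each capital non-decreasing
                    for series in (Cn, Ce, Cs)
                    for i in range(len(series) - 1))
    return total_ok and growth_ok

Cn = [10.0, 10.1, 10.3]   # natural capital, arbitrary units per year
Ce = [5.0, 5.4, 5.9]      # economic capital
Cs = [3.0, 3.0, 3.2]      # social capital
print(satisfies_condition(Cn, Ce, Cs))  # True for this toy data
```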
4.15
Sustainability Indicators
Indicators can be used to measure the sustainability state of petroleum operations. Sustainability, or sustainable operations, is accepted as a vision for managing the interaction between the natural environment and social and economic progress with respect to time. However, there is no suitable method to measure the sustainability of petroleum operations, and experts are still struggling with the practical problem of how to measure it. The Centre d'Estudis d'Informació Ambiental (CEIA 2001) stated that "the move towards sustainability
would entail minimizing the use of energy and resources by maximizing the use of information and knowledge." In effect, in order to develop sustainable technology and manage natural resources in a sustainable manner, decision-makers and policy-makers need to improve the application of knowledge gained from information. However, there is generally a large communication gap between the provision of data and the application of knowledge. The use of sustainability indicators is one method of providing information in a format that is usable by policy-makers and decision-makers. An indicator is a parameter that provides information about environmental issues with a significance that extends beyond the parameter itself (OECD 1993 and 1998). Indicators have been used for many years by social scientists and economists to explain economic trends. A typical example is Gross National Product (GNP). Different NGOs, government agencies, and other organizations are using indicators for addressing sustainable development. These organizations include the World Resources Institute, the World Conservation Union (IUCN), the United Nations Environment Programme, the UN Commission on Sustainable Development, the European Environment Agency, the International Institute for Sustainable Development (IISD), and the World Bank (IChemE 2002). Indicators for addressing sustainable development are widely accepted by development agencies at national and international levels. For example, Agenda 21 (Chapter 40) states that "indicators of sustainable development need to be developed to provide solid bases for decision-making at all levels and to contribute to the self-regulating sustainability of integrating environmental and development systems" (WCED 1987). This has led to the acceptance of sustainability indicators as basic tools for facilitating public choices and supporting policy implementation (Dewulf and Langenhove 2004; Adrianto et al. 2005). It is important to select suitable indicators, because they need to provide information on relevant issues, identify potential development problems and perspectives, provide analyses and interpretations of potential conflicts and synergies, and assist in assessing policy implementation and impacts. Khan and Islam (2005a, 2006) developed sets of indicators for technology development and oil and gas operations. The hierarchical positions of criteria and indicators are presented in Figure 4.19. They developed indicators for environmental, societal, policy, community, and technological variants, which are shown in Figures 4.20-4.23.
[Figure: hierarchy with the energy sector at the top, three criteria (Criterion 1, Criterion 2, Criterion 3) beneath it, and four indicators (Indicators 1-4) under each criterion.]
Figure 4.19 Hierarchical position of criteria and indicators of sustainability.
By analyzing these sets of indicators, they also evaluated the sustainability state of offshore operations and their technologies.
4.16
Assessing the Overall Performance of a Process
In order to break out of the conventional analysis introduced through the science of tangibles, we proceed to discuss some salient features of the time domain and present how using time as the fourth dimension can assess the overall performance of a process. Here, time t is not orthogonal to the other three spatial dimensions. However, it is no less a dimension for not being mutually orthogonal. Socially available knowledge is not orthogonal either, whether with respect to time t or with respect to the three spatial dimensions. Hence, despite the training of engineers and scientists in higher mathematics that hints, suggests, or implies that dimensionality must be tied up somehow with the presence of orthogonality, orthogonality is not a relationship built into dimensionality. It applies only to the arrangements we have invented to render three spatial dimensions simultaneously visible, i.e., tangible. Between input and output, component phenomena can be treated as lumped parameters, just as, for example, in electric circuit theory, resistance/reactance is lumped in a single resistor, capacitance in a single capacitor, inductance in a single inductor, and the electromotive
Figure 4.20 Environmental criteria.
Figure 4.21 Criteria to consider for sustainability study of offshore oil and gas.
Figure 4.22 Policy criteria to consider for the sustainability of offshore oil and gas.
Figure 4.23 Technological indicators of sustainability.
potential/force and the current of the entire circuit are lumped at a power supply or at special gated junction points (such as between the base and emitter of a transistor), etc. Similarly, in the economic theory of commodity transactions, relations of exchange in the market lump all "supply" with the seller and all "demand" with the buyer, even though in reality, as everyone knows, there is also a serious "demand" (need for money) on the part of the seller and a certain "supply" (of cash) in the hands of the buyer. Even within certain highly engineered phenomena, such as an electric circuit in which human engineering has supplied all the ambient conditions (source of electrical energy, circuit transmission lines, etc.), after assuming certain simplifying conditions - a near-zero frequency, virtually direct current flow, and very small potential differences - we still have no idea how stable or uniform the voltage difference is at any point in the circuit, or whether the current is continuous. The lumped-parameter approach enables us to characterize the overall result/difference/change at the output compared to the input without worrying about the details of what actually happened between the input and the output. Clearly, when natural processes are being considered, such an approach leaves a great deal unaccounted for and unexplained. So long as the computed result matches the difference measured between the input and the output, this approach allows any interpretation to account for what happened. Closely related to the technique of characterizing the operations of a process by means of lumped parameters is the technique of assessing or describing the overall performance of the process (or development) according to objective, external, uniform "standards" or norms. In the MKS system of SI units, for example, the meter is standardized as a unit of distance according to the length of a rod of some special element maintained in a vacuum bell at a certain temperature and pressure at some location in Paris, France. Similarly, NIST in Washington, D.C., standardizes the duration of the "second" as the fundamental unit of time according to an atomic clock, etc. The problem with all such standards is that the question of the standard's applicability for measuring something in the process-of-interest is never asked beforehand. Consider the known, and very considerable, physical difference between the way extremely high-frequency [tiny-wavelength] EM waves and much lower-frequency [much greater wavelength] audible sound waves
propagate. The meter may be quite reasonable for the latter case. Does it follow, however, that the nanometer - recall that it is based on subdividing the meter into one billion units - is equally reasonable for the former case? The physical reality is that the standard meter bar in Paris actually varies in length by a certain number of picometers or nanometers within a single Earth year. If the process-of-interest is EM radiation traversing light-years through space, a variation of the standard meter by 1 nanometer or even 1000 picometers renders meaningless whatever measure we assign to something happening in the physical universe at this scale. The objectivity, externality, and uniformity of standards enable a comparison based on what the human observer can directly see, hear, smell, touch, or taste - or, more indirectly, measure - according to standards that can be tangibly grasped within ordinary human understanding. However, is science reducible to that which may be tangibly grasped within ordinary human understanding? If science were so reducible, we could, and should, have spent the last 350+ years since Galileo fine-tuning our measurements of the speed of bodies falling freely towards Earth. This example hints at the solution to the conundrum. Once the principle of gravity as a force - something that cannot be directly seen, heard, smelled, touched, or tasted - acting everywhere on the earth was grasped, then measuring and comparing the free fall of objects according to their mass had to be given up. The attraction due to gravity was the relevant, common, and decisive characteristic of all these freely falling objects, not their individual masses. Standards of measurement applied to phenomena and processes in nature should cognize features that are characteristic of those phenomena and processes, not be externally applied regardless of their appropriateness or inappropriateness. Instead of measuring the overall performance of a process, phenomenon, or development according to criteria that are characteristic, statistical norms are frequently applied. These compare and benchmark performance relative to some standard that is held to be both absolute and external. Public concern about such standards, such as what constitutes a "safe level of background radiation," has grown in recent years to the point where the very basis of what constitutes a standard has come into question. Recently, Zatzman (2007) advanced the counter-notion of using units or standards that are "phenomenal" (as opposed to aphenomenal). For those who want a science of nature that can account for phenomena as they actually
occur in nature, standards whose constancy can only be assured outside the natural environment - under highly controlled laboratory conditions or "in a vacuum," for example - are, in fact, entirely arbitrary. On the other hand, phenomenally-based standards are natural in a deeper sense. They include the notion of characteristic features that may be cognized by the human observer. These are standards whose objectivity derives from the degree to which they are in conformity with nature. The objectivity of a natural standard cannot and must not be confounded with the neutrality of the position of an external arbiter. For all the work on intangibles (the mathematics of, the science of, etc.), one must establish 1) an actual, true source; 2) an actual, true science, or pathway; and 3) an actual, true end-point, or completion. Knowledge can be advanced even if the "true object" is not the entire truth. In fact, it is important to recognize that the whole truth cannot be achieved. However, this should not be used as an excuse to eliminate any variable that might have a role but whose immediate impact is not "measurable." All the potential variables that might have a role should be listed right at the beginning of the scientific investigation. During the solution phase, this list should be revisited in order to make room for the possibility that, at some point, one of the variables will play a greater role. This process is equivalent to developing a model that has no aphenomenal assumption attached to it.

There is a significant difference between that which tangibly exists according to the five senses in a finite portion of time and space and that which imperceptibly exists in nature in a finite portion of time and space. Our limitation is that we are not able to observe or measure beyond what is tangible. However, the models we use should not suffer from these shortcomings. If we grasp the latter first, then the former can be located as a subset. However, errors will occur if we proceed from the opposite direction, which assumes that what is perceivable about a process or phenomenon in a given finite portion of time and space is everything characteristic of the natural environment surrounding and sustaining the process or phenomenon as observed in that given finite portion of time and space. For example, proceeding according to this latter pattern, medieval medical texts portrayed the human fetus as a "homunculus," a miniaturized version of the adult person. On the other hand, proceeding according to the former pattern, if we take phase [or "angle"]
x as a complex variable, de Moivre's Theorem can be used to generate expressions for cos nx and sin nx. By comparison, if we struggle to construct right triangles in the two-dimensional plane, deriving cos 2x and sin 2x is already laborious, and extending the procedure to derive cos nx and sin nx becomes computationally intensive by orders of magnitude.

In technology development, it is important to take a holistic approach. The only single criterion that one can use is the reality criterion. A reality is something that does not change with time tending to infinity. This is the criterion Khan (2007) used to define sustainability. If a number of options were ranked based on this criterion, that ranking would be equivalent to the real (phenomenal) ranking; it is absolute and must be the basis for comparison of the various options. This ranking is given in the leftmost column of Table 4.2. In technology development, this natural (real) ranking is practically never used. Under most other ranking criteria, the ranking is reversed, meaning the natural order is turned upside down. There are some criteria that would give the same ranking as the natural one, but that does not mean the criterion is legitimate. For instance, the heating value of honey is the highest, yet this coincidence does not make the heating-value criterion correct. This table is discussed here as a starting-point for establishing a "reality index" that would allow a ranking according to how close a product is to being natural.

In engineering calculations, the most commonly used criterion is efficiency, which deals with output over input. Ironically, an infinite efficiency would mean someone has produced something out of nothing - an absurd concept for an engineered creation. However, if nature does that, it operates at 100% efficiency. For instance, every photon coming out of the sun gets used. So, for a plant, the efficiency is limited (less than 100%) because it is incapable of absorbing every photon it comes in contact with, but it would become 100% efficient if every photon were accounted for. This is why maximizing efficiency as a man-made engineering practice is not a legitimate objective. However, if the concept of efficiency is used in terms of overall performance, the definition of efficiency has to be changed. With this new definition (called "global efficiency" in Khan et al. 2007 and Chhetri 2007), the efficiency calculations would differ significantly from conventional efficiency, which only considers a small object of practical interest.
Table 4.2 Synthesized and natural pathways of organic compounds as energy sources, ranked and compared according to selected criteria. The left column gives the natural (real) ranking ("top" rank means most acceptable); the remaining columns give the aphenomenal rankings produced by each criterion: Efficiency¹ (e.g., η = (Output - Input)/Input × 100), Biodegradability, Profit margin, and Heating value (cal/g)*.

1. Honey, 2. Sugar, 3. Saccharine, 4. Aspartame
    Efficiency: 2, 3, 4, 2; Biodegradability: 2, 2, 3, 4; Profit margin: 4, 3, 2, 2 (REVERSES); Heating value (ranked as "sweetness/g"): 4, 3, 2, 2.

1. Organic wood, 2. Chemically-treated wood, 3. Chemically grown and chemically treated wood, 4. Genetically-altered wood
    Efficiency: 4, 3, 2, 1; Biodegradability: REVERSES if organic wood is treated with organic chemicals; Profit margin: REVERSES if toxicity is considered; Heating value: 5, 4, 3, 2 (2-4 depending on application, e.g., durability).

1. Solar, 2. Gas, 3. Electrical, 4. Electromagnetic, 5. Nuclear
    Efficiency: #, 5, 4, 3, 2; Biodegradability: not applicable; Profit margin: #, 5, 4, 3, 2 (#1 cannot be ranked); Heating value: #, 5, 4, 3, 2 (# cannot be calculated for direct solar).

1. Clay or wood ash, 2. Olive oil + wood ash, 3. Vegetable oil + NaOH, 4. Mineral oil + NaOH, 5. Synthetic oil + NaOH, 6. 100% synthetic (soap-free soap)
    Efficiency: 6, 5, 4, 3, 2, 2 (reverses if global efficiency is considered); Biodegradability: 2, 3, 4, 5, 6, 2 (anti-bacterial soap won't use olive oil; volume needed for cleaning unit area); Profit margin: 6, 5, 4, 3, 2, 2; Heating value: #.

1. Ammonia, 2. Freon, 3. Non-Freon synthetic
    Efficiency: 1, 2, 3; Biodegradability: unknown; Profit margin: 3, 2, 1; Heating value: not applicable.

1. Methanol, 2. Glycol, 3. Synthetic polymers (low dose)
    Efficiency: 1, 2, 3; Biodegradability: 1, 2, 3 (for hydrate control); Profit margin: 3, 2, 1; Heating value: not applicable.

1. Sunlight, 2. Vegetable oil light, 3. Candle light, 4. Gas light, 5. Incandescent light, 6. Fluorescent light
    Efficiency: not applicable; Biodegradability: 6, 5, 4, 3, 2, 1; Profit margin: 6, 5, 4, 3, 2, 1; Heating value: not applicable.

¹ This efficiency is the local efficiency, which deals with an arbitrarily set sample size.
* Calories/gram is a negative indicator for "weight watchers" (interested in minimizing calories) and a positive indicator for energy-drink makers (interested in maximizing calories).
# Efficiency and heating value cannot be calculated for direct solar.
As an example, consider an air conditioner running outdoors. The air in front of the air conditioner is chilled, while the air behind the device is heated. If cooling-efficiency calculations were performed on an air conditioner running outdoors, the conventional calculations would show a finite efficiency, albeit not 100%, determined by measuring the temperature in front of the air conditioner and dividing the cooling delivered by the work done to operate the air conditioner. Contrast this with the same efficiency calculation if the temperature all around the device were considered. The process would be proven utterly inefficient, and it would become obvious that the operation is not a cooling process at all. Clearly, the "cooling efficiency" of a process that is actually heating is absurd. Now, consider an air conditioner running on direct solar heating. An absorption cooling system means there are no moving parts and the solar heat is converted directly into cool air. The solar heat is not the result of an engineered process. Then, what would be the efficiency of this system, and how would its cooling efficiency compare with the previous one? Three aspects emerge from this discussion. First, global efficiency is the only measure of the true merit of a process. Second, the only efficiency that one can use to compare various technological options is global efficiency. Third, a process that involves natural options cannot be compared with a process that is totally engineered. For instance, the efficiency in the latter example (as output/input) is infinite, considering that no engineered energy has been imparted to the air conditioner.

No engineering design is complete until economic calculations are performed, hence the drive to maximize the profit margin. Indeed, the profit margin has been the single most important criterion used for developing a technology ever since the Renaissance saw the emergence of a short-term approach at an unparalleled pace. As Table 4.2 indicates, natural rankings are generally reversed if the criterion of profit maximization is used. This affirms, once again, how modern economics has turned pro-nature techniques upside down (Zatzman and Islam 2007). This is the onset of the economics of tangibles, as shown in Figure 4.24. As processing increases, the quality of the product decreases. Yet, this process is called "value addition" in the economic sense. The price, which should be proportional to the value, in fact goes up in inverse proportion to the real value (opposite to the perceived value, as promoted through advertisement). Here, the value is fabricated. The fabricated value is made synonymous with real value or quality without any further discussion of what constitutes quality. This perverts the entire value-addition concept and falsifies the true economics of commodities (Zatzman and Islam 2007). Only recently has the science behind this disinformation begun to surface (Shapiro et al. 2007).
Figure 4.24 The profit margin increases radically with the extent of external processing, degrading the product from reality to aphenomenality (profit margin vs. extent of processing).
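Returning to the air-conditioner example above, the contrast between local and global efficiency can be put into numbers. The following is a minimal sketch in Python; the wattages are hypothetical illustration values, not data from the text:

    # Local vs. global efficiency of an air conditioner running outdoors.
    # All wattages are hypothetical illustration values.
    electrical_input_w = 1000.0    # work done to operate the unit
    cooling_delivered_w = 2500.0   # heat removed from the air in front of the unit
    heat_rejected_w = cooling_delivered_w + electrical_input_w  # dumped behind the unit

    # Conventional (local) efficiency: only the cooled zone in front is counted.
    local_efficiency = cooling_delivered_w / electrical_input_w   # a finite "COP" of 2.5

    # Global accounting: the surrounding air as a whole gains heat equal to the
    # electrical work, so the "cooling" operation is actually a heating process.
    net_heat_added_w = heat_rejected_w - cooling_delivered_w      # = electrical_input_w

    print(f"Local efficiency (COP): {local_efficiency:.1f}")
    print(f"Net heat added to surroundings: {net_heat_added_w:.0f} W")

Any positive electrical input makes the net change positive; no choice of the hypothetical wattages can turn the overall operation into cooling.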
4.17 Inherent Features of a Comprehensive Criterion
The most serious and truly acid test of a proposed scientific criterion is that it accounts for everything necessary and sufficient to explain the phenomenon - its origin, its path, and its end-point - thereby rendering it positively useful to human society. The same criterion was used in previous civilizations to distinguish between the real and the artificial. Khan (2007) introduced a criterion that identifies the end-point by extending time to infinity. This criterion avoids scrutiny of the intangible source of individual action (namely, intention). However, Zatzman and Islam (2007a) pointed out that the end-point at time t = infinity can be a criterion, but it will not disclose the pathway unless a continuous time function is introduced. Mustafiz (2007) used this concept and introduced the notion of the knowledge dimension - a dimension that arises from introducing time as a continuous function. In all these deductions, it is the science of intangibles that offers some hope. It is important to note that the insufficiency just mentioned is not overcome by doing "more" science of tangibles "better." It is already evident that what is not being addressed are intangible components that cannot be winkled out or otherwise measured by existing means available within the realm of the science of tangibles.
Intangibles, which essentially include the root and pathway of any phenomenon, make the science suitable for increasing knowledge, as opposed to increasing confidence in a conclusion that is inherently false (Zatzman and Islam 2007a). Zatzman and Islam (2007) introduced the following syllogism to make this point about the science of intangibles:

All Americans speak French. (major premise)
Jacques Chirac is an American. (minor premise)
Therefore, Jacques Chirac speaks French. (conclusion/deduction)

If, in either the major or minor premise, the information relayed above is derived from a scenario of what is merely probable (as distinct from what is actually known), the conclusion (which happens to be correct in this particular case) would not only be acceptable as something independently knowable but also reinforced as something statistically likely. This, then, finesses determining the truth or falsehood of any of the premises, and, eventually, someone is bound to "reason backwards" and deduce the statistical likelihood of the premises from the conclusion. This latter version, in which eventually all the premises are falsified as a result of starting out with a false assumption asserted as a conclusion, is exactly what has been identified and labeled elsewhere as the aphenomenal model (Khan et al. 2005). How can this aphenomenal model be replaced with a knowledge model? Zatzman and Islam (2007a) emphasized the need to recognize the first premise of every scientific discourse. They used the term "aphenomenality" (contrasted with truth) to describe, in general, the non-existence of any purported phenomenon or any collection of properties, characteristics, or features ascribed to such a purported but otherwise unverified or unverifiable phenomenon. If the first premise contradicts what is true in nature, then the entire scientific investigation will be false, and such an investigation cannot lead to reliable or useful conclusions. Consider the following syllogism (the concept of "virtue" intended here is that which holds positive value for an entire collectivity of people, not just for some individual or arbitrary subset of individual members of humanity):

All virtues are desirable.
Speaking the truth is a virtue.
Therefore, speaking the truth is desirable.
Even before it is uttered, a number of difficulties have already been built into this seemingly non-controversial syllogism. When it is said that "all virtues are desirable," there is no mention of a time factor (pathway) or intention (source of a virtue). For instance, speaking out against an act of aggression is a virtue, but is it desirable? A simple analysis indicates that unless time is extended to infinity (meaning something that is desirable in the long run), practically all virtues are undesirable. (Even giving to charity requires austerity in the short term, and defending a nation requires self-sacrifice, an extremely undesirable phenomenon in the short term.) The scientifically correct reworking of this syllogism should be:

All virtues (both intention and pathway being real) are desirable for time approaching infinity.
Speaking the truth is a virtue at all times.
Therefore, speaking the truth is desirable at all times.

The outcome of this analysis is the complete disclosure of the source, pathway (time function), and final outcome (t approaching infinity) of an action. This analysis can and does restore to its proper place the rational principle underlying the comparison of organic products to synthetic ones, free-range animals to confined animals, hand-drawn milk to machine-drawn, thermal pasteurization with wood fire compared to microwave and/or chemical Pasteurization®, solar heating compared to nuclear heating, the use of olive oil compared to chemical preservatives, the use of natural antibiotics compared to chemical antibiotics, etc. When it comes to food or other matter ingested by the human body, natural components ought to be preferred because we can expect that the source and pathway of such components, already existing in nature, will be beneficial (assuming non-toxic dosages of medicines and normal amounts of food are being ingested). Can we have such confidence when it comes to artificially simulated substitutes? The pathway of the artificial substitute's creation lies outside any process already given in nature, the most important feature of food.
5 Scientific Characterization of Global Energy Sources

5.1 Introduction
Fossil fuels have become the major driving factor of modern economic development. However, recent hikes in petroleum prices have had severe impacts on the global economy, especially for countries whose economies depend largely on oil imported from abroad. The global energy scenario is tainted by the widening gap between the discovery of new petroleum reserves and the rate of production. Environmentally, fossil fuel burning is the major source of greenhouse gases, which are major precursors of the current global warming problem. The alternatives envisioned to date are also based on petroleum resources for their primary energy input. A paradigm shift towards knowledge-based technology development is essential to achieve true sustainability. A single energy source cannot be the solution for all energy problems. Proper science indicates that the distinction between renewable and non-renewable is superficial and devoid of a scientific basis. By removing this distinction and applying knowledge-based processing and refining schemes, one can shift the current energy consumption base from "non-renewable"
to a readily renewable one. This same conclusion is reached if global efficiency is considered. With the use of global efficiency rather than local efficiency, it is shown that environmental sustainability improves as efficiency increases. This chapter analyzes the shortcomings of conventional energy development, deconstructs the conventional energy models, and proposes an energy model with technology options that are innovative, economically attractive, environmentally appealing, and socially responsible. It is shown that crude oil and natural gas are compatible with organic processes that do not produce harmful oxidation products.

Modern economic development is largely dependent on the consumption of large amounts of fossil fuels. For this reason, fossil fuel resources are depleting sharply. At the current consumption rate, oil use will reach its highest level within this decade. Humans today collectively consume the equivalent of a steady 14.5 trillion watts of power, and 80% of that comes from fossil fuel (Smalley 2005). Moreover, oil prices have skyrocketed and have had severe impacts on all economic sectors. Yet, oil is expected to remain the dominant energy resource in the decades to come in terms of its total share of world energy consumption (Figure 5.1). This analysis indicates that, except for hydropower, the consumption of all other resources will continue to rise. Worldwide oil consumption is expected to rise from 80 million barrels per day in 2003 to 98 million barrels per day in 2015 and then to 118 million barrels per day in 2030 (EIA 2006a).
Figure 5.1 Global energy consumption by fuel type (Quadrillion Btu) (EIA 2006a).
Transportation and industries are the major sectors whose oil demand will grow in the future (Figure 5.2). The transportation sector accounts for about 60% of the total projected increase in oil demand over the next two decades, followed by the industrial sector. Similarly, natural gas demand is expected to rise by an average of 2.4% per year over the 2003-2030 period, and coal use is expected to rise by an average of 2.5% per year. Total world natural gas consumption is projected to rise from 95 trillion cubic feet in 2003 to 134 trillion cubic feet in 2015 and 182 trillion cubic feet in 2030 (EIA 2006a). Oil demand in the residential and commercial sectors will also increase steadily. Residential oil consumption grows at a much lower rate than other sectors' oil demand, which means almost half of the world's population without access to modern forms of energy will continue to depend on traditional fuel resources.

Burning fossil fuels entails several environmental problems. Due to the increased use of fossil fuels, world carbon dioxide emissions are rising sharply and are expected to grow continuously in the future. Currently, the total CO2 emission from all fossil fuel sources is about 30 billion tons per year (Figure 5.3). The total CO2 emission from all fossil fuels is projected to reach almost 44 billion tons by 2030, more than double the 1990 level (EIA 2006b). At present, the CO2 emission level is at its highest in 125,000 years (Service 2005). The current technology development mode is completely unsustainable (Khan and Islam 2006a).
Figure 5.2 Delivered energy consumption by sector (quadrillion Btu) (EIA 2006a).
Figure 5.3 World CO2 emissions by oil, coal, and natural gas, 1970-2025 (adapted from EIA 2005).
Due to the use of unsustainable technologies, both energy production and energy consumption have an environmental downside, which may in turn threaten human health and the quality of life. Impacts on atmospheric composition, deforestation leading to soil erosion and siltation of water bodies, the disposal of nuclear fuel wastes, and occasional catastrophic accidents such as Chernobyl and Bhopal are some of the widely recognized problems.

The price of petroleum products is constantly increasing for two reasons. First, global oil consumption is increasing due to increased industrial demand, a higher number of vehicles, increasing urbanization, and a growing population. Demand from the industrial and transportation sectors is rising rapidly and is expected to keep rising in the future. The current oil consumption rate is much higher than the rate of discovery of new oil reserves. Second, fossil fuel use is subject to strict environmental regulations, such as low-sulfur fuel and unleaded gasoline requirements, which increase the price of the fuel. Figure 5.4 shows the rising trend of regular gasoline prices in the U.S. from early 2006 to date.

U.S. energy demand is increasing significantly, and imports will increase to meet it. Figure 5.5 indicates the projected net import of energy on a Btu basis to meet a growing share of total U.S. energy demand (EIA 2006a). Net imports are projected to constitute 32% and 33% of total U.S. energy consumption in 2025 and 2030, respectively, up from 29% in 2004.
Figure 5.4 Regular gasoline price (EIA 2006c).
Figure 5.5 Total energy production, consumption, and imports for U.S. from 1980-2030 (quadrillion Btu) (EIA 2006a).
Modern economic development is highly dependent on energy resources and their effective utilization. There is a great disparity between the rate at which fossil fuel is being used up and the rate at which new reserves are found. Moreover, new and renewable energy resources are not being developed at a pace sufficient to replace fossil fuels. Most of the energy resources that are claimed to replace fossil fuels depend on fossil fuels for their primary energy input. The alternatives being developed are based on chemical technologies that are hazardous to the environment. Modern technology development follows the degradation of chemical technologies (the Honey → Sugar → Saccharine → Aspartame syndrome) (Islam et al. 2006). Artificial light, which is the alternative to natural light, has several impacts on human health. Schernhammer (2006)
reported a modestly elevated risk of breast cancer after longer periods of rotating night work. Melatonin-depleted blood in premenopausal women exposed to light at night stimulates growth of human breast cancer xenografts in nude rats (Blask et al. 2005). The gasoline engine, which replaced the steam engine, became worse than its earlier counterpart. Modern gasoline and diesel engines use fuel that is refined with highly toxic chemicals and catalysts. The biodiesel that is touted to replace petroleum diesel uses similar toxic chemicals and catalysts, such as methanol and sodium hydroxide, producing exhaust gas similar to that of petroleum diesel. This pattern recurs in every technological development. The major problem facing energy development is that conventional policies are meant to maintain the status quo, and all the technological development that is taking place is anti-nature. This chapter provides a comprehensive analysis of global energy problems and possible solutions to meet them in a sustainable way.
5.2 Global Energy Scenario
The global energy consumption share from different sources is shown in Table 5.1. The analysis carried out by EIA (2006a) shows that oil remains the dominant energy source, followed by coal and natural gas. Nuclear energy production is projected to more than double by the year 2030. Renewable energy sources, such as biomass, solar, and hydro, will not increase significantly compared to total energy consumption. Renewable energy sources supply 17% of the world's primary energy. They include traditional biomass, large and small hydropower, wind, solar, geothermal, and biofuels (Martinot 2005). The total global energy consumption today is approximately 14.5 terawatts, which is equivalent to 220 million barrels of oil per day (Smalley 2005). The global population will settle somewhere around 10 billion by 2050 based on the current average population increase (WEC 2005). Per capita energy consumption is still rising in developing countries as well as in already developed countries. However, almost half of the world's population in the developing countries still relies on traditional biomass sources to meet their energy needs.
Table 5.1 World total energy consumption by source, 1990-2030 (quadrillion Btu). Values for 1990-2003 are history; values for 2010-2030 are projections.

    Source        1990    2002    2003    2010    2015    2020    2025    2030    Ave. annual % change
    Oil           136.1   158.7   162.1   191.0   210.5   229.8   254.9   277.5   2.0
    Natural gas   75.20   95.90   99.10   126.6   149.1   170.1   189.8   218.5   3.0
    Coal          89.40   96.80   100.4   132.2   152.9   176.5   202.8   231.5   3.1
    Nuclear       20.40   26.70   26.50   28.90   31.10   32.90   34.00   34.70   1.0
    Others        28.30   32.20   32.70   45.80   50.60   58.80   64.10   73.20   3.0
    Total         349.4   410.3   420.7   524.2   594.2   666.0   745.6   835.4   2.6

Source: EIA 2006b
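The "average annual % change" column of Table 5.1 can be verified from the endpoint values. A minimal sketch, assuming the figures are compound annual growth rates between the 2003 history values and the 2030 projections:

    # Compound annual growth rate (CAGR) check for Table 5.1 (2003 -> 2030, 27 years).
    values_2003 = {"Oil": 162.1, "Natural gas": 99.1, "Coal": 100.4, "Nuclear": 26.5}
    values_2030 = {"Oil": 277.5, "Natural gas": 218.5, "Coal": 231.5, "Nuclear": 34.7}

    for source, start in values_2003.items():
        cagr = (values_2030[source] / start) ** (1.0 / 27.0) - 1.0
        print(f"{source}: {100 * cagr:.1f}% per year")

    # Prints roughly: Oil 2.0%, Natural gas 3.0%, Coal 3.1%, Nuclear 1.0%,
    # matching the table's right-hand column.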
Smalley (2005) argued that to meet the energy requirements of almost 10 billion people on Earth by 2050, approximately 60 terawatts of energy would be required, which is equivalent to some 900 million barrels of oil per day. According to him, the major reservoirs of oil will have been used up by that time. Thus, there should be alternatives to fulfill such a huge energy requirement in order to maintain growing economic development. The point that Smalley does not make is that the need for an alternative source does not arise from the apparent depletion of petroleum basins. In fact, about 60% of a petroleum reservoir is left unrecovered even when a reservoir is called "depleted" (Islam 2000). It is expected that, with more appropriate technologies, what is currently recoverable will double. Similarly, it is well known that the world has more heavy oil and tar sands than "recoverable" light oil (Figure 5.6). It is expected that technologies will emerge so that heavy oil, tar sands, and even coal can be extracted and processed economically without compromising the environment. This will require drastic change in practically all aspects of oil and gas operations (Khan and Islam 2006b). The immediate outcome would be that all negative impacts of petroleum operations and usage would be eradicated, erasing the boundary between renewable and non-renewable energy sources.

Figure 5.6 indicates a similar trend for natural gas reserves.
Figure 5.6 Worldwide oil and natural gas resource base (after Stosur 2000).
Conventional technology is able to recover lighter gases, which are at relatively shallow depths. Recently, more focus has been placed on developing technologies to recover coal-bed methane. There are still large reserves of tight gas, Devonian shale gas, and gas hydrates. It is expected that new technologies will emerge to economically recover the deeper gas and hydrates so that natural gas can contribute significantly to the global fuel scenario. Currently, the problem with the natural gas industry is that it uses highly toxic glycol for dehydration and amines for the removal of carbon dioxide and hydrogen sulfide (Chhetri and Islam 2006a). The oxidation of glycol produces carbon monoxide, which is poisonous, and amines form carcinogens with other oxidation products. Hence, the use of processed natural gas is hazardous to human health and the environment. Chhetri and Islam (2006a) proposed the use of natural clay material for the dehydration of natural gas and the use of natural oil to remove carbon dioxide and hydrogen sulfide from natural gas streams.

Smalley (2005) also missed the most important reason why the currently used modus operandi in energy management is unsustainable. With current practices, thousands of toxic chemicals are produced at every stage of the operation. Many of these toxic products are manufactured deliberately in the name of value addition (Globe and Mail 2006). These products would have no room to circulate in a civic society were it not for the lack of long-term economic considerations (Zatzman and Islam 2006). These products are routinely touted as cheap alternatives to natural products. This has two immediate problems associated with it. First, natural products used to be the most abundant and, hence, the cheapest. The fact that they became more expensive and often rare has nothing to do with the free market economy or natural justice. In fact, it is testimony to the type of manipulation and market distortion that
have become synonymous with Enron and other companies that failed due to corrupt management policies. The second problem with touting toxic materials as cheap (hence, affordable) is that the long-term costs are hidden. If long-term costs and liabilities were incorporated, none of these products would emerge as cheap.

Most of the economical hydropower sources have already been exploited. There is huge potential to generate power from ocean thermal and tidal energy sources. These resources have great promise but have yet to be commercialized. They are most attractive for isolated islands, to which transporting power is difficult. Tidal energy also has great potential but depends on the sea tides, which are intermittent. Biomass is a truly renewable source of energy; however, sustainable harvesting and replenishment is a major challenge in utilizing this resource. It is the major energy source for almost half of the world's population, residing in the developing countries. It is also argued that increasing the use of biomass energy would limit the arable land available to grow food for an increasing population. The global share of biomass energy is less than 1% (Martinot 2005). Wind is a clean energy source, and wind energy development is increasing rapidly. However, it is highly location specific. It will be an effective supplement while other renewable energy resources mature to meet the global energy requirement for the long term. Nuclear energy has been the most debated source of energy. It has been argued that the development of nuclear energy reduces greenhouse gas emissions. However, a detailed analysis shows that nuclear energy creates unrecoverable environmental impacts because of nuclear radiation from wastes whose isotopes have very long half-lives (millions to billions of years). The nuclear disasters that the world has already witnessed cannot be afforded any more. The safe disposal of nuclear waste is yet to be worked out and is proving to be an absurd concept. Bradley (2006) reported that the disposal of spent fuel has been debated for a long time and has not been solved. Hydrogen energy is considered the major energy carrier for the 21st century, but the current mode of hydrogen production is not sustainable. Using electricity for electrolysis to produce hydrogen that is then converted back into electricity becomes a vicious cycle with very little or no net benefit. The search for hydrogen production by biological methods is an innovative idea that is yet to be established commercially. The most debated problems in hydrogen energy are the storage and transportation of the energy. However, hydrogen production using direct solar heating
holds good promise. Geothermal energy involves high drilling costs, reducing its economic feasibility, and geothermal electricity generation is characterized by low efficiency. However, the application of direct geothermal heat for industrial processes or other uses would contribute significantly. The problems and prospects of each of these energy sources are discussed below.
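Several of the power figures quoted in this section (14.5 TW as roughly 220 million barrels of oil per day; 60 TW as some 900 million barrels per day) can be sanity-checked with a unit conversion. A minimal sketch, assuming approximately 6.1 GJ of energy per barrel of oil equivalent (an assumed conversion factor, not one given in the text):

    # Convert a continuous power draw (TW) into an equivalent oil rate (barrels/day).
    # Assumes ~6.1 GJ per barrel of oil equivalent.
    GJ_PER_BARREL = 6.1
    SECONDS_PER_DAY = 86400.0

    def terawatts_to_barrels_per_day(tw: float) -> float:
        joules_per_day = tw * 1.0e12 * SECONDS_PER_DAY
        return joules_per_day / (GJ_PER_BARREL * 1.0e9)

    print(f"{terawatts_to_barrels_per_day(14.5) / 1e6:.0f} million bbl/day")  # ~205
    print(f"{terawatts_to_barrels_per_day(60.0) / 1e6:.0f} million bbl/day")  # ~850

Both results land within a few percent of the figures cited, which is as close as the rounded conversion factor allows.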
5.3 Solar Energy
Solar energy is the most important energy source in terms of the survival of life. Service (2005) reported that the earth receives 170,000 terawatts from the sun at every moment of every day. This means that every hour, Earth's surface receives more energy from the sun than humans use in an entire year. The only challenge is how to use this huge energy source in a sustainable way for the benefit of mankind. Moreover, the conventional way of utilizing solar electric energy is extremely inefficient. For example, solar electric conversion has an efficiency of approximately 15%, which is further reduced if global efficiency is considered (Khan et al. 2006a). Despite the availability of such huge power, current technological development is unable to exploit this resource to fulfill the global energy requirements. Service (2005) argued that solar energy is one of the most expensive renewable energy technologies on the market and far more expensive than the competition (Table 5.2).

Solar energy is mostly used for water heating and lighting. For water heating, direct solar energy is utilized. Even though the initial investment cost of a direct water heating system is comparatively high, it is a one-time investment that lasts for 20-30 years, which becomes economical in the long term. Using waste vegetable oil as the heating fluid instead of directly heating water can significantly increase the efficiency of the conventional heating system (Khan et al. 2006a). The maximum theoretical temperature for water heating is 100°C, whereas heating vegetable oil with solar concentrators can raise the temperature roughly threefold. The energy of the heated oil can then be transferred through a suitable heat exchanger for household and industrial processes. Thus, direct solar application has very high efficiency and no negative environmental impact. Khan et al. (2006a) developed a thermally driven refrigeration system that runs on a single-pressure refrigeration cycle, a thermally driven cycle using three fluids.
Table 5.2 Unit cost of various energy technologies.

    Energy technology    Cost per kWh ($)    Reference
    Solar                0.25-0.50           Service (2005)
    Wind                 0.05-0.07           Service (2005)
    Natural gas          0.025-0.05          Service (2005)
    Coal                 0.01-0.04           Service (2005)
    Nuclear              0.037               Uranium Information Center (2006)
The thermal driving force is derived from the direct use of solar energy. This system has no moving parts and an extremely high efficiency. Similarly, Khan et al. (2006b) developed a direct solar heating system that utilizes waste vegetable oil as a thermal fluid. With a parabolic solar collector, the temperature of the fluid can reach more than 300°C at 70% efficiency. The direct use of solar energy for heating, or for generating solar electricity, can contribute significantly to solving the global energy problem. Similarly, heat engines can be used to capitalize on heat differences at any place. Such heat engines can even run on a low temperature difference (Website 4). The temperature difference between a wood-fired stove and the outside air can be effectively used to run such heat engines. The same principle is utilized in ocean thermal technology, where the temperature difference between two thermoclines of the sea could potentially provide a difference of 10-20°C.

Solar energy is mostly used for lighting. Current technological development has gone in such a direction that we first shade the sunlight by erecting structures, and then artificial light is created using fossil fuels to light billions of homes, offices, and industries. Utilizing daylight, at least during the day, would save money and protect health and the environment. Solar lighting utilizes solar cells to convert sunlight into electricity. Millions of barrels of fossil fuels are burnt just to create artificial lighting, which causes severe environmental problems and impacts on living beings. Developing technologies to create healthy light for the nighttime from renewable sources, such as biogas light from wastes, would provide an environmentally friendly and zero-waste solution. Figure 5.7 shows the pathway of generating light from solar
energy. The efficiency of a solar panel is around 15%. Solar panels are made of silicon cells, which are very inefficient in hot climates and are inherently toxic because they contain heavy metals such as chromium and lead. The energy is stored in batteries for nighttime operation. These batteries are exhaustible and have a short life even if they are rechargeable, with a maximum efficiency of about 30% (Fraas et al. 1986). Conversion from battery power to fluorescent light has an efficiency of about 40-50%. Solar electricity with its current technology cannot be the solution for the global energy problem.

The use of electricity for various end uses is also debatable. For example, using electricity for cooking is not natural. Moreover, microwave cooking is a fashionable way of cooking in modern society, yet it has been reported that microwave cooking destroyed more than 97% of the flavonoids in broccoli and caused a 55% loss of chlorogenic acid in potatoes and a 65% loss of quercetin content in tomatoes (Vallejo et al. 2003). Several other compounds formed during electric and electromagnetic cooking are considered to be carcinogenic based on their pathway analysis. Hence, electricity is not good for cooking healthy food. However, the direct application of solar energy to generate industrial process heat, household water heating, and cooking and space heating could be a significant supplement to global energy systems. The direct application of solar energy has no negative environmental impacts, and the resource availability is huge, providing an unlimited opportunity.
5.4 Hydropower
Hydropower today provides approximately 19% of the world's electricity consumption (WEC 2001). Evidence of global environmental problems due to the use of fossil fuels to meet industrial, commercial, and domestic requirements is growing.
Solar (100%) → Solar PV (15%) → Battery (30%) → Fluorescent light (40-50%) → Total <5%

Figure 5.7 Flow chart showing the pathway of generating artificial light from natural sunlight.
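The "Total <5%" entry in Figure 5.7 follows from multiplying the stage efficiencies along the pathway. A minimal sketch, taking the midpoint of the 40-50% range for the fluorescent stage:

    # Overall pathway efficiency is the product of the stage efficiencies (Figure 5.7).
    stages = {"solar PV": 0.15, "battery storage": 0.30, "fluorescent light": 0.45}

    overall = 1.0
    for stage, efficiency in stages.items():
        overall *= efficiency

    print(f"Overall sunlight-to-light efficiency: {100 * overall:.1f}%")  # ~2.0%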
Figure 5.8 Cost of production vs. efficiency, environmental benefit, and value (Islam et al. 2006).
In order to achieve solutions to the environmental problems that we face today, we need long-term actions for sustainable energy development and resource management. Hydropower is one renewable energy source that provides clean energy options for economic development. However, large hydropower development has several environmental and socio-economic consequences. The upstream land submergence caused by large dams creates social as well as environmental problems. There would be a loss of biodiversity due to the large reservoirs. The change in the concentration of minerals due to the large volumes of water would alter soil-microorganism interactions, affecting biodiversity in the area. Submergence of land also displaces the inhabitants of the project area, and a change in land use pattern would significantly affect social features, albeit in a site-specific way. Only small, mini, and micro-hydro systems, which do not have such impacts on the environment, are sustainable and manageable by the local community (Pokharel et al. 2006). WEC (2001) reported survey results indicating that the world hydropower potential is approximately 2,360 GW, of which approximately 700 GW has already been exploited. The development of half of the total hydro potential could lead to a reduction of approximately 13% in greenhouse gas emissions by avoiding the use of fossil fuels (WEC 2001). Careful planning and development of hydropower projects is crucial in order to have minimum social and environmental impact in the project areas. Hydropower, especially small, mini, and micro-hydro development, is a key to addressing the global environmental problems caused by energy generation.
5.5 Ocean Thermal, Wave, and Tidal Energy
Ocean Thermal Energy Conversion (OTEC) utilizes the temperature difference between surface water and water at a depth of approximately 1,000 meters, a depth that provides a temperature difference of approximately 20°C. OTEC is a particularly useful energy source on islands, where supplying energy from the mainland is difficult due to transportation problems. OTEC is in the early stages of development and pilot studies. Even though it is a clean source of energy, governments and developers are reluctant to invest in the technology because it is not yet proven. The capital cost of OTEC is very high because of the heat exchangers, long pipes, and large turbines (Tanner 1995). The global efficiency of OTEC is lowered by the series of units involved, such as pumps, evaporators, condensers, separators, turbines, and generators. However, it is still a clean source of energy, could contribute significantly to islands and coastal areas, and has great potential for avoiding emissions of greenhouse gases.

Tidal and wave energy represent largely ignored renewable energy resources. Thorpe (1998) reported that wave energy alone could contribute approximately 10% of the current level of world electricity supply. The worldwide potential of wave power is on the order of 1-10 TW (Voss 1979). Even though wave energy is in the early stages of development, it holds good promise and is an effective alternative to greenhouse gas emissions from the use of fossil fuels. Similarly, the global potential of tidal energy is approximately 3 TW, but only in certain locations of the world do the natural conditions promise technical and economic viability (Voss 1979). More research and technological development towards utilizing these clean resources would have a great impact on the future energy scenario. Finally, the potential of using tidal energy directly, rather than for generating electricity, must be studied.
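The low efficiency noted above has a thermodynamic root: a 20°C difference places a hard Carnot ceiling on any OTEC cycle, before pump, turbine, and heat-exchanger losses are even counted. A minimal sketch, assuming a surface-water temperature of about 25°C (298 K), an illustrative value not given in the text:

    # Carnot upper bound on OTEC efficiency for a 20 K temperature difference.
    # The 25 C (298 K) surface temperature is an assumed illustrative value.
    T_hot_k = 298.0   # warm surface water
    delta_t_k = 20.0  # difference between surface water and water at ~1,000 m depth

    eta_carnot = delta_t_k / T_hot_k  # equivalent to 1 - T_cold/T_hot

    print(f"Carnot limit: {100 * eta_carnot:.1f}%")  # ~6.7%, before equipment losses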
5.6 Wind Energy
Wind energy is a clean and natural source of energy. The world market for wind energy technologies has grown dramatically in recent years. Before 2000, a small number of European companies dominated the production of wind turbines, but the situation
changed substantially when wind power development in the U.S., China, and India increased sharply. Global wind power generating capacity had reached 59,322 megawatts (MW) as of 2005, a 25% growth over a one-year period (GWEC 2006). Wind energy is the world's fastest-growing energy source on a percentage basis. The wind turbines installed in the year 2005 alone had a capacity of 11,769 MW (GWEC 2006). Figure 5.9 shows the wind power capacity of the world's top ten wind-producing countries. It was also reported in 2006 that by 2010, wind energy alone would save enough greenhouse gas emissions to meet one third of the European Union's Kyoto obligation (GWEC 2006). North America had the highest capacity installed in 2005, 37% higher than the previous year; similarly, the growth of wind energy capacity in Canada was 53% in 2005 (GWEC 2006). Asian countries, especially India and China, experienced strong growth of over 46% in installed capacity, bringing total capacity to 7,135 MW. In 2005 alone, the Asian continent accounted for 19% of new installations.

Wind energy generally cannot compete with fossil fuel resources if the environmental impacts of fossil fuels are not considered. Various policies are in place to support wind power development. The Clean Development Mechanism (CDM) of the Kyoto Protocol provides funds for wind energy development. However, significant bureaucratic formalities slow down the approval of projects. Base-lining, additionality, and certified emission reductions have to be fulfilled as prerequisites for CDM funding.
Figure 5.9 Top ten wind-producing countries by wind power capacity (GWEC 2006).
Besides this, investors must demonstrate that emission reductions are "additional" to any that would occur in the absence of the certified project activities. By 2003, only 49 projects out of 1,030 submitted had been approved (Pershing and Cedric 2002). Thus, it seems, the CDM model cannot help fulfill the Kyoto emission reduction requirements. Since wind energy is highly location specific, local community organizations, co-operatives, and local and provincial governments should take part in wind power development in order to achieve true sustainability (Chhetri et al. 2006c).

Despite the fact that wind energy is considered a clean energy source, there are several adverse environmental impacts. It creates noise in areas where large wind farms are developed. Wind turbines can scatter electromagnetic communication signals and create hazards for flying creatures as well as for airplanes, thus threatening lives. Moreover, converting wind to electricity is not a good-to-better option, meaning it is not a pro-nature process and, hence, not sustainable. It is widely held that wind energy can contribute significantly to reducing greenhouse gas emissions. However, merely decreasing greenhouse gas emissions is not an accomplishment. In fact, in new technology development, no technologies should be allowed to emit greenhouse gases.
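The 2005 capacity figures quoted above are mutually consistent, as a quick check shows (values from GWEC 2006 as cited in the text):

    # Consistency check on the 2005 wind-capacity figures cited in this section.
    capacity_2005_mw = 59322  # global capacity at the end of 2005
    growth_rate = 0.25        # reported one-year growth

    capacity_2004_mw = capacity_2005_mw / (1.0 + growth_rate)
    implied_additions_mw = capacity_2005_mw - capacity_2004_mw

    print(f"Implied 2004 capacity: {capacity_2004_mw:,.0f} MW")      # ~47,458 MW
    print(f"Implied 2005 additions: {implied_additions_mw:,.0f} MW") # ~11,864 MW

The implied additions come out within about 1% of the 11,769 MW of new turbines reported for 2005; the small gap reflects rounding of the 25% growth figure.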
5.7 Bio-energy
Almost half of the world's population, especially people in developing countries, is heavily dependent on biomass sources. Biomass is a very important source of energy, generally classified as wood fuels, agro-fuels, and wastes or by-products (Figure 5.10). Biomass should remain an important energy source for humanity because it is readily renewable. Replenishment of biomass in a sustainable way is essential to maintaining the largest carbon sink on the planet. IEA (2004) reported that the use of biomass in developing countries will increase from 886 mtoe in 1997 to 1,103 mtoe in 2020, an annual growth rate of 1%. Biomass resources are potentially the world's largest and most sustainable energy source, comprising 220 billion oven-dry tons (about 4,500 EJ) of annual primary production (Hall and Rao 1999). The annual bio-energy potential is about 2,900 EJ, though only 270 EJ could be considered available on a sustainable basis and at competitive prices.
Bio-energy
- Woodfuels (solid/liquid/gases). Solid: wood, chips, twigs, pellets, charcoal. Liquid: black liquor, pyrolytic oil. Gases: gases from gasification and pyrolysis.
- Agro-fuels (solid/liquid/gases). Solid: straw, stalks, husks, bagasse, charcoal. Liquid: ethanol, vegetable oil, methanol, pyrolytic oil. Gases: biogas, producer gas, pyrolytic gas.
- By-products/wastes (solid/liquid/gases). Solid: municipal solid waste (MSW). Liquid: sewage sludge, pyrolytic oil from MSW. Gases: sludge and landfill gas.

Figure 5.10 Bio-energy classification based on source.
The problem is not availability but the sustainable management and delivery of energy to those who are in real need.
5.8 Fuelwood
Fuelwood is a major form of biomass used for cooking, heating, and conversion into other forms of energy such as liquids or gases. Wood is the most abundantly available and widely used resource, especially in developing countries. Wood is either directly harvested for cooking and space heating or obtained from the waste streams of various industries. The combustion of wood in traditional stoves has a relatively low efficiency, around 14% (Shastri et al. 2002). Chhetri (1997) reported from experimental investigation that some precisely designed stoves reached an efficiency of up to 20%, and some improved cook-stoves have an efficiency of up to 25% (Kaoma and Kasali 1994). However, the conventional efficiency calculation is a local-efficiency calculation, considering only the energy input and heat output of the system itself. This method does not consider the utilization of by-products: the fresh CO2 that is essential for plant photosynthesis, the use of exhaust heat for household water heating, or the use of ash as a surfactant, a fertilizer, and a good source of natural minerals such as silica, potassium, sodium, calcium, and others.
Wood ash is a very rich source of silica, an important raw material for industrial applications. The ash also contains various minerals such as potassium, sodium, magnesium, and calcium. Conventionally, ash has been used as a fertilizer because of its high mineral content. It is also a truly natural detergent. Sodium or potassium can be extracted from wood ash and used as a saponification agent for making soap from vegetable oils and animal or fish fats. Fine wood ash is a very good raw material for making non-toxic toothpaste. Chhetri and Islam (2006a) reported that extracts of wood ash can be used as a natural catalyst for the transesterification of vegetable oil to produce biodiesel as a diesel substitute. Rahman et al. (2004) reported that maple wood ash has the potential to adsorb both arsenic (III) and arsenic (V) from contaminated aqueous streams at low concentration levels without any chemical treatment. Static tests showed up to 80% arsenic removal, and in various dynamic column tests the arsenic concentration was reduced from 500 ppb to lower than 5 ppb. Moreover, in eastern culture, ash is traditionally used as a water-disinfecting agent, possibly because of its mineral content. However, detailed scientific research on this topic is only beginning to surface (Rahman et al. 2006). Chhetri et al. (2006a) developed an energy-efficient stove fuelled by compacted sawdust that utilizes the exhaust heat of the flue gas for household water heating. The global efficiency of wood combustion in this stove is considered to be more than 90%. Thus, wood combustion in effectively designed stoves has one of the highest efficiencies among combustion technologies. (See Chapter 7, section 7.9 for more details.)

Waste energy is the second-largest source of biomass energy. The main contributors of waste energy are municipal solid waste (MSW), manufacturing waste, and landfill gas. Waste can be converted to energy in an anaerobic digestion system to produce biogas, which can be used for cooking and as fuel for transportation and lighting. Alcohol can also be a major energy source, such as ethanol from corn or sugar cane, as can biodiesel from vegetable oils. Charcoal and pellets are made from firewood. Agro-fuel is another important source of energy that is grown naturally, planted for food production purposes, or planted as energy crops. Ethanol, biogas, and pyrolytic gases are produced from agro-fuels and used for transportation and stationary engines. By-products of biomass and the wastes generated from domestic and public activities are also important sources of energy.
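Incidentally, the Rahman et al. (2004) column-test figures cited above imply a removal efficiency that is easy to quantify:

    # Removal efficiency implied by the maple-wood-ash column tests cited above.
    inlet_ppb = 500.0   # arsenic concentration entering the column
    outlet_ppb = 5.0    # upper bound on the concentration leaving the column

    removal_fraction = 1.0 - outlet_ppb / inlet_ppb
    print(f"Arsenic removal: at least {100 * removal_fraction:.0f}%")  # >= 99%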
Utilizing waste as an energy source would not only help to solve the energy problem but would also support a clean environment, leading to zero-waste living. There are several technologies that convert biomass into electricity, such as gasification and conventional steam cycles using pyrolytic gases. However, large amounts of energy are consumed in such processes, and the efficiency is significantly reduced, making them technically unattractive.
5.9 Bioethanol
Biofuels have been drawing considerable attention recently due to the disadvantages of fossil fuels (e.g., unsustainable supply and the large amounts of CO2 emitted during combustion). Among the liquid biofuels, ethanol makes the major contribution in the transportation sector. Lignocellulosic biomass contains 40-60% cellulose and 20-40% hemicellulose, two thirds of which are polysaccharides that can be hydrolyzed to sugars and then fermented to ethanol. Ethanol can easily be burned in today's internal combustion engines as a substitute for gasoline. Worldwide gasoline use in the transportation industry is about 1,200 billion liters/year (Martinot 2005), while total ethanol production as of 2004 was approximately 32 billion liters/year. Brazil is the leading country in using ethanol as a transportation fuel. There is thus a huge gap between gasoline use and the supply of ethanol as a substitute in the current market. Ethanol is being produced from various biomass sources such as corn, sugarcane, sweet sorghum, switch-grass, and other food grains.

Figure 5.11 shows a schematic of the hydrolytic fermentation process. During pretreatment, the biomass is sized, cleaned, and treated with low-concentration acid to hydrolyze the hemicellulose and expose the cellulose for hydrolysis. Table 5.3 shows the chemicals, temperatures, and pressures used for the pretreatment of biomass before converting it into ethanol. Different kinds of acids, bases, high-temperature steam, and carbon dioxide are used for pretreatment. In acid hydrolysis, dilute sulfuric acid, hydrochloric acid, or nitric acid is used; in alkaline treatment, sodium hydroxide and calcium hydroxide are most common. One of the major objectives of the pretreatments is to remove the lignin, which cannot be fermented, and to expose the hemicellulose for conversion into fermentable sugars.
Figure 5.11 Hydrolytic fermentation process. (Stages: biomass pre-treatment, fermentation, and separation into ethanol, water, and solid residuals.)

Table 5.3 Comparison of various pretreatment options.

Pre-treatment method | Chemicals used | Temperature/Pressure | Xylose yield
Dilute acid hydrolysis | acid | >160°C | 75-90%
Alkaline hydrolysis | base | - | 60-75%
Uncatalyzed steam explosion | - | 160-260°C | 45-65%
Acid-catalyzed steam explosion | acid | 160-220°C | 88%
Liquid hot water | none | 190-230°C, p > p_sat | 88-98%
Ammonia fiber explosion | ammonia | 90°C | 50-98%
CO2 explosion | CO2 | 56.2 bar | -

Source: Martinot 2005
Physical methods of pretreatment include the high-pressure steam explosion process, CO2 explosion, nitrogen explosion, and hot water treatment. Biological pretreatment includes treatment of the biomass by fungi. After the pretreatment, the cellulose is hydrolyzed into glucose, a reaction generally catalyzed by dilute or concentrated acid or by enzymes. The most common method of hydrolysis uses
concentrated hydrochloric or sulfuric acid. Enzymatic hydrolysis is technically feasible and environmentally sound, but acid hydrolysis is more widely commercialized. The conventional ethanol production process is highly toxic because it utilizes various toxic chemicals in a series of steps. Hydrochloric and sulfuric acids, synthetic sodium hydroxide, and calcium hydroxide are highly toxic and corrosive chemicals. The enzymatic hydrolysis method uses various synthetic surfactants to accelerate the reaction. Burning such contaminated fuel would emit toxic pollutants and pose severe environmental and health problems. These toxic chemicals also contribute to water pollution that makes water treatment unsafe and expensive. Pretreatment, hydrolysis, and other processes use high heat and pressure, which require fossil fuel combustion. Thus, the conventional method of using fossil fuels as the primary energy input for ethanol production makes the whole process unsustainable.
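The hydrolysis-fermentation route described above can be quantified with basic stoichiometry: glucose ferments as C6H12O6 → 2 C2H5OH + 2 CO2, giving a theoretical yield of about 0.51 g of ethanol per gram of glucose. The sketch below works through this; the 50% cellulose fraction is an assumed mid-range value, not a figure from the text.

```python
# Theoretical ethanol yield from glucose fermentation:
#   C6H12O6 -> 2 C2H5OH + 2 CO2

M_GLUCOSE = 180.16          # g/mol
M_ETHANOL = 46.07           # g/mol
M_CELLULOSE_UNIT = 162.14   # g/mol of anhydroglucose unit in cellulose

# 2 mol ethanol per mol glucose
yield_per_g_glucose = 2 * M_ETHANOL / M_GLUCOSE
print(f"Theoretical yield: {yield_per_g_glucose:.3f} g ethanol / g glucose")  # ~0.511

# Assumed example: 1000 kg biomass at 50% cellulose, fully hydrolyzed.
# Hydrolysis adds one water per glucose unit: 162.14 g of cellulose
# monomer becomes 180.16 g of glucose.
biomass_kg = 1000.0
cellulose_fraction = 0.50   # assumed, mid range of the 40-60% quoted above
glucose_kg = biomass_kg * cellulose_fraction * (M_GLUCOSE / M_CELLULOSE_UNIT)
ethanol_kg = glucose_kg * yield_per_g_glucose
print(f"Theoretical ethanol from {biomass_kg:.0f} kg biomass: {ethanol_kg:.0f} kg")
```

Real yields fall well below this theoretical ceiling once pretreatment losses and fermentation inhibitors are accounted for, which is part of the sustainability concern raised above.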
5.10 Biodiesel
Considerable attention has been paid to developing biodiesel as a replacement for petrodiesel in order to reduce environmental problems. Even though biodiesel is produced from renewable resources such as vegetable oil, the pathway of conventional biodiesel production is similar to that of petrodiesel. The use of excessive heat, chemicals, and catalysts adds toxicity to the resulting biodiesel, which makes the process expensive and highly unsustainable and creates adverse impacts on the environment. Various additives used in biodiesel production to inhibit the formation of sediments and other insolubles make the biodiesel even worse. The formation of sediment or gum can result in operational problems, with plugging and fouling of the end-use equipment. An EPA (2002) report indicates that even though biodiesel produces less toxic pollution than petroleum diesel, its combustion still produces toxic emissions similar to those of petrodiesel, such as benzene, acetaldehyde, toluene, formaldehyde, acrolein, PAHs, and xylene (Table 5.4). Chhetri and Islam (2006b) developed a process that uses natural catalysts and non-toxic chemicals for both the catalyst and the alcoholysis medium. Natural catalysts, such as hydroxides from wood ash, and methanol from a renewable green source made the biodiesel process truly green.
Table 5.4 Difference in average toxic effects of two biodiesel blend levels (average % change compared to base fuel).

Toxin | 20% biodiesel | 100% biodiesel
Acetaldehyde | -7.10 | -14.40
Acrolein | -1.50 | -8.50
Benzene | 16.50 | -0.80
1,3-Butadiene | 39.00 | -12.30
Ethylbenzene | -44.90 | -61.00
Formaldehyde | -7.80 | -15.10
n-Hexane | -48.70 | -12.10
Naphthalene | -13.80 | -26.70
Styrene | -3.70 | 39.30
Toluene | 19.90 | 13.30
Xylene | -12.30 | -39.50

Source: EPA 2002
The feedstock has a great role to play in reducing the high cost of biodiesel. Non-edible oils, such as Jatropha oil, could revolutionize the biodiesel industry. (See Chapter 7, section 7.8 for more details.)
5.11 Nuclear Power
Nuclear energy comes from uranium, a metal mined from natural ores. Nuclear energy is produced by a fission process, in which the splitting of nuclei produces heat. Naturally occurring uranium consists of approximately 99.28% uranium-238 and 0.71% uranium-235. Uranium-235 and plutonium-239 are the fissile materials that sustain the fission reactions in today's nuclear power plants (uranium-238 is a fertile material that can be converted into plutonium-239). However, uranium-235 is the main fissile material used for energy production (Letcher and Williamson 2004). In the nuclear reactor, the energy produced is
used to heat water to make steam, and the steam turbine runs the generator to produce electricity. Figure 5.12 is the schematic of a uranium processing facility. The dotted line shows the alternative path of reprocessing the spent fuel and sending it back to the enrichment unit for re-use as fuel. Uranium-235 is enriched from a concentration of 0.7% to approximately 2-5%. EIA (2005) reported that most of the ores in the United States contain 0.05-0.3% uranium oxide (U3O8). Enrichment of uranium involves processes such as gaseous diffusion and centrifugal isotope separation. Because the uranium separation achieved per diffusion stage is extremely low, the gas must pass through some 1,400 stages in order to obtain a product with a concentration of 3-4% (Uranium Enrichment 2006). EIA (2006b) reported that total world nuclear energy consumption was 2.523 trillion kWh. Based on the current trend, the total nuclear power development projected for 2030 is 3.239 trillion kWh. It seems that nuclear energy will not take the major share of energy supply during the projected period. One of the major concerns with nuclear fuel is the safe disposal of spent fuel, whether from the reactors themselves or from the waste of reprocessing plants. The gaseous diffusion process can release UF6 from the "enriched" uranium, and uranium processing emits highly radioactive gases and radiation, such as α and β particles. The Wise Uranium Project (2005) reported that the half-lives of the natural uranium isotopes are 244,500 years for U-234, 7.03 × 10^8 years for U-235, and 4.468 × 10^9 years for U-238. But when uranium is "enriched," it might have longer half-lives than natural uranium, and nature does not recognize such "enriched" uranium and does not degrade it. The α radiation from uranium processing
Figure 5.12 Flow diagram for electricity production from uranium.
has a high possibility of causing cancer, the danger being higher with enriched uranium. The combustion products of depleted uranium, such as uranium trioxide (UO3), do not behave like natural uranium and pose several health hazards (Salbu et al. 2005). The proponents of nuclear power plants claim that nuclear power is a clean energy source. However, nuclear power plants pose environmental threats in many ways. First, the nuclear reactors release radioactive wastes that are extremely harmful to humans and the environment. The extraction of uranium, mining, milling, and the conversion to uranium hexafluoride also produce radioactive waste. These processing steps also use huge amounts of fossil fuels, producing large amounts of carbon dioxide and other air emissions. Thus, nuclear power plants are not reducing global CO2 emissions; rather, they add hazardous pollution to the environment. The sequestration of spent uranium inside geological traps will pose long-term effects on biodiversity and the global environment. Moreover, the cooling systems require amounts of water much greater than those used by any fossil fuel plant, and the impacts of the hot water discharged into large water bodies and aquatic systems are more significant than those of any other fuel processing technology. Most importantly, the possible failure of a nuclear power plant could create catastrophic accidents with severe consequences for living beings. It has been argued that nuclear power plants are among the most efficient technologies. The local efficiency of nuclear energy conversion has been reported to reach up to 50% (Ion 1997). However, uranium extraction requires an expensive leaching process during mining as well as hundreds of stages of gaseous diffusion or centrifugation for enrichment. Conventional mining has an efficiency of about 80% (Gupta and Mukherjee 1990). There is also a significant loss in the conversion of uranium to UF6, the efficiency of which is usually considered less than 70%, and the enrichment efficiency is less than 20%. Considering 50% thermal-to-net-electric conversion and 90% efficiency in transmission and distribution, the global efficiency of the nuclear processing chain is less than 5%. If we consider the environmental impact caused by radioactive hazards and the cost of the overall system, including the disposal of spent uranium fuel, the global efficiency is even lower. Nuclear technology uses huge amounts of fossil fuel in all of its processes; thus, nuclear technology will last only as long as fossil fuels remain usable. Moreover, the total uranium available in the world is considered exhaustible. However, as
a natural process, the formation of uranium should continue forever because every process on earth is reversible. Service (2005) reported that, despite numerous discussions in the political arena, not a single nuclear power plant was built in the U.S. after 1973. He further argued that even supplying one-third of the projected energy demand, approximately 10 terawatts (TW), by 2050 would require 10,000 nuclear power plants, each producing a gigawatt of power. This is equivalent to opening one reactor every other day for the next 50 years, which is beyond imagination. High upfront capital costs, waste disposal, corporate liability, and nuclear proliferation are the major concerns for developing nuclear energy. Thus, nuclear energy is not going to be the solution for the global energy supply, contrary to what many argue today.
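The sub-5% global efficiency quoted above follows from chaining the stage efficiencies given in this section. A minimal sketch of that arithmetic, using the chapter's stage values and the authors' simple multiplicative model:

```python
# Chained stage efficiencies for nuclear power, using the values
# quoted in this section (mining 80%, conversion <70%, enrichment <20%,
# thermal-to-electric 50%, transmission/distribution 90%).

stages = {
    "mining":                    0.80,
    "conversion to UF6":         0.70,
    "enrichment":                0.20,
    "thermal to net electric":   0.50,
    "transmission/distribution": 0.90,
}

global_efficiency = 1.0
for name, eff in stages.items():
    global_efficiency *= eff  # each stage multiplies the losses

print(f"Global efficiency: {global_efficiency:.1%}")  # about 5%, as stated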
5.12 Geothermal Energy
Geothermal energy is the energy stored as heat in the earth's interior. This energy source, available as natural steam, hot brine, and hot water, has been exploited for decades to generate electricity and for space heating and industrial processes. Geothermal energy is extracted by transferring the heat of the earth's interior by various methods. The heat is transferred from depth to sub-surface regions first by conduction and then by convection, with geothermal fluids acting as the heat carrier. These geothermal fluids are primarily rainwater that has penetrated into the earth's crust from recharge areas, has been heated on contact with hot rocks, and has accumulated in aquifers at high pressures and at temperatures sometimes above 300°C. The temperature in the core of the earth is around 4000°C. Active volcanoes erupt lava at about 1200°C, and thermal springs, numerous on land and also present on the ocean floor, can reach 350°C (WEC 2001). Rao and Parulekar (1999) mentioned that the average geothermal gradient inside the earth is 30°C per 1,000 m of depth. The hydro-geothermal energy sources are available in the forms of hot water, hot brine, and steam at depths less than 3,000 m. Petro-geothermal energy resources consist of hot, dry rocks at depths below 2,000 m. The temperature available in the form of a mixture of hot water and steam is up to 200°C. The efficiency of electricity production from a geothermal source is in the range of 10-17% (Barbier 2002). Geothermal fluid contains various particulates and dissolved impurities, which require separation (e.g., centrifugation) to protect the turbine from erosion and scaling.
It is clear that the conversion of geothermal energy to electricity is not an attractive option. However, the direct use of the heat for space heating and various industrial processes would significantly increase the thermal efficiency of a system. CO2 reduction due to the use of geothermal energy would be an added advantage. Geothermal energy comes from a free source, which also makes it environmentally friendly. The efficiency can be significantly increased by using thermal fluids such as vegetable oil, which can be heated to about 400°C (Khan et al. 2006a). Considering thermal application without electricity production, the global efficiency of geothermal energy could be higher than 60%. However, geothermal energy involves high drilling costs, which also depend on the type of geology, the salinity of the fluid, and the particulates in the fluid. WEC (2001) reported that the total worldwide use of geothermal power contributes both to saving energy (around 26 million tons of oil per year) and to reducing CO2 emissions (80 million tons of CO2 per year, compared with equivalent oil-fuelled production).
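The average geothermal gradient of 30°C per 1,000 m cited from Rao and Parulekar (1999) gives a quick estimate of the drilling depth needed to reach a target fluid temperature, which is one reason drilling costs dominate. A minimal sketch, assuming a 15°C mean surface temperature:

```python
# Depth needed to reach a target temperature with a linear
# geothermal gradient of 30 degC per 1000 m (Rao and Parulekar 1999).

SURFACE_T = 15.0            # degC, assumed mean surface temperature
GRADIENT = 30.0 / 1000.0    # degC per metre

def depth_for_temperature(target_t_c: float) -> float:
    """Depth (m) at which the linear gradient reaches target_t_c."""
    return (target_t_c - SURFACE_T) / GRADIENT

for t in (100.0, 150.0, 200.0):
    print(f"{t:.0f} degC -> ~{depth_for_temperature(t):,.0f} m")
# 100 degC -> ~2,833 m; 150 degC -> ~4,500 m; 200 degC -> ~6,167 m
```

Real gradients vary strongly with geology, so this linear model only bounds the order of magnitude.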
5.13 Hydrogen Energy
Hydrogen is a clean fuel and an energy carrier that can be used for a broad range of applications, from transportation to electricity production. Hydrogen is high in energy value and produces no pollution after combustion. Hydrogen is found in many organic compounds such as gasoline, natural gas, methanol, propane, and biomass. Hydrogen is produced by the electrolysis of water or by applying heat to hydrocarbons, which is called hydrogen reforming. Currently, most hydrogen is produced by reforming natural gas. Hydrogen is used to produce electricity in fuel cells. In the conventional cycle, electricity is used to electrolyze water to separate hydrogen, and the hydrogen produced is in turn used to generate electricity. Producing electricity by first using electricity for electrolysis is not an attractive option (Mills et al. 2005). They reported that very high temperature reactors with direct solar heating for hydrogen production are attractive options. In the sulfur-iodine process, there are virtually no by-products or harmful emissions (Figure 5.13). The reaction products, such as sulfuric acid, can be decomposed at about 850°C, and the hydrogen iodide formed during the reaction can be decomposed at about 400°C to release hydrogen.
Figure 5.13 High temperature reactor with desalination plant (Mills et al. 2005). (Inputs: solar energy and water to the sulfur-iodine cycle of a very high temperature reactor; outputs: hydrogen and oxygen, with waste heat driving a desalination plant.)
The system's efficiency could be further enhanced if the waste heat generated during the reaction were utilized for other purposes, such as desalination. It was reported that very high temperature reactors could achieve an efficiency of up to 70% (Mills et al. 2005). The International Energy Agency (2004) projected that if current policies were not changed, the world's energy demand in 2030 would be 60% higher than in 2003, and CO2 emissions would increase by even more than 60%. For this reason, non-carbon based energy has gained more attention. Because energy combustion is responsible for producing various greenhouse gases, including CO2, a non-carbon energy source, such as hydrogen, that does not produce CO2 holds good promise. Hydrogen can be used for both mobile and stationary applications. Hydrogen is the stored fuel in fuel cells for transportation vehicles, and water is the emission product. Despite being attractive from the environmental point of view, the current cost of hydrogen is very high for production, storage, transportation, and distribution. Since hydrogen is a secondary source of energy, the primary energy source and the primary energy input are two important factors that determine its economy and feasibility. Reforming natural gas uses high heat and catalysts, making the process problematic to the environment due to toxic effects (Khan and Islam 2006b). Chemicals used in natural gas processing, such as glycol, produce carbon monoxide during combustion, and the oxidation products of amines are carcinogenic (Chhetri and Islam 2006a). Because natural gas has direct application as a fuel, reforming natural gas to produce
hydrogen is not a feasible solution. Gasification of biomass has also been considered one of the major future sources of hydrogen production. However, biomass gasification and the breakdown of the gas into hydrogen would significantly reduce the efficiency of the system. The only feasible option would be to use waste biomass for this purpose. Hydrogen production from bacteria or algae through photosynthesis could be a prospective option. This process may become economical in the long term because it does not need a high energy input, and its global efficiency could be significantly high. Ramesohl and Merten (2006) argued that restrictions might limit the growth of hydrogen production from renewable energy sources. This is a wrong assumption: renewable energy sources, such as solar, can be a good option for splitting water into hydrogen and oxygen so that the hydrogen can be used for the desired application. In addition to the environmental and energy issues, safety concerns regarding production, storage, transportation, and use should be taken into consideration so as to make the system sufficiently secure. Hydrogen will play a major role in the energy scenario in the years to come.
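The "electricity to electrolyze water to again produce electricity" cycle criticized above can be illustrated with a round-trip efficiency chain. The stage values below are assumed, typical-order-of-magnitude figures, not numbers from this chapter:

```python
# Round-trip efficiency of using electricity to make hydrogen and
# hydrogen to make electricity again. Stage values are assumed,
# typical-order-of-magnitude figures.

electrolysis_eff = 0.70   # electricity -> hydrogen (assumed)
storage_eff      = 0.90   # compression/storage losses (assumed)
fuel_cell_eff    = 0.50   # hydrogen -> electricity (assumed)

round_trip = electrolysis_eff * storage_eff * fuel_cell_eff
print(f"Round-trip efficiency: {round_trip:.0%}")  # ~32%: most energy is lost
```

Under these assumptions roughly two-thirds of the input electricity never comes back, which is the quantitative core of the "vicious cycle" argument.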
5.14 Carbon Dioxide and Global Warming

Energy production and use are considered major causes of greenhouse gas emissions. The emission of greenhouse gases, particularly CO2, is of great concern today. Even though CO2 is considered one of the major greenhouse gases, the production of natural CO2 is essential for maintaining life on Earth. Note that not all CO2 is the same, and plants do not accept all types of CO2 for photosynthesis. There is a clear difference between the old CO2 from fossil fuels and the new CO2 produced from renewable biofuels (Dietze 1997). The CO2 generated from burning fossil fuel is an old and contaminated CO2. Because various toxic chemicals and catalysts are used for oil and natural gas refining, the danger of generating CO2 with higher isotopes cannot be ignored (Islam 2003; Chhetri et al. 2006a). Hence, it is clear that CO2 itself is not a culprit for global warming; rather, the industrial CO2 that is contaminated with catalysts and chemicals likely becomes heavier with higher isotopes, and plants cannot accept this CO2. Plants always accept the lighter portion of CO2 from the atmosphere (Bice 2001). Thus, CO2 has to be distinguished between natural and
industrial CO2, based on the source from which it is emitted and the pathway that the fuel follows from the source to combustion. Chhetri and Islam (2006c) showed that even though the total CO2 in the atmosphere is increasing, natural CO2 has been decreasing since the industrial revolution (Figure 5.14). They further argued that industrial CO2 is responsible for global warming (Chhetri and Islam 2006c). Thus, generalizing CO2 as a precursor for global warming is an absurd concept and is not valid. (See Chapter 7 for more details.)
5.15 Nuclear Energy and Global Warming

Nuclear energy has also been promoted as one of the clean alternatives for reducing the pressure on fossil fuel resources. A recent summit of the group of eight industrial nations (G8) also endorsed nuclear energy as non-carbon. Based on the current trend of nuclear power development, EIA (2006a) indicates that nuclear power will not take the major share of the global energy supply during its projection period to 2030. As stated previously, there are several problems with nuclear power: very high initial costs for building nuclear power plants, an expensive enrichment process, environmental impacts during mining, milling, and operations, and a great threat to the safety of the communities close to the power plants. The proliferation of nuclear technology will have severe impacts on public health, with a high possibility of causing cancer. The
Figure 5.14 Total, industrial, and natural CO2 trend (redrawn from Chhetri and Islam 2006c). (Axes: CO2 level vs. time; curves: total and industrial CO2 rising, natural CO2 declining.)
disposal of spent uranium is a great concern for the environment. The Associated Press (2006) reported that the demolition of the Trojan nuclear power plant in Oregon, which started in 2006, will take until 2024 to fully decommission, yet the federal depository for the disposal of the spent fuel has not been planned. The public security of nuclear power plants is a matter of debate, because the impact of the Chernobyl nuclear accident is multiplying each day. Thus, nuclear energy cannot be the energy solution, except in the extremely short term and at the cost of humanity. Godoy (2006) reported that the extremely hot European summer of 2006 restricted nuclear energy generation and showed the limits of nuclear power. The heat waves led authorities in France, Germany, Spain, and elsewhere in Europe to override their own environmental norms on the maximum temperature of the water drained from the plants' cooling systems. The justification offered was the need to guarantee the provision of electricity for the country. Of France's 58 nuclear power plants, 37 are situated on the banks of rivers, which they use as outlets for water from their cooling systems. The environmental rules limit the maximum temperature of this wastewater in order to protect river flora and fauna; hot water likely leads to high concentrations of ammonia, which is potentially toxic to river fauna. It was recently reported that one of eight Spanish nuclear reactors was shut down due to the high temperatures recorded in the river Ebro, into which the reactor drained the water used in its cooling system. This indicates that nuclear energy has several limitations to its use. Generating electricity at the cost of the environment is an extremely short-term approach, and it is truly anti-natural. This is another example of misinformation given to the public by the scientific community, which touted nuclear energy as cleaner energy. The nuclear industry still lacks solutions to several environmental and public safety concerns. All parts of the nuclear fuel cycle, from uranium mining, milling, enrichment, and processing, emit hazardous radiation. Nuclear accidents, leaks, and releases of radiation are common in the nuclear industry. There is no safe level of nuclear radiation in any of the nuclear activities, and there is not a single place on the earth where the waste of enriched uranium can be safely disposed. Despite the industry's repeated assurances of safe waste disposal, the problem remains unsolved and never will be solved. No more explanation is needed to show that global warming is revealing the limits of
nuclear power plants, and that nuclear power plants are destroying the natural environment irreversibly. Nuclear power is promoted as a solution to global warming on the grounds that CO2 is not emitted from the power plant itself. However, considerable fossil fuels are used during mining, milling, fuel enrichment, manufacturing, and plant and equipment construction. Considering the life cycle of CO2 emissions from the nuclear power system, nuclear power plants emit a significant amount of CO2. Mortimer (1989) reported that nuclear power releases 4-5 times more CO2 than equivalent power production from renewable energy systems. Besides this, the nuclear industry generates various wastes, such as gloves, clothing, tools, and equipment, that are contaminated with radioactivity, and the disposal of such waste remains contentious. Hence, ignoring the history of nuclear issues and the fundamental realities of the nuclear fuel cycle, power generation from nuclear plants is an absurd model. Nuclear energy neither solves the global energy problem nor helps in reversing global warming.
5.16 Impact of Energy Technology and Policy
Currently, fossil fuels provide approximately 85% of the world's energy demand. According to the projection of EIA (2006d), the world's total energy consumption will rise by 59% between 1999 and 2020, and the same report predicts that carbon dioxide emissions will increase by 60% over those 20 years. It is clear that fossil fuels will remain the mainstream of global energy supply and demand, and supplying this huge quantity of energy is a big challenge. If the current mode of fossil fuel use continues, a further series of environmental problems, including global warming, is inevitable (Chhetri and Islam 2006c). However, recent studies indicate that the toxicity and other negative effects attributed to fossil fuels are in fact the results of environmental problems that have emerged from oil refining and natural gas processing (Khan and Islam 2006b; Chhetri et al. 2006a). This global energy demand must be met in a sustainable way, with few or no impacts on the natural environment. However, the current development trend does not seem to move in that direction. According to Islam (2006), each technological development model has gone from bad to worse, similar to what happened with food products. (We call it the degradation of the following chemical
technology chain: Honey → Sugar → Saccharine → Aspartame.) A paradigm shift in the conventional energy conversion technologies and policy is necessary in order to achieve true sustainability in technology development and environmental management (Khan and Islam 2006a). It is often thought that running vehicles on hydrogen fuel cells would help reduce oil consumption, as the technology does not require gasoline and water is the only emission (Loven 2006). Hydrogen is indeed a great fuel; however, as mentioned earlier, most of the electricity generated from hydrogen obtained by electrolysis is spent on the electrolysis itself, creating a vicious cycle of generating electricity by using electricity. Moreover, fuel cells are extremely expensive and toxic, and hydrogen transportation requires a new distribution system to replace today's natural gas distribution stations. A recent study shows that the most expensive components of fuel cells are the high-purity catalysts, pure hydrogen, and synthetic membranes (Mills et al. 2005). These three elements are unnatural and should be replaced with natural catalysts and bio-membranes. If hydrogen is produced using direct solar energy, which is the most abundant and free energy source on the earth, it will be a feasible option. Production of hydrogen from biological sources using bacteria is another option that may be sustainable in the long term. However, these technologies, despite their good prospects, have received little attention from the scientific community and the general public. Ethanol is another strong alternative. Ethanol is produced from biomass sources such as corn, sugarcane, switchgrass, and other cellulose sources. Currently, it does not appear likely that ethanol can compete with gasoline. Moreover, the ethanol conversion technologies are not truly sustainable, because they are usually based on toxic chemical additives that make the whole process unsustainable. For example, ethanol production from switchgrass involves acid hydrolysis as a major process. Bakker et al. (2004) reported that concentrated sulfuric acid at a 1:1-4:1 acid-to-biomass ratio is generally used for breaking down the biomass before it is sent to fermentation. Because of the use of highly toxic acids, fermentation inhibitors such as 5-hydroxymethylfurfural (5-HMF) and furfural are produced, which reduce the conversion efficiency significantly. The use of toxic acids for ethanol production creates several environmental problems. Moreover, there are
other shortcomings that the ethanol industry faces today, such as inadequate infrastructure, higher fuel cost, and conversion system performance. Only biological methods of fermentation may stand as sustainable ethanol production technology in the future. A strong policy intervention guided by science and good intention is necessary to develop the ethanol-based industry. The U.S. policy of replacing more than 75% of oil imports from the Middle East by 2025 will never be realized with the status quo in technological development. The projection of the Energy Information Administration indicates that the U.S. will import more than 70% of its oil in 2025, compared to 62% in 2005. The corn- or sugarcane-based ethanol industry is also blamed for competing with the food industry. However, waste cellulose materials could be used as feedstock to avoid such competition. Brazil is one of the best examples of a country where ethanol and other biofuels eliminated dependence on foreign oil imports.
5.17 Energy Demand in Emerging Economies
Asia has emerged as the prospective biggest consumer of energy. In India and China, both characterized by large populations and high economic growth rates, the demand for energy is increasing dramatically. According to Kuroda (2006), over the last 10 years China grew at an average annual rate of 9.1% and India at a rate of 6.3%. Most forecasters see continued rapid growth in these countries in the years ahead - likely 8-9% in China and 7-8% in India. The projection of the Asian Development Bank showed an estimated average GDP growth of 6.6% across the developing economies of Asia and the Pacific, which is expected to continue in the coming years. In order to maintain this economic growth, the developing countries need large amounts of energy, especially electrical energy to run industrial operations. Butler (2005) reported that the Chinese economy grew by 49% between 1999 and 2004 and that China's oil imports in 2004 were 990,000 barrels a day higher than in 2003. Similar consumption patterns can be expected in India in the years to come. The energy demand is also increasing significantly in Taiwan, Korea, Thailand, Indonesia, Malaysia, and other East Asian countries. This global energy demand will put great pressure on fossil fuel resources, with worsening environmental consequences.
5.18 Conventional Global Energy Model
Richard E. Smalley, a Nobel Laureate in chemistry in 1996, forecasted that, based on current population growth and the increase in energy demand, the global energy demand in 2050 for a population of 10 billion would be approximately 60 terawatts (TW), which is equivalent to about 900 million barrels of oil per day. The projection in Figure 5.15 indicates that sources such as solar, wind, and geothermal could play significant roles in the global energy supply (Smalley 2003). The second significant energy source suggested is nuclear energy. As discussed earlier, nuclear power can never be the solution to the global energy problem; rather, it deteriorates the global environment with impacts of a magnitude higher than the current level. Energy sources such as wind, solar, and geothermal need to be utilized because they are freely available in nature. However, Service (2005) argued that using 10%-efficient solar panels to harvest 20 TW of energy would require 0.16% of Earth's land. Moreover, the current photovoltaic technology has long-term environmental impacts. Heavy metals such as lead and chromium, as well as silicon, are used to make solar panels, and the storage batteries contain several toxic chemicals. With increasing awareness of environmental impacts, there cannot be a safe place to dispose of these anti-natural chemicals. None of the process reactions are reversible, and there is practically no hope of rehabilitating these chemicals back into nature. Thus, photovoltaics neither solve the energy problem nor are they friendly to the environment. Geothermal energy
Figure 5.15 Energy mix for 2050 (Smalley 2003).
has vast potential, but electricity production from this energy source suffers significant efficiency losses.
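Smalley's equivalence between roughly 60 TW and 900 million barrels of oil per day can be sanity-checked with a standard barrel-of-oil energy content; the 6.1 GJ/barrel factor below is an assumed conversion, not taken from the text:

```python
# Sanity check of Smalley's figure: 60 TW expressed as barrels of oil per day.
# Assumes ~6.1 GJ of energy per barrel of oil equivalent.

POWER_W = 60e12            # 60 TW
SECONDS_PER_DAY = 86_400
GJ_PER_BARREL = 6.1        # assumed barrel-of-oil-equivalent factor

energy_per_day_j = POWER_W * SECONDS_PER_DAY
barrels_per_day = energy_per_day_j / (GJ_PER_BARREL * 1e9)
print(f"{barrels_per_day/1e6:.0f} million barrels/day")  # ~850, close to 900
```

The agreement to within a few percent suggests the two figures were derived from essentially this conversion.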
5.19 Renewable vs. Non-renewable: No Boundary as Such

The demand for fossil fuels, such as oil, coal, and natural gas, will still be significant over the next several decades. Figure 5.16 shows that as the natural processing time increases, the energy content of the natural fuel increases from wood to natural gas. The average energy value of wood is 18 MJ/kg (Hall and Overend 1987), and the energy contents of coal, oil, and natural gas are 39.3 MJ/kg, 53.6 MJ/kg, and 51.6 MJ/kg, respectively (Chhetri and Islam 2008). Moreover, this shows that renewable and non-renewable energy sources have no boundary. It is true that solar, geothermal, hydro, and wind sources are being renewed every second in the global natural cycle. Fossil fuel sources are solar energy stored by trees in the form of carbon, which, under temperature and pressure, emerges as coal, oil, or natural gas after millions of years. Biomass is renewed on a scale of a few days to a few hundred years (as a tree can live up to several hundred years). These processes continue forever. There is not a single point where fossil fuel has started or stopped its formation. So, why are these fuels called non-renewable? The current
Figure 5.16 Energy content of different fuels (MJ/kg) as a function of natural processing time.
technology development mode is based on an extremely short-term approach because our solutions to problems start with the basic assumption that Δt tends to 0. Only technologies that fulfill the criterion of time approaching infinity are sustainable (Khan and Islam 2006a). The only problem with fossil fuel technology is that the fuels are made more toxic after they are refined using high heat, toxic chemicals, and catalysts. From the above discussion, it is clear that fossil fuel can contribute significant amounts of energy by 2050. It is widely believed that fossil fuels will be depleted soon. However, there are still huge reserves of fossil fuel. The current estimation of the total reserves is based on exploration to date. As the number of drillings or exploration activities increases, more recoverable reserves can be found (Figure 5.17C). In fact, Figure 5.17 is equally valid if the abscissa is replaced by "time" and the ordinate is replaced by "exploratory drillings" (Figure 5.17B). For every energy source, more exploration will lead to a larger fuel reserve. This relationship makes the reserve of any fuel type truly infinite, and it alone can be used as a basis for developing technologies that exploit local energy sources. The U.S. oil and natural gas reserves reported by EIA (2000) and EIA (2002) show that the reserves have increased over the years (Table 5.5). These additional reserves were estimated after the analysis of geological and engineering data. Hence, as the number of explorations increases, the reserves will also increase. Figure 5.18 shows that the discovery of natural gas reserves increases as exploration activities or drillings increase. Biogas is naturally formed in swamps, paddy fields, and other places due
Figure 5.17 Fossil fuel reserves and exploration activities: (A) reserves vs. time, (B) explorations vs. time, (C) reserves vs. explorations.
Table 5.5 U.S. crude oil and natural gas reserves.

Crude Oil Reserve (million barrels)
Year | Reserve | % Increment
1998 | 21,034 | -
1999 | 21,765 | 3.5%
2000 | 22,045 | 1.3%
2001 | 22,446 | 1.8%

Natural Gas Reserve (billion cubic feet)
Year | Reserve | % Increment
1998 | 164,041 | -
1999 | 167,406 | 2.1%
2000 | 177,427 | 6.0%
2001 | 183,460 | 3.4%
Figure 5.18 Discovery of natural gas reserves with exploration activities. (Axes: discovery of new reserves vs. exploratory drilling activities; reserve types, from least to most exploratory effort: biogas, shallow gas, deep gas, Devonian shale gas, tight gas, gas hydrates.)
to the natural degradation of organic materials. As illustrated in Figure 5.18, there are huge gas reservoirs, including deep gas, tight gas, Devonian shale gas, and gas hydrates, that are not yet exploited. The current exploration level is limited to shallow gas, which is a small fraction of the total natural gas reserves. Hence, by increasing exploration activities, more and more reserves can be found, which indicates the availability of unlimited amounts of
fossil fuels. As natural processes continue, the formation of natural gas also continues. This is equally applicable to other fossil fuel resources such as coal, light and heavy oil, bitumen, and tar sands. Figure 5.19 shows the variation of resource bases with time, from biomass to natural gas. Biomass is available in huge quantities on Earth, and due to natural activities it undergoes various changes. The slope of the graph indicates that the volume of reserves decreases as the resource is further processed; hence, there is more coal than oil and more oil than natural gas, implying practically unlimited resources. Moreover, the energy content per unit mass of the fuel increases as the natural processing time increases (Figure 5.16). Just as the biomass resource is renewable and biological activities continue on Earth, the formation of fossil fuel also continues forever. From this discussion, the conventional boundary between renewable and non-renewable is dismantled, and it is concluded that there is no boundary because all natural processes are renewable. The only problem with fossil fuel arises from the use of toxic chemicals and catalysts during oil refining and gas processing. Provided that fossil fuels are processed using natural and non-toxic catalysts and chemicals, or provided that we make use of crude oil or gas directly, fossil fuel will remain a good energy source in the global energy scenario in the days to come. These resources are completely recyclable.
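The percentage increments in Table 5.5 follow directly from the reserve figures, and recomputing them serves as a consistency check on the reconstructed table. A minimal sketch (the billion-cubic-feet unit for gas is an assumption, as noted in the table):

```python
# Recompute the percentage increments in Table 5.5 from the reserve data.

crude_oil = {1998: 21_034, 1999: 21_765, 2000: 22_045, 2001: 22_446}        # million barrels
natural_gas = {1998: 164_041, 1999: 167_406, 2000: 177_427, 2001: 183_460}  # Bcf (assumed unit)

def increments(series):
    """Year-over-year fractional change for a {year: value} series."""
    years = sorted(series)
    return {y: (series[y] - series[p]) / series[p]
            for p, y in zip(years, years[1:])}

print({y: f"{v:.1%}" for y, v in increments(crude_oil).items()})
# {1999: '3.5%', 2000: '1.3%', 2001: '1.8%'}
print({y: f"{v:.1%}" for y, v in increments(natural_gas).items()})
# {1999: '2.1%', 2000: '6.0%', 2001: '3.4%'}
```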
Figure 5.19 Continuity of resource bases. (Axes: reserves vs. time; sequence: biomass, green wood, dry wood, coal, tar sand, heavy oil, light oil, natural gas.)
5.20 Knowledge-based Global Energy Model
Smalley argued that the potential of biomass is limited by the need to use arable land to grow food. However, biomass is regenerable, and utilizing only the waste biomass could contribute significantly without competing with food sources. Since half of the world's population now survives on biomass as a mainstream fuel, a small intervention in the traditional technology could have a significant impact on the economy and environment. Khan et al. (2006b) developed a high-efficiency zero-waste cook stove with a particulate trap mechanism for the exhaust, and they demonstrated that the stove is almost 90% efficient considering its multiple uses. The stove is completely safe in terms of health because the particulates, which are considered health hazards, are trapped in an oil-water trap, leaving only the clean gas that is essential for plant photosynthesis. The soot collected in the oil-water trap could provide excellent nano-materials for future industries and could also be used in non-toxic paints. Thus, biomass could remain the mainstream fuel for billions of people in the world. Chhetri et al. (2006) demonstrated the pathways of oil refining and showed that crude oil is not the culprit for the global environmental problems; the problem lies with the unsustainable, synthetic, chemical-based technologies that fill the earth with synthetic chemicals. Islam (2004) also studied the pathway of crude oil and argued that there is nothing harmful about crude oil. Vafaei (2005) designed a jet engine that can run on any kind of solid or liquid fuel, demonstrating that technology development should focus on the direct use of crude oil. This would not only save the large amounts of money spent on expensive refineries, but it would also have practically no negative impact on the environment. Because crude oil is completely biodegradable, the CO2 produced by burning crude oil could be easily recycled by plants (Al-Darbi et al. 2005; Chhetri and Islam 2006c). (See Chapter 7, section 7.9 for more details on sustainable technology.) Based on the above discussions, the energy projections for the next 50 years are shown in Figure 5.20. Crude oil could be used directly without refining and is therefore considered an alternative source. The use of biomass in a sustainable manner will remain predominant for a majority of people, especially in developing countries. Hydropower, especially the vast ocean thermal and wave energy resources, could be exploited as a free energy source. Wind
energy and geothermal sources would also contribute significantly. Since the most abundant energy source available on the earth is solar energy, the direct application of solar energy is the only solution to the global energy problem. However, the current solar photovoltaic technology cannot be that solution. The direct application of solar energy as a heat source, in order to split water into hydrogen and to produce steam for generating electricity, would be the most feasible option. Khan et al. (2006a) developed a refrigerator, utilizing direct heat from solar energy, that runs only on thermal power. Mills et al. (2005) showed that hydrogen production by direct solar energy is one of the most feasible and environmentally friendly options. Solar energy could be a significant contributor to solving the energy problem universally. However, the current technology development mode needs to be reversed toward pro-nature technology development. The total solar energy potential is 170,000 TW, so it is only a matter of how the world can exploit this vast resource. Solar hydrogen and direct solar energy should remain the dominant energy sources because of their environmental benefits and sustainability. Hence, technology development based on the principles of nature is the only solution to the current problem. The conventional engineering technologies violate the characteristics of time and nature in almost all aspects and are anti-nature. Knowledge-based technology development and management is instrumental in solving the global energy problem.
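The 170,000 TW figure for total solar potential is consistent with the solar constant multiplied by the Earth's cross-sectional area, as the following short check shows (standard physical constants, no values from the chapter):

```python
import math

# Solar power intercepted by the Earth: solar constant x cross-sectional area.
SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # m

intercepted_w = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
print(f"{intercepted_w/1e12:,.0f} TW")  # ~174,000 TW, on the order of 170,000 TW
```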
Figure 5.20 Energy projection for the next 50 years.
5.21 Concluding Remarks
Various energy sources have been characterized scientifically based on their benefits and impacts on humans and the environment. In this chapter, various types of energy sources, the status of their development to date, and their possible impacts on the environment have been discussed. It is argued that the world will not run out of oil, yet fossil fuel cannot be taken as the single source for the global energy supply. It has further been argued that there is no boundary between renewable and non-renewable sources. It is concluded that the current global energy and environmental problems are due not to the use of fossil fuels but to the current mode of technology development, which is heavily dependent on synthetic chemicals for refining and processing. A holistic approach to energy development that leads to zero-waste living should be the basis for any technology development. As this cannot be achieved with the current technology development models, a paradigm shift towards knowledge-based technology development and energy management is essential to achieve true sustainability in energy and environmental management. An integrated use of energy resources such as solar, hydro, biomass, wind, geothermal, and hydrogen could lead to the global energy solution, and the direct use of crude oil has been proposed in this research. This chapter has deconstructed current technology development models that are aphenomenal and non-scientific. It has been concluded that only knowledge-based technology and policy options that are technically feasible, environmentally attractive, and socially responsible are the keys to the solution of global energy and environmental problems.
6 Scientific Characterization of Light and Light Sources

6.1 Introduction
Light is a form of energy that produces a sensation of brightness and makes seeing possible. A source of light can be either natural or artificial. Natural light is produced from a natural source, such as lightning or the sun. Natural sources of light cost nothing monetarily, while artificial, man-made sources of light cost money to produce and use. These sources include candles, incandescent and fluorescent lamps, and light-emitting diodes (LEDs). The sun is not only a source of light but also a source of life, and an essential element of life, for the whole universe. Daylight on earth is a vital part of the light consumed by humans, animals, and plants for a healthy life. Each sunrise infuses life into all living things (Liberman 1991). Natural light follows a natural path for illumination, using only natural materials and processes. Natural sources of light have roles in maintaining nature, harmony, and chaos. The human body uses natural light for its normal functioning. The sun emits light that materializes as particles bombarding the human body, which produces hormones and vitamins amid natural light-body interactions and reactions. The human eye and body
were created to function in natural light because light is rich in quality and meaning and is connected with many profound functions of nature (Hale 1993). For instance, research has demonstrated that the full spectrum of daylight is important in stimulating the endocrine system properly and that humans suffer side effects when forced to spend too much time under artificial light sources that reproduce only a limited portion of the daylight spectrum (Ott 2000). For animals, natural light is equally essential. For instance, the domestic laying hen is a day-active, gregarious bird that requires sunlight like its ancestor, the red jungle hen, which evolved in 12 hours of light and 12 hours of darkness in the equatorial jungle (Collias and Collias 1996; Gunnarsson et al. 2008). The lighting environment is crucial for laying hens and their production (Gunnarsson et al. 2008). If birds intended for organic production are not given access to natural light, then they have to adapt to a new lighting environment at an early age, possibly resulting in behavioral problems, e.g., cannibalism (Manser 1996; Gunnarsson et al. 1999; Gunnarsson et al. 2008). Pullets used for organic egg production need to be reared with access to natural light (Gunnarsson et al. 2008). Plants also require natural light. Plants grow as a result of their ability to absorb light energy and convert it into reductive chemical energy, which is used to fix carbon dioxide (Attridge 1990). This natural light comes mainly from the sun. An artificial light from an artificial source, such as a light bulb, results from a man-made process that uses artificial materials such as glass or plastics. Artificial sources of light are made only to illuminate darkness. It is now known that artificial light is damaging to the eye, brain, and the whole body due to the stress provoked by artificial light sources and their limited light spectra. Modern light technology products, such as incandescent and fluorescent lamps, have caused many vision disorders, such as eye fatigue. In addition, modern technology tools, such as computers and televisions, demand a high and unnatural visual effort. As a result, computer users, for example, experience symptoms related to dry eyes, refractive error, accommodative infacility and hysteresis, exophoria and esophoria, and presbyopia. Although both types of light sources can provide light, only natural sources ensure perfect and healthy lighting. This chapter reviews the heterogeneous sun composition and microstructure that guarantee perfect lighting. Also, a light energy model is developed showing the effect of the light source composition on the light
quality. This model shows the effect of the size, and therefore the number, of the particles that emit light on the corresponding spectrum and resulting light quality. This chapter also investigates natural and artificial lights based on their pathways and on the light spectra of the sun, incandescent white and red lamps, fluorescent lamps, and red and yellow LEDs. This study indicates that the sun, a natural light source, produces a continuous spectrum and the best coverage of visible colors. However, artificial light sources, such as candles, incandescent and fluorescent lamps, and LEDs, all consisting of homogeneous materials, are limited in size, light coverage, and service life. For example, their spectra show spikes and troughs, and they do not cover all the visible colors properly. Finally, the effects of lamp coating and of the use of eyeglasses and sunglasses were examined. The use of transparent medical eyeglasses preserves the form of the light spectrum. However, sunglasses reduce the intensity and brightness of visible colors, in addition to eliminating some visible colors. This is bad for clear eye vision but possibly useful for applications involving machines such as computers, photocopiers, televisions, and cell phones.
6.2 Natural Light Source: The Sun

6.2.1 Sun Composition

Figure 6.1 shows a view of the sun taken at 9:19 a.m. (EST) on Nov. 10, 2004 by the SOHO (Solar and Heliospheric Observatory) spacecraft. Table 6.1 indicates the sun's composition based on known tangible data. Nevertheless, it is important to mention that the elements of the sun are infinite and mostly undiscovered to date. This table of elements is based on the analysis of the solar spectrum, which comes from the photosphere and chromosphere of the sun (Chaisson and McMillan 1997). About 67 known and tangible elements have been detected in the solar spectrum (Chaisson and McMillan 1997).
6.2.2 Sun Microstructure

The sun's matter microstructure consists of molecules, which represent the smallest parts of chemical compounds. A molecule is the smallest physical unit of a substance that can exist independently, consisting of one or more atoms held together by chemical forces.
Figure 6.1 Sun picture taken at 9:19 a.m. EST on Nov. 10, 2004, by the SOHO (Solar and Heliospheric Observatory) spacecraft (NASA/European Space Agency, 2004).
Table 6.1 Sun composition (Chaisson and McMillan 1997).

Element | Abundance (% of total number of atoms) | Abundance (% of total mass)
Hydrogen | 91.2 | 71.0
Helium | 8.7 | 27.1
Oxygen | 0.078 | 0.97
Carbon | 0.043 | 0.40
Nitrogen | 0.0088 | 0.096
Silicon | 0.0045 | 0.099
Magnesium | 0.0038 | 0.076
Neon | 0.0035 | 0.058
Iron | 0.0030 | 0.14
Sulfur | 0.0015 | 0.040
An atom is the smallest part of an element into which the element can be divided while still retaining its properties; it comprises a dense, positively charged nucleus surrounded by a system of electrons. The size of an atom is around 10^-10 m (see Table 6.2). Atoms usually do not divide in chemical reactions except for some removal, transfer, or exchange of specific electrons. An atom is composed of protons, neutrons, and electrons. The sun emits an infinite number of invisible particles called neutrinos. A neutrino is defined as a stable neutral elementary particle of the lepton group with a zero rest mass and no charge. There are three types of neutrinos, associated respectively with the electron, muon, and tau particle, all of which have a spin of 1/2.
Table 6.2 Atom structure.

Element | Size (m)
Atom | ≈10^-10
Proton | ~10^-15
Neutron | ≈10^-15
Electron | -
Nucleus | ~10^-14
Quark | <10^-19
Table 6.3 Types of interaction field (Cottingham and Greenwood 2007).

Interaction field | Boson | Spin
Gravitational field | 'Gravitons' postulated | 2
Weak field | W+, W-, Z particles | 1
Electromagnetic field | Photons | 1
Strong field | 'Gluons' postulated | 1
Table 6.4 Leptons (Cottingham and Greenwood 2007).

Particle | Mass (MeV/c^2) | Mean life (s) | Electric charge
Electron e | 0.511 | ∞ | -e
Electron neutrino νe | <3 × 10^-6 | - | 0
Muon μ | 105.658 | 2.197 × 10^-6 | -e
Muon neutrino νμ | 0 | - | 0
Tau τ | 1777 | (291.0 ± 1.5) × 10^-15 | -e
Tau neutrino ντ | 0 | - | 0
According to the standard model, there are 12 fundamental matter particle types and their corresponding antiparticles. The matter particles are classified into 2 categories, quarks and leptons, each including 6 particles and 6 corresponding antiparticles. Another group of fundamental particles comprises the force carriers, called gluons, photons, and the W and Z bosons, which are responsible for the strong, electromagnetic, and weak interactions, respectively (Figure 6.2).
Figure 6.2 Elementary particles. (Three families of matter: quarks u, c, t and d, s, b; leptons e, μ, τ and the electron, muon, and tau neutrinos; force carriers: photon γ, gluon g, Z boson, and W boson.)
The subatomic particles of the sun discovered so far are limited in number. However, the sun's subatomic particles are infinite and interact with each other continuously. Let's consider the sun as the system for particle size analysis. There is only one sun; however, the sun consists of an infinite number of particles, of which only a few are known. As the size of a subatomic sun element decreases, the number of such elements in the sun increases. Figure 6.3 indicates that the number of sun particles decreases as particle size increases.
6.3 Artificial Light Sources

This chapter investigates the following artificial light sources (see Figures 6.4 to 6.8):

1. Mulled cider candle
2. Incandescent white and red lamps
3. Fluorescent lamp
4. Red and yellow light-emitting diodes (LEDs)
In addition, the effect of using eyeglasses or sunglasses is studied by adding a lens and dark green sunglasses to the light system (see Figures 6.9 and 6.10).
Figure 6.3 Sun particle number as a function of the particle size. (Axes: number of particles vs. particle size.)
Figure 6.4 Mulled cider candle.
Figure 6.5 Incandescent lamp.
Figure 6.6 Incandescent red lamp.
Figure 6.7 Fluorescent lamp.
Figure 6.8 Light emitting diodes (LED).
Figure 6.9 Lens.
Figure 6.10 Sunglasses.
6.4 Pathways of Light

6.4.1 Natural Light

The sun, a natural source of light, is an essential element in the universe. Among its benefits are daylight and, via the moon, nightlight. The sun does not produce waste because all its resulting particles and effects are used by nature. The sun's light service life is infinite, which is another benefit of a natural light source. The sun consists of heterogeneous materials and particles, and this type of light source is natural, heterogeneous, clean, vital, and efficient. Figure 6.11 shows the pathway of natural light.
Figure 6.11 Natural light pathway (natural, heterogeneous, clean, vital, efficient).
6.4.2 Artificial Light

Artificial light sources, such as candles, incandescent lamps, and fluorescent lamps, are made by humans for the single benefit of light. Therefore, they cannot be used for additional purposes, which makes them useless, cumbersome waste once they are spent. Their components and resulting particles and effects are homogeneous and toxic and are not accepted by nature, because they generate discomfort, stress, and various diseases, in addition to waste in the environment. Moreover, their service lives are limited. This type of light source is thus artificial, toxic, harmful, and inefficient. Figure 6.12 illustrates the pathway of artificial light.
6.5 Light Energy Model
Since there are infinite particles emitting light, the light is the result of all the finite light elements. Every finite light element is defined by a specific set of properties, including the element's particle size, mass, and temperature, and each light element corresponds to a particle. The total light brightness is the sum of the brightness of all the light elements. The resulting light color corresponds to the spectral balance that is characteristic of the emitting material. Light intensity (or energy), efficiency, and quality are functions of the light source composition.
Figure 6.12 Artificial light pathway (artificial, homogeneous, toxic, harmful, inefficient).
infinite particles with different sizes, d_i, masses, m_i, and temperatures, T_i. The light source mass equals:

M = Σ_{i=1}^{∞} m_i    (6.1)

A particle energy function equals:

E_i = a_i f_i    (6.2)

where a_i is a constant and f_i is the frequency of particle i. The light energy of a particle i is also defined as follows:

E_i = b_i m_i^{p_i} v_i^{q_i}    (6.3)

where v_i is the speed of the particle i. Equation 6.3 yields:

a_i f_i = b_i m_i^{p_i} v_i^{q_i}    (6.4)

Then, the frequency f_i for the particle i comes to:

f_i = (b_i / a_i) m_i^{p_i} v_i^{q_i}    (6.5)

where b_i, p_i, and q_i are the constants defining the particle composition and properties. As a result, the particle speed v_i amounts to:

v_i = [a_i f_i / (b_i m_i^{p_i})]^{1/q_i}    (6.6)

The total light energy is the sum of all particle energy values:

E = Σ_{i=1}^{∞} E_i    (6.7)

The wavelength is obtained from the frequency and the particle speed:

λ_i = v_i / f_i    (6.8)

where v_i is the speed of the particle i:

v_i = l_i / t_i    (6.9)

and l_i is the distance traveled by the particle i, and t_i is the travel time. The distance traveled by a particle i is a function of its size, d_i, mass, m_i, and temperature, T_i. The particle mass m_i depends on the particle composition. Since this particle i consists of the smallest particle in the universe, its composition is unique and corresponds to one material. The density of the particle i is:

ρ_i = m_i / V_i    (6.10)

where V_i is the particle volume:

V_i = α_i d_i^{β_i}    (6.11)

and α_i and β_i are the particle size constants. The distance traveled by a light particle is described by:

l_i = v_i t_i    (6.12)

which, substituting Equation 6.6 for v_i, is equivalent to:

l_i = [a_i f_i / (b_i m_i^{p_i})]^{1/q_i} t_i    (6.13)
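To make the model concrete, the following is a minimal numerical sketch of Equations 6.1 through 6.7, assuming a finite sample of particles standing in for the infinite count and purely illustrative values for the per-particle constants a_i, b_i, p_i, and q_i, none of which are specified in the text.

```python
# Minimal sketch of the light energy model (Eqs. 6.1-6.7).
# All constants below are illustrative assumptions, not values from the text.
import random

random.seed(1)

N = 1000  # finite sample standing in for the "infinite" particle count

particles = []
for i in range(N):
    particles.append({
        "m": random.uniform(1e-27, 1e-24),  # mass m_i (kg), assumed range
        "a": 6.6e-34,                       # a_i in Eq. 6.2, illustrative
        "b": 0.5,                           # b_i in Eq. 6.3, illustrative
        "p": 1.0,                           # mass exponent p_i, illustrative
        "q": 2.0,                           # speed exponent q_i, illustrative
        "f": random.uniform(4e14, 8e14),    # frequency f_i (Hz), visible range
    })

E_total = 0.0
speeds = []
for p in particles:
    E_i = p["a"] * p["f"]                   # particle energy (Eq. 6.2)
    # speed implied by equating Eqs. 6.2 and 6.3, i.e. Eq. 6.6:
    v_i = (p["a"] * p["f"] / (p["b"] * p["m"] ** p["p"])) ** (1.0 / p["q"])
    speeds.append(v_i)
    E_total += E_i                          # running sum for Eq. 6.7

M = sum(p["m"] for p in particles)          # total source mass (Eq. 6.1)
print(f"M = {M:.3e} kg, E = {E_total:.3e} J, mean v = {sum(speeds)/N:.3e} m/s")
```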
6.6 Spectral Analysis of Light
Figure 6.13 exhibits the spectrometer setup, comprising a spectrometer, a USB fiber optic cable, a metallic holder, and a laptop computer running the spectrometer operating software. The light spectrum data are collected by this system, and the resulting light intensity as a function of wavelength is displayed on the computer screen. Figure 6.14 shows an example of spectrometer data file specifications.

Figure 6.13 Spectrometer setup.
OOIBase32 Version 2.0.6.5 Data File
Date: 04-25-2008, 15:26:16
User: Valued Ocean Optics Customer
Spectrometer Serial Number: I2J345
Spectrometer Channel: Master
Integration Time (msec): 7
Spectra Averaged: 4
Boxcar Smoothing: 0
Correct for Electrical Dark: Disabled
Time Normalized: Disabled
Dual-beam Reference: Disabled
Reference Channel: Master
Temperature: Not acquired
Spectrometer Type: S2000
ADC Type: USB2000
Number of Pixels in File: 2048

Figure 6.14 Example of spectrometer data file specifications.
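For readers who wish to process such files programmatically, the following is a hedged sketch of a reader for an OOIBase32-style spectrum file. Only the header block shown in Figure 6.14 is documented here; the assumption that the data section holds whitespace-separated wavelength and intensity pairs is typical of Ocean Optics exports but is not confirmed by the text, and the file name in the usage note is hypothetical.

```python
# Sketch of a reader for an OOIBase32-style spectrum file.
def read_spectrum(path):
    header, pairs = {}, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("+"):
                continue  # skip blank lines and the "+++" separator row
            if ":" in line and not line[0].isdigit():
                # header lines look like "Integration Time (msec): 7"
                key, _, value = line.partition(":")
                header[key.strip()] = value.strip()
            else:
                # assumed data format: "<wavelength> <intensity>"
                fields = line.split()
                if len(fields) == 2:
                    wavelength, intensity = map(float, fields)
                    pairs.append((wavelength, intensity))
    return header, pairs

# Example usage (hypothetical file name):
# header, spectrum = read_spectrum("candle_spectrum.txt")
# print(header.get("Number of Pixels in File"), len(spectrum))
```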
6.6.1 Measured and Planck's Model Light Spectra
Let's consider a black body that is a box with hot walls emitting and receiving photons at an equal rate. If a small window is opened in the box, the released radiation will have the same spectrum, which is measured with the spectrometer to determine the black body radiation spectrum. Planck's model establishes the light energy in J m⁻³ m⁻¹ as follows:

E_λ = (4πhc / λ^5) · 1 / [exp(hc/kλT) - 1]    (6.14)

where h is the Planck constant in J s, c is the speed of light in m s⁻¹, λ is the wavelength in m, k is the Boltzmann constant in J K⁻¹, and T is the temperature in K. The light intensity is determined by the following:

I = (c / 4π) E_λ    (6.15)

Then, the light intensity in J s⁻¹ m⁻² sr⁻¹ m⁻¹ equals:

i = (hc² / λ^5) · 1 / [exp(hc/kλT) - 1]    (6.16)

The corresponding frequency in Hz is given by:

f = c / λ    (6.17)

Then, Equation 6.14 comes to:

E_f = (4πhf / λ^4) · 1 / [exp(hf/kT) - 1]    (6.18)
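The following sketch evaluates Planck's model (Equations 6.14 and 6.16) numerically so that a measured spectrum can be compared against it, as done in Figure 6.15. The black-body temperature used is the equivalent value derived for the incandescent lamp later in this section; the physical constants are standard values.

```python
# Sketch of Planck's model (Eqs. 6.14 and 6.16) for comparison with
# a measured spectrum, as in Figure 6.15.
import math

h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann constant (J/K)

def planck_energy_density(wavelength_m, T):
    """Spectral energy density, Eq. 6.14 (J m^-3 m^-1)."""
    return (4 * math.pi * h * c / wavelength_m ** 5) / (
        math.exp(h * c / (k * wavelength_m * T)) - 1.0)

def planck_intensity(wavelength_m, T):
    """Spectral intensity, Eq. 6.16 (J s^-1 m^-2 sr^-1 m^-1)."""
    return (h * c ** 2 / wavelength_m ** 5) / (
        math.exp(h * c / (k * wavelength_m * T)) - 1.0)

# Normalized model spectrum over the visible range, assuming the
# equivalent black-body temperature derived for the incandescent lamp
# in Section 6.6.2 (~5203 K).
T = 5203.29
wavelengths = [(400 + 10 * n) * 1e-9 for n in range(36)]  # 400-750 nm
values = [planck_intensity(w, T) for w in wavelengths]
peak = max(values)
normalized = [v / peak for v in values]
print(normalized[:3])
```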
Wien's displacement model describes the displacement of the light spectrum peak at the maximum light energy or intensity. The corresponding wavelength λ_max in m equals:

λ_max = 2.9 × 10⁻³ / T    (6.19)
where T is the absolute temperature of the light source surface in K. This temperature corresponds to that of an equivalent black body at radiation equilibrium. As a result, the equivalent absolute temperature is given by:

T = 2.9 × 10⁻³ / λ_max    (6.20)
Figure 6.15 shows the normalized light spectra for a tungsten filament lamp based on measured data and on Planck's model. Figure 6.16 illustrates the corresponding visible light spectra. These figures show that Planck's model does not corroborate the measured data.
6.6.2 Natural and Artificial Light Spectra

Table 6.5 shows the perceived color as a function of the wavelength ranges. It is important to note that all the colors, whether visible or not, are needed by the human body. Sunlight is characterized by a full spectrum, as illustrated in Figures 6.17 and 6.18, covering ultraviolet and infrared light in addition to visible light. The artificial lights produced by the candle and the incandescent lamp display continuous spectra, since the light results from burning material. The candle consists of wax, and the incandescent lamp includes a tungsten filament.
Figure 6.15 Normalized light spectra for tungsten filament lamp.
Figure 6.16 Normalized visible light spectra for tungsten filament lamp.

Table 6.5 Perceived color based on light wavelength.

Wavelength (nm)   Color
<400              Ultraviolet (invisible)
400-450           Violet
450-490           Blue
490-560           Green
560-590           Yellow
590-630           Orange
630-670           Bright red
670-750           Dark red
>750              Infrared (invisible)
However, the artificial light from the fluorescent lamps and the LEDs shows spectra with spikes and troughs. The fluorescent lamps emit light after an electric discharge through their gases, and the LEDs are monochromatic. The artificial light spectra are not balanced in the visible light region, which is one of the negative aspects of artificial lighting. The sun's size is effectively infinite when compared to the artificial light sources.
Figure 6.18 Sun visible light spectrum.
Figure 6.20 Visible light spectra.

As a result, the particles that compose the sun produce an infinite number of spectra. However, the particles that form the artificial light sources are limited. Therefore, their generated spectra are the sums of a limited number of spectra, which explains the incomplete coverage of the visible light region and the presence of spikes and troughs. The light spectrum of a light-emitting element is the sum of all the spectra developed by the various particles composing the element. Figures 6.19 and 6.20 show that, for the candle and the incandescent lamps, the light intensity increases with the wavelength. The fluorescent lamp spectrum displays spikes and troughs, confirming the presence of its mercury vapor and gases such as neon, argon, and xenon in addition to the phosphor coating. The red LED light spectrum also exhibits spikes in the orange and red color areas due to the phosphor and additional red coating. The incandescent and fluorescent lamps are considered full-spectrum light sources. A full-spectrum light source is supposed to imitate the spectrum of natural light. However, a natural light source, such as the sun, has a huge amount of material that undergoes spontaneous and continuous combustion in order to produce light. As a result, an infinite number of spectra are generated, producing
the perfect, smooth, and continuous spectrum. Regarding humans, sunlight is the best for perfect visual clarity, color perception, body growth, behavior, mood, mental awareness, performance, and productivity. In addition, the other elements of the human environment, including the flora and fauna, also benefit from sunlight because it ensures their normal growth and behavior. Based on the light spectra shown in Figures 6.19 and 6.20, the wavelength values at the maximum radiation values for the corresponding light sources are the following:

1) Sun: 504.63 nm
2) Candle: 644.99 nm
3) Incandescent lamp: 557.34 nm
4) Incandescent red lamp: 714.27 nm
5) Fluorescent lamp: 544.79 nm
6) Red LED: 591.47 nm
7) Yellow LED: 675.61 nm

Then, the equivalent black body temperature values for the various light sources, computed with Equation 6.20 (as verified in the sketch following this list), are as follows:

1) Sun: 5746.79 K
2) Candle: 4496.19 K
3) Incandescent lamp: 5203.29 K
4) Incandescent red lamp: 4060.09 K
5) Fluorescent lamp: 5323.15 K
6) Red LED: 4903.04 K
7) Yellow LED: 4292.42 K
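As a check, the temperature list above can be reproduced directly from the measured peak wavelengths using Equation 6.20; the following minimal sketch does so, with the Wien constant 2.9 × 10⁻³ m K taken from Equation 6.19.

```python
# Equivalent black-body temperature from peak wavelength (Eqs. 6.19-6.20).
peaks_nm = {
    "Sun": 504.63,
    "Candle": 644.99,
    "Incandescent lamp": 557.34,
    "Incandescent red lamp": 714.27,
    "Fluorescent lamp": 544.79,
    "Red LED": 591.47,
    "Yellow LED": 675.61,
}

WIEN = 2.9e-3  # Wien constant (m K), as used in Eq. 6.19

for source, peak_nm in peaks_nm.items():
    T = WIEN / (peak_nm * 1e-9)  # Eq. 6.20, wavelength converted to meters
    print(f"{source}: {T:.2f} K")
# Reproduces the listed values, e.g. 5746.79 K for the sun.
```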
6.7 Effect of Lamp Coating on Light Spectra

Figures 6.21 and 6.22 show the light spectra for incandescent white and red lamps. The phosphor coating in the white lamp produces better lighting because it covers a larger region of visible light than the red lamp. In the latter case, the spectrum is red, which confirms the red coating. So, the color of the material particles composing the light source is displayed in the overall light spectrum. Figures 6.23 and 6.24 show a less pronounced effect of the LED coating on the resulting light because LEDs have a low light intensity compared to the incandescent lamps. The range of visible light
Figure 6.21 Light spectra for incandescent white and red lamps.
Figure 6.22 Visible light spectra for incandescent white and red lamps.
Figure 6.23 Light spectra for red and yellow LEDs.
Figure 6.24 Visible light spectra for red and yellow LEDs.
covered is limited in both red and yellow LED cases. Here, the red LED light covers the orange to dark red colors, whereas the yellow LED light color is red.
6.8 Effect of Eyeglasses and Sunglasses on Light Spectra

The use of eyeglasses might be necessary for normal eye vision. However, sunglasses are not essential for normal vision. Sunglasses are coated with paints, other than white, to lower the light intensity and brightness perceived by the eye while driving, for instance. Figures 6.25 to 6.32 exhibit the spectra for the various light sources investigated in this study: the candle, the incandescent white and red lamps, and the fluorescent lamp. A lens was used to simulate the effect of medical eyeglasses. These figures show that the lens reproduced the same form of spectrum, covering the same colors, because it is not coated and is therefore transparent. As expected, the effect of the dark green sunglasses is obvious. The dark green coating affected the final spectrum form and color coverage for all the light sources studied, except the incandescent red and fluorescent lamps. A darkening of the observed colors means there was a reduction in the light color intensity and brightness. Concerning the fluorescent lamp, the original spectrum is not continuous and does not properly cover the range of visible light.
Figure 6.25 Light spectra for candle.
Figure 6.26 Visible light spectra for candle.
Figure 6.27 Light spectra for incandescent white lamp.
Figure 6.28 Visible light spectra for incandescent white lamp.
Figure 6.29 Light spectra for incandescent red lamp.
Figure 6.30 Visible light spectra for incandescent red lamp.
Figure 6.32 Visible light spectra for fluorescent lamp.
Therefore, the sunglasses did not affect the final spectrum for this type of lamp.
6.9 Concluding Remarks

Natural light sources cost nothing monetarily and produce the perfect light quality because the light covers all the colors needed by living things, including the human body. However, artificial light sources, including candles, incandescent and fluorescent lamps, and LEDs, cost money and do not ensure the complete color coverage expected by a natural body. As a result, the light lacks a natural quality. The light energy model developed here shows that light source
composition and microstructure play important roles in the final outcome for clear eye vision and normal body functioning. This is apparent from the colors covered by the light. Each color is characteristic of a certain type of particle. So, if a certain color is not well covered, the human body will not benefit from the corresponding emitted light and its particles. This can weaken the body and provoke disease. Thus, natural light, such as sunlight, is necessary for humans and other living things. Artificial light sources can fulfill a limited role when it is difficult to get natural light, or in the use of a machine, for instance. But artificial light sources never achieve the expected quality, due to their limited service life, the toxicity of their composition, and the deficiency of their light color coverage and related essential particles.
7 The Science of Global Warming

7.1 Introduction
Global warming has been a subject of discussion since the late 1970s. It is thought that the accumulation of carbon dioxide in the atmosphere causes global warming, resulting in irreversible climate change. Even though carbon dioxide has been blamed as the sole cause of global warming, there is no scientific evidence that all carbon dioxide is responsible for it. Precisely to address this critical gap, this chapter includes a detailed analysis of greenhouse gas emissions from the Pre-industrial Age to the (for some) "Golden Era" of petroleum. A new theory has been developed, which shows that not all carbon dioxide contributes to global warming. For the first time, carbon dioxide is characterized based on various criteria, such as its origin, the pathway it travels, and its isotope number. In this chapter, the current status of greenhouse gas emissions from various anthropogenic activities is summarized, and the role of water in global warming is discussed. Various energy sources are classified based on their global efficiencies. The assumptions and implementation mechanisms of the Kyoto Protocol are critically reviewed. The idea that the Clean Development Mechanism of the Kyoto
Protocol has become a "license to pollute" due to its improper implementation mechanism is argued in this chapter. In addition, the issues raised at the Copenhagen Summit of 2009 are discussed in light of environmental and political fallout. The conventional climatic models are deconstructed, and guidelines for new models are proposed in order to achieve true sustainability in the long term. This chapter presents a series of sustainable technologies that produce natural CO2, which does not contribute to global warming. Various zero-waste technologies that have no negative impact on the environment are key to reversing global warming. Because synthetic chemicals, which are inherent to the current technology development mode, are primarily responsible for global warming, there is no hope of reversing global warming without fundamental changes in technology development. The new technology development mode must foster the development of natural products, which are inherently beneficial to the environment.

The discussion of the possibility that a build-up of carbon dioxide in the atmosphere results in irreversible climate change has been transformed into a "controversy" of the type seen all too often on every other subject. Propositions have been advanced, dividing people according to their support for one side or the other, all before anything objective and scientific in connection with the originating subject matter is even established. As far as the science of the question goes, and despite the various series of standards set by international and government organizations to reduce the carbon dioxide level in the atmosphere due to anthropogenic activities, the current climatic models show that the global temperature is still increasing. A very large amount of pseudo-science is already afoot on all aspects of the issue of global warming. Much of it is used to divide - if not aimed at dividing in the first place - public opinion over whether nature or humanity is the chief culprit. The crying need for a serious scientific approach has never been greater. On this note, paraphrasing Albert Einstein, it can truly be said that the system that got us into the problem is not going to get us out. Absent a comprehensive characterization of CO2 and all its possible roles and forms, any attempt to analyze the symptoms of global warming or design a solution must collapse under the weight of incoherence if it is based on univariate correlations, or even correlations of multiple variables, and assumes that the effects of each variable can be superposed linearly and still mean anything. The absurdity is so
well known that one popular graph on the Internet depicts a strictly proportional increase in incidences of piracy in all the world's oceans as a function of increasing global temperature.

The current status of greenhouse gas emissions due to industrial activities, automobile emissions, and biogenic and natural sources is systematically presented here. In this chapter, a newly developed theory, that all carbon dioxides are not the same, is detailed. Thus, not all carbon dioxide may be contributing to global warming. For the first time, carbon dioxide is characterized based on normally ignored criteria, such as its origin, the pathway it travels, its isotope number, and the age of the fuel source from which it was emitted. Fossil fuel exploration, production, processing, and consumption are major sources of carbon dioxide emissions. Here, various energy sources are characterized based on their efficiency, environmental impact, and quality of energy according to the new criteria. Different energy sources follow different paths from origin to end-use and contribute emissions differently. A detailed analysis has been carried out on potential precursors to global warming. The focus is on supplying a scientific basis as well as practical solutions after identifying the roots of the problem. Similarly, this chapter presents an evaluation of existing models of global warming, based on the scenario of the Kyoto Protocol, that are under satisfactory and partial implementation. Shortcomings in the conventional models have been identified based on this evaluation. The sustainability of conventional global warming models has been questioned. Here, these models are deconstructed and new models are developed based on new sustainability criteria. Conventional energy production and processing use various toxic chemicals and catalysts that are harmful to the environment. Moreover, all energy systems are totally dependent on fossil fuel, at least as the primary energy input or in the form of embodied energy. This chapter offers unique solutions, based on truly green technologies that satisfy the new sustainability criteria, to overcoming such problems. These green energy technologies are highly efficient and produce zero net waste. In this chapter, various energy technologies are ranked based on their global efficiency. For the first time, this research offers energy development techniques that produce what might best be described as "good CO2," which does not contribute to global warming. This chapter discusses natural transport phenomena, specifically the
role of water and its interaction with various energy sources and climate change, taking into account the memory of water. Conventional models are evaluated based on the long-term impact of CO2 and their contribution to global warming. It is concluded that conventional energy development systems and global warming models are based on ignorance. Only knowledge-based technology development offers solutions to global warming.
7.2 Historical Development
The history of technological development from the pre-industrial age to the petroleum era has been reviewed. There is a colloquial expression to the effect that exact change plus faith in the Almighty will always get you downtown on the public transit service. On the one hand, with or without faith, all kinds of things could happen with the public transit service before the matter of exact fare even enters the picture. On the other hand, with or without exact fare, other developments could intervene to alter the availability of the service and even cancel it. This helps isolate one of the key difficulties in uncovering and elaborating the actual science of increased carbon dioxide concentrations in the atmosphere. All kinds of activities can increase CO2 output into the atmosphere, but precisely which activities can be held responsible for consequent global warming or other deleterious impacts? Both the activity and its CO2 output are necessary, but neither by itself is sufficient, for establishing what the impact may be and whether it is deleterious.

7.2.1 Pre-industrial
One commonly encountered argument attempts to frame the historical dimension of the problem, more or less, as follows: once upon a time, the scale of humanity's efforts at securing a livelihood was insufficient to affect overall atmospheric levels of CO2. The implication is that, with the passage of time and the development of more extensive technological intervention in the natural-physical environment, everything just got worse. However, from prehistoric times onward, there have been important periods of climate change in which the causes could not have had anything to do with human intervention in the environment, especially not at the level we see today that is widely blamed for "global warming."
Nevertheless, these had consequences that were extremely significant and even devastating for wide swaths of subsequent human life on this planet. One of the best-known climate changes was the period of almost two centuries of cooling in the northern hemisphere during the 13th and 14th centuries CE, in which Greenland is said to have acquired much of its most recent ice cover. This definitively brought to an end any further attempts at colonizing the north and northwest Atlantic by Scandinavian tribes (descended from the Vikings), creating the opening for later commercial fisheries to expand into the northwest Atlantic by using Basque, Spanish, Portuguese and eventually French and British fishermen and fishing enterprises - the starting-point of European colonization of the North American continent.
7.2.2 Industrial Age
Even events like the volcanic eruption of Mount Tambora in the Indonesian archipelago in 1815 incurred tremendous consequences. It spewed an enormous volume of dust into the atmosphere that travelled around the globe in the jet stream and led to the "year with no summer" of 1816 in Europe and the northern half of North America. In 1817, grain crops on the continent of Europe failed. In industrial Great Britain, where the factory owners and their politicians boasted how the country's relatively (compared to the rest of the world) highly advanced industrial economy had overcome the "capriciousness of Nature," hunger and famine actually stalked the English countryside for the first time in more than a century and a half. The famine conditions were blamed on the difficulties attending the import of extra supplies of food from the European continent and led directly to a tremendous and unprecedented pressure to eliminate the Corn Laws - the system of high tariffs protecting English farmers and landlords from the competition of cheaper foodstuffs from Europe or the Americas. Politically, the industry lobby condemned the Corn Laws as the main obstacle to cheap food, winning broad public sympathy and support. Economically, the Corn Laws actually operated to keep hundreds of thousands employed in the countryside on thousands of small agricultural plots, at a time when the demands of expanding industry required uprooting the rural population and forcing it to work as factory laborers. Increasing the industrial reserve army would enable British industry to reduce wages. Capturing
command of that new source of cheaper labor was, in fact, the industrialists' underlying aim. Without the famine of "the year with no summer," it seems unlikely that British industry would have targeted the Corn Laws for elimination, therefore blasting its way into dominating world markets. Even then, because of the still prominent involvement of the anti-industrial lobby of aristocratic landlords who dominated the House of Lords, it would take British industry nearly another 30 years. Between 1846 and 1848 Parliament eliminated the Corn Laws, industry captured access to a desperate workforce fleeing the ruin brought to the countryside, and overall industrial wages were driven sharply downwards. On this train of economic development, the greatly increased profitability of British industry took the form of a vastly whetted appetite for new markets at home and abroad, including the export of important industrial infrastructure investments in "British North America," i.e., Canada, Latin America, and India. Extracting minerals and other valuable raw materials for processing into new commodities in this manner brought an unpredictable level of further acceleration to the industrialization of the globe in regions where industrial capital had not accumulated significantly, either because traditional development blocked its role or because European settlement remained sparse.
7.2.3 Age of Petroleum

The world economy entered the Age of Petroleum with the rise of an industrial-financial monopoly in one sector of production after another in both Europe and America before and after World War I. Corresponding to this has been the widest possible extension of chemical engineering - especially the chemistry of hydrocarbon combination, hydrocarbon catalysis, hydrocarbon manipulation and rebonding - on which the refining and processing of crude oil into fuel and myriad byproducts, such as plastics and other synthetic materials, crucially depend. As a result, there is no activity, be it production or consumption, in any society today tied to the production and distribution of such output where adding to the CO2 burden in the atmosphere can be avoided or significantly mitigated.

In these developments, carbon and CO2 are, in fact, vectors carrying many other toxic compounds and byproducts of these
chemically engineered processes. Atmospheric absorption of carbon and CO2 from human activities or other natural non-industrial activities would normally be continuous. However, what occurs when hydrocarbon complexes combine with inorganic and other substances, a combination that occurs nowhere in nature, is much less predictable and - on the available evidence - not benign, either. From a certain standpoint, there is logic in attempting to estimate the effects of these other phenomena by taking carbon and CO2 levels as vectors. However, there has never been any justification to assume the CO2 level itself is the malign element. Such a notion is a non-starter in science in any event, which raises the question: just what is the role of science? Today, there is no large petrochemical company or syndicate that has not funded a study or group interested in CO2 levels as a global warming index - whether to discredit or to affirm such a connection. It is difficult to avoid the obvious inference that these very large enterprises, fiercely competing to retain their market shares against rivals, have a significant stake in engineering a large and permanent split in public opinion based on confusing their intoxication of the atmosphere with rising CO2 levels. Whether the consideration is refining for automobile fuels, processing synthetic plastics, or concocting synthetic crude, behind a great deal of the propaganda regarding "global warming" stands a huge battle among oligopolies, cartels, and monopolies over market share. The science of "global warming" is the only means that can separate the key question, "What is necessary to produce goods and services that are nature-friendly?" from the toxification of the environment as a byproduct of the anti-nature bias of chemical engineering in the clutches of the oil barons.
7.3 Current Status of Greenhouse Gas Emissions

Industrial activities, especially those related to the burning of fossil fuels, are major contributors to global greenhouse gas emissions. Climate change due to anthropogenic greenhouse gas (GHG) emissions is a growing concern for global society. In its third assessment report, the Intergovernmental Panel on Climate Change (IPCC) provides the strongest evidence so far that the global warming of the last 50 years is due largely to human activity and the CO2 emissions that arise from burning fossil fuels (Farahani et al. 2004).
It has been reported that the CO2 level now is at its highest point in 125,000 years (Service 2005). Approximately 30 billion tons of CO2 are released from fossil fuel burning each year. The CO2 concentration level in the atmosphere traced back to 1750 was reported to be 280 ± 10 ppm (IPCC 2001). It has risen continuously since then, and the CO2 level reported in 1999 was 367 ppm. The present atmospheric CO2 concentration level has not been exceeded during the past 420,000 years (IPCC 2001; Houghton et al. 2001; Houghton 2004). The latest 150 years have been a period of global warming (Figure 7.1). Global mean surface temperatures have increased 0.5-1.0°F since the late 19th century. The 20th century's 10 warmest years all occurred in the last 15 years of the century. Of these, 1998 was the warmest year on record. Sea level has risen 4-8 inches globally over the past century, and worldwide precipitation over land has increased by about one percent.

The industrial emissions of CO2 consist of process emissions and production emissions. Coal mining, oil refining, gas processing, petroleum fuel combustion, pulp and paper, ammonia, petroleum refining, iron and steel, aluminum, electricity generation, and cement production are the major industries responsible for producing various greenhouse gases. Besides these industrial sources, the transportation sector also contributes a large share of greenhouse
Figure 7.1 Global temperature changes from 1880 to 2000 (modified after EPA Global Warming site: US National Climate Data Center 2001).
gas emissions. Greenhouse gas emissions from bio-resources are also significant. However, the National Energy Board of Canada does not consider CO2 from biomass as contributing to greenhouse problems (Hughes and Scott 1997). The justification emerges from the fact that greenhouse gas emissions from bio-resources, such as fuel wood, agricultural waste, and charcoal, are carbon neutral because plants synthesize this CO2. However, if various additives are added during the production of fuel, as in pellet making and charcoal production, the CO2 produced is no longer carbon neutral. For instance, pellet making involves the addition of binders such as carbonic additives, coal, and coke breeze, which all emit carcinogenic benzene as a major aromatic compound (Chhetri et al. 2006). CO2 contaminated with such chemical additives is not favored by plants for photosynthesis, and, as a result, it will accumulate in the atmosphere. Moreover, deforestation, especially the unsustainable harvesting of biomass due to urbanization or to fulfill industrial biomass requirements, also results in net CO2 emissions from bio-resources.
292
THE GREENING OF PETROLEUM OPERATIONS
Year
Figure 7.2 Variation in atmospheric CO2 concentration (IPCC 2001).
NOAA (2005) defined the annual mean growth rate of CO2 as the sum of all CO2 added to and removed from the atmosphere by human activities and natural processes during a year. Natural CO2 cannot be the same as industrial CO2 and should be examined separately (see Chapter 5, Section 5.17 for more details). Some recent studies have reported that the human contribution to global warming is negligible (Khilyuk and Chilingar 2004). The global forces of nature, such as solar radiation, outgassing from the ocean and the atmosphere, and microbial functions, are driving the Earth's climate (Khilyuk and Chilingar 2006). These studies showed that the CO2 emissions from human-induced activities are far less in quantity than the natural CO2 emissions from the ocean and volcanic eruptions. Others use this line of argument to demonstrate that the cause of global warming is, at least, a contentious issue (Goldschmidt 2005). These studies fail to explain the differences between natural and human-induced CO2 and their impacts on global warming. Moreover, the CO2 from the ocean and natural forest fires was part of the natural climatic cycle even when no global warming was noticed. All the global forces mentioned by Khilyuk and Chilingar (2006) are also affected by human interventions. For example, more than 70,000 chemicals used worldwide for various industrial and agricultural activities are exposed, in one way or another, to the atmosphere or ocean water bodies, thereby contaminating the natural CO2. The CO2 produced from fossil fuel burning is not accepted by plants for photosynthesis, and for this reason most organic plant matter is depleted in the carbon isotope ratio δ13C (Farquhar et al. 1989; NOAA 2005). Finally, the
notion of "insignificant" has been used in the past to allow unsustainable practices, such as the pollution of harbors, commercial fishing, and the massive production of toxic chemicals that were deemed to be "magic solutions" (Khan and Islam 2006). Today, banning chemicals and pharmaceutical products has become almost a daily affair (Globe and Mail 2006; New York Times 2006). None of these products were deemed "significant" or harmful when they were introduced. Khan and Islam (2006) have recently catalogued an array of such ill-fated products that were made available in order to "solve" a critical solution (Environment Canada 2006). In all these engineering observations, a general misconception is perpetrated; that is, if the harmful effect of a product can be tolerated in the short-term, the negative impact of the product is "insignificant." According to Thomas and Nowak (2006), human activities have already demonstrably changed the global climate, and further, much greater changes are expected throughout this century. The emissions of CO z and other greenhouse gases will further accelerate global warming. Some future climatic consequences of human induced C 0 2 emissions, sea-level rise, for example, cannot be prevented, and human societies will have to adapt to these changes. Other consequences can perhaps be prevented by reducing C 0 2 emissions. Figure 7.3 shows the pathway of crude oil. Crude oil is refined to convert into various products including plastics. More than four million tons of plastics are produced from 84 million barrels of oil per day. It has been further reported that burning plastics produces more than 4,000 toxic chemicals, 80 of which are known carcinogens (Islam 2004). In addition to C 0 2 , various other greenhouse gases have contributed to global warming. The concentration of other greenhouse gases has increased significantly in the period between 1750 and 2001. Several classes of halogenated compounds, such as chlorine,
Crude oil → gasoline + solid residue + diesel + kerosene + volatile HC + numerous petroleum products
Solid residue + hydrogen + metal (and others) → plastic
Plastic + oxygen → 4,000 toxic chemicals (including 80 known carcinogens)

Figure 7.3 The crude oil pathway (Islam 2004).
bromine, and fluorine, are also greenhouse gases and are the direct result of industrial activities. None of these compounds existed before 1750, but they are found in significant concentrations in the atmosphere after that period (Table 7.1). Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs), which contain chlorine, and halocarbons such as bromofluorocarbons, which contain bromine, are considered potent greenhouse gases. The sulfur hexafluoride (SF6) emitted from various industrial activities, such as aluminum production, semiconductor manufacturing, electric power transmission and distribution, magnesium casting, and nuclear power generation, is also a potent greenhouse gas. Table 7.1 shows that the concentration of these chemicals in the atmosphere has increased significantly since 1750. For example, CFC-11 was not present in the atmosphere before 1750, yet its concentration has since reached 256 ppt. It is important to note that these chemicals are totally synthetic in nature and cannot be manufactured under natural conditions. This would explain why the future pathway of these chemicals is so rarely reported.

The transportation sector consumes a quarter of the world's energy and accounts for about 25% of total CO2 emissions, 80% of which is attributed to road transportation (EIA 2006). Projections for Annex I countries indicate that, without new CO2 mitigation measures, CO2 emissions from road transportation might grow from 2,500 million tons in 1990 to 3,500 to 5,100 million tons in 2020. The transportation sector's fossil fuel consumption is also sharply increasing in the Non-Annex I countries. Thus, the total greenhouse gas emissions from transportation will rise in the future. It is also reported that as much as 90% of global biomass burning is human-initiated and that such burning is increasing with time (NASA 1999). Forest products are the major source of biomass, along with agricultural and household wastes. CO2 from biomass has long been considered a feedstock for photosynthesis by plants. Therefore, the increase in CO2 from biomass burning cannot be considered unsustainable, as long as the biomass is not contaminated through "processing" before burning. CO2 from unaltered biomass is distinguished from CO2 emitted from processed fuels. To date, any processing involves the addition of toxic chemicals. Even if the produced gases do not show detectable concentrations of toxic products, it is conceivable that the associated CO2 will be different from CO2 of organic origin. CO2 emissions from biomass that is contaminated with various chemical additives during processing have been calculated and deducted from the CO2 that is favorable for photosynthesis and does not contribute to global warming.
Table 7.1 Concentrations, global warming potentials (GWPs), and atmospheric lifetimes of GHGs.

Gas                                             Pre-1750        Current tropospheric   GWP (100-yr      Lifetime
                                                concentration   concentration          time horizon)    (years)
carbon dioxide (CO2)                            280 ppm         374.9 ppm              1                varies
methane (CH4)                                   730 ppb         1852 ppb               23               12
nitrous oxide (N2O)                             270 ppb         319 ppb                296              114
CFC-11 (trichlorofluoromethane) (CCl3F)         0               256 ppt                4600             45
CFC-12 (dichlorodifluoromethane) (CCl2F2)       0               546 ppt                10600            100
CFC-113 (trichlorotrifluoroethane) (C2Cl3F3)    0               80 ppt                 6000             85
carbon tetrachloride (CCl4)                     0               94 ppt                 1800             35
methyl chloroform (CH3CCl3)                     0               28 ppt                 140              4.8
HCFC-22 (chlorodifluoromethane) (CHClF2)        0               158 ppt                1700             11.9
HFC-23 (fluoroform) (CHF3)                      0               14 ppt                 12000            260
perfluoroethane (C2F6)                          0               3 ppt                  11900            10000
sulfur hexafluoride (SF6)                       0               5.21 ppt               22200            3200
trifluoromethyl sulfur pentafluoride (SF5CF3)   0               0.12 ppt               18000            3200

Source: (IPCC 2001)
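The GWP column of Table 7.1 is what allows emissions of different gases to be combined on a common scale. The following is a minimal sketch of that bookkeeping; the emission quantities are hypothetical and chosen only for illustration.

```python
# CO2-equivalent total from a mixed-gas inventory, using the 100-year
# GWP values of Table 7.1 (IPCC 2001).
gwp_100yr = {
    "CO2": 1,
    "CH4": 23,
    "N2O": 296,
    "SF6": 22200,
}

emissions_tonnes = {   # hypothetical example inventory
    "CO2": 1_000_000,
    "CH4": 5_000,
    "N2O": 300,
    "SF6": 2,
}

co2_equivalent = sum(gwp_100yr[gas] * amount
                     for gas, amount in emissions_tonnes.items())
print(f"Total: {co2_equivalent:,} tonnes CO2-equivalent")
```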
7.4 Comments on Copenhagen Summit
For a decade, the entire world pinned its hopes on the Kyoto Protocol. Even though for some scientists it was just business as usual, for the vast majority of the general public it was supposed to offer a reversal of global warming. As the developed nations came to realize that the goals set for the Kyoto Protocol could not be achieved, new hope was introduced in the name of the Copenhagen Summit. The following section offers a critical review of the Copenhagen Summit of 2009.
7.4.1 Copenhagen Summit: The Political Implication

Thanks to the transparency of the information age, it has become possible to observe the political chaos created during the Copenhagen Summit. To some extent, the world public could witness the humiliating treatment of heads of state and thousands of high-profile representatives from various countries. On December 18, 2009, the final day of the Summit was suspended by the Danish government to hand over the principal conference room to US President Obama, where he and a select group of invitees, 16 in total, would have the exclusive right to speak. Obama's speech failed to imply any binding commitment to environmental integrity and, more importantly, undid any hope for the Kyoto Protocol framework. He left the room after listening to a few more speakers. Among those invited to take the floor were the most industrialized countries, a number of the emerging economies, and some of the least developed countries (LDCs). The leaders and representatives of more than 170 countries only had the right to listen. From the night of December 17 to the early hours of the 18th, the prime minister of Denmark and senior U.S. representatives met with the president of the European Commission and the leaders of 27 countries in order to propose to them, on Obama's behalf, a draft agreement that did not have the participation of any of the other leaders from the rest of the world. If sustainability means bottom-up participation, this was indeed an unsustainable move to 'fix the climate'. During the entire night of the 18th to three in the morning of the 19th, when many heads of state had already gone, the country representatives
were waiting for the re-initiation of the sessions and the closing session. Obama had meetings and press conferences all day on the 18th. The European leaders did likewise. Then they left. Then an unheard-of event took place: at three in the morning on the 19th, the prime minister of Denmark convened a meeting for the closing of the Summit. Ministers, officials, ambassadors, and technical personnel remained to represent their countries. This move did not go unchallenged. A number of third world country representatives insisted on having their voices heard. It was a remarkable move, particularly by the members of ALBA (the Bolivarian Alternative for the Americas). The following statement by the Cuban representative summarizes the nature of the Copenhagen Summit and how it offered no hope for greening the environment.

"The document that you affirmed on a number of occasions did not exist, Mr. President, has now appeared. We have seen versions that were circulating surreptitiously and being discussed in small secret meetings... Cuba considers the text of this apocryphal project as insufficient and inadmissible. The goal of two degrees centigrade is unacceptable and would have incalculable disastrous consequences... The document that you, lamentably, are presenting has no commitment whatsoever to reduced emissions of greenhouse gases... I am aware of earlier versions that, via questionable and clandestine procedures, were being negotiated in closed corridors... The document that you are now presenting precisely omits the already meager and insufficient key phrases that that version contained... For Cuba, it is incompatible with the universally recognized scientific criterion which considers it urgent and unavoidable to assure levels of reduction of at least 45% of emissions by the year 2020, and a reduction of no less than 80% or 90% by 2050... Everything proposed around the continuation of negotiations for adopting, in the future, agreements on reductions of emissions, must inevitably include the concept of the validity of the Kyoto Protocol. Your paper, Mr. President, is the death certificate of the Kyoto Protocol, which my delegation does not accept... The Cuban delegation wishes to emphasize the preeminence of the principle of 'common but distinguished responsibilities'
as a central concept of the future negotiation process. Your paper does not say a single word about that. This draft declaration omits concrete commitments of funding and the transfer of technologies to the developing countries as part of meeting the obligations contracted by the developed countries under the United Nations Framework Convention on Climate Change. The developed countries which are imposing their interests via this document, Mr. President, are evading any concrete commitment. Mr. President, what you refer to as 'a group of representative leaders' is, for me, a gross violation of the principle of sovereign equality consecrated in the Charter of the United Nations... Mr. President, I am formally asking for this declaration to be included in the final report on the work of this lamentable and shameful 15th Conference of the Parties."
The state representatives were given only one hour to express their views. Following that hour, there came a long debate in which the delegations of the developed countries exercised heavy pressure in an attempt to make the Conference adopt the said document as the final result of their deliberations. This did not sit well with the developing countries, which pressed on fundamental issues such as:

- the absence of any commitment on the part of the developed countries in terms of the reduction of carbon emissions;
- funding for the nations of the South to adopt measures of mitigation and adaptation.

In the end, the Conference confined itself to "taking note" of the existence of that document as the position of a group of approximately 25 countries.
7.4.2 The Copenhagen 'Agreement'

Following is the draft decision that was being touted by the President of the Conference. It is available on Website 26.

Draft decision -/CP.15
Proposal by the President
Copenhagen Accord

The Heads of State, Heads of Government, Ministers, and other heads of delegation present at the United Nations Climate Change Conference 2009 in Copenhagen,
In pursuit of the ultimate objective of the Convention as stated in its Article 2,

Being guided by the principles and provisions of the Convention,

Noting the results of work done by the two Ad hoc Working Groups,

Endorsing decision x/CP.15 on the Ad hoc Working Group on Long-term Cooperative Action and decision x/CMP.5 that requests the Ad hoc Working Group on Further Commitments of Annex I Parties under the Kyoto Protocol to continue its work,

Have agreed on this Copenhagen Accord which is operational immediately.

1. We underline that climate change is one of the greatest challenges of our time. We emphasise our strong political will to urgently combat climate change in accordance with the principle of common but differentiated responsibilities and respective capabilities. To achieve the ultimate objective of the Convention to stabilize greenhouse gas concentration in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system, we shall, recognizing the scientific view that the increase in global temperature should be below 2 degrees Celsius, on the basis of equity and in the context of sustainable development, enhance our long-term cooperative action to combat climate change. We recognize the critical impacts of climate change and the potential impacts of response measures on countries particularly vulnerable to its adverse effects and stress the need to establish a comprehensive adaptation programme including international support.

2. We agree that deep cuts in global emissions are required according to science, and as documented by the IPCC Fourth Assessment Report with a view to reduce global emissions so as to hold the increase in global temperature below 2 degrees Celsius, and take action to meet this objective consistent with science and on the basis of equity. We should cooperate in achieving the peaking of global and national emissions as soon as possible, recognizing that the time frame for peaking will be longer in developing countries and bearing in mind that social and economic development and poverty eradication are the first and overriding priorities of developing countries and that a low-emission development strategy is indispensable to sustainable development.

3. Adaptation to the adverse effects of climate change and the potential impacts of response measures is a challenge faced by all
countries. Enhanced action and international cooperation on adaptation is urgently required to ensure the implementation of the Convention by enabling and supporting the implementation of adaptation actions aimed at reducing vulnerability and building resilience in developing countries, especially in those that are particularly vulnerable, especially least developed countries, small island developing States and Africa. We agree that developed countries shall provide adequate, predictable and sustainable financial resources, technology and capacity-building to support the implementation of adaptation action in developing countries.

4. Annex I Parties commit to implement individually or jointly the quantified economy-wide emissions targets for 2020, to be submitted in the format given in Appendix I by Annex I Parties to the secretariat by 31 January 2010 for compilation in an INF document. Annex I Parties that are Party to the Kyoto Protocol will thereby further strengthen the emissions reductions initiated by the Kyoto Protocol. Delivery of reductions and financing by developed countries will be measured, reported and verified in accordance with existing and any further guidelines adopted by the Conference of the Parties, and will ensure that accounting of such targets and finance is rigorous, robust and transparent.

5. Non-Annex I Parties to the Convention will implement mitigation actions, including those to be submitted to the secretariat by non-Annex I Parties in the format given in Appendix II by 31 January 2010, for compilation in an INF document, consistent with Article 4.1 and Article 4.7 and in the context of sustainable development. Least developed countries and small island developing States may undertake actions voluntarily and on the basis of support. Mitigation actions subsequently taken and envisaged by Non-Annex I Parties, including national inventory reports, shall be communicated through national communications consistent with Article 12.1(b) every two years on the basis of guidelines to be adopted by the Conference of the Parties. Those mitigation actions in national communications or otherwise communicated to the Secretariat will be added to the list in appendix II. Mitigation actions taken by Non-Annex I Parties will be subject to their domestic measurement, reporting and verification the result of which will be reported through their national communications every two years. Non-Annex I Parties will communicate information on the implementation of their actions through National Communications, with provisions for international consultations and analysis under
clearly defined guidelines that will ensure that national sovereignty is respected. Nationally appropriate mitigation actions seeking international support will be recorded in a registry along with relevant technology, finance and capacity building support. Those actions supported will be added to the list in appendix II. These supported nationally appropriate mitigation actions will be subject to international measurement, reporting and verification in accordance with guidelines adopted by the Conference of the Parties.

6. We recognize the crucial role of reducing emission from deforestation and forest degradation and the need to enhance removals of greenhouse gas emission by forests and agree on the need to provide positive incentives to such actions through the immediate establishment of a mechanism including REDD-plus, to enable the mobilization of financial resources from developed countries.

7. We decide to pursue various approaches, including opportunities to use markets, to enhance the cost-effectiveness of, and to promote mitigation actions. Developing countries, especially those with low emitting economies should be provided incentives to continue to develop on a low emission pathway.

8. Scaled up, new and additional, predictable and adequate funding as well as improved access shall be provided to developing countries, in accordance with the relevant provisions of the Convention, to enable and support enhanced action on mitigation, including substantial finance to reduce emissions from deforestation and forest degradation (REDD-plus), adaptation, technology development and transfer and capacity-building, for enhanced implementation of the Convention. The collective commitment by developed countries is to provide new and additional resources, including forestry and investments through international institutions, approaching USD 30 billion for the period 2010-2012 with balanced allocation between adaptation and mitigation. Funding for adaptation will be prioritized for the most vulnerable developing countries, such as the least developed countries, small island developing States and Africa. In the context of meaningful mitigation actions and transparency on implementation, developed countries commit to a goal of mobilizing jointly USD 100 billion a year by 2020 to address the needs of developing countries. This funding will come from a wide variety of sources, public and private, bilateral and multilateral, including alternative sources of finance. New multilateral funding for adaptation will be delivered through effective and efficient fund
arrangements, with a governance structure providing for equal representation of developed and developing countries. A significant portion of such funding should flow through the Copenhagen Green Climate Fund.

9. To this end, a High Level Panel will be established under the guidance of and accountable to the Conference of the Parties to study the contribution of the potential sources of revenue, including alternative sources of finance, towards meeting this goal.

10. We decide that the Copenhagen Green Climate Fund shall be established as an operating entity of the financial mechanism of the Convention to support projects, programme, policies and other activities in developing countries related to mitigation including REDD-plus, adaptation, capacity building, technology development and transfer.

11. In order to enhance action on development and transfer of technology we decide to establish a Technology Mechanism to accelerate technology development and transfer in support of action on adaptation and mitigation that will be guided by a country-driven approach and be based on national circumstances and priorities.

12. We call for an assessment of the implementation of this Accord to be completed by 2015, including in light of the Convention's ultimate objective. This would include consideration of strengthening the long-term goal referencing various matters presented by the science, including in relation to temperature rises of 1.5 degrees Celsius.

APPENDIX I
Quantified economy-wide emissions targets for 2020
Annex I Parties    Quantified economy-wide emissions targets for 2020    Emissions reduction in 2020    Base year

APPENDIX II
Nationally appropriate mitigation actions of developing country Parties
Non-Annex I    Actions
7.5 Classification of CO2
Carbon dioxide is considered to be the major precursor for current global warming problems. Previous theories were based on the
"chemicals are chemicals" approach initiated by Linus Pauling's vitamin C and antioxidant experiments. This approach advanced the principle that, whether from a natural or synthetic source and irrespective of the pathway it travels, all vitamin C is the same. This approach essentially disconnects a chemical product from its historical pathway. Even though the role of pathways has been understood by many civilizations for centuries, systematic studies questioning the principle have been a recent development. For instance, only recently Gale et al. (1995) reported that vitamin C supplements did not lower death rates among elderly people and may actually have increased the risk of dying. Moreover, β-carotene supplementation may do more harm than good for patients with lung cancer (Josefson 2003). Obviously, such conclusions could not be drawn had the subjects been taking vitamin C from natural sources. In fact, the practices of people who live the longest lives indicate that natural products do not have any negative impact on human health (New York Times 2003). More recently, it has been reported that patients being treated for cancer should avoid antioxidant supplements, including vitamin C, because cancer cells gobble up vitamin C faster than normal cells do, which might give tumors greater protection (Agus et al. 1999). Antioxidants present in nature are known to act as anti-aging agents. Obviously, these antioxidants are not the same as those synthetically manufactured. The previously used hypothesis, "chemicals are chemicals," fails to distinguish between the characteristics of synthetic and natural vitamins and antioxidants: the impact of synthetic antioxidants and vitamin C on body metabolism is different from that of natural sources. Numerous other cases can be cited demonstrating that the pathway involved in producing the final product is of utmost importance. Some examples have recently been investigated by Islam and coworkers (Islam 2004; Khan et al. 2006; Khan and Islam 2006; Zatzman and Islam 2006). If the pathway is considered, it becomes clear that organic produce is not the same as non-organic produce, natural products are not the same as bioengineered products, natural pesticides are not the same as chemical pesticides, natural leather is not the same as synthetic plastic, natural fibers are not the same as synthetic fibers, natural wood is not the same as fiber-reinforced plastic, etc. (Islam 2006). In addition to being the only products that are good for the long term, natural products are also extremely efficient and economically attractive. Numerous examples are given in Khan
and Islam (2006). Unlike synthetic hydrocarbons, natural vegetable oils are easily degraded by bacteria (AlDarbi et al. 2005). The application of wood ash to remove arsenic from aqueous streams is more effective than any synthetic chemical (Rahman et al. 2004; Wassiuddin et al. 2002). Using the same analogy, carbon dioxide has been classified based on the source from which it is emitted, the pathway it traveled, and the age of the source from which it came (Khan and Islam 2006). This classification rests on a newly developed theory. It has been reported that plants favor a lighter form of carbon dioxide for photosynthesis and discriminate against heavier isotopes of carbon (Farquhar et al. 1989). Since fossil fuel refining involves the use of various toxic additives, the carbon dioxide emitted from these fuels is contaminated and is not favored by plants. If the CO2 comes from wood burning, which involves no chemical additives, it will be more favored by plants. This is because the pathway the fuel travels, from refinery to combustion devices, makes the refined product inherently toxic (Chhetri et al. 2006). The CO2 that the plants do not synthesize accumulates in the atmosphere. The accumulation of this rejected CO2 must be accounted for in order to assess the impact of human activities on global warming. This analysis provides a basis for discerning between natural CO2 and man-made CO2 that could be correlated with global warming.
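For readers who want the arithmetic behind such isotope discrimination, the sketch below shows the standard delta-notation bookkeeping used in studies such as Farquhar et al. (1989). The VPDB reference ratio is the published standard value; the example delta values are typical magnitudes, not data from this chapter.

```python
R_VPDB = 0.0112372  # 13C/12C ratio of the Vienna Pee Dee Belemnite standard

def delta_13c(r_sample: float) -> float:
    """delta-13C in per mil for a measured 13C/12C ratio."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def discrimination(delta_air: float, delta_plant: float) -> float:
    """Photosynthetic discrimination (capital Delta, per mil): plants
    fixing the lighter isotope leave plant tissue depleted in 13C."""
    return (delta_air - delta_plant) / (1.0 + delta_plant / 1000.0)

# Typical magnitudes: atmospheric CO2 near -8 per mil, C3 plant tissue
# near -27 per mil, giving a discrimination of roughly 19.5 per mil.
print(discrimination(-8.0, -27.0))
```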
7.6 The Role of Water in Global Warming

The flow of water in different forms plays a great role in climate change. Water is one of the components of the natural transport phenomenon. A natural transport phenomenon is a flow of complex physical processes, consisting of the production, storage, and transport of fluids, electricity, heat, and momentum (Figure 7.4). The most essential material components of these processes are water and air, which are also the indicators of natural climate. Oceans, rivers, and lakes form both the source and the sink of major water transport systems. Because water is the most abundant matter on earth, any impact on the overall mass balance of water is certain to impact the global climate. The interaction between water and air in order to sustain life on this planet is a testimony to the harmony of nature. Water is the most potent solvent and also has a very high heat storage capacity. Any movement of water through the surface and the Earth's crust can act as a vehicle for energy
distribution. However, the sun is the only source of energy, and sunlight is the most essential ingredient for sustaining life on earth. The overall process in nature is inherently sustainable, yet truly dynamic: not a single phenomenon can be characterized as strictly cyclic. Only recently have scientists discovered that water has memory. Each phenomenon in nature occurs due to a driving force, such as pressure for fluid flow, electrical potential for the flow of electricity, thermal gradient for heat, and chemical potential for a chemical reaction. Natural transport phenomena cannot be explained by simple mechanistic views of physical processes described by a function of one variable. Even though Einstein pointed out the possibility of the existence of a fourth dimension a century ago, the notion of extending this dimensionality to an infinite number of variables is only now coming to light (Islam 2006). A simple flow model of the natural transport phenomenon is presented in Figure 7.4. This model shows that nature has numerous interconnected processes, such as the production of heat, vapor, electricity, and light, the storage of heat and fluid, and the flow of heat and fluids. All these processes continue for infinite time and are inherently sustainable. Any technology that is based on natural principles is sustainable (Khan and Islam 2006). Water plays a crucial role in the natural climatic system. Water is the most essential as well as the most abundant ingredient of life. Just as water covers 70% of the earth's surface, water constitutes 70% of the human body. Even though the value and sanctity of water has
Figure 7.4 Natural transport phenomenon (Fuchs 1999).
been well known for thousands of years in eastern cultures, scientists in the west are only now beginning to examine the concept that water has memory, and that numerous intangibles (most notably the pathway and intention behind human intervention) are important factors in defining the value of water (Islam 2006). However, at the industrial/commercial level, preposterous treatment practices include the following: the addition of chlorine to "purify"; the use of toxic chemicals (soap) to get rid of dirt, itself the most potent natural cleaning agent (Islam 2006); the use of glycol (very toxic) for freezing or drying (getting rid of water in) a product; and the use of chemical CO2 to render water into a dehydrating agent (the opposite of what is promoted as "refreshing"), only to then demineralize it and add extra oxygen and ozone to "vitalize" it. The list seems to continue forever. Similar to what happens to food products (the degradation along the chemical technology chain Honey → Sugar → Saccharin → Aspartame), the chemical treatment technique promoted as water purification has taken a turn, spiraling downward (Islam 2005). Chlorine treatment of water is common in the west and is synonymous with civilization. Similarly, transportation through copper pipes, distribution through stainless steel (reinforced with heavy metal), storage in synthetic plastic containers and metal tanks, and the mixing of ground water with surface water (collected from "purified" sewage water) are common practices in "developed" countries. More recent "innovations," such as ozone, UV, and even H2O2, are proving to be worse than any other technology. Overall, water remains the most abundant resource, yet "water war" is considered to be the most certain destiny of the 21st century. Modern technology development schemes seem to have turned this most abundant resource into what Robert Curl (a Nobel Laureate in Chemistry) termed a "technological disaster" (Islam 2006). Water vapor is considered to be one of the major greenhouse gases in the atmosphere. The greenhouse gas effect is thought to be one of the major mechanisms by which the radiative factors of the atmosphere influence the global climate. Moreover, the radiative regime of the atmosphere is largely determined by its optically active components, such as CO2 and other gases, water vapor, and aerosols (Kondratyev and Cracknell 1998). As most of the incoming solar radiation passes through the atmosphere and is absorbed by the Earth's surface, the direct heating of the surface water and the evaporation of moisture result in heat transfer from the Earth's
surface to the atmosphere. The transport of heat by the atmosphere leads to the transient weather system. The latent heat released by the condensation of water vapor, together with clouds, plays an important role in reflecting incoming short-wave solar radiation and in absorbing and emitting long-wave radiation. Aerosols, such as volcanic dust and the particulates of fossil fuel combustion, are important factors in determining the behavior of the climate system. Kondratyev and Cracknell (1998) reported that the conventional method of calculating global warming potential accounts only for CO2, ignoring the contribution of water vapor and other gases to global warming. Their calculation scheme took into account the other components that affect the absorption of radiation, including CO2, water vapor, N2, O2, CH4, NOx, CO, SO2, nitric acid, ethylene, acetylene, ethane, formaldehyde, chlorofluorocarbons, ammonia, and aerosol formations of different chemical compositions and various sizes. However, this calculation fails to distinguish between the effects of pure water vapor and of water vapor contaminated with chemical contaminants. The impact of water vapor on climate change depends on the quality of the water evaporated, its interaction with atmospheric particulates of different chemical compositions, and the size of the aerosols. There are at least 70,000 synthetic chemicals being used regularly throughout the world (Icenhower 2006). It has further been estimated that more than 1,000 chemicals are introduced every year. Billions of tons of fossil fuels are consumed each year to produce these chemicals, which are the major sources of water and air contamination. The majority of these chemicals are very toxic or radioactive, and their particulates are continuously released into the atmosphere. The chemicals also reach water bodies by leakage, transportation loss, and as by-products of pesticides, herbicides, and water disinfectants. Industrial wastes contaminated with these chemicals also reach water bodies and contaminate the entire water system. The particulates of these chemicals and aerosols, when mixed with water vapor, may increase the absorption characteristics of the atmosphere, thereby increasing the possibility of trapping more heat. However, pure water vapor is one of the most essential components of the natural climate system and has no impact on global warming. Moreover, most pure water vapor ends up transforming into rain near the Earth's surface and has no effect on absorption and reflection. Water vapor in the warmer parts of the earth can rise to higher altitudes since
it is more buoyant. As the temperature decreases at higher altitudes, the air gets colder and holds less water vapor, reducing the possibility of increased global warming. Because water is considered to have memory (Tschulakow et al. 2005), the impact of water vapor on global warming cannot be explained without knowledge of that memory. The impact depends on the pathway the water travels before and after the formation of vapor. Gilbert and Zhang (2003) reported that nanoparticles change their crystal structure when they are wet. The structural change that takes place in the nanoparticles of water vapor and aerosols in the atmosphere has a profound impact on climate change. This relation has been explained based on the memory characteristics of water and the analysis of its pathway. It is reported that water crystals are highly sensitive to the external environment and take different shapes based on the input (Emoto 2004). Moreover, the history of water memory can be traced by analyzing its pathway. The memory of water might have a significant role to play in technological development (Hossain and Islam 2006). Recent attempts have been made towards understanding the role of history in the fundamental properties of water. These models take into account the intangible properties of water, and this line of investigation can address the global warming phenomenon. The memory of water not only has impacts on energy and ecosystems but also plays a key role in the global climate scenario.
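The claim that colder air aloft holds less water vapor can be illustrated with a standard saturation-vapor-pressure approximation. The sketch below uses the Bolton (1980) form of the Magnus formula; the sample temperatures are illustrative assumptions, not values from this chapter.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Bolton (1980) approximation to saturation vapor pressure (hPa),
    reasonable for roughly -35 to +35 degrees Celsius."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Saturation pressure falls steeply as air cools with altitude, so
# rising moist air sheds vapor as condensation rather than storing it.
for t in (30, 15, 0, -15, -30):
    print(f"{t:>4} C: {saturation_vapor_pressure_hpa(t):7.2f} hPa")
```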
7.7 Characterization of Energy Sources
Various energy sources are classified here based on a set of newly developed criteria. Energy is conventionally classified, valued, or measured based on the absolute output of a system. The absolute value represents the steady state of an energy source. However, modern science recognizes that such a state does not exist and that every form of energy is in a state of flux. This section characterizes various energy sources based on their pathways. Each form of energy has a set of characteristic features. Anytime these features are violated through human intervention, the quality of the energy form declines. This analysis enables one to assign a greater quality index to a form of energy that is closest to its natural state. Consequently, the heat coming from wood burning and the heat coming from electrical power will have different impacts on the quality of heat. Just as
all chemicals are not the same, different forms of heat coming from different energy sources are not the same. The energy sources are ranked based on the global efficiency of each technology, the environmental impact of the technology, and the overall value of the energy system (Chhetri et al. 2006). Energy sources are classified based on the age of the fuel source in nature as it is transformed from one form to another (Chhetri et al. 2006). Various energy sources are also classified according to their global efficiency. Conventionally, energy efficiency is defined for a component or service as the amount of energy required in the production of that component or service, e.g., the amount of cement that can be produced with one billion Btu of energy. Energy efficiency is improved when a given level of service is provided with reduced amounts of energy inputs or when services or products are increased for a given amount of energy input. However, the global efficiency of a system is calculated based on the energy input, the product output, the possibility of multiple uses of energy in the system, the use of the system's by-products, and its impacts on the environment. The global efficiency calculation considers the source of the fuel, the pathways the energy system travels, conversion systems, impacts on human health and the environment, and by-products of the energy system. Farzana and Islam (2006) calculated the global efficiency of various energy systems (Figure 7.5). They showed that the global efficiencies of higher quality energy sources are higher than those of lower quality energy sources. In their ranking, a solar energy source (when applied directly) is the most efficient (because the source is free and
Figure 7.5 Global and local efficiency of different energy sources. (The plot shows efficiency against technology, with local and global efficiency curves over sources including solar direct, wood, PV, biogas, geothermal, wind, hydropower, gas, coal, and nuclear.)
has no negative environmental impacts), while nuclear energy is the least efficient among the many forms of energy studied. They demonstrated that previous findings failed to discover this logical ranking because the focus had been on local efficiency. For instance, nuclear energy is generally considered to be highly efficient, which is a true observation if one's analysis is limited to one component of the overall process. If global efficiency is considered, fuel enrichment alone involves numerous centrifugation stages, which renders the global efficiency very low. (See Chapters 5 and 10 for more details.)
7.8 The Kyoto Protocol

Various climate models that use different scenarios have been analyzed. Emission scenarios under satisfactory, as well as partial, fulfillment of the Kyoto Protocol have also been evaluated. Based on the conclusion that current global warming and climate change are caused by the emissions of greenhouse gases from industrial activities, the Kyoto Protocol was negotiated by more than 160 nations in 1997 and aimed to reduce greenhouse gases, primarily CO2 (EIA 2005). In this protocol, the industrial nations (Annex I countries) committed to making substantial reductions in their emissions of greenhouse gases by 2012. For the first time, the Kyoto Protocol established an international agreement for reducing greenhouse gas emissions. Global warming is a major environmental concern, especially in the case of many developed countries, where the greenhouse gas emissions responsible for this change are concentrated. As a result, there are uncertainties and fears about possible consequences for the development of manufacturing activities in the future. According to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), which brings together the world's leading experts in this field, the globally averaged surface temperature is projected to increase by 1.4°C to 5.8°C from 1990 to 2100 under a business-as-usual projection. This temperature rise corresponds to a sea level rise of 9 cm to a maximum of 88 cm. More recently, the chief scientific advisor to the UK government estimated an increase of 3°C in the coming decade. Despite dissenting views and skepticism (Lindzen 2002; Lindzen 2006; Carter 2006), it has become increasingly clear that global warming is not a natural phenomenon and that it emerges from industrial practices that are not sustainable (Khan 2006).
The Kyoto Protocol has set special targets in order to reduce greenhouse gas emissions from Annex I countries, as outlined by Article 2 of the protocol (Kyoto Protocol 1997). Various measures suggest reducing greenhouse gas emissions through the promotion of sustainable development, enhancement of energy efficiency, protection and enhancement of sinks and reservoirs of greenhouse gases, promotion of sustainable forms of agriculture in light of climate change considerations, reduction of methane through waste management, and promotion of increased use of new and renewable forms of energy. Article 3, a major provision of the Kyoto Protocol, reflects agreement that all parties must reduce their greenhouse gas emission levels to 5% below their 1990 levels during the "commitment period" of 2008 to 2012. Some countries, including the US, Canada, European Union countries, and Japan, will have to reduce emissions by up to 8% below their 1990 levels. Others, including Australia and Iceland, will be allowed to increase emissions by varying amounts of up to 10%. Another feature of the protocol is that the parties included in Annex B may participate in emissions trading for the purposes of fulfilling their commitments. The Clean Development Mechanism (CDM) is defined under Article 12 to assist sustainable development in developing countries. Annex I countries can count reductions in greenhouse gases achieved in this way against their own targets. Despite its series of targets for emission reduction, the Kyoto Protocol has many flaws. The standards for emission reduction are not based on scientific facts. The time scale proposed for reducing emission levels also has no justification, and the choice of 1990 as the standard emission baseline has no basis either. Developing countries such as China and India, both newly emerging economies, were excluded from meeting the targets. Emissions trading has become a "license to pollute" for industrialized countries and big corporations, and the situation has become worse since its introduction. The CDM, instituted to assist developing countries, is not functional. Moreover, significant bureaucratic formalities slow down the approval of CDM projects to develop clean energy technologies. The main difficulties in making the CDM work arise from the dual issues of "additionality" and "baselining." To obtain a "certified emission reduction," or new emission rights, from investments in developing countries, investors must demonstrate that emission reductions are "additional" to any that would occur in the absence of the certified project activities. As a result, only 49 projects out of the 1,030 submitted were approved by 2002
(Pershing and Cedric 2002). The monitoring and administration of such emission certification would be a cost burden for relatively small companies. As a result, only big companies that can administer and monitor the certification would get the benefits. The present targets for greenhouse gas emissions in industrialized countries are not tough enough for 2008–2012. Such provisions result from many factors and are likely to become a "license to pollute" that will enable global emissions to increase further. The Kyoto Protocol does not even set a long-term goal for atmospheric concentrations of CO2, so it holds little promise of achieving its set targets. Possibly the most important shortcoming of the Kyoto Protocol is its failure to recommend any change in the current process of energy production and utilization. Any change in the current practice could alter the global warming scenario drastically. For instance, if toxic chemicals were not used in crude oil refining, allowable CO2 emissions would increase significantly. Such an analysis is absent in most of the previous work (Khan 2006; Khan and Islam 2006b). The Intergovernmental Panel on Climate Change stated that there was a "discernible" human influence on climate and that the observed warming trend is "unlikely to be entirely natural in origin" (IPCC 2001). The Third Assessment Report of the IPCC stated, "There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities." Khilyuk and Chilingar (2004) reported that the CO2 concentration in the atmosphere between 1958 and 1978 was proportional to the CO2 emissions from the burning of fossil fuel. In 1978, CO2 emissions into the atmosphere from fossil fuel burning stopped rising and were stable for nine years. They reasoned that if fossil fuel burning were the main cause, the atmospheric concentration should have stopped rising as well, and concluded that fossil fuel burning is therefore not the cause of the greenhouse effect. However, this reasoning is extremely shortsighted, and the global climate certainly does not work linearly, as envisioned by Khilyuk and Chilingar (2004). Moreover, the "Greenhouse Effect One-Layer Model," proposed by Khilyuk and Chilingar (2003, 2004), assumes adiabatic conditions in the atmosphere that do not practically exist. The authors concluded that human-induced emissions of carbon dioxide and other greenhouse gases have a very small effect on global warming. This is due to the limitation of current linear computer models, which cannot predict temperature effects on the atmosphere beyond the lowest level. Similar
arguments were made while promoting dichlorodifluoromethane (CFC-12) to relieve the environmental problems incurred by ammonia and other refrigerants after decades of use. CFC-12 was banned in the USA in 1996 for its impacts on stratospheric ozone layer depletion and global warming. Khan and Islam (2006) presented detailed lists of technologies that were based on such spurious promises. Zatzman and Islam (2006) complemented this list by providing a detailed list of economic models that are also counterproductive. Khilyuk and Chilingar (2004) explained the potential impact of microbial activities on the mass and content of gaseous mixtures in Earth's atmosphere on a global scale. However, that study does not distinguish between biological sources of greenhouse gas emissions (microbial activities) and industrial sources (fossil fuel burning). Emissions from industrial sources possess different characteristics because they derive from diverse origins and travel different paths, which obviously have significant impacts on atmospheric processes. Current climate models have several problems. Scientists have agreed on the likely rise in the global temperature over the next century. However, the current global climatic models can predict only global average temperatures. Projection of climate change in a particular region is considered to be beyond current human ability. Atmosphere-Ocean General Circulation Models (AOGCM) are used by the IPCC to model climatic features, but these models are not accurate enough to provide a reliable forecast of how the climate may change. They are linear models and cannot forecast complex climatic features. Some climate models are based on CO2-doubling and transient scenarios. However, these models, built around a doubling of the CO2 concentration in the atmosphere, cannot predict the climate under other scenarios. These models are insensitive to the difference between natural and industrial greenhouse gases. There are some simple models that use fewer dimensions than the complex models and cannot represent complex systems. The Earth System Models of Intermediate Complexity (EMIC) are used to bridge the gap between the complex and simple models, but these models are not able to assess the regional aspects of climate change (IPCC 2001). Unsustainable technologies are the major cause of global climate change. Sustainable technologies can be developed following the principles of nature. In nature, all functions are inherently sustainable, efficient, and functional for an unlimited time period. In
other words, as far as natural processes are concerned, "time tends to infinity." This can be expressed as t → ∞ or, for that matter, Δt → ∞. By following the same path as the functions inherent in nature, an inherently sustainable technology can be developed (Khan and Islam 2006). The "time criterion" is a defining factor in the sustainability and virtually infinite durability of natural functions. Figure 7.6 shows the direction of nature-based, inherently sustainable technology contrasted with an unsustainable technology. The path of sustainable technology is marked by long-term durability and environmentally wholesome impact, while unsustainable technology is marked by Δt approaching 0. While developing the technology for any particular climatic model, the sustainability criterion is truly instrumental (see Chapter 4). The great flaw of conventional climate models is that they are focused on the extremely short term, that is, Δt = 0.
7.9 Sustainable Energy Development

Different technologies that are sustainable for the long term and do not produce any harmful greenhouse gases are presented here. Technology plays a vital role in modern society. The use of thousands of toxic chemicals in fossil fuel refining and in the industrial processes that make products for personal care, such as body lotion, cosmetics, soaps, and others, has polluted much of the world in which we live (Chhetri et al. 2006; The Globe and Mail 2006). Present-day technologies are based on the use of fossil fuels for primary energy supply, production or
Figure 7.6 Direction of sustainable/green technology (Islam 2005b). (An inherently sustainable technology, with Δt → ∞, trends beneficial over time, while an unsustainable technology, with Δt → 0, trends harmful.)
processing, and feedstock for products like plastics. Every stage of development involves the generation of toxic waste and renders a product harmful to the environment, according to the criterion presented by Khan and Islam (2005a, 2005b). The toxicity of products mainly comes from the addition of toxic chemical compounds, which continuously degrades the quality of the feedstock. Today, it is becoming increasingly clear that the "chemical addition" that once was synonymous with modern civilization is the principal cause of numerous health problems, including cancer and diabetes. A detailed list of these chemicals has been presented by Khan and Islam (2006b). The practice of proposing wrong solutions for various problems has grown progressively worse. For instance, the US is the biggest consumer of milk, most of which is "fortified" with calcium, yet the US ranks at the top of the list of osteoporosis patients per capita in the world. There are similar standings regarding the use of vitamins, antioxidants, sugar-free diets, etc. Potato farms on Prince Edward Island in eastern Canada are considered a hotbed for cancer (The Epoch Times 2006). Chlorothalonil, a fungicide widely used in the potato fields, is considered a carcinogen. The United States EPA has classified chlorothalonil as a carcinogen that can cause a variety of ill effects, including skin and eye irritation, reproductive disorders, kidney damage, and cancer. Environment Canada (2006) published lists of chemicals that were banned at different times. This suggests that the toxic chemicals used today are not beneficial and will some day be banned from use. This trend continues for each and every technological development. However, few studies have integrated these findings into a comprehensive cause-and-effect model. A comprehensive scientific model, developed by Khan and Islam (2006), is applied here to screen out unsustainable and harmful technologies directly at the onset. Some recently developed technologies that are sustainable for the long term are presented below. One of the sustainable technologies presented in this chapter is the true green bio-diesel model (Chhetri and Islam 2006). As an alternative to petrodiesel, bio-diesel is a renewable fuel that is derived from vegetable oils and animal fats. However, the existing bio-diesel production process is neither completely "green" nor renewable because it utilizes fossil fuels, mainly natural gas, as an input for methanol production. The conventional bio-diesel production process entails the use of fossil fuels, such as methane, as an input
to methanol. It has been reported that up to 35% of the total primary energy requirement for bio-diesel production comes from fossil fuel (Carraretto et al. 2004). Methanol makes up about 10% of the feedstock input, and since most methanol is currently produced from natural gas, bio-diesel is not completely renewable (Gerpen et al. 2004). The catalysts and chemicals currently used for bio-diesel production are highly caustic and toxic. The synthetic catalysts used for the transesterification process are sulfuric acid, sodium hydroxide, and potassium hydroxide, all highly toxic and corrosive chemicals. Conventional bio-diesel production and petrodiesel production follow similar pathways (Figure 7.7). Both fuels have similar pollutants in their emissions, such as benzene, acetaldehyde, toluene, formaldehyde, acrolein, PAHs, and xylene (EPA 2002), although bio-diesel emits smaller quantities of these pollutants than petrodiesel. Chhetri and Islam (2006) developed a process that renders bio-diesel production truly "green," using waste vegetable oil as the bio-diesel feedstock. The catalysts and chemicals used in the
Figure 7.7 Pathway of mineral diesel and conventional bio-diesel (Chhetri and Islam 2006). (In the petroleum diesel pathway, crude oil is refined with catalysts, chemicals, and high heat into gasoline, diesel, polymers, wax, etc.; in the bio-diesel pathway, oil/fat undergoes transesterification with catalysts, chemicals, and heat into bio-diesel and glycerol. Both pathways emit CO2, CO, benzene, acetaldehyde, toluene, formaldehyde, acrolein, PAHs, NOx, xylene, etc.)
process were non-toxic, inexpensive, and natural. The catalysts used were sodium hydroxide, obtained from the electrolysis of natural sea salt, and potassium hydroxide, obtained from wood ash. The new process substituted the fossil fuel-based methanol with ethanol produced from grain-based, renewable products. The use of natural catalysts and non-toxic chemicals overcame the limitations of the existing process. Fossil fuel was replaced by direct solar energy for heating, making the bio-diesel production process independent of fossil fuel consumption. Khan et al. (2006) developed a criterion to test the sustainability of the green bio-diesel. According to this criterion, for any technology to be considered sustainable in the long term, it should be environmentally appealing, economically attractive, and socially responsible, and it should continue for infinite time, with these indicators holding for all time horizons. For green bio-diesel, the total environmental, social, and economic benefits are higher than the inputs for all time horizons. For example, in the case of environmental benefits, burning green bio-diesel produces "natural" CO2 that can be readily synthesized by plants. The formaldehyde produced during bio-diesel burning is also not harmful because no toxic additives are involved in the bio-diesel production process. The plants and vegetables grown for bio-diesel feedstock production also have positive environmental impacts. Thus, switching from petrodiesel to bio-diesel fulfils the condition

dCn/dt > 0,   (7.1)

where Cn is the total environmental capital of the life cycle process of bio-diesel production. Similarly, the total social benefit,

dCs/dt > 0,   (7.2)

and the total economic benefit,

dCe/dt > 0,   (7.3)

become positive by switching from mineral diesel to bio-diesel (Khan et al. 2006).
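As a minimal numerical sketch of how criteria (7.1)–(7.3) might be checked (the function and the capital trajectories below are hypothetical illustrations, not Khan et al.'s implementation), a technology passes only if every capital increases over every time step:

```python
def satisfies_criteria(capitals: dict) -> bool:
    """True if every capital series increases monotonically, i.e. the
    forward difference dC/dt stays positive for all time horizons."""
    return all(
        later > earlier
        for series in capitals.values()
        for earlier, later in zip(series, series[1:])
    )

# Hypothetical environmental (Cn), social (Cs), and economic (Ce)
# capital trajectories for green bio-diesel, in arbitrary units.
biodiesel = {
    "Cn": [1.0, 1.2, 1.5, 1.9],
    "Cs": [1.0, 1.1, 1.3, 1.6],
    "Ce": [1.0, 1.4, 1.7, 2.2],
}
print(satisfies_criteria(biodiesel))  # True: all three capitals grow
```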
Bio-diesel can be used in practically all areas where petrodiesel is being used. This substitution will help to significantly reduce the CO2 responsible for the current global warming problem. Bio-ethanol is another sustainable technology, one that offers a replacement for gasoline engines. Global gasoline consumption is approximately 1,200 billion liters per year (Martinot 2005), one of the major sources of CO2 emissions. Current gasoline replacement by bio-ethanol fuel is approximately 32 billion liters worldwide. Conventional bio-ethanol production uses chemicals to break down various feedstocks, such as switchgrass and other biomass, in various stages. For example, producing ethanol from switchgrass involves acid hydrolysis as a major production process. It is reported that the conversion of switchgrass into bio-ethanol uses concentrated sulfuric acid at a 4:1 acid-to-biomass ratio, which makes the process unsustainable. The fuel thus produced is highly toxic, and the hydrolysis process generates fermentation inhibitors, such as 5-hydroxymethylfurfural (5-HMF) and furfural, which reduce the efficiency (Bakker et al. 2004). Moreover, conventional bio-ethanol production also consumes huge amounts of fossil fuel as the primary energy input, making ethanol production dependent on fossil fuels. The development of bio-energy on a large scale requires the deployment of environmentally acceptable, low-cost energy crops as well as sustainable technologies to harness them with the least environmental impact. Sugarcane, corn, switchgrass, and other biomass are the major feedstocks for ethanol production. Chhetri et al. (2006) developed a process that makes bio-ethanol production "green." They proposed the use of non-toxic chemicals and natural catalysts to make the bio-ethanol process environmentally friendly. The technology has been tested for long-term sustainability using a set of sustainability criteria. The ethanol produced using natural and non-toxic catalysts will produce natural CO2 after combustion, which has no impact on global warming. Recently, a jet engine has been designed that converts sawdust waste to electricity (Vafaei 2006). This is one of the most efficient technologies because it can use a variety of fuels for combustion. It was designed primarily to use sawdust to produce power for the engine (Figure 7.8). In this engine, sawdust is sprayed from the top while an air blower creates the jet. A start-up fuel, such as organic alcohol, is used to start the engine. Once the engine is started, the sawdust and blower are enough to power the engine. The main advantage of this jet engine is that
Figure 7.8 Sawdust-to-electricity model jet engine (Vafaei 2006). (Labeled components include the fuel entrance for waste vegetable oil or sawdust, a sheet metal chimney, a 16"–18" sheet metal firebox with removable end plate and shelves for supporting plates, a Pyrex window, a 12 VDC 3" blower with rheostat and battery, sheet metal air and fuel inlets with valve, thermometers for the firebox and flue gas, and a concrete base.)
it can use a variety of fuels, such as waste vegetable oil and tree leaves. It has been reported that crude oil can be burnt directly in such engines (Vafaei 2006). The utilization of waste sawdust and waste vegetable oil increases the global efficiency of the system significantly. The possibility of directly using crude oil could eliminate the various toxic and expensive refining processes that alone emit large amounts of greenhouse gases into the atmosphere. This is considered one of the most sustainable technologies among those currently available.
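A rough order-of-magnitude sketch of such a waste-to-electricity engine is given below. The heating value is a typical handbook figure for dry wood residue, and the feed rate and conversion efficiency are hypothetical assumptions, not reported performance data:

```python
LHV_SAWDUST_MJ_PER_KG = 18.0   # typical dry wood residue, assumed value
FEED_RATE_KG_PER_H = 10.0      # hypothetical sawdust feed rate
CONVERSION_EFFICIENCY = 0.15   # hypothetical fuel-to-electricity figure

# Thermal input in kW: (kg/h * MJ/kg) converted from MJ/h to kJ/s.
thermal_power_kw = FEED_RATE_KG_PER_H * LHV_SAWDUST_MJ_PER_KG * 1000 / 3600
electrical_power_kw = thermal_power_kw * CONVERSION_EFFICIENCY

print(f"thermal input  : {thermal_power_kw:.1f} kW")   # ~50 kW
print(f"electric output: {electrical_power_kw:.1f} kW")  # ~7.5 kW
```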
7.10 Zero Waste Energy Systems

Modern civilization is synonymous with waste generation (Islam 2004; Khan and Islam 2006). Waste production has the most profound impact on energy and mass utilization. Conventional energy systems are mostly inefficient technologies (Farzana and Islam 2006): the more that is wasted, the more inefficient the system is. Almost all
industrial and chemical processes produce wastes, most of which are toxic. The wastes not only reduce the efficiency of a system but also severely impact our health and the environment, leading to further degradation of global efficiency. The treatment of this toxic waste is also highly expensive. Plastic derivatives from refined oil are more toxic than the original feedstocks, and the oxidation of a plastic tire at high temperature produces toxins such as dioxins. The more refined the products are, the more wastes are generated. This section presents a series of zero waste technologies that eliminate the production of industrial CO2. They follow the "five zeros," arranged like the rings of the Olympic logo: zero emissions, zero resource waste, zero waste in activities, zero use of toxics, and zero waste in the product life cycle. This model, originally developed by Lakhal and H'Midi (2003), was called the Olympic Green Chain model. Khan (2006) used this model to propose an array of technologies that fulfill the requirements of the green chain model. The utilization of waste as an energy source offers multiple solutions for energy as well as the mitigation of environmental problems. Production of energy from waste reduces the cost of waste treatment and, at the same time, gives added value to the waste products. This enhances the global efficiency of the system. Local efficiency is calculated based only on the output and input of the turbine and generating engine. This method does not consider the exploration and processing of the fuel, the total transmission and distribution losses, the environmental and social costs associated with the system, or the possible uses of by-products. Global efficiency considers the efficiency of fuel exploration and processing, the efficiency of all moving parts (turbine and generator), the transmission and distribution losses, the embodied energy cost and emissions of parts manufacturing, the total impact on health and the environment, and the impact on biodiversity due to the technological intervention. The global efficiency of burning coal to produce electricity is calculated considering the efficiency of the entire operation, from coal mining to electricity transmission. The electricity production, without even considering the environmental impacts, has a global efficiency of 12.40% (Farzana and Islam 2006). Figure 7.9 is the schematic for calculating the global efficiency of such an energy system.
Figure 7.9 Global efficiency of the coal-to-electricity system (Farzana and Islam 2006). (Stage efficiencies: coal mining 90%; coal pulverization and processing 90%; complete combustion 75%; loss through flue gas 10%, i.e., 90% efficiency; heat loss through the boiler or chamber wall 20%, i.e., 80% efficiency; steam turbine 35%; generator 90%; transmission 90%.)
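The 12.40% figure can be reproduced by chaining the stage efficiencies listed in Figure 7.9; a minimal sketch:

```python
from math import prod

# Stage efficiencies for the coal-to-electricity chain (Figure 7.9,
# after Farzana and Islam 2006).
stages = {
    "coal mining":                  0.90,
    "pulverization and processing": 0.90,
    "complete combustion":          0.75,
    "flue-gas loss":                0.90,
    "boiler/chamber wall loss":     0.80,
    "steam turbine":                0.35,
    "generator":                    0.90,
    "transmission":                 0.90,
}

global_efficiency = prod(stages.values())
print(f"global efficiency: {global_efficiency:.2%}")  # 12.40%
```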
The efficiency can be improved by using the fly ash for other purposes, such as making cement, provided toxic catalysts are not used during coal conversion or cracking. Chemicals, such as Fe-Mn oxides, or acidic compounds are added during coal cleaning and refining before the coal is burned (Guo et al. 2004). These additives contaminate the CO2, making it toxic. Large amounts of sulfur and arsenic are also released during coal burning. The global efficiency of the whole system is thus reduced. The global efficiency of wood combustion has been calculated to be more than 90% (Chhetri et al. 2006), whereas the global efficiency of nuclear power generation is approximately 5%. Nuclear waste storage has been the subject of discussion for a long time, and no feasible methods have been successfully worked out. The development of nuclear power generation also utilizes fossil fuel during various process operations, thus contributing to the emission of CO2. A major problem of nuclear power generation is the half-lives of the natural uranium isotopes: 244,500 years for U-234, 7.03 × 10⁸ years for U-235, and 4.46 × 10⁹ years for U-238 (Wise Uranium Project 2005). The half-lives associated with "enriched" uranium are much higher than this. For these reasons, nuclear power has the lowest global efficiency. Nuclear waste is a big problem: it cannot be utilized and is almost impossible to store safely. Solar energy is free and extremely efficient, and direct solar energy is a benign technology. Khan and Islam (2005) developed a direct solar heating unit that heats waste vegetable oil and then uses it as a heat transfer medium (Figure 7.10). The solar concentrator can heat the oil to more than 300°C, and the heat can be transferred through a heat exchanger for space heating, water heating, or any other purpose.
Figure 7.10 Details of solar heating unit (after Khan et al. 2006).
In conventional water heating, the maximum temperature at which heat can be stored is 100°C. In the case of direct oil heating, however, the global efficiency of the system is more than 80%, and no waste is generated in the system. Khan et al. (2006) also developed a heating/cooling and refrigeration system that uses direct solar heat without converting it into electricity. The single-pressure refrigeration cycle is a thermally driven cycle that uses three fluids: one acts as a refrigerant, the second as a pressure-equalizing fluid, and the third as an absorbing fluid. Because the cycle operates at a single pressure, no moving parts, such as a pump or compressor, are required. To facilitate fluid motion, the cycle uses a bubble pump driven by heat. All of the energy input is in the form of heat. The utilization of direct heat for heating, cooling, or refrigeration replaces the use of large amounts of fossil fuel, thus reducing CO2 emissions significantly. This type of refrigerator offers silent operation, higher heat efficiency, no moving parts, and portability. Khan et al. (2006) developed a novel, zero waste sawdust stove in order to utilize waste sawdust (Figure 7.11). At present, sawdust is considered a waste, and the management of waste always involves cost. Utilizing the waste sawdust adds value to the waste material and generates valuable energy as well. Figure 7.12 is the schematic of the zero waste sawdust stove. A simple heat exchanger was used to transfer heat from the flue gas to cold water. They also designed an oil-water trap to capture all the particulate matter emitted from wood combustion. The particulates, or carbon soot, collected from the oil-water mixture are good
Figure 7.11 Pictorial view of zero waste models for wood stove (after Chhetri et al. 2006).
Figure 7.12 Zero waste mass utilization scheme. (The flowsheet routes sewage waste, sewage water, and fresh and non-compostable kitchen waste through a shredder, solid separator, and digester; biogas goes to a collector and burner for various uses, while digested solids pass through curing and odor control units to yield manure. Flue gas (CO2) and saline water feed a desalination plant producing desalinated water, sodium bicarbonate, and ammonium chloride; used water, leachate (ammonia), and sludge are routed through a water treatment plant and storage tank, returning treated water.)
nano-materials that are in very high industrial demand. The soot can also be used as a non-toxic paint. When the particulates are trapped in the oil-water trap and the heat is extracted for water heating, the CO2 emitted is a "fresh" or "new" CO2, which is most favored by plants for photosynthesis (Chhetri et al. 2006a). Thus, the combustion of wood fuel does not contribute to the greenhouse effect. Other products of combustion, such as NOx and formaldehyde, are also not harmful compared to the similar products of fossil fuel burning. The heat loss in the heater itself lowers the local efficiency somewhat. Assuming a 5–10% radiation and conduction loss, the global efficiency of wood combustion in the stove is considered to be more than 90%. Thus, wood combustion in effectively designed stoves has one of the highest efficiencies among combustion technologies. A high efficiency jet engine has also been developed (Vafaei and Islam 2006). This jet engine can use practically any type of solid fuel, such as waste sawdust, or liquid fuel, such as waste vegetable oil and crude oil. A jet is created to increase the surface area in order to increase the burn rate. Direct combustion of crude oil is possible in this engine. This development would eliminate the costly refining and processing of petroleum products and would have a significant impact in reducing industrial CO2, since the use of heavy metals and toxic catalysts in the refining process is what makes the CO2 a toxic product. Khan et al. (2005) proposed an approach to zero waste (mass) utilization for typical urban settings that includes the processing and regeneration of solids, liquids, and gases. In this process, kitchen waste and sewage waste are utilized for various purposes, including biogas production, desalination, water heating from flue gas, and the making of good fertilizer for agricultural production. The carbon dioxide generated from biogas burning is utilized for desalination. This process achieves zero waste in mass utilization and is shown in Figure 7.12. Technology development along this line has no negative impact on global warming.
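The water-heating step recovers sensible heat from the flue gas according to Q = m·cp·ΔT. The sketch below uses an assumed specific heat typical of combustion gases and hypothetical flow and temperature figures, none of which are reported in the chapter:

```python
CP_FLUE_GAS_KJ_PER_KG_K = 1.1  # typical combustion-gas value, assumed

def recovered_heat_kw(mass_flow_kg_s: float,
                      t_in_c: float, t_out_c: float) -> float:
    """Sensible heat recovered as flue gas cools from t_in to t_out (kW)."""
    return mass_flow_kg_s * CP_FLUE_GAS_KJ_PER_KG_K * (t_in_c - t_out_c)

# Hypothetical stove exhaust: 0.05 kg/s cooled from 400 C to 120 C.
print(f"{recovered_heat_kw(0.05, 400.0, 120.0):.1f} kW")  # ~15.4 kW
```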
7.11 Reversing Global Warming: The Role of Technology Development

This section discusses a series of techniques for reducing industrial CO2. Besides emitting toxic CO2 when burned, fossil fuels have
a greater proportion of the carbon isotope 13C, which makes their emissions less likely to be readily absorbed by plants. This alters the characteristic recycle period of carbon dioxide, causing delays that result in an increase in the total CO2 in the atmosphere. Billions of people in the world use traditional stoves, fueled by biomass, for their cooking and space-heating requirements. It is widely held that wood burning stoves emit more pollution into the atmosphere than oil and natural gas burning stoves. However, a small intervention in wood burning stoves would result in the emission of natural CO2, which is essential for natural processes. A new technique has been developed to achieve zero waste in such technologies. This line of development would have a great impact on technological development in industrial and other sectors as well. Figure 7.13 shows a sawdust-packed cook-stove developed by Khan et al. (2006). This is a highly efficient clay stove that produces no waste. Even though not all by-products are captured and not all value is added, this stove is still considered a zero waste stove because, in producing useful final products, it emits only organic gases that are readily absorbed by the environment. It has a special oil-water trapping mechanism that captures the particulates of
Figure 7.13 Sawdust packed stove (after Khan et al. 2006).
incomplete combustion in an oil-water mixture. A heat exchanger is designed to trap the heat from the flue gas, which is utilized for water heating. As all the particulates are trapped, the CO2 emitted is a clean and natural CO2 that is an essential feedstock for plant photosynthesis. The trapped particulates may be used as paint material and can also be an excellent source of nanomaterial for industrial applications. This technology offers a way to produce a natural form of CO2 that is readily synthesized by plants. The other emissions, such as methane and oxides of nitrogen, are not harmful, unlike those emitted from petroleum-based fuels. The production of green bio-diesel and bio-ethanol discussed earlier is a key element in the production of natural CO2. Only non-toxic chemicals and catalysts are used in the processes that produce bio-diesel and bio-ethanol. Even the benzene, NOx, methane, and other emissions are not harmful, as no toxic chemicals are involved during production. These fuels are derived from renewable sources such as plants and vegetables, which are essential components of natural food cycles for both the plant and animal kingdoms. It has been reported that petroleum fuels will be exhausted within the next few decades, whereas renewable biofuel sources continue for infinite time. These biofuels could replace all petroleum fuels, provided that the biomass farming is planned sustainably. Replacing petroleum fuels with clean biofuels can eventually lead to the reversal of global warming. This reversal could be accelerated if the processing of fossil fuels became non-toxic by avoiding the use of toxic additives (Khan and Islam 2006; Al Darby et al. 2002). Recent studies indicate that crude oil refining could be avoided altogether by modifying the design of the combustion engine (Vafaei 2006). Energy systems are classified based on their global efficiencies. According to Farzana and Islam (2006), global efficiency should be one of the major indicators considered when selecting the technology for any energy system. For instance, a conventional, oil-heated steam turbine used for electricity production has a global efficiency of approximately 16%. For combined heat and power turbines, the global efficiency is approximately 18%. Similarly, the global efficiency of a coal-fired power system is approximately 15%; for hydropower systems it is 43%; for biomass-to-electricity conversion it is 13%; and nuclear power plants have a global efficiency of approximately 5%. The environmental cost of these technologies has not been added in yet, which would further reduce the global efficiency of
these technologies. The solar photovoltaic conversion efficiency is reported to be around 15% (Islam et al. 2006). This system also uses toxic batteries and synthetic silicon cells that contain toxic heavy metals. The efficiency decreases further when the energy is stored in batteries: the efficiency of a battery itself is not very high, and batteries contain toxic compounds. The inert gas-filled tubes radiate very toxic light. A 15% global efficiency means that for every 100 units of energy input, 85 units are lost in the process. This indicates that the prevailing technologies generate a very high level of waste, and most of these systems produce the toxic CO2 that causes global warming. Moreover, the CO2 emissions from the embodied energy associated with all the massive equipment production and manufacturing are also very high. Direct application of solar energy for heating has the highest efficiency. Solar energy is a free source and has no environmental impact. Similarly, wood combustion in a simple stove also has a very high global efficiency due to the use of by-products and waste heat. The CO2 from wood combustion is also an essential ingredient for maintaining the photosynthesis process in plants. The energy systems that have the highest efficiency have the lowest environmental impacts. Considering the long-term impacts of various energy systems, the CO2 emissions from the combustion of oil, gas, and coal, along with the embodied-energy and associated CO2 emissions of hydropower and photovoltaic systems, have been ranked based on the quality of the CO2. Similarly, the CO2 emissions from geothermal energy and biomass burning have also been ranked. Based on this ranking, natural CO2, which does not contribute to global warming, is deducted from industrial CO2, which does.
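A sketch of the bookkeeping this ranking implies is given below; the emission figures and the natural/industrial labels are entirely hypothetical placeholders, since the chapter proposes the classification but reports no such inventory:

```python
# Each entry: (source, emissions in Mt CO2 per year, classified origin).
emissions = [
    ("wood stove, no additives",     5.0, "natural"),
    ("biogas burner",                2.0, "natural"),
    ("coal-fired power",            60.0, "industrial"),
    ("oil refining and transport",  33.0, "industrial"),
]

industrial = sum(m for _, m, origin in emissions if origin == "industrial")
natural = sum(m for _, m, origin in emissions if origin == "natural")

# Per the chapter's argument, only industrial CO2 is counted toward
# global warming; natural CO2 is deducted from the total.
print(f"attributable: {industrial} Mt/yr (natural {natural} Mt excluded)")
```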
7.12 Deconstructing the Myth of Global Warming and Cooling
There are two schools of thought regarding global warming. The first is based on the argument that global warming is caused by greenhouse gas emissions from various residential, commercial, and industrial activities. This argument has received widespread attention, and several national and international organizations around the world are working to reduce greenhouse gas emissions from different sectors. A recent report by the IPCC (2007) declared that most
of global warming is attributable to human activities. Global concentrations of greenhouse gases, such as carbon dioxide, methane, and nitrous oxide, among others, are considered major precursors of this warming effect (IPCC 2001). However, none of the studies differentiate the impacts of such gases according to whether their origins are natural or synthetic. Available evidence lends no support to the assumption that naturally sourced concentrations of such gases cause or participate in global warming; accordingly, they should be excluded from the models being used to predict climate change (Chhetri and Islam 2007a, 2007b; Khan and Islam 2007; Chhetri and Zatzman 2008). Matters stand differently, however, with technologies that process or refine, synthesizing some part of a "greenhouse gas" (GHG) concentration. Without taking up a different path from that of current technological development, it seems unlikely that there can be any significant abatement of global warming effects. Despite the billions of dollars spent, no significant results have been achieved so far. Several studies show that the current global warming problem is real and is due to the anthropogenic emission of greenhouse gases. However, the different models developed to describe this phenomenon are not without controversy. Alexiadis (2007) recently developed a feedback model that describes the effects of carbon dioxide emissions from human activities on global temperature and atmospheric CO2 concentration and argued that anthropogenic carbon dioxide is the main driving force of global warming; even if emissions are reduced, the temperature will keep increasing for a certain time. The second school of thought is based on the argument that anthropogenic impacts on global warming are negligible compared to natural driving forces, such as solar radiation, the precession of Earth, Earth's outgassing, microbial activities, volcanoes, ocean currents, etc. (Khilyuk and Chilingar 2006). This school claims that the earth is actually in a cooling phase rather than a warming phase, based on the historical periods beginning with the formation of Earth. Sorochkin et al. (2007) wrote a book based on this second school of thought, in which they describe in detail the scientific evidence arguing that human impacts on current global warming are negligible. The Foreword and Preface of that work clearly state that the current debate over global warming substantially lacks the scientific breadth of the problem and sufficient data to
describe the subject. The authors argue that computer models are built to argue for human contributions to greenhouse gas emissions but lack the historical observations and the influence of natural processes that in fact dominate climate change. They state that temperature evolution is one of Earth's major dynamic processes and that the human impact on this process could be negligible. The authors also agree that, prior to the industrial revolution, all climate changes were naturally driven and that the climate has been changing continuously in intensity and duration for the last 4.5 billion years. Their book includes an analysis of climate change based on the theory of Earth's evolution, outgassing, the precession of Earth, the solar system, and ocean formation, from the beginning of Earth's formation to date. In Chapter 1 of the book by Sorokhtin et al. (2007), the authors present a theory of evolution that describes the process of chemical-density differentiation of Earth's matter, which they consider the main planetary process driving the evolution of Earth. This theory explains the formation and growth of the dense iron oxide core, the emergence of chemical-density convection in the mantle, and the formation of Earth's lighter silicate crust. The authors claim that, although it varies over geological history, the total input to Earth's internal energy is constituted roughly 90% by endogenous energy, 9% by radioactive decay, and 1% by tidal deformation generated inside Earth's body. Moreover, the gravitational energy of space matter was the dominant energy at the time of Earth's formation. These natural forces still play a key role in tectonic and other evolutionary processes on Earth, and the authors believe that their impact, which plays a significant role in global warming, has been absent from the current scientific debates. Chapter 2 describes Earth's degassing and the stages of formation of the hydrosphere and atmosphere. The authors argue that the juvenile Earth was completely deprived of a hydrosphere and that the atmosphere consisted almost entirely of nitrogen. From early Archaean time, when the first sea basins were formed, degassed carbon dioxide entered the atmosphere and formed a carbon dioxide-nitrogen atmosphere. Ocean water then started interacting with the oceanic crust rocks. It is argued that this process resulted in considerable changes in composition and pressure and in the formation of iron-bearing deposits, which suppressed the oxygenation of the atmosphere and allowed the generation of abiogenic methane that could have been the basis for life forms on Earth.
In Chapter 3, the authors describe the adiabatic theory of the greenhouse effect as the major theory advanced to account for the global warming phenomenon. As they describe, more than 67% of heat transfer in the atmosphere occurs by convection possessing adiabatic properties. This adiabatic model is used to quantitatively explain the temperature regimes of the planetary troposphere and the impacts of composition and pressure on climate systems. It also holds that Earth's precession angle has a direct link to climate change. Based on the adiabatic theory, the authors compare the relative effects of natural and anthropogenic influences on Earth's climate change and conclude that the anthropogenic influence is negligible compared to natural factors. Based on this theory, the total temperature rise predicted in the worst-case scenario is approximately 0.01°C, attributed to the total anthropogenic emissions of greenhouse gases such as carbon dioxide and methane. The authors also link cooling and warming to Earth's precession angle: the climate cools when the precession angle is smaller, and vice versa. Hence, they conclude that the traditional explanation of global warming due to man-made sources is no more than a myth. However, the assumption of adiabatic conditions in the atmosphere is rife with linearized and linearizing assumptions, and the model's basic assumption is itself questionable (Khan et al. 2006a). For an adiabatic condition, three things are generally assumed: 1) the existence of a perfect vacuum between the system and its surroundings; 2) the existence of a perfect reflector around the system, like that of a thermos flask, to resist radiation; and 3) the presence of a zero-heat-diffusivity material that isolates the system. Can any of these conditions be found or maintained anywhere in the atmosphere? In this respect, the predictive value of what the authors have modeled seems little better than the worst of the predictions of ocean level rise, glacial melting, etc., advanced by marketing scenarios of imminent doom stalking the fundamental natural order of the planet, stemming from driving once too often to the local convenience store. No one knows whether the occurrence, either simultaneously or relatively close in time, of similar local effects across various parts of the earth's surface may be evidence of a larger, more fundamental global phenomenon. Therefore, hypothesizing an explanation on the basis of remaining in ignorance about those missing pieces cannot be accepted as scientifically grounded.
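The physical core of the adiabatic argument is the adiabatic lapse rate, the cooling of a convecting air parcel with altitude. The sketch below is standard textbook physics, not the authors' full model; it shows how the adiabatic assumption yields a tropospheric temperature profile and also hints at why the assumption is strained, since the observed mean lapse rate is smaller than the dry adiabatic value.

```python
# Dry adiabatic lapse rate: Gamma = g / c_p, the rate at which a
# rising parcel of dry air cools when no heat is exchanged with
# its surroundings. Textbook physics, not the full adiabatic
# climate model discussed in the text.

g = 9.81       # gravitational acceleration, m/s^2
c_p = 1005.0   # specific heat of dry air at constant pressure, J/(kg K)

gamma = g / c_p      # ~0.00976 K/m, i.e., ~9.8 K per km
T_surface = 288.15   # mean surface temperature, K (15 C)

for h_km in range(0, 11, 2):
    T = T_surface - gamma * h_km * 1000.0
    print(f"{h_km:2d} km: {T - 273.15:6.1f} C")
# Predicts about -83 C at 10 km. The observed mean lapse rate
# (~6.5 K/km) is smaller because moisture and radiation violate
# the strictly adiabatic assumption -- the very point raised here.
```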
On the other hand, although the present work addresses phenomena on a scale that seems far more appropriate for accounting for atmospheric-level climate change (as distinct from phenomena that really address the destruction of human habitat by thoughtlessness), the climate change model it advances based on the adiabatic theory fails to make its case for how fundamental, atmosphere-wide climate change actually works. Chapter 4 of their book deals with the evolution of Earth's climate throughout geological history. For all geological periods, the temperature regimes under the most probable initial and boundary conditions have been computed assuming adiabatic conditions. This theory holds that Earth is actually in a cooling geological period. Based on the analysis of geological periods, Earth's atmospheric temperature is predicted with numerous illustrations. The bacterial nature of Earth's glaciations (nitrogen-consuming bacteria removing nitrogen from the atmosphere and thereby reducing its total pressure) could be one of the main causes of the reduction in temperature. The prediction based on adiabatic theory claims that Earth's temperature 2-3 billion years ago was much higher than it is at present. However, as explained earlier, no such adiabatic condition exists in the atmosphere to support this assumption. A non-linear model that drops the assumption that the atmospheric system works adiabatically would be needed to fully explain climate change and predict global warming. In Chapter 5, the evolution of climate is used to describe the non-uniform distribution, over time, of the accumulation of mineral deposits. This theory is used to explain the formation of the largest iron ore deposits at the end of the Archaean and Early Proterozoic periods. The chapter presents scientific evidence from prehistoric time to date explaining how mineral deposits interacted with atmospheric gases and thereby affected the atmosphere's composition and chemical density. In Chapter 6, the authors link the origin of life on Earth to the formation of a reduced environment. This environment evolved from an intense generation of abiogenic methane at the beginning of the Archaean period, which then led to the formation of the first organic compounds, such as formaldehyde and hydrogen cyanide, which served as the building materials for primitive life forms. The subsequent development of life on Earth unfolded according to biological laws under the strong influence of geochemical and climatic conditions. The authors believe that the main stages of life's transformations coincided
with the main geotectonic breaks during the development of Earth. Moreover, the development of life forms significantly influenced the biological processes on Earth. The authors also reinforce the statement that nitrogen-consuming bacteria were the main cause of the reduction in the partial pressure of nitrogen, lowering the total pressure of the atmosphere and initiating the cooling phase at the start of the mid-Proterozoic period. In Chapter 7, the authors describe how solar radiation, Earth's outgassing, and microbial activities operate as the three major natural forces driving Earth's climate, and they quantify the extent of the impacts of these natural driving forces. Solar luminosity, solar system geometry, and the gaseous composition of the atmosphere are considered first-order climate drivers. The global distribution of continents and oceans on Earth's surface is considered a second-order climate driver. Similarly, orbital and solar variability, large-scale oceanic tidal cycles, and variation in the structure of oceanic currents are considered third-order climate drivers. Volcanoes, natural weathering, regional tectonics, El Niño, solar storms and flares, short ocean tidal cycles, meteorite impacts, and human interventions are considered fourth-order climate drivers. The authors consider the global forces of nature to be at least 4-5 orders of magnitude greater than the effects of human activities; thus, they argue that the influence of anthropogenic activity on the global climate is negligible. The authors are among the most renowned climate scientists in the world. The inclusion of the formation of Earth from prehistoric time in the study of global climate makes this book all the more relevant, because most current scientific analyses lack such a perspective. Sorokhtin, Chilingar, and Khilyuk have provided a new approach that combines Earth's geological history with its climate history, highlighting the dynamic nature of Earth's processes. This book is a valuable resource for students, practicing engineers, and scientists in the fields of geophysics, geology, environment, climate change, and the biological sciences. It could be a guideline for policy makers and an interesting resource for those who follow the global warming debates. Overall, their book can benefit the large section of the scientific and public community confronting the explanatory shortcomings of climate theories in the present context. Nothing in their work, however, should be taken to suggest that the destruction of human habitat and living environments in the short term cannot be addressed or should be left to nature to
"solve" without anyone lifting a finger here and now. Let us grant that anthropogenic sources of pollution cause many local problems throughout the planet, especially on the earth's surface, with very little consequence for the fundamental mechanisms most responsible for adjusting the earth's global climate at the atmospheric and geological levels over the long term. Does it follow that we cannot or should not be taking action here and now to alleviate and mitigate whatever endangers human habitat and its relationship to the environment? On the contrary, action may be the sphere in which much of what we do not yet know about the relationship of the local to the global will begin to be sorted out. One simple analysis makes the point clear. Even if it were true that no amount of CO2 produced by humans could create an imbalance in the global climate, and even if all CO2 were the same, irrespective of its natural or industrial origin, two things would still happen that can change the fate of the human species irreversibly: artificial products are made, and they impact humans (before they reach the global ecosystem). For instance, what would be the impact of Freon or DDT, considering that they did not exist in nature before? In that sense, they represent an infinite change from what nature offered millions of years ago. So, if the changes due to CO2 are minuscule and hence can be ignored, what happens to the infinite change invoked by, say, Freon, which did not exist in nature before? The same principle applies to every artificial product that surrounds us today. Because this change is infinite, it will impact the global system no matter how vast the ecosystem is. In the same vein, humans will be the first and most affected victims, because we live in an environment that is shaped more by our artificial products than by nature. This remains an area that neither side in the current global warming debate seems prepared to address.
7.13 Concluding Remarks

It is concluded that current, synthetic, chemical-based technological development is a major cause of global warming and climate change problems. Industrial CO2 emissions, which are contaminated by the addition of toxic chemicals during fuel refining, processing, and production activities, are responsible for global warming. Natural CO2, which is beneficial to the environment and an essential ingredient for life and biodiversity on Earth, does
not cause global warming. For the first time, natural and industrial CO2 have been differentiated. Carbon dioxide is characterized based on various criteria, such as its origin, the pathway it travels, and its isotope numbers. The current status of greenhouse gas emissions from various anthropogenic activities has been discussed, and the role of water in global warming has been detailed. Various energy sources have been classified based on their global efficiencies. The assumptions and implementation mechanisms of the Kyoto Protocol have been critically reviewed, and this chapter has argued that the Clean Development Mechanism of the Kyoto Protocol has become a "license to pollute" due to its improper implementation mechanism. The conventional climatic models have been deconstructed, and guidelines for new models have been developed in order to achieve true sustainability in technology development over the long term. A series of sustainable technologies that produce natural CO2 has been presented. Zero-waste technologies that have no negative impact on the environment are the key to reversing global warming. This chapter shows that a complete reversal of the current global warming problem is possible only if pro-nature technologies are developed.
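To make the differentiation criteria concrete, the following sketch encodes them as a simple rule. The criteria themselves (origin, pathway traveled, contamination during refining and processing) come from this chapter; the data structure, field names, and the rule are hypothetical illustrations, not a validated model.

```python
# A sketch of the chapter's CO2 differentiation criteria as code.
# The criteria (origin, pathway, contamination by refining
# additives; the text also names isotope numbers) are from the
# chapter; everything else here is a hypothetical illustration.

from dataclasses import dataclass

@dataclass
class CO2Sample:
    origin: str        # "natural" (e.g., wood, respiration) or "industrial"
    pathway: str       # e.g., "wood combustion", "refinery stack"
    refined_with_toxic_additives: bool

def classify(sample: CO2Sample) -> str:
    """Label CO2 per the chapter's argument: only industrial CO2,
    contaminated during refining and processing, is counted as a
    global-warming contributor."""
    if sample.origin == "natural" and not sample.refined_with_toxic_additives:
        return "natural CO2: recyclable by the ecosystem"
    return "industrial CO2: counted toward global warming"

print(classify(CO2Sample("natural", "wood combustion", False)))
print(classify(CO2Sample("industrial", "refinery stack", True)))
```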
8 Diverging Fates of Sustainable and Unsustainable Products

8.1 Introduction
Generally, the idea of "sustainability" implies a moral responsibility for technological development to be accountable for its effects on the natural environment and on future generations. However, most widely accepted technological developments are not sustainable. The main contentious issue is that most of them are mislabeled as "sustainable" due to improper sustainability assessment criteria. With a recently developed scientific sustainability criterion (see Chapter 4), most of the technologies that belong to the "chemical approach" can be shown to be unsustainable (Khan et al. 2005). Modern technological advancement has created many different products for daily use in human life. Most of them are not environmentally friendly and cause numerous problems. However, these products have been so widely accepted that no one questions their sustainability. Nowadays, the most popular household items
are plastics, which are completely unsustainable, environmentally unacceptable, and incontrovertibly harmful to the ecosystem. In the last two decades, especially after the UN Conference on Environment and Development, "sustainability" and "sustainable development" have become commonly and loosely used terms. However, sustainability is hardly achieved in present technological and other resource development (Khan 2006). Many guidelines and frameworks, based mainly on socio-economic and narrowly environmental objectives, have been developed to achieve sustainability (GRI 2002; UNCSD 2001; IChemE 2002). Khan (2006) proposed a new protocol for technological and other developments and developed a method that evaluates sustainability considering economic, environmental, and social impacts. In this chapter, a detailed pathway study is performed, covering origins, degradation, oxidation, and decomposition, in order to demonstrate how a natural product is sustainable and how a synthetic product is unsustainable. Two homologous products, polyurethane fiber and wool fiber, are selected for the sustainability assessment. They appear to be equivalent in terms of durability; however, one is of natural origin while the other is made synthetically, even though its source is natural (extracted from crude oil). The pathways of these products (chemical for polyurethane and biochemical for wool) were studied, and the results show how they diverge. Their degradation behaviors, both oxidation and photo-degradation, were also studied. The study suggests that wool is sustainable and the synthetic fiber is not. Finally, a direct laboratory degradation experiment, the application of microwave radiation to both products, was also undertaken. Further examination indicates that the similarity between wool and polyurethane fibers stops at t = "right now." Table 8.1 shows detailed differences between these two seemingly similar fibers. The experimental results further confirmed the sustainability of non-synthetic wool fiber and the unsustainability of polyurethane. Natural fibers exhibit many advantageous properties, being low-density materials that yield lightweight composites with high specific properties (O'Donnell 2004). These natural fibers are cost-effective, easy to process, and renewable, in turn reducing dependency on foreign and domestic petroleum.
Table 8.1 Basic differences between polyurethane and wool fiber.

Type. Polyurethane: artificial fiber; a product alien to nature. Wool: natural fiber that grows on many organisms.

Composition. Polyurethane: urethane monomer, a completely homogeneous compound with the same repeating pattern. Wool: made of alpha-keratin, a valuable protein; wool is a heterogeneous compound that varies from species to species, and even the protein itself is different and complex within a single species.

Diversity. Polyurethane: no diversity; urethane throughout. Wool: highly diverse, formed by a complex process of synthesis of which very little is known so far; its different segments act like different monomers.

Functionality. Polyurethane: single-functional, just as plastic is. Wool: multifunctional, e.g., protecting the organism and supplying nutrients.

Adaptability. Polyurethane: non-adjustable and non-adaptable; it cannot change itself the way natural products can. Wool: adapts to changes in conditions such as temperature, humidity, and light intensity; it protects itself and the organism on which it grows.

Time factor. Polyurethane: non-progressive; it does not change with time. Wool: changes with time; for example, it degrades naturally.

Perfectness. Polyurethane: creates all kinds of problems, from carcinogenic products to unknown products. Wool: perfect; instead of creating problems, it solves them.
8.2 Chemical Composition of Polyurethane Fiber
Polyurethane fiber is a polymeric fiber. In a polyurethane molecule, urethane linkages form the backbone. Figure 8.1 shows a simple polyurethane chain; a more complex form can be any polymer
containing the urethane linkage in its backbone chain. Crystals can form in any material whose molecules are arranged in a regular order and pattern. A polymeric fiber is a polymer whose chains are stretched out straight (or close to straight) and lined up next to each other along the same axis (Figure 8.1). The urethane linkage is the repeating unit that forms the polyurethane chain, as presented in Figure 8.2. Polymers arranged in fibers can be spun into threads and used as textiles (Figures 8.3 and 8.4). Clothes, carpet, and rope are made out of polymeric fibers. Other plastic polymers that can be drawn into fibers include polyethylene, polypropylene, nylon, polyester, Kevlar, Nomex, and polyacrylonitrile. Figure 8.4 shows a scanning electron microscope (SEM) microphotograph of polyurethane. Each fiber is linear, with no scales or segments, unlike natural fiber (Khan and Islam 2005c).
Figure 8.1 Polyurethane polymeric fiber.

Figure 8.2 The urethane linkage in a polyurethane chain.
Figure 8.3 Polyethylene or nylon fiber.
Figure 8.4 SEM microphotograph of polyurethane fiber.
8.3 Biochemical Composition of Wool
Wool is an extremely complex, natural, and biodegradable protein fiber that is both renewable and recyclable. It has a built-in environmental advantage because it is a natural fiber grown without the use of herbicides or fertilizers. Wool fibers grow in small bundles called "staples," which contain thousands of fibers. Wool fiber is so resilient and elastic that it can be bent and twisted over 30,000 times without danger of breaking or being damaged (Canesis 2005). Every wool fiber has a natural elasticity that allows it to be stretched by as much as one third and still spring back into place. As a biological product, wool is composed of roughly 45.2% carbon, 27.9% oxygen, 6.6% hydrogen, 15.1% nitrogen, and 5.2% sulfur. About 91% of wool is made up of alpha-keratins, which are fibrous proteins. Amino acids are the building blocks of alpha-keratins. The keratin found in wool is called "hard" keratin. This type
of keratin does not dissolve in water and is quite resilient. Keratin is an important, insoluble protein made from eighteen amino acids. The amino acids present in wool include cysteine, aspartic acid, serine, alanine, glutamic acid, proline, threonine, isoleucine, glycine, tyrosine, leucine, phenylalanine, valine, histidine, arginine, and methionine. The most abundant of these is cystine, which gives hair much of its strength. The amino acids are joined to each other by chemical bonds called peptide bonds, or end bonds. The long chain of amino acids thus formed is called a polypeptide chain and is linked by peptide bonds (Figure 8.5). The polypeptide chains are intertwined around each other in a helix shape. The alpha-keratins in wool are fibrous proteins consisting of parallel chains of peptides. The various amino acids in the keratin are bound to each other via peptide bonds to form a peptide chain. The linear sequence of these amino acids is called the primary structure. These bound amino acids also have a three-dimensional arrangement: the arrangement of neighboring amino acids is the secondary structure. The secondary structure of alpha-keratin is an alpha helix, a coiled structure of the amino acid chain that results from the amino acid composition of the primary structure; this chain is depicted in Figure 8.5. The molecular structure of wool fibers behaves like a helix, which gives wool its flexibility and elasticity (Figure 8.6). The hydrogen bonds (dashed lines) that link adjacent coils of the helix provide a stiffening effect. Figure 8.7 indicates that wool has several micro air pockets that retain air. Still air pockets have excellent insulation qualities, making wool fibers ideal for thermal protection (Rahbur et al. 2005). The alpha helix in wool is reinforced by weak hydrogen bonding between amino acids above and below other amino acids in the helix (Figure 8.8).
Figure 8.5 Structural bonds in an amino acid chain.
Figure 8.6 Alpha helix wool structure (Canesis 2004).
Figure 8.7 Insulating pockets of still air (Canesis 2004).
In wool, three to seven of these alpha helices can be curled around each other to form three-strand or seven-strand ropes. Alpha-keratin is one of the proteins in hair, wool, nails, hooves, and horns. It is also a member of a large family of intracellular keratins found in the cytoskeleton. In keratin fibers, long stretches of alpha helices are interspersed with globular regions. This pattern is what gives natural wool fibers their stretchiness. In the keratin represented here, the first 178 amino acids and the last 155 form globular domains on either end of a 310-amino-acid fibrous domain. The fibrous region itself is composed of three helical regions separated by shorter linkers. In the fibrous domain, a repeating 7-unit sequence stabilizes the interaction between pairs of helices in two adjacent strands that wind around each other to form a duplex.
Figure 8.8 Chemical bond of alpha helix (Kaiser 2005).
In the formation of this coil, the more hydrophobic amino acids of the 7-unit sequence meet to form an insoluble core, while charged amino acids on opposing strands attract each other to stabilize the complex (Canesis 2004). A magnified wool fiber is shown in Figure 8.9(A); Figures 8.9(B) and (C) show staples from fine-wooled and coarse-wooled sheep.
Figure 8.9 (A) Magnified wool fiber; (B) staples from fine-wooled sheep; (C) staples from coarse-wooled sheep (Canesis 2004).
The scanning electron micrograph of wool in Figure 8.10 shows that natural wool fiber carries many scales.
8.4 Pathways of Polyurethane
Detailed pathways of polyurethane are presented in Figure 8.11. This product is manufactured from hydrocarbons. Hydrocarbon exploration is environmentally very expensive and causes many environmental problems (Khan and Islam 2005a, 2005c, 2006; Khan et al. 2006b). The present hydrocarbon refining process also uses toxic catalysts and heavy metals (Lakhal et al. 2005). Each step of polyurethane production, especially from monomer to dimer and from oligomers to polymers, involves many toxic catalysts and releases known and unknown toxic and carcinogenic compounds (Table 8.2, Figures 8.11 and 8.13). In the presence of a small molecule called diazabicyclooctane (DABCO), a diol and a diisocyanate form a polymer: when the two monomers are stirred with DABCO, polymerization takes place, and as it proceeds, a new urethane dimer is formed.
Figure 8.10 SEM photomicrograph of wool fiber showing the presence of scales.
Table 8.2 Poly-brominated diphenyl ether (PBDE) fire retardants, found in everyday consumer products and detected in mothers' breast milk.

Polyurethane fibers (Deca, Penta): back coatings and impregnation of home and office furniture, industrial drapes, carpets, automotive seating, aircraft and train seating, insulation in refrigerators and freezers, building insulation.

Polyurethane foam (Penta): home and office furniture (couches and chairs, carpet padding, mattresses and mattress pads); automobile, bus, plane, and train seating; sound insulation panels; imitation wood; packaging materials.

Plastics (Deca, Octa, Penta): computers, televisions, hair dryers, curling irons, copy machines, fax machines, printers, coffee makers, plastic automotive parts, lighting panels, PVC wire and cables, electrical connectors, fuses, housings, boxes and switches, lamp sockets, waste-water pipes, underground junction boxes, circuit boards, smoke detectors.

Sources: WHO 1994; Lunder and Sharp 2003.
This urethane dimer has an alcohol group on one end and an isocyanate group on the other, so it can react with either a diol or a diisocyanate to form a trimer; alternatively, it can react with another dimer, a trimer, or even higher oligomers. In this way, monomers and oligomers combine repeatedly until a high-molecular-weight polyurethane is obtained (Figures 8.11 and 8.12). As polyurethane degrades, the most toxic form of PBDE (penta-BDE) escapes into the environment. High levels of PBDEs have been traced in human breast milk, and this compound is found in common consumer products of every kind (Table 8.2).
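The "combine repeatedly" growth just described is classic step-growth polymerization. Its average chain length is governed by the Carothers equation, a standard polymer-chemistry relation not taken from this chapter; the sketch below shows why very high monomer conversion, and hence an effective catalyst such as DABCO, is needed before high-molecular-weight polyurethane appears.

```python
# Carothers equation for step-growth polymerization (the mechanism
# by which diol + diisocyanate grow into polyurethane):
#   X_n = 1 / (1 - p)
# where p is the extent of reaction (fraction of functional groups
# reacted) and X_n the number-average degree of polymerization.
# Standard polymer chemistry, shown here for illustration only.

def degree_of_polymerization(p: float) -> float:
    if not 0.0 <= p < 1.0:
        raise ValueError("extent of reaction must be in [0, 1)")
    return 1.0 / (1.0 - p)

for p in (0.50, 0.90, 0.99, 0.999):
    print(f"p = {p:5.3f} -> X_n = {degree_of_polymerization(p):7.1f}")
# Even 90% conversion gives only 10-mers; useful high polymer
# requires p > 0.99, which is why the catalyst matters.
```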
Figure 8.11 Pathways of unsustainable polyurethane and inherently sustainable wool, both of which have similar functions. (The polyurethane path runs from hydrocarbon through refining, involving drilling muds, heavy metals, and toxic catalysts, and releases highly toxic and carcinogenic compounds at each step; the wool path runs from vegetable protein through digestion and metabolism, via enzymes, ATP, and NADPH, to amino acids, polypeptides, alpha-keratin, and sheep's wool, yielding beneficial compounds.)
Figure 8.12 Urethane production by the reaction of a diisocyanate with a diol in the presence of DABCO.
PBDE is a highly toxic compound, and exposure to it causes adverse health effects, including thyroid hormone disruption, permanent learning and memory impairment, behavioral changes, hearing deficits, delayed puberty onset, decreased sperm count, fetal malformations, and possibly cancer (Lunder and Sharp 2003). Lunder and Sharp (2003) reported that exposure to PBDEs during infancy causes more significant harm, at much lower levels, than exposure during adulthood. The recently reported PBDE contamination of breast milk could create a disaster in the near future. Scraps of flexible polyurethane foam from slabstock manufacturing are a serious environmental threat (Molero et al. 2006). That study reported that the glycolysis of flexible polyurethane foams is carried out in order to chemically recycle the polyol, a constituent of the polyurethane manufacturing process. Among the various glycols, diethylene glycol was found most effective in obtaining high purity in the polyol phase. The polyurethane foam is thus contaminated by glycol, which upon oxidation produces toxic carbon monoxide. Matsuoka et al. (2005) studied the electro-oxidation of methanol and glycol and found that the electro-oxidation of ethylene glycol at 400 mV gave glycolate, oxalate, and formate. Chhetri et al. (2006) carried out a pathway analysis and found that glycol oxidation produces glycolate and formate, which in turn produce toxic carbon monoxide (CO); this is called the CO-poisoning path (Figure 8.13). Thus, the use of polyurethane creates health problems and will remain harmful to the environment for generations. The various amines used in polyurethane manufacturing also have significant impacts on human health and the environment. Samuel and Steinman (1995) found that laboratory studies on animals show diethanolamine (DEA) to be a carcinogen with major impacts on the kidney, liver, and brain. Nitrosamine, a byproduct of DEA, is also considered a carcinogen.
Figure 8.13 Pathway of ethylene glycol oxidation: glycol aldehyde either follows the CO-poisoning path (glycolate, then formate, then CO) or the non-poisoning path (glyoxylate, then oxalate) (after Chhetri et al. 2006).
8.5 Pathways of Wool
Wool is a natural product, and it follows a completely natural path that has no negative environmental impacts. Figure 8.11 shows the pathways of sheep's wool. In the whole process, a sheep requires only vegetation for food. The sheep digests plant leaves and grass, turning them into simple nutrients that its body can readily absorb. These simple nutrients are converted into amino acids, then polypeptides, and finally into alpha-keratin, the constituent of wool. The whole process takes place through biological activities. As a result, the wool generation process releases no toxic elements, gases, or products. Some biological products and byproducts that are generated are actually essential for the sheep; one such compound is adenosine triphosphate (ATP). The wool generation process is therefore truly sustainable, and it can run indefinitely without harming the environment.
8.6 Degradation of Polyurethane
Polyurethane and other plastic products are widely accepted because of their non-degradability. The incomplete lifecycle of polyurethane is shown in Figure 8.14; it creates irreversible environmental problems. Generally, synthetic polymers are rarely biodegradable, even when the polymer chain contains N and O atoms, apart from carbon, at which oxidation and enzymatic degradation could take place. Synthetic polymers are susceptible to microbial degradation only if biodegradable constituents are introduced into the manufacturing process. Overall, the degradation rate of a polyurethane depends on the components of the polymer, their structure, and the plasticizers added during manufacturing. Polyurethanes containing polyethers are reported to be highly resistant to biodegradation (Szostak-Kotowa 2004). Polyurethane fibers are mainly used in plastic carpets. Several recent studies have linked higher occurrences of asthma to plastic carpets (Islam 2005c), owing to small particulates from the carpet that enter the human lung and thereby the oxidation cycle of the body's billions of cells. When a plastic product, including polyurethane, is burnt, it releases some 400 toxic products.
Figure 8.14 Incomplete life cycle of polyurethane due to non-biodegradability.
Similarly, due to low-temperature oxidation (LTO), the same toxic products can be released even at household temperatures. According to Islam (2005a), the point frequently overlooked here is that, in a manner likely analogous to the LTO identified in petroleum combustion (Islam et al. 1991; Islam and Ali 2001), oxidation products are released even when the oxidation takes place at the relatively low temperature of the human respiration process. To date, little has been reported about the LTO of polyurethane in the context of human health. Only recently has an epidemiological study linked polyurethane to asthma (Jaakkola et al. 2006).
8.7 Degradation of Wool
Wool is a natural product, and microorganisms can decompose it. It is a bio-based polymer, synthesized biologically, and an integral part of ecosystem functions. Biopolymers of this kind are capable of being utilized (biodegraded) by living matter, and they can therefore be safely and ecologically disposed of through waste-management processes such as composting, soil application, and biological wastewater treatment (Narayan 2004). Bio-based materials,
such as wool, offer value in the sustainability/life-cycle equation because they become part of the biological carbon cycle. Life Cycle Assessments (LCA) of these bio-based materials indicate reduced environmental impact and energy use when compared to petroleum-based materials. Figure 8.15 shows the life cycle of wool; the complete cycle shows the natural regeneration of wool. Both bacteria and fungi cause the degradation of wool. However, fungi, especially those belonging to the genera Microsporum, Trichophyton, Fusarium, Rhizopus, Chaetomium, Aspergillus, and Penicillium, are the principal degraders of keratin. Further investigations of fungus-induced biodegradation of wool indicate that keratinolysis is preceded by denaturation of the substrate: the disulphide bridges responsible for keratin's natural resistance are broken, followed by hydrolytic degradation of the protein via extracellular proteinases. The rate of bacterial degradation depends on the chemical composition, the molecular structure, and the degree of substrate polymerization (Agarwal and Puvathingal 1969).
Figure 8.15 Complete lifecycle of wool showing a natural regeneration process.
Besides these, keratinolytic bacteria of the genus Bacillus (B. mesentericus, B. subtilis, B. cereus, and B. mycoides) and Pseudomonas, along with some actinomycetes, e.g., Streptomyces fradiae, have more influence in the degradation process (Agarwal and Puvathingal 1969). Microorganisms attack wool at various stages, from acquisition to utilization. In general, the fatty acids in wool confer high resistance to microbial attack. However, raw wool contains many impurities, which make it highly susceptible to microbial degradation. McCarthy and Greaves (1988) reported that bacteria simultaneously degrade and stain impure wool. For example, Pseudomonas aeruginosa causes green coloration of wool under alkaline conditions and red coloration under acid conditions. A laboratory study of the degradation of wool was carried out (Khan and Islam 2005c), in which wool was heated in a microwave oven and the degraded material was compared with the original structure. Figures 8.16(A) and (B) show the wool before and after microwave degradation. Interestingly, the natural wool fiber does not change structurally, in contrast to the synthetic product, polyurethane. Figures 8.17(A) and (B) show the change in the polyurethane fiber due to the microwave. Both wool and polyurethane were treated in the microwave under similar conditions. Within the same time period, the natural wool fiber did not change at all, but the polyurethane changed completely: it first liquefied and then formed a solid ball, giving off a strong burning smell. This experimental result demonstrates the resilience of natural wool, a quality that makes wool an ideal and naturally safer fiber.
Figure 8.16 SEM photomicrograph of wool fiber before (A) and after (B) microwave oxidation.
Figure 8.17 SEM photomicrograph of polyurethane before (A) and after (B) microwave treatment.
Polyurethane, on the other hand, is inherently harmful to the environment. Electron microscopy (Figure 8.16(A) and (B)) shows that wool's chemical composition and moisture content both enable it to resist burning; instead of burning, wool chars when flamed. Comparative characteristics of wool and polyurethane fiber are shown in Table 8.5.
8.8 Recycling Polyurethane Waste
Polyurethane has been used on a massive scale in the manufacture of appliances, automobiles, bedding, carpet cushion, and upholstered furniture. Industries are recovering and recycling polyurethane waste materials from discarded products and from manufacturing processes. The recycling of any plastic waste is generally done in three ways: mechanically, chemically, and thermally. Mechanical recycling consists of melting, shredding, or granulating waste plastic. Although plastic is sorted manually, sophisticated techniques, such as X-ray fluorescence, infrared and near-infrared spectroscopy, electrostatics, and flotation, have recently been introduced.

Table 8.3 Plastic oxidation experimental data.
Time (minutes):     0     0.5    1.5    3.45   3.55   4.2
Temperature (°C):   25    102    183    122    89     30
Sorted plastic material is melted down directly and molded into a new shape, or it is melted down after being shredded into flakes and processed into granules. Plastic wastes are highly toxic materials, and further exposure to X-rays or infrared rays makes the products even more toxic. The chemical or thermal recycling process for converting certain plastics back into raw materials is called depolymerization. Chemical recycling breaks the polymer down into its constituent monomers, which are then reused in refinery, petrochemical, and chemical processes. This process is highly energy- and cost-intensive and requires very large quantities of used plastic to be economically viable. Much plastic waste consists of polyurethane as well as fiber-reinforced materials that cannot easily be recycled with conventional processes. These reinforced plastics are thermosets and contain a significant fraction of glass or other fibers along with heavy filler materials such as calcium carbonate. One chemical recycling method is the DuPont-patented "ammonolysis" process, in which plastic is melted, pumped into a reactor, and depolymerized at high temperatures and pressures using catalysts and ammonia (NR Canada 1998). In thermal depolymerization, the waste plastic is exposed to high pressure and temperature; under such conditions, the plastic waste is converted to distillate, coke, and oil, which serve as raw materials for making monomers and then polymers. Figure 8.18 gives an overview of the plastic recycling process. Recycling any plastic waste poses several problems. The process produces solid carbon and toxic gases. The hydrolysis of pulverized polycarbonates in supercritical reactors produces bisphenol A, a very toxic compound. The pigments used to color finished products are highly toxic compounds. Dioxins are produced when plastics are incinerated. Phthalates are a group of chemicals that are hormone disrupters; plastic toys made of PVC are often softened with phthalates, and burning them produces toxic fumes. Recycled plastic allows only a single reuse, unlike paper or glass. Various hazardous additives are added to the polymer to achieve the desired material quality; these additives include colorants, stabilizers, and plasticizers containing toxic components such as lead and cadmium. Studies indicate that plastics contribute 28% of all
Figure 8.18 Schematic of the plastic recycling process: natural gas/oil to monomers, polymer resins, fabricator, and consumer products, followed by collection and sorting, with recycling loops via mechanical methods (pellets/flakes), chemical depolymerization to monomers, and thermal depolymerization to raw materials.
Figure 8.19 Plastic oxidation (time in minutes vs. temperature in °C).
cadmium in municipal solid waste and about 2% of all lead (TSPE 1997). In addition, huge amounts of natural gas and other fossil fuels are used as energy sources for depolymerization and ammonolysis. Today's industrial development has created such a dependence that life without plastic is difficult to imagine. Approximately four million metric tons of plastics are produced from crude oil every day (Islam 2005). Today, plastic production consumes 8% of the world's total oil production, although Maske (2001) reported the figure as 4%. Burning plastics produces more than 400 toxic fumes, 80 of which are known carcinogens. Though there is talk of recycling, only about 7% of all plastic is recycled today; the rest is either disposed of into the environment or left susceptible to oxidation. Plastic products are difficult to degrade.
Table 8.4 shows the decomposition rates for various plastics and other waste materials. Some plastic products, such as Styrofoam, never degrade and continuously emit toxic compounds. Other items, such as glass bottles, take 500 years to decompose completely. An experiment was carried out to determine the oxidation rate by burning plastic under normal conditions: it took 3 minutes and 45 seconds to oxidize 2 g of plastic. Table 8.3 summarizes the laboratory data for plastic oxidation.
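These laboratory numbers lend themselves to a quick reduction. The sketch below uses only the data of Table 8.3 and the 2 g in 3 minutes 45 seconds reported above to locate the temperature peak and compute the average oxidation rate; no other values are assumed.

```python
# Quick reduction of the plastic-oxidation lab data (Table 8.3).
# Data points are copied from the table; the rate calculation uses
# the 2 g oxidized in 3 min 45 s reported in the text.

time_min = [0.0, 0.5, 1.5, 3.45, 3.55, 4.2]  # minutes, as printed
temp_C   = [25, 102, 183, 122, 89, 30]        # degrees Celsius

peak_T = max(temp_C)
peak_t = time_min[temp_C.index(peak_T)]
print(f"Peak temperature: {peak_T} C at {peak_t} min")

mass_g = 2.0
duration_min = 3.0 + 45.0 / 60.0              # 3 min 45 s = 3.75 min
rate = mass_g / duration_min
print(f"Average oxidation rate: {rate:.2f} g/min")
# Peak temperature: 183 C at 1.5 min
# Average oxidation rate: 0.53 g/min
```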
8.9 Unsustainable Technologies
At present, it is difficult to find any technology that brings long-term benefits to human beings. Plastic technology is one example of an unsustainable technology, and we consider it here as a case study. Millions of tons of plastic products are reportedly produced daily. About 500 billion to 1 trillion plastic bags are used worldwide every year, as reported by Vincent Cobb, the founder of reuseablebags.com. The first plastic sandwich bags were introduced in 1957; department stores started using plastic bags in the late 1970s, and supermarket chains introduced them in the early 1980s.
Table 8.4 Decomposition rates for plastics and other waste materials.
Paper: 2-4 weeks
Leaves: 1-3 months
Orange peels: 6 months
Milk carton: 5 years
Plastic bags: 10-20 years
Plastic container: 50-80 years
Aluminum can: 80 years
Tin can: 100 years
Plastic soda bottle: 450 years
Glass bottle: 500 years
Styrofoam: never
Source: website 7
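The timescales in Table 8.4 span some five orders of magnitude, which is easier to see when everything is converted to a common unit. In the sketch below, the values are copied from the table, with ranges reduced to their midpoints as a simplifying assumption of this example.

```python
# Decomposition times from Table 8.4, converted to years for
# comparison. Ranges are reduced to their midpoints (an assumption
# made for this sketch); "never" (Styrofoam) is shown as infinity.

decomposition_years = {
    "Paper": 3 / 52,            # midpoint of 2-4 weeks
    "Leaves": 2 / 12,           # midpoint of 1-3 months
    "Orange peels": 6 / 12,
    "Milk carton": 5,
    "Plastic bags": 15,         # midpoint of 10-20 years
    "Plastic container": 65,    # midpoint of 50-80 years
    "Aluminum can": 80,
    "Tin can": 100,
    "Plastic soda bottle": 450,
    "Glass bottle": 500,
    "Styrofoam": float("inf"),
}

for item, years in sorted(decomposition_years.items(), key=lambda kv: kv[1]):
    label = "never" if years == float("inf") else f"{years:8.2f} yr"
    print(f"{item:20s} {label}")
```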
Natural plastics have been used for thousands of years, dating back to the time of the pharaohs and the old Chinese civilizations. Natural resins, animal shells, horns, and other products, more flexible than cotton and more rigid than stone, have been used for household products, from toys and combs to wraps and drum diaphragms. Until about 50 years ago, natural plastics were used worldwide for making buttons, small cases, knobs, phonograph records, mirror frames, and many coating applications. There was no evidence that these materials posed any environmental threat. The only problem with natural plastics, it seemed, was that they could not be mass-produced, or at least humankind did not know how to mass-produce them. In order to find more efficient ways to produce plastics and rubbers, scientists began trying to produce these materials in the laboratory. Ever since the American inventor Charles Goodyear accidentally discovered, in 1839, that the properties of natural rubber could be altered with the addition of inorganic additives, the culture of adding unnatural materials in order to manufacture plastic has grown. During this development, the focus was on making sure the final products had homogeneity, consistency, and durability in their macroscopic features, without regard to the actual process of reaching this status.

Table 8.5 Characteristics comparison of polyurethane and wool.
Polyurethane: artificial fiber. Wool: natural fiber.
Polyurethane: non-biological polymer composed of urethane monomers. Wool: alpha-protein-based biological polymer.
Polyurethane: simple (same segments, same monomers) and homogeneous. Wool: complex (different segments acting like different monomers) and heterogeneous.
Polyurethane: photo-oxidation releases toxic compounds. Wool: natural; no toxic gases.
Polyurethane: non-biodegradable. Wool: biodegradable.
Polyurethane: non-adjustable and non-adaptable. Wool: adjustable (flexible; it can change itself under different conditions).
Polyurethane: incomplete lifecycle; does not regenerate. Wool: complete lifecycle with regeneration.
Polyurethane: creates environmental problems. Wool: no environmental problems.
What has happened since this phase of mass production began is what can be characterized as the plastic revolution. Today, some 90 million barrels of crude oil are produced daily in order to sustain our lifestyle. Crude oil is nothing but plants and other living matter, processed over millions of years. The original ingredients of crude oil are not harmful to living things, and it is not likely that the aged form of the same material would be harmful, even if it contains trace elements that are individually toxic. Indeed, crude oil is easily decomposed by common bacteria at a rate comparable to the degradation of biological waste (Livingston and Islam 1999). Even when toxic chemicals are added to fractionated crude oil, e.g., motor oil, the degradation rate is found to be rather high (Chaalal et al. 2005). As long as bacteria are present in abundance, it seems any liquid will be degraded. The problem starts when the crude oil components are either turned into solid residues or burned to generate gaseous products. During the first phase of transformation, thermal cracking prevails, and significant amounts of solid residue are produced. Much of this solid residue is used for producing tar and related products. Some of it is reinforced with metals to produce long-chain molecules in the name of soft and hard plastics. This is the phase that becomes most harmful to the environment over the long term: suddenly, and easily, crude oil components are turned into materials that will last practically forever. The feature most responsible for plastics' broad popularity and ubiquity is also responsible for their most damaging long-term implications. We currently produce more than four million metric tons of plastic every day from the 90 million barrels of crude oil produced. More than 30% of this plastic is used by the packaging industry (Market Development Plan 1996). In 2003, 57% of beach waste was identified as having come from plastic materials (Islam and Zatzman 2005). It is reported that in the United Kingdom alone, three million tons of plastic are thrown away every year (Waste Online 2005). Even though talk of recycling abounds, only 7% of the plastics produced are recycled; the rest are disposed of into the environment and remain susceptible to oxidation (low-temperature oxidation, LTO, at the very least). Figure 8.20 shows used plastics in a collecting center that will later be processed for recycling, and Figure 8.21 shows the same plastics packed and ready for delivery to the recycling factory.
Figure 8.20 Waste plastic in a collecting center (Mann 2005).
Figure 8.21 Collected plastics are packed for recycling (Mann 2005).
Current daily production of plastics (from hydrocarbons) is greater than the consumption of carbohydrates by the entire human population (Islam 2005a). Our lifestyle abounds in plastics. Households that boast "wall to wall carpets" are in fact covered with plastic. The vast majority of shoe soles are plastic. Most clothing is plastic. Television sets, refrigerators, cars, paints, computer chassis, and practically everything that "modern" civilization has to offer are plastic. Cookware boasting a non-stick liner is non-stick because of a plastic coating. The coating on hardwood is plastic. The material that makes virgin wool manageable is plastic. The
material of medicinal capsule coatings is plastic. The list goes on. Recently it was disclosed that food products are dipped in "edible" plastic to give them the appearance of freshness and crispness. This modern age is synonymous with plastic in exactly the same way it is synonymous with cancer, AIDS, and other modern diseases.
8.10 Toxic Compounds from Plastic

Plastic products and their production processes release numerous types of toxic compounds (Islam 2003). Table 8.2 shows toxic compounds released from plastics, and Table 8.6 lists their related health effects. More than 70,000 synthetic chemicals and metals are currently in commercial use in the U.S. The toxicity of most of these is unknown or incompletely studied. In humans, exposure to some may cause mutation, cancer, reproductive and developmental disorders, adverse neurological and immunological effects, and other injuries. Reproductive and developmental effects are a concern because of the important consequences for couples attempting to conceive and because exposure to certain substances during critical periods of fetal or infant development may have lifelong and even intergenerational effects. The industry responsible for creating raw plastic materials is by far the biggest user of listed chemicals, reportedly using nearly 270 million pounds in 1993 alone. Plastic materials and resins are the top industrial users of chemicals. The biggest problem with plastics, like that of nuclear waste from atomic power plants, is the absence of any environmentally safe method of waste disposal. If disposed of out-of-doors, the respiratory system of any ambient organic life form is threatened. If incinerated, toxic fumes almost as bad as cigarette smoke are released. Typically, plastic materials will produce some 400 toxic fumes, including 80 known carcinogens. Yet most plastics are flammable, and accidental burning is always a possibility (Islam 2005a, 2005b).
8.11 Environmental Impact Issues

Today, these plastic products are manufactured entirely from petroleum products, which depend on the supply of a non-renewable resource. The products have many different types of environmental impacts. For example, plastics are generally produced
from fossil fuels, which are gradually being depleted. The production process itself involves energy consumption and further resource depletion. During production, emissions are released into the water, air, and soil. Emissions of concern include heavy metals, chlorofluorocarbons, polycyclic aromatic hydrocarbons, volatile organic compounds, sulfur oxides, and dust. Wastewater bearing solvent residues from separation processes and wet scrubbers enters the food chain. Residual monomer in products and small molecules (plasticizers, stabilizers) are slowly released into the environment, for example by leaching into water. These emissions have effects such as ozone depletion, carcinogenicity, smog, and acid rain. Thus, the production of plastic materials can adversely affect ecosystems, human health, and the physical environment. Overall, the U.S. plastics and related industries employed about 2.2 million U.S. workers and contributed nearly $400 million to the economy in 2002, according to The Society of the Plastics Industry (Lowy 2004). The main issue with plastic products is the air emission of monomers and volatile solvents. These emissions are released during the industrial production process as well as during the plastic products' use. When a plastic is burnt, it is oxidized, releasing many highly toxic compounds, and modern household uses of plastic continuously release toxic compounds through slower oxidation or photo-oxidation. Islam (2005a) reported the potential impacts of plastics even when they are simply left inside the household. The conventional theory appears to suggest that nothing consequential happens because plastics are so durable. In support of this conclusion, the failure to detect anything leaching from these plastics into the environment on a daily basis is ritually cited. The unwarranted assumption, "if we cannot see (detect) it, it does not exist," represents the starting point of the real problem. In fact, some portion of the plastic is being released continuously into the atmosphere at a steady rate, be it from the plastic of the household carpet, the computer chassis, or the pacifier that a baby is constantly sucking. The current unavailability of tools capable of detecting and/or analyzing emissions on this scale can hardly be taken as proof of the harmlessness of these emissions. Human beings, in particular, constantly renew their body materials, and plastics contain components in trace quantities small enough to "fool" the living organism in the process of replacing something essential.
Each defective replacement is likely to induce some long-term damage. For instance, hydrocarbon molecules can be taken up in place of carbohydrates (fatal when they reach the lung diaphragm), lead can replace zinc, and so on. Recently it was noticed that plastic baby bottles release dioxins when exposed to microwaves (Mittelstaedt 2006b). From this, two essential points may be inferred: plastics always release some toxins, and microwave exposure enhances molecular breakdown. In other words, something clearly unsafe following microwave irradiation was, in fact, already unsafe prior to radiation exposure. Air emissions data for certain key criteria pollutants (ozone precursors) are available from the National Emission Trends (NET) database (1999), and hazardous air pollutant emissions data are available from the National Toxics Inventory (NTI) database (1996 is the most recent year for which final data are available). Major emissions from the plastics sector are shown in Figure 8.22. The total emissions of volatile organic compounds (VOCs), nitrogen oxides (NOx), and hazardous air pollutants (HAPs) are 40,187, 31,017, and 19,493 tons per year, respectively (Figure 8.22). The plastics sector contributes greenhouse gas emissions from both fuel and non-fuel sources.
Figure 8.22 Total amounts of VOCs, NOx, and HAPs released from the plastics industry.
Another document in this series, Greenhouse Gas Estimates for Selected Industry Sectors, provides estimates based on fuel consumption information from the Energy Information Administration (EIA) of the U.S. Department of Energy and the Inventory of U.S. Greenhouse Gas Emissions and Sinks, issued by the EPA. (The EIA document is sector-specific for energy-intensive sectors but does not provide emission data, while the EPA document provides emission data but not on a sector-specific basis; see the estimates document for details of how the calculation was carried out.) Based on those calculations, the plastics sector in 2000 was responsible for 68.1 teragrams (Tg) (million metric tons) of CO2-equivalent emissions from fuel consumption and 9.9 Tg of CO2-equivalent emissions (as nitrous oxide) from non-fuel sources (mostly the production of adipic acid, a constituent of some forms of nylon), for a total of 78.0 Tg of CO2 equivalents. In comparison, the chemical sector as a whole (including plastics) accounted for 531.1 Tg of CO2 equivalents. Thus, plastics are a sizeable, though not the dominant, contributor to greenhouse gas emissions within the chemical sector. However, if one considers that the CO2 and other greenhouse gases released from plastics fall under the category of "bad gases" (high isotope number) and cannot be recycled by the ecosystem, the negative impact of plastics becomes very high. A special risk associated with products of the plastics sector is the leaching of the plasticizers that are added to polymer formulations to improve material properties. An example is the concern over the leaching of the plasticizer DEHP from the polyvinyl chloride used in medical devices, the subject of an FDA Safety Alert issued in 2002. Other phthalate plasticizers are found in a wide variety of consumer products, including children's toys and food wrap. Since phthalates are soluble in fat, PVC wrap used for meat and cheese is a particular concern. A number of common monomers are known or suspected reproductive toxins and/or carcinogens. Vinyl chloride, a confirmed carcinogen, is commonly used in PVC. Styrene, a possible carcinogen, is used in polystyrene. Toluene diisocyanate, a possible carcinogen with known acute toxicity, is commonly used in making polyurethane. A probable human carcinogen, acrylonitrile, is used in acrylic resins and fibers, as is methyl methacrylate, a possible reproductive toxin.
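The shares implied by these figures are straightforward to verify. The sketch below copies the numbers quoted in this section and adds only the arithmetic: the criteria-pollutant total and plastics' fraction of the chemical sector's CO2-equivalent emissions.

```python
# Emission figures quoted in this section, with the implied shares.
# All numbers are copied from the text (NET/NTI databases, EIA/EPA
# estimates); only the arithmetic is added here.

criteria_tons_per_year = {"VOCs": 40_187, "NOx": 31_017, "HAPs": 19_493}
print(f"Total criteria emissions: "
      f"{sum(criteria_tons_per_year.values()):,} tons/yr")

fuel_Tg = 68.1        # CO2-equivalent from fuel consumption, Tg
non_fuel_Tg = 9.9     # CO2-equivalent (as N2O) from non-fuel sources, Tg
plastics_Tg = fuel_Tg + non_fuel_Tg          # 78.0 Tg
chemical_sector_Tg = 531.1

share = plastics_Tg / chemical_sector_Tg
print(f"Plastics: {plastics_Tg:.1f} Tg CO2e, "
      f"{share:.1%} of the chemical sector's total")
# -> 90,697 tons/yr; 78.0 Tg CO2e, about 14.7% of the chemical sector.
```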
8.12 How Much is Known?
The science that developed plastic technologies has also made the dangers of using them known. Unfortunately, no
research result is allowed to be published when it contradicts the expectations of the corporations that funded the research (Shapiro et al. 2007). Even government-funded research does not offer any hope, because either industry sponsors screen the topics (in jointly funded projects) or the journals find excuses not to publish the results for fear of repercussions, not to mention that reviewers have a vested interest in maintaining the status quo. Therefore, the discussion of how much is known seems futile. Even if these facts are known, who is going to fight the propaganda machine? The website run by the Ecology Center, as well as many others, has long listed the hazards of plastic. Even government sites have listed some cautious scientific results, albeit without much elaboration or inference (CDC 2001). Table 8.6 lists some of the findings that are readily available on the Internet. Note that these results rely only on measurable amounts that migrate from the plastic to the products contained within. In all reported studies, the possibility of contamination due to sustained exposure and/or low-temperature oxidation is not identified. Also, the focus of these studies is on short-term implications and safety issues; for each item, the long-term implication is immeasurably more devastating. Unsustainable plastic products have been promoted for their non-degradability, light weight, flexibility, and low cost (Table 8.7). However, plastics carry health and environmental costs. The industry consumes fossil fuel, a non-renewable, heavily polluting, and dwindling commodity. It produces pollution and uses large amounts of energy during manufacturing. It accumulates non-biodegradable waste plastic in the environment, relying on land indefinitely as a wastebasket. Plastics continuously release dioxins into the atmosphere, as well as toxic polymers and other chemicals that contaminate our food (Table 8.6). The released chemicals threaten human health and reproductive systems. Considering all this, the plastic revolution epitomizes what modern technology development is all about: every promise made to justify the making of plastic products has been a false one. As evidenced by the practically daily reports of mishaps with plastic products, ranging from non-stick cookware (Mittelstaedt, M. 2006b) to polyurethane tubes for the unborn, plastic products represent the mindset that allowed a short-term (as in "right now") focus to obsessively dominate technological development.
Table 8.6 Known adverse health effects of commonly used plastics.

Polyvinyl chloride
Common uses: Food packaging, plastic wrap, containers for toiletries, cosmetics, crib bumpers, floor tiles, pacifiers, shower curtains, toys, water pipes, garden hoses, auto upholstery, inflatable swimming pools.
Adverse health effects: Can cause cancer, birth defects, genetic changes, chronic bronchitis, ulcers, skin diseases, deafness, vision failure, indigestion, and liver dysfunction.

Phthalates (DEHP, DINP, and others)
Common uses: Softened vinyl products manufactured with phthalates, including vinyl clothing, emulsion paint, footwear, printing inks, non-mouthing toys and children's products, product packaging and food wrap, vinyl flooring, blood bags and tubing, IV containers and components, surgical gloves, breathing tubes, general purpose labware, inhalation masks, and many other medical devices.
Adverse health effects: Endocrine disruption; linked to asthma and to developmental and reproductive effects. Medical waste containing PVC and phthalates is regularly incinerated, causing public health effects from the release of dioxins and mercury, including cancer, birth defects, hormonal changes, declining sperm counts, infertility, endometriosis, and immune system impairment.

Polystyrene
Common uses: Many food containers for meats, fish, cheeses, and yogurt; foam and clear clamshell containers; foam and rigid plates; clear bakery containers; packaging "peanuts"; foam packaging; audio cassette housings; CD cases; disposable cutlery; building insulation; flotation devices; ice buckets; wall tile; paints; serving trays; throw-away hot drink cups; toys.
Adverse health effects: Can irritate eyes, nose, and throat and can cause dizziness and unconsciousness. Migrates into food and is stored in body fat. Elevated rates of lymphatic and hematopoietic cancers have been reported for exposed workers.

Polyethylene
Common uses: Water and soda bottles, carpet fiber, chewing gum, coffee stirrers, drinking glasses, food containers and wrappers, heat-sealed plastic packaging, kitchenware, plastic bags, squeeze bottles, toys.
Adverse health effects: Suspected human carcinogen.

Polyester
Common uses: Bedding, clothing, disposable diapers, food packaging, tampons, upholstery.
Adverse health effects: Can cause eye and respiratory-tract irritation and acute skin rashes.

Urea-formaldehyde
Common uses: Particle board, plywood, building insulation, fabric finishes.
Adverse health effects: Formaldehyde is a suspected carcinogen and has been shown to cause birth defects and genetic changes. Inhaling formaldehyde can cause coughing, swelling of the throat, watery eyes, breathing problems, headaches, rashes, and tiredness.

Polyurethane foam
Common uses: Cushions, mattresses, pillows.
Adverse health effects: Bronchitis, coughing, and skin and eye problems. Can release toluene diisocyanate, which can produce severe lung problems.

Acrylic
Common uses: Clothing, blankets, carpets made from acrylic fibers, adhesives, contact lenses, dentures, floor waxes, food preparation equipment, disposable diapers, sanitary napkins, paints.
Adverse health effects: Can cause breathing difficulties, vomiting, diarrhea, nausea, weakness, headache, and fatigue.

Tetrafluoroethylene
Common uses: Non-stick coatings on cookware, clothes irons, ironing board covers, plumbing, and tools.
Adverse health effects: Can irritate eyes, nose, and throat and can cause breathing difficulties.

(Plastic Task Force 1999)
Table 8.7 Differences between natural and synthetic materials.

Natural Materials | Synthetic Materials
1. Multiple/flexible (different segments and parts, different monomers in polymers; non-symmetric, non-uniform) | 1. Exact/simple (same monomers)
2. Non-linear | 2. Linear
3. Heterogeneous | 3. Homogeneous/uniform
4. Has its own natural process | 4. Breaks natural process
5. Recycles, life cycle | 5. Disposable/one-time use
6. Infinite | 6. Finite
7. Non-symmetric | 7. Symmetric
8. Productive design | 8. Reproductive design
9. Reversible | 9. Irreversible
10. Knowledge | 10. Ignorance or anti-knowledge
11. Phenomenal and sustainable | 11. Aphenomenal and unsustainable
12. Dynamic/chaotic | 12. Static
13. No boundary | 13. Based on boundary conditions
14. Enzyme | 14. Catalyst
15. Self-similarity (fractal nature) is only a perception | 15. Self-similarity imposed
16. Multifunctional | 16. Single-functional
17. Reversible | 17. Irreversible
18. Progressive (dynamic; youth marked by quicker change) | 18. Non-progressive
19. Unlimited adaptability (infinite adaptability; any condition) | 19. Zero adaptability (controlled conditions)
8.13 Concluding Remarks
To achieve sustainability in technological development, a fair, consistent, and scientifically acceptable criterion is needed. In this study, the time (temporal) scale is considered the primary selection criterion for assuring inherent sustainability in technological development. The proposed model shows that this criterion is feasible
and could be easily applied in achieving true sustainability. This approach is particularly suited to assessing sustainable technologies and other management tools, and the straightforward flowchart model proposed should facilitate sustainability evaluation. Conventional technologies and management tools were analyzed against the proposed screening criterion. For example, a detailed pathway study was performed, covering origin, degradation, oxidation, and decomposition, in order to demonstrate how a natural product is sustainable and a synthetic product is unsustainable. In this research, two similar products, polyurethane fiber and wool fiber, were selected for the sustainability study. It is shown that even when two products have similar macroscopic characteristics, they can sit at opposite ends of the sustainability spectrum. The natural fiber, wool, was found to be truly sustainable, while polyurethane is completely unsustainable. A similar pathway analysis might well be applied to determine whether the development of an entire technology is sustainable or unsustainable.
9 Scientific Difference Between Sustainable and Unsustainable Processes

9.1 Introduction
A process can be sustainable only if both the source and the process conform to natural processes. This simple rule of sustainability is scientific yet rarely understood by engineers. This chapter is dedicated to providing two sets of examples of products that lie on opposite sides of the sustainability spectrum. The detailed analysis of origin and pathway shows that a product can be rendered unsustainable merely by following an unsustainable pathway. The differences between such products are not immediately clear because their apparent features are similar and, in some ways, the unsustainable products are more appealing. However, long-term impact studies show that the unsustainable product will continue to insult the environment, whereas the sustainable product will actually improve the environment. This chapter first analyzes one such pair of products, paraffin wax and beeswax. The analysis utilizes scanning electron microscopy (SEM) coupled with EDX and establishes their main components
and morphology. The chapter then discusses the physical and chemical properties of paraffin wax and beeswax that will be used to simulate rock drilling in the field.
9.1.1 Paraffin Wax and Beeswax

Paraffin wax is used in the manufacture of candles, paper coating, protective sealants for food products and beverages, glass-cleaning preparations, hot-melt carpet backing, biodegradable mulch (hot-melt-coated paper), impregnating matches, lubricants, crayons, surgical tools, stoppers for acid bottles, electrical insulation, floor polishes, cosmetics, photography products, anti-frothing agents in sugar refining, tobacco products, protection for rubber products against sun-cracking, and chewing-gum base. Our ancestors, from as early as the Neolithic period, used waxy substances for a large range of activities (Regert et al. 2005), including waterproofing, illumination, sealing, adhesion, and many technical, medicinal, or symbolic purposes. Additives are often mixed with these waxy materials in order to improve their properties: resins harden and color the material, fatty materials increase the malleability and softness of waxes, pigments and dyes color the material, and starch is used as an extender. Beeswax is a natural wax found in the honeycomb of honeybees, which make it in the hive. It is also known as Cera alba and Cera flava (Columbus Foods 2002). It is a yellow, brown, or white (bleached) solid. The color of beeswax changes with age; for example, virgin wax is white but darkens rapidly as it ages, often becoming almost black. It has a faint honey odor. It consists largely of myricyl palmitate, cerotic acid, esters, and high-carbon paraffins. Beeswax is a lipid by nature. It contains saturated hydrocarbons, acids or hydroxyacids, alcohols, pigments mostly from pollen and propolis, and minute traces of brood (Leclercq 2006). Beeswax has a very stable chemical make-up. Beeswax was the earliest waxy material exploited by humans (Regert et al. 2005). However, many other natural substances have been used since: Chinese insect wax, shellac wax, spermaceti, and wool wax, all of animal origin; and carnauba, candelilla, and Japan waxes, all secreted by various plants, as well as fossil materials (Regert et al. 2005). Both beeswax and paraffin wax are used in candles, but paraffin wax has been linked to carcinogen emissions commonly attributed to the presence of such chemicals as acrolein, formaldehyde,
acetaldehyde, dibutyl phthalate, diethyl phthalate, bis(2-ethylhexyl) phthalate, didecyl phthalate, toluene, styrene, benzene, ethyl benzene, naphthalene, benzaldehyde, ethanol, 2-butanone (methyl ethyl ketone), and acetone. However, few realize that other chemicals, whether from catalysts or additives, pose greater health hazards than the chemicals commonly present in crude oil. It is commonly said that beeswax contains numerous unknown components, most of which do not fall under the category of well-known or well-characterized chemicals (Tulloch 1970). Even though numerous characterization tools have surfaced in recent years, the proper identification of these natural chemicals continues to elude scientists (Zatzman et al. 2009). Craig et al. (1967) studied paraffin waxes, beeswax, and carnauba waxes, determining the modulus of elasticity, compressive strength, and proportional limit for those waxes. Mancktelow (1989) used waxes as an analogue material for rocks in order to study the deformations undergone by geological structures, presenting stress-strain relationships based on experimental data. The stress-strain curve for paraffin wax in the solid state has a clear elastic range, a rounded yield segment, and a stress-flow segment that, within a specific temperature range and confining pressure, approximates steady state. The results focused on the stress-flow deformation regime, for which it was found that the stress-strain relationship for paraffin wax in the solid state is accurately described by a power law. However, the results are applicable only over small temperature ranges. Kotsiomiti and McCabe (1997) measured the mechanical properties of 26 blends of paraffin wax, beeswax, and inorganic filler for dental applications: plastic-flow stress, linear thermal expansion, elastic modulus, and flexural strength. Plastic-flow tests were conducted in accordance with the corresponding ISO specification (ISO Standard 1561 1975). The flow test measurements were conducted by calculating the percent height decrease of cylindrical specimens, 10 mm in diameter and 6 mm in height, kept at the testing temperature for 10 minutes under a load of 2 kg. The flow stress of paraffin and beeswax binary mixtures did not vary with the addition of beeswax. The addition of filler particles to beeswax, even in small amounts, was found to dramatically reduce the flow of the beeswax, an effect termed hardening. It was observed that the degree of purity and the constitution of the waxes drastically affected the material's mechanical properties (Kotsiomiti and McCabe 1997). Morgan et al. (2002)
studied the mechanical properties of beeswax and measured these properties as a function of temperature, using a variety of techniques and comparing them with one another. In that study, the coefficient of friction of beeswax was measured and compared with that of plasticine and Nylon 6-6. They found that the frictional behavior of beeswax departs from Amontons' laws, behaving instead as a classic soft elastic polymer.
9.1.2 Synthetic Plastic and Natural Plastic
The second set of products is synthetic plastic and natural plastic. With few exceptions, synthetic plastics are produced from crude oil, but more conventional materials from plants, such as starch, cellulose, or latex from rubber trees, have also been used for different purposes. In the past century, the chemical industry introduced ways to modify plant-based materials, and products like cellulose acetate or cellulose nitrate were invented. The development of genetic engineering technology for several crops has opened the way for the genetic modification of traditional plant products, e.g., the development of modified starch. Furthermore, interesting proteins of animal origin and novel polymers can now be synthesized in transgenic plants (Jürgen and Udo 2005; Moire et al. 2003). In this synthesis of plastic, the crude oil source has routinely been touted as less environmentally friendly than other sources. In particular, genetically modified carbohydrate sources are considered environment-friendly, whereas crude oil, which has been processed through natural processes, is considered toxic to the environment. As evidence, a final product such as non-biodegradable plastic is presented. It is true that final synthetic products are toxic to the environment, but this is not because of the source. With regard to petroleum-based materials, crude oil itself poses no threat; in fact, all types of crude oil are readily biodegradable (Livingston and Islam 2000). However, the breaking down of crude oil and the synthesis of monomers through a series of toxic chemical reactions makes synthetic plastic non-degradable, a euphemism for perpetual toxicity. This is similar to the global warming debate around CO2. Although environmentalists tend to target CO2 as the measure of pollution, CO2 is the only emission of a combustion engine that is actually good for the environment, as it contributes to the making of carbohydrates through photosynthesis in plants, while the totality of the toxic effects of additional elements (considered
negligible) remains largely unnoticed. Synthetic plastics are considered virtually non-degradable; they simply become dissipated in the environment. The only known change in their chemical composition comes through oxidation, a phenomenon that can take place at any temperature. The products of oxidation (low-temperature oxidation at room temperature and below; incineration at high temperature) are invariably toxic because the process continuously emits dioxins. Today's synthetic plastics produce some 400 toxic chemicals, with 80 known carcinogens, when exposed to air at any temperature. Yet the manufacturing of synthetic plastic continues at a record pace of about four million tons every day from crude oil alone. Natural polymers have existed from the beginning of time, and prehistoric humans exploited the applications of natural polymers. As technology developed, the properties of such materials were improved by techniques such as purification and modification with other substances. By the turn of the 19th century, scientific developments in fields such as physics and chemistry, followed by industrial demands for materials with specific properties that could not simply be found in nature, provided the initial impetus behind plastic production (Website 5). The first man-made plastic was unveiled in 1862 at the Great International Exhibition in London. This material was produced at a lower price than rubber, had all of rubber's characteristics, and was an organic material derived from cellulose by-products. At the end of the 19th century, when billiards had become a very popular game, finding a new material to replace ivory became a goal; billiards had become so popular that thousands of elephants were being killed for their valuable ivory (Website 8). An American, John Wesley Hyatt, discovered the solution in 1866, and the solution was celluloid. Hyatt, by spilling a bottle of collodion, discovered that the material congealed into a tough, flexible film (Website 9). He then substituted collodion for ivory. Unfortunately, due to its brittleness, the billiard balls would shatter once they hit each other. The addition of camphor (a substance from the laurel tree) was the solution to this challenge. Camphor made celluloid the first thermoplastic: molded with heat and pressure, this new material retained its shape even after the heat and pressure were removed (Website 8). In 1909, Leo Baekeland introduced Bakelite, the first synthetic polymer (a phenol-formaldehyde polymer), and in 1911 the first
synthetic fiber, under the name of Rayon, was developed as a replacement for silk (Katz 1981). Baekeland had developed an apparatus that enabled him to control the reaction of volatile chemicals at different temperatures and pressures. Using this apparatus, Baekeland developed Bakelite resin, which hardened and took the shape of its container. This new material did not react, chemically or physically, with any available acid or solvent, meaning that after it had completely set, it would never change. This one benefit made it stand out from existing plastics. While celluloid-based substances could be melted down several times and reformed, Bakelite was the first thermoset plastic that would retain its shape and form under any circumstances (Website 10). By adding Bakelite to almost any material, such as softwood, the durability and effectiveness of the material could be instantly increased (Website 10). Bakelite also had domestic applications: it is electrically resistant, chemically stable, heat-resistant, and does not crack, fade, or discolor from exposure to sunlight, dampness, or sea salt (Website 10).
9.2 Physical Properties of Beeswax and Paraffin Wax
9.2.1 Paraffin Wax

Paraffin wax is a tasteless and odorless, white, translucent solid. The source of paraffin wax is crude oil, which is derived from organic materials. Paraffin wax is produced by refining and dewaxing light lubricating oil stocks. It consists of a mixture of solid aliphatic hydrocarbons of high molecular weight, such as C36H74, with the general molecular formula CnH2n+2. Paraffin wax can be defined as a fraction of petroleum dominated by n-alkanes that are solid at ambient temperature (Chouparova and Philp 1998). It contains n-alkanes above C8, with smaller amounts of isoalkanes, cycloalkanes, and aromatics. Paraffin waxes are chemically stable and have a negligible degree of subcooling during nucleation. There is no phase separation, and the phase change process results in only a small volume change (He et al. 2004). Paraffin waxes are commonly classified in the petroleum
industry literature as paraffin, intermediate, and microcrystalline types (Jowett 1984; Speight 1991). Table 9.1 shows paraffin wax properties, including density, melting point, flash point, and autoignition temperature. The density varies between 0.88 and 0.94 g/cm3 (Lewis 2002; Krupa and Luyt 2001). The melting point ranges from 47 to 65°C (Lewis 2002). The flash point is 390°F (198°C), and the autoignition temperature is reached at 473°F (245°C) (Lewis 2002). Paraffin wax has a molar mass of 785 g/mol and a C/O ratio of 18.8/1 (Krupa and Luyt 2001). Its common properties are water repellency, smooth texture, low toxicity, and freedom from objectionable odor and color (Speight 1991). Paraffin waxes contain carcinogens because they are processed using toxic materials; the threshold limit value for paraffin wax is 2 mg/m3 (Lewis 2002). Paraffin wax is soluble in benzene, ligroin, warm alcohol, chloroform, turpentine, carbon disulfide, and olive oil. It is insoluble in water and acids. Paraffin wax is combustible and has good dielectric properties. Paraffin wax grades are yellow crude scale, white scale, and refined wax. Paraffin waxes are also graded by melting point and color; the higher-melting grades are more expensive. Table 9.2 shows the classification of paraffin waxes and their properties.
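As an aside, the general formula CnH2n+2 together with the quoted molar mass pins down the average chain length of the wax; the short sketch below is an illustration derived from the figures above, not part of the source analysis.

```python
# Average carbon number of a paraffin wax of general formula CnH(2n+2),
# estimated from the quoted molar mass of 785 g/mol.
M_C, M_H = 12.011, 1.008   # atomic masses, g/mol

# M = 12.011*n + 1.008*(2n + 2)  =>  n = (M - 2*M_H) / (M_C + 2*M_H)
molar_mass = 785.0
n = (molar_mass - 2 * M_H) / (M_C + 2 * M_H)
print(round(n, 1))  # ~55.8, i.e., an average composition of roughly C56H114
```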
9.2.2 Beeswax
Table 9.1 shows the properties of beeswax, including density and melting point. Its density is 0.95 g/cm3 (Lewis 2002; Leclercq 2006), and its melting point ranges from 62 to 65°C (Lewis 2002; Columbus Foods 2002).

Table 9.1 Wax properties.

Wax type | Density (g/cm3) | Melting point (°C) | Flash point, °F (°C) | Autoignition temp, °F (°C)
Paraffin wax | 0.88-0.94 | 47-65 | 390 (198) | 473 (245)
Beeswax | 0.95 | 62-65 | - | -

Source: Lewis 2002; Krupa and Luyt 2001
Table 9.2 Classification of paraffin waxes.

Properties | Paraffin wax | Amorphous wax
N-alkanes dominant range | C…-C… | >C…
Amount of other HC (iso-, cycloalkanes, etc.) | Lower | Higher
Melting point range | 40-60°C | >60-90°C
Adhesion | Lower | Higher
Source fractions | Light distillate | Heavy distillate, residual oil, pipeline and tank wax deposits

Source: Chouparova and Philp 1998
Beeswax is completely insoluble in water due to its resistance to hydrolysis and natural oxidation. However, it is soluble in alcohol, chloroform, ether, and oils, and it is combustible. The properties of beeswax remain unspoiled by time; apart from the larvae of the wax moth, no animal has the digestive acids and juices to break it down (Leclercq 2006). It is solid in appearance at normal temperatures. It becomes brittle when the temperature drops below 18°C, and it quickly becomes soft and pliable at around 35° to 40°C. Beeswax grades are technical, crude, refined, NF (National Formulary grade), FCC (Food Chemical Codex), and white USP (United States Pharmacopeia). It is used in the manufacture of furniture and floor waxes, shoe polishes, leather dressings, anatomical specimens, artificial fruit, textile sizes and finishes, church candles, cosmetic creams, lipsticks, and adhesive compositions. A wide variety of cosmetics use beeswax as an emulsifier, emollient, and moisturizer, and it is often used in skincare products as a thickening agent. After processing, beeswax remains a biologically active product retaining anti-bacterial properties. It also contains vitamin A, which is essential for human cell development. Throughout time, people have used it as an antiseptic and for healing injuries. Beeswax has many other industrial uses, too.
Table 9.3 Beeswax and paraffin wax density values for different samples.

Sample | Beeswax: Mass (g), Volume (cc), Density (g/cc) | Paraffin wax: Mass (g), Volume (cc), Density (g/cc)
1 | 18.2, 20.0, 0.91 | 15.4, 20.0, 0.77
2 | 14.8, 17.0, 0.870588 | 15.8, 20.0, 0.79
3 | 15.0, 18.0, 0.833333 | 15.7, 20.0, 0.785
4 | 15.0, 17.0, 0.882353 | 16.3, 20.0, 0.815
5 | 15.3, 17.5, 0.874286 | 15.1, 20.0, 0.755
6 | 19.6, 21.0, 0.933333 | 15.6, 20.0, 0.78
7 | 15.2, 10.0, 0.80 | 15.8, 20.0, 0.79
8 | 18.2, 20.0, 0.91 | 15.9, 20.0, 0.795
9 | 14.1, 17.5, 0.805714 | 15.8, 20.0, 0.79
10 | 14.6, 20.0, 0.73 | 15.7, 20.0, 0.785
Average | 16.0, 17.8, 0.854961 | 15.71, 20.0, 0.7855

Source: Hossain 2008
Table 9.3 shows the measured density values of beeswax and refined paraffin wax for different samples. On physical properties alone, paraffin wax would appear more attractive: it has a lower density and a more transparent flame than beeswax, along with a cleaner appearance. In general, material refinement aims at making materials appear more appealing, and in that process toxicity is increased through the addition of inherently toxic additives or catalysts.
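Since each density entry in Table 9.3 is simply mass divided by volume, the tabulated averages can be recomputed directly; a minimal Python sketch using the paraffin wax column:

```python
# Density is mass over volume; recompute the paraffin wax column of Table 9.3.
masses_g = [15.4, 15.8, 15.7, 16.3, 15.1, 15.6, 15.8, 15.9, 15.8, 15.7]
volume_cc = 20.0  # the same for every paraffin sample

densities = [m / volume_cc for m in masses_g]
print(round(sum(densities) / len(densities), 4))  # ~0.7855 g/cc, as tabulated
```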
9.3 Microstructures of Beeswax and Paraffin Wax
For the physical and chemical characterizations of solid materials, SEM is one of the best and most widely used techniques (Vassilev and Vassileva 2005). SEM, using a focused electron beam to scan the surface of a sample, generates a variety of signals. The three most common modes of operation in SEM analysis are Back-Scattered
Electron Imaging (BSE), Secondary Electron Imaging (SEI), and EDS (Postek et al. 1980). In this study, EDS coupled with SEM was used to characterize paraffin wax and beeswax samples. The elemental analysis was performed in spot mode, in which the beam is localized on a single area manually chosen within the field of view. The EDS detector was capable of detecting elements with an atomic number equal to or greater than six. The intensity of the peaks in the EDS is not a quantitative measure of elemental concentration, although relative amounts can be inferred from relative peak heights (Kutchko and Kim 2006). In order to examine the internal structure and composition of paraffin wax and beeswax, each sample was characterized by randomly selecting three fields of view and examining them.
Table 9.4 SEM experimental test results for paraffin wax.

Particulars | Spectrum 1 (C, O, Total) | Spectrum 2 (C, O, Total)
App. concentration | 12.99, 0.13, - | 10.53, 0.00, -
Intensity corrn. | 0.2877, 0.0667, - | 0.3058, 0.0635, -
Weight (%) | 45.14, 1.95, 47.09 | 34.42, 0.00, 34.42
Weight (%), sigma | 0.96, 0.68, - | 0.83, 0.00, -
Atomic (%) | 96.86, 3.14, - | 100.00, 0.00, -
Table 9.5 SEM experimental test results for beeswax.

Particulars | Spectrum 1 (C, O, Total) | Spectrum 2 (C, O, Total) | Spectrum 3 (C, O, Total)
App. concentration | 17.62, 0.67, - | 12.30, 0.44, - | 11.14, 0.40, -
Intensity corrn. | 0.2584, 0.0736, - | 0.2599, 0.0732, - | 0.2601, 0.0731, -
Weight (%) | 68.17, 9.10, 77.28 | 47.34, 6.07, 53.40 | 42.82, 5.45, 48.27
Weight (%), sigma | 1.23, 0.97, - | 1.04, 0.86, - | 0.98, 0.80, -
Atomic (%) | 90.89, 9.11, - | 91.22, 8.78, - | 91.28, 8.72, -
The SEM data (see Tables 9.4 and 9.5) clearly indicate that the main constituents of paraffin wax and beeswax are carbon and oxygen, as shown in Figures 9.1 to 9.5. For paraffin wax, the apparent concentration of carbon varies from 10.53 to 12.99 (on a weight basis, 34.42% to 45.14%), and oxygen varies from 0 to 0.13 (0% to 1.95% by weight). For the beeswax samples, the concentration of carbon varies from 11.14 to 17.62 (42.82% to 68.17% by weight), and oxygen varies from 0.40 to 0.67 (5.45% to 9.10% by weight). The EDS microanalysis of all spectra in the paraffin wax and beeswax samples confirms the presence of carbon and oxygen, as illustrated in Figures 9.1 to 9.5. The other peaks appearing in those figures are likely due to the sample coating process, which indicates the presence of gold, palladium, or both.
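The atomic percentages in Tables 9.4 and 9.5 follow directly from the weight percentages and the atomic masses of carbon and oxygen; the following sketch (not part of the original analysis) reproduces the conversion for one spectrum.

```python
# Convert EDS weight percent to atomic percent for a two-element (C, O) system.
def atomic_percent(wt_c: float, wt_o: float) -> tuple:
    m_c, m_o = 12.011, 15.999          # atomic masses of C and O, g/mol
    mol_c, mol_o = wt_c / m_c, wt_o / m_o
    total = mol_c + mol_o
    return 100 * mol_c / total, 100 * mol_o / total

# Spectrum 1 of the paraffin wax sample (Table 9.4): 45.14 wt% C, 1.95 wt% O
print(atomic_percent(45.14, 1.95))     # ~ (96.9, 3.1); the table lists 96.86/3.14
```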
Figure 9.1 Spectrum analysis for paraffin wax for Run 1 (Hossain 2008).
Figure 9.2 Spectrum analysis for paraffin wax for Run 2 (Hossain 2008).
Figure 9.3 Spectrum for beeswax for Run 1 (Hossain 2008).
Figure 9.4 Spectrum for beeswax for Run 2 (Hossain 2008).
Figure 9.5 Spectrum for beeswax for Run 3 (Hossain 2008).
9.4 Structural Analysis of Paraffin Wax and Beeswax

Sample handling, coating, and preparation for SEM cause sample alteration, which modifies the sample's composition and morphological structure; the quality of the micrographs obtained is affected as well. Figures 9.6 to 9.8 show the SEM micrographs for paraffin wax at three magnifications of 250, 1050, and 2000, respectively. Some micrographs are affected by charging, which alters the brightness and contrast levels. For example, the bright spots in Figure 9.6 exhibit the charging effect due to the presence of electrons that did not penetrate the wax. Figures 9.9 to 9.11 illustrate the SEM micrographs for beeswax at three magnifications of 250, 1100, and 2000, respectively. In these figures, the bright spots are more dominant than in the case of paraffin wax, which implies that beeswax is more resistant to electron bombardment in the SEM than paraffin wax is. This indicates that beeswax has a lower electrical conductivity than paraffin wax. Regarding the wax morphology, Figures 9.6 to 9.8 expose the lamellar structure of paraffin wax; thus, the corresponding sample consists of a blend of polymers.
Figure 9.6 Micrograph of paraffin wax sample, showing two distinctive shades, one darker with long chains and smaller oblongate shape, and lighter pattern, which is background. Condition: Vacc = 20kV, Mag = x250, WD = 12mm.
Figure 9.7 Micrograph of paraffin wax sample, in detail, illustrating the long and short chains, as well as individual oblongated patterns that are more visible. The background is shown in lighter color. Condition: Vacc = 20 kV, Mag = x1.05k, WD = 11.5mm.
Figure 9.8 Paraffin wax sample with 2000 magnification indicating that long chains consist of oblongated patterns joining together. Condition: Vacc = 20 kV, Mag = x2.00k, WD = 11.5mm.
A miscible blend of amorphous and crystalline polymers exhibits a single phase in the melt and a tidy crystalline phase with a mixed amorphous region in the solid. Because of chain folding during crystallization, crystal lamellae are formed; their rapid growth typically leads to the formation of spherulites, which are ball-shaped, spherical masses of radiating crystal fibers. When a miscible blend undergoes crystallization, the non-crystalline impurity is excluded from the crystalline area. Paraffin wax is a solid crystalline mixture of solid hydrocarbons of high molecular weight, ranging from C20 to C30 and higher, e.g., C36H74. It is derived from the portion of crude petroleum commonly designated as paraffin distillate, from shale distillate, or from hydrocarbon synthesis, by low-temperature solidification and expression or by solvent extraction. It is distinguished by its solid state at ambient temperatures and its relatively slight deformation even under considerable pressure. Paraffin-wax crystals are long and narrow and form in plates; in the fully refined grades, they are dry, hard, and glossy. Paraffin wax is characterized by homogeneous constitution and distribution owing to its refining process; thus the separation between the polymers is complete in the paraffin wax samples, as indicated in Figures 9.6 to 9.8. Honeybees secrete beeswax in a liquid state at ambient temperature, and it then crystallizes at that same temperature. It consists of various components and is characterized by long hydrocarbon chains, being made largely of a blend of myricyl palmitate, cerotic acid and esters, and some high-carbon paraffins. This reveals the heterogeneous constitution and distribution of beeswax, due to its polymer diversity. According to Figure 9.9, beeswax consists of superposed plates. At higher magnifications, the beeswax SEM micrographs (see Figures 9.10 and 9.11) display a pasty, colloidal, and cloudy structure due to the amorphous and heterogeneous nature of the polymers composing the beeswax. Figures 9.9 to 9.11 show that the multiple polymers in the beeswax did not separate; these polymers remain solidly interconnected, which explains the beeswax's toughness and its resistance to electrical conduction. Beeswax is natural, and it is younger and fresher than paraffin wax because the former has a shorter pathway period than the latter. Beeswax does not undergo refining stages, whereas paraffin wax is extracted from crude petroleum after refining and purification.
Figure 9.9 Micrograph of beeswax sample showing topographic variations on the surface. Condition: Vacc = 10 kV, Mag = x250, WD = 12.2mm.
Figure 9.10 Beeswax sample with 1100 magnification. Closer observation of the beeswax surface with more details showing irregular ridges and valleys. Condition: Vacc = 20 kV, Mag = x1.10k, WD = 12.4mm.
Figure 9.11 Beeswax sample with 2000 magnification. Detail of the beeswax surface with well visible ridges and valleys, indicating that the sample is not very hard, based on visual observation. Condition: Vacc = 10 kV, Mag = x2.00 k, WD = 11.9mm.
9.5 Response to Uniaxial Compression

Uniaxial compression tests performed on beeswax and paraffin wax samples reveal inherent differences between these two materials. Figures 9.12 and 9.13 show the paraffin wax and beeswax samples attached to a compressive strength test machine along with a strain meter (a linear traveling dial gauge); the figures show the setup before the start of the experiment. Figures 9.14 and 9.15 display the paraffin wax and beeswax samples after the compressive strength test, showing the rupture of both wax samples; there is a shear failure due to the compressive load. Figure 9.16 shows the stress-strain curve for paraffin wax at room temperature. Initially, strain increases linearly with stress, meaning that strain increases steadily as load is applied by the test machine. This trend continues up to 295 lbs at 10.5 minutes. However, when the strain is in the range of 1.4 to 1.7, there is no change of stress, which stands at 658.4 kPa under a load of 300 lbs; over this range, the linear elongation grows from 2.34 mm to 2.72 mm. At 12.5 minutes, at an elongation of 2.72 mm and a load of 300 lbs, failure of the paraffin wax sample occurs; this is the yield strength point of the sample at room temperature.
Figure 9.12 Paraffin wax sample for compressive strength test.
Figure 9.13 Beeswax sample for compressive strength test.
Figure 9.14 Paraffin wax sample after compressive strength test.
Figure 9.15 Beeswax sample after compressive strength test.
Figure 9.16 Stress variation with strain for paraffin wax.
Table 9.6 shows the average density, compressive strength, and modulus of elasticity of paraffin wax. The shape of the curve and the failure pattern of paraffin wax resemble those of steel; the synthetic wax thus behaves like a processed natural-resource material. Figure 9.16 presents the stress-strain curve of paraffin wax up to the first maximum stress value. The trend of the synthetic wax curve, which is ostensibly linear, can be described mathematically by an empirical relation. The trend line for stress (σp) versus strain (εp) is presented in Figure 9.17; it is a straight line representing the linear behavior of the material.
Table 9.6 Wax mechanical properties.

Wax type | Density (g/cc) | Compressive strength (kPa) | Modulus of elasticity (MPa)
Paraffin wax | 0.7855 | 658.4 | 55.7
Beeswax | 0.854961 | 526.7 | 39.0
Figure 9.17 Stress variation with strain for the empirical relation based on paraffin wax (R² = 0.991).
The empirical relationship between stress and strain has been derived by best-fit regression analysis. The equation can be presented as:

σp = 509.8 εp - 2.812 (Equation 9.1)
Figure 9.18 presents the stress-strain curve for beeswax at room temperature. Initially, as elongation starts, beeswax takes more load than paraffin wax (60 lbs, whereas paraffin wax took 10 lbs for the same elongation). There is therefore a jump in stress of 131.68 kPa for a slight increase of strain, 0.047 (0.06 mm of elongation). The increasing trend of the stress-strain curve is non-linear, fluctuating near the maximum strength (the yield strength value). For the strain range of 1.68 to 2.18, there is no change in stress, which stands at 526.72 kPa; over this range of strain, the linear elongation continues from 2.89 mm to 3.54 mm at a load of 240 lbs. Beyond that strain value (2.18), as the stress increases to 537.69 kPa (245 lbs), the strain decreases to 1.48. After this, at the same stress value, strain starts to increase again up to 1.78, at which point the failure of the beeswax sample occurs.
Figure 9.18 Stress variation with strain for beeswax indicating true load.
The normal decreasing trend of the curve continued up to a strain of 2.65, where the stress is 471.85 kPa (215 lbs). After this strain point, there is a decrease of strain down to 1.16, and then it increases again at the same stress value. Table 9.6 shows the average density, compressive strength, and modulus of elasticity of beeswax. Figure 9.19 presents the stress-strain curve of beeswax up to the first maximum stress value, as reported in Figure 9.18. Here, the non-linear pattern of the curve is more visible and self-explanatory. The extremely non-linear and chaotic behavior of the stress-strain relationship of beeswax is quite unpredictable; it has none of the regular shape and pattern of the conventional stress-strain curve. The behavior of the curve is similar to that of natural materials, which is why it can be used as a rock analogue in the laboratory to simulate rock at field scale. The non-linear trend of the natural wax, with its complex features, can be described mathematically by an empirical relationship. The trend line for stress (σb) versus strain (εb) is shown in Figure 9.19. The empirical relationship between these two parameters has been derived by best-fit regression analysis and is shown in Equation 9.2:

σb = -120.1 εb² + 456.4 εb + 92.33 (Equation 9.2)
The above discussion indicates that the natural wax (beeswax) exhibits complex non-linear behavior, making it a true representation of a reservoir rock sample.
Figure 9.19 Stress variation with strain for the empirical relation based on beeswax (R² = 0.978).
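The two empirical trend lines above can be reproduced by least-squares regression. The sketch below is illustrative only: the strain array is a hypothetical stand-in for digitized curve data (not the measured values behind Figures 9.16 and 9.18), and the stress values are generated from the published equations themselves, so the fit simply recovers the quoted coefficients.

```python
import numpy as np

# Hypothetical strain samples, in percent.
strain = np.array([0.25, 0.75, 1.25, 1.75, 2.25])

stress_paraffin = 509.8 * strain - 2.812                      # Equation 9.1, kPa
stress_beeswax = -120.1 * strain**2 + 456.4 * strain + 92.33  # Equation 9.2, kPa

# Best-fit regression, as performed in the text: linear for paraffin wax,
# quadratic for beeswax.
print(np.polyfit(strain, stress_paraffin, 1))  # ~ [509.8, -2.812]
print(np.polyfit(strain, stress_beeswax, 2))   # ~ [-120.1, 456.4, 92.33]
```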
9.6 Ultrasonic Tests on Beeswax and Paraffin Wax
Ultrasonic nondestructive testing (NDT) introduces high-frequency sound waves into a test object in order to obtain information about the object without altering or damaging it. Sound generated above the human hearing range (typically 20 kHz) is called ultrasound; however, the frequency range normally employed in ultrasonic testing and thickness gauging is 100 kHz to 50 MHz. Although ultrasound behaves in a manner similar to audible sound, it has a much shorter wavelength, which means it can be reflected off very small surfaces, such as defects inside materials. Two basic quantities are measured in ultrasonic testing: the time of flight (the amount of time the sound takes to travel through the sample) and the amplitude of the received signal. A series of tests was performed in order to determine the extent of damage caused to the two types of waxes when exposed to a water jet (Hossain 2008). Even though this series of experiments was carried out in order to demonstrate the suitability of using beeswax as a model for rock and paraffin wax as a model for steel, the results are useful in showing the difference between natural and artificial materials. Figures 9.21 to 9.25 display the water jet-wax interaction.
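For reference, pulse-echo thickness gauging reduces to a one-line formula: the pulse crosses the sample twice, so thickness equals velocity times time of flight divided by two. A minimal sketch follows; the sound velocity used is an assumed, order-of-magnitude figure for wax, not a value measured in this study.

```python
# Pulse-echo ultrasonic thickness gauging: the pulse crosses the sample twice,
# so thickness = (sound velocity * time of flight) / 2.
def thickness_mm(time_of_flight_us: float, velocity_m_s: float) -> float:
    return velocity_m_s * (time_of_flight_us * 1e-6) / 2.0 * 1000.0

# Assumed longitudinal velocity of ~2000 m/s (order of magnitude for waxes)
# and a 10-microsecond round trip:
print(f"{thickness_mm(10.0, 2000.0):.1f} mm")  # -> 10.0 mm
```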
Figure 9.20 Ultrasonic water immersion C-scan imaging system.
Figure 9.21 Ultrasonic C-scan of paraffin wax sample.
Figure 9.22 Ultrasonic C-scan depth view of paraffin wax sample 1 for predicting water jet drilling depth.
Figure 9.23 Ultrasonic C-scan top view of paraffin wax sample 1 for predicting water jet drilling hole diameter.
Figure 9.24 Ultrasonic C-scan depth view of paraffin wax sample 2 for predicting water jet drilling depth.
Figure 9.25 Ultrasonic C-scan top view of paraffin wax sample 2 for predicting water jet drilling hole diameter.
They illustrate the hole drilled by the water jet. They also show the effect of the water jet at various levels of the wax layers. As the water jet goes deeper, it becomes more difficult to drill, which explains the lower effect on the deeper wax layers.
9.7 Natural Plastic and Synthetic Plastic

Plastics are polymers, and we are said to be living in the polymer age. With just over 100 years of synthetic plastic production, plastic today is ubiquitous. Plastics, fibers, elastomers, adhesives, coatings, rubber, and nylon are all polymers. They are common in modern life, and the world is unimaginable without them (Malcolm 1998). Polymers have been used for thousands of years; natural rubber, silk and other proteins, cellulose (found in wood and cotton), and starch are a few examples of the most useful natural materials. Polymers are made of many units connected together like a chain: a polymer is a molecule formed from many smaller molecules, called monomers, which are linked together to make a large molecule or macromolecule. Polymers are built mainly from carbon and hydrogen; other elements that can be involved in polymer structure are oxygen, nitrogen, phosphorus, sulfur, and silicon. Although the basic elements of polymers are carbon and hydrogen, polymers can be of various types (Website 10). Generally, polymers are divided into two groups: thermoplastics and thermosets. Thermoplastic polymers are re-formable, which is not true of thermoset polymers. Since processing and recycling are possible for thermoplastics but not for thermosets, the differences between the two groups are considerable (Website 10). Every polymer has distinct characteristics and can be produced in various ways. One common trait is strength and resistance to chemical attack; chemical resistance is apparent, for example, in all the cleaning liquids that are packed in plastics. Another common trait is resistance to heat and electricity: thermal resistance is evident everywhere in a kitchen, from cookware handles to the foam cores of refrigerators and freezers, and electrical resistance is obvious in appliances with wires and cords. Thermoplastic polymers are produced by the polymerization process, and amorphous organization
can be produced by controlling the polymerization process (Website 8). The polymer chains can be produced in a crystalline or non-crystalline form. The molecular structure of a crystalline material is considerably harder than that of a non-crystalline material due to its highly ordered arrangement; however, adding modifiers and fillers can improve the hardness of non-crystalline materials (Website 11). A crystalline material, by definition, has its atoms, ions, or molecules in a distinct arrangement (Website 12). Amorphous arrangements in crystalline materials can be produced as a result of quenching, and the degree of crystallinity is under the control of processing (Website 13). Polymers derived from these kinds of processes, such as polymerization, quenching, and processing that result in amorphous arrangements and structures, are not recognized by nature and natural cycles; as a result, they become non-degradable materials. Scientists and engineers routinely shape the final molecular structure of polymers by manipulating their production process (Website 10). Manufacturers and processors introduce various fillers, reinforcements, and additives into the base polymers, expanding product possibilities while ignoring the wide consequences of these materials in nature (Website 10). It is said that "plastics will deteriorate with time but never decompose completely" (Website 7). The origin of plastics is crude oil, which is totally natural (both in source and process), but the result of the refining process is completely anti-nature and non-degradable, exactly the opposite of natural polymers. Although the basic structure of natural and synthetic polymers is the same, the differences are apparent in the details. Most synthetic polymers have a twin in nature; that is, for each kind of synthetic polymer produced, there is a similar formulation in nature, but the one that naturally exists is harmless to living things while the other is harmful to nature.
9.8 Plastic Pathway from Crude Oil

Figure 9.26 shows the entire pathway from crude oil to all plastic categories and derivatives, through to market.
Figure 9.26 Pathway followed during manufacturing of plastics from crude oil.
9.9 Theoretical Comparison Between Nylon and Silk

Nylon is a linear polymer that contains amide bonds (CO-NH) (Tetsuya et al. 1998). Natural polymers, such as proteins, also have amide groups in their molecular structure. However, nylon, with the exception of nylon-1, is resistant to proteolytic enzymes, whereas protein is easily hydrolyzed by these enzymes (Tetsuya et al. 1998). Nylon has the same basic structure as silk but different
behavior in the degradation process: one is degradable, but the other is non-degradable. Silk and Nylon 6,6 are considered twin polymers because both have a similar structure, yet these twins behave differently in the decomposition process. Nylon was the result of research directed by Wallace Hume Carothers at du Pont, and it gained rapid attention for use in stockings and in making parachutes (Katz 1981). To find the problem, the whole pathway of producing nylon (crude oil → naphtha → benzene → hexamethylene diamine → adipic acid → Nylon 6,6) needs scrutiny. Nylon 6,6 is the most commercially successful polyamide and has been in wide use for a long time. One of the most popular research topics is the relationship between the microstructure of nylon and that of its twin polymer, silk, because this knowledge is very important for controlling physical and mechanical performance (Lu et al. 2004; Ramesh 1994; Keller 1994). As shown in the pathway from crude oil to plastic, nylon 6,6 is synthesized by reacting adipic acid with hexamethylene diamine. Nylons (polyamides) are characterized by amide groups in the chain, which can form hydrogen bonds with each other. H-bonds connect neighboring chain segments and form an extended planar sheet, such that NH groups are able to form strong hydrogen bonds (H-bonds) with the CO groups, giving nylon a crystal structure; the formation of these extended sheets dominates the structure. As shown in Figure 9.27, nylon has a symmetrical structure with the same monomers in the chain, which is common in most synthetic polymers. In the whole process from crude oil to nylon (and to many other oil-based polymers), large molecules (macromolecules) are broken down into small molecules that build the body of the future polymer. This is a linearized process, which is rare in nature.
Figure 9.27 Basic structure of nylon, with two six-carbon segments in each repeat unit (Website 14).
Figure 9.28 Nylon 6,6 synthesis from hexamethylene diamine and adipic acid (Website 14).
Figure 9.29 Amide group in nylon (Website 14).
Figure 9.30 3D image of nylon structure (Website 14).
Since everything in nature is non-linear and consequently sustainable, most synthetic processes disagree with natural processes (Khan et al. 2005). At first look at the silk chemical structure, it seems that nylon and silk have exactly the same structure. However, one is degradable and harmless, and the other is non-degradable and resistant to the decomposition process. So where is the point that makes these two materials different? Comparing the structures (Figure 9.31), nylon has four, five, or six carbons between amide units, whereas nature acts much more economically, using only one carbon between amide groups (Website 13). Nature also substitutes this carbon with several different functional segments and groups (Website 13). In other words, synthetic production uses the same groups of molecules throughout, while nature uses diverse groups to build a polymer. In fact, this difference is apparent not only between silk and nylon but also in most polymers that have been imitated from nature. Nature uses diverse groups of monomers for a polymer like silk and, more importantly, there are different kinds of silk in nature, such as the silks produced by spiders, on which little study has been done.
Figure 9.31 Silk chemical structure, showing hydrogen bonding between adjacent chains.
Figure 9.32 Differences between natural silk and synthetic silk (nylon) (Website 13). The annotation in the original figure contrasts nature's polyamide with the synthetic polyamide Nylon 6,6: in nature, each repeat unit has a specific and different R group, and the nature of the R groups and the order in which they come can give infinitely variable properties, whereas in the synthetic polymer each repeat unit is exactly the same.
The amino acid sequence repeats in spider fibroins, but each of the seven glands' proteins has a unique amino acid composition (Anderson 1970). The compositions of most spider silks are similar to those of the textile silk produced by the silk moth B. mori. The following passage conveys both how nature produces silk and how weak synthetic processes are at imitating it: "B. mori fibroin is known to contain multiple repeats of a hexapeptide and the hexapeptide GAGAGS occurs in blocks of 8-10 repeats, separated by another repeating motif that is more variable. These two large sequence blocks repeat four or five times between a small, approximately 30 residue, section of nonrepetitive sequence called the 'amorphous domain.' Thus, B. mori fibroin contains a preponderance of long crystal forming blocks, which establishes a strong potential for crystal formation, and this probably accounts for the high crystal content of B. mori silk." (Gosline et al. 2002, 3299; Mita et al. 1994) This example illustrates the complexity and diversity of silk in the polymer world compared to synthetic silks (nylons), which are simple and repetitive.
9.10 Theoretical Comparison Between Synthetic Rubber and Latex (Natural Rubber)

Another example is the chemical structure of natural rubber (latex) versus synthetic rubber, and their pathways from crude oil to rubber.
Natural latex can be extracted from the inner bark of many trees. According to Katz (1981), the white sticky sap of plants such as milkweed and dandelions is also latex, and natural rubber is a polymer of isoprene (2-methyl-1,3-butadiene) in the form of folded polymeric chains that are joined in a network structure and have a high degree of flexibility. The pathway of synthetic rubber products is: crude oil → naphtha → butadiene + styrene → synthetic rubber and latex → tires and rubber products. Carbon and hydrogen are the main atoms in the molecular structure of natural rubber. Natural rubber has a flexible molecular chain, an amorphous mass of coiled structures that makes it too soft to be used for most practical purposes; therefore, its properties are changed using special processing techniques. The long, flexible chain structure of natural rubber allows it to regain its original shape after it is compressed or stretched. Tensile load can cause changes as the bonds and chains become elongated, and increasing stress can increase the degree of crystallinity; crystallinity in turn gives rubber greater strength, hardness, and rigidity. Natural rubber is soft and degradable; UV light, oxygen, and heat all break down its structure. In order to make natural rubber strong enough to meet the basic requirements of a useful material, it can be processed to yield better mechanical strength. Processing means changing the internal structure of a material by adding different materials or by applying different treatment methods to improve or change the properties of the original material. These types of processing run against natural pathways, reflecting the fact that all processing systems, including enriching, concentrating, injecting, or deleting molecules and materials, are anti-nature. In 1839, Charles Goodyear discovered the process of converting soft natural rubber to a harder, less flexible rubber, called vulcanization. In this process, sulfur, when combined with the natural compounds of rubber, cross-links the molecular chains at their double bonds in order to restrict molecular movement and increase hardness. The sulfur molecule acts as a bridge between rubber molecules and forms a three-dimensional network with the assistance of other ingredients. This network helps natural rubber overcome its weaknesses for practical applications. Rubber strands contain carbon, hydrogen, and sulfur and are very long.
Figure 9.33 3D image of rubber structure (Website 8).
cross-linking of sulfur atoms gives rubber products their flexible properties. Generally, the higher the sulfur content, the higher the resilience and elasticity (Website 8). Here is the point that causes a huge environmental problem in the long term. The discovery of this fact, and the industry's need to produce a stronger rubber, were the main reasons humans increased the number of sulfur cross-links in the rubber chain, adding more and more sulfur molecules to the rubber structure. Such enrichment is rare in the environment because nature does it more economically, using less sulfur in natural rubber. More importantly, once the enrichment of rubber started, rubber was converted into something unrecognizable by nature, i.e., non-degradable. These cross-links in the rubber chain do not allow rubber to enter natural decay. Another ambiguity in imitating natural polymers becomes apparent by considering isomers in polymers. Almost all polymers in nature consist of various numbers of isomers; by omitting undesired isomers and producing polymers with only the desired ones, we make the product unrecognizable to natural cycles. Also, changing the molecular structure of each isomer completes the mistake of the synthetic polymer production process, converting it into something practical in terms of human needs but totally dangerous and toxic for the environment and, consequently, for future generations. In order to solve the problems that initially arose from imitating nature through chemical synthesis in polymer production,
it is necessary to figure out how nature does it. The main difference between synthetic and natural processes is apparent in the role of enzymes and catalysts. Enzymes are polypeptides and are crucial to life. Living organisms use enzymes to produce and modify polymers. The enzymatic modification of materials is a suitable approach to explaining the differences between naturally built polymers and synthetically built ones. Enzymes are catalysts with specific jobs. In fact, oftentimes each enzyme does only one type of job or produces only one kind of molecule. Therefore, there have to be many different enzymes, formed from different combinations of amino acids joined in unique ways in polypeptides, in order to keep a living organism active (Website 8). Every creature has hundreds or thousands of different types of enzymes. Each enzyme has to be made by other enzymes. This leads to very complicated control mechanisms. However, it is not known how nature manages enzymes' activities and responsibilities. Materials' properties depend upon the internal and external arrangements of atoms and/or the interactions with neighboring atoms. Sometimes the atoms, ions, monomers, or the whole molecular structure in both natural and synthetic polymers are the same, but their interactions with neighboring atoms, ions, or monomers are different. As stated above, different kinds of interactions between molecules cause strong characteristics in synthetic polymers that are rare in natural polymers. Therefore, sometimes the molecular structures of materials do not show any difference, yet the materials have different properties and characteristics. The results of this chapter show that much more study is needed to determine the diverse features of natural polymers.
9.11 Concluding Remarks
This chapter shows that intangible factors (e.g., the internal arrangements of molecules and the way they interact with each other) have a direct impact on the life cycle of a material, with explicit long-term effects on the environment. By reviewing these properties in two sets of natural and artificial materials, it is established that they follow starkly diverging pathways. If one is considered sustainable, the other must be unsustainable, meaning that if natural materials are considered inherently sustainable, all artificial materials are inherently unsustainable.
External features of natural and non-natural processes cannot be used as a measure of sustainability. Because synthetic materials are produced based on external features and a single criterion (commercialization prospects), it is possible that natural materials would appear less appealing than artificial materials. If this fact is not considered, subsequent environmental and economic considerations will be flawed. In this analysis, it is also important to recognize that, if bulk composition were the only criterion considered, a natural material could appear to have a composition similar to that of an artificial material. It is not the bulk composition that determines the sustainability of a product. In addition, it is important to recognize that any chemical component that is a product of an artificial process will render the entire process unsustainable, even if the original source material is of natural origin (e.g., crude oil). As a result, the product would not be sustainable. This unsustainability should be attributed to the chemicals that were derived through an unsustainable process.
10 Comparison of Various Energy Production Schemes
10.1 Introduction
Energy sources are generally classified based on various parameters, such as efficiency, the forms of energy available, and the type of fuel source. It is important to note that none of these factors can be used as a single criterion, because the resulting characterization will be inherently skewed. This chapter provides a scientific basis for comparing various energy solutions and then follows up with a ranking in conformance with the sustainability of each scheme. A new term, "global efficiency," is introduced that represents the overall integrated system efficiency, considering the origin, the pathways, waste utilization, and impacts on the natural environment. An energy system with a very high local efficiency does not necessarily have a high global efficiency. The introduction of global efficiency offers a paradigm shift in the characterization of energy sources because this method takes into account the pathway and the reuse efficiency of the by-products. Considering only the local efficiency of an energy system, it is likely that energy sources are either overvalued or undervalued. Even though there is an argument that the deployment of renewable energy
reduces greenhouse gas (GHG) emissions, it is highly unlikely that a reduction in GHGs is possible without reducing total energy consumption. The only way to reduce greenhouse gas emissions is to increase the global efficiency of the energy systems being employed. Energy sources are also classified based on the various sources from which they are developed, such as hydropower, solar, biomass, or fossil fuels. Moreover, the conversion of energy into usable forms, such as electricity, light, or heat, is an important activity of the energy industry. During this energy conversion from one form to another, the efficiency of the energy system is determined based on the output received from a certain energy input. Energy efficiency is the ratio of the output (energy released from a process) to the input (energy used to run the process). Hence:

η = output / input (10.1)

The efficiency of a particular unit, measured as the ratio of output to input, is also considered the local efficiency. However, conventional local efficiency does not include the efficiency of the use of the by-products of the system or the environmental impacts caused during processing or after the disposal of the system. It is likely that the conventional efficiency calculation does not represent the real efficiency of the system. The economic evaluation of an energy project carried out on local efficiency alone may be either undervalued or overvalued, because several impacts of the system are ignored. Energy efficiency is considered a cost-effective strategy for improving economies without increasing the consumption of energy. This chapter introduces the term "global efficiency," which is calculated not only from the ratio of output to input but also by taking into account the system's impact on the environment and the potential reuse of the by-products over the whole life cycle of the system. The concept of calculating the global efficiency of various types of energy systems is discussed. Global efficiency considers not only the energy sources but also the pathway of the complete conversion process from the source to the end use. If global efficiency is considered, the economic evaluation of energy systems appears quite different from what is observed today.
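To make the distinction concrete, the following minimal sketch (in Python, with assumed helper names; it is an illustration added here, not part of the original analysis) contrasts Equation (10.1) applied to one unit with the global efficiency obtained by chaining units along a pathway:

```python
# Minimal sketch of local vs. global efficiency (illustrative helper names).

def local_efficiency(energy_out: float, energy_in: float) -> float:
    """Eq. (10.1): efficiency = output / input, for a single unit."""
    return energy_out / energy_in

def global_efficiency(unit_efficiencies):
    """Global efficiency of a pathway: the product of the local
    efficiencies of every unit from the source to the end use."""
    eta = 1.0
    for eta_i in unit_efficiencies:
        eta *= eta_i
    return eta

# A unit that looks excellent locally (90%) can sit in a pathway whose
# global efficiency is far lower once every conversion step is counted.
print(local_efficiency(90.0, 100.0))            # 0.9
print(global_efficiency([0.9, 0.5, 0.7, 0.8]))  # 0.252
```

Note that even this chained product still omits by-product reuse and environmental impact, which the chapter folds into global efficiency.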
10.2 Inherent Features of a Comprehensive Criterion
Section 16 of Chapter 4 discusses the inherent features of a comprehensive criterion. The essence of this criterion is that the process be real as a continuous function of time. This can be assured only if:

1. the base or source of a process of matter is true or natural;
2. continuity is maintained (both in terms of logic and in terms of matter, which includes mass and energy); and
3. any break-up of continuity, or exception, is supported by a true criterion or bifurcation point.

The above statement regarding truth is followed by the postulate: only truth is beneficial at all times, with the exception of an infinitesimal object. As an example, one can cite the case of lightning. Lightning is essential for sustaining the ecosystem, yet it can be fatal for a human being at a particular time. The same criterion was used in previous civilizations to distinguish between the real and the artificial. The assertion that the source must be real or natural for a process to be real is attributed to Averroes, the father of secular philosophy in Europe. On the other hand, the continuity requirement is attributed to Aristotle. Khan (2007) introduced a criterion that identifies the end-point by extending time to infinity. This criterion avoids scrutiny of the intangible source of individual action (namely, intention). However, Zatzman and Islam (2007a) pointed out that the end-point at time t = ∞ can be a criterion, but it will not disclose the pathway unless a continuous time function is introduced. Mathematically, the above criterion can be restated by making the continuous time function real or natural. Hossain (2008) introduced this concept in terms of a memory function.
10.3 The Need for a Multidimensional Study
Mousavizadegan et al. (2007) indicated that the ultimate truth can be revealed only with an infinite number of dimensions. Abou-Kassem et al. (2008) argued that, by invoking Einstein's theory of relativity through the expression of any event as a continuous function of time, one forces the solution to include an infinite
dimension. This argument makes it possible to solve problems without extending to an infinite number of dimensions, which would be impractical at this point of human knowledge. The problem is then reduced to being solved with only known factors, irrespective of how little the impact of a variable may be on the outcome of the scientific analysis. Kvitko (2007) discredited Einstein's relativity altogether. However, he did not elaborate on the first premise of the theory. Our contention is that Einstein's relativity theory appears to be spurious if processed through the science of tangibles. So far, there is no evidence that the first premise of the theory of relativity, as Einstein envisioned it, is aphenomenal. It is important to note that the observation of natural phenomena as a continuous function of time, including differential frames-of-reference for component processes, is a matter of documenting and reconstructing the actual pathway and steps of an overall process. Because of its implicit standpoint of the neutral external observer, conventional analysis is not capable of fully sorting out these pathways and their distinctive components. The form in which this standpoint expresses itself is embedded in the conventions that come with the "usual" linearizations, namely, viewing time as the independent variable that varies independently of the processes being observed. Both Eulerian and Lagrangian approaches have the concept of the external observer embedded in them. For the Eulerian approach, the external observer is static, which is a physically impossible and, hence, absurd state anywhere within nature. For the Lagrangian approach, the external observer is in motion but within the same pre-defined pathway (conditions for the independent variable). To an external observer, intermediate changes-of-state at the interface of successive sub-processes are "invisible," much in the same way that the third dimension is invisible at the interfaces of processes observed in two dimensions. (This is why analysis based on comparing output to input "works" so well with the most linearized models.) Within nature, there is no external observer, a state of affairs that renders the processes of tangible science "aphenomenal." Some researchers have indeed recognized the notion of "external" as being aphenomenal. However, rather than discarding this notion, they adapted the same principle, calling it "God's eye view" (He 2005), while using Einstein's relativity (continuous time function) as the "human eye view." We consider this process of scientific investigation aphenomenal.
The following corollary is the core of the argument advanced in this section: just because an equation, or set of equations, describing the transformation of an overall process from input to output, can or may be decomposed into a set of linear superpositions, it does not follow that any or each of these superpositions describes or represents any actual pathway, or portion thereof, unfolding within nature. Consider the following logical train:

Perfect is preferable.
Nature is perfect.
Therefore, anything natural is preferable.

Given that "seeking perfection" is embedded in humanity, the first premise sets the selection criterion for any conscience-driven human action. However, this does not guarantee the phenomenality of the scientific process, as the definition of "perfect" is linked to the notion of the ideal. If the "ideal" is aphenomenal, the meaning of "perfect" is reversed. As for the second premise, the idea that "nature is perfect" is intricately linked with what nature is. The case in point is a Stanford professor's argument (Roughgarden 2005). She argues that if more than 400 species are found to be practicing "part-time homosexuality," it must be natural for humans to engage in similar practices. In fact, this argument can be used to demonstrate that "homosexuality is preferable." What is the problem with this logic? Only one dimension of the problem is considered. If another dimension is used, then it can also be deduced that incestuous relationships are natural and, hence, preferable. When a generalization is made, one must not violate the characteristic features of the individual or group of individuals. Conscience, here, is not to be confused with moral or ethical values that are not inherent to humans, or at least that are subject to indoctrination, learning, or training. Humans are distinct from all other creatures that we know because of the presence of conscience, the ability to see the intangibles (both past and future), analyze the consequences of one's actions, and decide on a course of action. Another example can be given:

Perfect is preferable.
Nature is perfect.
Earthquakes are natural.
Therefore, earthquakes are preferable.
Reverse arguments can be made to curse nature. There are two problems with this argument. First of all, it is not a matter of "preference." Anything that takes place without human intervention cannot be preferred or discarded. It is not a matter of intention; rather, it is a matter of wish, which does not necessitate any follow-up human action. Any natural phenomenon (including disasters and calamities) will take place as part of the grand scheme of natural order or as a necessary component of the total balance. This total balance cannot be observed in finite time or finite space. All that can be observed of such a phenomenon in finite time and space are fragmentary aspects of that balance. The phenomenon may not appear to be balanced at all, or, alternatively, there may occur an equilibrium state, and, because the observation period is finite, the equilibrium state is assumed to be "normal." Secondly, if nature is perfect and dynamic at the same time, nature must be moving toward an increasingly better state with time. This logic then contradicts Lord Kelvin's assertion that nature is moving from an active to a passive state, reaching a state of useless "heat death." This contrasts with the Nobel Prize-winning work (2001) of Eric Cornell and others. As Eric Cornell outlined in his most popular lecture, titled Stone Cold Science: Things Get Weird Around Absolute Zero, Kelvin's concept of nature and how nature functions is starkly opposite to the modern concept. At very cold temperatures, phase changes do occur, and this has nothing to do with losing power or strength as commonly understood by the term "death." This is further corroborated by later discoveries (Ginsberg et al. 2007). Once again, unless the long term is considered over a large scale in space, this transition in universal order or in a laboratory cannot be observed. This is true for floods, lightning, and every natural phenomenon that we observe.
10.4 Assessing the Overall Performance of a Process
Chapter 4 (Section 16) presents the need for having the correct first premise for evaluating the performance of a process. The correct first premise is: natural materials or processes are good for all times. If this criterion is not met, the subsequent criteria for material selection or efficiency evaluation will be spurious. Any conclusion based on these criteria, or on a process involving those criteria, will render an
aphenomenal conclusion. At times, those conclusions might be correct, but they will not have any scientific legitimacy because they cannot be applied to any other application. In the following sections on global efficiency evaluation, comprehensive criteria that involve both the correct first premise and a proper sequence of criteria will be used.
10.5 Global Efficiency of Solar Energy to Electricity Conversion
10.5.1 Photovoltaic Cells
Solar energy is the most abundant energy source available on Earth. Service (2005) wrote that Earth receives 170,000 TW of energy at any moment, one third of which is reflected back to the atmosphere. Earth receives more energy in an hour than humans consume in a year. However, utilizing such a huge energy source is not easy. Even though the cost of power generation from solar energy is decreasing, it is still the most expensive power generation option compared to wind, natural gas, coal, nuclear, and others. In terms of pollutant avoidance, it is argued that, compared to fuel burning, PV can avoid a significant amount of pollutant emissions, such as CO2, NOx, SO2, and particulates. However, in terms of other environmental aspects, the wide-scale deployment of solar photovoltaic technologies has several potential long-term environmental implications (Tsoutsos et al. 2005). Bezdek (1993) argued that, given the current technologies on a standardized energy unit basis, solar energy systems might initially cause more greenhouse gas emissions and environmental degradation than conventional nuclear and fossil energy systems do. He further argued that it is important to recognize the substantial costs, hazardous wastes, and land-use issues associated with solar technologies. Solar cells are manufactured using silica. Even though silicon is considered non-toxic in its elemental form, the formation of silicon dioxide in the environment cannot be avoided, and silicon dioxide is considered a potent respiratory hazard. Figure 10.1 shows a schematic of the components of a complete solar photovoltaic system. The EDD guidelines (2002) reported the environmental and social impacts of PV cells from manufacturing to decommissioning. It was reported that the manufacturing of solar cells uses toxic and
Figure 10.1 Evaluation of global efficiency of solar PV system. [Chain: solar panel (η1) → battery (η2) → compact fluorescent lamp (η3); global efficiency η_G = η1 × η2 × η3. Associated burdens: the silica life cycle is toxic to the environment; batteries contain toxic and heavy materials such as lithium ion, sulfuric acid, etc.; each CFL contains 5 mg of Hg.]
hazardous materials such as phosphine (phosphorus hydride, PH3). The PH3 used during the manufacturing of amorphous silicon cells poses a severe fire hazard through spontaneous chemical reaction, and it poses occupational and public health hazards during manufacturing and operation. The decommissioning of PV cells releases atmospheric emissions of toxic substances and contaminates land and ground water (EDD Guidelines 2002). Table 10.1 summarizes some of the chemicals used in the manufacturing of solar cells. The use of hydrofluoric acid (HF), nitric acid (HNO3), and alkalis (e.g., NaOH) for wafer cleaning, removing dopant oxides, and reactor cleaning poses occupational health issues related to chemical burns and the inhalation of fumes. These chemicals are also released into the atmosphere during processing. The process also generates toxic P2O5 and Cl2 gaseous effluents, which are hazardous to health (Fthenakis 2003). Hence, the life cycle of the solar cell involves several environmental issues that all need to be addressed in order to avoid long-term environmental impacts.
10.5.2 Battery Life Cycle in PV System
Batteries consist of various heavy metals, such as lead, cadmium, mercury, nickel, cobalt, chromium, vanadium, lithium, manganese, and zinc, as well as acidic or alkaline electrolytes (Morrow 2001). Exposure to such metals and electrolytes may have adverse impacts on humans and the natural environment. Even though the recycling and reuse of batteries have been practiced, batteries cannot be recharged forever, and the metals and electrolytes leak into the
Table 10.1 Some hazardous materials used in current PV manufacturing.

Material | Source | TLV-TWA (ppm) | STEL (ppm) | IDLH (ppm) | ERPG-2 (ppm) | Critical Effects
Arsine | GaAs CVD | 0.05 | - | 3 | 0.5 | Blood, kidney
Arsenic compounds | GaAs | 0.01 mg/m³ | - | - | - | Cancer, lung
Cadmium compounds | CdTe and CdS deposition, CdCl2 treatment | 0.01 mg/m³ (dust), 0.002 mg/m³ (fumes) | - | - | NA | Cancer, kidney
Carbon tetrachloride | Etchant | 5 | 10 | 100 | - | Liver, cancer, greenhouse gas
Chloro-silanes | a-Si and x-Si deposition | 5 | - | 800 | - | Irritant
Diborane | a-Si dopant | 0.1 | - | 40 | 1 | Pulmonary
Hydrogen sulfide | CIS sputtering | 10 | 15 | 100 | 30 | Irritant, flammable
Lead | Soldering | 0.05 mg/m³ | - | - | - | Blood, kidney, reproductive
Nitric acid | Wafer cleaning | 2 | 4 | 25 | - | Irritant, corrosive
Phosphine | a-Si dopant | 0.3 | 1 | 50 | 0.5 | Irritant, flammable
Phosphorous oxychloride | x-Si dopant | 0.1 | - | - | - | Irritant, kidney
Tellurium compounds | CIS deposition | 0.1 mg/m³ | - | - | - | Cyanosis, liver

Source: Fthenakis 2003
Note: TLV-TWA: Threshold Limit Value, Time-Weighted Average; STEL: Short-Term Exposure Level; IDLH: Immediately Dangerous to Life or Health; ERPG-2: Emergency Response Planning Guideline-2.
environment during their life cycle of operation. Moreover, other factors, such as voltage, ampere-hour rating, cycle life, charging efficiency, and self-discharge characteristics, are also important in evaluating the total amount of hazardous waste generated per unit of battery use. The use of corrosive electrolytes and toxic heavy metals needs to be addressed before disseminating large numbers of batteries through PV systems. Rydh and Sande (2005) reported that the overall battery efficiency is 0.41-0.80, including direct energy losses during operation and the energy requirements for the production and transport of the charger, the battery, and the inverter. For some batteries, the overall battery efficiency is even lower than the direct efficiency of the charger, the battery, and the inverter (0.50-0.85). A nickel metal hydride (NiMH) battery usually has a lower efficiency (0.41) compared to a Li-ion battery (0.8 maximum). However, if we consider the global efficiency of the battery, the impact of the heavy metals and corrosive electrolytes released into the environment needs to be included, which significantly lowers the global efficiency of the battery system itself.
10.5.3 Compact Fluorescent Lamp
Compact fluorescent lamps (CFLs) have recently become the most popular lamps. Each CFL is reported to prevent the emission of 500-1000 kg of carbon dioxide and 4-8 kg of sulfur dioxide every year in the U.S. (Polsby 1994). It is considered that a CFL consumes 4-5 times less energy and can last up to 13 times longer than a standard incandescent lamp producing the same lumens (Kumar et al. 2003). However, CFLs raise other environmental concerns. Energy Star (2008) reported that CFLs contain an average of 5 milligrams of mercury, one of the essential components of CFLs. CFLs also contain phosphor, which is copper-activated or silver-activated zinc sulfide. Exposure to these chemicals has long-term environmental impacts. Hence, despite being energy efficient, CFLs still need improvement to avoid environmental impacts. The global efficiency of a solar cell system, including its pathways, is shown in Figure 10.2. The average efficiency of such a solar panel is 15%. The solar panels are made of silicon cells, which are very inefficient in hot climates and are inherently toxic because they contain elements such as silicon, chromium, lead, and
Figure 10.2 Flow chart showing the pathway of generating artificial light from natural sunlight. [Chain: solar PV (15%) → battery (30-80%) → fluorescent light (57%) → total < 5%, with negative environmental impacts and energy consumption during manufacturing along the chain.]
others. The energy is stored in batteries for nighttime operations. These batteries are exhaustible and have a short life even if they are rechargeable. The batteries have a maximum efficiency of 41-80% (Rydh and Sande 2005). The compact fluorescent lamp, generating 800 lumens of light, uses 14 watts of electricity, and its lumens-per-watt efficiency is over 57% (Adler and Judy 2006). Thus, considering the local efficiency of all the components in the system in Figure 10.2, the global efficiency of the overall PV system is less than 5%. Natural and 100% efficient light is converted to less than 5% efficient artificial light. The global efficiency of the PV system is:

η_G = η1 × η2 × η3 × η4 (10.2)
Solar panels are widely used for lighting in isolated areas where other sources of energy are not available. Moreover, the embodied energy of the solar cells is very high and emits huge amount of CO z due to the fossil fuel use during manufacturing the solar cells. Toxic materials in the batteries and CFLs are one of the most environmental polluting components (Islam et al. 2006). The severity is particularly intense when they are allowed to oxidize. Note that oxidation takes place at any temperature.
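As a rough numerical check (a sketch using the component efficiencies quoted above as assumed point values), the chain of local efficiencies can simply be multiplied out:

```python
# Component efficiencies quoted in the text (assumed as point values):
panel = 0.15   # solar panel, ~15%
cfl = 0.57     # compact fluorescent lamp, lumens-per-watt efficiency ~57%

for battery in (0.41, 0.80):  # overall battery efficiency range in the text
    eta = panel * battery * cfl
    print(f"battery efficiency {battery:.2f} -> chain efficiency {eta:.1%}")
# 0.41 -> ~3.5%; 0.80 -> ~6.8%. With the additional factor (eta_4) of
# Eq. (10.2) -- e.g., manufacturing and wiring losses -- the total falls
# below the 5% cited in the text.
```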
10.5.4 Global Efficiency of Direct Solar Application
In the case of direct solar heating using fluid heat transfer, no toxic chemicals are used in the system. However, there is a significant efficiency loss from the source to the end use (Figure 10.3). Heat is lost on the reflecting surface, such as a parabolic reflector, during the heat transfer from the heat exchanger to the fluid, from the fluid to
Figure 10.3 Global efficiency of the solar to electricity conversion system. [Chain: solar (100%) → reflector (η1) → fluid (η2) → steam generation (η3) → turbine (η4) → generator (η5) → transmission (η6) → end-use device (η7).]
steam conversion in the Einstein cycle, and from the steam turbine to the generators and transmission. All of these losses decrease the global efficiency of the system. Hence, the global efficiency (η_G) of the direct solar application is:

η_G = η1 × η2 × η3 × η4 × η5 × η6 × η7 (10.3)
The total solar radiance, known as global solar irradiance, on the earth's surface is made up of direct and diffuse components. For the solar collector, the global solar irradiation (I_βC) on a slope with an inclination angle β is:

I_βC = I_βB + I_βD + I_βR (10.4)

where I_βB is the direct solar radiation propagating along the line joining the receiving surface and the sun, I_βD is the scattered (diffuse) solar radiation, and I_βR is the ground-reflected irradiance on a plane with surface inclination β (Li and Lam 2004).
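A minimal sketch of Equation (10.4) follows; the irradiance values are illustrative assumptions, not measurements:

```python
# Eq. (10.4): total irradiance on a collector tilted at angle beta is the sum
# of the beam, diffuse, and ground-reflected components (all in W/m^2).

def global_irradiance(beam: float, diffuse: float, reflected: float) -> float:
    return beam + diffuse + reflected

# Illustrative clear-sky values (assumed):
print(global_irradiance(beam=600.0, diffuse=150.0, reflected=50.0))  # 800.0
```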
After the incidence of radiation energy on the surface, some heat is lost, which can be summarized as follows (Tiwari 2002):

Surface assembly:
1) Variation in shape
2) Ambient temperature
3) Heat diffusivity and conductivity of the absorber
4) Optical consideration of the reflective surface

Receiver assembly:
5) Placement of the receiver
6) Heat loss in the receiver
7) Behavior of the receiver
8) Heat transfer fluid in the receiver
For a parabolic surface, ambient temperature is considered one of the important parameters for heat loss from the surface. The lower the ambient temperature, the higher the temperature difference (between the temperature of the absorber material and the ambient temperature) and the higher the convection heat loss. When the solar beam is incident on the parabolic surface, a portion of the heat from the beam is diffused through the surface materials. The parabolic trough collector (PTC) is composed of a number of materials, and each material has a different heat conductivity, which is related to conduction heat loss. Some heat loss takes place after reflection due to the receiver assembly. If the receiver is not placed properly, it cannot receive all the reflection in the line of focus. The material of the receiver itself absorbs some heat, depending on the material of the receiver. Some heat is transmitted from the receiver to the ambient air by convection. The selection of the heat transfer fluid is also important because the thermal capacity of the heat transfer fluid dictates the performance of the energy transfer. Figure 10.4 shows the energy balance of a parabolic trough collector. If x is the total reflection on the surface and y is the heat transfer to the fluid in the receiver, then:

Total loss of radiation, L = x − y (10.5)
Figure 10.4 Energy balance of a parabolic trough collector. [Inputs to the collector: direct radiation (x1), diffused radiation (x2), and reflected radiation (x3); outputs: heat transfer to the fluid, heat loss from the receiver (y1), and heat loss from the surface assembly (y2).]
418
THE GREENING OF PETROLEUM OPERATIONS
For a parabolic surface with the same receiver assembly and parabolic surface assembly (except the incident surface) and the same ambient temperature, the loss is a function of the incident surface:

L = f(incident surface) (10.6)
The incident surface is the most important element of a parabolic surface. It actually dictates the reflection of the solar beam from the surface. The reflectivity of a surface depends on the color and the gloss of the surface. It is already known that a white surface is the most reflective and a black surface is the least reflective. So, the closer the surface is to white, the more reflective it is. On the other hand, surface smoothness depends on how polished the surface is. A polished surface is glossy, with fewer fractures and less coarseness, thus reducing diffuse reflection on coarse areas. Polished surfaces exhibit specular reflection, in which the incidence and reflection angles are equal. The reflective surface can be a thin layer of any material on top of a collector surface assembly. Due to the thinness of the reflective layer, its heat diffusivity and conductivity are not that important, which is why it can be speculated that white, polished surfaces can be used as reflective surfaces for a parabolic trough collector. The use of natural materials as reflective materials can enhance the global efficiency by minimizing the energy input. For example, the natural mineral surface of limestone can be used. Erdogan (2000) reported that rocks made of a single mineral, such as marble or limestone, show increasing surface reflectance with decreasing grain size. Moreover, a small crystal size and dense crystallization have a brightness-enhancing effect on polished rock surfaces. The reflection coefficient K depends on the mineral composition of the rock, grain size, color, etc. It is reported that rich massive ore with an admixture of pyrite has K = 16.0-18.5%, rich massive ore with an admixture of chalcopyrite has K = 11.5-13.5%, coarse-grained pyrite has K = 10.5-19.0%, and fine-grained pyrite has K = 29.0-56.5% (Shekhovtsov and Shekhovtsov 1970). The same study reported that dark-gray limestone has K = 15.5-16.5%, but white limestone has K = 72.0-78.0%. From the viewpoint of energy efficiency, it is found that solar energy is very efficient and suitable for direct use in water heating systems. It can also be applied to a number of other direct heating applications. Figure 10.5 shows the global efficiency of the direct solar heating application.
Figure 10.5 Global efficiency of the direct solar heating application. [Chain: sun (100%, η1) → parabola with lime surface (η2) → water heating (η3).]
The global efficiency of a steam power plant to a cooling system is:

η_G = η1 × η2 × η3 (10.7)
The global efficiency from primary heating to the cooling input includes the efficiencies of several units (Figure 10.6). This global efficiency, calculated by Khan et al. (2007), is as follows: global heat transfer efficiency (η_Global) = heat-to-steam efficiency (70%) × turbine (thermal) efficiency (η_t) × generator efficiency (80%) × transmission efficiency (90%) × compressor's rotor efficiency (80%). Hence:

Global heat transfer efficiency (η_Global) = 40% × η_t (10.8)
This global efficiency helps to find the coefficient of performance (COP) of a vapor compression cooling/refrigeration system. So far, the COP for a vapor cooling/refrigeration system has been calculated as the ratio of the heat removed to the net work (the compressor's work),
Figure 10.6 Global efficiency of a steam power plant to cooling system (Khan et al. 2007a). [Chain: heat-to-steam efficiency (70%) → turbine efficiency (40%), giving mechanical energy → generator efficiency (80%), giving electrical energy → transmission efficiency (90%) → compressor's rotor efficiency (80%).]
420
THE GREENING OF PETROLEUM OPERATIONS
while disregarding the efficiency of the units that bring the energy from the primary heat source to the compressor's input. That is why Khan et al. (2007) propose to include the global efficiency, so that the real COP is obtained as follows:

COP_G = η_Global × (heat removed / net work) (10.9)
Due to the inclusion of the global efficiency, the true COP of a vapor compression system is found to decrease considerably. However, the scenario is different for an absorption cooling/refrigeration system, because that system includes the primary heat from the source. According to Khan et al. (2007a), for the same surrounding and cooling temperatures, the COP of an absorption system is almost 2.5 times greater than that of a vapor system. Note that the source of heat was not included in the calculation of the global efficiency or COP. The heat source could be renewable or non-renewable. The extraction efficiency from fossil fuel (a non-renewable source) is actually the combustion efficiency of the fossil fuel, which can vary from 50% to 90% depending upon the fuel specification (Khan et al. 2007a). On the other hand, the extraction of solar energy from a parabolic trough collector (PTC) is the combined efficiency of delivering energy to the heat transfer fluid in the receiver and the transmission line of that fluid, which can vary from 50% to 70% (Khan et al. 2007a).
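The following sketch works through Equations (10.8) and (10.9); the turbine efficiency and the conventional COP are assumed for illustration only:

```python
# Fixed unit efficiencies from Figure 10.6 / Eq. (10.8).
heat_to_steam = 0.70
generator = 0.80
transmission = 0.90
compressor_rotor = 0.80

fixed_factor = heat_to_steam * generator * transmission * compressor_rotor
print(f"{fixed_factor:.4f}")  # 0.4032 -- the ~40% of Eq. (10.8)

def real_cop(cop_conventional: float, turbine_eff: float) -> float:
    """Eq. (10.9): real COP = eta_Global x (heat removed / net work)."""
    eta_global = fixed_factor * turbine_eff
    return eta_global * cop_conventional

# Assumed illustrative numbers: a conventional COP of 3 with a 40% turbine.
print(f"{real_cop(3.0, 0.40):.2f}")  # ~0.48, far below the nameplate COP
```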
10.5.5 Combined-Cycle Technology
In combined heat and power (CHP) technology, also called cogeneration, heat and power are sequentially generated from a single primary energy source. The two different forms of energy could be electrical energy and thermal energy, or mechanical energy and thermal energy. The sequence of generation could also be any combination of different forms of energy. For an industry that needs both electrical energy and low-pressure process steam, CHP can be ideally beneficial. It has the advantage of reducing primary energy use, thus reducing the overall cost of the system. Even though CHP technology is considered one of the most efficient energy technologies, there is significant loss in the global efficiency of the system. Figure 10.7 is a schematic of the combined heat and power generation technology. The efficiencies for cycle 1 and cycle 2
Figure 10.7 Global efficiency from natural gas burning to electricity.
are calculated separately and added to get the overall global efficiency of the CHP system.
Efficiency of Cycle 1: η_G1 = μ1 × μ2 × μ6 × μ7 × μ8

where μ1 is the local efficiency of the combustion chamber, from which the heat loss through flue gas, the incomplete combustion of the fuels, and the loss through the boiler or chamber wall should be deducted.

Efficiency of Cycle 2: η_G2 = μ3 × μ4 × μ5 × μ8

Overall global efficiency: η_G = η_G1 + η_G2
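The bookkeeping can be sketched as follows; the μ values below are purely illustrative assumptions, since the text defines only which local efficiencies enter each cycle:

```python
# Illustrative (assumed) local efficiencies mu_1 ... mu_8 of Figure 10.7.
mu = {1: 0.85, 2: 0.90, 3: 0.40, 4: 0.95, 5: 0.90, 6: 0.80, 7: 0.90, 8: 0.95}

eta_g1 = mu[1] * mu[2] * mu[6] * mu[7] * mu[8]  # Cycle 1
eta_g2 = mu[3] * mu[4] * mu[5] * mu[8]          # Cycle 2
eta_g = eta_g1 + eta_g2                          # overall global efficiency

print(round(eta_g1, 3), round(eta_g2, 3), round(eta_g, 3))  # 0.523 0.325 0.848
```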
10.5.6 Hydroelectricity to Electric Stove
Hydroelectricity is generated by utilizing the energy of falling water. Electricity from hydropower is a renewable form of energy and is considered an environmentally friendly energy source. A hydroelectric power plant can either be a "run-of-the-river" type or a storage reservoir type. In "run-of-the-river" power plants, it is not necessary to build large dams to store water. The water is
simply diverted from the river into the channel carrying the water and then to a penstock pipe. However, in storage-type reservoirs, a high dam is constructed to store water, which increases the water head and supplies water at peak load requirements. Power development from water resources utilizes a valuable natural resource. Hence, increasing the efficiency of power production contributes significantly to the environment and the economy. Calculating the global efficiency, which is the product of the individual efficiencies of each unit, is very important. The storage reservoir loses water through evaporation in the reservoir as well as from the conveyance channel. There is a considerable loss in the penstock pipe due to friction, bends, and joints. The overall loss in this system could reach up to 15%, making the total efficiency 85% before the turbine. Different types of turbines have different efficiencies; however, Pelton turbine efficiency could range from 70-90% (DOE 2001). Natural Resources Canada (2004) published the average efficiencies of different impulse turbines (Pelton 80-90%, Turgo 80-95%, and cross-flow 65-85%) and reaction turbines (Francis 80-90%, pump-as-turbine 60-90%, propeller 80-95%, and Kaplan 80-90%). The same publication reported that the efficiency of synchronous generators varies from 75 to 90%, depending on the size, and the efficiency of induction generators is approximately 75% at full load, falling to 65% at part load. Hence, for calculation, the average turbine efficiency is taken as 70%, and the generator efficiency is taken as 90%. Green (2004) reported that the average loss in electrical transmission lines is approximately 10%. The electric heating stove efficiency is approximately 90%. Hence, the global efficiency of hydropower to cooking stove is calculated as shown in Figure 10.8:

Global efficiency of hydroelectricity to electric stove = 85% × 70% × 90% × 90% × 90% × 90% = 39.04%
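The chain can be verified directly (a sketch using the stage efficiencies of Figure 10.8); note again that the global efficiency is the product, not the sum, of the unit efficiencies:

```python
stages = {            # unit efficiencies from Figure 10.8
    "dam/conveyance": 0.85,
    "turbine": 0.70,
    "generator": 0.90,
    "transmission": 0.90,
    "electric stove": 0.90,
    "electricity to heat": 0.90,
}

eta = 1.0
for eff in stages.values():
    eta *= eff
print(f"{eta:.4f}")  # 0.3904, i.e., roughly 39%
```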
10.6 Global Efficiency of Biomass Energy
Biomass energy is one of the most sustainable sources of energy. It originates from sunlight and continues to exist in one form of biomass or another within a system. The key to the system's sustainability lies in its energy balance. Here is where natural sources of biomass and non-biomass must be distinguished from non-natural, non-characteristic, industrially synthesized sources of non-biomass.
Figure 10.8 Global efficiency of hydroelectricity to electric stove. [Chain: dam/reservoir (loss through evaporation, friction, pipes, and fittings = 15%, η = 85%) → turbine (η = 70%) → generator (η = 90%) → transmission (η = 90%) → electric stove (η = 90%) → electricity to heat (η = 90%).]
In the same way that sunlight photosynthesizes plant material into living material (as opposed to fluorescent lighting, which would freeze that process), synthetic, naturally non-characteristic non-biomass can never convert natural non-biomass into biomass, no matter how much solar energy is available anywhere in the system. In Figure 10.9, the atmosphere acts as a bioreactor, along the outlines of the figure, and does not and will not enable the conversion of synthetic non-biomass into biomass. The key problem of mass
Figure 10.9 Sustainable pathway for material substance in the environment.
balance in the atmosphere, as in the entire natural environment of the earth as a whole, is laid out in Figure 10.10. The accumulation rate of synthetic non-biomass continually threatens to overwhelm the natural capacities of the environment to use or absorb such material. Hence, such analysis could form the basis of calculating the global efficiency of biomass combustion technologies. It is conventionally reported that the combustion of wood in traditional stoves has a relatively low efficiency, in the range of 10-15% (Shastri et al. 2002). While efforts have been made to improve this efficiency, most authors have missed the most important point regarding this efficiency analysis. The traditional efficiency calculation is based on the local efficiency, considering only the fuel input and heat output of the system itself. This method does not consider the utilization of by-products, such as the fresh CO2 that is essential for plant photosynthesis, the use of exhaust heat for household water heating using a heat exchanger, and the use of ash as a surfactant for enhanced oil recovery, as a fertilizer, and as a good source of natural minerals such as silica, potassium, sodium, and calcium. This analysis is typical of modern engineering calculations that fail to include factors beyond those with immediate implications. In Chapter 7,
Figure 10.10 Synthetic non-biomass that cannot be converted into biomass will accumulate far faster than naturally sourced non-biomass, which potentially can be converted to biomass.
Figure 10.11 Pictorial view of zero-waste models for wood stove.
several stove designs have been proposed. Figure 10.11 is a pictorial representation of a zero-waste stove model. In the figure, four rectangular boxes represent the outputs from the stove: energy, CO2, particulates, and heat from exhaust gas. To make this stove sustainable, these outputs should be completely utilized. Indeed, the output of this system will be the input for others. In Chapter 7, it has been shown that most wastes can be utilized. If that is the case, the global efficiency of the process increases dramatically. In addition, the benefit of creating pollutant-free CO2 is immense. It has been established in previous analysis that the major source of CO2 pollution is the array of heavy metals and other chemicals that are used during petroleum processing.
10.7 Global Efficiency of Nuclear Power
Nuclear power is considered one of the most efficient technologies for power generation. This is true if the criterion for evaluation is local efficiency. The efficiency of thermal to net electric conversion in a nuclear power plant is considered to be over 50%. However, if we consider the global efficiency, the scenario can be entirely different. The conversion of uranium ore from its natural state to UF6 and UO2, enrichment, processing, and power generation
Figure 10.12 Schematic of power generation from nuclear power. [Flow: uranium extraction → enriched uranium (U-235, 2-3%) → fabrication (UF6 to UO2 fuel rods) → reactor → power. Spent fuel: the half-life of U-235 α-emission is 7 × 10^8 years; T = ∞ to return to the natural state under natural processes.]
involve the emission of radiation with very long half-lives. Hence, it takes infinite time for uranium to return to its natural state, because its characteristic time is violated. The spent fuel is the major concern in nuclear power generation. Even though it is argued that it is feasible to store spent fuel in geological storage for thousands of years, it is highly unlikely that this can be a long-term solution. Since the radiation continues for millions of years, current storage system designs for a few thousand years will not solve the problem. Mortimer (1989) reported that a nuclear power system releases 4-5 times more CO2 from its life-cycle operations than equivalent power production from renewable energy sources. This is because it involves huge amounts of energy for mining, fuel conversion, fabrication, and enrichment. Hence, considering the life-cycle emissions of CO2 and the spent fuel management perspective, the global efficiency of nuclear energy is significantly less than what is advocated.
10.8 Discussion
Conventional analysis of efficiency does not include the efficiency of the whole life cycle of the different components or the utilization of the by-products of energy systems. For example, solar energy is 100%
efficient. This efficiency emerges from the well-known principle of conservation of energy: energy cannot be created or destroyed. There isn't one component of sunlight that is not useful. It is continuously beneficial to the environment. Sunlight gives immediate vision, but it also helps produce vitamin D. Sunlight is crucial for photosynthesis, which benefits the environment by triggering many beneficial chain reactions. As time progresses, the environmental benefit of each photon continues to grow. It is of great value. Even if it is possible to create carbohydrates artificially, the quality of this product will be questionable, even if this may not be evident to all. When chemical fertilizers were introduced in the name of a "Green Revolution," few realized that, fifty years later, this would be the most important trigger for the non-greening of the Earth. However, the use of solar energy by converting it into electricity through PV systems does not translate into higher efficiency; instead, several environmental problems are created during the entire process. Consider also the make-up of the batteries. The most modern batteries are more toxic than earlier types, filled with heavy metals of all sorts. With plastic covers and toxic insides, there is no hope for these batteries to stop polluting the environment indefinitely. The severity is particularly intense when they are allowed to oxidize, and oxidation takes place at any temperature (unlike the common perception that it takes place only when materials are incinerated). The final converter into artificial light, the "inert" gas-filled tube, continues to radiate light that is very toxic to the eye. Without sunlight, photosynthesis wouldn't occur, vitamin D wouldn't form, and human life-protecting skin pigments wouldn't exist. In Figure 10.13, the conventional analysis shows that nuclear energy has the highest efficiency. However, this is only valid if we do not take into account the pathway of nuclear energy, which includes everything from uranium exploration, mining, and enrichment to conversion and heating. The energy consumed in the hundreds of stages of uranium enrichment, fuel conversion, and fuel heating needs to be taken into account. The most crucial issue in nuclear energy is the safe management of nuclear waste. To date, the issue of nuclear waste management has no clear answer. Kelly (2006) argued that a general misconception exists amongst the general public and some policy makers that renewable energy
Figure 10.13 Local and global efficiency of the energy systems with and without considering the pathways. [The plot shows the η of each energy source against global efficiency: nuclear ranks highest if the pathway is not considered and drops sharply if the pathway is considered.]
deployment is the solution to carbon emissions. It was further argued that, as renewable sources are regarded as potential sources of employment and economic growth, the expected economic benefits on the energy supply side act to the detriment of energy efficiency. Hence, without reducing total energy consumption, the deployment of renewable energy does not reduce greenhouse gas emissions; the only way to reduce them is to increase the global efficiency of the energy technologies employed.
10.9 Concluding Remarks
Various energy sources have been characterized based on their global efficiency. It has been shown that the conventional calculation of efficiency omits several factors that need to be taken into consideration. The most usual practice is to consider the local efficiency of the system as the measure of efficiency. However, the local efficiency does not necessarily represent the true efficiency of a system. In order to have a true evaluation of any system, global efficiency has been introduced. Global efficiency represents the overall integrated system efficiency, considering the pathways of the system,
waste utilization, and impacts on the natural environment. It has been argued here that an energy system with a high local efficiency does not necessarily have a high global efficiency. The introduction of global efficiency offers a paradigm shift in the characterization of energy sources. Moreover, considering only the local efficiency, energy sources might be either overvalued or undervalued. As energy consumption is related to higher emissions of pollutants, including greenhouse gases, the only way to increase efficiency and reduce environmental impact is to increase the global efficiency of any energy system.
11 The Zero-Waste Concept and its Application to Petroleum Engineering
11.1 Introduction
The modern age is synonymous with wasteful habits, whereas nature does not produce any waste. The fundamental notion that mass cannot be created or destroyed dictates that only the transformation of materials from one phase to another can take place. However, mass balance alone does not guarantee zero waste. Nature is perfect, which means it operates at 100% efficiency. This means that any product that is the outcome of a natural process must be entirely usable by some other process, which in turn would result in products that are suitable as inputs to yet other processes. A perfect system is 100% recyclable and, therefore, zero waste. Such a process remains zero waste as long as each component of the overall process also operates at zero waste. In a desired zero-waste scheme, the products and by-products of one process are used for another process. The scientific definition of a zero-waste scheme is followed by an example of zero waste, with detailed calculations showing how this scheme can be formulated. Following this, various stages of petroleum engineering are discussed in light of the zero-waste scheme.
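As an illustration of this bookkeeping (a minimal sketch with assumed names and placeholder stream masses, not the book's detailed calculations), a scheme is zero waste when every output stream of one process is consumed as an input by another:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    inputs: dict = field(default_factory=dict)   # stream name -> mass (kg)
    outputs: dict = field(default_factory=dict)  # stream name -> mass (kg)

def is_zero_waste(processes) -> bool:
    """True if every produced stream is consumed by some process."""
    consumed = {s for p in processes for s in p.inputs}
    produced = {s for p in processes for s in p.outputs}
    return produced <= consumed

# Illustrative loop (masses are placeholders): stove outputs feed a garden,
# whose biomass feeds the stove again -- nothing is left over as waste.
stove = Process("wood stove", {"biomass": 10.0}, {"CO2": 8.0, "ash": 2.0})
garden = Process("garden", {"CO2": 8.0, "ash": 2.0}, {"biomass": 10.0})
print(is_zero_waste([stove, garden]))  # True
print(is_zero_waste([stove]))          # False: CO2 and ash go unused
```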
Fossil fuel energy sources are predominantly used today. Nearly 90% of today's energy is supplied by oil, gas, and coal (Salameh 2003). The burning of fossil fuel accounts for a more than 100 times greater dependence than the energy generated through "renewable" sources (solar, wind, biomass, and geothermal energy). The panic starts when it is promoted that fossil fuels are limited and that a switch to "renewables" is the only sustainable option for the viability of human civilization. The question arises as to how one can begin to make the switch. Here, the science of energy production presents a comprehensive analysis showing that running out of energy sources is not in conformance with overall energy and mass balance. In addition, it is shown that the currently used "renewable" schemes are not truly renewable and are not even efficient compared to conventional petroleum production schemes. At present consumption levels, known reserves for coal, oil, gas, and nuclear correspond to durations of the order of 230, 45, 63, and 54 years, respectively (Rubbia 2006). Note that these numbers correspond to energy production from known reserves with currently established techniques. If petroleum operations can be rendered sustainable, this time limitation will become irrelevant. This is not to say there should be no effort to make use of other natural energy sources. It is important, however, to remain cognizant of the truly "natural" status of these energy sources. Crude oil is a natural energy source because it can be used without resorting to unnatural processes. Radioactive ores are also natural, but current technologies are not capable of using them as an energy source without resorting to enrichment processes that are highly unnatural. Solar energy is obviously the most appealing energy source, but the process of turning solar energy into "usable" energy through a series of inefficient conversions, using toxic photovoltaic battery materials and fluorescent light distributors, is not sustainable and far more insulting to the environment than flaring natural gas. For instance, the fact that the most common usage is photovoltaics, in which the maximum efficiency can only be 15% (Gupta et al. 2006), can be a reason to reject this particular use of solar energy. The same comment stands for wind energy, for which direct grinding is sustainable (centuries of practice in the Netherlands) but the conversion to electricity is not. Bio-fuel in this regard offers an interesting take. The direct burning of wood or vegetation is sustainable, and the resulting CO2 is beneficial to the environment, on the condition that chemical fertilizers, pesticides, or genetic modification were not used.
The argument made in this book is that all currently used energy solutions are energy-inefficient and wasteful. This chapter establishes that a sustainable technology is based on zero waste and, therefore, offers the greatest possible global efficiency. Inherent to this is the environmental benefit, which is an added bonus of the sustainable technology. Following this, petroleum technologies are discussed with a focus on current practices and recommendations on how to make these practices sustainable.
11.2 Petroleum Refining
Crude oil is a mixture of hydrocarbons. These hydrocarbon mixtures are separated into commercial products by numerous refining processes. They have compositions very similar to those of vegetable oils. As a result, many properties of the two sets of fluids are similar, including biodegradability, flashpoint, dead oil viscosity, density, bactericidal properties, etc. However, petroleum fluids are rarely used in their original form. Even though it is known that petroleum fluids were used in various cultures from ancient times to the Renaissance, in post-Renaissance culture petroleum fluids are rarely used directly. One exception is the use of crude oil as a mosquito repellant in the former Soviet Union. Even though it eradicated malaria from much of the Soviet Union, the Soviets joined in the production of DDT after the Nobel Prize-winning synthesis of this toxic chemical, most likely for commercial reasons. After DDT was banned in 1972, the use of crude oil as a pesticide did not return to practice. Today, petroleum fluids are transported to refineries prior to any usage. Oil refineries are enormously complex. Figure 11.1 shows an oil refinery complex in Dartmouth, Nova Scotia. Refining involves a series of processes to separate and sometimes alter the hydrocarbons in crude oil. The fundamental process of refining involves the breakdown of crude oil into its various components and their separation to sell as value-added products. Because each component loses its properties, chemicals are added to restore the original qualities. This is a typical chemical decomposition and re-synthesis process that has been practiced in practically all sectors of the modern age, ranging from the plastics industry to pharmaceutical industries. Figure 11.2 shows the major steps of a conventional refining process. The first step is transportation and storage. In the crude
Figure 11.1 Dartmouth refinery, Nova Scotia.
Figure 11.2 Major steps of the refining process: transportation and storage of crude oil; hydrocarbon separation (atmospheric distillation, vacuum distillation); hydrocarbon creation (cracking, coking, alkylation, reforming, etc.); hydrocarbon blending; and cleaning of impurities (removal of sulfur and other chemicals; solvent dewaxing, caustic washing).
In the crude oil refining process, fractional distillation is the main process that separates oil and gas. For this process, the distillation tower is used, which operates at atmospheric pressure and leaves a residue of hydrocarbons with boiling points above 400°C and more than 70 carbon atoms in their chains.
Figure 11.3 Pictorial view of a fractionating column.
Small molecules of hydrocarbons have low boiling points, while larger molecules have higher boiling points. The fractionating column is cooler at the top than at the bottom, so the vapors cool as they rise. Figure 11.3 shows a pictorial view of the fractionating column, along with the ranges of hydrocarbons in each fraction. Each fraction is a mix of hydrocarbons with its own range of boiling points, and each comes off at a different level in the tower.
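As a concrete sketch of how a column sorts components, the following uses approximate, representative boiling ranges for the common cuts (textbook values assumed for illustration, not data from Figure 11.3):

```python
# Approximate boiling ranges (in deg C) of common distillation cuts.
# Representative textbook values, assumed for illustration only.
FRACTIONS = [
    ("refinery gas",      -165,  40),
    ("gasoline/naphtha",    40, 200),
    ("kerosene",           150, 275),
    ("diesel/gas oil",     200, 350),
    ("heavy gas oil",      350, 400),
    ("residue",            400, None),  # stays at the tower bottom
]

def fraction_for(boiling_point_c):
    """Return the first cut whose (overlapping) range matches."""
    for name, low, high in FRACTIONS:
        if boiling_point_c >= low and (high is None or boiling_point_c < high):
            return name
    return "residue"

print(fraction_for(110))  # -> 'gasoline/naphtha'
print(fraction_for(420))  # -> 'residue' (above 400 deg C)
```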
Petroleum refining has evolved continuously in response to changing consumer demands for better and different products, such as the shift from aviation gasoline to jet fuel. Each requires various degrees of "refinement" to conform to the specific needs of machinery designed according to certain "ideal" fluid behavior. A summary of a detailed process flow chart for the oil refining steps is presented in Table 11.1. The table also describes the different treatment methods for each of the refining phases.
Table 11.1 Details of the oil refining process and the various types of catalysts used (process; description; catalyst/heat/pressure used).

Distillation processes. Distillation relies on the difference in the boiling points of various fluids; density also has an important role to play. The lightest hydrocarbons are separated at the top and the heaviest residue at the bottom. Uses: heat.

Coking and thermal processes. A coking unit converts heavy feedstocks into solid coke and lower-boiling hydrocarbon products that are suitable feeds for other refinery units to convert into higher-value transportation fuels. This is a severe thermal cracking process that forms coke. Coke contains high-boiling-point hydrocarbons and some volatiles, which are removed by calcining at a temperature of 1095-1260°C. The coke is allowed sufficient time to remain in high-temperature heaters in insulated surge drums; hence, the process is called delayed coking. Uses: heat.

Thermal cracking. The crude oil is subjected to pressure, and large molecules are broken into small ones to produce additional gasoline. The naphtha fraction is useful for making many petrochemicals. Heating naphtha in the absence of air makes the molecules split into shorter ones. Uses: excessive heat and pressure.

Catalytic cracking. Catalytic cracking converts heavy oils into high-octane gasoline, less heavy oils, and lighter gases. Paraffins are converted into C3 and C4 hydrocarbons, and the benzene rings of aromatic hydrocarbons are broken. Rather than distilling more crude oil, an alternative is to crack crude oil fractions with longer hydrocarbons; larger hydrocarbons split into shorter ones at low temperatures if a catalyst is used. The products include useful short-chain hydrocarbons. Catalysts: nickel, zeolites, acid-treated natural alumina silicates, and amorphous and crystalline synthetic silica-alumina catalysts.

Hydro-processing. Hydroprocessing (325°C and 50 atm) includes both hydrocracking (350°C and 200 atm) and hydrotreating. Hydrotreating involves the addition of hydrogen atoms to molecules without actually breaking them into smaller pieces and improves the quality of various products (e.g., by removing sulfur, nitrogen, oxygen, metals, and waxes and by converting olefins to saturated compounds). Hydrocracking breaks longer molecules into smaller ones; it is a more severe operation using higher heat and longer contact time. Hydrocracking reactors contain fixed, multiple catalyst beds. Catalysts: platinum, tungsten, palladium, nickel, and crystalline mixtures of silica-alumina; cobalt and molybdenum oxide on alumina, nickel oxide, tungsten, nickel sulfide, vanadium oxides, and nickel thiomolybdate are used for sulfur removal, and a nickel-molybdenum catalyst is used for nitrogen removal.

Alkylation. Alkylation or "polymerization" is the process of forming longer molecules from smaller ones. A related process is isomerization, in which straight-chain molecules are converted into higher-octane branched molecules. The reaction requires an acid catalyst at low temperatures and low pressures. The acid composition is usually kept at about 50%, making the mixture very corrosive. Catalysts: sulfuric acid or hydrofluoric acid, HF (1-40°C, 1-10 atm); platinum on an AlCl3/Al2O3 catalyst is used as a new alkylation catalyst.

Catalytic reforming. This uses heat, moderate pressure, and fixed-bed catalysts to turn naphtha, the short-carbon-chain molecule fraction, into high-octane gasoline components, mainly aromatics. Catalyst: platinum (Pt) metal on an alumina (Al2O3) base.

Treating non-hydrocarbons. Treating can involve chemical reactions and/or physical separation. Typical examples of treating are chemical sweetening, acid treating, clay contacting, caustic washing, hydrotreating, drying, solvent extraction, and solvent dewaxing. Sweetening compounds and acids desulfurize crude oil before processing and treat products during and after processing.
The third column in the above table shows how the refining process can render natural petroleum fluids into toxic chemicals. If the heat sources and catalysts used are products of unsustainable practices, their contact with petroleum fluids will result in unsustainable products. Unless this is recognized, further refinement of the process, e.g., optimization of catalysts, automation of heating
elements, blending of various additives, and corrosion protection, will not solve the sustainability problem. Catalysts used in processes that remove sulfur are impregnated with cobalt, nickel, or molybdenum. During the separation process, sulfur from crude oil is removed only in exchange for traces of these catalysts. As seen in Chapter 3 of this book, trace elements are not negligible and must be accounted for in determining long-term impacts. These trace elements will accompany the refined oil and will end up in combustion chambers, eventually polluting the CO2 emitted from a combustion engine. The inability of current detection techniques to identify these trace elements does not ensure that the contamination of CO2 does not take place. It has been discussed in previous chapters that contaminated CO2 is not acceptable to plants or trees, which reject this strand of CO2. This process ends up contributing to the overall concentration of CO2 in the atmosphere, delaying natural consumption and utilization of CO2 in the ecosystem. If the removal of sulfur is the objective, the use of zeolite can solve this problem. It is well known that naturally occurring zeolite has the composition to act as a powerful agent that adsorbs unwanted matter, with high levels of adsorption, ion exchange, and catalytic action (Shimada 1996). Even before the detailed composition of naturally occurring zeolite is known, the natural state of such a powerful agent should confirm that its usage is not harmful to the environment. Similar properties have been identified in limestone as well as in vegetable oils, which can be used as solvents for removing sulfur compounds. The use of zeolite or similar naturally occurring separation materials would be benign to the environment and would also eliminate the additional cost of cobalt, nickel, and molybdenum processing, bringing a double dividend to the petroleum processing industry. Conventionally, synthetic catalysts are used for enhancing the petroleum cracking process. Even when naturally occurring chemicals are used, they are acid-treated. With the acid being synthetically produced, the process becomes irreversibly contaminated. More recently, microwave treatment of natural materials has been proposed in order to enhance the reactivity of natural materials (Henda et al. 2006). With microwave heating not being a natural process, this treatment will also render the process unsustainable. However, such treatment is not necessary because natural materials, such as zeolite, clay, and others, do contain properties that would help the cracking process (Lupina and Aliev 1991). Acid enhancing,
if at all needed, can be performed with organic acids or acids derived from natural sources. Acid-function catalysts impregnated with platinum or other noble metals are used in isomerization and reforming. Research on this topic has focused on the use of refined heavy metal elements and synthetic materials (Baird, Jr. 1990). These materials are known carcinogens and have numerous long-term negative effects on the environment. In addition, the resulting products are aromatic oils, carcinogenic polycyclic aromatic compounds, or other hazardous materials, and they may also be pyrophoric. This becomes a difficult short-term problem. When such a problem is addressed, the solutions usually offered are no more sustainable. For instance, in order to combat pyrophoricity, a patented technology uses aromatic hydrocarbons such as alkyl-substituted benzenes, including toluene, xylene, and heavy aromatic naphtha; heavy aromatic naphtha comprises xylene and higher aromatic homologs (Roling and Sintim 2000). The entire process spirals further down the path of unsustainability. Table 11.2 shows the various processes and products used during the refining process. Each of the above functions can also be performed with natural substitutes that are cheaper and benign to the environment. This list includes the following: zeolites, alumina, silica, various biocatalysts, and enzymes in their natural state. The use of bacteria to decompose large hydrocarbon molecules offers an attractive alternative because the process is entirely sustainable. Khan and Islam (2007b) also suggest the use of gravity segregation to separate distillates from lighter components to heavier ones. The use of solar heating, in conjunction with heating from flares that are available in the oil field, will bring down the heating cost and make the process sustainable.
11.2.1 Zero-waste Refining Process

The zero-waste scheme is the only way to sustainability. Recent works by Lakhal and H'Mida (2003) and Lakhal et al. (2006) propose a sustainable refining scheme with the so-called Olympic model. Their work analyzes the structure of the supply chain from production, transportation, and distribution to the end users. The specific aspects of the model include (1) the actual contaminants through the supply chain and (2) an analysis of operations, processes, materials design, and selection according to environmental policy. The research asserts that environmental practices would accrue
Table 11.2 Various processes and products in the oil refining process (process name; action; method; purpose; feedstock(s); product(s)).

FRACTIONATION PROCESSES
Atmospheric distillation; separation; thermal; separate fractions; desalted crude oil; gas, gas oil, distillate, residual.
Vacuum distillation; separation; thermal; separate without cracking; atmospheric tower residual; gas, gas oil, lube, residual.

CONVERSION PROCESSES - DECOMPOSITION
Catalytic cracking; alteration; catalytic; upgrade gasoline; gas oil, coke distillate; gasoline, petrochemical feedstock.
Coking; polymerize; thermal; convert vacuum residuals; gas oil, coke distillate; gasoline, petrochemical feedstock.
Hydrocracking; hydrogenate; catalytic; convert to lighter oils; gas oil, cracked oil, residual; lighter, higher-quality products.
Hydrogen steam reforming; decompose; catalytic/thermal; produce hydrogen; desulfurized gas, O2, steam; hydrogen, CO, CO2.
Steam cracking; decompose; thermal; crack large molecules; atmospheric tower heavy fuel/distillate; cracked naphtha, coke, residual.
Visbreaking; decompose; thermal; reduce viscosity; atmospheric tower residual; distillate, tar.

CONVERSION PROCESSES - UNIFICATION
Alkylation; combining; catalytic; unite olefins and isoparaffins; tower isobutane/cracker olefin; iso-octane (alkylate).
Grease compounding; combining; thermal; combine soaps and oils; lube oil, fatty acid, alky metal; lubricating grease.
Polymerizing; polymerize; catalytic; unite two or more olefins; cracker olefins; high-octane naphtha, petrochemical stocks.

CONVERSION PROCESSES - ALTERATION OR REARRANGEMENT
Catalytic reforming; alteration/dehydration; catalytic; upgrade low-octane naphtha; coker/hydrocracker naphtha; high-octane reformate/aromatic.
Isomerization; rearrange; catalytic; convert straight chains to branches; butane, pentane, hexane; isobutane/pentane/hexane.

TREATMENT PROCESSES
Amine treating; treatment; absorption; remove acidic contaminants; sour gas, HCs with CO2 and H2S; acid-free gases and liquid HCs.
Desalting; dehydration; absorption; remove contaminants; crude oil; desalted crude oil.
Drying and sweetening; treatment; absorption/thermal; remove H2O and sulfur compounds; liquid HCs, LPG, alky feedstock; sweet and dry hydrocarbons.
Furfural extraction; solvent extraction; absorption; upgrade mid-distillate and lubes; cycle oils and lube feedstocks; high-quality diesel and lube oil.
Hydrodesulfurization; treatment; catalytic; remove sulfur and contaminants; high-sulfur residual/gas oil; desulfurized olefins.
Hydrotreating; hydrogenation; catalytic; remove impurities, saturate HCs; residuals, cracked HCs; cracker feed, distillate, lube.
Phenol extraction; solvent extraction; absorption/thermal; improve viscosity index and color; lube oil base stocks; high-quality lube oils.
Solvent deasphalting; treatment; absorption; remove asphalt; vacuum tower residual, propane; heavy lube oil, asphalt.
Solvent dewaxing; treatment; cool/filter; remove wax from lube stocks; vacuum tower lube oils; dewaxed lube basestock.
Solvent extraction; solvent extraction; absorption/precipitation; separate unsaturated oils; gas oil, reformate, distillate; high-octane gasoline.
Sweetening; treatment; catalytic; remove H2S, convert mercaptan; untreated distillate/gasoline; high-quality distillate/gasoline.

Source: OSHA 2005
competitive benefits to petroleum companies and enhance corporate performance (Sharma 2001). This section defines the attributes of a green supply chain for an oil refinery, using the framework to assess the greenness efforts of an oil refinery through its supply chain. The section then develops the concept of the Olympic green supply chain. The primary features of this model are the five zeros of waste or emissions (corresponding to the five circles in the Olympic flag):

• Zero emissions (air, soil, water, solid waste, hazardous waste)
• Zero waste of resources (energy, materials, human)
• Zero waste in activities (administration, production)
• Zero use of toxics (processes and products)
• Zero waste in product life-cycle (transportation, use, end-of-life)

The Zero-waste Organization (Zero Waste 2005) defends the zero-waste approach by using the visionary goal of zero waste to represent the endpoint of "closing the loop," so that all materials are returned at the end of their life as industrial nutrients, thereby avoiding any degradation of nature. A 100% efficiency of use of all resources (energy, material, and human) is promoted by Zero Waste, working toward the goals of reducing costs, easing demands on scarce resources, and providing greater availability for all. Zero Waste's principles, applied to products, reduce negative impacts during manufacture, transportation, and use, and at the end of use. Taking petroleum as a unit of analysis, Figure 11.4 illustrates the concept of the green supply chain. Such an approach, which is always the norm in nature, is only beginning to be proposed in the petroleum sector (Bjorndalen et al. 2005) and even in the renewable energy sector (Khan et al. 2005b). These refinery emissions are contaminated with toxic catalysts and other additives even when those are present in trace amounts. If the toxic agents were not added throughout the supply chain (e.g., well head, separators, pipeline, refining), these emissions would not be harmful to the environment (similar to organic methane), and only spurious assumptions that organic chemicals are the same as non-organic chemicals would show conclusions otherwise. Five primary activities in refinery processes are (1) materials transfer and storage, (2) separating hydrocarbons (e.g., distillation),
(3) creating hydrocarbons (e.g., cracking/coking, alkylation, and reforming), (4) blending hydrocarbons and removing impurities (e.g., sulfur removal), and (5) cooling. Tables 11.3, 11.4, and 11.5 (compiled from Environmental Defense 2005) enumerate the primary emissions at each activity level. There are seven primary air-release emissions and 23 primary hazardous/solid wastes.

Table 11.3 Emissions from refinery.
Table 11.4 Primary wastes from oil refinery.

Cracking/coking. Air releases: carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, VOCs. Hazardous/solid wastes and wastewater: ammonia, anthracene, benzene, 1,3-butadiene, copper, cumene, cyclohexane, ethylbenzene, ethylene, methanol, naphthalene, nickel, phenol, PAHs, propylene, toluene, 1,2,4-trimethylbenzene, vanadium (fumes and dust), xylene.

Alkylation and reforming. Air releases: carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, VOCs. Hazardous/solid wastes and wastewater: ammonia, benzene, phenol, propylene, sulfuric acid aerosols or hydrofluoric acid, toluene, xylene.

Sulfur removal. Air releases: carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, VOCs. Hazardous/solid wastes and wastewater: ammonia, diethanolamine, phenol, metals.
The primary hazardous/solid wastes include the following: 1,2,4-trimethylbenzene, 1,3-butadiene, ammonia, anthracene, benzene, copper, cumene, cyclohexane, diethanolamine, ethylbenzene, ethylene, hydrofluoric acid, mercury, metals, methanol, naphthalene, nickel, PAHs, phenol, propylene, sulfuric acid aerosols, toluene, vanadium (fumes and dust), and xylene. The most important resource in the refinery process is energy. Unlike in the manufacturing industry, labor costs do not constitute a high percentage of expenses in a refinery. In this continuous process, there is no waste of materials in general. The waste of human resources can be measured by an accident ratio (number of work accidents/number of employees) and absenteeism due to illness (number of days lost to illness/(number of work days x number of employees)). In the Olympic refinery, the accident and absenteeism ratios would be near zero. The refining process uses a great deal of energy. Typically, approximately 2% of the energy contained in crude oil is used for distillation. The efficiency of the heating process can be increased drastically by combining direct solar heating (with non-engineered thermal fluid) with direct fossil fuel burning.
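To put these percentages in perspective, here is a minimal sketch assuming a typical energy content of about 6.1 GJ per barrel of crude (a standard approximation, not a figure from this chapter):

```python
# Energy spent heating for distillation per barrel of crude, at the
# quoted 2% level versus the 0.5% zero-waste target. The 6.1 GJ/barrel
# energy content is a standard approximation, assumed for illustration.
GJ_PER_BARREL = 6.1

for share in (0.02, 0.005):
    mj = share * GJ_PER_BARREL * 1000
    print(f"{share:.1%} of a barrel -> about {mj:.0f} MJ")
# 2.0% -> about 122 MJ; 0.5% -> about 30 MJ per barrel
```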
Table 11.5 Pollution prevention options for different activities in material transfer and storages.

Cracking/coking: Using catalysts with fewer toxic materials reduces the pollution from "spent" catalysts and catalyst manufacturing.
Alkylation and reforming: Using catalysts with fewer toxic materials reduces the pollution from "spent" catalysts and catalyst manufacturing.
Sulfur removal: Use "cleaner" crude oil containing less sulfur and fewer metals. Using oxygen rather than air in the Claus plant reduces the amount of hydrogen sulfide and nitrogen compounds produced.
Cooling: Ozone or bleach should replace chlorine to control biological growth in cooling systems. Switching from water cooling to air cooling could reduce the use of cooling water by 85%.
The advantage of this process is a gain in global efficiency as well as an environmental benefit. It is estimated that the total energy requirement for petroleum refining can be reduced to less than 0.5% of the energy contained in crude oil by designing the heating systems with a zero-waste scheme, as outlined earlier in this chapter. A number of procedures are used to turn heavier components of crude oil into lighter and more useful hydrocarbons. These processes use catalysts, materials that help chemical reactions without being used up themselves. Table 11.6 shows different toxic catalysts and base metals. Refinery catalysts are generally toxic and must be replaced or regenerated after repeated use, turning used catalysts into a waste source. The refining process uses either sulfuric acid or hydrofluoric acid as a catalyst to transform propylene, butylenes, and/or isobutane into alkylation products, or alkylate. Vast quantities of sulfuric acid are required for the process. Hydrofluoric acid (HF), also known as hydrogen fluoride, is extremely toxic and can be lethal. Using catalysts with fewer toxic materials significantly reduces pollution.
Table 11.6 Catalysts and materials used to produce catalysts, base metals, and compounds.

Name of catalysts: Activated alumina, amine, ammonia, anhydrous hydrofluoric acid, anti-foam agents (e.g., oleyl alcohol or Vanol), bauxite, calcium chloride, catalytic cracking catalyst, catalytic reforming catalyst, caustic soda, cobalt molybdenum, concentrated sulphuric acid, demulsifiers (e.g., Vishem 1688), catalytic dewaxing compounds (e.g., P4 Red, wax solvents), diethylene glycol, glycol corrosion inhibitors, hydrogen gas, litharge, Na MBT (sodium 2-mercaptobenzothiazole, a glycol corrosion inhibitor), Na Cap (a glycol corrosion inhibitor), Nalcolyte 8103, natural catalysts (compounds of aluminum, silicon, nickel, manganese, iron, and other metals), oleyl alcohol (anti-foam agent), triethylene glycol, and wax solvents (dewaxing compounds).

Name of base metals: Aluminum (Al), aluminum alkyls, bismuth (Bi), chromium (Cr), cobalt (Co), copper (Cu), hafnium (Hf), iron (Fe), lithium (Li), magnesium (Mg), manganese (Mn), mercury (Hg), molybdenum (Mo), nickel (Ni), Raney nickel, phosphorus (P), potassium (K), rhenium (Re), tin (Sn), titanium (Ti), tungsten (W), vanadium (V), zinc (Zn), zirconium (Zr), and others.
Source: CTB 2006

Eventually, organic acids and enzymes, instead of catalysts, must be considered. Thermal degradation and slow reaction rates are often considered to be the biggest problems of using organic acids and catalysts. However, recent discoveries have shown that this perception is not justified. There are numerous organic products and enzymes that can withstand high temperatures, and many of them induce fast reactions. More importantly, recent developments in biodiesel indicate that the process can be modified in order to eliminate the use of toxic substances (see Table 11.7).
Table 11.7 Chemicals used in refining and their purposes.

Ammonia - control corrosion by HCl
Tetraethyl lead (TEL) and tetramethyl lead (TML) - additives to increase the octane rating
Ethyl tertiary butyl ether (ETBE), methyl tertiary butyl ether (MTBE), tertiary amyl methyl ether (TAME) - increase gasoline octane rating and reduce carbon monoxide
Sulfuric acid and hydrofluoric acid - alkylation processes and some treatment processes
Ethylene glycol - dewatering
Toluene, methyl ethyl ketone (MEK), methyl isobutyl ketone, methylene chloride, ethylene dichloride, sulfur dioxide - dewaxing
Zeolite, aluminum hydrosilicate, treated bentonite clay, fuller's earth, bauxite, and silica-alumina - catalytic cracking
Nickel - catalytic cracking
Granular phosphoric acid - polymerization
Aluminum chloride, hydrogen chloride - isomerization
Imidazolines and surfactants (amino ethyl imidazoline, hydroxy-ethyl imidazoline, imidazoline/amides, amine/amide/DTA) - oil-soluble corrosion inhibitors
Complex amines, benzyl pyridine - water-soluble corrosion inhibitors
Diamine, amine, morpholine - neutralizers
Imidazolines, sulfonates - emulsifiers
Alkylphenol formaldehyde, polypropylene glycol - desalting and emulsifiers
Cobalt molybdate, platinum, chromium alumina, AlCl3-HCl, copper pyrophosphate
The same principle applies to other materials, e.g., corrosion inhibitors, bactericides, etc. Often, toxic chemicals lead to very high corrosion vulnerability, and even more toxic corrosion inhibitors are then required. The whole system spirals into a very unstable process, which can be eliminated with the new approach (Al-Darbi et al. 2002).
11.3 Zero Waste in Product Life Cycle (Transportation, Use, and End-of-Life)

The complex array of pipes, valves, pumps, compressors, and storage tanks at refineries is a potential source of leaks into air, land, and water. If they are not contained, liquids can leak from transfer and storage equipment and contaminate soil, surface water, and ground water. This explains why, according to industry data, approximately 85% of monitored refineries have confirmed groundwater contamination as a result of leaks and transfer spills (EDF 2005). To prevent the risks associated with the transportation of sulfuric acid and the on-site accidents associated with the use of hydrofluoric acid, refineries can use solid acid catalysts, which have recently proven effective for refinery alkylation. However, these are also more toxic than their liquid counterparts. As pointed out earlier, the use of organic acids or organically prepared acids would render the process inherently sustainable. A sustainable petroleum process should have storage tanks and pipes aboveground to prevent groundwater contamination. There is room for improving the efficiency of these tanks with natural additives. Quite often, the addition of synthetic materials makes an otherwise sustainable process unsustainable. Advances in using natural materials for improving material quality have been made by Saeed et al. (2003). Sulfur is the most dangerous contaminant in a refinery's output products. When fuel oils are combusted, the sulfur in them is emitted into the air as sulfur dioxide (SO2) and sulfate particles (SO4). Emissions of SO2, along with emissions of nitrogen oxides, are a primary cause of acidic deposition (i.e., acid rain), which has a significant effect on the environment, particularly in central and eastern Canada (2002). Fine particulate matter (PM2.5), of which sulfate particles are a significant fraction (30-50%), may affect human health adversely.
Table 11.8 Level regulation for petroleum products.

Heavy Fuel Oil (HFO) - sulfur: 1% by weight*
Motor Gasoline - sulfur: 30 mg/kg; benzene: 1% by volume
Light Fuel Oil (LFO) - sulfur: 0.1% by weight*
Diesel Fuel - sulfur: 15 mg/kg
Aviation Gasoline - N/A
Lubricants - N/A

Notes: starting January 1, 2002 in Europe and January 2005 in Canada; starting January 1, 2008 in Europe and June 1, 2006 in Canada; in effect since July 1999 in Canada. N/A: not yet available. *There is no Canadian standard for this product.
In the absence of toxic additives, the produced products will perform equally well but will not release contaminated natural products to the environment.
11.4 No-Flaring Technique
Flaring is a commonly used technique in oil refineries to burn off low-quality gas. With increasing awareness of its environmental impact, gas flaring is likely to be banned in the near future. This will require significant changes in the current practices of oil and gas production and processing. The low-quality gas that is flared contains many impurities, and during the flaring process toxic particles are released into the atmosphere. Acid rain, caused by sulfur oxides in the atmosphere, is one of the main environmental hazards resulting from this process. Moreover, flaring natural gas accounts for approximately a quarter of the petroleum industry's emissions (UKOO 2002). However, the alternative solution that is being offered is not sustainable. Consider the use of synthetic membranes or synthetic solvents to remove impurities such as CO2, water, SO2, etc. These impurities are removed and replaced with traces of synthetic materials, either from the synthetic membrane or from the synthetic solvent. Recently, Bjorndalen et al. (2005) developed a novel approach to avoid flaring from petroleum operations. Petroleum products contain materials in various phases.
Figure 11.5 Breakdown of the no-flaring method (Bjorndalen et al. 2005). The figure maps each separation component to candidate methods and value additions: solid-liquid separation (EVTN voraxial system, surfactants from waste, biodegradation), with value added through the use of cleaned fines in construction materials and mineral extraction from cleaned fines; liquid-liquid separation (paper material, human hair), with purification of formation water using wastes (fish scale, human hair, ash); and gas-gas separation (hybrid membrane + biological solvent, limestone), with re-injection of gas for enhanced oil recovery processes.
Solids in the form of fines, liquid hydrocarbon, carbon dioxide, and hydrogen sulfide are among the many substances found in the products. According to Bjorndalen et al. (2005), no-flare oil production can be established by separating these components through the following steps: effective separation of solids from liquids, effective separation of liquid from liquid, and effective separation of gas from gas. Many separation techniques have been proposed in the past (Basu et al. 2004; Akhtar 2002). However, few are economically attractive and environmentally appealing. This option requires an innovative approach, which is the central theme of this report. Once the components for no-flare have been fulfilled, value-added end products can be developed. For example, the solids can be utilized for minerals, the brine can be purified, and the low-quality gas can be re-injected into the reservoir for enhanced oil recovery. Figure 11.5 outlines the components and value-added end products that will be discussed.
11.4.1 Separation of Solid-Liquid
Even though numerous techniques have been proposed in the past, little improvement has been made in the energy efficiency of
solid-liquid separation. Most techniques are expensive, especially if a small unit is operated. Recently, a patent has been issued in the U.S. for a new technique that removes solids from oil, called an EVTN system. This system is based on the creation of a strong vortex in the flow to separate sand from oil. The principle is that by allowing the flow to rotate rapidly in a vortex, centrifugal force can be generated. This force makes use of the density differences between the substances. The conventional filtration technique requires a large filter surface area (in the case of high flow rate) and the replacement of filter material and back flush. The voraxial technique eliminates this problem. Moreover, it is capable of maintaining high gravity or "g" force as well as a high flow rate, which will be very effective in oil-fines separation (EVTN 2003). This product shows great potential for the separation of liquid and fines. The use of surfactants from waste can be another possible technique that separates solids from liquids. The application of the combination of waste materials with water to separate fines from oil is attractive due to the relatively low cost and environmentally sound nature. Waste products such as cattle manure, slaughterhouse waste, okra, orange peels, pine cones, wood ash, paper mill waste (lignosulfate), and waste from the forest industry (ferrous chloride) are all viable options for the separation of fines and oil. Cattle manure and slaughterhouse wastes are plentiful in Alberta where flaring is very common. Researchers from UAE University have determined that okra is known to act like soap (Chaalal 2003). Okra extract can be created through pulverization methods. Orange peel extract should also be examined because it is a good source of acid. A study conducted at Dalhousie University determined that wood ash can separate arsenic from water and, therefore, may be an excellent oil/fines separator. Pinecones, wood ash, and other plant materials may also be viable. Industrial wastes such as lignosulfate and ferrous chloride, which have been very beneficial in the industrial areas of cellulose production and sewage treatment, respectively, can be potential separators. Finally, a biodegradation method for stripping solid waste from oily contaminants is currently under study as a collaborative effort between Dalhousie and UAE University. Thermophilic bacteria are found to be particularly suitable for removing low-concentration crude oils from the solid surface (Tango and Islam 2002). Also, it has been shown that bioremediation of flare pits, which contain many of the same substances that need to be removed for an appealing no-flare design, has been successful (Amatya et al. 2002).
Once the fines are free from liquids, they can be useful for other applications. For example, the fines can be utilized as a substitute for components in construction materials. Drilling wastes have been found to be beneficial in highway construction (Wasiuddin 2002), and an extension of this work can lead to the usage of fines. Studies have shown that the tailings from oil sands are high in titanium content. With this in mind, an evaluation of the valuable minerals in fines will be conducted. To extract the minerals, chemical treatment is usually used to modify the surface of the minerals. Treatment with a solution derived from a natural material has great potential. Microwave heating has the potential to assist this process (Haque 1999; Hua et al. 2002; Henda et al. 2006). It enhances the selective floatability of different particles. Temperature can be a major factor in the reaction kinetics of a biological solvent with mineral surfaces. Various metals respond in different manners under microwave conditions, which can make significant changes in floatability. The recovery process would be completed by transferring the microwave-treated fines to a flotation chamber. However, microwave heating might render the entire process unsustainable because a microwave itself is neither efficient (global efficiency-wise) nor natural (the source being unnatural). Similarly, effective separation processes can be induced by using naturally produced acids and hydrogen peroxide. More research should be conducted to this effect.
11.4.2 Separation of Liquid-Liquid
Once the oil has been separated from the fines via water and waste material, it must be separated from the solution and the formation water. Oil-water separation is one of the oldest practices in the oil industry because it is almost impossible to find an oil reservoir that is absolutely free of connate water. In fact, the common belief is that all reservoirs were previously saturated with water, and after oil migration only a part of the water was expelled from the reservoir and replaced by oil. There are two sources of the water that flows to the wellbore. The first source is connate water that usually exists with oil in the reservoir, saturating the formation below the oil-water contact (OWC), or existing in the form of an emulsion even above the OWC. The other source is associated with water flooding, which is mainly considered in secondary oil recovery. In almost all oil reservoirs, connate water coexists with oil, filling a percentage of the pore spaces. In some reservoirs, where water
pressure is the main driving mechanism, water saturation may even exceed oil saturation as production continues. Therefore, when the wellbore is operational, oil production mixed with water is inevitable. As production continues, more water invades the oil zone, and the water-cut in the production stream consequently increases. Before an oil well is taken to production, the oil zone is perforated above the OWC to allow oil to flow into the wellbore. The fact that part of the formation above the OWC is still saturated with water consequently causes water production. The problem becomes more severe with time, as continuous drainage of the reservoir causes the OWC to move upward, resulting in excessive water production. Because of this typical phenomenon in oil reservoirs, water production is unavoidable. Moreover, water flooding is an obvious practice for enhancing oil recovery after oil production declines. The problems of high water production and early breakthroughs are common obstacles of the water flooding practice and cause high production costs. Water production is not only associated with high cost considerations (installation and operation); it is also a major contributing factor to the corrosion of production facilities and to reservoir energy loss. Moreover, the contaminated water can be an environmental contamination source if it is not properly disposed of. Water production can be tolerated to a certain extent, depending on the economic health of a given reservoir. The traditional practice of separating oil from water is applied after the simultaneous production of both. Single-stage or multi-stage separators are installed where oil and water can be separated by gravity segregation. Down-hole oil-water separation has been investigated and considered since the early days of producing oil from petroleum reservoirs. Hydrocyclone separation has been used to separate oil from water at the surface (Bowers et al. 2000), and its small size makes its application down-hole even more attractive. Hydrocyclone separation achieves its best efficiency at 25-50% water content in the outlet stream, making the water stream as clean as possible under ideal conditions, with a few exceptions. This limits the technique to circumstances where a high water-cut is expected and the costs involved are justified. Stuebinger et al. (2000) compared hydrocyclone separation to gas-oil-water segregation and concluded that all down-hole separations are still premature, suggesting more research and investigation for optimizing and enhancing these technologies.
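The water-cut figures driving these separator decisions are simply the water fraction of the total produced liquid. A minimal sketch (the 25-50% window comes from the hydrocyclone discussion above; the helper names are ours, introduced for illustration):

```python
def water_cut(q_water, q_oil):
    """Fraction of the produced liquid stream that is water."""
    return q_water / (q_water + q_oil)

def in_hydrocyclone_window(q_water, q_oil, lo=0.25, hi=0.50):
    """Flag streams inside the 25-50% water-cut range noted above."""
    return lo <= water_cut(q_water, q_oil) <= hi

print(water_cut(300, 700))               # 0.3, i.e., 30% water cut
print(in_hydrocyclone_window(300, 700))  # True
```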
Recently, the use of membrane and ceramic materials has been proposed (Fernandez et al. 2001; Chen et al. 1991). While some of them show promise, these technologies do not fall under the category of economic appeal. The future of liquid-liquid separation lies in the development of inexpensive techniques, preferably down-hole separation technology. The first stage is material screening, searching for a potential material that can pass oil but not water. The key is to consider the fundamental differences between oil and water in terms of physical properties, molecular size, structure, and composition. These materials must, in essence, be able to adsorb oil and, at the same time, prevent water from passing through. Having discovered at least one material with this feature, the next stage should be studying the mechanism of separation. Understanding the separation mechanism would assist in identifying more suitable materials and better selection criteria. The third stage is material improvement. Testing different down-hole conditions using the selected material for separation, and then the possibility of improving the material by mixing, coating, or coupling with others, should be investigated. The outcome of this stage would be a membrane sheet material that gives the best results and a suitable technique that optimizes the procedure. Investigating the effect of time on the separation process, fouling problems, and their remedies should be carried out in this stage. Eventually, a new completion method should be designed. The material with the ability to separate oil and water can also be used in aboveground separation units. The main advantages of this technique would be the reduced size of the separation units and the increase in their efficiency. This is very critical, especially on offshore rigs, where minimizing the size of different units is of the essence and any saving in space would substantially reduce the production cost. Preliminary studies and initial lab-scale experiments show that using a special type of long-fiber paper for the membrane could be a good start (Khan and Islam 2006). These preliminary experiments were performed during the material selection stage, and quite encouraging results have been encountered. A lab-scale experimental setup consists of a prototype pressurized reservoir that contains an oil-water emulsion and tubing on which the perforated section is wrapped with the membrane. The oil-water emulsion was produced, and it was found that the selected material
recovers 98-99% of the oil without producing any water. These findings are subject to the lab conditions, in which ΔP is kept at about 20 psi. Continuous shaking was employed to keep the emulsion intact during the whole process of separation. Emulsions made up of varying ratios of oil and water were utilized in different sets of experiments, and almost all the oil-water ratios gave the same separation efficiency with the material used. The paper material in question is made of long fibrous wood pulp treated with a waterproofing material as a filtering medium. This treatment prevents water from flowing through and at the same time allows oil to pass easily. The waterproofing agent for the paper used in these experiments is "rosin soap" (rosin solution treated with caustic soda). This soap is then treated with alum to keep the pH of the solution within a range of 4-5. Cellulose present in the paper reacts reversibly with the rosin soap in the presence of alum, forming a chemical coating around the fibrous structure that prevents water from seeping through. This coating allows long-chain oil molecules to pass, making the paper a good conduit for an oil stream. Because of the reversible reaction, it was also observed that the performance of the filter medium increases with the increased acidity of the emulsion, and vice versa. It was also observed that the filter medium is durable enough to be used continuously for a long time, keeping the costs of replacement and production down. Different experiments are being done to further strengthen these findings for longer time periods and higher production pressures. It must be noted that the material used as a filtering medium is environmentally friendly and can easily be modified to suit down-hole conditions. Based on the inherent properties of the material used, other materials can also be selected to give equivalently good results, keeping in mind the effects of the surrounding temperature, pressure, and other parameters present in down-hole conditions. Human hair has great potential in removing oil from the solution. Hair is a natural barrier against water, and it easily absorbs oil. This feature was highlighted during a U.S. DoE funded project in 1996 (reported by CNN in February 1997). A doughnut-shaped fabric container filled with human hair was used to separate oil from a low-concentration oil-in-water emulsion. The Dalhousie petroleum research team has adapted this technique with remarkable success in separating both oil and water as well as heavy metals from aqueous streams.
Purifying formation water after oil separation will ensure an all-around clean system. Wastes such as fish scales have been known to absorb lead, strontium, zinc, chromium, and cobalt (Mustafiz 2002; Mustafiz et al. 2002), as well as arsenic (Rahaman 2003). Wood ash can also adsorb arsenic (Rahman 2002). Both of these waste materials are great prospects that can be implemented in the no-flare process.
11.4.3 Separation of Gas-Gas

The separation of gas is by far the most important phase of the no-flare design. Current technologies indicate that separation may not even be needed, and the waste gas as a whole can be utilized as a valuable energy income stream. Capstone Turbine Corporation has developed a micro-turbine that can generate up to 30 kW of power while consuming 9,000 ft³/day of gas (Capstone 2003). Micro-turbines may be especially useful for offshore applications where space is limited. Another possible use of the waste gas is to re-inject it into the reservoir for pressure maintenance during enhanced oil recovery processes. The low-quality gas can be re-pressurized via a compressor and injected into the oil reservoir. This system has been tested at two oil fields in Abu Dhabi to some praise (Cosmo Oil 2003). Also, simple incineration, instead of flaring, to dispose of solution gas has been proposed (Motyka and Mascarenhas 2002); this process only results in a reduction of emissions relative to flaring. The removal of impurities in solution gas via separation can be achieved both down-hole and at the surface, thus eliminating the need for a flare (Bjorndalen et al. 2005). Many studies have been conducted on the separation of gases using membranes. In general, a membrane can be defined as a semi-permeable barrier that allows the passage of select components. An effective membrane system has high permeability to promote large fluxes. It will also have a high degree of selectivity to ensure that a mass transfer of only the correct component occurs. For all practical purposes, the pore shape, pore size distribution, external void, and surface area will influence the separation efficiencies (Abdel-Ghani and Davies 1983). Al Marzouqi (1999) determined the pore size distribution of membranes, comparing two models and validating them with experimental data. Low concentrations of H2S in natural gas can be handled well by regenerable adsorbents, such as activated carbon, activated alumina, silica gel, and synthetic zeolite. Non-regenerable adsorbents, e.g., zinc and iron oxides, have been used for natural gas
sweetening. Many membrane systems have been developed, including polymer, ceramic, hybrid, liquid, and synthetic ones. Polymer membranes have gained popularity in isolating carbon dioxide from other gases (Gramain and Sanchez 2002). These membranes are elastomers formed from cross-linked copolymers of high molecular weights. They are prepared as thin films by extrusion or casting, and they demonstrate unique permeability properties for carbon dioxide and high selectivity towards H2, O2, N2, and CH4. Ceramic membranes are used to separate hydrogen from gasified coal (Fain et al. 2001). With ceramic materials, very high separation factors have been achieved based on the ratios of individual gas permeances. Hybrid membranes combine thermodynamically based partitioning and kinetically based mobility discrimination in an integrated separation unit. Unfortunately, the permeability of common membranes is inversely proportional to their selectivity (Kulkarni et al. 1983). Thus, the development of liquid membranes has led to systems that have both high permeability and high selectivity. Liquid membranes operate by immobilizing a liquid solvent in a microporous filter or between polymer layers. A synthetic membrane is a thin barrier between two phases through which differential transport can occur under a variety of driving forces, including pressure, concentration, and electrical potential across the membrane. A pressure difference across the membrane can facilitate reverse osmosis, ultra-filtration, micro-filtration, gas separation, and pervaporation. A temperature difference across the membrane can facilitate distillation, and a concentration difference can be used for dialysis and extraction. The feasibility of using a novel carbon multi-wall membrane for separating carbon dioxide from the flue gas effluent of a power generation plant is being studied (Andrew et al. 2001). This membrane consists of nano-sized tubes with pore sizes that can be controlled, which would enhance the kinetic and diffusion rates and in turn yield high fluxes. The mass transport in nano-materials is not clearly understood (Wagner et al. 2002). Although previous transient models of separation and adsorption suggest that high selectivity is the origin of selective transport, recent analyses indicate that specific penetrant-matrix interactions actually dominate the effects at transition stages. The primary difficulties in modeling this transport are that the penetrants are in continuous contact with the membrane matrix material and that the matrix has a common topology with multiple length scales.
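The permeability/selectivity trade-off can be stated numerically: in the ideal case, the flux of a gas scales with its permeance times the pressure difference, and selectivity is the ratio of the two pure-gas permeances. A sketch with illustrative values (not data from the cited studies):

```python
# Ideal membrane gas separation: flux = permeance * pressure drop,
# ideal selectivity = ratio of pure-gas permeances. Values below are
# illustrative only, not measurements from the cited studies.
def flux(permeance, delta_p):
    return permeance * delta_p

def ideal_selectivity(permeance_a, permeance_b):
    return permeance_a / permeance_b

p_co2, p_ch4 = 100.0, 4.0  # permeances in arbitrary units
print(flux(p_co2, 10.0))                # CO2 flux at dP = 10 (a.u.)
print(ideal_selectivity(p_co2, p_ch4))  # CO2/CH4 selectivity = 25.0
```

The trade-off noted by Kulkarni et al. (1983) means that raising one of these two numbers in a common polymer membrane typically lowers the other.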
In order to determine the effectiveness of a membrane for separating gases, it is important to have an accurate estimate of the pore size distribution (Al-Marzouqi 1999). This is a key parameter in determining the separation factor of gases through nano-porous membranes. Membrane systems are highly applicable for separating gases from a mixture, and continuing enhancement in the technological development of membrane systems makes them a natural choice for the future. Proven techniques include zeolite (Izumi et al. 2002; Romanos et al. 2001; Robertson 2001; Jeong et al. 2002) combined with pressure swing adsorption (PSA) techniques. PSA works on the principle that gases tend to be attracted to solids and adsorb under pressure. Zeolite is expensive, and therefore fly ash along with caustic soda has been used to develop a less expensive zeolite (Indian Energy Sector 2002). Waste materials can also be an attractive separation medium due to their relatively low cost and environmentally sound nature. Since caustic soda is a chemical, other materials, such as okra, can be good alternatives. Rahaman et al. (2003) have shown that charcoal and wood ash are comparable to zeolite for removing arsenic from wastewater. Many of the waste materials discussed in the solid/liquid separation section also have potential as effective separation materials. Carbon fibers (Fuertes 2001; Park and Lee 2003; Gu et al. 2002) as well as palladium fibers (Lin and Rei 2001; Chang et al. 2002; Karnik et al. 2002) have been studied extensively for separation. A low-cost alternative to this technology is human hair, since it is a natural hollow fiber. Wasiuddin et al. (2002) outlined some of the potential uses of human hair, confirming that it is an effective medium for removing arsenic from water. In addition to improving new-age techniques (e.g., polymeric membranes), research will be performed to test new materials, hybrid systems (e.g., solvent/membrane combinations), and other techniques that are more suitable for the Atlantic region. For gas processing research, the focus will be on improving existing liquefaction techniques, addressing corrosion problems (including biocorrosion), and the treatment of regenerated solvents.
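The pressure dependence that PSA exploits can be sketched with a Langmuir isotherm, the standard single-site adsorption model (parameter values are illustrative, not measured zeolite data):

```python
# Langmuir isotherm: adsorbed loading rises with pressure and is
# released when pressure is swung down -- the working principle of
# PSA. Parameters are illustrative, not measured zeolite data.
def langmuir_loading(pressure_bar, q_max=4.0, b=0.5):
    """Equilibrium loading (mol/kg) at a given pressure (bar)."""
    return q_max * b * pressure_bar / (1.0 + b * pressure_bar)

p_adsorb, p_regen = 6.0, 1.0  # adsorption / regeneration pressures
working_capacity = langmuir_loading(p_adsorb) - langmuir_loading(p_regen)
print(round(working_capacity, 2))  # ~1.67 mol/kg released per swing
```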
11.4.4 Overall Plan

Figure 11.6 compares the flare design to the overall plan for the no-flare design. Point 1 on the figure represents solid-liquid separation, Point 2 represents liquid-liquid separation, and Point 3 represents liquid-gas separation.
Figure 11.6 Pictorial presentation of flaring techniques and the proposed no-flaring method (Bjorndalen et al. 2005).
Since current techniques for separating liquids from gas are relatively adequate, liquid-gas separation was not considered in this report. Point 4 represents the new portion of this design: the addition of the gas-gas separator. For a no-flare system to be effective, each of these points must perform efficiently.
12 Sustainable Refining and Gas Processing

12.1 Introduction
Petroleum fluids are considered the backbone of modern economic and industrial development. While crude oil and natural gas are inherently environment-friendly in their natural state, they are seldom used that way because the modern age has seen an influx of engineering designs and machine developments that use "ideal fluids." These "ideal fluids" are very different from natural fluids. Consequently, fluids that are available in a natural state are routinely refined, decomposed, or separated from their natural state, making it mandatory for the fluids to undergo refining and gas processing. Because these refining and gas processing techniques use synthetic chemicals that have undergone unsustainable processes, the processed fluids are rendered unsustainable. With this mode, all emissions contain products from the natural petroleum fluids that are contaminated with synthetic chemicals, making the total emission unacceptable to the ecosystem. In the end, this contamination process plays the most significant role in creating environmental consequences. This chapter shows how conventional refining and gas processing techniques lead to inherently unsustainable products.
12.1.1 Refining
The refining of crude oil and the processing of natural gas involve the application of large amounts of synthetic chemicals and catalysts, such as lead, chromium, glycol, and amines. These synthetic chemicals contaminate the end products and are burnt along with the fuels, producing various toxic by-products. The emission of air pollutants that did not previously exist in nature can cause environmental effects that irreversibly damage the global ecosystem. It is found that refined oils degrade more slowly and last in the natural environment for a longer duration than crude oil. In this chapter, a pathway analysis of crude oil and fossil fuel refining and their impacts on the natural environment is presented. It makes clear that a significant improvement in current engineering practices is needed in order to reduce the emissions from refineries. Current engineering practices should follow natural pathways in order to reduce the emission of fluids that are inherently damaging to the environment. Only then can environmental problems, including global warming, be reversed. Modern transportation systems, ranging from cars to aircraft, are designed based on the use of oil, gas, and other fossil fuels. Because the use of processed fossil fuels creates several environmental and health problems, the environmental consequences are usually attributed to petroleum production. The total energy consumption in 2004 was equivalent to approximately 200 million barrels of oil per day, which is about 14.5 terawatts, over 85% of which comes from fossil fuels (Service 2005). However, not a drop of this is crude oil, because the machinery used to convert crude oil into "usable" energy itself uses only "refined" or processed fluids. Globally, about 30 billion tons of CO2 is produced annually from fossil fuels, which include oil, coal, and natural gas (EIA 2004). Because the contribution of toxic chemicals during the refining process is not accounted for (it is considered negligible in all engineering calculations), the produced industrial CO2 is considered solely responsible for the current global warming and climate change problems (Chilingar and Khilyuk 2007). From this point onward, all calculations indicate that burning fossil fuels is not a sustainable option. Subscribers to the alternate theory, however, make the argument that the total emission of CO2 from petroleum activities is negligible compared to the total emission of CO2 from the overall ecosystem. For instance, Chilingar and Khilyuk (2005) argue
that the emission of greenhouse gases by burning fossil fuels is not responsible for global warming and, hence, is not unsustainable. In their analysis, the amount of greenhouse gases generated through human activities is scientifically insignificant compared to the vast amount of greenhouse gases generated through natural activities. This scientific investigation is infamously called the "flat earth theory" by the likes of environmental giants such as Al Gore (60 Minutes interview, March 23, 2008). It is true that if only the composition of CO2 is considered, the CO2 emissions from petroleum activities are negligible. However, this cannot form the basis for stating that global warming is not caused by petroleum activities. Similar to the phenomenal cognition that requires the first premise to be true before arriving at a conclusion, one must realize that CO2 contaminated with trace elements from toxic catalysts and other chemicals (during refining, separation, transportation, and processing) plays a very different role than CO2 that is directly emitted from organic matter. Neglecting this fact would be equivalent to stating that because the Freon concentration is negligible, it cannot cause a hole in the ozone layer. Neither side of the global warming debate considers this factor. Refining crude oil and processing natural gas use large amounts of processed/synthetic chemicals and catalysts, including heavy metals. These heavy metals contaminate the end products and are burnt along with the fuels, producing various toxic by-products. The pathways of these toxic chemicals and catalysts show that they largely affect the environment and public health. The use of toxic catalysts creates many environmental effects that cause irreversible damage to the global ecosystem. The problem with synthetic additives emerges from the fact that they are not compatible with natural systems and are not assimilated with biomasses in a way that would preserve the natural order (Khan et al. 2008). This indicates that synthetic products do not have any place in the sustainability cycle. The use of natural catalysts and chemicals should be considered the backbone of the future development of sustainable practices. Crude oil is a truly nontoxic, natural, and biodegradable product, but the way we refine it is responsible for all the problems it creates on earth. At present, approximately 15% additives are added to every barrel of crude oil (CEC 2004). These additives, in current practice, are all synthetic and/or engineered materials that are highly toxic to the environment. With this "volume gain," the following distribution is achieved (Table 12.1).
Table 12.1 Petroleum products yielded from one barrel of crude oil in California.
  Finished Motor Gasoline: 51.4%
  Distillate Fuel Oil: 15.3%
  Jet Fuel: 12.3%
  Still Gas: 5.4%
  Marketable Coke: 5.0%
  Residual Fuel Oil: 3.3%
  Liquefied Refinery Gas: 2.8%
  Asphalt and Road Oil: 1.7%
  Other Refined Products: 1.5%
  Lubricants: 0.9%
Source: California Energy Commission (CEC) 2004
Each of these products is subject to oxidation, either through combustion or through low-temperature oxidation, which is a continuous process. Moving toward the bottom of the table, the oxidation rate decreases but the heavy metal content increases, leaving each product comparably vulnerable to oxidation. The immediate consequence of this conversion through refining is that one barrel of naturally occurring crude oil (convertible non-biomass) is converted into 1.15 barrels of potentially non-convertible non-biomass that will continue to produce further volumes of toxic components as it oxidizes, whether through combustion or through slow oxidation (Chhetri et al. 2008). As an example, considering only the oxidation of the carbon component, 1 kg of carbon, which was convertible non-biomass, would turn into 3.667 kg of carbon dioxide (if completely burnt) that is not acceptable to the ecosystem, due to the presence of the non-natural additives. Of course, when crude oil is converted, each of its numerous components turns into non-convertible non-biomass. Many of these components are not accounted for or even known, let alone subject to any scientific estimation of their consequences. Hence, the sustainable option is either to use natural catalysts and chemicals during refining or to design a vehicle that runs directly on crude oil, exploiting its natural properties.
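For readers who wish to verify the 3.667 figure above, the following minimal sketch in Python (not part of the original analysis) reproduces the stoichiometry of complete combustion, C + O2 -> CO2, using the rounded molar masses of 12 for carbon and 44 for carbon dioxide:

    # Mass of CO2 produced per unit mass of carbon completely burnt.
    M_C = 12.0    # molar mass of carbon, g/mol (rounded)
    M_CO2 = 44.0  # molar mass of carbon dioxide, g/mol (rounded)

    def co2_from_carbon(mass_c_kg):
        """CO2 mass (kg) from complete combustion of mass_c_kg of carbon."""
        return mass_c_kg * M_CO2 / M_C

    print(co2_from_carbon(1.0))  # 3.667 kg of CO2 per kg of carbon

With exact molar masses (12.011 and 44.009 g/mol) the ratio is closer to 3.664; the difference is immaterial to the argument.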
12.1.2 Natural Gas Processing

Natural gas found in gas reservoirs is a complex mixture of hundreds of different compounds. A typical natural gas stream consists of methane, ethane, propane, butane, and other hydrocarbons, along with water vapor, oil and condensates, hydrogen sulfide, carbon dioxide, nitrogen, other gases, and solid particles (Table 12.2). Even though these compounds are characterized as contaminants, environmental concern is not the reason they are removed: water vapor, carbon dioxide, nitrogen, sulfur compounds, and the like from a natural source pose no threat to the environment. The main reasons for their removal from a gas stream are the following: 1) The heating value of the gas is decreased in the presence of these gases, so suppliers are required to remove any levels beyond a desired value, depending on the standard set by the regulatory board. 2) The presence of water vapor, sulfur compounds, etc., increases the possibility of corrosion in the pipeline; this is a maintenance concern tied to the type of material used. 3) The presence of water in liquid form, or of hydrate in solid form, can hinder compressors from functioning properly or even block the entire flow stream; this is a mechanical concern that affects smooth operations. 4) The presence of H2S poses immediate safety concerns in the case of accidental leaks.

Conventionally, various synthetic chemicals, such as glycols and amines, and synthetic membranes, among other adsorbents, are used to remove these impurities from natural gas streams. Even though these synthetic absorbents and adsorbents remove the impurities effectively, environmental degradation occurs throughout the life cycle of the production, transportation, and use of these chemicals. As outlined in Chapter 3, such materials are inherently toxic to the environment and can render the entire process unsustainable, even when only traces of them are left in the gas stream. In this chapter, various gas processing techniques and the chemicals used for gas processing are reviewed, and their impacts on the environment are discussed. Some natural and non-toxic substitutes for these chemicals are presented, along with a detailed review of the CO2 and hydrogen sulfide removal methods used during natural gas processing.
Table 12.2 Typical composition of natural gas.
  Methane (CH4): 70-90%
  Ethane (C2H6), propane (C3H8), and butane (C4H10): 0-20%
  Carbon dioxide (CO2): 0-8%
  Oxygen (O2): 0-0.2%
  Nitrogen (N2): 0-5%
  Hydrogen sulfide (H2S): 0-5%
  Rare gases (Ar, He, Ne, Xe): traces
  Water vapor (H2O(g)): 16 to 32 mg/m3 (typical)*
Source: Natural Gas Organization 2004; *Eldridge Products 2003
There are certain restrictions imposed on major transportation pipelines regarding the make-up of the natural gas allowed into the pipeline, called "pipeline quality" gas. Pipeline quality gas should not contain components such as hydrogen sulfide, carbon dioxide, nitrogen, water vapor, oxygen, particulates, and liquid water that could be detrimental to the pipeline and its operating equipment (EIA 2006). Although hydrocarbons such as ethane, propane, butane, and pentanes have to be removed from natural gas streams, these products find use in various other applications.

The presence of water in natural gas creates several problems. Liquid water and natural gas can form solid, ice-like hydrates that plug valves and fittings in pipelines (Mallinson 2004). Natural gas containing liquid water is corrosive, especially if it also contains carbon dioxide and hydrogen sulfide. It has also been argued that water vapor increases the volume of natural gas, decreasing its heating value and, in turn, reducing the capacity of the transportation and storage system (Mallinson 2004). Hence, the removal of free water, water vapor, and condensates is a very important task during gas processing. Carbon dioxide and hydrogen sulfide are generally considered impurities because these acid gases must be removed from the natural gas prior to its transportation (Chakma 1999). Carbon dioxide is a major greenhouse gas that contributes to global warming, and it is important to separate it from the natural gas stream for meaningful applications, such as enhanced oil recovery. Hydrogen sulfide, although not a greenhouse gas, is a source of acid deposition; it is a toxic and corrosive gas that is rapidly oxidized to sulfur dioxide in the atmosphere (Basu et al. 2004; Khan and Islam 2007). Oxides of nitrogen, found in traces in natural gas, can cause ozone layer depletion and global warming. Figure 12.1 shows the stages in the life cycle of natural gas at which emissions are released into the natural environment.
12.2 Pathways of Crude Oil Formation
Crude oil is a naturally occurring liquid, found in formations in the earth, consisting of a complex mixture of hydrocarbons of various lengths.
Figure 12.1 Emissions from the life cycle of natural gas, from exploration to end use (after Chhetri et al. 2008). The figure traces the chain from natural gas reservoirs through exploration, extraction, and production (emissions through venting and flaring); processing/purification with high process heat, glycol, amines, and other catalysts (emissions, wet seals, flaring); transportation (emissions from engines, pumps, and leaks); and storage and distribution (emissions from gates, pipes, meters, and leaks); to end uses in residential cooking, commercial, industrial, and transport applications, and electricity production (exhaust emissions, unburned gas, particulate emissions, etc.).
It contains mainly four groups of hydrocarbons: saturated hydrocarbons, which consist of straight chains of carbon atoms; aromatics, which consist of ring structures; asphaltenes, which are complex polycyclic hydrocarbons with complicated carbon rings; and other compounds, mostly containing nitrogen, sulfur, and oxygen. Crude oil, natural gas, and coal are formed from the remains of zooplankton, algae, terrestrial plants, and other organic matter after exposure to intense pressure and heat within the earth. These organic materials are first chemically changed to kerogen; with further heat, pressure, and bacterial activity, crude oil, natural gas, and coal are formed. Figure 12.2 shows the pathway of the formation of crude oil, natural gas, and coal. These processes are all driven by natural forces. It is well known that the composition of crude oil is similar to that of plants (Wittwer and Immel 1980). Crude oils represent the ultimate in natural processing, from wood (fresh but inefficient) to coal (older but more efficient) to petroleum fluids (much older but much more efficient). Natural gas has a much greater energy efficiency than liquid petroleum. This is not evident in conventional calculations because those calculations are carried out on a volume basis for gas; if they are made on a weight basis, the energy efficiency of natural gas is higher than that of crude oil.

Table 12.3 Heating values of various fuels.
  Premium wood pellets: 13.6 million Btu/ton
  Propane: 71,000 Btu/gal
  Fuel oil #2: 115,000 Btu/gal
  Fuel oil #6: 124,000 Btu/gal
  Seasoned firewood: 15.3 million Btu/cord
  Oven-dried switchgrass: 14.4 million Btu/ton
  Bituminous coal: 26 million Btu/ton
  Shelled corn at 15% MC: 314,000 Btu/bushel
  Natural gas: 1,050 Btu/scf
Sources: Some of these data can be obtained from US government sites (Website 15)
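The weight-basis claim made above can be checked with a short sketch. The heating values come from Table 12.3; the densities of pipeline gas (about 0.045 lb/scf) and No. 2 fuel oil (about 7.2 lb/gal) are assumed typical values, not data from this book:

    # Convert volume-basis heating values to a common per-pound basis.
    ng_btu_per_scf = 1050.0     # natural gas, Table 12.3
    ng_lb_per_scf = 0.045       # assumed density of pipeline-quality gas

    oil_btu_per_gal = 115000.0  # fuel oil #2, Table 12.3
    oil_lb_per_gal = 7.2        # assumed density of No. 2 fuel oil

    print(ng_btu_per_scf / ng_lb_per_scf)    # ~23,300 Btu/lb for natural gas
    print(oil_btu_per_gal / oil_lb_per_gal)  # ~16,000 Btu/lb for fuel oil

On this basis, natural gas carries roughly 45% more energy per unit mass than fuel oil, even though its volume-basis figure looks far smaller.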
Figure 12.2 Crude oil formation pathway, the natural processes: biomass; decay and degradation; burial inside the earth and ocean floors for millions of years; kerogen formation; bacterial action, heat, and pressure; and, finally, bitumen, crude oil, and gas formation.
Table 12.3 summarizes some of these heating values. More detailed heating values, given by J.W. Bartock, are listed in Table 12.4. Using this knowledge, techniques have been developed, starting in the late 1970s, to accelerate the natural process of crude oil formation (through pyrolysis) in order to produce synthetic crude from coal (Cortex and Ladelfa 1981; Stewart and Klett 1979). However, pyrolysis does not guarantee a natural process. In fact, the use of synthetic catalysts, synthetic acids, and other additives, along with electric heating, will invariably render the process unsustainable and inherently detrimental to the environment, in addition to being inefficient. The same comment applies to the numerous processes that have been put in place for converting wood into fuel (Guo 2004) and natural gas into synthetic crude (Teel 1994).

The premise that if the origin or the process is not sustainable, the final product cannot be sustainable, meaning acceptable to the ecosystem, is consolidated by the following consideration. Consider, in Table 12.4, that the heating value of plastic is much higher than that of sawdust (particularly green sawdust). If the final heating value were the primary consideration, plastic materials would be a far better fuel source than sawdust. However, true sustainability must look beyond a short-term, single criterion. Under the sustainability criteria presented in this book, plastic materials are rejected because the process followed in creating them is not sustainable (see Chapter 9).
Table 12.4 Approximate heating values of common fuels.
  Natural gas: 1,030 Btu/cu ft
  Propane: 2,500 Btu/cu ft; 92,500 Btu/gal
  Methane: 1,000 Btu/cu ft
  Landfill gas: 500 Btu/cu ft
  Butane: 3,200 Btu/cu ft; 130,000 Btu/gal
  Methanol: 57,000 Btu/gal
  Ethanol: 76,000 Btu/gal
  Fuel oil, kerosene: 135,000 Btu/gal
  Fuel oil #2: 138,500 Btu/gal
  Fuel oil #4: 145,000 Btu/gal
  Fuel oil #6: 153,000 Btu/gal
  Waste oil: 125,000 Btu/gal
  Biodiesel (waste vegetable oil): 120,000 Btu/gal
  Gasoline: 125,000 Btu/gal
  Wood, softwood (2,000-3,000 lb/cord): 10-15,000,000 Btu/cord
  Wood, hardwood (4,000-5,000 lb/cord): 18-24,000,000 Btu/cord
  Sawdust, green (10-13 lb/cu ft): 8-10,000,000 Btu/ton
  Sawdust, kiln dry (8-10 lb/cu ft): 14-18,000,000 Btu/ton
  Chips, 45% moisture (10-30 lb/cu ft): 7,600,000 Btu/ton
  Hogged wood (10-30 lb/cu ft): 16-20,000,000 Btu/ton
  Bark (10-20 lb/cu ft): 9-10,500,000 Btu/ton
  Wood pellets, 10% moisture (40-50 lb/cu ft): 16,000,000 Btu/ton
  Hard coal, anthracite (13,000 Btu/lb): 26,000,000 Btu/ton
  Soft coal, bituminous (12,000 Btu/lb): 24,000,000 Btu/ton
  Rubber, pelletized (16,000 Btu/lb): 32-34,000,000 Btu/ton
  Plastic: 18-20,000 Btu/lb
  Corn, shelled (7,800-8,500 Btu/lb): 15-17,000,000 Btu/ton
  Cobs (8,000-8,300 Btu/lb): 16-17,000,000 Btu/ton
Sources: (Website 16)
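Because Table 12.4 quotes plastic in Btu/lb and sawdust in Btu/ton, the comparison invoked above is easier to see when both are put on the same basis. A minimal sketch, assuming the US short ton of 2,000 lb:

    # Green sawdust (8-10 million Btu/ton) versus plastic (18,000-20,000 Btu/lb).
    TON_LB = 2000.0

    sawdust_btu_per_lb = (8e6 / TON_LB, 10e6 / TON_LB)  # (4,000, 5,000) Btu/lb
    plastic_btu_per_lb = (18000.0, 20000.0)             # from Table 12.4

    print(sawdust_btu_per_lb, plastic_btu_per_lb)
    # Plastic shows roughly four times the heating value of green sawdust,
    # yet it is rejected on process-sustainability grounds (see Chapter 9).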
In general, a typical crude oil has the bulk composition shown in Table 12.5. The metals found in crude oil are numerous and include many heavy metals, among them Ni, V, Cu, Cd, Hg, Zn, and Pb (Osujo and Onoiake 2004). In their natural state, these metals are not harmful because they exist in a state similar to that found in plants and other organic materials.
Table 12.5 Composition of a typical crude oil.
  Element     Lower range (wt%)    Upper range (wt%)
  Carbon      83.9                 86.8
  Hydrogen    11.0                 14.0
  Sulfur      0.06                 8.00
  Nitrogen    0.02                 1.70
  Oxygen      0.08                 1.82
  Metals      0.00                 0.14
Table 12.6 Density properties of crude oil (Alaskan North Slope).
  Weathering (wt%)   Temperature (°C)   Density (g/mL)
  0                  0                  0.8777
  0                  15                 0.8663
  10                 0                  0.9054
  10                 15                 0.894
  22.5               0                  0.9303
  22.5               15                 0.9189
  30.5               0                  0.9457
  30.5               15                 0.934
Source: (Website 17)

Typical crude oil density ranges from 800 kg/m3 to 1,000 kg/m3. Table 12.6 shows how density varies with weathering and temperature for the North Slope Alaskan crude oil.

Table 12.7 Viscosity properties of crude oil (Alaskan North Slope).
  Weathering (wt%)   Temperature (°C)   Viscosity (cP)
  0                  0                  23.2
  0                  15                 11.5
  10                 0                  76.7
  10                 15                 31.8
  22.5               0                  614
  22.5               15                 152
  30.5               0                  4230
  30.5               15                 624.7
Similarly, the viscosity properties of the same crude oil are given in Table 12.7. Note that without weathering (which causes low-temperature oxidation), the viscosity values remain quite low. Table 12.8 shows the flash points of the same crude oil. Note how low the flash point is for unweathered crude oil, showing its suitability for direct combustion. The same crude oil, when significantly weathered, becomes nearly incombustible, reaching flash points comparable to those of vegetable oils and thus becoming safe to handle. Table 12.9 shows the distribution of the various hydrocarbon groups that occur naturally. This table shows that valuable components are all present in the crude oil; if no toxic agents were added in order to refine or process the crude oil, the resulting fluid would remain benign to the environment. Finally, Table 12.10 shows the volatile organic compounds (VOCs) present in a typical crude oil.
Table 12.8 Flash points of crude oil (Alaskan North Slope).
  Weathering (wt%)   Flash point (°C)
  0                  <-8
  10                 19
  22.5               75
  30.5               115
Table 12.9 Hydrocarbon groups in a typical crude oil (North Slope Alaska).
  Component     0% weathered   10% weathered   22.5% weathered   30.5% weathered
  Saturates     75             72.1            69.2              64.8
  Aromatics     15             16              16.5              18.5
  Resins        6.1            7.4             8.9               10.3
  Asphaltenes   4              4.4             5.4               6.4
  Waxes         2.6            2.9             3.3               3.6
Table 12.10 Volatile organic compounds (VOCs) in crude oil (North Slope Alaska).
  Component                    0% weathered   30.5% weathered
  Benzene                      2866           0
  Toluene                      5928           0
  Ethylbenzene                 1319           0
  Xylenes                      6187           0
  C3-Benzenes                  5620           30
  Total BTEX                   16300          0
  Total BTEX and C3-Benzenes   21920          30
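One way to summarize the density data of Table 12.6 is through API gravity, the petroleum industry's standard density scale. The sketch below uses the standard conversion API = 141.5/SG - 131.5 and treats the 15°C densities in g/mL as specific gravities (strictly, specific gravity is defined at 60°F; the approximation is minor here):

    # API gravity of the North Slope crude at each weathering level (15 C data).
    densities_15c = {0: 0.8663, 10: 0.894, 22.5: 0.9189, 30.5: 0.934}

    for weathered, rho in densities_15c.items():
        api = 141.5 / rho - 131.5
        print(f"{weathered}% weathered: {api:.1f} degrees API")
    # ~31.8 API when fresh, falling to ~20.0 API at 30.5% weathering

The drop from a medium crude (about 32 API) toward a heavy crude (about 20 API) mirrors the sharp rises in viscosity and flash point seen in Tables 12.7 and 12.8.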
12.3 Pathways of Crude Oil Refining

Fossil fuels derived from petroleum reservoirs are refined to suit various applications, from car fuels to airplane and space fuels. Fossil fuels are a complex mixture of hydrocarbons, varying in composition depending on source and origin. Depending on the number of carbon atoms the molecules contain and their arrangement, the hydrocarbons in crude oil have different boiling points. To take advantage of these differences in boiling point, fractional distillation is used to separate the hydrocarbons from the crude oil. Figure 12.3 shows the fractional distillation column, in which the temperature is lowest at the top and increases down the column. Figure 12.3 also gives a general schematic of the activities from the storage of crude oil through the complete refining process.
Figure 12.3 Fractional distillation unit for hydrocarbon refining (Chhetri and Islam, 2009).
The stored crude oil is transported to the unit where either vacuum distillation or atmospheric distillation is used for hydrocarbon separation. Chemical impurities of crude oil, such as sulfur or wax, are separated out. Crude oil is refined through distillation, or fractionation, to form several different hydrocarbon groups, such as gasoline, diesel, aircraft fuel, kerosene, asphalt, and waxes. The fractions emerging from crude oil distillation are divided according to their increasing molecular weight and boiling temperature in the distillation column, and the distillation process continues until all the fractions are separated. Fractional distillation is thus the process of separating crude oil in atmospheric and vacuum distillation towers into groups of hydrocarbon compounds of different boiling points. The hydrocarbon conversion processes consist of alkylation, thermal and catalytic cracking for decomposition, and polymerization for combining hydrocarbon molecules, along with rearrangement by catalytic reforming. To remove or separate the naphthenes, aromatics, and other undesirable compounds, various treatment processes, such as dissolution, adsorption, and precipitation, are carried out. In addition, desalting, drying, hydrodesulfurizing, solvent refining, sweetening, solvent extraction, and solvent dewaxing are performed to remove impurities from the fractions. Other activities, such as formulating and blending, are carried out to produce finished products with the desired properties.

Refining operations also include the treatment of wastewaters contaminated by petroleum operations, solid waste management, process water treatment, and cooling and sulfur recovery. Other auxiliary operations include power generation and management for process operations, the flare system, and the supply of air, nitrogen, steam, and other necessary system inputs, along with the administrative management of the whole refining system. Even though distillation yields separate hydrocarbon cuts, the resulting petroleum products are directly related to the properties of the crude oil processed. These distillation products are further processed into more conventionally usable products by cracking, reforming, and other conversion processes. The pathways of oil refining show that the refining process uses toxic catalysts and chemicals, and that the emissions from burning the resulting oil also become extremely toxic. Figure 12.4 shows the pathway of the oil refining process.
Figure 12.4 Pathway of the oil refining process. Crude oil passes from a boiler (superheated steam) to the distillation column and on to thermal/catalytic cracking, alkylation, hydroprocessing, distillation, and other methods. Heat, pressure, and acid catalysts (H2SO4, HF, AlCl3, Al2O3, Pt, etc.) drive the cracking and alkylation steps, while platinum, nickel, tungsten, and palladium catalysts are used in hydroprocessing.
During the cracking of the hydrocarbon molecules, different types of acid catalysts are used along with high heat and pressure; breaking hydrocarbon molecules with heat alone is thermal cracking. During alkylation, sulfuric acid, hydrogen fluoride, aluminum chloride, and platinum are used as catalysts. Platinum, nickel, tungsten, palladium, and other catalysts are used during hydroprocessing. Distillation itself relies on high heat and pressure to drive the separation.
12.4 Additives in Oil Refining and Their Functions
Oil refining and natural gas processing are very expensive in terms of operation and management. These operations involve the use of several chemicals and catalysts that are themselves very expensive. Moreover, these catalysts and chemicals pose a great threat to the natural environment, including air and water quality, and air and water pollution ultimately affect the health of humans, animals, and plants. For instance, the use of lead during crude oil refining to produce gasoline has been a serious environmental problem: burning leaded gasoline emits toxic gases containing lead particles, and the oxidation of lead in the air forms lead oxide, a poisonous compound affecting every living thing. Heavy metals such as mercury and chromium, as used in oil refining, are major causes of water pollution. In the previous chapter, details of the catalysts used in refining were given. Consider the consequences of some of these chemicals.
12.4.1 Platinum

It is well known that platinum salts can induce numerous irreversible changes in the human body, such as DNA alterations (Jung and Lippard 2007). In fact, an entire branch of medical science revolves around exploiting this deadly property of platinum compounds to manufacture pharmaceutical drugs that attack the DNA of cancer cells (Farrell 2004a, 2004b, 2004c, 2005). It is also known that platinum compounds cause many forms of cancer; once again, this property is exploited to develop pharmaceutical drugs intended to destroy cancer cells (Volckova et al. 2003). Platinum compounds are also known to cause liver damage (Stewart et al. 1985), and similar damage to bone marrow has been observed (Evans et al. 1984). Platinum is also related
to hearing loss (Rybak 1981). Finally, platinum can potentiate the toxicity of other dangerous chemicals in the human body, such as selenium, leading to many other problems. The above are immediate concerns for human health and safety. Consider also the damage to the environment that might be incurred through vegetation and animals (Kalbitz et al. 2008). It is already known that platinum salts accumulate at the roots of plants, from which they can easily enter the food chain, perpetually insulting the environment. In addition, microorganisms can broaden the impact of platinum, although this aspect of ecological study has not yet been performed.
12.4.2 Cadmium
Cadmium is considered a non-essential and highly toxic element to a wide variety of living organisms, including man, and it is one of the most widespread pollutants, with a long biological half-life (Plunket 1987; Klaassen 2001; Rahman et al. 2004). A provisional maximum tolerable daily intake of cadmium from all sources of 1-1.2 μg/kg body mass is recommended jointly by the FAO and WHO (Bortoleto et al. 2004). This metal enters the environment mainly from industrial processes and phosphate fertilizers and is transferred to animals and humans through the food chain (Wagner 1993; Taylor 1997; Sattar et al. 2004). Cadmium is very hazardous because humans retain it strongly (Friberg et al. 1974), particularly in the liver (half-life of 5 to 10 years) and kidney (half-life of 10 to 40 years). The symptoms of cadmium toxicity produced by enzymatic inhibition include hypertension, respiratory disorders, damage to the kidney and liver, osteoporosis, and the formation of kidney stones, among others (Vivoli et al. 1983; Dinesh et al. 2002; Davis 2006). Environmental, occupational, or dietary exposure to Cd(II) may lead to renal toxicity, pancreatic cancer (Schwartz 2002), or enhanced tumor growth (Schwartz et al. 2000).

The safety level of cadmium in drinking water in many countries is 0.01 ppm, but many surface waters show higher cadmium levels. Cadmium can kill fish in one day at a concentration of 10 ppm in water, and in 10 days at a concentration of 2 ppm; harmful effects on some fish have been shown at concentrations of 0.2 ppm (Landes et al. 2004). Plants can accumulate cadmium up to levels as high as 5 to 30 mg/kg, whereas the normal range is 0.005 to 0.02 mg/kg (Cameron 1992). Taken up in excess by plants, Cd directly or indirectly inhibits physiological processes, such as respiration, photosynthesis, cell elongation, plant-water relationships, nitrogen metabolism, and mineral nutrition, all of which result in poor growth and low biomass. It has also been reported that cadmium is more toxic than lead in plants (Pahlsson 1989; Sanita' di Toppi and Gabbrielli 1999).

12.4.3 Lead
Lead(II) is a highly toxic element to humans and most other forms of life. Children, infants, and fetuses are at particularly high risk of the neurotoxic and developmental effects of lead. Lead can cause cumulative poisoning, cancer, and brain damage, and it can cause mental retardation and semi-permanent brain damage in young children (Friberg et al. 1979; Sultana et al. 2000). At higher levels, lead can cause coma, convulsions, or even death. Even low levels of lead are harmful and are associated with decreases in intelligence, stature, and growth. Lead enters the body through drinking water or food and can accumulate in the bones, where it replaces calcium to form sites for long-term release (King et al. 2006). The Royal Society of Canada (1986) reported that human exposure to lead has harmful effects on the kidney, the central nervous system, and the production of blood cells. In children, irritability, appetite loss, vomiting, abdominal pain, and constipation can occur (Yule and Lansdown 1981). Pregnant women are at high risk because lead can cross the placenta and damage the developing fetal nervous system; lead can also induce miscarriage (Wilson 1966). Animals ingest lead via crops and grasses grown in contaminated soil. Levels in plants usually range from 0.5 to 3 mg/kg, while lichens have been shown to contain up to 2,400 mg/kg of lead (Cameron 1992). Lead ingestion by women of childbearing age may affect both the woman's health (Lustberg and Silbergeld 2002) and that of her fetus, for ingested lead is stored in the bone and released during gestation (Angle et al. 1984; Gomaa et al. 2002).
12.5 Emissions from Oil Refining Activities
Crude oil refining is one of the major industrial activities emitting CO2 and many toxic air pollutants, and it has a high energy consumption. Because of the presence of trace elements, this CO2 is not readily absorbed by the ecosystem, creating an imbalance in the atmosphere. Szklo and Schaeffer (2002) reported that crude oil refining processes are highly energy-intensive, consuming between 7% and 15% of the crude oil entering the refinery. The same study showed that energy use in the Brazilian refining industry would increase by a further 30% between 2002 and 2009 in order to reduce the sulfur content of diesel and gasoline as well as to reduce CO2 emissions. For example, lube oil production needs about 1,500 MJ/barrel, and alkylation with sulfuric acid and hydrofluoric acid requires 360 MJ/barrel and 430 MJ/barrel, respectively. Energy consumption will increase further to meet the more stringent environmental quality specifications for oil products worldwide. The full recovery of such chemicals and catalysts is not possible, leading to environmental hazards. Concawe (1999) reported the following CO2 emissions for refinery fuels: natural gas, 56 kg CO2/GJ; LPG, 64 kg CO2/GJ; distillate fuel oil, 74 kg CO2/GJ; residual fuel, 79 kg CO2/GJ; and coke, 117 kg CO2/GJ.

Cetin et al. (2003) carried out a study around a petrochemical complex and an oil refinery and reported that the measured concentrations of volatile organic compounds (VOCs) were 4- to 20-fold higher than those measured at a suburban site in Izmir, Turkey. Ethylene dichloride, a leaded-gasoline additive used in petroleum refining, was the most abundant volatile organic compound, followed by ethyl alcohol and acetone. In addition to VOCs, other pollutants, such as sulfur dioxide, reduced sulfur compounds, carbon monoxide, nitrogen oxides, and particulate matter, are also emitted from petroleum refineries (Buonicare and Davis 1992). Rao et al. (2005) reported that several hazardous air pollutants (HAPs) are emitted during refining, including maleic anhydride (along with pyridines, sulfonates, sulfones, ammonia, carbon disulfide, methylethylamine, arsenic, copper, beryllium, etc.), benzoic acid (benzene, xylene, toluene, formic acid, diethylamine, cobalt, zinc, formaldehyde, cadmium, antimony, etc.), and ketones and aldehydes (phenols, cresols, chromates, cyanides, nickel, molybdenum, aromatic amines, barium, radionuclides, chromium, etc.).

Some studies have shown that oil refineries fail to report millions of pounds of harmful emissions that have substantial negative impacts on health and air quality. A report prepared by the Special Investigation Division of the Committee on Government Reform for the U.S. House of Representatives concluded that oil refineries vastly underreport fugitive emissions to federal and state regulators; the unreported emissions exceed 80 million pounds of VOCs and 15 million pounds of toxic pollutants (USHR 1999). Total VOC emissions in the U.S. were reported to be 492 million pounds. The report confirms that leaks in valves are five times higher than the leaks reported to state and federal regulators. Among other toxic air pollutants, oil refineries were reported to be the largest emitters of benzene (over 2.9 million pounds) in the U.S. during the report period. Other reported emissions were 4.2 million pounds of xylenes, 4.1 million pounds of methyl ethyl ketone, and 7 million pounds of toluene. Some of these pollutants are by-products of the catalysts or additives used during the refining process; others, such as VOCs, are formed at the high temperatures and pressures used in refining. By improving the refining process, however, these emissions can be reduced. For example, mild hydrotreating is conventionally used to remove sulfur and olefins, while severe hydrotreating removes nitrogen compounds and reduces sulfur content and aromatic rings (Gucchait et al. 2005). Hence, the search for processes that release fewer emissions and use more environment-friendly catalysts is very important in reducing the emissions from refineries and their energy use.
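The Concawe figures quoted above lend themselves to a quick comparison of fuel choices for refinery process heat. A minimal sketch (the 1.5 GJ/barrel example is the lube-oil figure cited above; everything else comes from the Concawe list):

    # CO2 released per GJ of refinery fuel burnt (Concawe 1999, as cited above).
    emission_factor_kg_per_gj = {
        "natural gas": 56,
        "LPG": 64,
        "distillate fuel oil": 74,
        "residual fuel": 79,
        "coke": 117,
    }

    def co2_kg(fuel, energy_gj):
        """CO2 mass (kg) from supplying energy_gj of heat with the given fuel."""
        return emission_factor_kg_per_gj[fuel] * energy_gj

    # Lube oil production at ~1,500 MJ (1.5 GJ) per barrel:
    print(co2_kg("coke", 1.5))         # ~176 kg CO2 per barrel
    print(co2_kg("natural gas", 1.5))  # ~84 kg CO2 per barrel

Fuel switching alone thus changes the bulk CO2 figure by roughly a factor of two, although, by the argument of this chapter, it says nothing about the trace contaminants carried with that CO2.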
12.6 Degradation of Crude and Refined Oil

The relative biodegradation rates of crude oil and refined oil indicate how much the refined products have changed compared to their feedstock. Boopathy (2004) showed that, under anaerobic conditions, 81% of diesel oil was degraded within 310 days under an electron-acceptor condition, whereas only 54.5% degradation was observed over the same period under a sulfate-reducing condition. Aldrett et al. (1997) studied the microbial degradation of crude oil in a marine environment, testing thirteen different bioremediation products for petroleum hydrocarbon degradation. Crude oil samples were extracted and fractionated into total saturated petroleum hydrocarbons (TsPH) and total aromatic petroleum hydrocarbons (TarPH). The analysis showed that some products reduced the TsPH fraction to 60% of its initial weight and the TarPH fraction to 65% in 28 days, a degradation rate higher than that achieved by naturally occurring bacteria. Even though the degradation conditions for diesel and crude oil differ, it is observed that crude oil degrades faster than diesel (a refined oil). Al-Darbi et al. (2005) reported that natural oils degrade faster in a sea environment due to the presence of consortia of microorganisms in the sea, and Livingston and Islam (1999) reported that petroleum hydrocarbons can be degraded by bacteria present in the soil.
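The degradation figures above can be turned into comparable rate constants if one assumes, purely for illustration, simple first-order decay (remaining fraction = e^(-kt)); field degradation rarely follows first-order kinetics exactly:

    import math

    def first_order_k(fraction_degraded, days):
        """First-order rate constant (1/day) fitted to one observation."""
        return -math.log(1.0 - fraction_degraded) / days

    k_electron = first_order_k(0.81, 310)   # diesel, electron-acceptor condition
    k_sulfate = first_order_k(0.545, 310)   # diesel, sulfate-reducing condition

    for k in (k_electron, k_sulfate):
        print(f"k = {k:.4f}/day, half-life = {math.log(2)/k:.0f} days")
    # ~0.0054/day (half-life ~129 days) versus ~0.0025/day (half-life ~273 days)

Quoting half-lives in this way makes it easier to compare experiments that were run for different durations.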
12.7 Pathways of Natural Gas Processing

Natural gas is a mixture of methane, ethane, propane, butane, and other hydrocarbons, along with water vapor, oil and condensates, hydrogen sulfide, carbon dioxide, nitrogen, other gases, and solid particles. Free water and water vapor are corrosive to transportation equipment, and hydrates can plug gas accessories, creating several flow problems. Other constituents, such as hydrogen sulfide and carbon dioxide, are known to lower the heating value of natural gas, reducing its overall fuel efficiency. This makes it mandatory to purify natural gas before it is sent to transportation pipelines. Gas processing is thus aimed at preventing corrosion and the environmental and safety hazards associated with the transport of natural gas. In order to extract the natural gas found in natural reservoirs, onshore and offshore drilling activities are carried out.
Figure 12.5 Generalized natural gas processing schematic (EIA 2006).
Figure 12.6 Various methods for gas processing. The chart summarizes the main steps: removal of sand and large particles; oil and condensate removal (low-temperature separator); water removal by absorption (glycol dehydration using diethylene glycol (DEG) or triethylene glycol (TEG)) or adsorption (activated alumina or a granular silica gel); separation of natural gas liquids (absorbing oil or fractionating); H2S removal; and CO2 removal (monoethanolamine (MEA)/diethanolamine (DEA) absorption, or bulk removal by hollow-fiber polymer membranes).
Production and processing are carried out after extraction; this includes producing the natural gas from the underground reservoir and removing impurities in order to meet certain regulatory standards before the gas is sent for end use. Purified natural gas is transported in different forms, such as liquefied petroleum gas (LPG), liquefied natural gas (LNG), or gas hydrates, and distributed to end users according to demand. EIA (2006) illustrated a generalized natural gas processing schematic (Figure 12.5). Various chemicals and catalysts are used during the processing of natural gas. This generalized scheme includes all the steps that may be necessary, depending on the ingredients present in a particular gas. The gas-oil separator unit removes any oil from the gas stream, the condensate separator removes free water and condensates, and the dehydrator separates moisture from the gas stream. Other contaminants, such as CO2, H2S, nitrogen, and helium, are also separated in different units. The natural gas liquids, such as ethane, propane, butane, pentane, and natural gasoline, are separated from methane using cryogenic and absorption methods. The cryogenic process consists of lowering the temperature of the gas stream with a turbo expander and external refrigerants; the sudden temperature drop in the expander condenses the heavier hydrocarbons in the gas stream while keeping the methane in gaseous form. Figure 12.6 illustrates the details of contaminant removal and the chemicals used during natural gas processing. The procedure for removing each contaminant is discussed below.
12.8 Oil and Condensate Removal from Gas Streams
Natural gas is dissolved in oil underground due to the formation pressure. When natural gas and oil are produced, they generally separate simply because of the decrease in pressure; the separator consists of a closed tank where gravity separates the heavier liquids from the lighter gases (EIA 2006). In addition, specialized equipment such as the low-temperature separator (LTS) is used to separate oil and condensate from natural gas (Natural Gas Org. 2004). When wells are producing high-pressure gas along with light crude oil or condensate, a heat exchanger is used to cool the wet gas, and the cold gas then travels through a high-pressure liquid knockout, which removes any liquid, into a low-temperature separator. The gas flows into the low-temperature separator through a choke, expanding in volume as it enters the separator; this rapid expansion lowers the temperature in the separator. After the liquid is removed, the dry gas is sent back through the heat exchanger, where it is warmed by the incoming wet gas. By changing the pressure at different sections of the separator, the temperature is varied, causing the oil and water to condense out of the wet gas stream. The gas stream enters the processing plant at high pressure (600 pounds per square inch gauge (psig) or greater) through an inlet slug catcher, where free water is removed from the gas, after which it is directed to a condensate separator (EIA 2006).
12.9 Water Removal from Gas Streams
Natural gas may contain water in both vapor and liquid states. Water contained in a natural gas stream may cause the formation of hydrates: gas hydrates form when gas containing water molecules reaches a low temperature (usually below about 25°C) at high pressure (above about 1.5 MPa) (Koh et al. 2002). Water in natural gas is removed by separation methods at or near the wellhead. Note that it is impossible to remove all water molecules from a gas stream, so operators have to settle for an economically acceptable low water content. The water removal process consists of dehydrating the natural gas by absorption, adsorption, gas permeation, or low-temperature separation. In absorption, a dehydrating agent such as glycol takes out the water vapor (Mallinson 2004). In adsorption, the water vapor is condensed and collected on a surface; adsorption dehydration can be carried out in dry-bed dehydrating towers containing desiccants such as silica gel and activated alumina. Various types of membranes have also been investigated for separating water from the gas. However, membranes require large surface areas, and compact modules with high membrane areas are therefore necessary to design an economical gas permeation process (Rojey et al. 1997). The most widely used membranes consist of modules with plane membranes wound spirally around a collector tube, or modules with a bundle of hollow fibers.
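The temperature and pressure thresholds cited from Koh et al. (2002) suggest a crude screening rule for hydrate-prone operating points. The sketch below encodes only those rough thresholds; a real hydrate check would use composition-dependent equilibrium curves:

    # Rough hydrate screening based on the thresholds cited above.
    def hydrate_prone(temp_c, pressure_mpa, water_present):
        """True if conditions fall in the rough hydrate-risk envelope."""
        return water_present and temp_c < 25.0 and pressure_mpa > 1.5

    print(hydrate_prone(4.0, 7.0, True))    # True: cold, wet, high-pressure gas
    print(hydrate_prone(30.0, 7.0, True))   # False: too warm
    print(hydrate_prone(4.0, 7.0, False))   # False: dehydrated gas

The third case is the whole point of dehydration: removing water defeats hydrate formation regardless of temperature and pressure.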
12.9.1 Glycol Dehydration
It is important to remove the water vapor present in a gas stream because it may otherwise cause hydrate formation at low temperatures, plugging the valves and fittings in gas pipelines (Twu 2005). Water vapor can further cause corrosion when it reacts with the hydrogen sulfide or carbon dioxide present in gas streams. Glycol is generally used for water dehydration by absorption (Mallinson 2004), as it has a chemical affinity for water (Kao et al. 2005). When brought into contact with a stream of natural gas containing water, glycol absorbs the water out of the gas stream (EIA 2006). Glycol dehydration involves using a glycol solution, either diethylene glycol (DEG) or triethylene glycol (TEG). After absorbing water, the glycol becomes heavier and sinks to the bottom of the contactor, from where it is removed. Boiling then separates the glycol and water: water boils at 212°F, whereas glycol boils at 400°F (Natural Gas Org. 2004). The glycol is then reused in the dehydration process (Mallinson 2004).

Ethylene glycol is synthetically manufactured from ethylene via ethylene oxide as an intermediate. Ethylene oxide reacts with water to produce ethylene glycol in the presence of acids or bases, or at higher temperatures without chemical catalysts (C2H4O + H2O -> HOCH2CH2OH). Diethylene glycol (DEG, chemical formula C4H10O3) and triethylene glycol (TEG) are obtained as co-products during the manufacture of monoethylene glycol (MEG). MSDS (2006) categorizes DEG at 99-100% concentration as a hazardous material.
Several incidents of DEG poisoning have been reported. A syrup for children containing diethylene glycol sickened 109 children and killed 80 in Haiti in 1998; 105 people died in the United States after consuming an elixir containing diethylene glycol; 109 people died in Nigeria after taking a glycol-contaminated syrup; and 200 died from a glycol-contaminated elixir in Bangladesh in 1992 (Daza 2006). Hence, it is important to search for alternatives to glycol so that such health and environmental problems can be avoided.
12.9.2 Solid-Desiccant Dehydration

In this process, solid desiccants, such as activated alumina or granular silica gel, are used for adsorption in arrangements of two or more adsorption towers (Mallinson 2004; Guo and Ghalambor 2005). Natural gas is passed through these adsorption towers, and water is retained on the surface of the desiccants; the gas leaving the adsorption tower is essentially dry. Solid desiccants are more effective than glycol dehydrators and are best suited for large volumes of gas under very high pressure, although a solid desiccant system is more expensive than a glycol dehydration process. Two or more towers are required because the desiccant in one tower eventually becomes saturated with water and must be regenerated. The solid desiccant adsorption process thus consists of two beds, with each bed going through successive steps of adsorption and
Figure 12.7 Dehydration by adsorption in a fixed bed (redrawn from Rojey et al. 1997).
desorption (Rojey et al. 1997; Mallinson 2004). During the adsorption step, the gas to be processed is sent through the adsorbent bed, which retains the water. When the bed is saturated, hot natural gas is sent through it to regenerate the adsorbent. After regeneration and before the next adsorption step, the bed must be cooled; this is achieved by passing cold natural gas through it. After heating, the same gas can be used for regeneration.
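The two-bed rhythm described above can be sketched as a simple alternating schedule: while one bed adsorbs, the other is heated to drive off water and then cooled so it is ready to swap in. The step durations in practice depend on bed size and gas rate; this sketch captures only the logic of the swing:

    # Minimal sketch of a two-bed adsorption/regeneration swing.
    def two_bed_schedule(n_half_cycles):
        for i in range(n_half_cycles):
            onstream, offstream = ("A", "B") if i % 2 == 0 else ("B", "A")
            print(f"half-cycle {i}: bed {onstream} adsorbing; "
                  f"bed {offstream} regenerating with hot gas, then cooling")

    two_bed_schedule(4)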
12.10 Separation of Natural Gas Liquids
Natural gas streams carry propane, butane, and other heavier hydrocarbons, collectively known as natural gas liquids (NGLs). NGLs have a higher value as separate products, which is one reason they are separated from the natural gas stream. Moreover, reducing the concentration of higher hydrocarbons and water in the gas is necessary to prevent the formation of hydrocarbon liquids and hydrates in the natural gas pipeline. The removal of NGLs is usually done in a centralized processing plant using processes similar to those used to dehydrate natural gas. There are two common techniques for removing NGLs from the natural gas stream: the absorption and cryogenic expander processes.
12.10.1 The Absorption Method

This process is analogous to dehydration by absorption. The natural gas is passed through an absorption tower, where it is brought into contact with absorption oil that soaks up a large amount of the NGLs (EIA 2006). The oil containing the NGLs exits the absorption tower through the bottom. This rich oil is fed into lean-oil stills, where the mixture is heated to a temperature above the boiling point of the NGLs but below that of the oil. The oil is recycled, while the NGLs are cooled and sent on for fractionation. This process allows recovery of up to 75% of the butanes and 85-90% of the pentanes and heavier molecules from the natural gas stream. If the refrigerated-oil absorption method is used, propane recovery can reach up to 90%, and extraction of the other, heavier NGLs can approach 100%. Alternatively, a fractionating tower can be used, exploiting the different boiling temperatures of the individual hydrocarbons in the natural gas stream. The process occurs in stages as the gas stream rises through several towers, where heating units raise the temperature of the stream, causing the various liquids to separate and exit into specific holding tanks (EIA 2006).
Figure 12.8 Membrane system for NGL recovery and dew point control (redrawn from MTR 2007). Raw gas passes through a compressor and filter to a membrane unit, which splits the stream into C3+ hydrocarbons and natural gas sent to the pipeline.
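A rough material balance shows what the recovery fractions quoted above mean in practice. The NGL contents of the feed below are hypothetical round numbers (gallons of liquid per thousand cubic feet of gas), not data from this book:

    # Recovered NGL volumes for the refrigerated-oil absorption case.
    feed_gal_per_mscf = {"propane": 2.0, "butanes": 1.2, "pentanes_plus": 0.8}  # assumed
    recovery_fraction = {"propane": 0.90, "butanes": 0.75, "pentanes_plus": 0.875}

    recovered = {ngl: feed_gal_per_mscf[ngl] * recovery_fraction[ngl]
                 for ngl in feed_gal_per_mscf}
    print(recovered)  # gallons of each NGL recovered per Mscf of gas processed

The 0.875 value is simply the midpoint of the 85-90% range quoted for pentanes and heavier molecules.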
12.10.2 The Membrane Separation
Various types of membranes can be used to remove water and higher hydrocarbons; conventional membranes can lower the dew point of the gas. The raw natural gas is compressed and air-cooled, which knocks out some water and NGLs. The gas from the compressor is then passed through the membrane, which is permeable to water and higher hydrocarbons. The dry, hydrocarbon-depleted residual gas is sent to the pipeline for use.
12.10.3 The Cryogenic Expansion Process

This process consists of dropping the temperature of the gas stream to a much lower level, which can be achieved with the turbo expander process. Essentially, cryogenic processing consists of lowering the temperature of the gas stream to around -120°F (EIA 2006). External refrigerants are used to cool the natural gas stream, and an expansion turbine then rapidly expands the chilled gases, causing the temperature to drop significantly. This rapid temperature drop condenses ethane and other hydrocarbons in the gas stream while maintaining methane in gaseous form, and the process recovers up to 90-95% of the ethane (EIA 2006). The expansion turbine can also be used to produce energy as the natural gas stream expands, which is applied to recompressing the gaseous methane effluent and helps save energy costs in natural gas processing.
12.11 Sulfur and Carbon Dioxide Removal
The CO2 and H2S present in natural gas have no heating value and thus reduce the heating value of the gas (Mallinson 2004). In an absorber, a solvent chemically absorbs the acid gases (CO2 and H2S), yielding natural gas with a reduced acid gas content. The chemical solvent containing the absorbed acid gases is regenerated for reuse in the absorption process. The hydrogen sulfide is converted to elemental sulfur, and the CO2 is released to the atmosphere. Since CO2 is a greenhouse gas, releasing it into the atmosphere poses environmental threats; with increasing awareness of its environmental impact and the ratification of the Kyoto Protocol by most member countries, it is expected that the release of CO2 into the atmosphere will be limited.

Sulfur exists in natural gas as hydrogen sulfide (H2S), which is corrosive; gas containing H2S is called sour gas in the natural gas industry. To remove H2S and CO2 from natural gas, amine solutions are generally used (Chakma 1997; EIA 2006). Sulfur removal is generally achieved by a variant of the Claus process, in which the hydrogen sulfide is partially oxidized. The hydrogen sulfide is absorbed from the natural gas at ambient temperature in a scrubber or in an alkanolamine-glycol solution. The natural gas is run through a tower containing the amine solution, which has an affinity for sulfur. Two principal amine solutions are used: monoethanolamine (MEA) and diethanolamine (DEA). Both DEA and MEA in liquid form absorb sulfur compounds from natural gas as it passes through, and the effluent gas is free of sulfur. The amine solution can be regenerated (by removing the absorbed sulfur), allowing it to be reused to treat more natural gas. It is also possible to use solid desiccants, such as iron sponges, to remove the sulfide and carbon dioxide. Amine solutions and various membrane technologies are used for CO2 removal from natural gas streams (Wills 2004). However, glycols and amines are toxic chemicals with several health and environmental impacts (Melnick 1992), and glycols become very corrosive in the presence of oxygen. CO2 removal is also practiced using molecular-gate systems, in which membranes sieve molecules according to size (Wills 2004).
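The heating value penalty of acid gas can be illustrated with a simple dilution estimate. The sketch assumes pure methane at roughly 1,010 Btu/scf (an assumed typical value, not taken from this chapter) and treats the acid gas fraction as contributing nothing:

    # Heating value of a methane/CO2 blend on a per-scf basis.
    def blend_heating_value(methane_fraction, hv_methane=1010.0):
        """Btu/scf if the non-methane fraction carries no heating value."""
        return methane_fraction * hv_methane

    print(blend_heating_value(1.00))  # ~1,010 Btu/scf, pure methane
    print(blend_heating_value(0.92))  # ~929 Btu/scf with 8% CO2 (Table 12.2 upper bound)

Eight percent CO2 thus costs roughly 8% of the delivered energy per cubic foot, which is one reason suppliers must keep acid gas below pipeline-quality limits (H2S is removed primarily for corrosion and toxicity reasons).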
12.11.1 Use of Membranes for Gas Processing
The separation of natural gas components by membranes is a dynamic and rapidly growing field, and it has proven technically and economically competitive with other emerging technologies (Basu et al. 2004). This is due to certain advantages of membrane technology, including low capital investment, low weight, small space requirements, and high process flexibility, together with the high recovery of the desired gases. Du et al. (2006) reported composite membranes comprising a thin cationic poly(N,N-dimethylaminoethyl methacrylate) (PDMAEMA) layer on a microporous polysulfone (PSF) substrate, prepared by coating a layer of PDMAEMA onto the PSF substrate. The membrane showed a high permselectivity for CO2; this high CO2/N2 permselectivity makes such membranes suitable for removing CO2 from natural gas streams and for capturing CO2 from power plant flue gas. By low-temperature plasma grafting of DMAEMA onto a polyethylene substrate, a membrane with high CO2 permeance was obtained (Matsuyama et al. 1996). DeMontigny et al. (2006) studied the performance of microporous polypropylene (PP) and polytetrafluoroethylene (PTFE) hollow-fiber membranes in a gas absorption membrane (GAM) system using aqueous solutions of monoethanolamine (MEA) and 2-amino-2-methyl-1-propanol (AMP); they reported that gas absorption membrane systems are an effective technology for absorbing CO2 from simulated flue gas streams. Markiewicz et al. (1988) reported that different types of polymeric membranes have been used for the removal of CO2, H2S, N2, water vapor, and other components. However, the majority of these membranes are made synthetically from polymers, which can have negative environmental impacts. To avoid this problem, non-toxic biopolymers are considered attractive alternatives to conventional membrane separation systems (Basu et al. 2004).
12.11.2 Nitrogen and Helium Removal

After the H2S and CO2 have been removed, the natural gas stream is routed to the nitrogen rejection unit, where it is further dehydrated using molecular sieve beds (Figure 12.5). In the nitrogen rejection unit, the gas stream is channeled through a series of passes through a column and a heat exchanger, where the nitrogen is cryogenically separated and vented. Absorption systems can also be applied to separate the nitrogen from the hydrocarbons (EIA 2006). Helium, in turn, can be extracted from the gas stream through membrane diffusion.
12.12 Problems in Natural Gas Processing

Conventional natural gas processing relies on various synthetic chemicals and polymeric membranes. The common chemicals used to remove water, CO2, and H2S are diethylene glycol (DEG), triethylene glycol (TEG), monoethanolamine (MEA), diethanolamine (DEA), and triethanolamine (TEA). These synthetic chemicals have health and environmental impacts over their life cycles, from production to end use. Their pathways and impacts are discussed in the following sections.
12.12.1 Pathways of Glycols and Their Toxicity
Matsuoka et al. (2005) reported a study on the electro-oxidation of methanol and ethylene glycol.
Figure 12.9 Ethylene glycol oxidation pathway in alkaline solution (Matsuoka et al. 2005). Ethylene glycol (CH2OH)2 is oxidized via glycol aldehyde CH2OH(CHO) and glyoxal (CHO)2 toward glyoxylate, which branches into a non-poisoning path and a CO-poisoning path.
They found that the electro-oxidation of ethylene glycol at 400 mV forms glycolate, oxalate, and formate (Figure 12.9). The study further reports that glycolate is obtained by a three-electron oxidation of ethylene glycol and remains electrochemically active even at 400 mV, which leads to its further oxidation. Oxalate was found to be stable and underwent no further oxidation; this is termed the non-poisoning path. The other product of glycol oxidation, formate, marks what is termed the poisoning, or CO-poisoning, path. A drastic difference in ethylene glycol oxidation was noted between 400 and 500 mV: glycolate formation decreased from 40% to 18%, while formate increased from 15% to 20%. In the case of methanol oxidation the formate was oxidized to CO2, but ethylene glycol oxidation produces CO instead of CO2 and follows the poisoning path above 500 mV. Glycol oxidation produces glycol aldehyde as an intermediate product, and as the temperature increases, the CO poisoning may also increase.

Glycol ethers are known to produce toxic metabolites during biodegradation, such as the teratogenic methoxyacetic acid, so the biological treatment of glycol ethers can be hazardous (Fischer and Hahn 2005). Abiotic degradation experiments with ethylene glycol monomethyl ether (EGME) showed that the by-products include toxic aldehydes, e.g., methoxy acetaldehyde (MALD). Glycol passes into the body by inhalation or through the skin. Ethylene glycol toxicity causes depression and kidney damage (MSDS 2005). As indicated in the MSDS report, ethylene glycol in the form of dinitrate can have harmful effects when breathed in; on passing through the skin it can irritate, causing a rash or a burning feeling on contact. It can also cause headache, dizziness, nausea, vomiting, abdominal pain, and a fall in blood pressure. High concentrations can interfere with the ability of the blood to carry oxygen, causing headache, dizziness, a blue color of the skin and lips (methemoglobinemia), breathing difficulties, collapse, and even death. It can damage the heart, causing chest pain and/or an increased heart rate, or cause the heart to beat irregularly (arrhythmia), which can be fatal. High exposure may affect the nervous system and may damage red blood cells, leading to anemia (a low blood count). The recommended airborne exposure limit is 0.31 mg/m3 averaged over an 8-hour work shift. In a study of the carcinogenic toxicity of propylene glycol in animals, skin tumors were observed (CERHR 2003).

Ingestion of ethylene glycol is a toxicological emergency (Glaser 1996). Ethylene glycol is commonly found in a variety of commercial products, including automobile antifreeze, and if ingested it causes severe acidosis, calcium oxalate crystal formation and deposition, and other fatal organ damage (Davis et al. 1997). It is a high-production-volume (HPV) chemical, generally used to synthesize polyethylene terephthalate (PET) resins, unsaturated polyester resins, polyester fibers, and films (SRI 2003). Moreover, ethylene glycols are constituents of antifreeze, deicing fluids, heat transfer fluids, industrial coolants, and hydraulic fluids. Several studies have consistently demonstrated that the kidney is a primary target organ after acute or chronic exposure to ethylene glycol (NTP 1993; Cruzan et al. 2004). Renal toxicity, metabolic acidosis, and central nervous system (CNS) depression have also been reported in humans after intentional or accidental overdoses (Eder et al. 1998). Browning and Curry (1994) reported that, because of its widespread availability, serious health concerns attach to the potential toxicity of ethylene glycol ethers. From this review of the literature, it is obvious that glycol poses health and environmental problems; hence, the search for alternative materials with smaller environmental impacts is very important.
12.12.2 Pathways of Amines and Their Toxicity
Amines are considered to have negative environmental impacts. It was reported that occupational asthma was found in a patient handling a cutting fluid containing diethanolamine (DEA). DEA causes asthmatic airway obstruction at concentrations of 0.75 m g / m 3 and 1.0 m g / m 3 (Piipari et al. 1998). Toninello (2006) reported that the oxidation of amines appears to be carcinogenic. DEA also reversibly inhibits phosphatidylcholine synthesis by blocking choline uptake (Lehman-McKeeman and Gamsky 1999). Systemic toxicity occurs in many tissue types including the nervous system, liver, kidney, and blood system. Härtung et al. (1970) reported that inhalation by male rats of 6 ppm (25.8 mg/m 3 ) DEA vapor 8 hours/day, 5 days/week for 13 weeks resulted in depressed growth rates, increased lung and kidney weights, and even some mortality. Rats exposed continuously for 216 hours (nine days) to 25 ppm (108 mg/m 3 ) DEA showed increased liver and kidney weights and elevated blood urea nitrogen. Barbee and Härtung (1979) reported changes in liver mitochondrial activities in rats following exposure to DEA in drinking water. Melnick (1992) reported that symptoms associated with diethanolamine intoxication included increased blood pressure,
496
THE GREENING OF PETROLEUM OPERATIONS
diuresis, salivation, and pupillary dilation (Beard and Noe 1981). Diethanolamine causes mild skin irritation in the rabbit at concentrations above 5% and severe ocular irritation at concentrations above 50% (Beyer et al. 1983). Diethanolamine is a respiratory irritant and, thus, might exacerbate asthma, which has a more severe impact on children than on adults (Chronic Toxicity Summary 2001). The summary reports showed that diethanolamine is corrosive to eyes, mucous membranes, and skin. Liquid splashed in the eye causes intense pain and corneal damage, and permanent visual impairment may occur. Prolonged or repeated exposure to vapors at concentrations slightly below the irritant level often results in corneal edema, foggy vision, and the appearance of halos around lights. Skin contact with liquid diethylamine causes blistering and necrosis. Exposure to high vapor concentrations may cause severe coughing, chest pain, and pulmonary edema. Ingestion of diethylamine causes severe gastrointestinal pain, vomiting, and diarrhea, and may result in perforation of the stomach. Because large volumes of amines are used for natural gas processing and other chemical processes, they may well have negative environmental and health impacts over their life cycles.

12.12.3 Toxicity of Polymer Membranes
Synthetic polymers are made from the heavier fractions of petroleum derivatives. Hull et al. (2002) reported that the combustion of ethylene-vinyl acetate copolymer (EVA) produces a high yield of CO and several volatile compounds along with CO2. For this reason, biopolymers are being considered as an alternative to synthetic polymers.
12.13 Innovative Solutions for Natural Gas Processing
12.13.1 Clay as a Glycol Substitute for Water Vapor Absorption

Clay is a porous material containing various minerals such as silica, alumina, and several others. Various types of clays, such as kaolinite and bentonite, are widely used in many industries as sorbents.
Abidin (2004) reported that sorption depends on the available surface area of clay minerals and is very sensitive to environmental changes. Low et al. (2003) reported that the water absorption characteristics of sintered sawdust clay can be modified by the addition of sawdust particles to the clay. Dry clay as a plaster has a water absorption coefficient of 0.067-0.075 kg/(m² s^1/2), where the weight of water absorbed is in kilograms, the surface area is in square meters, and time is in seconds (Straube 2000). Recently, Chhetri and Islam (2008b) conducted a detailed study on the use of clay as a moisture removal technique. The moisture contents of two clays were determined by drying the soils in an oven for 24 hours at 110°C. Triplicate clay samples were taken for the experiment. It was observed that there was an average of 1% moisture content by weight in Nova Scotia clay and 2.2% moisture in bentonite clay. The particle size distribution of the Nova Scotia clay was determined by sieve analysis. Sieve openings from 0.074 to 4.75 mm were used. The soil sample was weighed, placed in the sieve stack, and agitated for 15 minutes so that the particles passed through to their corresponding sieve sizes. Table 12.11 summarizes the sieve analysis, showing the sieve opening size, the mass of soil retained on each sieve, and the percent finer. Figure 12.10 shows the semilog plot of percent finer versus grain size. From the figure, it is observed that 99.74% of the soil particles passed through the 4.75 mm sieve and 93.85% passed through the 2 mm opening. This indicates the size of the particles present in the Nova Scotia clay sample taken for the water vapor absorption.

Table 12.11 Particle size distribution by sieve analysis.
Sieve number | Opening (mm) | Sieve weight (g) | Final weight (g) | Soil wt (g) | Percent mass | Cumulative percent | Percent finer
4   | 4.75  | 766.7 | 768.7 | 2.0   | 0.260  | 0.260   | 99.740
10  | 2     | 468.5 | 497.8 | 29.3  | 5.886  | 6.146   | 93.854
20  | 0.85  | 409.2 | 521.7 | 112.5 | 21.564 | 27.710  | 72.290
40  | 0.42  | 420.0 | 628.3 | 208.3 | 33.153 | 60.863  | 39.137
60  | 0.25  | 287.3 | 391.6 | 104.3 | 26.634 | 87.497  | 12.503
100 | 0.149 | 362.5 | 386.3 | 23.8  | 6.161  | 93.658  | 6.342
200 | 0.074 | 361.6 | 372.0 | 10.4  | 2.796  | 96.454  | 3.546
Pan | 0     | 278.3 | 288.1 | 9.8   | 3.402  | 100.060 | -0.060
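The percent-finer column in Table 12.11 follows from the standard sieve-analysis arithmetic: the soil retained on each sieve is the difference between the final and empty sieve weights, the percent retained is this mass over the total, and the percent finer is 100 minus the running cumulative percent. The following minimal sketch (our illustration, not part of the original study) reproduces the calculation from the transcribed masses; because it normalizes by the total retained mass, its percentages may differ from those printed in the table.

# Minimal sketch (illustrative): standard sieve-analysis calculation
# behind Table 12.11. Masses are transcribed from the table.

sieves = [
    # (sieve no., opening mm, empty sieve g, sieve + soil g)
    ("4",   4.75,  766.7, 768.7),
    ("10",  2.00,  468.5, 497.8),
    ("20",  0.85,  409.2, 521.7),
    ("40",  0.42,  420.0, 628.3),
    ("60",  0.25,  287.3, 391.6),
    ("100", 0.149, 362.5, 386.3),
    ("200", 0.074, 361.6, 372.0),
    ("Pan", 0.0,   278.3, 288.1),
]

retained = [final - empty for _, _, empty, final in sieves]
total = sum(retained)                       # total soil recovered (g)

cumulative = 0.0
for (num, opening, _, _), mass in zip(sieves, retained):
    pct = 100.0 * mass / total              # percent retained on this sieve
    cumulative += pct
    finer = 100.0 - cumulative              # percent passing this sieve
    print(f"Sieve {num:>3} ({opening:5.3f} mm): retained {pct:6.3f}%, finer {finer:7.3f}%")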
Figure 12.10 Plot of percent finer vs. grain size.
Figure 12.11 Vapor absorption by bentonite clay (Run 1).
Figures 12.11 through 12.13 show the total and cumulative water vapor absorption for bentonite clay, and Figures 12.14 through 12.16 show the same for Nova Scotia clay. It was observed that Runs 1, 2, and 3 of the test for bentonite clay absorbed 8.05%, 8.57%, and 8.96% of water vapor by weight, respectively.
Figure 12.12 Vapor absorption by bentonite clay (Run 2).
Figure 12.13 Vapor absorption by bentonite clay (Run 3).
Similarly, in the absorption tests on Nova Scotia clay, the total water vapor absorption for Runs 1, 2, and 3 was found to be 6.10%, 6.93%, and 5.57% by weight of clay, respectively. This shows that clay can be used as a material for water vapor absorption from natural gas streams.
12.13.2 Removal of CO2 Using Brine and Ammonia
A recent patent (Chaalal and Sougueur 2007) showed that carbon dioxide can be removed from exhaust gas by reacting it with saline water.
Figure 12.14 Vapor absorption by Nova Scotia clay (Run 1).
Figure 12.15 Vapor absorption by Nova Scotia clay (Run 2).
In this process, an ammonia solution is combined with CO2 in two steps. First, ammonium carbonate is formed:

2NH3 + CO2 + H2O => (NH4)2CO3

In the second step, when (NH4)2CO3 is supplied with excess CO2, ammonium hydrogen carbonate is formed:

(NH4)2CO3 + CO2 + H2O => 2NH4HCO3 (aq.)
Figure 12.16 Vapor absorption by Nova Scotia clay (Run 3).
When ammonium hydrogen carbonate reacts with brine, it forms sodium bicarbonate and ammonium chloride:

NH4HCO3 + NaCl => NaHCO3 + NH4Cl
Hence, by using this process, carbon dioxide can be removed from natural gas streams. When sodium bicarbonate is heated to between 125 and 250°C, it is converted to sodium carbonate, driving off water of crystallization and forming anhydrous sodium carbonate, or crude soda ash (Delling et al. 1998):

2NaHCO3 + heat => Na2CO3 + CO2 + H2O

This CO2 can be used for other purposes, such as CO2 flooding for enhanced gas recovery (Oldenburg et al. 2001). If ammonium chloride is heated, NH4Cl decomposes into ammonia and hydrochloric acid:

NH4Cl => NH3 + HCl

This ammonia can be re-used in the process, and the HCl can be used in other chemical processes.
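Overall, the route fixes one mole of CO2 per mole of ammonia and salt consumed, with the ammonia recoverable by decomposing the ammonium chloride. The following rough stoichiometric sketch (our illustration, not from the cited patent) estimates the reagent demand per tonne of CO2 captured.

# Rough stoichiometric sketch (illustrative, not from the cited patent):
# overall NH3 + CO2 + H2O + NaCl => NaHCO3 + NH4Cl, i.e., one mole each
# of NH3 and NaCl per mole of CO2 fixed as sodium bicarbonate.

M_CO2, M_NH3, M_NaCl, M_NaHCO3 = 44.01, 17.03, 58.44, 84.01  # g/mol

co2_captured_kg = 1000.0            # one tonne of CO2
mol = co2_captured_kg / M_CO2       # kmol of CO2

print(f"NH3 required   : {mol * M_NH3:6.0f} kg (recoverable via NH4Cl decomposition)")
print(f"NaCl required  : {mol * M_NaCl:6.0f} kg")
print(f"NaHCO3 produced: {mol * M_NaHCO3:6.0f} kg")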
12.13.3 CO2 Capture Using Regenerable Dry Sorbents

Capturing CO2 is also possible by using regenerable sorbents such as sodium bicarbonate. Green et al. (2001) reported that sodium bicarbonate (NaHCO3) can be used as a regenerable sorbent to economically capture CO2 from dilute flue gas streams:

Na2CO3(s) + CO2(g) + H2O(g) => 2NaHCO3(s)

When the NaHCO3 is heated, CO2 is released, re-forming Na2CO3:

2NaHCO3(s) => Na2CO3(s) + CO2(g) + H2O(g)

In this cycle, CO2 is absorbed from a flue gas stream by Na2CO3(s), and the sorbent is regenerated by heat as the CO2 is released. Thus, this process can be used to capture CO2 from a low-pressure stream and produce a more concentrated CO2 for other purposes, such as EOR operations.
12.13.4 CO2 Capture Using Oxides and Silicates of Magnesium
There are other techniques that can be used to capture CO2 from exhaust gas streams. Zevenhoven and Kohlmann (2001) studied the use of magnesium silicate to capture CO2 from exhaust gas streams. The process is called magnesium silicate or magnesium oxide carbonation:

MgSiO3 + CO2(g) => MgCO3 + SiO2
MgO + CO2(g) => MgCO3

The reaction kinetics showed that the carbonation varies depending on the partial pressure of CO2. The preferred temperature for carbonation is reported to be 200-400°C. The pressure required to drive the chemical reaction to the right-hand side is reported to be much higher than 1 bar (up to 10^6 bar). Goldberg et al. (2001) reported 40-50% conversion after 24 hours at 150-250°C and 85-125 bar pressure with olivine ((Mg,Fe)2SiO4) particles of 75-100 μm. Lee et al. (2006) reported that potassium-based sorbents are prepared by impregnation of K2CO3 on activated carbon as a porous
support. Table 12.12 summarizes the sorbents prepared by impregnation of potassium carbonate (30 wt%) in the presence of 9 vol% H2O at 60°C and their corresponding total CO2 capture capacities. It was reported that the CO2 capture capacity of K2CO3/AC, K2CO3/TiO2, K2CO3/MgO, and K2CO3/Al2O3 was 86, 83, 119, and 85 mg CO2/g sorbent, respectively. Moreover, these sorbents could be completely regenerated at 150, 150, 350, and 400°C, respectively. Based on regeneration capacity, K2CO3/TiO2 was considered a potential sorbent for CO2 capture. Hence, by employing these methods to capture CO2 from natural gas streams, conventional chemical-based absorbents such as DEA and MEA can be replaced with natural and non-toxic materials.
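These capacities can be checked against the stoichiometric ceiling of the carbonate loading, since K2CO3 + CO2 + H2O => 2KHCO3 binds at most one mole of CO2 per mole of K2CO3. The back-of-envelope sketch below is our own check, not from Lee et al. (2006); a reported capacity above the ceiling, as for K2CO3/MgO, would be consistent with the support itself also taking up CO2.

# Back-of-envelope check (illustrative, not from Lee et al. 2006):
# each gram of K2CO3 can bind at most 44.01/138.21 g of CO2, so a
# sorbent with 30 wt% K2CO3 has a stoichiometric ceiling of:

M_CO2, M_K2CO3 = 44.01, 138.21      # g/mol
loading = 0.30                      # 30 wt% K2CO3 on the support

ceiling = 1000.0 * loading * M_CO2 / M_K2CO3   # mg CO2 per g sorbent
print(f"Ceiling: {ceiling:.0f} mg CO2/g sorbent")   # about 96 mg/g

# Reported values of 83-86 mg/g (AC, TiO2, Al2O3) sit close to this
# ceiling; 119 mg/g for K2CO3/MgO exceeds it.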
12.13.5 H2S Removal Techniques
Hydrogen sulfide (H2S) is one of the impurities carried in natural gas streams. A significant amount of hydrogen sulfide is emitted from industrial activities such as petroleum refining (Henshaw et al. 1999) and natural gas processing (Kim et al. 1992). H2S is a toxic, odorous (Roth 1993), and corrosive compound that seriously affects internal combustion engines (Tchobanoglous et al. 2003).
Table 12.12 Sorbents prepared by impregnation of potassium carbonate (30 wt%) in the presence of 9 vol% H2O at 60°C.

Sorbent | Total CO2 capture capacity (mg CO2/g sorbent)
K2CO3/AC    | 86.0
K2CO3/Al2O3 | 85.0
K2CO3/USY   | 18.9
K2CO3/CsNaX | 59.4
K2CO3/SiO2  | 10.3
K2CO3/MgO   | 119.0
K2CO3/CaO   | 49.0
K2CO3/TiO2  | 83.0
If inhaled, H2S reacts with enzymes in the bloodstream and inhibits cellular respiration, which can cause pulmonary paralysis, sudden collapse, and even death at higher concentrations (Syed et al. 2006). Natural gas contains 0-5% hydrogen sulfide, depending on the reservoir (Natural Gas 2004). Hydrogen sulfide is also present in biogas, which is used as fuel for internal combustion engines and cooking appliances. Lastella et al. (2002) reported that hydrogen sulfide in biogas varies from 0.1% to 2%, depending on the type of feedstock. The removal of H2S by chemical processes is very expensive due to the large chemical, energy, and processing requirements (Buisman et al. 1989). Hence, biological treatment for hydrogen sulfide removal is considered a more attractive alternative to chemical treatment (Sercu et al. 2005) because it can overcome the disadvantages of the chemical treatment processes (Elias et al. 2002). Biological removal involves the conversion of H2S into elemental sulfur (S°) by bacteria. Among the various bacteria available, Syed et al. (2001) showed that Chlorobium limicola is a desirable bacterium because it can grow using only inorganic substrates. It also has a high efficiency at converting sulfide into elemental sulfur, which is produced extracellularly, while converting CO2 into carbohydrates (van Niel 1931). The process requires light and CO2 and can be strictly anaerobic. The oxidation product is elemental sulfur, which can be used in other chemical processes:

2nH2S + nCO2 + light energy => 2nS° + n(CH2O) + nH2O
A number of chemotrophs, such as Thiobacillus, Thermothrix, Thiothrix, and Beggiatoa, can be used for the biodegradation of H2S. These bacteria use CO2 as a carbon source and derive chemical energy from the oxidation of inorganic compounds such as H2S, producing new cell material. The reactions in the presence of Thiobacillus thioparus, as reported by Chang et al. (1996) and Kim et al. (2002), are as follows:

2HS- + O2 => 2S° + 2OH-
2S° + 3O2 + 2OH- => 2SO4(2-) + 2H+
H2S + 2O2 => SO4(2-) + 2H+
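The last reaction fixes the oxygen demand of complete chemotrophic oxidation, a quantity useful when sizing such a treatment system. The following minimal sketch (our illustration) works out the arithmetic.

# Minimal sketch (illustrative): oxygen demand of the chemotrophic
# oxidation H2S + 2O2 => SO4(2-) + 2H+, per kilogram of H2S removed.

M_H2S, M_O2 = 34.08, 32.00            # g/mol
o2_full = 2 * M_O2 / M_H2S            # complete oxidation to sulfate
o2_partial = M_O2 / (2 * M_H2S)       # partial oxidation, 2HS- + O2 => 2S + 2OH-

print(f"Complete oxidation: {o2_full:.2f} kg O2 per kg H2S")     # ~1.88
print(f"Partial (to S):     {o2_partial:.2f} kg O2 per kg H2S")  # ~0.47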
Oyarzun et al. (2003) reported that Thiobacillus species are widely used for the conversion of H2S and other sulfur compounds by biological processes. They have the ability to grow under various environmental stress conditions, such as oxygen deficiency, acidic conditions, and both low and high pH. Hence, with a suitable system design, in which the natural gas stream is passed through such bacteria for a sufficient retention time, H2S can be removed.
12.14 Concluding Remarks
The crude oil pathway shows that a natural process drives the formation of crude oil without any impact on other species in the world. However, the pathway analysis of refined oil shows that its processes create several environmental impacts across the globe. Refining crude oil involves the application of large amounts of synthetic chemicals and catalysts, including heavy metals such as lead, chromium, and platinum, as well as sulfuric and hydrofluoric acids. Moreover, refining crude oil emits large amounts of VOCs and toxic air pollutants. Refined oils degrade more slowly and persist in the natural environment for a longer duration, affecting the environment in several ways. Because the refining of fossil fuels emits large amounts of CO2, it has been linked to global warming and climate change. Hence, a paradigm shift in conventional engineering practices is necessary in order to reduce the emissions and impacts on the natural environment. The review of various natural gas processing techniques and the chemicals used during gas processing shows that currently used chemicals are not sustainable. Some natural substitutes for these chemicals have also been experimentally investigated. They offer sustainable alternatives for gas processing.
13 Flow Assurance in Petroleum Fluids

13.1 Introduction
Oil and gas have been the primary sources of energy for the past 70 years. All predictions indicate that this trend is likely to continue. In this regard, the role of production technologies cannot be overemphasized. Some 30% of petroleum infrastructure costs relate to production operations, mainly in assuring flow from the wellbore to the processing plants, refineries, and oil tankers. After a petroleum well is drilled, the most important preoccupation of the petroleum industry is to ensure uninterrupted flow through various tubulars, both underground and above ground. Billions of dollars are spent every year to make sure that access to processing plants, refineries, storage sites, and oil tankers remains open and free of leaks or plugging. This exorbitant cost does not include the long-term intangible costs, such as environmental impacts, effects of residuals remaining in petroleum streams, loss of quality due to interference, and others. Any improvement in the current practices can translate into saving billions of dollars in the short term and much more in the long term. This chapter discusses some of the current practices of flow assurance during petroleum production. The chapter focuses on
gas hydrates and corrosion, with some discussion of asphaltenes. For every practice, it is shown that some fundamental adjustments can lead to drastic improvements in performance in the short term as well as in the long term. The short-term benefit is mainly saving material costs, and the long-term benefits are in the true sustainability of the technologies.
13.1.1 Hydrate Problems
The production and transmission of natural gas is a very complex set of operations (Figure 13.1). There are virtually hundreds of different compounds in the natural gas stream coming out of the production well. A natural gas stream consists of methane, ethane, propane, butane, gas condensate, liquid petroleum, water vapor, carbon dioxide, hydrogen sulfide, nitrogen, and other gases and solid particles. The overall attraction of natural gas as one of the most environmentally acceptable sources of energy is marred by the presence of some highly unwanted compounds in the natural gas stream that comes out of the production well. Traces of nitrogen compounds in natural gas are believed to cause ozone layer depletion and contribute to global warming. The H2S and CO2 in the natural gas stream decrease the heating value of natural gas, thereby reducing its overall efficiency as a fuel. These gases are commonly known as acid gases, and they must be removed from the natural gas before it is transported from the production well to the consumer market (Chakma 1999). Hydrogen sulfide, in particular, is a very toxic and corrosive gas that oxidizes instantaneously to sulfur dioxide and disperses in the atmosphere (Basu et al. 2004). These gases render the water content in the gas stream even more corrosive. Hence, the removal of free water, water vapor, and condensates is a very important step during gas processing. The water content in natural gas is exceptionally corrosive, and it has the potential to destroy the gas transmission system. Water in a natural gas stream can condense and cause slugging in the flow. The water content can also initiate the formation of hydrates, which in turn can plug the whole pipeline system (Nallinson 2004). Natural gas hydrates are ice-like crystalline solids that are formed by the mixing of water and natural gas (typically methane). In order to transform the raw natural gas stream into "line quality" gas, certain quality standards have to be maintained, and the natural gas should be rid of these impurities before it can be transported
Figure 13.1 Natural gas processing (Tobin et al. 2006).
through pipelines. This whole process of purification is known as gas processing, and it guards against corrosion, hydrate formation, and other environmental and safety hazards related to natural gas transportation (Chhetri and Islam 2007). The above discussion has elaborated the importance of removing water from the natural gas transmission stream. This not only provides protection against corrosion problems, but, most importantly, it helps prevent the formation of hydrates in the pipeline. The discovery of hydrates is attributed to Humphry Davy, who claimed in the early nineteenth century that a solid material can be formed when an aqueous solution of chlorine is cooled below 9°C (Davy 1811). These results were confirmed by Michael Faraday, who proved the existence of these solid compounds and showed that the composition of the solid is almost 1:10 for chlorine and water, respectively (Faraday et al. 1823). Throughout the remainder of the nineteenth century many other scientists experimented with hydrates, e.g., Wroblewski, Cailletet, Woehler, Villard, de Forcrand, Schutzenberger, and Sully Thomas, among others (Schroeder 1926). In particular, Villard was the one who reported the existence of hydrates of methane, ethane, acetylene, and ethylene (Villard 1888). All the abovementioned research efforts were only of academic interest, and it was not until 1934 that Hammerschmidt discovered that clathrate hydrates were responsible for plugging natural gas
pipelines, especially those located in comparatively colder environments, and that hydrate formation was linked to gas transmission in a pipeline (Hammerschmidt 1934). By the turn of the 21st century, Sloan's work on the development of chemical additives and other methods to inhibit hydrate formation had led to the construction of the first predictive models of hydrate formation (Sloan 1998). Natural gas hydrates have held the interest of petroleum researchers for the past four decades. The role of natural gas hydrates has been evaluated as (1) a future source of abundant energy, (2) a hazard to marine geo-stability, and (3) a cause of change in the worldwide climate (Kvenvolden 1993). It has already been projected that natural gas hydrates are one of the most promising future sources of energy. Some estimates put the size of the hydrate reserves at a magnitude that would be enough to last for many decades, if not centuries (Kvenvolden 1988). Methane hydrates were first found in nature in Siberia in 1964, and it was reported that they were being produced in the Messoyakha Field from 1970 to 1978 (Sapir et al. 1973). Another discovery was made in the Mackenzie delta (Bily 1974) and then on the North Slope of Alaska (Collett 1983). These and subsequent discoveries of methane hydrates led many scientists to speculate about the universal existence of large reserves of hydrates, because the low-temperature, high-pressure conditions that are necessary for the formation of hydrates exist all around the globe, especially in the permafrost and deep ocean regions. Many countries with large energy needs but limited domestic energy resources (e.g., Japan and India) have been carrying out aggressive and well-funded hydrate research and development programs to initiate the production of methane from hydrates on a commercial basis. These programs have led to the recovery of large hydrate nodules, core collections of ocean-bottom hydrate sediments, and the drilling of wells designed specifically to investigate methane hydrate bearing strata (Max 2000; Park et al. 1999). In the global energy outlook, in which rising costs, depleting oil reserves, and the future energy needs of emerging economies are constantly extrapolated, methane hydrates are considered the most valuable future energy prospect. However, it is also hypothesized that these hydrates play a crucial role in nature; they interact with sea-bottom life forms, help restore the stability of the ocean floor, balance the global carbon cycle, and affect long-term climate change (Dickens et al. 1997).
These concerns have led to the development of different additives and to the examination of the long-term effects of drilling in hydrate reserves for natural gas, with corroborating evidence from the cores of different drilling sites (Bains et al. 1999; Katz et al. 1999; Norris et al. 1999). Other concerns related to the technical aspects of producing methane hydrates include the hazards posed by hydrate-bearing sediments to conventional oil and gas drilling operations (Max et al. 1998). Even though both the balance between the pros and cons of exploring gas hydrates for methane production and the credibility of gas hydrates as a future source of cheap and abundant energy may take a long time to be fully established, gas hydrates remain one of the most pressing problems for the natural gas transportation industry. Natural gas hydrates have been a potential cause of harm and damage to the natural gas transportation industry, affecting its personnel and infrastructure. Incidents have been reported in which hydrate plugs, acting as projectiles, have caused loss of life and millions of dollars in material costs. It has also been documented that natural gas hydrate plugs have adverse effects on drilling activities and threaten pipelines.
13.1.2 Corrosion Problems in the Petroleum Industry
The petroleum industry has been the backbone of the world economy for the last 60 years. The United States has been the world leader in petroleum engineering technologies. A large tranche of petroleum infrastructure and maintenance costs relates to production operations, mainly in assuring flow from the wellbore to the processing plants, refineries, and oil tankers. Billions of dollars are spent annually to ensure that access to processing plants, refineries, storage sites, and oil tankers is free of leaks or plugging. This exorbitant cost does not include intangible longer-term costs, such as environmental impacts, effects of residuals remaining in petroleum streams, loss of quality due to interference, and others. The biggest challenge in assuring petroleum flow through pipelines has been corrosion. One federal study attributed a total cost of $276 billion to corrosion in 2002. This represented a rise of more than $100 billion over five years, approximately 3.1% of GDP (Koch et al. 2002), which is more than the combined contribution of the agricultural and mining sectors. Congress was sufficiently alarmed to enact a Corrosion Prevention Act (2007), offering a tax incentive of 50% to
companies that invest in corrosion abatement and prevention. The petroleum sector carries the biggest share of this cost, followed by the U.S. Department of Defense. For the petroleum industry, the cost of corrosion and scaling represents anything from 30% (mainland) to 60% (offshore) of total maintenance expenditure. That is a huge price tag for an industry that has carried the burden of supporting the bulk of the energy needs of this planet, and predictions are that this trend is likely to continue. Yet, few new technologies have emerged to solve this debilitating problem that the petroleum industry faces (Chilingar et al. 2008). Recently, Chilingar et al. (2008) provided a step-by-step analysis of current practices of flow assurance during petroleum production, focusing on corrosion and scaling problems. They suggest numerous adjustments in practices and provide a guideline bound to save millions in preparatory research work in a field project. However, little is said about microbial-induced corrosion (MIC). It is estimated that 30% of the corrosion in the petroleum industry is due to microbially induced activities (Al-Darbi et al. 2002). MIC is extremely harmful to both industry and the environment. It is estimated that 20-30% of all corrosion is microbiologically influenced, with a direct cost of $30-50 billion per year (Javaherdashti 1999). One of the most important types of microbial corrosion is that due to the presence of sulfate-reducing bacteria (SRB), which is most common in petroleum operations because of the prevailing anaerobic environment (Phelps et al. 1991). Therefore, the protection of structures against MIC has become very critical in many industries, including municipal pipelines, marine, storage vessels, sewage treatment facilities, and so on (Geesey et al. 1994). The study of microbiologically influenced corrosion (MIC) has progressed from phenomenological case histories to a mature interdisciplinary science including electrochemical, metallurgical, surface analysis, microbiological, biotechnological, and biophysical techniques (Little and Wagner 1994). Microorganisms such as bacteria, algae, and fungi, under certain conditions, can thrive and accelerate the corrosion of many metals, even in otherwise benign environments. Biological organisms can enhance the corrosion process by their physical presence, metabolic activities, and direct involvement in the corrosion reaction (Hamilton 1985). The occurrence of MIC is often characterized by unexpected severe metal attack, the presence of excessive deposits, and, in many cases, the rotten-egg odor of hydrogen sulfide (Lee et al. 1995).
For a microorganism to grow, environmental conditions must be favorable. Essential nutrients required by most microbes include carbon, nitrogen, phosphorus, oxygen, sulfur, and hydrogen. Other elements required in trace quantities include potassium, magnesium, calcium, iron, copper, zinc, cobalt, and manganese. All organisms require carbon for conversion into cell constituents (Tanji 1999). The main bacteria related to MIC are aerobic slime formers, acetate-producing bacteria, acetate-oxidizing bacteria, iron/manganese-oxidizing bacteria, methane producers, organic acid-producing bacteria, sulfur/sulfide-oxidizing bacteria (SOB), and sulfate-reducing bacteria (SRB). Conventionally, only chemical approaches have been taken in combating MIC. Numerous toxic chemicals are used to destroy the microbes that cause corrosion. Only recently did Al-Darbi (2004) propose, in his PhD work, a series of natural alternatives to these toxic agents. These agents are just as efficient as toxic chemicals but do not contain any toxins.
13.2 The Prevention of Hydrate Formation
Many techniques are used to prevent hydrate formation in offshore pipeline systems, including the dehydration of natural gas, operating outside the hydrate formation zone, and the addition of gas hydrate inhibitors. Figure 13.2 shows the phase diagram of gas hydrates. The first technique is the complete extraction of water before the natural gas is transmitted through the pipeline. In this method, a dehydration plant is utilized, which can be installed either offshore or onshore. The major disadvantage of this method is the cost of both installing and operating the dehydration unit. These costs are even higher in the case of offshore plants (Hidnay and Parrish 2006). In the second method, the temperature and pressure of the natural gas are kept outside the hydrate formation zone. Heat is introduced to the pipeline system so that the fluid is maintained at a temperature above the hydrate formation range. This is done by simply insulating the pipeline system. However, the tie-back distances, the topside capabilities of the platform, and the type of fluid being transmitted must be kept in mind, and this method cannot be applied universally in all cases. This model can be described as a compromise between the astronomical price tag of the insulation process, the calculated
operability of the pipeline system, and the degree of acceptability of the risk level. The alternative to insulating the pipelines is the simple introduction of heat to the pipeline system through an external hot-water jacket, which may have numerous arrangements. Conductive or inductive heat tracing can also be used in this method. Conductive systems may not be reliable to the requisite standards, for various reasons. When an electrical heating system is used directly, it may consist of a feeder cable installed piggyback on the pipeline being heated. When electricity is supplied, a magnetic field is created that induces electric currents in the walls of the pipeline system, which in turn generate heat. The electrical rating of the pipeline system depends on the quantity of heat required, the material of the pipeline system, and the length of the pipes. This is considered to be an environment-friendly technique for controlling the formation of hydrates in the fluid streams of pipeline systems. Also, because there is no depressurization of the line, pigging, heating-medium circulation, or removal of hydrates, this method is considered to be very efficient and useful (Kennedy 1993). However, the problem with this method is that the temperature can only be manipulated to a certain limit, and lowering the pressure in the pipelines is not economically feasible, especially when natural gas has to be transmitted over long distances. Another problem with decreasing the pressure is that it might cause natural gas decompression at the wellhead. This decompression may decrease the temperature of the natural gas in the pipelines below the freezing point, and it might push the system into hydrate formation regions. These are the reasons that make this mechanism impractical. Also, electrical heating is neither "clean" nor efficient (Chhetri 2006). If global efficiency is considered, it becomes clear that electrical heating is simultaneously inefficient and environmentally hostile. The third method uses certain chemicals that allow the natural gas in the transportation system to tolerate higher pressures and lower temperatures. The most widely used inhibitors include methanol and glycols. Chemical inhibitors are introduced at the wellhead and booster stations. The addition process is carried out through positive displacement pumps; therefore, the process can be very accurately controlled. These chemicals prevent the formation of hydrates by pushing the hydrate formation temperature beneath the operating temperature of the pipeline system.

Figure 13.2 Schematic pressure vs. temperature diagram for a gas composition (Paez et al. 2001).

Although the chemical hydrate inhibition method is the most extensively
used technique, the design, production, and application of a substitute hydrate inhibition method that is cost efficient and environmentally friendly remain goals of the natural gas transportation industry. The problems with the chemical method are the uncertain quantities of inhibitor to be used, the costs of the chemicals, the unreliability of the inhibitor injection system, and the possible interaction of the hydrate inhibitors with other additives, resulting in ineffective inhibitors (Hidnay and Parrish 2006). Other problems with this widely used mechanism are not only leaks and spills but also the less obvious problems related to the oxidation of these chemicals when the natural gas is burned, which cannot be ignored (Chhetri and Islam 2006). Despite these problems, this is one of the most attractive methods for the natural gas transportation industry. This partly has to do with the convenience with which these systems regulate the injection rate of the inhibitors and the fact that this method needs lower capital costs. The gas transportation industry uses three types of chemical hydrate inhibitors: thermodynamic inhibitors, low-dosage hydrate inhibitors, and kinetic hydrate inhibitors.
13.2.1 Thermodynamic Inhibitors

Conventionally, methanol, ethylene glycol, and triethylene glycol are the most widely used thermodynamic inhibitors. Some inorganic salts are also categorized as thermodynamic inhibitors,
but they are very rarely used. Thermodynamic inhibitors are chemical compounds that are added to the system in high concentration (10-60% by weight) to change the hydrate formation conditions. These chemicals shift the hydrate formation locus to the left of its original position, which means the hydrate formation state point is pushed to a lower temperature and/or a higher pressure (Hammerschmidt 1939). Many aspects are evaluated before choosing a specific thermodynamic inhibitor. These aspects include capital and operating costs, the physical properties of the natural gas, safety regulations, inhibition of corrosion, the dehydration capacity of the gas, and so on. The most important issue in the selection of inhibitors, however, is whether or not the chemical used in the process can be completely recovered from the system (the recovered chemicals are later regenerated and re-injected into the system). Methanol is a non-regenerable and comparatively cheap thermodynamic inhibitor. Its low price, compared to the higher recovery, regeneration, and re-injection costs, renders those processes cost-ineffective. Nonetheless, when this inhibitor is used without the three abovementioned processes, the "lost" methanol causes a significant change in costs. Methanol is used because it has a lower viscosity and a lower surface tension, which makes operational separation simple (Hammerschmidt 1939). Other chemicals used in preventing the formation of hydrate plugs are the glycols. Glycols lower the temperature range for the formation of hydrates. Among the glycols, ethylene glycol is considered the best choice because of its lower price, lower viscosity, and lower solubility in liquid hydrocarbons. However, the problem with glycols is that, in order for them to be effective, they must be added at rates up to 100% of the weight of water. Glycols are very expensive chemicals, so they essentially need to be recovered and regenerated so that they can be reused in a cyclic manner. The recovery and regeneration of glycols is an additional expense, and it is a space-consuming option. Especially on offshore installations where space is limited, the option of using glycols is a difficult and expensive choice (Kelland et al. 1995).
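The dosage of a thermodynamic inhibitor is commonly estimated with Hammerschmidt's correlation, dT = K·W / (M(100 - W)), where dT is the hydrate-point depression, W the inhibitor concentration in weight percent of the aqueous phase, M its molecular weight, and K an empirical constant (about 1297 when dT is in °C). The sketch below is our illustration of this standard textbook correlation, which is not quoted in this chapter; it also shows why methanol, with its lower molecular weight, depresses the hydrate point roughly twice as much as ethylene glycol at the same weight fraction.

# Minimal sketch (illustrative): Hammerschmidt's correlation for the
# hydrate-point depression produced by a thermodynamic inhibitor.
# K ~ 1297 is the commonly quoted constant for dT in degrees Celsius.

def hammerschmidt_dT(w_pct: float, mol_weight: float, K: float = 1297.0) -> float:
    """Hydrate depression (deg C) for w_pct wt% inhibitor in the water phase."""
    return K * w_pct / (mol_weight * (100.0 - w_pct))

for name, M in [("methanol", 32.04), ("ethylene glycol", 62.07)]:
    for w in (10, 20, 30):
        print(f"{name:16s} {w:2d} wt%: dT = {hammerschmidt_dT(w, M):4.1f} C")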
13.2.2 Low Dosage Hydrate Inhibitors
The term low-dosage inhibitors (LDI) comes from the fact that they are used in ppm quantities compared to percentage (as high as
40% for methanol) for thermodynamic inhibitors. It is assumed that these inhibitors bind themselves to the surface of the hydrate particles in the early stages of nucleation and do not allow the hydrate particles to grow to the critical size at which thermodynamic conditions become suitable for the growth of the hydrate particle. The inhibition time for kinetic agents can range from many hours to several days, allowing the inhibitors to outlast the residence time of the fluids in the flowlines. Based on their working mechanisms, these inhibitors are classified into different groups. Kinetic hydrate inhibitors (KHI), or threshold hydrate inhibitors (THI), increase the induction time for hydrate formation, inhibiting it for a longer period of time. The anti-agglomerants (AA) change the agglomeration of hydrate crystals and, thereby, transform the hydrate particles. The surfactant-type inhibitors reduce the possibility of hydrate plugs forming by sticking to the pipeline (Fu et al. 2002). However, when compared against the high operating costs, difficult supply mechanisms, and health, safety, and environmental hazards associated with systems using methanol and glycols, the low dosage inhibitors have not yet proven to be a viable alternative. There are concerns that the low dosage hydrate inhibitors have not yet been comprehensively subjected to the extreme conditions of some of the very harsh environments in which the oil and gas industries operate. At most, in reservoirs where the temperatures are below the freezing point of water and where there is no fear of hydrate formation, some kind of antifreeze is introduced into the system. In these conditions the operators use a combination of the low dosage inhibitors and the thermodynamic inhibitors (Paez et al. 2001). It should be clear that the residence time of the hydrocarbons in a pipeline should be less than the induction time of the low dosage inhibitors in the system. Otherwise, plugs formed by the hydrates of the fluids could choke the pipeline. If the fluids in the pipelines evaporate, then there is no solvent available for the inhibition process, which may pose a serious threat to the safety of the installation and personnel (Frostman et al. 2003). Another challenging situation is the shutdown process. In this process, the residence time of the fluid in the system is longer, which means the fluid has to remain in the system for a longer duration. Under these conditions the introduction of methanol and glycols into the system is deemed a good short-term solution, and the use of low dosage inhibitors is avoided. In the shutdown process,
the temperatures may decrease and the pressures may increase drastically, and this increases the chances of hydrates forming. The restart is also a difficult process to handle, because abnormal conditions of very high pressures and velocities need to be dealt with (Szymczak et al. 2005). It has been stated that the low dosage inhibitors may prove to be a replacement for the chemicals that are presently used, but operators are not ready to take the risk of using them in the field more extensively. This problem could be solved if there were facilities to interpolate the experimental results to practical, on-site processes. However, for that to happen, a mechanism for applying experimental results in the field would have to be developed, and such a mechanism is not in place (Fu et al. 2001). More recently, LDIs have been found to cause severe health problems when oxidized. The most commonly used anti-agglomerants are surfactants such as alkylbenzene sulphonates (ABS) and alkylphenol polyethoxylates (APnEO). The most widely used kinetic hydrate inhibitors are poly[N-vinyl pyrrolidone] or poly[vinylmethylacetamide/vinyl caprolactam] (Mokhatab, Wilkens, and Leontaritis 2007). However, these are not the only types of surfactants, and there are other more widely used forms of them. The earliest known surfactants are soaps, and they have been manufactured for thousands of years. Soaps are basically sodium salts of natural, saturated and unsaturated fatty acids formed from the alkaline hydrolysis of animal and plant triglycerides (fats and oils). The equation for this reaction is as follows:

Triglyceride (CH2OCOR-CHOCOR'-CH2OCOR'') + 3NaOH => Glycerol (CH2OH-CHOH-CH2OH) + 3RCOONa (soap)
Sulphonates, like soaps and sulphates, are anionic surfactants, and they have a negatively charged hydrophile. They are among the most popular surfactants, and the alkylbenzene sulphonates are the most notable sulphonates. The yearly global production of alkylbenzene sulphonates is about 3 × 10^6 tons, and that of alkylphenol polyethoxylates is 6 × 10^5 tons (Ahel and Terzic 2003). The components that are considered to be negligible are not accounted for in the above equation.
They are equivalent to Freon leaks in refrigeration calculations or toxic catalyst components that are considered to be negligible in all engineering calculations. Yet, according to Chapter 3, those components are the ones that render the entire product unsustainable. So, the actual equation should appear as follows, in which Σf(t) represents the collection of matter, expressed as a time function (intangibles), throughout the material processing phase:

Triglyceride + 3NaOH => Glycerol + 3RCOONa (soap) + Σf(t)
The presence of Σf(t) in the gas stream poses irreversible damage to the environment because it pollutes the entire pathway of any matter it comes in contact with. According to the comprehensive energy balance equation presented in Chapter 3, these components will continue to enter matter through oxidation (at all temperature ranges) or through direct contact (the unburnt portion), contaminating food and polluting the environment. These surfactants were manufactured from natural ingredients for thousands of years. It was only after World War II that the world switched away from natural sources for manufacturing these surfactants and started using synthetic ones. That is why these surfactants, including alkylbenzene sulphonates and alkylphenol polyethoxylates, are now considered considerably toxic to aquatic life and, thus, should be deemed potentially significant environmental contaminants. These chemicals cause serious problems with the formation of scum and foam when they are transported to rivers and other fresh water sources. The other problem is that these chemicals do not degrade easily, if at all (Karsa and Porter 1994). Further adverse effects include the bioaccumulation potential of these chemicals. Also, concerns related to tentative evidence of weak endocrine-disrupting activity, plus the huge quantities of these chemicals used by the industrialized countries, make them a potential environmental risk for the future (Craddock et al. 2002; Swisher 1987). As stated earlier, the most common types of LDHIs are KHIs (kinetic inhibitors) and AAs (anti-agglomerants).
13.2.3 Kinetic Hydrate Inhibitors
Kinetic hydrate inhibitors (KHI) were among the first Low Dosage Hydrate Inhibitor (LDHI) products employed to contain hydrates in natural gas transmission systems. KHIs are normally water-soluble polymers or copolymers. They do not stop the formation of hydrates; they just delay the nucleation and growth of hydrates in the system. One of the better known KHI chemicals is a copolymer of vinylmethylacetamide (VIMA) and vinylcaprolactam (VCAP); it is also referred to as poly[VIMA/VCAP] (Fu et al. 2001). Kinetic inhibitors inhibit natural gas hydrates by slowing crystal formation, interfering with the formation of the cages. In the aqueous phase, their concentration can be as low as 1% by weight, which is an advantage over thermodynamic inhibitors. The other desirable property they have is that they are nonvolatile (Notz et al. 1996). The KHIs directly interact with pre-nucleation hydrate masses to achieve nucleation inhibition. They are believed to cause an increase in the surface energy of the pre-nucleation masses, which in turn increases the activation energy barrier to nuclei formation. They are said to slow down the growth of the hydrate crystal either by adsorbing onto the crystal surface or by fitting into the crystal lattice. This is believed to cause a distortion in the hydrate crystal lattice or growth stages and, hence, prevent the crystals from developing rapidly into regular crystal structures. The other major advantage of using this type of inhibition method is that it is independent of the amount of water present in the system. In the case of reservoir depletion, the water content of the product increases, so the inhibitors with the longest inhibition time will have an edge over other inhibitors. However, the problem with KHIs is that hydrate crystals eventually form even in the presence of KHI and can potentially build up and plug the transmission system. The time required for the formation of hydrates depends on the effectiveness of the KHI, the dosage rate, and the hydrate formation driving force. In the case of high subcooling, not only are larger quantities of KHI required, but the time interval before hydrates form in the system also decreases. The effectiveness of the KHIs is problematic in that they are efficient only up to specific degrees of subcooling and pressure. If they need to be used under stricter conditions, KHIs are mixed with thermodynamic inhibitors (methanol or glycols). Kinetic inhibitors can be employed in condensates, oils,
and in gas-water systems. However, these products are modified to give the most favorable operation in a particular system (Lovell and Pakulski 2002). These inhibitors have another disadvantage, which is that the proper dosage is determined empirically, and errors in the injected quantities of the inhibitors may increase hydrate formation rates. These inhibitors are limited to a recommended maximum subcooling (the difference between the desired operating temperature and the hydrate formation temperature at constant pressure) of 20°C (Mehta and Clomp 1998). In many cases, KHIs need a carrier chemical, e.g., methanol or water. In these cases, special considerations and precautions should be taken in selecting the position of KHI injection placements, because in hotter spots they have a tendency to precipitate out of solution and leave the system exposed to hydrate formation conditions. A carrier chemical, such as methanol, can improve the chances of preventing this from occurring (Lederhos et al. 2001). These kinds of inhibitors are being used in many offshore operations, and it seems that they will be applied more widely as experience with their use increases.
13.2.4 Antiagglomerants (AA)
The technology of inhibiting the hydrates using AA was also developed in the late 1990s. Antiagglomerants stop small hydrate grains from lumping into larger masses that have the potential of producing a plug. These inhibitors exist in the liquid hydrocarbon phase, and they are more frequently used in pipelines where natural gas is dissolved in oil. These inhibitors need testing so that proper concentrations can be ensured (Hidnay and Parrish 2006). Research and development programs are being carried out in order to create new, cost effective, and environmentally friendly hydrate inhibitors that will permit multiphase fluids to be transmitted unprocessed over extended distances. These hydrate inhibitors might result in cost savings not only in terms of lower costs of the inhibitors but also in terms of the quantity of the inhibitor injected into the system and the pumping and storage facilities needed for the job (Frostman 2000). This would make it possible to resize the production facilities on a more compact level. It is also claimed that this research and development will lead to the use of the kind of hydrate inhibitor
technology that will also help with environmental regulations all around the globe. AAs are more widely used these days, even though their working mechanism is not fully understood. These chemicals work in both the water and liquid hydrocarbon phases. They work as emulsifying agents on the nuclei of the hydrates. The emulsification mechanism for anti-agglomeration, or the efficiency of these chemicals, depends on the mixing process at the point of injection, and it increases with increasing turbulence in the system. The efficiency of these chemicals decreases if the salinity of the water is high or if the water cut by volume is high. This is ironic because the probability of hydrate formation is reduced under high salinity conditions. The advantage of using these chemicals is that they work satisfactorily at severe temperatures and pressures (Frostman 2000). Nonetheless, these research and development programs are focusing on newer chemicals or newer versions of chemicals, which are supposed to replace the present ones. At present, they are believed to be the best solution possible in terms of cost effectiveness and friendliness to the environment. However, chemicals that have not yet been proven detrimental to the environment may have the potential to do harm in the long run. The basic flaw in this research methodology and approach can be attributed to the fact that it is based on the supposition that "chemicals are chemicals" (Editorial 2006). Zatzman and Islam (2006) demonstrated that the "chemicals are chemicals" model, originally promoted by Mueller, the Nobel Laureate (in Medicine and Physiology) who got credit for manufacturing DDT, emerges from a misconception deeply rooted in Eurocentric culture. This misconception emerges from misinterpretations and misunderstandings of nature, as discussed in Chapters 2 and 3. Note that the features of artificial products are only valid for a time, t = "right now" (Δt = 0).
13.3 Problems with the Gas-processing Chemicals
This section states and explores the problems with chemicals that are being used by the natural gas processing industry. It also looks at the problems with the use of low dosage inhibitors, which are promoted as the replacement of the presently used inhibitors. It
discusses the reasons for the gas industry's indecision about switching from the presently used inhibitors to newer ones. Ethylene glycol, methanol, and monoethanolamine (MEA) are three conventional chemicals widely used by the natural gas processing industry. All these chemicals serve the purposes of the natural gas processing industry to a large extent. However, all of them are considered toxic and have very harmful effects on human health (ASTDR 1997; Barceloux et al. 1997; Burkhart 1997; Morris 1942). Ethylene glycol is a clear, colorless, and slightly syrupy liquid at room temperature. It exists in air in the form of vapor. It is an odorless and relatively non-volatile liquid and has a sweet taste. It has a low vapor pressure and is completely miscible in water (Cheremisinoff 2003). There are considerable limitations to the available data on the exposure to and effects of ethylene glycol being oxidized with natural gas. Therefore, researchers have not yet agreed upon a definitive "scientific" conclusion as to whether ethylene glycol is toxic or non-toxic. This is typical, because the toxicity level is associated with concentration and not time. Because the concentration at some time may fall below the detection limit and concentration is not expressed as a function of time, this definition of toxicity is inherently misleading (Khan 2006b). What has been agreed upon is that exposure to glycol in the vicinity of a point source, through absorption or inhalation, may exceed the tolerable intake (TI) for living organisms and pose a serious threat to human health. The kidney is the primary target site for the effects of ethylene glycol, but it also causes minor reproductive effects and developmental toxicity. The range and distribution of concentrations of ethylene glycol in the vicinity of a consumer point source play a major role in this regard (Laitinen 1996; Paul 1994; Heilmair 1993; Wang et al. 2006). When ethylene glycol is released into the environment, it partitions into surface water or groundwater. It is said that it does not accumulate in the environment, primarily due to biodegradation. However, concerns are raised as to the duration of its half-life in air, water, groundwater, and soil, estimated to typically range from 0.35 to 3.5 days, from 2 to 12 days, from 4 to 24 days, and from 2 to 12 days, respectively. Even with these conservative estimates, half-lives may exceed these ranges in some cases, depending on the environmental conditions. This shows that ethylene glycol released by the oxidation of natural gas will stay in the atmosphere for days (Lokke 1984; Evans et al. 1974; Haines et al. 1975).
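These half-lives translate into residence estimates through first-order decay: the fraction remaining after time t is (1/2)^(t/t_half). The following minimal sketch (our illustration) evaluates this for the ranges quoted above.

# Minimal sketch (illustrative): first-order decay using the half-life
# ranges for ethylene glycol quoted in the text (days).

half_lives = {
    "air":         (0.35, 3.5),
    "water":       (2.0, 12.0),
    "groundwater": (4.0, 24.0),
    "soil":        (2.0, 12.0),
}

t = 7.0  # days after release
for medium, (lo, hi) in half_lives.items():
    fast = 0.5 ** (t / lo)    # fraction left if the half-life is short
    slow = 0.5 ** (t / hi)    # fraction left if the half-life is long
    print(f"{medium:12s}: {100 * fast:5.1f}% to {100 * slow:5.1f}% remaining after {t:.0f} days")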
Against this backdrop, it is believed that large amounts of ethylene glycol can be fatal. In relatively smaller quantities it may cause nausea, convulsions, slurred speech, disorientation, and heart and kidney problems. It may cause birth defects in babies and reduced sperm counts in males. It affects the chemistry of the body by increasing the amount of acid, which results in metabolic problems (Correa et al. 1996; Foote et al. 1995; Lenk et al. 1989). The EPA's drinking water guideline for ethylene glycol is 7,000 micrograms per liter of water for an adult, and a maximum level of 127 milligrams of ethylene glycol per cubic meter of air for a 15-minute exposure is recommended by the American Conference of Governmental Industrial Hygienists (ACGIH). About half the amount of this compound that enters the air breaks down in 24 to 50 hours, and it breaks down within a few days to a week in water and soil. This clearly shows that natural gas tainted with glycol is not as safe as it is projected to be (ASTDR 1997). Methanol is just as harmful as ethylene glycol. It is also considered an acute poison. The ingestion of methanol in significant quantities causes nausea, vomiting, and abdominal pain. Other effects are visual symptoms, including falling visual acuity, photophobia, and the feeling of being in a snowstorm. There are increasing indications that a variety of organic solvents (including methanol) can cause Parkinson's syndrome with pyramidal characteristics in vulnerable persons. Individuals exposed to methanol have been observed to have preferential localization of lesions within the putamina, induced by the methanol. Case studies of poisoning by methanol have revealed symptoms that progress gradually to visual impairment. In the case of methanol, concentrations are not always proportional to the exposure intervals, owing to metabolic and other elimination processes that occur simultaneously with the exposure. The same statistics show that around 67% of the patients had haemorrhagic pancreatitis reported at post mortem. In other cases, seizures were observed where the intoxication was severe. Yet others who had visual symptoms developed irreversible visual impairment. The ingestion of as small a quantity as 10 ml can cause blindness, and 30 ml can prove fatal. The picture gets bleaker with the knowledge that the half-life of methanol is around 30 hours, which shows that this chemical will remain in the atmosphere for around 30 hours (Aquilonius et al. 1980; Batterman et al. 1998; Hageman et al. 1999; Hantson et al. 1997; Jacobsen et al. 1986). Besides, oxidation takes place in
nature continuously, and, therefore, the ensuing products become more poisonous.

MEA and diethanolamine (DEA) are also considered dangerous chemicals. Contact with the lungs may result in lung injury. MEA causes severe irritation, and often chemical burns, of the mouth, throat, oesophagus, and stomach, with pain or discomfort in the mouth, throat, chest, and abdomen. It causes nausea, vomiting, diarrhoea, dizziness, drowsiness, thirst, faintness, weakness, and circulatory collapse, and it can sometimes induce a coma. It can cause trouble breathing, initiate chest pain, increase heart rate, set off irregular heart beat (arrhythmia), and cause collapse and even death. It can damage the nervous system if the exposure duration and intensity are high, and it can also harm the red blood cells, which leads to anaemia. In liquid form it can cause severe irritation, experienced as discomfort or pain in the eyes; it makes the eyes blink excessively and produce tears, and it causes excess redness and swelling of the conjunctiva as well as chemical burns of the cornea. The kidneys and liver can be damaged by overexposure to this chemical. Skin contact with it can aggravate existing dermatitis, and the inhalation of MEA can exacerbate asthma.

Symptoms of high blood pressure, salivation, and pupillary dilation have been reported in association with diethanolamine intoxication. DEA caused skin irritation in rabbits at concentration levels above 5%, and concentration levels of more than 50% caused severe ocular irritation. It is reported to be corrosive to the eyes, mucous membranes, and skin. If spattered in the eyes it can cause extreme pain and corneal damage, and in some cases permanent loss of eyesight. Repeated contact with its vapors, even at lower irritant levels, usually results in corneal edema and foggy vision. In liquid form it may cause blistering and necrosis. It causes acute coughing, pain in the chest, and pulmonary edema if the concentration of the vapor in the surrounding atmosphere is exceedingly high. Swallowing this chemical can cause extreme gastrointestinal pain, diarrhoea, vomiting, and in some cases perforation of the stomach (ASTDR 1997; Hellwig et al. 1997; Keith et al. 1992; Mankes 1986; Pereira 1987; Liberacki 1996).
13.4 Pathways of Chemical Additives
It is reported that, upon electro-oxidation at 400 mV, methanol and glycol give rise to glycolate, oxalate, and formate. Glycol transforms
Figure 13.3 Ethylene glycol oxidation pathway in alkaline solution (Matsuoka et al. 2005). The species shown in the original diagram include glyoxal, glycol aldehyde, glyoxylate, glycolate, formate, and CO; the route from glycolate toward oxalate is labeled the non-poisoning path, while the route through formate to CO is labeled the poisoning path.
first to glycolate and then to oxalate. Oxalate was found to be stable, and no further oxidation of it was observed; this path is termed non-poisonous. Another product of glycolate is formate, and this transformation is termed a poisonous path, or sometimes the CO poisoning path. In the case of methanol oxidation, formate was oxidized to CO2, but ethylene glycol oxidation produced CO instead of CO2 and followed the poisoning path above 500 mV (Matsuoka et al. 2005). The oxidation of glycol produces glycol aldehyde as an intermediate product. Figure 13.3 shows the pathway of glycol in the environment. It is observed that as heat increases, CO poisoning also increases. Wang et al. (2005) reported the oxidation of ethylene glycol on the bare surfaces of catalysts and also under steady-state conditions. Complete oxidation to CO2 accounted for less than 6%, making it a minority reaction pathway. The formation of incompletely oxidized C2 molecules indicated that breaking the C-C bond is a slow process, which gives rise to CO poisoning (Matsuoka et al. 2005). The role of Σf(t), as previously discussed, is not explicitly mentioned above. However, the presence of these "negligible" components decreases the natural reactivity of CO, slowing down CO oxidation and resulting in immediate toxicity.

It is predicted that the use of natural gas will rise sharply in the coming years, and this should be considered during the planning of any gas processing unit. The increase in the consumption of
natural gas would result in proportionally increased quantities of these harmful chemicals being released into the atmosphere. Even under present circumstances, the matter of concern with these chemicals is that they remain in the atmosphere from 36 hours to many days. Also, methanol can never be fully recovered or 100% regenerated (the approximate system loss is 1%), so residues are always left in the gas stream. The recovery procedures for ethylene glycol and MEA are likewise imperfect, and sometimes large quantities of these chemicals are transferred to the atmosphere, on top of the constant discharge of the chemicals that are not recovered from the natural gas. The components that are released into the atmosphere are oxidized and go on to produce other toxic chemicals. The following are examples of the oxidation of the abovementioned chemicals.
13.4.1 Ethylene Glycols (EG)
These are made up of three elements: carbon, oxygen, and hydrogen. Their structure is HO-CH2-CH2-OH. If some of these molecules pick up extra oxygen, compounds of the carboxylic acid family are formed, for example formic acid, oxalic acid, and glycolic acid. These compounds are all acidic and may cause corrosion of certain metals. Higher temperatures can destroy ethylene glycol in a remarkably short period of time (Carroll 2003). More importantly, each of these chemicals, being manufactured synthetically, is prone to contaminating the natural gas stream irreversibly even when only trace amounts are left behind. Conventional analysis, which does not include the effect of the pathway of the material manufacturing process (through Σf(t), as discussed earlier), is not capable of identifying the long-term damage to the environment when EG is oxidized. In its oxidized form, one obtains CO2 and water vapor along with trace amounts of other chemicals that render the CO2 and water vapor stream incapable of returning to the ecosystem. The result is gas emissions that are incompatible with the natural state of the atmosphere and that therefore increase the global warming impact of CO2.
13.4.2 Methanol
Methanol, CH3-OH (i.e., methyl alcohol), is the simplest aliphatic alcohol and the first member of the homologous series. It is a colorless liquid completely miscible with water and organic solvents, and it is very
hygroscopic. It has an agreeable odor and a burning taste, and it is a potent nerve poison (O'Leary 2000). Methanol burns with a pale-blue, non-luminous flame, forming carbon dioxide and steam (O'Leary 2000). Even if trace amounts of methanol are left in the gas stream, the entire stream will be contaminated. From that point onward, all matter that comes in contact with this gas will suffer various levels of contamination, even when complete oxidation (through combustion) of the gas takes place, as seen in the following equation:

2CH3OH + 3O2 ⇌ 2CO2 + 4H2O + Σf(t)   (13.1)
The Σf(t) factor is principally responsible for rendering both the water and the carbon incompatible with organic matter, making them play the same role as foreign materials in a human body: they will not be assimilated into the ecosystem. This factor is not conventionally included in chemical reaction equations because it is considered negligible. In conventional analysis, the bulk chemical reaction proceeds in the following steps (O'Leary 2000):

(1) Methanol is oxidized to form formaldehyde:

CH3OH + [O] ⇌ HCHO + H2O   (13.2)
(methanol)     (formaldehyde) (water)

(2) Formaldehyde is further oxidized to formic acid, which in turn is oxidized to CO2 and H2O:

HCHO + [O] ⇌ HCOOH;   HCOOH + [O] ⇌ CO2 + H2O   (13.3)
(formaldehyde)  (formic acid)
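As a worked illustration of the bulk stoichiometry of Eq. 13.1 (ours, using standard molar masses), the CO2 and water produced per kilogram of methanol burned can be computed directly; the Σf(t) trace terms are, by definition, outside this bulk balance:

```python
# Bulk stoichiometry of Eq. 13.1: 2 CH3OH + 3 O2 -> 2 CO2 + 4 H2O (+ trace Sigma f(t))
M_CH3OH, M_CO2, M_H2O = 32.04, 44.01, 18.02  # g/mol

mol_meoh = 1000.0 / M_CH3OH                  # mol of methanol in 1 kg
print(f"CO2 produced:   {mol_meoh * M_CO2 / 1000:.3f} kg")      # 1 mol CO2 per mol CH3OH
print(f"water produced: {mol_meoh * 2 * M_H2O / 1000:.3f} kg")  # 2 mol H2O per mol CH3OH
```

Each kilogram of methanol thus yields roughly 1.37 kg of CO2 and 1.13 kg of water vapor, both of which, per the argument above, carry the Σf(t) contamination of their synthetic pathway.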
13.4.3 Methyl Ethanol Amine (MEA)

MEA degrades in the presence of oxygen and CO2, resulting in extensive amine loss and equipment corrosion as well as environmental impacts. Rochelle and Chi (2001), in their report on the oxidative degradation of monoethanolamine, explain the oxidation mechanism for MEA by single electron oxidation as follows (Figure 13.4):
Figure 13.4 Degradation of MEA by single electron oxidation. The species shown in the original scheme include MEA, the aminium radical (formed by Fe3+ or a radical R·), a peroxide radical and peroxide (formed with O2), an imine plus H2O2, and the products hydroxyacetaldehyde and formaldehyde.
Once again, the above depiction does not account for the pathway followed by synthetic products. This is expressed through the Σf(t) factor, which includes all products, including matter accumulated from catalysts and other trace particles from the various materials used during processing. If this factor were included in the above scheme, the formaldehyde shown would not be considered the same as the organic form of formaldehyde, even when the bulk concentration is the same. It is important to note that the bulk concentration is not a mitigating factor when it comes to the long-term material properties that affect the environment and, ultimately, sustainability.
13.4.4 Di-ethanol Amine (DEA)
Di-ethanol amine (DEA) is a secondary amine: two ethanol-derived groups are attached to a single nitrogen. It is used as an anti-corrosion agent and is usually produced by the reaction of ethylene oxide and ammonia in a molar ratio of 2:1. It decomposes
on heating and produces toxic and corrosive gases including nitrogen oxides, carbon monoxide, and carbon dioxide. Once again, the reactivity of the resulting CO is reduced due to the presence of toxins resulting from the unsustainable pathways followed during the manufacturing of these chemicals, leading to irreversible damage to the environment.
13.4.5 Triethanolamine (TEA)

Triethanolamine (TEA) is both a tertiary amine and a tri-alcohol; its molecule carries three hydroxyl groups. Triethanolamine acts as a weak base due to the lone pair on the nitrogen atom. It also decomposes on heating and produces toxic and corrosive gases, including nitrogen oxides, carbon monoxide, and carbon dioxide. By-products of these reactions, such as formaldehyde, nitrogen oxides, carbon dioxide, carbon monoxide, and formic acid, carry their own health hazards and dangerous effects on the human body, and this chain is not going to break at any stage.

Keeping the above explanations in mind, one would expect the natural gas processing and transportation industry to be earnestly looking for safer alternatives to these chemicals, but the contrary is true. The industry is moving very slowly, if at all, in responding to this problem. The reluctance of the oil and gas industry to switch from conventional inhibitors has many reasons. At present the only alternatives are the low-dosage inhibitors, but these are more suitable for milder environments in terms of pressure and temperature, and they lose their efficiency in harsher ones. The use of these chemicals is also not in concordance with environmental safety standards. According to the HSSA syndrome, new chemicals are likely to be more toxic than their previous versions (Zatzman 2007).
13.5 Sustainable Alternatives to Conventional Techniques for Hydrate Prevention
So far, the natural gas transportation industry has been employing different mechanical (e.g., injection of hot oil or glycol, jacketing), electrical (e.g., electric heaters), and chemical methods (e.g., injection of alcohols) to deal with this problem (Carroll 2003). The first two methods, mechanical and electrical, are more desirable in the sense that they are more environmentally friendly than the
chemical methods. The problem with these methods, however, is that they become less feasible and exorbitantly costly in the gas fields presently being explored, in the extreme conditions of the deep seas, and in remote permafrost locations. In the chemical hydrate inhibition methods, different chemicals, e.g., alcohols and glycols, are used. The concentrations and volumes of these chemicals are not fixed and depend upon the conditions of the environment (Weast 1978); a simple dosing sketch is given at the end of this section. These chemicals are divided into different groups (e.g., thermodynamic inhibitors, kinetic inhibitors, and specifically the low-dose hydrate inhibitors) on the basis of their functioning mechanisms (e.g., the thermodynamic inhibitors consist of methanol and the glycols). These inhibitors are therefore used alternately in varied circumstances and operating conditions. However, it would be appropriate to state here that none of these inhibitors gives perfect results, even in conditions deemed favorable to that specific kind of inhibitor. Apart from their functional ineffectiveness, almost all of them have proven to be a detriment to the environment. They are not only hazardous in terms of physical leaks and spills, but their mixing with natural gas also has dangerous long-term consequences for the environment.

The problem of hydrate formation can be addressed with two possible solutions. The first is the production of the same conventional chemicals, such as methanol, ethylene glycol, MEA, DEA, and TEA, from reactants that are present in nature. The other is getting rid of the presently used conventional chemicals (as described above) altogether and using alternatives that are taken from nature. The suggested solutions, if proven applicable and practical, would not only eliminate the toxicity but also help decrease the costs of the overall process.

The proposition of a low-cost, universally adaptable, applicable, and environment-friendly solution can be achieved only through a fundamental change in the present scientific thinking, research, and application setup. In the present setup, it is perceived that "chemicals are chemicals," which generally means that if the compositions of two chemicals are the same, their properties in both the long term and the short term should be the same. This perception does not take into account the fact that chemicals with the same composition but different formation pathways can have completely different properties. Even if it were possible to reproduce a molecule, it would not have the same temporal function as the one it is being compared with. The molecule would then not interact the same way, because
the surrounding environment will have an impact. Two molecules are identical only if they are the same function of time and their surroundings are also identical; in an open system, that would be an absurdity. Miralai (2006) recently established this fundamental trait of nature and proposed a number of natural additives that can replace artificial additives. Care was taken to ensure the natural additives were truly natural; even during the extraction process, natural solvents were used. Although once a single artificial molecule has been introduced it is impossible to get the original natural world back, his work showed that, with the help of nature, we do not have to get the old world back. The nature of nature is such that we can move on and even claim an earth that is cleaner than before.

One of the greatest pieces of disinformation spread throughout contemporary scientific research is that infinite, freely chosen manipulations of temporal functions, applied to the processing of materials in isolation from nature, carry no serious or otherwise deleterious consequences for the environment and actual physical nature. That so little has appeared in the literature about the anti-nature consequences of synthetic chemical processes and their outputs is powerful testimony to the effective public relations deployed in favor and support of petroleum refining and its by-products.
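For the thermodynamic inhibitors mentioned above (methanol and the glycols), the dosage needed for a given hydrate-point depression is commonly estimated with the classical Hammerschmidt correlation. The sketch below is our minimal illustration, not a method from this chapter; the constant K = 1297 (Celsius basis) is the original Hammerschmidt value, and real designs use fluid-specific constants and rigorous thermodynamic models:

```python
def hammerschmidt_depression_c(wt_pct: float, molar_mass_g_mol: float,
                               k: float = 1297.0) -> float:
    """Hydrate-point depression (deg C) for a given wt% inhibitor in the aqueous phase."""
    return k * wt_pct / (molar_mass_g_mol * (100.0 - wt_pct))

# 25 wt% inhibitor: methanol (32.04 g/mol) vs. ethylene glycol (62.07 g/mol)
for name, m in (("methanol", 32.04), ("ethylene glycol", 62.07)):
    print(f"{name:>15}: {hammerschmidt_depression_c(25.0, m):4.1f} deg C depression")
```

By this correlation, roughly 25 wt% methanol in the aqueous phase buys about 13-14°C of hydrate-point depression, which is why inhibitor volumes are so large; nothing in the correlation, of course, captures the toxicity and pathway issues discussed above.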
13.5.1 Sustainable Chemical Approach
The first approach is hypothetical, but it is believed that it can be proven practical through elaborate experimental work. This approach would not alter the present mechanisms and methodology of applying conventional chemicals in the processing and transportation of natural gas; it would only change the pathways by which the same chemicals are developed. The approach is based on the assumption that "nature is perfect." It is believed that, if the constituents of the conventional inhibitors were taken from innocuous natural sources without introducing any artificial product or process (Miralai et al. 2008), the resulting product would be benign or even beneficial to nature. If the process is sustainable, then the source can be crude oil or natural gas and the products will be benign to the environment. This approach is equivalent to destroying bacteria with natural chemicals rather than synthetic ones: it is well known that a mixture of olive oil and dead bacteria is not toxic to the environment, whereas conventional pharmaceutical antibiotics are.
13.5.1.1 Ethylene Glycol
As suggested above, if the process and ingredients of ethylene glycol production involve only substances that are found in nature, the produced ethylene glycol can prove to be sustainable and in harmony with nature. The main chemical reactions in the process are:

C2H4 + [O] → C2H4O
(ethylene)     (ethylene oxide)

C2H4O + H2O → HO-CH2-CH2-OH
(ethylene oxide) (water)   (ethylene glycol)
These reactions show that if ethylene from a source (natural or otherwise) is oxidized, it converts to ethylene oxide, and the introduction of water to ethylene oxide changes it to ethylene glycol. The principal argument put forward here is that if no artificial product (e.g., a catalyst that does not exist in the natural environment) is added to the left-hand side of the equation, then the resulting ethylene oxide and, eventually, the ethylene glycol will not be detrimental to the environment. This is equivalent to organic farming, in which natural fertilizers and pesticides are used. There are numerous sources of ethylene in nature; it can be obtained from various fruits and vegetables. A list of fruits and vegetables that can serve as sources of ethylene is given in Table 13.1 below.
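As a quick check on material requirements, the two steps above imply a 1:1:1 molar ratio of ethylene, ethylene oxide, and glycol. The following minimal mass-balance sketch is our illustration (standard molar masses), not a calculation from the original text:

```python
# Mass balance for C2H4 -> C2H4O -> HO-CH2-CH2-OH, per kilogram of glycol
M_C2H4, M_O, M_H2O, M_EG = 28.05, 16.00, 18.02, 62.07  # g/mol

mol_eg = 1000.0 / M_EG                       # ~16.1 mol of ethylene glycol per kg
print(f"ethylene needed: {mol_eg * M_C2H4 / 1000:.3f} kg")   # ~0.452 kg
print(f"oxygen needed:   {mol_eg * M_O   / 1000:.3f} kg")    # ~0.258 kg
print(f"water needed:    {mol_eg * M_H2O / 1000:.3f} kg")    # ~0.290 kg
```

Under this stoichiometry, about 0.45 kg of ethylene is needed per kilogram of glycol, which gives a sense of the scale of natural ethylene sourcing that would be required.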
13.5.1.2 Methyl Ethanol Amine (MEA)

The reaction between ammonia and ethylene oxide yields monoethanolamine, the subsequent reaction between monoethanolamine and ethylene oxide produces diethanolamine, and the reaction between diethanolamine and ethylene oxide results in the production of triethanolamine:
NH3 + C2H4O → (C2H4OH)NH2
(ammonia) (ethylene oxide) (monoethanolamine)

(C2H4OH)NH2 + C2H4O → (C2H4OH)2NH
(monoethanolamine) (ethylene oxide) (diethanolamine)

(C2H4OH)2NH + C2H4O → (C2H4OH)3N
(diethanolamine) (ethylene oxide) (triethanolamine)
Table 13.1 Ethylene sensitivity chart.

Fruits & Vegetables

Perishable Commodities | Temperature (°C/°F) | Ethylene Production*
Apple (non-chilled) | -1.1/30 | VH
Apple (chilled) | 4.4/40 | VH
Apricot | -0.5/31 | H
Artichoke | 0/32 | VL
Asian Pear | 1.1/34 | H
Asparagus | 2.2/36 | VL
Avocado (California) | 3.3/38 | H
Avocado (Tropical) | 10.0/50 | H
Banana | 14.4/58 | M
Beans (Lima) | 0/32 | L
Beans (Snap/Green) | 7.2/45 | L
Belgian Endive | 2.2/36 | VL
Berries (Blackberry) | -0.5/31 | L
Berries (Blueberry) | -0.5/31 | L
Berries (Cranberry) | 2.2/36 | L
Berries (Currants) | -0.5/31 | L
Berries (Dewberry) | -0.5/31 | L
Berries (Elderberry) | -0.5/31 | L
Berries (Gooseberry) | -0.5/31 | L
Berries (Loganberry) | -0.5/31 | L
Berries (Raspberry) | -0.5/31 | L
Berries (Strawberry) | -0.5/31 | L
Breadfruit | 13.3/56 | M
Broccoli | 0/32 | VL
Brussels Sprouts | 0/32 | VL
Cabbage | 0/32 | VL
Cantaloupe | 4.4/40 | H
Cape Gooseberry | 12.2/54 | L
Carrots (Topped) | 0/32 | VL
Casaba Melon | 10.0/50 | L
Cauliflower | 0/32 | VL
Celery | 0/32 | VL
Chard | 0/32 | VL
Cherimoya | 12.8/55 | VH
Cherry (Sour) | -0.5/31 | VL
Cherry (Sweet) | -1.1/30 | VL
Chicory | 0/32 | VL
Chinese Gooseberry | 0/32 | L
Collards | 0/32 | VL
Crenshaw Melon | 10.0/50 | M
Cucumbers | 10.0/50 | L
Eggplant | 10.0/50 | L
Endive (Escarole) | 0/32 | VL
Feijoa | 5.0/41 | M
Figs | 0/32 | M
Garlic | 0/32 | VL
Ginger | 13.3/56 | VL
Grapefruit (AZ, CA, FL, TX) | 13.3/56 | VL
Grapes | -1.1/30 | VL
Greens (Leafy) | 0/32 | VL
Guava | 10.0/50 | L
Honeydew | 10.0/50 | M
Horseradish | 0/32 | VL
Jack Fruit | 13.3/56 | M
Kale | 0/32 | VL
Kiwi Fruit | 0/32 | L
Kohlrabi | 0/32 | VL
Leeks | 0/32 | VL
Lemons | 12.2/54 | VL
Lettuce (Butterhead) | 0/32 | L
Lettuce (Head/Iceberg) | 0/32 | VL
Lime | 12.2/54 | VL
Lychee | 1.7/35 | M
Mandarine | 7.2/45 | VL
Mango | 13.3/56 | M
Mangosteen | 13.3/56 | M
Mineola | 3.3/38 | L
Mushrooms | 0/32 | L
Nectarine | -0.5/31 | H
Okra | 10.0/50 | L
Olive | 7.2/45 | L
Onions (Dry) | 0/32 | VL
Onions (Green) | 0/32 | VL
Orange (CA, AZ) | 7.2/45 | VL
Orange (FL, TX) | 2.2/36 | VL
Papaya | 12.2/54 | H
Paprika | 10.0/50 | L
Parsnip | 0/32 | VL
Parsley | 0/32 | VL
Passion Fruit | 12.2/54 | VH
Peach | -0.5/31 | H
Pear (Anjou, Bartlett, Bosc) | -1.1/30 | H
Pear (Prickly) | 5.0/41 | N
Peas | 0/32 | VL
Pepper (Bell) | 10.0/50 | L
Pepper (Chile) | 10.0/50 | L
Persian Melon | 10.0/50 | M
Persimmon (Fuyu) | 10.0/50 | L
Persimmon (Hachiya) | 5.0/41 | L
Pineapple | 10.0/50 | L
Pineapple (Guava) | 5.0/41 | M
Plantain | 14.4/58 | L
Plum/Prune | -0.5/31 | M
Pomegranate | 5.0/41 | L
Potato (Processing) | 10.0/50 | VL
Potato (Seed) | 4.4/40 | VL
Potato (Table) | 7.2/45 | VL
Pumpkin | 12.2/54 | L
Quince | -0.5/31 | L
Radishes | 0/32 | VL
Red Beet | 2.8/37 | VL
Rambutan | 12.2/54 | H
Rhubarb | 0/32 | VL
Rutabaga | 0/32 | VL
Sapota | 12.2/54 | VH
Spinach | 0/32 | VL
Squash (Hard Skin) | 12.2/54 | L
Squash (Soft Skin) | 10.0/50 | L
Squash (Summer) | 7.2/45 | L
Squash (Zucchini) | 7.2/45 | N
Star Fruit | 8.9/48 | L
Swede (Rutabaga) | 0/32 | VL
Sweet Corn | 0/32 | VL
Sweet Potato | 13.3/56 | VL
Tamarillo | 0/32 | L
Tangerine | 7.2/45 | VL
Taro Root | 7.2/45 | N
Tomato (Mature/Green) | 13.3/56 | VL
Tomato (Breaker/Light Pink) | 10.0/50 | M
Tree-Tomato | 3.9/39 | H
Turnip (Roots) | 0/32 | VL
Turnip (Greens) | 0/32 | VL
Watercress | 0/32 | VL
Watermelon | 10.0/50 | L
Yam | 13.3/56 | VL

Live Plants

Perishable Commodities | Temperature (°C/°F) | Ethylene Production*
Cut Flowers (Carnations) | 0/32 | VL
Cut Flowers (Chrysanthemums) | 0/32 | VL
Cut Flowers (Gladioli) | 2.2/36 | VL
Cut Flowers (Roses) | 0/32 | VL
Potted Plants | -2.8 to 18.3 / 27 to 65 | VL
Nursery Stock | -1.1 to 4.4 / 30 to 40 | VL
Christmas Trees | 0/32 | N
Flower Bulbs (Bulbs/Corms/Rhizomes/Tubers) | 7.2 to 15 / 45 to 59 | VL

Source: Website 18.
*N = None; VL = Very Low; L = Low; M = Medium; H = High; VH = Very High
In the initial reaction, the sources of ammonia and ethylene oxide can be either synthetic or natural. It is suggested that ethylene oxide from natural sources, as described in the abovementioned processes, be allowed to react with aqueous ammonia (from urine, etc.) in the liquid phase, without a catalyst, at a temperature of 50-100°C and a pressure of 1 to 2 MPa. The reaction would produce monoethanolamine which, if allowed to proceed further, would produce diethanolamine and triethanolamine. Ethylene oxide and ammonia from natural sources would render the product non-toxic, the whole process would be environment-friendly, and the by-products of the reactions would be beneficial, as long as the process does not introduce any toxic chemical. Note that even the heat source needs to be sustainable.
13.5.2 Biological Approach

The second approach is based on the hypothesis that natural biological means can be employed by the industry in processing and transporting natural gas. Paez (2001) isolated cryophilic bacteria from Nova Scotia that can prevent the formation of gas hydrates at pressures in the range of 150 psi. Such bacterial action is similar to the way low-dosage hydrate inhibitors (LDHIs) work.

13.5.2.1 Hydrate Formation Resistance Through Biological Means
The possibilities for completely replacing the present toxic chemicals used by the gas processing and transportation industry with substances found in nature are immense. The sustainability criteria for these additives are fulfilled only if both the origin and the pathway are natural. The increased activity in natural gas exploration, production, processing, and transportation has raised the general public's awareness of the environmental issues. It is believed that, as concerns about the toxicity of currently used inhibitors grow, the environmental consciousness of consumers will demand major changes to the presently used systems and chemicals. The industry's approach in this regard has so far focused only on minimizing waste and increasing the recovery and regeneration of the presently used inhibitors. However, it is feared that, if the root
cause of the problem, i.e., the toxicity of the presently used chemicals, is not addressed, the current approach will only cause further damage to the environment. It is essential that the presently used inhibitors be replaced by ones that conform to the first and foremost benchmark, i.e., that fulfill true sustainability criteria.

It is appropriate to mention here that the use of microorganisms in the natural gas industry is not new; the industry has used them in certain fields, such as the bioremediation of contaminated soil and water and enhanced oil recovery. However, the industry has never used biological means for the inhibition of hydrates. Paez (2001) reported that adequate bacteria can be cultured from sewage water. He hypothesized that the extremophiles considered ideal are also ubiquitous, so one should be able to isolate them from sewage water. These bacteria can be cultured and inserted into the gas pipeline using a chamber, depicted in Figure 13.5. This approach was previously taken by Al-Maghrabi et al. (1999).

Extremophiles are bacteria that live, survive, and grow in extremely harsh conditions. They remain active in conditions described as inhospitable for other organisms, and the characteristics that allow them to do so are being studied around the world. New extremophiles are being discovered, and the already identified ones continue to be studied. Among the large number of extremophiles, the ones needed for the future experiments would be chosen from the categories of barophiles and psychrophiles. These barophilic
Figure 13.5 The bacteria insertion chamber with a translucent window.
and psychrophilic bacteria thrive under high pressures and low temperatures, and they have been identified as having an optimal growth rate around 60 MPa and 15°C. The pressure can be higher in some cases, reaching 80 MPa for barophilic bacteria, as evident from the discovery of DB21MT-2 and DB21MT-5 by scientists in Japan. Other significant discoveries were Shewanella benthica and Moritella in the barophilic categories (Böhlke et al. 2002; Kato et al. 1998).

13.5.2.2 Reaction Mechanisms of Barophiles and Cryophiles
Researchers have been focusing on the reaction mechanisms of these bacteria under very high pressure conditions. It has been hypothesized that these bacteria regulate the structure of the acids of their membranes to handle these pressures. Proteins called OmpH, which find their best possible growth environment at high pressures, give these organisms an increased ability to take nutrients from their surroundings. Genetic studies of some of these bacteria show that barophiles possess different DNA-binding factors that vary with pressure and environmental conditions. These findings led to the culturing of some of the bacteria that exist in high-pressure zones in the vicinity of 50 MPa (Kato et al. 1997).

As mentioned above, psychrophiles are bacteria that can survive in very cold temperature conditions. Unfortunately, psychrophiles are bacteria of which researchers have very little knowledge, as opposed to their cousins, the thermophiles, which have a long history of research behind them (Al-Maghrabi et al. 1999). However, it is hypothesized that these organisms regulate the fatty acid arrangement of the phospholipids of their membranes in order to cope with the cold temperature of the surroundings. When the temperature decreases, the composition of the fatty acid in the membrane changes from a very disordered gel-like material to a very orderly liquid crystal. There are also signs that the flexibility of proteins plays a part in the ability of these organisms to withstand very low temperatures. These activities are the result of biochemical reactions that involve enzymatic catalysts (Cummings et al. 1999). The efficiency of these reactions is considerably reduced at low temperatures because thermodynamic forces also play a role in this process. However, the enzymes found in these organisms are more efficient
in their manipulation of metabolic activity. Bacteria of these types have been recovered from permafrost (temperatures in the range of 5°C) and deep-sea environments, such as 2,000 m below sea level (Rossi et al. 2003).

13.5.2.3 Bacteria Growth and Survival Requirements
Bacteria, like any other living organisms, need nutrients to survive and grow. The presence of oxygen, the availability of water, temperature, and pressure are some of the parameters that control bacterial growth. A nutrient for a specific cell may be composed of carbon, nitrogen, phosphorus, sulfur, etc. These nutrients are available in sugars, carbohydrates, hydrocarbons, carbon dioxide, and some inorganic salts. The main purpose of nutrient uptake is to generate energy, which is generated within the cell or processed from sunlight. In each of these metabolic reactions, water plays a role of vital importance, because 90% of the bacterial body is composed of water molecules. Different bacteria deal with the presence or absence of oxygen in different ways: some live only in the presence of oxygen; some live only in its absence; some live where there is oxygen but have the ability to survive without it; and still others live in the absence of oxygen but can also survive in its presence. These are called obligate aerobes, obligate anaerobes, facultative anaerobes, and facultative aerobes, respectively.
13.5.3 Direct Heating Using a Natural Heat Source
As discussed earlier in this chapter, heating pipelines would eliminate the formation of gas hydrates, but the heating source should be natural. The advantages of natural heating sources are that 1) they are cheaper than alternative sources, 2) they are inherently environment-friendly, 3) they have high efficiency, and 4) they are inherently sustainable. The most natural source of such heat is solar, and the efficiency of solar heating can be increased drastically if the heating is done directly (not through conversion using photovoltaics). Direct heating can be used in two ways. It could be used with a solar parabolic collector (see Figure 13.6) placed underneath
the joints of a pipeline. Because joints are the sites of pressure decompression, they are usually responsible for the onset of hydrate formation. The second approach would be to heat a thermal fluid with solar heating; the advantage of this approach is that the stored heat can be used at night, when sunlight is not available. Figure 13.7 shows the solar collector, along with a thermal fluid tank and a heat absorption fin.

Figure 13.6 Solar collector for direct heating of a pipeline.
Figure 13.7 Solar parabola, heat absorber (left), and thermal fluid tank enclosure (right).
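To give a feel for the numbers, a rough sizing sketch follows. All values in it are illustrative assumptions of ours (aperture area per joint, direct normal irradiance, and combined optical/thermal efficiency), not design figures from this chapter:

```python
def collector_heat_w(aperture_m2: float, dni_w_m2: float, efficiency: float) -> float:
    """Instantaneous useful heat delivered by a parabolic collector segment."""
    return aperture_m2 * dni_w_m2 * efficiency

# Assumed: 2 m^2 aperture under a joint, 700 W/m^2 direct sun, 60% overall efficiency
q_joint = collector_heat_w(2.0, 700.0, 0.60)
print(f"useful heat per joint: ~{q_joint:.0f} W")  # ~840 W while the sun shines
```

Under these assumptions, a single collector delivers on the order of 1 kW of direct heat while the sun shines; the thermal fluid tank of Figure 13.7 is what extends that heat into the night.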
13.6 Mechanism of Microbially Induced Corrosion
The most important microbial corrosion is that due to sulfate-reducing bacteria (SRB). SRB thrive under anaerobic conditions, for example deep in soils and underneath deposits. The best-known examples of SRB are Desulfovibrio and Desulfotomaculum. In many cases, SRB derive their carbon (for incorporation into cell material) from low molecular weight compounds such as lactate and fumarate. SRB possessing the enzyme hydrogenase can obtain their energy from the oxidation of molecular hydrogen. SRB are capable of growing over a wide pH range (4-8) and at temperatures from 10-40°C, although some thermophilic strains can grow in the temperature range of 45-90°C and at pressures up to 500 atm (Herbert and Stott 1983).

There is no universally accepted mechanism to account for the corrosive action of SRB. It is believed that the iron sulfide formed on the metal surface is an efficient cathodic site for the reduction of hydrogen, which has the effect of accelerating the corrosion process. Another viewpoint suggests that oxygen made available from the sulfate reduction reaction, shown in Equation 13.4, reacts with nascent hydrogen and, therefore, speeds up the cathodic reaction (Pankhania et al. 1986):

SO4^2- → S^2- + 2O2   (13.4)
The overall reaction of the anaerobic corrosion of iron induced by SRB can be described by the following reaction (Pankhania et al. 1986):

4Fe + SO4^2- + 4H2O → FeS + 3Fe(OH)2 + 2OH^-   (13.5)
Pankhania et al. (1986) proposed that hydrogen sulfide (H2S) drives the cathodic reaction and showed that sulfate reduction can occur with the cathodically formed hydrogen. Chen et al. (1997) discussed many instrumental analyses of microbiologically influenced corrosion. They emphasized that the detection and monitoring of microbiologically influenced corrosion are essential for understanding the mechanistic nature of the interactions and for developing control methods. The techniques include electrochemical noise (EN) measurements, concentric electrodes, scanning
vibrating electrode probe (SVEP) mapping, electrochemical impedance spectroscopy, atomic force microscopy, confocal laser microscopy, Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and Auger electron spectroscopy. Rainha and Fonseca (1997) studied the influence of the sulfate-reducing bacterium Desulfovibrio desulfuricans ATCC 27774, grown in a lactate/sulfate medium, on the anaerobic corrosion of mild steel. Higher corrosion rates, as well as the transpassive dissolution of Fe(0) or Fe(II) compounds to Fe(III), were observed in the presence of the bacterial culture. Moreno et al. (1992) studied the pitting of stainless steel by SRB and found that biogenic sulfides enhanced passivity breakdown in the presence of chloride anions.

Many workers have studied the performance of different coatings exposed to biologically active environments (Jones-Meehan et al. 1992; Jack et al. 1996). Jones-Meehan et al. (1992) studied coated steel exposed to mixed communities of marine microorganisms using an energy dispersive spectrometer (EDS). The EDS analysis detected breaching of epoxy, nylon, and polyurethane coatings applied to steel coupons. SEM and ESEM studies showed that all coated steel surfaces were heavily colonized with a diverse assemblage of bacteria (Jones-Meehan et al. 1992).

Al-Darbi (2004) conducted a series of studies using SRB. In order to find natural additives, the mechanism of MIC (particularly with SRB) was studied in detail. Figure 13.8 shows scanning electron microscopic photographs of SRB. As the SRB concentration increased, discrete biofilms expanded and merged to form larger biofilms. This formed the initial basis for metal surface degradation, as seen in Figure 13.9. Following this onset of metal degradation, the local physical and chemical conditions created an environment that helped accelerate the surface degradation. Those biofilms caused the degradation of the coating layer and MIC on the surfaces on which they grew. Scanning electron photomicrographs showed that the mild steel coupon surfaces underwent heavy microbial colonization after their exposure to the SRB environment for 3 months. Figure 13.10 shows a scanning electron photomicrograph of the surface of one of the uncoated mild steel coupons. The coupons had the composition given in Table 13.2. The bacterial colonies and biofilm matrices were observed to be attached to and between the layers of the heavy and dense corrosion products. Small holes (pits) were also observed under the corrosion product deposits.
Figure 13.8 SEM photomicrograph of the sulfate reducing bacteria (SRB).
Figure 13.9 SEM photomicrograph of the bacterial colonies and biofilms attached to the surface of one of the mild steel coupons.
This can be attributed to both the SRB and the chloride attacks. One specific feature of MIC is that the bacterial products act as agents that increase the growth rate of corrosion. After the onset of pitting
548
THE GREENING OF PETROLEUM OPERATIONS
Figure 13.10 SEM photomicrograph of uncoated mild steel coupon shows the corrosion products and the SRB attack on its surface.
Table 13.2 Chemical composition of the mild steel coupons used by Islam's research group.

Element    | C         | Mn        | P    | S    | Fe
Weight (%) | 0.17-0.23 | 0.30-0.60 | 0.04 | 0.05 | Balance
corrosion, the corrosion becomes unstoppable even with remedial actions such as paint or anti-bacterial agents. Most MIC, however, manifests as localized corrosion attack, because most of the micro-organisms do not form completely continuous biofilms on metal surfaces; rather, they tend to settle as discrete colonies (Hamilton and Maxwell 1986). This fact explains the localized pitting corrosion attack seen on some of the tested mild steel coupon surfaces, as can be seen in Figure 13.11.

Figure 13.12 shows another scanning electron photomicrograph of a mild steel coupon coated with a conventional alkyd coating. This coating falls under the category of synthetic oil-based paints that are preferred by the industry for the following reasons: 1) adhesion
Figure 13.11 SEM photomicrograph shows the localized corrosion attack (pitting) on the uncoated mild steel coupon surface.
Figure 13.12 SEM photomicrograph shows the heavy bacterial colonization and the biofilms attached to the surface of a mild steel coupon coated with alkyd coating.
reliability on metal as well as various other surfaces, including plastic and wood materials; 2) waterproof and weatherproof, with low degradation under natural conditions; 3) amenable to automated applications, with high flow and low drip; 4) high level of gloss can be reached; 5) can bond with extender pigments, particularly
synthetic ones, for sealing porous surfaces; 6) can be heat and alkali resistant if used with proper resins; and 7) there is no minimum film-forming temperature. Based on these criteria, synthetic paints are selected over natural oil-based paints. These paints are considered the latest in oil-based painting technology and are widely used for corrosion protection (Goldberg and Hudock 1986). Alkyd paints contain various amounts of synthetic polymers and mineral oils. For example, Goldberg and Hudock (1986) listed the following composition: (1) 45 wt% to 85 wt% of a drying oil component (fossil fuel or vegetable oil); (2) 10 wt% to 30 wt% of a polyol such as propylene glycol, trimethylol propane, pentaerythritol, or similar products; (3) 10 wt% to 25 wt% of a polycarboxylic acid, such as phthalic acid or anhydride, maleic acid or anhydride, or similar products; (4) alkylene oxide, if a water-based paint is desired; and (5) emulsifying agents and solvents, typically synthetic.

These paints are not suitable for preventing MIC. As can be seen in Figure 13.12, bacterial colonies and biofilm matrices attach themselves to the coated surface. Breaching of the coating was also detected on the surface, as can be seen in Figure 13.13. This type of failure in the coating layer led to a severe localized corrosion attack on the mild steel surface underneath the coating. This is the case because a synthetic coating does not address the cause of MIC; instead, it attempts to isolate the corrosion-causing secretions of the microbes, which is an impossible task. Biodegradation of the coating was also detected as small holes in the coating layer, filled and surrounded by bacteria, as can be seen in Figure 13.14. Black ferrous sulfide deposits were detected wherever there was a crack or hole in the coating layer. It is believed that the SRB reduced sulfate to sulfide, which reacted with iron and produced the black ferrous sulfide (Hamilton 1985).

The above discussion shows how the selection of alkyd paints is based on properties that have nothing to do with MIC. Ironically, when paints do not prevent MIC, more toxins are added to combat it, and the resulting products become far more toxic than the oxides of iron, the principal product of corrosion. These toxins enter the fuel stream and continue into the food chain, bringing numerous negative effects on the environment and the ecosystem. This comment applies to everything from the latest patents on the subject to the latest research topics being proposed. Table 13.3 lists some of the latest patents on the subject.
Figure 13.13 SEM photomicrograph shows breaching and cracks on a mild steel coupon surface coated with alkyd coating.
Figure 13.14 SEM photomicrograph of a coated mild steel surface with alkyd coating shows the bacteria in and around the pits and cracks on the surface.
Table 13.3 Some patents on corrosion prevention technologies.

US Patent no. | Title | Date | Authors
3285942 | Preparation of glycol molybdate complexes | November, 1966 | Price et al.
3578690 | | May, 1971 | Becker
4095963 | Stabilization of deodorized edible oils | June, 1978 | Lineberry
4175043 | Metal salts of sulfurized olefin adducts of phosphorodithioic acids and organic compositions containing same | November, 1979 | Horodysky
4330420 | Low ash, low phosphorus motor oil formulations | May, 1982 | White et al.
4370246 | Antioxidant combinations of molybdenum complexes and aromatic amine compounds | January, 1983 | deVries et al.
4394279 | Antioxidant combinations of sulfur containing molybdenum complexes and aromatic amine compounds for lubricating oils | July, 1983 | deVries et al.
4428848 | Molybdenum derivatives and lubricants containing same | January, 1984 | Levine et al.
4479883 | Lubricant composition with improved friction reducing properties containing a mixture of dithiocarbamates | October, 1984 | Shaub et al.
4593012 | Production of hydrocarbon-soluble salts of molybdenum for epoxidation of olefins | June, 1986 | Usui et al.
4648985 | Extreme pressure additives for lubricants | March, 1987 | Thorsell et al.
4812246 | Base oil for lubricating oil and lubricating oil composition containing said base oil | March, 1989 | Yabe
4824611 | Preparation of hydrocarbon-soluble transition metal salts of organic carboxylic acids | April, 1989 | Cells
4832857 | Process for the preparation of overbased molybdenum alkaline earth metal and alkali metal dispersions | May, 1989 | Hunt et al.
4846983 | Novel carbamate additives for functional fluids | July, 1989 | Ward, Jr.
4889647 | Organic molybdenum complexes | December, 1989 | Rowan et al.
5137647 | Organic molybdenum complexes | August, 1992 | Karol
5143633 | Overbased additives for lubricant oils containing a molybdenum complex, process for preparing them and compositions containing the said additives | September, 1992 | Gallo et al.
5232614 | Lubricating oil compositions and additives for use therein | August, 1993 | Colclough et al.
5364545 | Lubricating oil composition containing friction modifier and corrosion inhibitor | November, 1994 | Arai et al.
5412130 | Method for preparation of organic molybdenum compounds | May, 1995 | Karol
5605880 | Lubricating oil composition | February, 1997 | Arai et al.
5650381 | Lubricant containing molybdenum compound and secondary diarylamine | July, 1997 | Gatto et al.
5994277 | Lubricating compositions with improved antioxidancy comprising added copper, a molybdenum containing compound, aromatic amine and ZDDP | November, 1999 | Ritchie et al.
6150309 | Lubricant formulations with dispersancy retention capability (law684) | November, 2000 | Gao et al.
6174842 | Lubricants containing molybdenum compounds, phenates and diarylamines | January, 2001 | Gatto et al.
RE37363 | Lubricant containing molybdenum compound and secondary diarylamine | September, 2001 | Gatto et al.
13.7 Sustainable Approach to Corrosion Prevention
It is well known that natural materials do not corrode; corrosion of metal is a great problem because materials today are processed through unnatural means. The remedy is not to introduce more unnatural and inherently toxic means to combat corrosion. A sustainable approach to corrosion prevention should focus mainly on using natural materials and, where natural construction materials are unavailable, on natural additives that inhibit corrosion. Many researchers have used different natural materials for corrosion inhibition and control purposes (El-Etre 1998; El-Etre and
Abdallah 2000). In a study conducted by Mansour et al. (2003), green algae were tested as a natural additive in a paint formulation based on vinyl chloride copolymer (VYHH), and its efficiency in protecting steel against corrosion in seawater was evaluated. Both suspended and extracted forms of algae were used to achieve the optimum performance of the algae-containing coatings. The poorest performance (protection of steel against corrosion in seawater) was obtained when the algae was added in its suspended form, whereas the extracted form exhibited better performance based on impedance measurements.

Instead of adding more toxins, Islam's research group took an entirely different approach. A series of natural oils were found to be adequate for preventing microbial growth, some being particularly suitable for destroying SRB (Al-Darbi et al. 2002a, 2002b, 2004a, 2004b, 2005; Saeed et al. 2003). Original trials indicated that various natural oils, such as mustard oil, olive oil, and fish oil, have bactericidal properties and can destroy SRB effectively. However, applied directly, these oils are not considered to form a base for metallic paints. Typically, natural oil-based paints are said to suffer from the following shortcomings: 1) slow drying time, 2) high dripping and running, 3) bad sealing on "bleeding" surfaces, 4) heat-resistant properties that decrease with increased oil content, and 5) unpredictable, though somewhat stable, resistance to alkali. Considering these shortcomings, natural oils were added in small concentrations to the alkyd paint. Some of the results are shown here.

The scanning electron photomicrograph in Figure 13.15 shows the surface of a mild steel coupon coated with an alkyd coating mixed with 2 vol.% olive oil. No biofilms were detected, except for a few small bacterial spots scattered at different locations on the surface. Blistering, with and without rupturing of the coating, was observed on some areas of the coated surface, as can be seen in Figure 13.16. This was a clear indication of some local failure in the coating, either as a result of coating disbondment or of microbial processes occurring beneath the coating layer. It is worth mentioning here that the SRB in the media and in the slime layers (biofilms) converted the sulfates in the sample into sulfides, which in turn produced hydrogen sulfide (H2S). The H2S and carbon dioxide (CO2) then reacted with water to produce mildly acidic products that lowered the pH of the substrate (metal) surface to levels favorable for the growth of bacteria, which in the end created a very
Figure 13.15 SEM photomicrograph shows some pinholes, spots, and localized attack on the surface of a coated mild steel coupon with alkyd mixed with olive oil.
Figure 13.16 SEM photomicrograph shows blistering on the surface of a coated mild steel coupon with alkyd mixed with olive oil.
acidic environment, thereby encouraging the rapid corrosion attack on those metal surfaces (Lee and Characklis 1993; Lee et al. 1993). Figure 13.17 shows the scanning electron photomicrograph of the surface of a mild steel coupon coated with alkyd coating mixed
Figure 13.17 SEM photomicrograph shows the surface of a well-protected mild steel coupon coated with alkyd mixed with fish oil.
with 2 vol.% Menhaden fish oil. It was surprising to find only a very few bacterial spots on this surface, which was shiny and almost clean. No breaches, blistering, or deterioration were detected on this surface when it was later investigated under the microscope. These results were attributed to the marked inhibition of bacterial adhesion to the coated surface when one of the natural additives was added to the coating. It is also believed that the natural additives increased the modified alkyd coatings' protection efficiency by decreasing the rates of ion and moisture vapor transfer through the coating layer. From these findings, it was concluded that the mild steel surfaces coated with alkyd coating mixed with 2 vol.% Menhaden fish oil were the best protected, followed by those coated with alkyd coating mixed with olive oil, while the least protected surfaces were those coated with the original alkyd coating alone.

Another series of tests was conducted in order to observe the degrading and blistering effects of an acidic environment on the coated surfaces, and subsequently the corrosion forms and rates on the metallic substrates. Two samples of each coating system were tested in the same environment to ensure the repeatability of the results. Figure 13.18 shows the blistering effects of the acidic environment on both the control samples (system A) and the
Figure 13.18 Digital photographs show 20 x 30 mm areas of the surfaces of the coated samples (panels for coating systems C and D shown).
samples coated with the enamel oil-based coating mixed with one of the natural oils (systems B, C, and D). The degree of blistering on each of the samples was evaluated using the ASTM-D714-87 (ASTM
Table 13.4 Details of the tested coating systems.

Coating system | Description
A | enamel oil-based coating
B | enamel oil-based coating + 3 vol.% mustard oil
C | enamel oil-based coating + 3 vol.% olive oil
D | enamel oil-based coating + 3 vol.% salmon fish oil
2002) photographic reference standard. The results and findings are tabulated in Table 13.4. The samples coated with the enamel oil-based coating (system A) showed very little or no sign of surface damage, while the samples coated with enamel coating mixed with one of the selected natural oils experienced either a low or a high degree of blistering. The highest degree of blistering was observed on the samples coated with the enamel coating mixed with 3 vol.% fish oil (system D), followed by the samples coated with the enamel coating mixed with 3 vol.% mustard oil (system B). A list of the various systems is given in Table 13.4. The samples coated with the enamel coating mixed with 3 vol.% olive oil showed anomalous behavior in terms of blister size; initial surface contamination and a difference in surface preparation could be the reason for the difference in the adhesive strength of the two samples coated with this system. From the above observations, it was concluded that the control samples coated with the enamel coating showed better resistance to blistering in the acidic environment studied. This also indicates that the presence of natural oils (such as mustard, olive, and fish oils) changes the adhesive properties of oil-based coatings in low pH environments. These blisters can grow in size and frequency and, hence, can degrade the coating quality and its protection efficiency.

Weight loss is one of the most common methods used to quantify corrosion, mainly when dealing with small panels and coupons (Fontana and Green 1978). In this study, the rate and extent of corrosion on the surfaces of the different samples were estimated using this method. The weight of each sample was measured before and after exposure in the salt fog test corrosion chamber. The overall period of exposure for each sample was 3,000 hours. The reported
values are the averages of duplicate samples of each coating system. From Figure 13.19, it can be seen that the weight loss factor (WLF) was maximum for the samples coated with enamel coating only, closely followed by the samples coated with enamel coating mixed with mustard oil. From that, it was concluded that these two samples experienced high corrosion and erosion rates in the salt fog test corrosion chamber. On the other hand, the samples coated with enamel coating mixed with fish oil showed the lowest weight loss, followed by the samples coated with enamel coating mixed with olive oil. It was obvious that the addition of fish and/or olive oils to the enamel oil-based coating decreased the rate of coated surface deterioration and, as a result, decreased the associated substrate metal corrosion.

Figure 13.19 The weight loss factors (WLF) values for the mild steel coupons coated with different enamel oil-based coating systems.
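The WLF values of Figure 13.19 come from simple gravimetry; for reference, coupon weight loss is conventionally converted to a uniform corrosion rate with an ASTM G1-style formula. The sketch below is our illustration with assumed example numbers, not data from this study:

```python
def corrosion_rate_mm_per_yr(weight_loss_g: float, area_cm2: float,
                             hours: float, density_g_cm3: float = 7.86) -> float:
    """Uniform corrosion rate from coupon weight loss (ASTM G1-style conversion)."""
    K = 8.76e4  # unit-conversion constant for mm/year
    return K * weight_loss_g / (area_cm2 * hours * density_g_cm3)

# Assumed example: 0.12 g lost from 12 cm^2 of exposed mild steel
# (density ~7.86 g/cm^3) over the 3,000-hour salt fog exposure.
print(f"~{corrosion_rate_mm_per_yr(0.12, 12.0, 3000.0):.3f} mm/yr")
```

On these assumed numbers the uniform rate is a few hundredths of a millimeter per year; the WLF comparison in Figure 13.19 is the relative version of the same measurement.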
The image analyzer system KS300 was used to monitor and investigate the deterioration of the coated surfaces and the growth of the localized corrosion reflected in the form of holes and pits on and beneath the coated surfaces. It was observed that all the tested coating systems suffered from surface erosion, degradation, and metal corrosion, but at different rates and with different forms and extents. The holes and pits on the coated surfaces were photographed using a light microscope at a magnification of 10x. The pictures were then analyzed using the image analyzer technique. These pictures gave an idea of the severity and rates of the coating deterioration and the resulting corrosion. This method gave qualitative as well as quantitative results concerning the extent of corrosion in and around a given pit on the surface of a given coated sample (Muntasser et al. 2001). Photographs of some selected pits were taken after 1,000, 2,000, and 3,000 hours of exposure in the salt fog corrosion chamber. The areas of those pits were also measured using the abovementioned image analyzer techniques. The results are graphically represented in Figures 13.20 and 13.21. Figure 13.20 shows the average pit area for the different coating systems at different exposure times, and Figure 13.21 shows the growth of the pits with time for each coating system. Figure 13.22 shows a comparison between the shapes and sizes of the pits on the surfaces of the different coating systems after an exposure time of 3,000 hours inside the salt fog test corrosion chamber. In Figure 13.22, the brownish and reddish colors in and around the pits are the different corrosion products, mainly comprised of
Figure 13.20 The average pits area for the different coating systems at different exposure times inside the salt fog test corrosion chamber.
Figure 13.21 Pits average area on the surfaces of the coated mild steel coupons after different exposure times to the simulated marine environment.
ferric and ferrous ions. It is worth mentioning here that several limitations existed regarding the coating application method and the curing process. The size and growth of each pit are influenced by the initial surface contamination or breaks in the coating film. The surface areas of the pits and the surrounding corrosion products were used to evaluate the performance of each coating system. From Figure 13.22, it was observed that the samples coated with the enamel coating only and with enamel coating mixed with mustard oil both showed the highest degree of localized corrosion. The lowest degree of surface degradation and localized corrosion was observed in the samples coated with enamel coating mixed with olive oil, where the overall surface damage and rusting on those samples were relatively low. The samples coated with enamel coating mixed
Figure 13.22 Comparison between the shapes and sizes of the pits formed on the surfaces coated with different coating systems (panels shown: coating systems C and D) after an exposure time of 3,000 hours inside the salt fog test corrosion chamber.
with fish oil also suffered from coating degradation and localized corrosion attack, as can be seen from Figure 13.22. The amount of surface damage on these samples was higher compared to that on the surfaces of the samples coated with enamel coating mixed with olive oil. Both the ESEM and the EDX were used to study and analyze the surfaces of the abovementioned coating systems. The EDX analysis technique is a well-known method for investigating the surfaces of metals and coatings. Jones-Meehan and Walch (1992) studied coated steel exposed to mixed communities of marine microorganisms using EDX. Their ESEM/EDX analysis detected the breaching of epoxy, nylon, and polyurethane coatings applied to steel coupons.
Figure 13.23 shows the ESEM photomicrograph and the EDX spectrum of the surface of a sample coated with enamel coating mixed with mustard oil after an exposure time of 3,000 hours in the salt fog test corrosion chamber. Cracks and pits were observed all over the surface of this sample. The EDX analysis of a particular spot on the surface shown in Figure 13.23A was conducted, and the spectrum is shown in Figure 13.23B. This spectrum revealed a high percentage of Si and Ti, as they form a major part of the enamel oil-based coating. Iron (Fe) was detected at two different peaks in the spectrum, which implies that iron was present in two valence forms (ferrous and ferric). From this observation, it was concluded that the mild steel substrate had corroded and produced both ferric and ferrous oxides as part of the corrosion products. The EDX spectrum also showed zinc (Zn) at the energy level of 1.03 keV. The lower counts of zinc may be explained by the fact that both Zn and the corrosion products in the form of ZnCl2 were leached out and washed away from the surface. This makes the coating system much less protective in any aggressive environment. Figure 13.24 shows the ESEM photomicrograph and the EDX spectrum of the surface of the sample coated with enamel coating
Figure 13.23 (A) ESEM photomicrograph of the mild steel surface coated with coating system B. (B) EDX spectrum of a spot on the surface shown in (A).
Figure 13.24 (A) ESEM photomicrograph of the mild steel surface coated with coating system C. (B) EDX spectrum of a spot on the surface shown in (A).
mixed with olive oil after 3,000 hours of exposure in the salt fog test corrosion chamber. Figure 13.24A shows that the surface was much less degraded, with fewer pits, holes, and cracks compared to other coated surfaces. The EDX analysis of a spot on this surface is shown in Figure 13.24B. From the spectrum, it can be observed that iron was detected at only one peak. Zinc (Zn), on the other hand, was detected with a very high count as compared to that for coating systems B and D. This can be explained by the fact that the amount of zinc leached out from the coating was quite low. This means that the addition of olive oil formed a homogeneous thin film on the metal surface, which helped make it much more protective. Figure 13.25 shows the ESEM photomicrograph and the EDX spectrum of the surface of the sample coated with enamel coating mixed with fish oil (coating system D) after 3,000 hours of exposure inside the salt fog test corrosion chamber. Very few localized corrosion attacks were observed on the sample surface, in the form of pits and cracks of almost the same shape and size. The amount of damage on the surface of coating system D was observed to be much lower than that for coating systems A and B. The EDX spectrum of a spot on the surface is shown in Figure 13.25B. From Figure 13.25B it was
Figure 13.25 (A) ESEM photomicrograph of the mild steel surface coated with coating system D. (B) EDX spectrum of a spot on the surface shown in (A).
observed that Si and Ti had the highest peaks. Iron (Fe) was detected at two peaks, implying that it was present in ferrous as well as ferric forms. The amounts of chloride and zinc detected were very low. Zinc reacts with chlorides to form ZnCl2, which could have been washed out because it is a loose product (Munger 1990). From the above results, it was inferred that coating system C showed the best performance under the simulated marine environment, followed by coating system D, while coating systems A and B experienced the highest surface damage and poorest performance. The leaching of zinc from the coating surface indicates the degradation of the coating system, as zinc starts behaving as a sacrificial anode. This phenomenon was observed in coating system B and, to a lesser extent, in coating system D. Saeed et al. (2003) investigated the antimicrobial effects of garlic and black thorn against Shewanella putrefaciens, a bacterium implicated in pipeline corrosion. They concluded that both garlic and black thorn possess bacteriostatic effects against Shewanella putrefaciens and, therefore, can be used as bactericides to inhibit and prevent biocorrosion in environments containing Shewanella putrefaciens.
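The pit-area monitoring described earlier in this section reduces, in essence, to counting pixels in a thresholded micrograph and converting them to physical area. The following minimal sketch illustrates that step; the KS300 workflow itself is proprietary, and the image, scale, and function name here are hypothetical.

```python
import numpy as np

# Minimal sketch of pit-area quantification from a thresholded micrograph,
# in the spirit of the KS300 analysis described above. The image and the
# mm-per-pixel scale are hypothetical illustrations only.

def total_pit_area_mm2(binary_image: np.ndarray, mm_per_pixel: float) -> float:
    """Total pit area in mm^2 from a binary image where pit pixels are 1."""
    pit_pixels = int(binary_image.sum())
    return pit_pixels * mm_per_pixel ** 2

img = np.zeros((100, 100), dtype=np.uint8)  # hypothetical field of view
img[40:45, 40:48] = 1                       # one 5 x 8 pixel pit
print(f"pit area: {total_pit_area_mm2(img, 0.01):.4f} mm^2")  # 0.0040 mm^2
```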
13.8 Asphaltene Problems and Sustainable Mitigation

The deposition of asphaltene is considered to be one of the most difficult problems encountered during the exploitation of oil reservoirs. Miscible and immiscible flooding operations exhibit suitable environments for such precipitation (Islam 1994). In some cases, asphaltene precipitation can occur during natural depletion and oil transportation and, more commonly, during well stimulation activities (Kocabas and Islam 2000). Recent investigations indicate that permeability damage is more likely to be severe near the wellbore (Ali and Islam 1998). Mansoori (1997) provided a comprehensive review of asphaltene precipitation. He also presented a comprehensive mathematical model that predicts the precipitation of asphaltenes. However, the model does not deal with plugging and adsorption of asphaltenes in porous media. More recently, Sahimi et al. (1997 and 2000) presented a fractal-based model for predicting both the onset and the precipitation of asphaltene. Their theory, based on the fractal structure of asphaltene aggregates, agreed favorably with experimental data derived from X-ray diffraction and neutron scattering analysis. This newly developed theory has yet to be coupled with models for describing asphaltene propagation in porous media. There have been some other efforts to propose different mathematical models (Correra 2004). Ali and Islam (1998) provided a new model that incorporates asphaltene adsorption with wellbore plugging, using the surface excess theory (Sircar et al. 1972) and the mechanical entrapment theory (Gruesbeck and Collins 1982). Their model was validated with experimental results in a linear core. They also observed that some of the deposition parameters are related to the local speed itself. This poses difficulties in modeling asphaltene deposition and plugging in a wellbore, where the speed is continuously changing with the direction of flow. The most significant observation in this process is that asphaltene precipitation can render an EOR scheme (e.g., a miscible flood) ineffective very quickly. The following figure from Kocabas and Islam (2000) shows how significant permeability loss can occur in a short period of time in a wellbore that is subject to asphaltene precipitation (Figure 13.26).
Figure 13.26 Nature of asphaltene plugging near a wellbore: dimensionless permeability vs. dimensionless time, pD = 0.015 (Kocabas and Islam 2000).
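The Kocabas and Islam (2000) model is not reproduced here; however, the qualitative behavior in Figure 13.26, a rapid fall in dimensionless permeability, can be mimicked with a simple first-order plugging law. The sketch below is exactly that, a hedged illustration with an assumed decline constant, not the authors' model.

```python
import math

# Simple first-order plugging law, k_D(t_D) = exp(-a * t_D). This is an
# assumed illustrative form, NOT the Kocabas and Islam (2000) model; the
# constant a is arbitrary.

def damaged_permeability(t_D: float, a: float = 0.05) -> float:
    """Dimensionless permeability k/k0 after dimensionless time t_D."""
    return math.exp(-a * t_D)

for t_D in (0, 10, 20, 40, 60):
    print(f"t_D = {t_D:2d}: k/k0 = {damaged_permeability(t_D):.2f}")
```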
The following figures (Figures 13.27, 13.28) show that there are two other flow regimes that might lead to less dramatic permeability damage, even though the production loss in the short term is significant.
Figure 13.27 The existence of a monotonic steady state: dimensionless permeability vs. time in hours (Al-Adhab et al. 1998).
Figure 13.28 The existence of a pseudo steady state (Al-Adhab et al. 1998).
13.8.1 Bacterial Solutions for Asphaltene and Wax Damage Prevention
The role of microbes in the rapid degradation of petroleum compounds is well known. However, few studies have been done on the possible use of microbes to break down asphaltic materials in situ. Often, bacteria are considered unfit for growth in the harsh salinity and thermal conditions that prevail in the wellbores of petroleum reservoirs. A few applications of microbial degradation of waxy materials have been reported. However, no systematic study is available in the literature. Progress has been made, on the other hand, in using microbes for the remediation of petroleum contaminants (Livingston and Islam 1999). However, most previous studies have focused on bacteria that can survive only at ambient temperatures and in non-saline environments (Baker and Herson 1990; Hills et al. 1989). Only recently, Al-Maghrabi et al. (1999) introduced a strain of thermophilic bacteria capable of surviving in high-salinity environments. This is an important step, considering that researchers had become increasingly frustrated with the slow progress in the areas of bioremediation under harsh conditions. A great deal of research has been conducted on mesophilic and thermophilic bacteria in the context of leaching and other forms of mineral extraction (Gilbert et al. 1988). At least 25% of all copper produced worldwide, for instance, comes from bioprocessing with mesophilic
or thermophilic bacteria (Moffet 1994). Metals in insoluble minerals are solubilized either directly by microbial metabolic activities or indirectly by chemical oxidation brought on by products of metabolic activity, mainly acidic solutions of iron (Hughs 1989). Most of these bacteria, therefore, prefer low pH conditions. One of the best-known mesophiles is the Thiobacilli family. These bacteria are capable of catalyzing mineral oxidation reactions (Marsden 1992). Thiobacillus ferrooxidans is the most studied organism relevant to the leaching of metal sulphides (Hughs 1989). This strain of bacteria is most active in the pH range of 1.5-3.5, with an optimum pH of 2.3 and preferred temperatures of 30-35°C. Even though it is generally recognized that these bacteria can survive at temperatures ranging from 30-37°C, there are no data available on their existence in the presence of petroleum contaminants. Also, no effort has been made to identify this strain of bacteria in hot climate areas, even though it is understood that the reaction kinetics will increase at higher temperatures (Le Roux 1987). Several acidophilic bacteria have been identified that can survive at temperatures higher than those preferred by mesophiles (Norris et al. 1989). In this group, iron- and sulfur-oxidizing eubacteria that grow at 60°C can be considered moderately thermophilic (with an optimum temperature of 50°C). At higher temperatures, there are strains of Sulfolobus that can readily oxidize mineral sulfides. Other strains, morphologically resembling Sulfolobus, belong to the genus Acidianus and are active at temperatures of at least 85°C. The isolation of thermophilic bacteria does not differ in essence from the isolation of other microorganisms except in the requirement of high incubation temperatures. This may necessitate measures to prevent media from drying out or the use of elevated gas pressures in culture vessels to ensure sufficient solubility in the substrate (Lacey 1990). The use of thermophiles in petroleum and environmental applications has received little attention in the past. However, if thermophiles can survive in the presence of petroleum contaminants, their usefulness can be extended to bioremediation in hot climate areas. On the other hand, if the thermophiles can survive in a saline environment, they can be applied to seawater purification and to microbial enhanced oil recovery. Al-Maghrabi et al. (1999) introduced a thermophilic strain of bacteria that can survive in a saline environment, making it useful for petroleum applications in both enhanced oil recovery and bioremediation.
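The remark that reaction kinetics increase at higher temperatures (Le Roux 1987) can be made concrete with a simple Arrhenius estimate. The activation energy below is an assumed, generic value, chosen only for illustration.

```python
import math

# Arrhenius estimate of how much a reaction rate grows with temperature.
# The activation energy Ea is an assumed generic value, not one reported
# in this chapter.

R = 8.314        # gas constant, J/(mol K)
Ea = 50_000.0    # assumed activation energy, J/mol

def rate_ratio(T1_C: float, T2_C: float) -> float:
    """Factor by which the rate grows from T1 to T2 (degrees Celsius)."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

print(f"45 C -> 80 C: rate increases by a factor of {rate_ratio(45, 80):.1f}")
```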
Figure 13.29 shows bacterial growth curves for a temperature of 45°C. The two compositions reported are 3% and 6% asphalt (crude oil containing heavy petroleum components and 30% asphaltene). Even though the temperature used was not the optimum for this strain of bacteria, the exponential growth is evident. Note that an adaptation period of approximately 10 hours elapsed before the onset of the exponential growth phase. During this time of adaptation to high temperature and salinity, the bacteria concentration remained stable. Also, no appreciable degradation of asphalt was evidenced. During the exponential growth phase, the growth rates were found to be 0.495/hr and 0.424/hr. The growth rates, along with the best-fit exponential equations, are listed in Table 13.5. A confidence level of more than 90% for most cases shows the relatively high accuracy of the exponential form of the growth curve. Even though the actual growth rate is higher at lower asphaltene concentrations, the actual number of bacteria was found to be greater for the high-concentration case. This finding is encouraging because a greater number of bacteria should correspond to faster degradation of asphaltenes. It also demonstrates that the asphalts (and asphaltenes) form an important component of the metabolic pathways of the bacteria. Also, for the 6% asphalt case, there appears to be some fluctuation in bacterial growth. This pattern is typical of two or more species of bacteria competing for survival. At the lower temperature (45°C), two types of bacteria are found to survive. At higher temperatures, however, only one type (the rod-shaped) of bacteria continues to grow. Also, the oscillatory nature of bacterial growth can be explained
Figure 13.29 Bacterial growth in asphaltene at 45°C: bacteria count vs. time in hours for 30% and 60% asphaltene.
Table 13.5 Growth rates of bacteria in asphaltene and wax at different concentrations and temperatures.

Concentration     Temperature    Growth rate, μ    Complete equation
30% asphaltene    45°C           0.495/hr          Ct = 75.81 e^(0.495t)
60% asphaltene    45°C           0.424/hr          Ct = 99.53 e^(0.424t)
30% asphaltene    80°C           0.605/hr          Ct = 56.00 e^(0.605t)
60% asphaltene    80°C           0.519/hr          Ct = 77.16 e^(0.519t)
30% wax           80°C           0.0293/hr         Ct = 48.038 e^(0.0293t)
as a process of step-wise breakdown of the various components of the petroleum crude. In this process of consumption, the asphaltic crude is subject to rapid degradation, along with exponential growth of bacteria following the Monod equation. Figure 13.30 shows the bacterial growth curves for the two cases at 80°C. The growth rates for these two cases are listed in Table 13.5. Clearly, faster growth was observed for the higher temperature case. Note that all these cases used 2% salinity. This growth rate shows both viability and enhanced bacterial growth at higher temperatures. Also, similar to the 45°C case, a larger number of bacteria was found at higher concentrations of asphaltic crudes. Al-Maghrabi et al. (1998) made a similar observation when only 3% asphaltene was used. Even though a higher concentration invokes a lower rate of growth, the initial concentration continues to be greater for all temperatures. Also, a much faster increase in bacterial concentration is evidenced at a higher temperature (80°C). This could be due to two factors. The most obvious one is that the bacteria are thermophilic, with an optimum temperature around 80°C. The other explanation is that the crude oil components are easier to break down at higher temperatures. In fact, Al-Maghrabi et al. (1999) observed that at 80°C, the interfacial tension between oil and water is lowered significantly, making the oil more vulnerable to microbial degradation. Of course, the viscosity of the crude oil is also reduced at a higher temperature, and this factor cannot be ignored. Similarly, Figure 13.31 shows bacterial growth in a wax medium. This figure shows the effectiveness of thermophilic bacteria in preventing wax deposition problems. For this particular experiment,
Figure 13.30 Growth of bacteria in asphaltene at 80°C: bacteria count vs. time in hours for 30% and 60% asphaltene.
Figure 13.31 Bacterial growth in wax at 80°C: bacteria count vs. time in hours.
3% wax was added to the bioreactor while keeping the salinity at 2%. The bacterial growth in the presence of wax is extremely slow, especially when compared to that in the presence of the asphaltic crude. Table 13.5 shows the bacterial growth rate at a low value of 0.029/hr, which is an order of magnitude lower than that of the asphaltene case. Also, the actual number of bacteria is lower in this case. As a consequence, the degradation of wax remains very small (less than 5% in 100 hours). However, the presence of bacteria led to the formation of some crystalline structures. Figure 13.32 shows the effect of salinity on bacterial growth. Both cases represent 10% crude oil (3% asphaltene), but
Figure 13.32 Effect of salinity on bacterial growth (10% crude oil): bacteria count vs. time in hours.
one of them has fresh water in it. The presence of fresh water clearly enhances bacterial growth. Note that these two curves were generated at room temperature (22°C). The growth rates (for the exponential phase) are listed in Table 13.5. The growth rate is three times lower in the presence of salinity. However, the salinity is very high (10%), and most bacteria would not survive in this environment. Figure 13.32 also shows that the fresh-water case reaches its maximum bacteria concentration at an earlier stage than does the high-salinity case. This is expected because the same amount of crude oil was used for both cases and, with degradation being slower in the presence of high salinity, the bacteria ran out of food faster in the fresh-water case. Microphotographs of the wax body before and after bacterial action showed the structure change during bacterial action. The structure of the wax is clearly affected by the presence of bacteria, which contribute to the breakdown of the long-chain polymeric microstructure of the wax. The emergence of crystalline structures is likely to increase the permeability of the porous medium initially affected by wax deposits. A series of microphotographs was observed in order to visualize bacterial growth and its consequences. Microphotographs showed the existence of both round-shaped and rod-like bacteria after overnight treatment of the culture medium at room temperature. The round-shaped bacteria are very active in breaking down crude oil at their optimum temperature of 45°C. However, at higher temperatures, the rod-shaped bacterium is the only contributor to the degradation of the asphaltic crude. As the temperature increased, the
interfacial tension between oil and water decreased, leading to the formation of water-in-oil emulsions. At the later stage of bacterial growth, more oil breaks down and the nature of the emulsion reverses. Other microphotographs showed the affinity of bacteria to crude oil, in which bacteria are found to gather around the oil droplets. This indicates that, with enhanced agitation, more water will be dispersed, leading to higher rates of biodegradation. Several microphotographs also showed the existence of micro-emulsions at 22°C. This is a water-in-oil emulsion. Such emulsions are an indication of low oil-water interfacial tension. Once emulsions are formed, the bioremediation action of bacteria is likely to be enhanced. Microphotographs were also used to confirm a consistent growth in the size of the bacteria as the temperature increased. These large bacteria contribute to the simultaneous formation of oil-in-water and water-in-oil emulsions within the same area.
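The exponential growth rates in Table 13.5 can be read more intuitively as doubling times, t_d = ln(2)/μ. A minimal check using the chapter's own values (below) shows the asphaltene cultures doubling in one to two hours, while the wax culture needs roughly a day, consistent with the order-of-magnitude difference noted above.

```python
import math

# Doubling times from the exponential growth rates mu of Table 13.5.

growth_rates_per_hr = {
    "30% asphaltene, 45 C": 0.495,
    "60% asphaltene, 45 C": 0.424,
    "30% asphaltene, 80 C": 0.605,
    "60% asphaltene, 80 C": 0.519,
    "30% wax, 80 C":        0.0293,
}

for case, mu in growth_rates_per_hr.items():
    print(f"{case}: doubling time = {math.log(2) / mu:5.1f} hr")
```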
14 Sustainable Enhanced Oil Recovery

14.1 Introduction
The predominant theme of this book is that petroleum operations cannot be rendered sustainable unless the industry eliminates the use of artificial chemicals or energy sources. If artificial matter is replaced with natural matter, then sustainability will be assured. Until a century ago, natural materials were used for most engineering applications, a practice spanning thousands of years of human history. Natural additives have been used for the longest time, dating back to the Pharaohs of Egypt and the Han dynasty of China. However, the Renaissance in Europe gave rise to the Industrial Revolution, which became the pivotal point for the emergence of numerous artificial chemicals. Today, thousands of artificial chemicals are used in everyday products, ranging from health care products to transportation vehicles. With renewed awareness of the environmental consequences and more in-depth knowledge of science, we are discovering that such ubiquitous use of artificial chemicals is not sustainable (Khan 2006). If the pathways of various artificial chemicals are investigated, it becomes clear that such
chemicals cannot be assimilated in nature, making an irreparable footprint that can be the source of many other ecological imbalances (Islam 2004; Chhetri et al. 2006; Chhetri and Islam 2007). Federal regulators determined that about 4,000 chemicals used for decades in Canada posed enough of a threat to human health and the environment that they subjected the chemicals to safety assessments (The Globe and Mail 2006). These artificial additives are either synthetic themselves or derived through an extraction process that uses synthetic products. Crude oil makes a major contribution to the world economy today. Crude oil development and production in oil reservoirs can include up to three distinct phases: primary, secondary, and tertiary (EOR) recovery. During primary and secondary recovery, only 30% to 50% of a reservoir's original oil in place is typically produced (USDoE 2006). Hence, attention is being paid to Enhanced Oil Recovery (EOR) techniques for recovering more oil from existing oilfields. The worldwide target for EOR is estimated to be two trillion barrels. Enhanced oil recovery schemes fall broadly into the categories of thermal, chemical, gas injection, and microbial methods. In all these applications, the use of artificial or synthetic chemicals is ubiquitous. This chapter presents various ways of using natural chemicals to achieve the same results in terms of additional oil recovery. With this approach, the products will be environment-friendly (or at least less hostile to the environment) and less expensive than conventional operations.
14.2 Chemical Flooding Agents
Even though the world is facing an energy crisis and is, therefore, looking for innovative enhanced oil recovery methods to produce more oil to meet current and future energy needs, EOR schemes have recently declined in the U.S. and the rest of the world. The major challenge EOR schemes face today is to produce oil under attractive economic and environmental conditions (Islam 1999; Khan and Islam 2007). Figure 14.1 shows the total EOR production in the U.S. between 1982 and 2006. The EOR production increased from 1982 but started to decrease significantly from 1998. Figure 14.2 shows the decline of U.S. EOR production attributed to chemical flooding over the same period. Despite chemical flooding
Figure 14.1 Total EOR production in the U.S. between 1982 and 2006 (Worldwide EOR Survey 2007).
being one of the most widely used EOR techniques, its decline started sharply in the U.S. and in other countries after 1988. Major reasons for the decline of EOR by chemical flooding are the rising prices of surfactants and their long-term environmental impacts. The alkalis most commonly used during alkaline flooding are sodium hydroxide (NaOH), sodium orthosilicate (Na4SiO4), sodium metasilicate (Na2SiO3), sodium carbonate (Na2CO3), ammonium hydroxide (NH4OH), and ammonium carbonate ((NH4)2CO3) (Burk et al. 1987; Taylor et al. 1996; Almalik et al. 1997). The costs of these chemicals have increased significantly in recent years. Figure 14.3 shows their cost increase from 1998 to 2006.
Figure 14.2 Total EOR production in the U.S. by chemical flooding (Moritis 2004).
Figure 14.3 Price of common alkali chemicals (Mayer et al. 1983; Chemistry Store 2005; ClearTech 2006).
solution in the Bradford area of Pennsylvania (Mayer et al. 1983). The alkaline flooding process is simple when compared to other chemical floods, yet it is sufficiently complex to require detailed laboratory evaluation and careful selection of a reservoir for application. Caustic flooding is an economical option because the cost of caustic chemicals is low compared to other tertiary enhancement systems. The chemicals most commonly used for alkaline flooding are sodium hydroxide (NaOH), sodium orthosilicate (Na4SiO4), sodium metasilicate (Na2SiO3), sodium carbonate (Na2CO3), ammonium hydroxide (NH4OH), and ammonium carbonate ((NH4)2CO3) (Jennings 1975; Larrondo et al. 1985; Rahman 2007). Due to reservoir heterogeneity and the mineral compositions of rock and reservoir fluids, the same alkaline solution might induce a different mechanism. A good number of laboratory investigations dealing with the interaction of alkaline solutions with reservoir fluids and reservoir rocks have been reported in the literature (Jennings 1975; Ramakrishnan and Wasan 1983; Trujillo 1983). Due to its higher pH value, sodium hydroxide is considered to be the most useful alkaline chemical for oil recovery schemes (Campbell 1977). A price comparison of the most common synthetic alkaline substances between 1982 and 2006 is presented in Table 14.1. It shows that alkali prices have increased by five to twelve times over the last fifteen years. The biggest challenge of any novel recovery technique is to be able to produce under attractive economic and environmental conditions (Islam 1996; Khan and Islam 2007). Due to the high cost of synthetic alkaline substances and their environmental impact, alkaline flooding has lost its popularity. This is reflected in Figures 14.4 and 14.5. These graphs have been generated using data reported by Moritis (2004). However, a cost-effective alkali might restore the popularity of this recovery scheme. It has become a research challenge for the petroleum industry to explore the use of low-cost natural alkaline solutions for EOR during chemical flooding. In this section, a wood ash extracted solution is used as a low-cost natural alkaline solution. Several experiments have been conducted to test the feasibility of that natural alkaline solution.
14.2.1 Toxicity of the Synthetic Alkalis

Alkali is one of the most commonly used chemicals for various applications. It has a wide range of applications in different industries, such as petroleum refineries, pulp and paper mills, battery
Table 14.1 Comparison of price and physical properties of the most common alkalis (Mayer et al. 1983; Chemistry Store 2005; ClearTech 2006).

Name of alkali          Formula    pH of 1%    Na2O     Solubility (g/100 cm3)     Price ($/ton)    Price ($/ton)
                                   solution    (%)      Cold water    Hot water    in 1988          in 2006
Sodium hydroxide        NaOH       13.15       0.775    42            347          285 to 335       830
Sodium orthosilicate    Na4SiO4    12.92       0.674    15            56           300 to 385       1385
Sodium metasilicate     Na2SiO3    12.60       0.508    19            91           310 to 415       1340
Ammonia                 NH3        11.45       -        89            7.4          190 to 205       1920
Sodium carbonate        Na2CO3     11.37       0.585    7.1           45.5         90 to 95         1400
Figure 14.4 Total oil production by chemical flooding projects in the USA, 1982-2004.
Figure 14.5 Chemical flooding field projects in the USA, 1984-2004.
manufacturing, cosmetics, soap and detergent, leather processing, metal processing, water treatment plants, etc. The estimated worldwide demand for sodium hydroxide was 44 million tons in 1999. The global demand is expected to grow 3.1% per year (SAL 2006). In Figure 14.6, CMAI (2005) reported that 62 million
Figure 14.6 The total alkali production in the world in 2005 (CMAI 2005).
tons of alkalis were produced in 2005. Alkalis are raw commercial products, and when they are transferred to other parts of a manufacturer's plant for use in further chemical processing, there is always the risk of leakage. Each year huge amounts of synthetic chemicals are produced, and all of these chemicals, including all alkalis, are considered responsible for direct or indirect pollution of the environment (Islam 2006). The use of synthetic surfactants usually increases the toxic load to the ecosystem (Mandhava 1994). Moreover, unlike natural surfactants, synthetic surfactants, including polymers, may not always be easily biodegradable. Haynes et al. (1976) reported that a dose of 1.95 grams of sodium hydroxide can cause death in humans. According to the EPA (1992), various types of alkali compounds also have significant adverse effects on human health and are one of the major direct or indirect causes of air pollution. Inhalation of dust, mist, or aerosol of sodium hydroxide and other alkalis may cause irritation of the mucous membranes of the nose, throat, and respiratory tract (MSDS 2006). Exposure to alkalis as a solid or in solution can cause skin and eye irritation. Direct contact with the solid or with concentrated solutions causes thermal and chemical burns, leading to deep-tissue injuries and permanent damage to any tissue (ATSDR 2006). The use of such compounds in underground reservoirs could have impacts on microbial diversity and other long-term environmental consequences. Hence, there has been a growing interest in developing surfactants from biological sources that are environment-friendly and less expensive compared to synthetic surfactants.
14.2.2 Alkalinity in Wood Ashes
The origin of the word "alkali" is the Arabic word "al qali," which means "from ashes." Wood is the natural source of ashes, and the most important ingredients of natural ashes are sodium and potassium. These metals also form the basis for alkaline solutions used for various applications. In the modern world, wood ash is a by-product of combustion in wood-fired power plants, paper mills, and other wood burning facilities. A huge amount of wood ash is produced every year worldwide, and approximately three million tons of wood ash is produced annually in the United States alone (SAL 2006). Wood ash is a complex heterogeneous mixture of all the non-flammable, non-volatile minerals that remain after the wood and charcoal have burned away.
Because of the presence of carbon dioxide in the fire gases, many of these minerals are converted to carbonates (Dunn 2003). The major components of wood ash are potassium carbonate ("potash") and sodium carbonate ("soda ash"). From a chemical standpoint, these two compounds are very similar. From the 1700s through the early 1900s, wood was combusted in the United States to produce ash for chemical extraction. Wood ash was mainly used to produce potash for fertilizer and alkali for industry. On average, the burning of wood results in about 6-10% ash. Ash is an alkaline material with a pH ranging from 9-13 (Rahman et al. 2006), and due to its high alkalinity, wood ash has various applications in different sectors as an environment-friendly alkaline substance. Rahman (2007) made use of the alkaline properties of wood ash and formulated a scheme for enhanced oil recovery applications. Figure 14.7 shows the flow chart of his study.
Figure 14.7 Major steps used to study the natural additives for enhanced oil recovery: characterization of wood ash (natural alkaline) by SEM-EDX and by 13C NMR; extraction of the alkaline solution from the wood ash; comparison of its alkalinity with synthetic alkaline solutions; feasibility testing of wood ash for an EOR scheme (IFT measurement at the oil-water interface and measurement of oil-oil droplet coalescence time); and assessment of the suitability of wood ash for enhanced oil recovery.
14.2.3 Characterization of Maple Wood Ash Producing the Alkalinity
Rahman (2007) used SEM-EDX to characterize the morphology and surface texture of individual particles of the maple wood ash samples and to determine the elemental composition present in the samples. Each maple wood ash sample was characterized by randomly selecting 3-4 fields of view and examining all the ash particles observed within the selected fields. The elemental composition and morphology were noted for each particle and compiled for each sample. The morphology of the untreated maple wood ash samples, imaged at the 200 μm scale, revealed that the samples consisted of some irregularly shaped amorphous particles and porous particles (Figure 14.8). After the extraction of alkaline materials from the maple wood ash samples, SEM-EDX analysis revealed changes in their structure. The SEM micrograph, at the 50 μm scale, revealed that the treated maple wood ash sample (Figure 14.9) was denser than the untreated sample due to the chemical reaction of the wood ash components in an aqueous solution. To obtain the elemental composition of the particles of the untreated and treated maple wood ash samples, Energy-Dispersive X-ray (EDX) microanalysis was carried out.
Figure 14.8 SEM image of untreated maple wood ash sample.
Figure 14.9 SEM image of maple wood ash sample after the treatment.
The EDX detector is capable of detecting elements with an atomic number equal to or greater than six. The intensity of the peaks in the EDX is not a quantitative measure of elemental concentration, although relative amounts can be inferred from relative peak heights. EDX coupled with the SEM analysis of ash showed that the predominant elements in the wood ash samples were oxygen, calcium, potassium, silicon, and aluminum. Lesser amounts of sodium, magnesium, and titanium were observed in the untreated and treated maple wood ash samples (Table 14.2, Figures 14.10 and 14.11). The SEM-EDX analysis revealed that the elemental composition, and thus the nutrients available for plants, in untreated and treated maple wood ashes were almost the same. Hence, after alkaline solution extraction, the wood ashes might still be used as a source of nutrients for plant growth. The major compounds in maple wood ash, identified by X-ray diffraction (XRD), are displayed in Figure 14.12. The XRD analysis revealed that the major components of maple wood ash were calcium oxide, potassium oxide, manganese oxide, silicon oxide, and magnesium oxide, which are alkaline in nature (Holmberg et al. 2003; Rahman and Islam 2007). The XRD observations were also consistent with the energy-dispersive X-ray (EDX) analysis coupled with SEM data (Table 14.2) on selected samples of maple wood ashes.
Table 14.2 Elemental analysis of maple wood ash samples by EDX coupled with SEM.

                    Before extraction of alkaline solution    After extraction of alkaline solution
Element             Atomic (%)    Weight (%)                  Atomic (%)    Weight (%)
Oxygen (O)          86.60         73.85                       86.22         73.54
Sodium (Na)         0.54          0.66                        0.46          0.56
Magnesium (Mg)      1.31          1.69                        1.29          1.67
Aluminum (Al)       0.75          1.07                        1.09          1.56
Silicon (Si)        0.90          1.35                        1.42          2.12
Potassium (K)       1.55          3.24                        1.44          2.99
Calcium (Ca)        7.92          16.92                       7.70          16.45
Titanium (Ti)       0.14          0.35                        0.12          0.29
Manganese (Mn)      0.30          0.87                        0.28          0.81
Total               100.00        100.00                      100.00        100.00
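EDX reports atomic percentages; the corresponding weight percentages follow by weighting each element with its atomic mass, w_i = a_i M_i / sum(a_j M_j). The short check below, using the "before extraction" column of Table 14.2, reproduces the tabulated weight percentages (oxygen near 73.85 wt%, calcium near 16.92 wt%).

```python
# Converting EDX atomic percentages (Table 14.2, before extraction) to
# weight percentages: w_i = a_i * M_i / sum_j(a_j * M_j).

atomic_masses = {"O": 16.00, "Na": 22.99, "Mg": 24.31, "Al": 26.98,
                 "Si": 28.09, "K": 39.10, "Ca": 40.08, "Ti": 47.87,
                 "Mn": 54.94}
atomic_pct = {"O": 86.60, "Na": 0.54, "Mg": 1.31, "Al": 0.75, "Si": 0.90,
              "K": 1.55, "Ca": 7.92, "Ti": 0.14, "Mn": 0.30}

total = sum(a * atomic_masses[e] for e, a in atomic_pct.items())
for e, a in atomic_pct.items():
    print(f"{e:2s}: {100.0 * a * atomic_masses[e] / total:5.2f} wt%")
```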
During the combustion of wood, organic compounds are mineralized, and the basic cations are transformed to their oxides, which are slowly hydrated and subsequently carbonated under atmospheric conditions. The crystalline compounds were found to contain mainly Ca, Mg, and K in all the ashes studied. The mineralogical speciation of maple wood ash showed that calcium occurs in a variety of compounds. Very soluble forms such as calcium oxide (CaO) and portlandite (Ca(OH)2) dominate, but calcite (CaCO3), with its low solubility, occurs in significant amounts (Steenari et al. 1999). Figure 14.12 shows the mineralogical composition of the maple wood ash sample. The 13C CP/MAS NMR spectrum of the maple wood ash sample spun at 8.0 kHz, together with the experimental parameters and peak frequencies, is shown in Figure 14.13. A very pronounced feature in the maple wood ash spectra was an intense peak around 168.363 ppm, indicating the presence of carbonate, [(O)2-C=O], based on typical 13C chemical shift tables (Hesse et al. 1979; Adelaide 2006). The carbonate ions present in the maple wood ash react with whatever species are around. If there is a lot of hydrogen ion (an acidic solution), they stick to the carbonate ion and form new ions,
Figure 14.10 EDX (coupled with SEM) spectrum of the untreated maple wood ash sample (Spectrum 1).
namely, hydrogen carbonate (bicarbonate) ions. If there is not much hydrogen ion around, carbonate in effect "steals" a hydrogen ion from water, leaving a hydroxide ion behind and producing an alkaline solution (Dunn 2003). Normally water does not ionize, but in the presence of carbonate and bicarbonate ions it also breaks apart into ions [H2O(l) → H+(aq) + OH-(aq)], which consequently increases the alkalinity of the aqueous solution. It can be observed in Figure 14.12 that the major components in wood ash are CaO (29.3%) and K2O (11.5%). These two compounds produce alkalinity in an aqueous solution (Holmberg et al. 2003), and the following reactions take place during the process:

CaO + H2O → Ca(OH)2;    Ca(OH)2 + CO2 → CaCO3(s)
K2O + H2O → 2KOH;    2KOH + CO2 → K2CO3(s) + H2O
Figure 14.11 EDX (coupled with SEM) spectrum of the treated maple wood ash sample (Spectrum 2).
Figure 14.12 Mineralogical composition of maple wood ash sample.
Figure 14.13 13C CP/MAS NMR high-resolution spectrum of the maple wood ash sample.

CaCO3(s) ↔ Ca2+(aq) + CO32-(aq)
H2O(l) → H+(aq) + OH-(aq) (in the presence of carbonate ions)
CO32-(aq) + H+(aq) ↔ HCO3-(aq)
HCO3-(aq) + H2O(l) ↔ H2CO3(aq) + OH-(aq)
CO32-(aq) + H2O(l) → HCO3-(aq) + OH-(aq)
The same types of mechanisms have been proposed earlier by several researchers (Steenari and Lindqvist 1997; Dunn 2003; Rahman 2006) for alkaline solution extraction from carbonate salts. Renton and Brown (1995) reported that the production of alkalinity depends on the relative amounts of the individual alkaline components. The dissolution of CaO and Ca(OH)2 results in a high initial production of alkalinity, while the dissolution of CaCO3 results in an overall increase in the amount of alkalinity generated and an increase in the longevity of alkaline production. It is observed in Figure 14.12 that the main component of maple wood ash is CaO at 29.3%. This value is very close to values reported by several authors (Renton and Brown 1995; Obernberger et al. 1997) for different types
of ashes, such as fluidized bed combustor ash (12.2-29.5% CaO), straw ash (7.8% CaO), cereal ash (7.1% CaO), wood bark ash (42.2% CaO), and wood chips (44.7% CaO), all of which are used as alkalis. However, wood ash might have the potential to produce solutions of high alkalinity depending on the CaO and CaCO3 contents.
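The hydrolysis reactions above explain qualitatively why carbonate-rich ash leachate is alkaline; a rough quantitative check is possible with the textbook base constant of the carbonate ion, Kb = Kw/Ka2 (approximately 2.1 x 10^-4, a literature value assumed here, not one given in this chapter).

```python
import math

# Approximate pH of a carbonate (Na2CO3/K2CO3) solution from the
# hydrolysis CO3^2- + H2O <-> HCO3^- + OH-. Kb = Kw/Ka2 is an assumed
# textbook value; the concentration is illustrative.

Kb = 1.0e-14 / 4.7e-11   # Kw divided by Ka2 of carbonic acid

def carbonate_pH(conc_mol_per_L: float) -> float:
    """pH via the weak-base approximation [OH-] = sqrt(Kb * C)."""
    OH = math.sqrt(Kb * conc_mol_per_L)
    return 14.0 + math.log10(OH)

print(f"0.1 M carbonate: pH ~ {carbonate_pH(0.1):.1f}")  # about 11.7
```

The result, a pH near 11.7, is consistent with the pH range of 9-13 reported above for wood ash and its leachate.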
14.2.4 Alkalinity of Maple Wood Ash Extracted Solution

The pH values of maple wood ash extracted solutions at different percentages of maple wood ash (1%, 2%, 4%, 6%, and 8%) were measured, and the values are presented in Table 14.3. It was found that the alkalinity (pH value) of a 6% wood ash solution is close to that of a 0.5% synthetic sodium hydroxide solution. This value is also very close to the pH value of a 0.75% Na2SiO3 solution (Rahman et al. 2006). It was reported that alkaline solutions in the pH range 12-14 are treated as "strong bases," as shown in Figure 14.14.
Table 14.3 Comparison of alkalinity between natural alkaline solution extracted from wood ash and synthetic sodium hydroxide solution at different concentrations.

Synthetic sodium hydroxide (NaOH) solution        Maple wood ash solution
Concentration           pH value                  Concentration           pH value
2.0% NaOH solution      13.11                     8% wood ash solution    12.42
1.5% NaOH solution      13.05                     6% wood ash solution    12.29
1.0% NaOH solution      12.74                     4% wood ash solution    12.09
0.5% NaOH solution      12.35                     2% wood ash solution    11.83
0.2% NaOH solution      11.95                     1% wood ash solution    11.42
Figure 14.14 A typical pH scale (Caveman Chemistry 2006).
From the experimental studies, it was revealed that a 4-8% wood ash extracted solution might have the potential to be used as a source of strong natural alkaline solution, and it might be suitable for an enhanced oil recovery (EOR) scheme during chemical flooding. During alkaline flooding, the pH value of the synthetic alkaline solution stays in the range of 11.5-13.5 as a common practice. Therefore, it might be proposed that the natural alkaline solution extracted from 6% wood ash could be used instead of the 0.5% synthetic sodium hydroxide solution or the 0.75% synthetic sodium metasilicate solution during a chemical flooding scheme in an acidic reservoir. Burk (1987) reported that Na2CO3 solutions are less corrosive to sandstone than NaOH or Na4SiO4. The buffering action of sodium carbonate (Na2CO3) can reduce alkali retention in the rock formation. The main composition of wood ash is carbonate salts such as CaCO3, Na2CO3 (soda ash), and K2CO3 (potash). Carbonate salts offer an additional advantage upon contact with hard water: the resulting carbonate precipitation does not adversely affect permeability as compared to the precipitation of hydroxides or silicates (Rahman et al. 2006). Therefore, it is suggested that the use of a carbonate buffer solution extracted from maple wood ash might result in longer alkali breakthrough times and increased tertiary oil recovery during chemical flooding. The alkalinity of the ash leachate after repeated batch extraction with deionized distilled water at L/S = 10 is illustrated in Figure 14.15. The pH value remains above 10.85 after 10 repeated extractions due to the presence of different alkaline components in wood ashes. Therefore, the same wood ash might be used for alkaline solution extraction several times.
594
THE GREENING OF PETROLEUM
5 H
0
1
1
1
1
10
20
30
40
1
OPERATIONS
1
50 60 L/S ratio
1
1
1
70
80
90
1
100 110
Figure 14.15 pH of the ash leachate as a function of L/S ratio.
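The near-equivalence claimed in Table 14.3 is easiest to see in terms of hydroxide ion concentration, since [OH-] = 10^(pH-14). The short comparison below uses the table's own pH values; the roughly 13% difference between 0.5% NaOH and the 6% ash extract supports the proposed substitution.

```python
# Hydroxide ion concentrations implied by the pH values of Table 14.3:
# pOH = 14 - pH, so [OH-] = 10**(pH - 14).

samples = {
    "0.5% NaOH solution":   12.35,
    "6% wood ash solution": 12.29,
    "8% wood ash solution": 12.42,
}

for name, pH in samples.items():
    OH = 10.0 ** (pH - 14.0)
    print(f"{name}: [OH-] ~ {OH:.4f} mol/L")
```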
14.2.5 Feasibility Test of a Maple Wood Ash Extracted Solution for EOR Applications
A series of laboratory experiments was conducted on the natural alkaline solution for its application in chemical flooding using crude oil. The physical properties of the crude oil are given in Table 14.4 (Rahman et al. 2006).

Table 14.4 Physical properties of the crude oil.

01. Specific gravity: 0.7 to 0.95
02. Vapor pressure: >0.36 kPa at 20°C
03. Vapor density: 3 to 5 (approx.)
04. Freezing point: -60°C to -20°C
05. Viscosity: <15 centistokes at 20°C
06. Solubility: insoluble
07. Coefficient of water/oil distribution: <1

A microscopic study of the interaction of oil-oil droplets in a maple wood ash extracted solution was carried out to understand its effects on oil-oil droplet coalescence and how the oil-water interface changes with time. When the oil droplet was
added to the natural alkaline solution, the alkali reacted with the organic acids of the oil. As a result, a surfactant was produced. This surfactant contained hydrophilic and hydrophobic molecular groups that started to form a layer around the oil droplet, called a "micelle," which caused the smoothing of surfaces and resulted in reduced interfacial friction. Once the micelle formed, the mobility of the oil droplets increased, and the oil droplets moved faster under the influence of a buoyancy force or viscous force, which resulted in the drainage of a thin surfactant water film at the contact between flocculating oil droplets (Figure 14.16). Consequently, this film reached the critical thickness at which it ruptures, and the oil droplets coalesced to form a larger globule, as shown in Figures 14.17A through 14.17F. It was also found that two oil droplets coalesced after 3.5 minutes in a 6% wood ash solution, which has the same alkalinity as the 0.5% NaOH and 0.75% Na2SiO3 solutions.
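The buoyancy-driven droplet motion described above can be put in rough numbers with Stokes' law, v = 2 (rho_w - rho_o) g r^2 / (9 mu). All values below are assumed, representative figures (the oil density sits within the range of Table 14.4), not measurements from this study.

```python
# Stokes' law estimate of the rise velocity of a small oil droplet in
# water. All numbers are assumed illustrative values.

g = 9.81         # gravitational acceleration, m/s^2
rho_w = 1000.0   # water density, kg/m^3
rho_o = 850.0    # crude oil density, kg/m^3 (within Table 14.4's range)
mu_w = 1.0e-3    # water viscosity, Pa s

def rise_velocity(radius_m: float) -> float:
    """Terminal rise velocity of a small oil droplet in water (m/s)."""
    return 2.0 * (rho_w - rho_o) * g * radius_m ** 2 / (9.0 * mu_w)

print(f"50 micron droplet: {rise_velocity(50e-6) * 1e3:.2f} mm/s")
```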
14.2.6 Interfacial Tension (IFT) Equivalence

IFT measurements between a crude oil and an alkaline solution have generally been accepted as a screening tool to evaluate the EOR potential of the crude oil by alkali (Jennings 1975; Campbell 1977; de Zabala et al. 1982). Recently, Mollet et al. (1996) showed in an experimental study that minimum IFT is not observed in the
Figure 14.16 Schematic illustration of different steps in oil droplets' growth during coalescence.
Figure 14.17 Coalescence of oil droplets in natural alkaline solution.
absence of alkali in the aqueous phase. From our experimental studies, it is found that IFT gradually decreases with increasing concentrations of natural alkaline solutions (Figure 14.18), as well as with increasing concentrations of NaOH solutions (Figure 14.19). It was observed that IFT decreases with pH value up to a certain limit, as illustrated in Figure 14.20. This behavior is typical of dynamic interfacial phenomena, which are known to take place in heterogeneous fluids (Elkamel et al. 2002). A higher concentration of the alkaline solution develops more surface-active agent as a result of the reaction
Figure 14.18 Interfacial tension vs. different concentrations of wood ash solution at 22°C.
Figure 14.19 Interfacial tension vs. pH of NaOH solutions at 22°C.
between the organic acid in the crude oil and the alkali in the aqueous phase. Hence, this surface-active agent (petroleum soap) decreases the interfacial tension and increases the mobility of oil in the continuous water phase.
Figure 14.20 IFT of crude oil vs. pH of NaOH and natural alkaline solutions at 22°C.
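The practical payoff of the IFT reduction shown in Figures 14.18 through 14.20 is a higher capillary number, Nc = mu v / sigma, which correlates with residual-oil mobilization. The velocity and viscosity below are assumed illustrative values; only the direction of the effect, orders of magnitude in Nc for a large IFT drop, is the point.

```python
# Capillary number Nc = mu * v / sigma for a few IFT values. The
# viscosity and Darcy velocity are assumed illustrative numbers.

def capillary_number(mu_Pa_s: float, v_m_per_s: float,
                     sigma_N_per_m: float) -> float:
    return mu_Pa_s * v_m_per_s / sigma_N_per_m

mu, v = 1.0e-3, 1.0e-5   # water viscosity (Pa s), typical flood velocity (m/s)
for sigma_mN_per_m in (30.0, 1.0, 0.01):
    Nc = capillary_number(mu, v, sigma_mN_per_m * 1e-3)
    print(f"IFT = {sigma_mN_per_m:5.2f} mN/m -> Nc = {Nc:.1e}")
```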
14.2.7 Environmental Sustainability of Wood Ash Usage
In Canada, 4,175,000 km2 of land out of a total of 10,000,000 km2 is covered by forest. Every year a huge amount of wood ash is produced worldwide, and approximately three million tons of wood ash is produced annually in the United States alone (SAL 2006). Wood ash has great potential to be used as a source of the major- and micronutrient elements required for healthy plant growth. Wood ash contains many essential nutrients, mainly calcium, potassium, magnesium, and phosphorus, for the growth of trees and other plants. Therefore, wood ashes might have potential applications in different sectors as an environment-friendly, sustainable, natural additive. It has been reported in the literature (Anfiteatro 2007) that wood ashes with neem seed oil, kefir, sea salt, and essential oils might be used to make a natural toothpaste, which may help prevent tooth sensitivity caused by the poor condition of tooth enamel. This toothpaste has also been shown to eliminate gum bleeding if used on a daily basis. The toothpaste can remove most stains on teeth, such as those caused by cigarette smoking. It may also strengthen tooth enamel and gums. Apart from the traditional usage of wood ash as a source of alkali in different sectors, it has also been used for a long time to saponify fats
in soap making and shampoo production (Sh 2007). Chhetri et al. (2007) developed a process to produce a completely natural bath soap using all-natural ingredients such as vegetable oil, coconut oil, olive oil, honey, beeswax, cinnamon powder, neem leaf powder, and natural coloring and flavoring agents instead of synthetic materials. A wood ash extracted alkaline solution was used to saponify the oils to make the natural soap. This section has shown that the nutritional quality of wood ash is almost the same before and after the alkaline solution extraction. Wood ash could be collected separately from different sources, and it could be disposed of in landfills because the nutrient source is from plants. It could also be utilized industrially for cement manufacture. It can serve as a glazing agent in the ceramics industry, a road base, a pozzolana, and an alkaline material for the neutralization of wastes (Liodakis 2005), and it can contribute to the establishment of a sustainable process, as shown in Figure 14.21.
Figure 14.21 Possibilities for a sustainable utilization of wood and wood ash.
14.2.8 The Use of Soap Nuts for Alkali Extraction
Recently, Chhetri et al. (2008b) reported the use of ground soap nuts (Sapindus mukorossi) to reduce the oil-water interfacial tension. The effect of surfactant concentrations of 1%, 2%, 4%, 8%, and 12% was investigated. The experimental results showed that the surfactant formed can effectively reduce oil-water IFT. The effect of heat on IFT was also studied, and it was found that a higher IFT reduction was achieved after heating the system to 50°C. The experimental results showed that the extract has great potential to be used as a surfactant for enhanced oil recovery schemes. This established that surfactants derived from natural sources are economical and environment-friendly options for chemical flooding operations.
14.3 Rendering CO2 Injection Sustainable
The injection of CO2 is one of the oldest EOR techniques available. Historically, CO2 has been applied in both miscible and immiscible applications. Because of some very attractive features of CO2 (e.g., swelling, IFT reduction, low minimum miscibility pressure), it has received considerable attention in the topic of EOR. The injection of CO2 into a hydrocarbon reservoir is known to increase the amount of recoverable oil, yielding economic benefits. The process, however, has not been widely implemented out of concerns over greenhouse effects. The International Energy Agency (1995) projects global carbon emissions to grow from about 6 billion metric tons in 1990 to over 8 billion metric tons by 2010, representing an annual growth rate of 1.5%. The Climate Challenge participation accords signed so far, with utilities pledging a wide range of greenhouse gas reduction activities, account for an aggregate of about 44 million metric tons of carbon equivalent (Kane and Klein 1997). However, the scientific protocol to meet this target has not yet been established, and it is becoming increasingly clear that this target is not something that can be achieved. In the last few years, significant progress has been made in understanding the concepts associated with the greenhouse gas emissions that pose an environmental threat to the planet. It is in this context that CO2 capture, disposal, and utilisation potentials remain an attractive option for the medium to longer term, particularly if the current trend of energy supplies continues. Figure 14.22 shows the
Figure 14.22 Global CO2 utilisation potential (IEA-GHG 1995).
global CO2 utilisation potential based on the existing fossil fuel infrastructure and the reliability of associated technologies (Tilley 1997). At present, about 3% of global oil production comes from EOR. Herzog et al. (1993) have mentioned that underground disposal of carbon dioxide has been identified as one of the high-priority areas for research related to global climate change. The major obstacle to the disposal of CO2 underground is the necessity to separate CO2 from flue/waste gases. Capturing CO2 and its disposal and utilisation have significant challenges to overcome, and the existing technology is considered to be prohibitively expensive. Cost effectiveness for such separations can be achieved by eliminating gas separation, such as through gas re-injection into sub-surface formations from gas processing plants. Chakma (1996) emphasised that the cost may be acceptable when applied in combination with other measures. It is believed to be too early to predict the implications regarding the disposal of CO2; instead, evaluating and discussing the options are the needs of the hour. Chakma (1996) provided examples of acid gas re-injection projects into depleted oil and gas reservoirs, as well as aquifers, that have been used for this purpose. There have been numerous studies performed on various aspects of the storage potential of CO2 and other greenhouse gases. While many potential solutions have been proposed through these studies, they have also created much confusion due to conflicting findings, narrow focus, and a lack of an interdisciplinary and global approach. A comprehensive review of the latest research developments can dispel some of the misconceptions that have dominated this important aspect of the global environment. Turkenberg (1997) reviewed the potential of CO2 utilisation and storage options. In the overall CO2 utilization picture, the EOR potential is considered to be low, as can be seen from Figure 14.23. The potential to store CO2 in depleted oil and natural gas fields can be much larger, with estimates ranging from 130 to 500 GT-C, depending on the recoverable
Figure 14.23 CO₂ global sinks and capacities (IEA 1995; Kuuskraa et al. 1992; Barnhart et al. 1995; Hendriks 1994).
Deep aquifers have an estimated storage potential of 90 to 1000 GT-C. This wide range of variation is due to different assumptions made about the necessity of having a structural trap to assure safe and sustained disposal. Other assumptions for such an estimate are the volume of the aquifers, the percentage of the aquifer to be filled, and the density of CO₂ under reservoir conditions (Hendriks 1994). Figure 14.24 shows a pictorial view of the CO₂ EOR process. CO₂, when injected into the reservoir, dissolves in the oil. As a result, the oil viscosity is reduced and mobility increases. The efficiency of EOR depends on the pressure and, thus, on the reservoir depth.
Figure 14.24 Graphic of CO₂ enhanced oil recovery (Courtesy of Occidental Petroleum Corp., DOE, 2004).
Carbon dioxide is miscible with reservoir oil at high pressures, and greater miscibility has cost benefits associated with increased oil recovery. The threshold pressure above which miscibility occurs is called the Minimum Miscibility Pressure (MMP). Miscible CO₂ displacement results in approximately 22% incremental recovery, while immiscible displacement achieves approximately 10% incremental recovery (Taber 1994). Therefore, there is a greater payback for miscible displacement. For this, however, deeper reservoirs, in which pressures are above the MMP, are preferred. The minimum miscibility pressure depends on the composition of the oil; higher-density, higher-viscosity oils with more multi-ring aromatic structures have a higher MMP (Taber 1994). Historically, CO₂ injection has only been used for recovering oils with an API gravity greater than 22 and a viscosity lower than 10 cP, because of greater miscibility and higher recovery efficiencies (Bergman et al. 1996). Conventionally, it is the norm to use purified CO₂ for gas injection. Often, the cost of purification is prohibitive and can render a CO₂ injection scheme uneconomical. The argument has been made in the past that, because CO₂ injection doubles as a gas sequestration scheme, it is beneficial to the environment and the additional cost of CO₂ purification is justified. This argument is not scientifically correct, and the resulting scheme does not meet the sustainability criterion proposed by Khan (2007). This becomes clear in the analysis that follows in this section.
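As a minimal illustration of the screening rules of thumb quoted above, the following Python sketch classifies a candidate reservoir as a miscible or immiscible CO₂ flood prospect. The function name, the assumed hydrostatic pressure gradient (~0.465 psi/ft), and the example inputs are our own illustrative assumptions; the 22%/10% incremental-recovery figures and the >22°API / <10 cP screening limits are those cited above (Taber 1994; Bergman et al. 1996).

```python
# Illustrative screening sketch for CO2 flooding, based only on the rules of
# thumb quoted in this section (Taber 1994; Bergman et al. 1996). All helper
# names are ours; a typical hydrostatic gradient of ~0.465 psi/ft is assumed
# for the initial reservoir pressure.

def screen_co2_flood(depth_ft, mmp_psi, api_gravity, viscosity_cp,
                     pressure_gradient_psi_per_ft=0.465):
    """Classify a candidate CO2 flood and estimate incremental recovery."""
    # Historical screening limits quoted in the text (Bergman et al. 1996)
    if api_gravity <= 22 or viscosity_cp >= 10:
        return "outside historical CO2-injection screening limits", 0.0

    reservoir_pressure = depth_ft * pressure_gradient_psi_per_ft

    if reservoir_pressure > mmp_psi:
        # Miscible displacement: ~22% incremental recovery (Taber 1994)
        return "miscible CO2 flood candidate", 0.22
    # Immiscible displacement: ~10% incremental recovery (Taber 1994)
    return "immiscible CO2 flood candidate", 0.10


if __name__ == "__main__":
    label, incremental = screen_co2_flood(
        depth_ft=6000, mmp_psi=2200, api_gravity=35, viscosity_cp=2.0)
    print(f"{label}: ~{incremental:.0%} incremental recovery")
```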
14.3.1 Miscible CO₂ Injection

Miscible CO₂ injection has been the most common form of CO₂ injection for EOR purposes. Holt et al. (1995) concluded from their study that tertiary CO₂ injection would displace significant amounts of oil and water, giving rise to a large storage capacity for CO₂. Their reservoir simulation of a real oil field showed that 22-26% HCPV (hydrocarbon pore volume) of incremental oil can be recovered and more than 62% HCPV of CO₂ can be stored. Although not usually miscible with reservoir oil upon initial contact, CO₂ can create a miscible front. Miscibility is initiated by the extraction of significant amounts of heavier hydrocarbons (C₅-C₃₀) by CO₂. This process aids recovery by a solution gas drive. It is useful over a wider range of crude oils than hydrocarbon injection methods. At different reservoir conditions, CO₂ displacement can resemble an enriched gas drive.
That is, CO₂ can saturate the reservoir fluids to the extent that the in-situ swollen crude is miscible with the trailing CO₂. In steeply dipping beds, mobility control is not critical, and the CO₂ slug can be chased by less expensive, lighter gases (N₂, flue gas, etc.) (Holm 1982). Miscible flooding with carbon dioxide or hydrocarbon solvents is considered one of the most effective enhanced oil recovery processes applicable to light- and medium-gravity oil reservoirs. CO₂ has a viscosity similar to that of hydrocarbon miscible solvents, and both types of solvents suffer poor volumetric sweep-out because of unfavorable viscosity ratios. However, CO₂ density is similar to that of oil, so CO₂ floods minimize gravity segregation compared to hydrocarbon solvents. Miscible displacement between crude oil and CO₂ is caused by the extraction of hydrocarbons from the oil into the CO₂ and by the dissolution of CO₂ into the oil. Light and intermediate molecular-weight hydrocarbon fractions, as well as the heavier gasoline and gas-oil fractions, are vaporized into the CO₂ front. Consequently, vaporizing-gas drive miscibility with CO₂ can occur with few or no C₂-C₆ components present in the crude oil. Miscible displacement between reservoir oil and hydrocarbon gases can also occur by the in-situ transfer of intermediate molecular-weight hydrocarbon fractions from the injected gas into the oil, called the condensing gas drive process. Zick (1986) suggested that miscibility development in a hydrocarbon gas injection process might not occur by the classical condensing drive mechanism. It could instead be a combined condensing-vaporizing drive, a frontal vaporizing mechanism, as suggested by Novosad and Constant (1988; 1989), or a combination of all three mechanisms (Lee et al. 1988). The dominant mechanism depends on the solvent-oil compositions, reservoir temperature, and operating pressure (Lee et al. 1988). Irrespective of the prevailing mechanism, miscibility between the oil and CO₂ or hydrocarbon solvents eliminates interfacial tension and capillary forces, which could, in theory, help recover essentially all of the residual oil. Laboratory studies conducted by many researchers have reported that, for light- and medium-gravity reservoirs, CO₂ injection is the most effective EOR process (Huang et al. 1989; Huang et al. 1994; Srivastava and Huang 1994).
Huang et al. (1994) conducted displacement tests for the Weyburn reservoir with medium to light crude. They found that CO₂ effectively mobilized the residual oil, as determined by the significant increase in oil recovery after the start of CO₂ injection. Tertiary oil recovery was approximately 36.3% of the residual oil-in-place, or 26.6% of the initial oil-in-place. Water flood tests conducted on Weyburn reservoir fluid-saturated Berea sandstone cores at different injection rates showed a dramatic effect of flood velocity on breakthrough time and recovery. As the injection rate increased, the breakthrough time and breakthrough recovery decreased. Huang et al. (1994) did not explain the reason behind such a strong dependence of recovery on rate. However, Bansal and Islam (1994) also observed such dependence and attributed it to non-equilibrium phenomena that are not accounted for in conventional modelling. Figure 14.25 shows the sensitivity of oil recovery (expressed as a percentage of the initial oil in place, IOIP) to flow rate. Note that this dependence is not due to viscous instability, because it persists even when one is operating under a stable flow regime.

14.3.1.1 Problems Associated with Miscible CO₂ Injection
Some of the major disadvantages of miscible displacement with CO₂ are that CO₂ is expensive to transport and is not always available. Efforts have been made to determine the feasibility of using more readily available gases (e.g., flue gas, waste gas) without sacrificing the benefits of CO₂ injection (Huang et al. 1997; Al-Falahy et al. 1998).
Figure 14.25 Dependence of oil production on flow rate during miscible CO₂ injection (Islam and Chakma 1993). [Plot of oil recovery (% IOIP) vs. days of production (0-1500) for high, medium, and low injection rates.]
Even though field studies of these techniques have not yet been reported, the general consensus is that flue gas or waste gas can be effective. Corrosion is known to increase during CO₂ injection and is a potential problem regardless of whether the disposal is into oil or gas reservoirs; carbon dioxide containing moisture can be very corrosive. Ironically, using a low concentration of CO₂ (as in flue gas) would decrease corrosion in the injection tubing, saving on both processing and corrosion remediation. Poor sweep and gravity segregation can result in low oil recovery during miscible CO₂ injection. This is of particular concern if the oil viscosity is not very low. Gravity stabilization has been recommended by researchers in order to improve oil recovery under unfavorable mobility ratio conditions (Islam et al. 1994). Wellbore plugging due to asphaltene precipitation during miscible CO₂ injection is a valid concern (Islam 1994). Huang et al. (1994) tested for asphaltene precipitation in the presence and absence of brine. They reported a smooth increase in asphaltene precipitation with CO₂ concentration in the absence of brine and observed that the presence of brine enhanced asphaltene precipitation to some extent. They attributed this small increase to the contact time between oil and CO₂ rather than to a brine effect. Laboratory tests on light to medium crudes suggested that the light oils had fewer asphaltenes, suggesting that these reservoirs are less likely to have formation plugging problems. This study was not conclusive, and further investigation was suggested to address the problem. Previous studies have shown that the presence of asphaltenes in a hydrocarbon reservoir affects reservoir rock properties during both miscible and immiscible flooding operations (Islam 1994). Kamath et al. (1993) conducted a series of dynamic displacement tests to evaluate the effect of asphaltene deposition on water-flood performance in both consolidated and unconsolidated sand packs. They concluded that asphaltene deposition affects reservoir rock permeability and the end-point saturations. Srivastava and Huang (1995), in their feasibility study of miscible or near-miscible flooding of heavy crude with CO₂, reported an initial rapid decrease in fluid viscosity followed by a slow decrease at higher CO₂ concentrations. The bubble-point pressure, gas-oil ratio, swelling, and formation volume factors of the reservoir fluid-CO₂ mixtures increased smoothly with CO₂ concentration. The deposition of asphaltene during a miscible displacement process with CO₂ as the EOR agent can cause numerous problems with a negative effect on oil recovery.
Srivastava and Huang (1997), in their laboratory study, addressed the deposition of asphaltenes and other heavy particles. They concluded that asphaltene precipitation depends on the CO₂ concentration and is independent of the operating pressure. Ali and Islam (1998) mentioned that deposited asphaltenes can be removed mechanically with an increased flow rate, whereas adsorbed asphaltene can only be removed through desorption, the rate of which is much lower than that of adsorption. They concluded that asphaltene plugging is dependent on the flow rate, leading to greater deposition near the wellbore. Loss of miscibility during a miscible CO₂ injection process can have severe consequences. Because most reservoir simulators are not suitable for modelling the transition between miscible and immiscible displacements, a CO₂ injection design can be seriously flawed. Figure 14.26 shows numerical simulation results of recovery performance with miscible and immiscible displacement processes (Islam and Chakma 1993).
14.3.2 Immiscible CO₂ Injection
Contrary to the miscible CO₂ case, few applications of immiscible CO₂ injection have been reported. Only recently has immiscible CO₂ injection been introduced to heavy oil reservoirs in the context of non-thermal EOR (Rojas et al. 1995; Lozada and Farouq Ali 1988; Islam et al. 1994; Srivastava et al. 1993, 1995). CO₂ can also be useful in heavy oil reservoirs where thermal methods are difficult to implement. CO₂-saturated crude oils exhibit moderate swelling, leaving fewer stock tank barrels of residual oil in place, and their viscosity is reduced to a point at which mobility ratios are drastically improved.
Figure 14.26 The role of miscibility in recovering heavy oil with CO₂ (Islam and Chakma 1993). [Plot of oil recovery vs. days of production (0-3000).]
Dyer et al. (1994) studied the phase behavior and scaled model behavior of a Saskatchewan reservoir to investigate displacement mechanisms associated with immiscible gas (CO₂) injection processes. They reported that non-thermal EOR techniques show good potential for recovering oil from the thin and shaly heavy oil reservoirs of Saskatchewan. Among the non-thermal processes, immiscible CO₂ injection holds the most promise for accessing these reservoirs. Dyer et al. (1994) concluded that the process is proven and applicable to Saskatchewan reservoirs. They further suggested that approximately 90% of the total oil-in-place could be accessed in Saskatchewan reservoirs with pay thicknesses of three to seven meters. Several studies have shown the importance of gravity stabilization during immiscible CO₂ injection, especially in heavy oil formations (Islam et al. 1994; Islam et al. 1992). For heavy oil reservoirs, the displacement front is expected to be unstable, which can lead to a significant loss in oil recovery. Figure 14.27 shows one such case, for which the oil recovery is several times smaller during unstable flow than during stable flow. This dependence was attributed to viscous fingering, which decreases the gas breakthrough time significantly. Figure 14.28 shows that the instability number (which lumps flow rate, geometry, and other factors into a dimensionless group) affects breakthrough recovery during gas injection when viscous fingering occurs. In contrast to water flooding, which shows the existence of a pseudo-stable regime, gas breakthrough recovery appears to decline continuously as a function of the instability number.
Figure 14.27 The role of stability in recovering heavy oil with CO₂ (from Islam and Chakma 1993). [Plot of oil recovery vs. days of production (0-3000) for stable and unstable displacement fronts.]
[Plot of breakthrough recovery vs. instability number (10³-10⁵).]
Figure 14.28 Role of instability number during recovery of heavy oil with flue gas (Islam et al. 1992).
14.3.3 EOR Through Greenhouse Gas Injection

Nitrogen and flue gases are the least expensive of all EOR agents. These gases are also known to behave similarly (Emmons 1986), and it appears that they can be used interchangeably for oil recovery. However, these gases also have a high MMP, so miscible displacement is possible only in deep reservoirs with light oils. Figure 14.29 shows how different gases can lead to similar recovery. This is particularly important because the decision to use a less expensive gas can often change the entire economics of a project.
Figure 14.29 Gas injection with horizontal wells (Bansal and Islam 1994).
Moritis (1994) reported that three nitrogen injection projects had operated successfully for years as flue gas injection projects. However, corrosion was a problem, especially for flue gas from internal combustion engines. In addition to its low cost and widespread availability, nitrogen is the most inert of all injection gases. Therefore, this scheme holds promise for the future. The suitability of CO₂-containing flue gases for enhanced oil recovery (EOR) was studied by Islam et al. (1992) in the context of the cold front of in-situ combustion. They observed that the flow rate plays a significant role in recovering heavy oil from a reservoir. This flow-rate dependence was attributed to viscous fingering, which is a sign of unstable displacement. More recently, Srivastava and Huang (1997) conducted laboratory studies to evaluate various operating strategies for heavy oil recovery with flue gases. They conducted one-dimensional core-flood tests with a 14° API oil and flue gas system. Of the three injection strategies investigated (secondary slug, tertiary slug, and tertiary WAG), a secondary gas slug flood with live oil was observed to be the most suitable for heavy oil recovery because it had the highest displacement efficiency. The tertiary WAG process recovered more oil than the tertiary slug process because the former predominantly improved mobility control. Ultimate oil recovery was higher in runs conducted with live oil than in those with dead oil; this was attributed to the relatively more favorable mobility of live oil and a slightly higher operating pressure. Srivastava and Huang (1997) reported that flue gas appeared to be an effective flooding agent because the oil recoveries were only 2-4 percentage points lower than those obtained with CO₂. They further mentioned that the comparable oil recoveries in flue gas runs are believed to be the combined result of competing mechanisms, namely, the free-gas mechanism provided by N₂ in contrast to the solubilization mechanism provided by CO₂, which predominates in CO₂ floods. A common feature of all these estimates is the lack of rigorous scientific analysis. The case in point is the Weyburn project. These estimates were based on MMP studies only. The values were fed into a conventional reservoir simulator that had very little scientific merit in terms of modelling complex phenomena such as viscous fingering, gravity override, and reservoir heterogeneity. The production strategy was optimized based on a stable, steady-state displacement front. Moreover, no risk analysis was performed for the loss of miscibility during the CO₂ displacement phase.
Any of these phenomena can make the economic estimates irrelevant. For instance, if viscous fingering occurs during the displacement phase, the time to CO₂ breakthrough will be reduced by more than 50%, roughly halving both the oil recovery and the CO₂ storage capacity (see the analysis by Coskuner and Bentsen 1990). As it turned out, CO₂ breakthrough in the Weyburn field took place within a short period, approximately when experimental studies using models that simulated viscous fingering had predicted it (Khan and Islam 2007a).
14.3.4 Sour Gas Injection for EOR
The concept of using sour gas to enhance oil recovery is not new. Harvey and Henry (1977) reported one of the first studies of core flood experiments with carbon dioxide and hydrogen sulfide. Even though oil recovery with carbon dioxide (both miscible and immiscible) has been investigated by many researchers, Harvey and Henry (1977) appear to have been the first to use hydrogen sulfide to recover oil. By using pure hydrogen sulfide, pure carbon dioxide, and mixtures of the two, they were able to compare recoveries for different cases. They reported that the miscibility pressure was lower for H₂S than for CO₂, consequently making the H₂S displacement process more attractive from the recovery point of view. They observed that miscibility could not be achieved for heavy oil with either H₂S or CO₂. However, even during immiscible displacement, pure H₂S displacement showed a distinct advantage over mixed (H₂S and CO₂) displacement, which in itself performed better than pure CO₂ injection. For both light and heavy oil, more than 15% incremental oil was produced when pure H₂S was used instead of a mixture of H₂S and CO₂. Pure CO₂ showed an advantage over water-flood but consistently showed lower recovery than pure H₂S or an H₂S-CO₂ mixture. In later years, research was performed on the recovery of both heavy and light oil with CO₂, with more emphasis on immiscible CO₂ displacement for heavy oil. For light oil, on the other hand, the focus has been on the impact of H₂S generated through bacteriogenic activity (Hill et al. 1990; Frazer and Boiling 1991). These studies identified the nature of the H₂S generation problems, but they did not focus on the recovery aspect of H₂S. The presence of sour gases poses a difficult problem in terms of oil and gas production, processing, and refining. Unfortunately, little is known about sour gas behavior in most cases.
Advances have been made in other areas of unwanted gas disposal and utilization (see Islam and Chakma 1993). It is important, however, to differentiate between gas injection under stable and unstable flow conditions. If gas injection is carried out from the top of a reservoir, the flow is likely to be gravity-stabilized. In the presence of an unstable displacement, gas breakthrough takes place early, leading to a decline in overall productivity. In a publication by Al-Falahy et al. (1998), several solutions to the sour gas problem were proposed, supported by numerical simulation and laboratory experiments. The proposed solutions deal with both sour gas disposal and oil recovery with sour gas. The problems associated with these techniques were studied in detail in order to depict an accurate picture of the available options. They found that sour gas improves the miscibility behavior of the crude oil, leading to greater recovery when significant amounts of sour gas are present in the injection stream. They also reported a scheme for separating SO₂ from a gas stream containing CO₂ and other gases. Numerical results indicate that oil recovery as high as 90% can be achieved with pure H₂S injection. Furthermore, they reported that the recoveries changed only slightly when a mixture of gases was used. Mixtures of H₂S with CO₂ and methane followed the high recovery of pure H₂S injection; the lowest recovery was reported with CO₂. In addition, when an immiscible, unstable gas injection scheme was employed, recovery was significantly lower, limited to a value similar to that of water flooding.
14.3.5 Viscous Fingering

Because CO₂ usually has a lower viscosity than the displaced crude oil, the displacement is bound to be unstable, often with profuse viscous fingering (Islam et al. 1992; Islam et al. 1994). The theoretical models currently used for predicting viscous fingering (onset as well as propagation) do not account for compressible fluids, nor do they apply to miscible CO₂ systems (Bentsen 1985). The process of miscible or immiscible displacement suffers from viscous instability (due to the low viscosity of CO₂ injected into high-viscosity crude oil) and sensitivity to heterogeneity (which induces viscous fingering and can diminish sweep efficiency). Other problems involve the role of dimensionality in determining the onset and propagation of viscous fingers, the loss of miscibility due to viscous
fingering, the role of asphaltenes in altering the displacement front and its mobility, and the relation between bottom-water and viscous fingering (bottom water plays multiple roles, initially adding to injectivity).
14.3.6 Design of Existing EOR Projects

Due to the lack of rigorous scientific investigation, the designs of EOR projects involving CO₂ or other greenhouse gases have been flawed both technically and environmentally. Flow rates selected in the past are usually high enough to induce viscous fingering, the only exception being projects involving gas injection from the top of an anticline (see Islam et al. 1994 for details). In all other cases, viscous fingering will most likely occur, and the entire design should be based on scaling viscous fingers. One aspect of modelling EOR processes is the modelling of unstable displacement fronts. When the field displacement front is unstable (due to viscous fingering in an immiscible system, viscous grading in a miscible displacement process, instability due to heterogeneity, etc.), it is important that the displacement in the laboratory be unstable, too. Thus, the degree of instability in the laboratory needs to be similar to that in the field (Islam and Bentsen 1986). Because the degree of instability depends on the dimensions of the domain, having a similar degree of instability in the laboratory translates into having a much higher velocity in the laboratory than in the field. This is contrary to the common belief that laboratory experiments should match field velocities. An EOR scheme is often accompanied by unstable displacement. As discussed by Khan and Islam (2007a), if the mobility ratio is unfavorable, it is likely that the field dimensions will make the process unstable. Under this condition, profuse viscous fingering occurs, and the recovery efficiency can drop significantly. At present, few researchers have conducted scaled model studies of an unstable displacement process. Bansal and Islam (1994) were the first to attempt modelling unstable displacement in a scaled model. Their findings indicated that it is important to conduct laboratory experiments at a flow rate that gives rise to the same instability number; they argued that only then would the flow regime in the laboratory be the same as the one prevailing in the field. Islam (PTRC 1998) conducted a series of displacement tests under
unstable conditions. It was noted that 3D modelling was essential because, with 1D cores, such an unstable regime could not be established even at a velocity of 50 ft/day. Note that, conventionally, a flow rate of 1 ft/day is used during laboratory core testing. This slow flow rate puts the flow regime invariably on the stable side, making it practically impossible to simulate unstable flow regimes in a laboratory. Basu (2005) used this technique to model chemical flooding experiments and showed reasonable agreement between numerical modelling and scaled modelling results. For scaling an EOR process, the following steps are required (a minimal sketch of steps 7-9 follows the list):
1. Determine the end-point permeabilities.
2. Determine the capillary pressure curve.
3. Determine interfacial tensions.
4. Estimate flood pattern dimensions.
5. Estimate the frontal velocity from the injection well.
6. Calculate the gravity number.
7. Calculate the instability number.
8. If the instability number is less than π², follow the conventional approach (velocity matched with that of the field as per the scaling requirement).
9. If the instability number is greater than π², calculate the laboratory velocity such that the instability number in the laboratory matches that of the field.
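The following Python sketch illustrates steps 7-9 under stated assumptions. The exact instability-number definition is given by Khan and Islam (2007); here a simplified Peters-and-Flock-type proportionality is used, with the critical-velocity and gravity terms dropped, which is enough to show why matching the instability number forces a much higher laboratory velocity. All function names and the example numbers are ours.

```python
# Illustrative sketch of the scaling decision in steps 7-9 above. The exact
# instability-number definition belongs to Khan and Islam (2007); we use the
# simplified proportionality I ~ (M - 1) * mu * v * W**2 / (sigma * k),
# a Peters-and-Flock-type form with critical-velocity and gravity terms
# dropped. All names and example numbers are ours.

import math

def instability_number(mobility_ratio, viscosity_pa_s, velocity_m_s,
                       width_m, ift_n_m, perm_m2):
    """Simplified dimensionless instability number (illustrative form)."""
    return ((mobility_ratio - 1.0) * viscosity_pa_s * velocity_m_s
            * width_m**2) / (ift_n_m * perm_m2)

def lab_velocity_for_same_instability(field_velocity_m_s, field_width_m,
                                      lab_width_m):
    """Velocity that gives the lab model the same instability number.

    With fluids and rock matched, I ~ v * W**2, so
    v_lab = v_field * (W_field / W_lab)**2.
    """
    return field_velocity_m_s * (field_width_m / lab_width_m) ** 2

if __name__ == "__main__":
    # A 1 ft/day field front in a 100 m wide pattern, modelled in a 0.3 m cell:
    v_field = 1.0 * 0.3048 / 86400.0                    # 1 ft/day in m/s
    v_lab = lab_velocity_for_same_instability(v_field, 100.0, 0.3)
    print(f"lab velocity = {v_lab * 86400 / 0.3048:.0f} ft/day")  # ~111,000
    I = instability_number(10.0, 1e-3, v_field, 100.0, 0.03, 1e-12)
    print("unstable" if I > math.pi**2 else "stable", f"(I = {I:.3g})")
```

The order-of-magnitude result (a laboratory velocity vastly exceeding the field velocity) is consistent with the observation above that even 50 ft/day could not establish an unstable regime in 1D cores.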
Khan and Islam (2007) give full details of these steps, including the definitions of the instability number, capillary number, mobility ratio, etc. It is important to note that, for gas injection, the instability number continues to affect the recovery, meaning that the higher the flow rate, the lower the recovery. This is because the pseudo-stable regime is never reached with gas (Islam et al. 1991). Gravity plays an important role during gas injection because a favorable gravity number can stabilize a process. Islam et al. (1994) reported that by placing the gas injector on top of the producer (e.g., in a dual horizontal well configuration), gas displacement can be rendered stable even in a heavy oilfield. However, such phenomena cannot be simulated using the conventional approach. During miscible displacement, the displacement front develops a transition zone that can vary significantly in length. If the crude oil in question is not light, the transition zone can be much wider.
The problem with a wide transition zone is that miscibility can be lost altogether. The lack of miscibility, or the extension of the transition zone under any displacement situation, would translate into an inadequate sweep of the reservoir, resulting in low oil recovery. The effect of the transition zone length has not been studied in the past (see the series of articles published by Coskuner and Bentsen). Inherently related to this problem is the storage or mitigation aspect of CO₂ displacement. Unless efforts are made to characterize the miscible/immiscible system, the performance prediction is bound to be inaccurate. It is also important to predict the sustenance of a miscible front, because the loss of miscibility may in turn lead to the onset of viscous fingering. Figure 14.30 shows how the composition of the injection gas affects the minimum miscibility pressure. Three features of this graph are important: 1) if miscibility does not occur, for instance due to heterogeneity or viscous fingering, then there is no reason to inject CO₂, which would be much more expensive than nitrogen or air injection; 2) a high MMP can be countered by the addition of H₂S or SO₂; and 3) the presence of H₂S or SO₂ from the source gas is both economical and sustainable (as opposed to the artificial additives used to remove these gases, e.g., MEA, DEA, TEA).
Figure 14.30 The role of contaminants in determining MMP (Islam and Huang 1999).
14.3.7 Concluding Remarks

1. Most EOR projects involving CO₂ injection (miscible or immiscible) have grossly overestimated the oil recovery potential. Due to the lack of rigorous scientific considerations, operating parameters have not been optimized, and projects suffer a serious risk of premature gas breakthrough that would make the economics of both recovery and storage very dismal.
2. The use of waste CO₂ (e.g., flue gas), sour gases, nitrogen, and other greenhouse gases appears to be promising.
3. The cost of CO₂ disposal can be reduced significantly if waste gas could be disposed of directly, without separating CO₂, H₂S, etc., from the main stream. This fits the sustainability criteria and avoids introducing toxic chemicals (e.g., MEA, DEA, TEA) into the process.
14.4 A Novel Microbial Technique
In an EOR process, the alteration of rock/fluid properties is often sought in order to increase productivity and, ultimately, production. Microbial processes have been presented for improving fluid flow characteristics. In this section, a microbial application for altering rock characteristics is presented. This technique can be used for sand consolidation as well as fracture remediation to improve wellbore performance.
14.4.1 Introduction

The objective of this technique is to develop a biomimetic strategy for obtaining biomineralization from biologically based source materials and to establish the long-term sustainability of the process. Biomineralization in selected locations eliminates the use of synthetic materials and thus helps keep the environment healthy (Ferris et al. 1988). Because of their biological origin and formation, these biomaterials outperform synthetic or non-natural structures in both mechanical strength and functional properties.
These outstanding properties are attributed to the biomaterials' well-organized structure and the strong interfacial interaction between biomacromolecules and inorganic components. Yu et al. (2004) suggested that every biological system uses biomacromolecules as nucleators, cooperative modifiers, and matrixes or molds to exert exquisite control over the processes of biomineralization, resulting in unique inorganic-organic composites with various special morphologies and functions. Sondi and Sondi (2005) studied the bioprecipitation of mineralizing organisms that selectively form either intracellular or extracellular metal carbonate polymorphs with unusual morphological properties at ambient pressure and temperature. They found that the nucleation, growth, and morphological properties of biogenic metal carbonate structures are controlled and regulated by organic macromolecules, mostly peptides and proteins. One of the most important parameters is particle morphology. To control the precipitation reaction, the nucleation and growth steps must be mastered. The nucleation step is especially important in precipitation reactions: the nucleation rate and the duration of the nucleation process have a direct influence on the final particle size, the size distribution, and the growth mechanism (Donnet et al. 2005). Another unique effect is the initial formation of a metastable amorphous phase of calcium carbonate, which rapidly transforms into crystalline entities (Sondi and Matijevic 2001). The rate of change in the crystal structure depends strongly on the concentration of the enzyme. To obtain a high space-time yield in industry, mineralization under high supersaturation conditions in batch or semi-batch processes has become a popular method (Schlomach et al. 2006). However, precipitation usually does not take place naturally at low ionic concentrations. Biological processes do not have this limitation. The natural biomineralization process is considered superior not only for its performance but also for its long-term sustainability. As a novel and sustainable process, biomineralization is now encouraged as a replacement for non-natural, chemical precipitation processes, with applications ranging from selective reservoir plugging to dental fillings. The wide distribution of microbes in geological environments facilitates the biomineralization process in nature. Natural surface rocks have been observed to host about 10³ bacterial or fungal cells
per gram of stone (Eckhardt 1985). Microbial metabolic activities play an important role in deposition and diagenesis processes in a geological environment. Microbial mineral precipitation is not an unusual process in nature. There are numerous examples: bacteria and algae precipitate minerals from seawater, a process that plays an important role in the deposition and consolidation of beach-rock formations (Krumbein 1979). Microbes can oxidize metals and deposit them in hot spring systems and in deep ocean hydrothermal vents (Ferris et al. 1987; Ehrlich 1983). The formation of marine ferromanganese nodules and freshwater ferromanganese deposits has been attributed to bacteria (Ehrlich 1974). Minerals such as calcite, silica, oxidized manganese, and oxidized iron usually do not precipitate naturally because of the low ionic concentrations. However, when bacteria interact with ions such as Ca²⁺, Fe³⁺, and silicon and manganese species, precipitation takes place (Beveridge et al. 1985), and plugging or cementing occurs as a consequence. The most abundant mineral phase associated with bacteria is a complex (Fe, Al) silicate of variable composition. The amount of metal sorption and biomineralization largely reflects the availability of dissolved metals in the water. In laboratory studies, Krumbein (1979) found that, among 20 bacterial strains, 16 were able to precipitate aragonite from solutions made of seawater and bacterial nutrient; some strains yielded Mg-calcite. In his experiments, up to 350 mg of aragonite were obtained from a liter of the medium. In sediments, alkali and alkaline earth ions deposit at the surfaces of the membranes of bacterial cells and stain these membranes (Degens and Ittekkot 1982). These naturally stained membranes can further act as templates for mineral deposition and may, under certain environmental conditions, lead to stratabound ore deposits. Microbial mineral precipitation occurs directly as a result of bacterial metabolic activities or indirectly as a consequence of regional geochemical changes caused by bacterial metabolic activities (Kantzas et al. 1992). Like most cell surfaces, bacterial cell walls are anionic (Beveridge et al. 1984), regardless of whether the bacteria are gram-positive or gram-negative. It is therefore reasonable to assume that bacteria will interact strongly with metallic ions, even in the diluted solutions of natural bodies of water. Laboratory experiments have demonstrated that metal accumulation can be substantial within the wall fabric (Hoyle and Beveridge 1983). In a laboratory simulation of a low-temperature sediment diagenesis
process, Beveridge and Fyfe (1985) found that metal precipitation associated with bacteria occurred primarily in the wall fabric. Mineral crystals grew with time until all of the wall material had been mineralized, and then crystals developed within the cytoplasm. Eventually, the entire cell became crystalline. In this process, gram-positive walls seem more reactive than their gram-negative counterparts (Beveridge and Fyfe 1985). In a geological formation or an experimental sand pack, bacteria adhere to pore surfaces with their glycocalyx and induce mineral crystallization; dead cells are held together by the glycocalyx. These mineral biofilms eventually plug the pores. Detailed studies have been carried out on carbonate precipitation induced by environmental changes caused by bacterial metabolic activities (Kantzas et al. 1992; McCallum and Guhathakurta 1970; Krumbein 1979). In the case of calcium carbonate deposition, bacteria increase the pH of the solution, which in turn reduces the solubility of CaCO₃ and induces precipitation. When fresh medium (mineralization solution) flows continuously through pores or fractures, continuous mineral precipitation can be maintained, resulting in plugging. Many experiments have been carried out to study the process of microbial mineral plugging. Bacteria and their populations are found to affect this process (Macleod et al. 1988; Jack 1988; Gollapudi et al. 1995). Anaerobes and facultative aerobes are preferred for enhancing oil recovery because reservoirs are essentially anaerobic and oxygen injection faces its own constraints. Vegetative cells are more active than starved cells and can achieve more complete plugging; starved cells, on the other hand, are smaller and can penetrate deeper and into smaller pores than vegetative cells, which is the advantage of applying starved bacteria to plugging. A higher bacterial concentration (greater population) induces quicker and more complete mineral plugging. Different quantities and qualities of nutrients induce different plugging results (Kantzas et al. 1992). Experimental results indicate that the amount of plugging in a porous medium is roughly proportional to the amount of nutrients passed through it. There seems to be a critical nutrient injection rate below which bacterial plugging does not take place. In microbial silicon precipitation, certain sugar and amino acid concentrations are optimal for bacterial silicon uptake. Microbial mineral plugging has been employed by petroleum microbiologists as a method to enhance the production of
hydrocarbon resources (Jack 1992). Reservoir heterogeneity has a significant effect on the oil recovery efficiency of a water-flood process. The residual oil saturation (ROS) that remains after water flooding is a potential target for reservoir selective plugging techniques using in-situ growth of bacteria (Jenneman et al. 1984). In heavy oil fields, where water tends to respond to pumping more readily than viscous oil, primary production wells commonly water-out at low oil recoveries. This is a serious problem that can develop gradually over several years, or it may be a catastrophic event when water directly underlies oil in the reservoir. In this situation, excess water production may be controlled by selectively plugging the zones of water encroachment (Jack et al. 1991). Chemically cross-linked polymers may be used as plugging agents; however, they are unsustainable and expensive, and their performance is unpredictable. Microbial plugging is an efficient and less expensive solution to these problems. Moreover, the process is sustainable in both the short term and the long term. The method involves introducing viable bacteria into the aqueous displacing fluid to be injected into the high-permeability, water-swept zones. Once the bacteria are in place, a designed volume of nutrients can be injected into the reservoir to support in-situ metabolism of the bacteria, which are capable of initiating physical plugging and reducing the original permeability. This results in a diversion of the displacing fluids from plugged high-permeability zones to unswept zones and, thus, improves sweep efficiency. Jack et al. (1992) suggested that a significant target for microbial mineral plugging might be the plugging of fractures in carbonate reservoirs, which presently thwart late-life strategies for gas and oil recovery by fostering gas and water breakthrough to production wells. In an experiment conducted by Gollapudi et al. (1993), simulated fractures were put into sand packs. They found that increased microbial activities take place in simulated fractures and that the formation of precipitates increases at the fracture surfaces (Gollapudi et al. 1995). Islam and Bang (1993) suggested that microbial mineral plugging could be employed to remediate fractures in historic monuments and buildings. Zhong and Islam (1995) reported experimental studies on the process of microbial fracture remediation. Microbial carbonate precipitation has received much attention because it is a comparatively more efficient plugging or consolidation process
(McCallum et al. 1970; Krumbein 1974; Krumbein 1979; Kantzas et al. 1992). CaCO₃ has several polymorphs: calcite, aragonite, vaterite, and amorphous calcium carbonate (Seo et al. 2005). The first two are the most common, being widely found throughout nature as the main mineral constituents of sedimentary rocks and as inorganic components in the skeletons and tissues of many mineralizing organisms, especially mollusks. The bacterium Bacillus Pasteurii can be used efficiently to promote carbonate precipitation and to reduce permeability in an unconsolidated system, and it was employed in this research. The chemical and biochemical reactions and their results, the effects of fracture width and fracture fillings, experimental fracture remediation results, and the effects of chemical and physical factors, medium, and bacteria were studied.

14.4.2 Some Results

14.4.2.1 pH Change at Constant Temperature
CaCO₃ precipitation (chemical) increased when pH increased (Figure 14.31). There was obvious precipitation in solutions of pH 9.5 and above; however, no precipitation was observed in solutions of pH 9 and below. The pH in bacteria-inoculated solutions increased from 8.0 to 8.2, 8.5, 9.0, and 9.5 on day 1, day 2, day 3, and day 5 after inoculation, respectively. The value stayed at 9.5 on days 6 and 7 and then dropped to 9.3 on day 8.
Figure 14.31 Calcium carbonate precipitation (chemically) at different pH.
Total precipitation increased when pH increased (Figure 14.32). The chemical reaction (precipitation) that took place in the solution is:

Ca²⁺ + HCO₃⁻ + OH⁻ → CaCO₃ + H₂O

According to this equation, a higher pH induces more CaCO₃ precipitation. In the bacteria-inoculated solutions, the pH increased to a value as high as 9.5 and stopped there, because these bacteria cannot survive at higher pH values. So, if one can adapt these bacteria to grow at higher pH values, or find alkalophilic bacteria to use in the fracture remediation process, more precipitate will be produced and better remediation of fractures will take place. The pH values of the 20°C and 30°C solutions increased from the initial value of 8.5 to 9.0 on day 3 and to 9.5 on day 5. Following this initial phase, the pH values remained constant from day 5 through day 9; after day 9, the pH dropped to around 9.0. The pH value of the 5°C solution did not change during the first 3 days, but it increased to 9.0 on day 5 and remained at that level until the experiment was terminated. The pH value of the 50°C solution remained unchanged during the first 5 days; after day 5, it fluctuated between 8.0 and 8.5 (Figure 14.33).
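As a minimal illustration of why the pH rise drives precipitation, the following sketch evaluates the carbonate fraction of dissolved inorganic carbon from the HCO₃⁻/CO₃²⁻ equilibrium. The function names and the sample pH values are ours, and pKa2 ≈ 10.33 (25°C) is a standard literature value rather than a figure reported in this study.

```python
# Illustrative calculation of why raising the pH drives CaCO3 precipitation.
# From the HCO3- <-> H+ + CO3^2- equilibrium, the carbonate fraction grows
# roughly tenfold per pH unit well below pKa2. pKa2 ~ 10.33 at 25 C is a
# standard literature value; names and sample numbers are ours.

def carbonate_fraction(ph, pka2=10.33):
    """Fraction of (bicarbonate + carbonate) present as CO3^2- at a given pH."""
    ratio = 10.0 ** (ph - pka2)      # [CO3^2-] / [HCO3-]
    return ratio / (1.0 + ratio)

if __name__ == "__main__":
    for ph in (8.0, 8.5, 9.0, 9.5):  # the range observed in the experiments
        print(f"pH {ph}: CO3^2- fraction = {carbonate_fraction(ph):.4f}")
    # The roughly threefold jump between pH 9.0 and 9.5 is consistent with
    # precipitation being observed at pH 9.5 and above but not at pH 9 and
    # below (Figure 14.31).
```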
Figure 14.32 Calcium carbonate precipitation (chemically) and calcium carbonate + biomass precipitation (bio-chemically) at different pH.
Figure 14.33 pH change vs. time at different temperatures.
14.4.2.2 Bacteria Concentration Changes
The bacterial concentration in the 50°C solution dropped abruptly during the first 3 days, and those at 5°C, 20°C, and 30°C also dropped, though not as abruptly. The concentration at 5°C continued to drop until day 5. The concentrations at 20°C and 30°C remained almost constant from day 3 through day 9, while the concentration at 50°C increased after day 5 and continued to increase until day 9 (Figure 14.34). This trend arose because most of the bacteria cultured at room temperature died at the low temperature (5°C) and at the high temperature (50°C), and adaptation of the bacteria to the high temperature took
Figure 14.34 Bacterial concentration changes vs. time at different temperatures.
place after day 5. Also, because the fresh medium added to the bacterial solutions was limited, the bacterial concentrations at 20°C and 30°C also decreased (since the experiment began at a high concentration). After day 8, no medium was added to the bacterial solutions, and the concentration in all of them dropped.

14.4.2.3 Total Precipitation at Different Temperatures
Total precipitation (including calcium carbonate, biomass, and microbial metabolic waste) was 1.7 g, 2.4 g, 2.8 g, and 4.4 g at 5°C, 20°C, 30°C, and 50°C, respectively (Figure 14.35). The results are in good agreement with those obtained by Gollapudi et al. (1995), who reported optimum bacterial growth, and the chemical deposition associated with that growth, at a pH value of around 8.5.

14.4.2.4 Effect of Medium
From the following dissociation reactions, it can be calculated that the ionic strength of the urea-NaHCO₃-CaCl₂ medium is 0.239 and that the solubility product of CaCO₃ in this solution is 10⁻⁷·³³:

NH₄Cl ↔ NH₄⁺ + Cl⁻
CaCl₂ ↔ Ca²⁺ + 2Cl⁻
NaHCO₃ ↔ Na⁺ + HCO₃⁻
HCO₃⁻ ↔ H⁺ + CO₃²⁻

In this solution, [Ca²⁺] = 2.5 × 10⁻² mol/L and [CO₃²⁻] = 1.5 × 10⁻⁶ mol/L, so the ion product [Ca²⁺][CO₃²⁻] ≈ 10⁻⁷·⁴ is just below the solubility product. Because the medium is almost saturated with CaCO₃, it is not practical to try to increase CaCl₂ or NaHCO₃ in the medium to increase precipitation.
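The "almost saturated" statement can be checked with a one-line saturation-index calculation using the figures just quoted. This is a sketch under simplifying assumptions (activity corrections at ionic strength 0.239 are ignored), and all names are ours.

```python
# Illustrative saturation check for the urea-NaHCO3-CaCl2 medium using the
# figures quoted above. The concentrations and solubility product are taken
# from the text; the helper names and log-form comparison are ours.

import math

def saturation_index(ca_mol_per_l, co3_mol_per_l, log_ksp=-7.33):
    """SI = log10(ion product / Ksp); SI > 0 means supersaturated.

    Activity corrections at ionic strength 0.239 are ignored here for
    simplicity (a real calculation would apply, e.g., Davies coefficients).
    """
    log_iap = math.log10(ca_mol_per_l * co3_mol_per_l)
    return log_iap - log_ksp

if __name__ == "__main__":
    si = saturation_index(2.5e-2, 1.5e-6)
    print(f"SI = {si:+.2f}")
    # Slightly negative: 'almost saturated', so raising the pH (which raises
    # [CO3^2-]) is what tips the medium into precipitation.
```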
Figure 14.35 Total precipitation at different temperatures (5°C, 20°C, 30°C, and 50°C).
However, because the bacteria use urea as their food and raise the pH by hydrolyzing urea into NH₃ and CO₂ (Kantzas et al. 1992), one can increase the urea concentration to induce larger pH changes and thereby produce more precipitation. To test this, five media samples of 400 ml each, with urea concentrations of (A) 1.0%, (B) 1.5%, (C) 2.0%, (D) 2.5%, and (E) 3.0%, were prepared. B. Pasteurii was then inoculated into each solution. The pH and bacterial concentrations were measured every day for 12 days, and total precipitation was measured at the end of the experiment. The pH increased faster and reached higher values in solutions with higher urea concentrations (Figure 14.36). In each solution, the pH reached a peak value on day 7 and then dropped. The bacterial population was higher in solutions with lower urea concentrations than in solutions with higher urea concentrations (Figure 14.37), but bacteria in the higher urea concentration solutions grew for much longer. More total precipitation was produced at higher urea concentrations, though the difference was small (Figure 14.38). Therefore, the medium used in this research (urea = 2.0%) is not optimal with respect to urea concentration. In a solution with 2.5% urea, the pH will increase more than in solutions with 2.0% or 3.0% urea; this concentration is recommended for future research.
Figure 14.36 pH change vs. time in media with different urea concentrations.
Figure 14.38 Total precipitation in media with different urea concentrations.
The strain B. Pasteurii does not adjust well to higher urea concentrations, so the bacterial populations in solutions (D) and (E) were significantly lower than in solutions (A) and (B).

14.4.2.5 Effects of Fracture Width
A column flow test was set up to study the effects of fracture width. In this setup, capillary tubes of different diameters, (A) 1.5-1.8 mm, (B) 2.1 mm, and (C) 3.2 mm, were used to simulate fractures. These tubes were cut into pieces 1 to 2 cm in length and put into three sand-bacteria packs, respectively. Another column, without fractures, was prepared at the same time. Medium-range
flow rates were used in all of these sand-bacteria packs. The initial flow rates through the columns were measured, and flow rates were then recorded every 24 hours for two weeks. The results are shown in Figure 14.39. In the sand pack without fractures, the flow rate dropped to 0.05 ml/min after 120 hours and to 0.0 ml/min after 192 hours. In the pack containing fractures of 1.5-1.8 mm width, the flow rate dropped from 1.2 ml/min to a low value (<0.23 ml/min) after 168 hours. The flow rate in the pack containing fractures of 2.1 mm width dropped from 1.6 ml/min to a low value (<0.5 ml/min) after 216 hours. It appears that complete or nearly complete mineral plugging took place in these three sand-bacteria packs. However, in the pack containing fractures of 3.2 mm width, the flow rate did not drop appreciably, indicating no appreciable mineral plugging. It appears that bacterial mineral deposition in fractures is washed out by the flow of medium if the fractures are too wide. Fracture width thus has an important effect on the bacterial mineral plugging process. It is one of the major factors to be considered in fracture remediation projects, and it is reasonable to deduce that fillings in fractures will help the remediation process.
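A hedged sketch of how the reported flow-rate series can be reduced to a plugging indicator follows: for a fixed pressure drop, Darcy's law makes the flow-rate ratio q(t)/q(0) a direct proxy for the conductivity ratio. The 80% threshold and the helper names are our own illustrative choices, not part of the original study.

```python
# Illustrative reduction of the column flow-test data quoted above into a
# plugging indicator. For a fixed pressure drop, Darcy's law makes q(t)/q(0)
# a proxy for the permeability (conductivity) ratio. The threshold and the
# names are ours; the sample values are the figures quoted in the text.

def plugging_ratio(q_initial_ml_min, q_now_ml_min):
    """Fraction of initial conductivity lost (0 = open, 1 = fully plugged)."""
    return 1.0 - q_now_ml_min / q_initial_ml_min

def is_plugged(q_initial_ml_min, q_now_ml_min, threshold=0.8):
    """Call a pack 'plugged' once >80% of conductivity is lost (arbitrary)."""
    return plugging_ratio(q_initial_ml_min, q_now_ml_min) >= threshold

if __name__ == "__main__":
    # 1.5-1.8 mm fracture pack: 1.2 ml/min initially, <0.23 ml/min at 168 h
    print(is_plugged(1.2, 0.23))   # True: ~81% loss of conductivity
    # 3.2 mm fracture pack: no appreciable drop
    print(is_plugged(1.6, 1.5))    # False
```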
14.4.2.6 Effects of Fillings on the Fracture Remediation Process
Eleven granite cores, each with a diameter of 5.4 cm and a length of about 3.5 cm, were prepared. In the center of each core, a fracture perpendicular to the axis of the core was cut; the width of these fractures was 2.4 mm. The fractures were filled with different materials (Table 14.5), and all of the fractured granite cores were then treated with a urea-NaHCO₃-CaCl₂ medium for 12 days.
Figure 14.39 Flow rate in sand-bacteria packs containing fractures of different sizes.
Table 14.5 Effects of Different Fracture Fillings on the Remediation Process*

Sample | Fracture Fillings | Compressive Strength (psi)
1 | None | Failed before loading
2 | Sand + bacteria | 4556.5
3 | Sand (95%) + silica fume (5%) + bacteria | 5060.2
4 | Sand (80%) + silica fume (20%) + bacteria | 6104.4
5 | Sand (95%) + silica fume (5%) | 4680.0
6 | Gypsum | 2955.5
7 | Gypsum + bacteria | 7459.7
8 | Sand (90%) + limestone dust (10%) + bacteria | 4674.8
9 | Sand (80%) + limestone dust (20%) + bacteria | 5582.3
10 | Limestone dust + bacteria | 6938.8
11 | Limestone dust | 2369.5

* Fracture size (width): 2.4 mm; bacteria concentration: 1.103 × 10¹⁰ cells/ml; samples were treated with medium for 12 days.
Compressive strength tests were conducted on all the samples, after drying, to assess the remediation results. The results indicate that the fillings enhanced the fracture remediation process (Table 14.5, Figure 14.40). No plugging occurred in the fractures without fillings, and the compressive strength of that core was low. Fillings (sand + silica fume or sand + limestone dust) with bacteria were consolidated better than fillings without bacteria and increased the strength of the core more effectively. When the fillings were sand + silica fume or sand + limestone dust, better remediation results could be obtained by increasing the percentage of silica fume or limestone dust.

14.4.2.7 SEM and XRD Studies of the Plugged Sand-Bacteria-Fracture Column
SEM studies were carried out on a plugged sand-bacteria-fracture pack (Figure 14.41). The results show that bacteria-mineral precipitation covered the surfaces of the sand grains.
Figure 14.40 Compressive strengths of granite cores with fractures containing different fillings.
More precipitation occurred in the gaps between sand grains (Plate A). Enhanced precipitation and cementation occurred in fractures (Plate B). Organic materials also contributed to the plugging process (Plate C). Five different shapes were observed among the mineral crystals (Plates D and E); they are the shapes of calcite and aragonite. Most of the mineral crystals contain numerous crevices (Plates D and E) that are rod-shaped, 1 to 4 μm in length, and distributed randomly within the crystals. Bacteria are considered to have caused these holes: bacteria sedimented onto the surfaces of growing crystals and acted as impurities, inhibiting crystal growth at those sites while the crystal body kept growing, thereby creating a hole (E. Duke 1995; SDSM&T, personal communication). X-ray diffraction analyses were conducted on several crystals and organic materials. The results (Figure 14.42) show that the crystals are calcite or aragonite. In the organic compounds, chlorine was detected (H, C, and O could not be detected with the equipment used).
14.4.3 Concluding Remarks
Microbial mineral plugging is an efficient method for remediating fractures. Mineral deposits induced by bacterial metabolic activities, together with biomass, plugged the pores in a porous medium, and enhanced plugging occurred in fractures. Bacillus Pasteurii proved to be a suitable strain for this purpose.
Figure 14.41 SEM picture of a plugged sand-bacteria-fracture column.
Mineral precipitation increased when the pH increased, whether chemically or biochemically. Higher temperatures helped the plugging process by increasing mineral precipitation. The urea-NaHCO₃-CaCl₂ medium is almost saturated with CaCO₃. Different urea concentrations in the medium induced different pH changes and different amounts of
Figure 14.42 Energy dispersive X-ray analysis of microbial mineral precipitation in a plugged sand-fracture-bacteria pack: (A) organic material, (B) crystal.
precipitation in the medium. It appears that 2.5% urea is optimum for the microbial mineral precipitation process. Fracture width is an important factor affecting the remediation process: the mineral deposition in fractures is washed out by the flow of the medium if the fractures are wide enough. The critical width was found to be between 2.1 mm and 3.2 mm. Fillings in fractures helped remediate fractures that were too wide for bacterial plugging alone. Different fillings produced different remediation results, as shown by the compressive strengths of the fracture-remediated granite cores.
14.5 Humanizing EOR Practices
In EOR operations especially, thermal EOR, as a reservoir heating method, requires steam generation and bitumen upgrading facilities, and huge amounts of source water are needed for steam generation. Thermal methods also generate a large amount of produced water, and recycling of this water is required in order to reduce the source and disposal volumes to acceptable levels. A hot water extraction process requires open pit mines, and sometimes large-scale tailing ponds are also required; the disposal problem for produced sand and fines is relatively minor. Disposal water can be separated into two streams, with the most offensive waste disposed of underground and the safe stream discharged into a river system. In the underground disposal of wastewater, it is essential to ensure that groundwater sources are not contaminated. Noxious gas emissions into the atmosphere are also an issue of concern: sulfur dioxide is the main pollutant, produced by burning high-sulfur crude or oil in the boilers. The injection of flue gases along with steam into the reservoir may have some advantages in reducing atmospheric pollution. In order to develop EOR schemes that are inherently environmentally friendly, technically effective, and socially responsible, the following steps should be taken (Khan and Islam 2007a):
1. Set up an interdisciplinary team (engineers, scientists, economists, and even social scientists).
2. Discuss the problems openly in the presence of top executives and policymakers before solutions are addressed.
3. Ask each participant to propose his/her own solution to the problem. At least one solution per person is ideal, and this should apply to every participant, including those from social science or other nontechnical disciplines.
4. Document multiple solutions for each problem.
5. Evaluate each solution objectively, irrespective of who proposed it.
6. Evaluate the cost of the status quo.
7. Use the screening criterion of Khan and Islam (2006) to evaluate the long-term benefit/cost of a particular solution (a minimal illustration follows this list).
SUSTAINABLE ENHANCED O I L RECOVERY
8. List all the waste materials naturally generated in a particular project. 9. Select an injection fluid from the waste products (point 8). 10.If a particular solution is not fit for a specific field, investigate the possibility of using that solution in a different field. 11. Develop scaling criteria for each solution. 12.Conduct scaled model experiments using scaling groups that are the most relevant.
633
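As a concrete illustration of steps 4 through 7, the sketch below ranks documented candidate solutions by a long-term benefit/cost ratio and compares each against the cost of the status quo. All names and numbers are hypothetical placeholders; the actual screening criterion of Khan and Islam (2006) involves considerations not reproduced here.

```python
# Hypothetical illustration of steps 4-7: rank documented solutions
# by long-term benefit/cost and compare each with the status quo.
solutions = [
    # (proposer, description, long-term benefit, long-term cost)
    ("engineer",         "waste-gas reinjection",    9.0, 3.0),
    ("economist",        "conventional steam flood", 5.0, 7.0),
    ("social scientist", "community water reuse",    7.0, 2.0),
]
status_quo_cost = 6.0  # step 6: cost of doing nothing (placeholder)

ranked = sorted(solutions, key=lambda s: s[2] / s[3], reverse=True)
for proposer, description, benefit, cost in ranked:
    verdict = "beats status quo" if cost < status_quo_cost else "reconsider"
    print(f"{description} (from {proposer}): "
          f"B/C = {benefit / cost:.2f}, {verdict}")
```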
15 The Knowledge Economics
15.1 Introduction
The devastating cultural impacts of the most fundamental ideas start to register loudly and sharply at moments of massive crisis such as the present. This is true of environmental concerns, and it is even truer of economics, the focus of the post-Renaissance modern world. However, few recognize those impacts, and even fewer understand where they originate. As a result, the world at large either continues to ignore the impacts or fails to see or hear them coming.
15.2 The Economics of Sustainable Engineering
Traditionally, economic evaluations are based on cost per unit output, which is only suitable for determining short-term and tangible outlooks. A comprehensive economic evaluation of any system should include long-term considerations that are only captured through intangible elements. An evaluation that incorporates both the tangible and intangible elements may be considered truly comprehensive. An engineering decision support system that follows
such an evaluation process will focus on the long term, even as it tests and selects ingenious solutions that are suitable for tangible and short-term applications. By focusing on the long term, the sustainability criterion is fulfilled, thereby eliminating the long-term negative consequences of a short-term remedy. This chapter proposes a guideline for economic evaluation that will truly identify the best process among different processes for both short-term and long-term applications.

In most cases, the comparison of different processes is based on economic evaluation. In a conventional economic analysis, there is no room for distinguishing between an energy source that is nonrenewable and one that is renewable. In the Information Age, it has become clear that such a "fit for all" analysis technique is not appropriate for meeting the energy needs of the future (Zatzman and Islam 2007a). It has become both necessary and possible to custom-design a specific engineering application in order to ensure long-term sustainability. One can no longer count on the future to ensure that short-term needs are fulfilled.

Because of the sustainability crisis, conventional economics and accounting theories have lost their effectiveness. For instance, according to the supply-and-demand theory, the cost of products from limited resources will increase continuously with the increase of demand and depletion of the resources. This scenario holds the most severe consequences for any resource that is the driving force of civilization and is very limited in supply. Under this model, if current practices of energy production and utilization continue, there will be a huge shortage of energy in the near future. This is evident from the increase in gas prices within the last two decades, the recent tripling of oil prices that sparked a worldwide financial crisis, and the most recent sharp drop in oil prices that made the financial crisis worse. Yet, the real value of crude oil did not change, just like the real value of soil or any other natural commodity. This is the reason Zatzman and Islam (2007) characterized this economic system as being based on perception and not knowledge.
15.2.1 Insufficiency of Current Models: The Analogy of the Colony Collapse Disorder

In 2007, some parts of the United States and the world lost as much as 90% of the bee population, triggering worldwide concern for the long-term sustainability of the human race. In the last century, Albert Einstein was quoted as saying, "If the bee disappeared off the
surface of the globe, then man would only have four years of life left. No more bees, no more pollination, no more plants, no more animals, no more man" (Häfeker 2005). If the author's name were withheld (but not his physics background), or if the crisis of the Colony Collapse Disorder (CCD) were not the burning topic, today's scientific community would have remarked, "This is pseudoscience. He should simply talk about probability and not some negative assertions. He is not a biologist. He is not an ecologist. This is totally hypothetical. All bees will never disappear," along with numerous other comments marginalizing the statement as "utter nonsense." Because it is Einstein, and because the rate of bees disappearing is "real" (meaning measurable with tangible means), alarms are sounding everywhere. However, it is only a matter of time until the experts begin to say, "Because we cannot find a 'significant' link between a phenomenon and this effect, there is nothing that can be done." Of course, it won't be stated publicly. Instead there will be a public outcry for funding so that "facts" can be collected, hypotheses can be "verified," tests can be "repeated," and the problem can be "solved" by proposing "counter measures." What would be absent in any future discourse is the questioning of what constitutes "facts," how a hypothesis can be "verified," what "repeating" a phenomenon means, and, most importantly, how one can measure the success of the proposed solutions (Zatzman and Islam 2007a).

Remarkably, the entire modern age is synonymous with a transition from honey to sugar to Saccharin® to Aspartame®. This transition also means that more engineering leads to more revenue and more profit, even though the transition is actually a departure from the real to the artificial. Consider the following transition, as outlined by Zatzman (2007a):

Honey → Sugar → Saccharin® → Aspartame®

From the scientific standpoint, honey fulfills both conditions of phenomenality, namely, origin and process. That is, the source of honey (nectar) is real (even if the flowers were grown with chemical fertilizers, pesticides, or even genetic alteration), and the process is real (honeybees cannot have false intentions and are therefore perfectly natural), even if the bees were subjected to air pollution or a sugary diet. The quality of honey can differ depending on other factors, e.g., chemical fertilizer and genetic alteration, but honey remains real. None of these features are required to be
recorded as per the guidelines provided by the regulatory agency (EU Council 2002). The science of tangibles is incapable of characterizing the quality of a product beyond tangible features. For instance, only recently has the sale of "unpasteurized" honey become acceptable, at a higher price. Today, there is no price structure to distinguish between honey produced by "organic" growers and honey produced by chemical growers, who, for instance, give high-sugar diets to the bees.

As we "progress" from honey to sugar, the origin remains real (sugar cane or beet), but the process is tainted with artificiality, starting with electrical heating, chemical additives, bleaching, etc. Once again, the science of tangibles does not offer any means of differentiating or controlling the quality degradation due to practices that are not sustainable (Chhetri and Islam 2007). Further "progress" to Saccharin® marks the use of another real origin, but this time the original source (crude oil) is old, very old compared to the source of sugar. Even though crude oil is real (because it does come from natural processes), it is not comparable to sugar cane or beet. With a steady-state analysis, both will appear to have the same quality. This steady-state analysis is the characteristic feature of the science of tangibles, and it has misconceptions embedded in it, as outlined recently by Khan and Islam (2007b). As further processing continues, one witnesses the final transition to Aspartame®. Indeed, nothing is phenomenal about Aspartame®, because both the origin and the process are artificial. So, the overall transition from honey to Aspartame® has been from 100% phenomenal to 100% aphenomenal. Considering this, what economic calculations are needed to justify this replacement? It becomes clear that, without considering this phenomenality feature, any talk of economics would only mean the economics of aphenomenality. Yet, this remains the standard of neo-classical economics (Zatzman and Islam 2007).

Zatzman and Islam (2006) considered this aspect in the context of gas energy pricing and disclosed the science of tangibles behind the graph depicted in Figure 15.1. Note that in this graph it is impossible to quantify reality. For instance, one cannot say that the honey is 100% real (organic) because there is no way to determine, let alone guarantee, the complete composition of a product. Similarly, there is no way to determine what percentage of reality is lost when an aphenomenal (un-natural) processing technique is introduced.
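The two-condition test of phenomenality used above (a real origin and a real process) can be restated compactly. The sketch below encodes the honey-to-Aspartame® transition exactly as characterized in the text; the boolean encoding is a deliberate simplification, since the chapter stresses that reality cannot actually be quantified in percentages.

```python
# Two-condition test of phenomenality: both the origin and the process
# must be natural. The boolean encoding is an illustrative simplification.
products = {
    # product: (real origin?, real process?)
    "honey":     (True,  True),   # nectar; honeybees
    "sugar":     (True,  False),  # cane/beet; heating, bleaching, additives
    "Saccharin": (True,  False),  # crude-oil origin (real but very old)
    "Aspartame": (False, False),  # artificial origin and process
}

for name, (origin_real, process_real) in products.items():
    if origin_real and process_real:
        status = "phenomenal"
    elif origin_real or process_real:
        status = "partially aphenomenal"
    else:
        status = "fully aphenomenal"
    print(f"{name}: {status}")
```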
Figure 15.1 The profit margin increased radically with external processing. (Axes: extent of processing on the horizontal axis; curves labeled "reality," "profit margin," and "degradation from reality to aphenomenality.")
Figure 15.1 shows how, during the external processing period, the profit margin is increased as the quality of the product declines. The right portion of the curves shows the bifurcation, which represents a continuous decline in quality (intangible) as profit margin (tangible) is increased. This bifurcation is reminiscent of the knowledge vs. ignorance chart presented by Zatzman and Islam (2007a). The driving force in both graphs is the fact that short-term analysis (based on Δt approaching 0, time being equal to "right now") reverses the trends and makes the quality of the product appear to increase with increasing processing.

A case in point is a product that is marketed worldwide, called Tropicana Slim®. (There is nothing particularly egregious about this product and/or its producer; it is quite typical of its kind. The problem addressed here is the distortion of basic scientific understanding that is used to push sales upwards.) This particular sweetener has a picture of corn on the front of the 2.5 g package. It is promoted as a health product, with the sign "nutrifood" clearly marked on the package. It also says, "low calorie sweetener for your coffee & tea." The back of the package outlines how low the calories are: it contains 10 calories per sachet of 2.5 g. Even though the actual calorie content or the basis of this calculation means little to general consumers, the slogan "nutrifood," along with "low calorie sweetener," gives the impression that the quality of the product is high. To reinforce that perception, the following statements are added: "No sugar, no cyclamate, no saccharine, no preservatives."
Even though the product meticulously outlines what it does not contain, the package does not actually say what it does contain. One has to go to the website to find out: its ingredients are sorbitol (46.2 g per package) and aspartame (40 mg per packet). To a consumer, this information could mean little, and it is easier to rely on slogans that are easily comprehended, such as, "sugar substitute for weight control and diabetic diets. It is a low calorie sugar substitute to keep you healthy and slim. It is the real taste of corn sugar." There is also some "advice for using Tropicana Slim": 1) the maximum daily intake of aspartame is 40 mg/kg (does anyone keep count of aspartame consumed per kg?); 2) aspartame loses its taste at high temperatures (so much for use with hot coffee and tea); and 3) it is not healthy for people who have phenylketonuria because it contains phenylalanine (implying that it is healthy for those not ill with this disease, and acknowledging substances other than the two ingredients mentioned). Then the website gives a long list of what the product does not contain: sugar, cyclamate, saccharin, preservatives, sodium, fat, and protein. It is known that, on a per-kg basis, this product sells at a price 10 times higher than locally available sugar.

Contrast this product with another product on sale, called Sugar Not®. Figure 15.2 shows the content of this product. If a comparison of this product were made on the basis of sugar, this product would fare worse than the previous one. If the comparison basis is calories (low calorie being better), this product will be seen as infinitely better than the other one (10 calories/0 calories = ∞). Other bases for comparison would become spurious because they would have zero divided by zero; a short sketch below illustrates how the choice of basis flips or degrades the comparison. Other uncertainties arise from the fact that it is not explicit how the Sugar Not® product is actually extracted. This is also complicated by the misconception that "chemicals are chemicals," in which case fructose from a natural origin is the same as that from an artificial origin. Overall, deciding which product is better for the consumer becomes quite arbitrary.

Similar confusion exists for every product that is manufactured and marketed in the modern age. This confusion is deliberate, as evidenced by numerous "scientific papers" that routinely promote disinformation (Zatzman and Islam 2007a; Shapiro et al. 2007). Examples of this disinformation are available in practically all product-oriented research.
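The arbitrariness of the comparison can be made explicit with a few lines of arithmetic. In the sketch below, the nutrition numbers are stand-ins taken from the discussion above (10 calories versus 0 calories, 0 g sugar versus 0 g sugar); the point is that the "better" product depends entirely on the basis chosen, and some bases degenerate altogether.

```python
# Stand-ins for the two sweeteners discussed above (illustrative values).
product_a = {"calories": 10.0, "sugar_g": 0.0}  # Tropicana Slim-like
product_b = {"calories": 0.0,  "sugar_g": 0.0}  # Sugar Not-like

def compare(basis):
    a, b = product_a[basis], product_b[basis]
    if a == 0.0 and b == 0.0:
        return "0/0: comparison is spurious"
    if b == 0.0:
        return "A/B is infinite: B looks 'infinitely better'"
    return f"A/B = {a / b:.2f}"

for basis in ("calories", "sugar_g"):
    print(basis, "->", compare(basis))
```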
Figure 15.2 The claim, "all-natural," is made, but there is no way to verify the claim.
As a sample, the following quote is provided from Lähteenmäki et al. (2002). Note that the paper investigates how to overcome negative responses to genetically modified products. The title itself is quite telling: "Acceptability of genetically modified cheese presented as real product alternative." The focus here, as usual, is not to investigate the real effect of such products. Instead, the focus is
on how to alter the taste and misinform so that "curious" consumers are persuaded to buy an artificial product that is packaged as real. Obviously the paper is funded by a consortium of industries:

This study shows that, although consumers have an overall negative attitude towards gene technology, this negative attitude does not cause consumers to reject genetically modified options if they are presented as real product alternatives. Taste benefit seems to be a strong promoter for genetically modified products, and health benefits are also perceived positively. Consumers with the most positive attitudes are curious about gm-products and exposure to these products also influenced responses of those with the most negative attitudes towards gene technology.

In the context of the CCD, it is becoming clear that, for the first time, such syndromes are being attributed to practically all the gadgets that have glorified much of the modern age. In the HSSA (honey-sugar-Saccharin®-Aspartame®) degradation, the science of tangibles made it impossible to show the value of natural products and, in the meanwhile, obscured any surrounding science that would make it possible to determine the cause of the disappearance of bees. In the words of Diana Cox-Foster, a member of the CCD Working Group recently formed to investigate CCD, "It is particularly worrisome that the bees' death is accompanied by a set of symptoms which does not seem to match anything in the literature." This comment is in line with our assertion that the science of tangibles cannot account for the cause of CCD.

This is the onset of the economics of tangibles, as shown in Figure 15.1. As processing is done, the quality of the product is decreased (e.g., the HSSA syndrome). Yet, this process is called "value addition" in the economic sense. The price, which should be proportional to the value, in fact increases in inverse proportion to the real value (the opposite of the perceived value promoted through advertisement). Here, the value is fabricated, similar to what is done in the aphenomenal model that uses syllogisms based on false premises, as discussed in previous chapters. The fabricated value is made synonymous with real value or quality (as proclaimed by advertisements), without any further discussion of what constitutes quality. This perverts the entire "value addition" concept and falsifies the true economics of commodity (Zatzman and Islam 2007). Only recently has the science behind this disinformation begun to surface (Shapiro et al. 2006).
In order to demonstrate such disinformation in engineering applications, consider the following example involving honey and Aspartame®. With the science of tangibles, the following reactions take place:

Honey + O2 → Energy + CO2 + Water
Aspartame® + O2 → Energy + CO2 + Water

A calorie-conscious person would consider Aspartame® a better alternative to honey because the energy produced by Aspartame® is much less than that of honey for the same weight burnt, never questioning why a calorie is considered to be something negative or what the purpose of food intake is. The big picture would be apparent if all components of honey and Aspartame® were included. For the two cases, the actual compositions of the water produced are very different. However, this difference cannot be observed if the pathway is cut off from the analysis and if the analysis is performed within an arbitrarily set confine. Similar to confining the time domain to the "time of interest" or time = "right now," this confinement in space perverts the process of scientific investigation. Every product emerging after the oxidation of an artificial substance will come with long-term consequences for the environment. These consequences cannot be included using the science of tangibles.

Zatzman and Islam (2007) detailed the transitions in commercial product development listed in Table 15.1 and argued that the transitions amount to an increased focus on tangibles in order to increase the profit margin in the short term. The quality degradation is obvious, but the reason behind such technology development is quite murky. At present, the science of tangibles is incapable of lifting the fog from this mode of technology development. Of course, the list is not exhaustive.

Similarly, on the "social sciences" side, the same drive for tangibles is ubiquitous. In the post-Renaissance world, all sciences have been replaced by the science of tangibles, which works uniquely on perception. Consider the following transitions:

History, culture → entertainment, belly dancing
Smile → Laughter
Love of children → Pedophilia
Passion → Obsession
Contentment → Gloating
Quenching thirst → Bloating
Feeding hunger → Gluttony
Philosophy and true science → Religious fundamentalism
Science → "Technological development"
Social progress → "Economic development"
Table 15.1 The transition from natural commodity to artificial commodity and the reason behind the transition.

| Original natural component with high value | Final engineering product with very negative value | Driver of the technology (artificial product) |
|---|---|---|
| Air | Cigarette smoke | Profit of tobacco processing (e.g., nicotine) |
| Crude oil | Refined oil | Profit of refining and chemical processing (chemicals, additives, catalysts, etc.) |
| Natural gas | Processed gas | Profit of chemical companies (MEA, DEA, TEA, methanol, glycol, etc.) |
| Water | Soft drinks, carbonated water, sports drinks, energy drinks | Profit of chemical companies (artificial CO2, sugar, saccharin, aspartame, sorbitol, synthetic "nutrients", etc.) |
| Tomato | Ketchup | Profit to the manufacturer and chemical companies (sugar, additives, preservatives, etc.) |
| Egg | Mayonnaise | Profit to the manufacturer and chemical companies (sugar, additives, preservatives, etc.) |
| Corn, potato, etc. | Chips, corn flakes | Profit for manufacturers and chemical companies (transfat, sugar, additives, vitamins, non-transfat additives, etc.) |
| Milk | Ice cream, cheesecake | Profit for chemical companies and manufacturers (sugar, no-sugar sweeteners, flavors, vitamins, additives, enzyme replacements, etc.) |
By contrast, true science includes all phenomena that occur naturally, irrespective of what might be detectable. For the use of catalysts, for instance, it can be said that if the reaction cannot take place without the catalyst, the catalyst clearly plays a role. Just because at a given time (e.g., time = "right now") the amount of catalyst loss cannot be measured, that does not mean it (catalyst loss and/or a role of catalysts) does not exist. The loss of catalyst is real, even though one cannot measure it with current measurement techniques. The science of intangibles does not wait for the time when one can "prove" that catalysts are active. Because nature is continuous (without a boundary in time or in space), considerations are not focused on a confined "control" volume. For the science of tangibles, on the other hand, the absence of the catalyst molecules in the reaction products means that one would not find that role there.
15.2.2 Insufficiency of Energy Economics Theories
The conventional economic theories can be directly linked to disinformation regarding the intangible-tangible nexus. The theories of modern economics taught today all start from something called "the theory of marginal utility." The English economics writer William Stanley Jevons (1870) first developed this theory in the 1870s, and it was furthered in the works of Carl Menger (1871), Leon Walras (1874), and Alfred Marshall (1890). Its underlying thesis, which became the basis of an elaborate theory known as neoclassical economics, is that endogenous "choices" about price operate entirely according to personal choice or desire for access to, and use of, some good or service, with the last unit of demand determining the price "at the margin." All of this takes place without reference to any exogenous conditions such as the monopolized character of production, the cartelized character of international trade, or the role of imperial dictates, rivalries, and/or wars in suppressing or further distorting the operation of supply and demand. Consumption takes place without reference to how commodities were produced in the first place. How, indeed, could that which has
not yet been produced supposedly be distributed and consumed? "At the margin," reply the neoclassical economists. But one only has to ask the question, "Whence the originating intention to consume or to produce?" The specific role asserted for the theory of marginal utility in this arrangement reduces the domain of concern to "the margin," so as to simplify handling the relevant variables. However, it is precisely in this process that the counterfeiting job is carried out, as all exogenous conditions beyond "the margin" are simply removed from the domain of consideration, including any condition (such as intention) that may play some role in defining the boundary of, and/or the conditions at, "the margin."

The mathematical assumptions fundamental to the basic theory of marginal utility are another source of serious misdirection. Here, the still unaddressed question of intention becomes layered in further opacity. In order to model the individual's "choice" behaviors in economic reality, an untestable, completely subjective assumption is used to measure what happens on a societal scale. That assumption asserts that individuals' behaviors consist of maximizing personal pleasure and minimizing personal pain. It is untestable because it assumes that society is composed of the individual multiplied uniformly and homogeneously over and over, or, in other words, that the individual exists, but society as a real category in its own right does not exist when it comes to economic analysis or decisions. To assert the existence of each would be true, but to hitch one (society) as subordinate to and derivative of the other (the individual) is false. In addition to this fundamental difficulty, this procedure also transfers the short-term perspective of the individual to society as a whole. However, since the death of any individual or individuals obviously cancels their personal term without shortening the term of society's existence, such a procedure is inherently and patently absurd.

Jevons, in particular, declared economic behavior to be nothing more or less than the materialization in social form of this allegedly universal and thoroughly selfish principle. What the worker does to avoid starvation is thereby equated with what the business owner does to get another few pennies of profit out of the powerless public. Jevons even suggested a mathematical model that justified his standpoint, arguing that the discrete choices of millions of economic actors may be approximated meaningfully or usefully by continuous-type mathematical functions. He combined this with
the notion of processes that would eventually reach steady-state conditions, adapting to economics a mode of analysis that researchers in thermodynamics had pioneered widely by the middle third of the 19th century. Like the vast majority of educated Europeans of his time, Jevons believed that human actions were ultimately to be accounted for as some variant of animal instincts to eat, procreate, etc. He believed that the most civilized arrangement for people would be one in which the instinct to self-preservation were molded or made to gravitate towards the pursuit of self-interest, while the same pursuit by all individuals would be commonly protected by a state that would intervene only to prevent or reverse gross injustice. The factory system was already almost a century old, but - apart from Marx, a few of the followers of David Ricardo, and an even smaller number of the followers of Adam Smith - neither Jevons nor any other economists acknowledged that this system had already given rise to an entirely unprecedented social order. This was a social order marked not only by a vast accumulation of products and opportunities for enrichment but also by the increasing socialization of labor and its output, marking a fundamental transformation of the role and character in society of human creative laboring powers.

Economic theories are often announced or explained as final finished products. However, in reality they are anything but. They express the ideological and political priorities of the ruling forces of the establishment in the short term at various times in response to various pressures. Thus, the long-standing current defense of the conventional establishment economic theory takes the form of an argument to the effect that, so long as all economic players act according to their self-interest in the marketplace, either as buyers or sellers of commodities, they will each maximize their own satisfaction. From such a standpoint, solving problems in the short term entails no additional responsibility for the longer term. Each solution-step is already, and at the same time, a further discharge of the individual's responsibility for the long term. The underlying logic of the position is that greed is just another form of need. The economy exists in the first place mainly, or only, to allocate, as rationally as possible, scarce resources for production either into finished goods or necessary services. Therefore, overproduction can be at most a passing and temporary aberration. On the other hand, underconsumption, because of its potential to disorganize or destabilize the aforementioned allocation of resources that were scarce to begin with, is a most dangerous threat.

Figure 15.3 David Ricardo's seminal work, appearing in 1817, a year of unprecedented distress in England's farming districts, argued strongly in favor of industrial Free Trade, principally as a means of eliminating the monopolistic and dictatorial character of rent as a monopoly price imposed by landlords, not only on agricultural producers but on society as consumers of its overpriced food products.

This version of conventional theory replaced an earlier version that had declared that the marketplace was guided by an "invisible hand." This supposedly maximized the satisfactions of buyers and sellers, so long as neither buyers nor sellers combined to restrain the freedom of the other in the marketplace and so long as the government resisted all opportunities to interfere in the operations of the marketplace. If all these conditions were met, all markets would clear at equilibrium prices, as shown in Figure 15.4, and there would be no danger of overproduction or underconsumption.
Figure 15.4 Price, quantity, supply, and demand according to conventional economic theory.
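As a reminder of the textbook mechanics behind Figure 15.4, a minimal worked example follows. The linear demand and supply curves and their coefficients are arbitrary illustrations, not data from the chapter.

```python
# Textbook market clearing with arbitrary linear curves:
#   demand: Qd = 100 - 2P      supply: Qs = 10 + 4P
# Setting Qd = Qs gives the equilibrium price and quantity.
a, b = 100.0, 2.0  # demand intercept and slope
c, d = 10.0, 4.0   # supply intercept and slope

p_star = (a - c) / (b + d)  # (100 - 10) / (2 + 4) = 15
q_star = a - b * p_star     # 100 - 2 * 15 = 70

assert abs(q_star - (c + d * p_star)) < 1e-9  # the market "clears"
print(f"equilibrium price = {p_star}, quantity = {q_star}")
```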
Subsequently, in the Great Depression of the 1930s, the emergence of vast concentrations of ownership and production disastrously confirmed the validity of all the earlier warnings against sellers of finished goods combining in the marketplace. It also demonstrated conclusively that, once such monopolies emerged, overproduction had become endemic to both the short term and the long term of the economy. This, in turn, greatly strengthened arguments in favor of reorganizing production for the long term on a very different basis. The new basis proposed eliminating the capture of surpluses and profits as the main and sole driver of economic development and investment, either in the short term or the long term.

Almost mesmerizing in its simplicity, conventional theory takes the production system as given for any commodity. The graph depicts the resulting situation provided that there are no interdependencies, all competing suppliers are in the market on an equal basis, and a current demand for any good is met entirely and only by its current supply (Figure 15.5).
Figure 15.5 Production-cost and market-price realities "At The Margin".
Once a market is filled, whether by a quasi-monopoly supplier, a cartel, or all competitive suppliers, conventional economic theory asserts that it also "clears." All goods that can be sold have been exchanged for money, and the production-consumption cycle is then renewed. Reality demonstrates otherwise. Once actual total production has taken place, some proportion, which may increase over time, becomes stockpiled. As time passes, this surplus could be well in advance of current demand. Market demand, meanwhile, advances at rates far below this rate of increase in total production. In such a
scenario, suppliers' costs are transformed from input costs framed by the exigencies of actual external competition into "transfer prices" between different levels of an increasingly vertically integrated structure of production and marketing. Consumers' costs then become predetermined in accordance with arrangements between owners of the forces of material production and owners or operators of wholesale and/or retail distribution networks.

The question remains: what happens at the margin? Neoclassical doctrines of marginal costs and prices crucially assume that current markets are conditioned by current supply and current demand. The essence of their error is not unlike that of the claim that sailing vessels would fall off the earth when they reached its edge. The crucial aphenomenal assumption underlying this conclusion is that Earth had an edge in the first place (i.e., that it is flat). In reality, monopoly and monopolistic forms of competition take over once markets are placed in a condition of permanent actual or latent oversupply, whereas the aphenomenal world of neoclassical economics is one that exists before anyone reaches this margin.

To combat the possible outbreak of high levels of social struggle, or even revolution, that are implicit in such a position, a compromise was developed based on the theories of Lord John Maynard Keynes (Figure 15.6).

Figure 15.6 John Maynard Keynes' seminal work, appearing in 1936, defended governments in temporarily running deficits if the liquidity of the economy so required.

It was premised on the idea that deficit spending by governments could temporarily subsidize the maintenance of employment until the next upswing in the economy. This proved highly effective during World War II and for the next 35 years afterward in America, Western Europe, and Japan. It began to lose steam in the 1970s and crashed to the ground during the 1981 recession. That downturn was created by regulatory bodies following all the standard Keynesian policy prescriptions to the letter. Central banks raised interest rates, in the name of combating inflation, to levels considered stratospheric in these countries, only to end up eliminating millions of jobs concentrated in the heavy-industrial bases and heartlands of these economies. The present economic establishment theory, which emerged to "correct" the Keynesians' "failure," focuses far more narrowly on maintaining high levels of consumption of goods and services, regardless of what havoc is done to the environment in the course of creating and meeting so many new "needs."

As this brief history reveals, for some time the aim of economic theory has not been to figure out what serves society best. Rather, the aim has been to buttress the ability and capacities of the establishment to overcome massive resistance to its activities in the short term. As numerous commentators in the U.S. media from the "left" and "right" have noted, in order to, and within the course of taking the appropriate steps to, finance various wars, establish and maintain questionable trade deals, and sometimes shield key allies from the application of the Rule of Law, the U.S. has gone into 40 trillion dollars of debt. This amount is far beyond the ability of the present generation, or even the next two generations thereafter, to repay. In addition to the damage that conventional economic theories can inflict in the long term, which is extensively discussed among scholars, there is growing evidence for the conclusion that they are highly dysfunctional in the short term as well.

After Jevons' death, it became crystal clear that this new kind of social order was not moving in the direction that its main beneficiaries in Victorian England might have preferred. A concerted effort was mounted to convert what Jevons had offered the public, which
was a mixture of scientifically researched conclusions with questioning and speculation, into dogma, i.e., into the basis of what is taught today as "neoclassical economic theory." Students of Jevons' career seem to have fallen into one of two ditches off the main path. On one side, some writers present the kingdom of Jevons the Cassandra of coal, propounding his paradox with the aim of alerting the overseers of the empire to wake up before the coals that fuel Great Britain's global supremacy burn down to their last embers. Others portray the world on the other side, in which Jevons the Mahatma of marginal utility holds forth. What brings the two realms into any kind of connection is an assertion that the net increased production of goods and services (and, hence, the net overall increase in energy use) uncovered by "Jevons' paradox" somehow demonstrates the supremacy of "consumer choice" over the entire economic system. That constitutes a central thesis of neoclassical economics - Jevons raised to the power "dogma."

A number of Jevons' speculations turned out to be wrong. For example, he linked a correct insight, that business seemed to move in approximately 11-year cycles in the 19th century, to a guess (thoroughly refuted decades after his death) that this might be correlated with the sunspot cycle. There was also his dogmatic insistence that coal was the last word in industrially useful energy, whereas petroleum was an overpriced substitute and electricity about as practicable as a perpetual-motion machine. Nevertheless, in no contemporary sense can Jevons be considered to have been discredited in his own day because some of his ideas were already dubious or had failed. For one thing, likely as a result of his own family's circumstances, in which his father suffered financial ruin in the iron business, Jevons was well aware of the historical conditions that imposed limitations on the capacities of the economic system of his time to meet the needs of those who stumbled in the economic competition that capitalism generates. Nowhere did he advocate that the ability to acquire wealth was any proof of an individual's virtuousness, sterling character, or even entrepreneurial skill. Any of these linkages, none of which withstand serious evidentiary scrutiny, may in our time be found frequently throughout the writings of American academic economists.

For another, Jevons nowhere assumes that the production of wealth guarantees a future of plenty. On the contrary, the guideline
implicitly framing his entire body of economic theory was that of an ineluctable and irremovable scarcity of necessary and vital energy supply. Again, however, general discussion of the problems of long-term scarcity in the American economy is a minority trend in the academic literature. Yet, specific discussions of shortages and gaps in U.S. energy supply have grown since the 1970s, with entire journals dedicated to aspects of that subject. Almost everything dealing with depleting reserves of oil and gas, in fact, relies heavily on a dogmatic rendering of Jevons' views about depleting supplies of coal in his day and on a misreading of Jevons' famous "paradox" that is pure disinformation. Islam and Zatzman (2007) discussed the contemporary "energy crisis" at length as the tangible problem of our time, examining how Jevons' theory of coal depletion and his theory of marginal utility as the most efficient pricing mechanism have been recycled in a form that blocks progress toward the self-evident solution to this entirely removable "crisis." More immediately, the current discussion takes up "Jevons' Paradox."
15.2.3 Jevons' Paradox

"Jevons' Paradox" is the notion that lowering unit costs of production, on the basis of technological changes that lower the rate of energy consumption by using the same fuel source more efficiently than before, tends to bring about an increase in the production of commodities across all sectors that use or rely on that same energy source. The contradictory, paradoxical upshot is the tendency for the net consumption of energy to increase. This "paradox" was first formulated in Jevons' The Coal Question, published in 1865. A typical modern-day restatement from John H. Lienhard, an engineering professor at the University of Houston, follows below:

Herbert Inhaber and Harry Saunders take a disturbing look at energy conservation. They begin in 1865. An English mathematician, William Stanley Jevons, had just written a book titled The Coal Question. Watt's new engines were eating up English coal. Once it was gone, England was in trouble. And Jevons wrote: "... some day our coal seams [may] be found emptied to the bottom, and swept clean like a coal-cellar. Our fires and furnaces ... suddenly extinguished, and cold and darkness ... left to reign over a depopulated country."
The answer seemed to lie in creating more efficient steam engines. Jevons may not have realized that steam engines were already closing in on thermodynamic limits of efficiency. But he did see that increased efficiency wouldn't save us in any case. Look at the Watt engine, he said. It was invented because the older Newcomen engine was so inefficient. Did Watt cut coal consumption by quadrupling efficiency? Quite the contrary. By making steam power more efficient, he spread the use of steam throughout the land. Coal consumption was skyrocketing. A few years later, Henry Bessemer invented a new highly energy-efficient scheme for smelting steel. Jevons's argument played out once more. Now that we could have cheap steel, we began making everything from it - plows, toys, even store fronts. Energy-efficiency had again driven coal consumption upward.

The existence of unoccupied economic space under free competition is a precondition for any further specialization within the industrial division of labor to increase the net addition to the stock of industrial productive forces (even as productive forces at the margin were destroyed or otherwise rendered economically redundant). This modus operandi enables technological changes that increase the efficiency of the utilization of an energy source to become linked with, and eventually bring about, the net increase in overall consumption of the said energy source observed by Jevons.

Fundamentally, the accounting trick that makes this "paradox" seem more real than it actually is involves an arithmetic sleight-of-hand. Lowering unit costs of production is associated mainly with technological change; however, the conditions in which the technological change took place in Jevons' time, during a crash of the business cycle, are nowhere referenced. The introduction of new technologies of this order is invariably undertaken at the upturn of the next cycle by the enterprise or enterprises that wiped out many rivals during the preceding slump. In other words, a mass of productive forces was destroyed, which new technology will render superfluous or displace. It follows that any associated lowering of unit costs of production (an increase in economic efficiency) will manifest only in specific industries or sectors where production became more concentrated as a result of the previous crisis clearing weaker economic players away, or otherwise severely marginalizing them in the market. Energy consumption per employed worker will therefore
go up as production becomes more concentrated in fewer, more highly capitalized enterprises, probably employing fewer workers than the entire sector employed before the crisis. The reality is that the capitalization of certain sectors in any given crisis is strengthened through such processes while that of others is weakened. However, Jevons' method proceeds according to the fallacious assumption that what is true for any one sector will be true for all, just as whatever is true for the individual consumer will be true for consumption in general and in society as a whole. If the losses of the whole of the economic system (occasioned by the bankruptcies and other epiphenomena of the crisis) were properly added back in, but in the post-crisis phase as deductions from overall energy consumption, what would the net energy consumption look like? It might have changed little, if at all. In effect, the increased production of commodities and the accompanying increase in energy consumption, referenced by this so-called "paradox," can also be seen as "over-compensating" for the losses represented by sidelining or destroying other productive forces. In this respect, the "paradox" is only the outward appearance of an undamped oscillator.

Another feature, of which Jevons was not (and, indeed, could not be) conscious, was that economic space would become completely occupied as oligopolies and cartels consolidated their overall role throughout the economy. The space in which free competition once predominated would thus be eliminated. Already in Jevons' day this space was being pushed to the margins, although this was not understood at the time for what it actually was. The English economist J.A. Hobson, who wrote about the British Empire as an economic proposition, wrestled with certain parts of the problem as early as 1902 (Hobson 1902). The Austrian economist Hilferding glimpsed some implications of the rise and role of finance capital, which he defined as the merger of banking and industrial capital, in 1910 (Hilferding 1910).

The crucial fact about an economic space that has already been divided up and can only be re-divided is that an increase or decrease in energy consumption loses any clear-cut linkage to, or dependence upon, changes in "productivity." These cease to regulate each other in any predictable way. In effect, as oligopolies, cartels, and monopolies displace free competition while appropriation of the fruits and ownership of the means of production remain private, the impossibility of production and consumption mutually regulating one another spreads to encompass all other relationships engendered
earlier when free competition reigned supreme. Thus, Jevons' paradox disappears. Why? Because the relationship it proposed (in order to account for phenomena that appeared sequentially related) no longer exists, even though the phenomena themselves persist.
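The arithmetic at the heart of the "paradox" can be sketched with invented numbers: an efficiency gain lowers fuel use per unit of output, but if demand for the now-cheaper output expands strongly enough, total fuel consumption still rises.

```python
# Toy rebound-effect arithmetic (all numbers invented for illustration).
fuel_per_unit_before = 1.0   # fuel needed per unit of output
output_before = 100.0
fuel_before = fuel_per_unit_before * output_before  # 100.0

# A 50% efficiency gain halves fuel use per unit of output...
fuel_per_unit_after = 0.5
# ...but cheaper output stimulates demand: suppose output triples.
output_after = 300.0
fuel_after = fuel_per_unit_after * output_after     # 150.0

# Net consumption rises despite greater efficiency: Jevons' observation.
assert fuel_after > fuel_before
print(f"fuel before: {fuel_before}, fuel after: {fuel_after}")
```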
15.2.4 The "Marginal Revolution" as a Legacy of Utilitarian Philosophy

During economic crises under modern oligopoly and cartelized commodity production, most of the price of technological advance - especially advances in the efficient utilization of energy sources - has been paid by those productive forces caught on the margins of the economic system at the moment of crash and/or crisis. As will be discussed in some detail in the next chapter, this was vividly illustrated in the east coast fisheries of Canada before and after the five collapses (1971, 1974, 1981, 1984, and 1990) experienced in this sector between 1968 and 1992. In each of these moments, more small-boat fishermen lost their livelihoods or suffered serious contractions to them. Many were confronted with the stark option to quit the fishery or go deeply into debt to finance the acquisition of more advanced means of production that would enable them to stay competitive with the fleets of the leading processors, Canadian and foreign. In net terms, over this period the number of small-boat fishermen in the four Atlantic provinces fell by half, while overall catches offshore and inshore rose approximately four times. Essentially, the seasonal fish plants, small processors, and independent fishermen were displaced by foreign factory trawler fleets and by local trawler fleets owned and operated by a much smaller group of giant processing interests.

In Jevons' lifetime, although free competition still dominated, there nevertheless arose, with the further consolidation of railways especially after the inauguration of Free Trade, increasing examples of tendencies toward what would later be identified as "vertical integration" and of resort to the stock markets to float so-called joint-stock companies; there were also increasing examples of leading companies in various fields colluding to fix prices as a means of excluding other competitors from markets. Yet, the centrality of the role of free competition in giving rise in the first place to the very relationship spotlighted in "Jevons' paradox" was invisible at the time to those living through it. At the same time, its very invisibility - like the gravitational effects of black holes and so-called "dark matter" in outer space - served to "distort the optic" of those theorizing about
the significance of contemporary developments. This recognition is profoundly important because it points to a major potential source of error unleashed by paying insufficient attention to the role of intangible factors when summarizing the historical line of a social phenomenon's development. Jevons and others following the same line believed overproduction crises were aberrations that the further evolution of civilization would eliminate. By refusing to acknowledge the destruction of productive forces at the margins of the factory system as a consequence of overproduction crises in general, they were unable and unfit to see, let alone acknowledge, the "paradox" as something that results from tendencies peculiar to the mechanics of such crises under conditions of free competition in particular.

Viewed in this light, Jevons' failure to actually penetrate the veil of the apparent "paradox" is symptomatic. Theorists of Jevons' time (the Victorian era) were saddled with philosophical baggage that rendered them incapable of accounting theoretically for Jevons' paradox in anything resembling a scientifically convincing fashion. They were still operating according to certain assumptions about the nature of the interrelationships between the individual, society, and nature that were only beginning to be challenged at the time and would not be supplanted until years after World War I. As far as Jevons' own ideological predispositions are concerned, the "form" seems well known - a devotee of the original utilitarian doctrines of Jeremy Bentham. When it came to defining the source of value, he dissented from the modified translation of Benthamite principles into economic theory proposed by fellow utilitarians James Mill and his son John Stuart Mill. Although this heritage is widely remarked upon as though it were the most significant feature, it is actually the least useful for grasping anything fundamental in Jevons' outlook.

The central issue, which would sharply differentiate Jevons from most educated people living since the middle of the 20th century, would be the understanding of the role of nature and the role of society, or more precisely, the forces of social class and social strata. For Jevons and many of his fellow Victorians, scarcities arose from, or were embedded in, nature. Inequality was not necessarily synonymous with injustice; mutual pursuit of self-interest among people would harmonize, rather than divide, society; nor was the pursuit of self-interest the anteroom to the exacerbation of inequalities. Society was the product of individuals, whereas the individual was not the product of society and had no claims to entitlements of any kind from society. Like many mainstream utilitarians, Jevons rejected any notion of natural rights, or "rights of man," as a dangerous incitement to revolution, anarchy, and Jacobinism.

Flowing from the assumption that the effects of pursuing self-interest would be harmonizing, self-correcting, and generally equilibrating, it then becomes apparent that Jevons could not assign responsibility for the deteriorating state of Great Britain's coal resources to individuals. Ultimate responsibility could derive only from the fact that coal was a finite, non-renewable natural resource. On the other hand, objects of exchange were a different matter, insofar as the process of exchange, repeated many times over, connected all such objects to a definite individual. These objects were thus deemed to have acquired their value from their "utility" (and relative scarcity) for the individuals seeking their purchase. The relative stability of the price of many common items of daily consumption reflected the equilibrating effect of countless independent acts of exchange pursued by individuals with differing degrees of need for an item offering approximately the same "utility."

In Jevons' view, as a utilitarian, the notion that value could inhere in objects without a potential utility for a potential purchaser was the height of theoretical inconsistency. Hence, he rejected the position taken by James Mill and John Stuart Mill, who accepted the views of Adam Smith and David Ricardo that the value of a commodity-object in the marketplace was imparted prior to its purchase by the human laboring power applied to give rise to these commodities in the first place. In this, of course, Smith, Ricardo, James Mill, John Stuart Mill, and others had extrapolated the Aristotelian conception of "natural price." Jevons, on the other hand, was insisting that, for the sake of consistency in the application of fundamental philosophical principles, the actual content of all such concepts as value or price would have to be redefined and expressed in terms of the utilitarian pleasure-pain calculus. It was an obvious truth of daily life that value in the marketplace, as reflected in an item's price, was one kind of value - value-in-exchange - whereas value in use for the individual was of a different order, not necessarily quantifiable, and even where quantifiable not necessarily equal to the value assigned
by the marketplace. Yet, instead of acknowledging, let alone confronting, this problem, Jevons one-sidedly made it disappear by an act of purest solipsism. He asserted (a) that all value is essentially value-in-use as determined by and for the individual purchaser, and (b) that price is the material quantification of that use-value. For a consistent utilitarian, value in exchange is only a quantified form of "real" value, which is value-in-use for an individual. The problem is that in practice it is extremely effective to deal with commodities, and what happens to them, precisely as so many units of exchange-value. Meanwhile, at the level of theory, the utilitarian approach denies the existence of commodities as exchange-values by simply asserting that a commodity's price is the quantification of its use-value. Hence, for the utilitarian, only use-value exists.

A similar problem emerges when utilitarian doctrine attempts to comprehend and elaborate the character of labor and its economic form: society is collapsed into the individual or assumed to be but the multiplication of the individual. For Jevons and company, the social character of modern, i.e., factory-based, commodity production was and remains irrelevant and actually oxymoronic. Jevons theorized that production was induced as a result of the workers' "pleasure" in continuing to eat and reproduce outstripping the "pain" of the laboring effort. In effect, in yet another solipsistic leap, labor that is effectively, and for all practical purposes, social in character is re-cast as purely individual.
15.2.5 What is Anti-nature About Current Modes of Economic Development?
No sustainability can occur if a system is not in conformance with nature. As discussed earlier, the current technology development process is not driven by long-term benefit. Often this is called greed. Nature is an infinite source of wisdom and, in a sense, actually anti-greed. Nature operates at zero waste. Hence, waste-based technology is anti-nature. Where nature turns things from good (in the sense of functional for our purposes) to better (in the sense of even more functional), anti-nature approaches turn things from good to bad to worse (even in the short term). By taking the short-term approach, we create mechanisms that make things continuously worse. Figure 15.7 elaborates this concept with respect to technology development, and it may readily be extrapolated to other aspects of social development, including politics and education. The absence of good intentions can only bring long-term disaster.
Figure 15.7 As a result of the overextension of credit and subsequent manipulation (by the creditors) of the increasingly desperate conditions of those placed in their debt, nostrums for "development" remain cruel illusions for the lives of literally billions of people in many parts of Africa, Asia, and Latin America. (Vertical axis: net development, i.e., true GNP per capita after subtracting foreign debt payments, re-exported profits of TNCs, etc.; the curves are developed from the year 1960.)
Only human beings have the ability to alter the natural course of nature. If this intervention is motivated by greed or short-term self-interest, it will invariably lead to disaster. Figure 15.8 shows how pro-nature technologies are synonymous with truly sustainable engineering models.
15.2.6 The Problem with Taxing (Carbon Tax or Otherwise)
At present, there is a great deal of controversy over what form of taxes would fix society's economic or environmental woes. In this debate, those who can afford to pay more taxes or give more to charities are those who are least affected. The power of those who have hoarded great wealth in the form of stockholdings, investment certificates, and other claims on exchangeable value created by others has never diminished. The principle of taxing income progressively no longer has much, if any, effect, because so many of these forms of wealth can be sheltered from tax under categories counted as "capital" or "investments" rather than as "income."
662
THE GREENING OF PETROLEUM OPERATIONS Benefit Pro-nature technology
> Time
Harm
Anti-nature technology
Figure 15.8 Pro-nature and anti-nature development schemas diverge in beneficial impacts.
can be sheltered from tax under categories counted as "capital" or "investments" rather than as "income". Taxes on consumption, which target necessary goods like food and clothing, clobber the incomes of those compelled by circumstances to actually spend the bulk of their earnings on necessities. Figure 15.9 compares what happens in practice with what was promised in theory. Figure 15.9 illustrates the problem associated with the current taxing philosophy. Vilfredo Pareto's celebrated "optimum," which arrives after taxation and can no longer "pump" economic growth upwards, was "disproved" at the end of World War II by Keynesian programs of governments spending beyond their normal budgetary means into long-running deficits. Although Lord Keynes cautioned that this was only to carry on while growth continued to be recorded, it became useful for private corporate interests to encourage the continuation of such deficit spending in ways that would cushion or eliminate risks incurred by their own investment activities. Social-democratic governments in certain Canadian provinces and west European countries have taken this further with a questionable ideological justification that identified replacing private corporate capital with state corporate capital in name of "empowering the public." Once the private sector was able to rescue of the over-extended, debtridden state sector, starting in the time of Thatcher and Reagan, this taxation policy continued, spreading from taxing income to taxing spending and taxing the income of people without capital. From the early 1990s to date, people were so "empowered" by this expansion
Figure 15.9 Theory versus practice of modern taxation policy. (a) Policy ideal of social-democrats elected to office: government, as the biggest corporation, increases tax rates and collections, and its power over all others - disempowering the people in the name of empowering them. (b) Policy reality of liberal and conservative parties elected to office: government increases taxes and collects from individuals and from small and medium-sized business, to hand over directly to the largest corporations or to finance infrastructure for the corporate sector. (c) Knowledge-based taxation policy: maximising popular empowerment and minimising dependence upon, and interference by, government - by taxing savings rather than taxing income or consumption. (d) Vilfredo Pareto's celebrated "optimum" arrives after taxation can no longer "pump" economic growth upwards. (Horizontal axis: tax burden per capita.)
From the early 1990s to date, people have been so "empowered" by this expansion of government taxation that they have been unable to defend a single social program, from health to education, from being gutted in order to ensure that the creditors were paid their pound of flesh. Note that a knowledge-based policy of taxing savings instead of income or consumption breaks the cycle of disempowerment, paying the price of a certain reduction in the rate of GDP growth while continuing to lower the tax burden.
15.3 The New Synthesis
For the sustainability criterion to be real, it must be based on knowledge rather than perception. This requirement dictates that the economic scheme include the essential features of the economics of intangibles (Zatzman and Islam 2007). The term "intangibles" essentially refers to the continuous time function, including origin and pathway. For an action, the origin is the intention; for any engineering product development, the origin is the raw material. Figure 15.10 shows how decisions based on long-term thinking (well-intended) can lead to true success, whereas bad-faith actions lead to failure.
Figure 15.10 Trend of long-term thinking vs. trend of short-term thinking.
The human brain makes approximately 500,000 decisions a day. The trend in a line of these decisions comprises discrete points. At any one of these points, a bifurcation can begin when a well-intended choice is taken based on appreciating the role of intangibles. The overall trends of long-term and short-term thinking are nevertheless quite distinct. Well-intended decisions can only be made after a knowledge-based analysis. As discussed in previous chapters, knowledge-based decision-making involves the consideration of multiple dimensions. As shown in Figure 15.11, the whole point of operating in the knowledge dimension is that it becomes possible to uncover or discover the intangible factors and elements at work that normally remain hidden or obscured from our view. The following method plagues a great deal of investigation in the natural and social sciences today: 1) advance a hypothesis to test only within the operating range of existing available measuring devices and criteria, then 2) declare one's theory validated when the "results" obtained, as measured by these devices and criteria, correspond to predictions. This method needs to be replaced by a knowledge-based approach that would ask the relevant and necessary questions about the available measuring devices and criteria before proceeding further.
Figure 15.11 Bifurcation, a familiar pattern from chaos theory, is useful for illustrating the engendering of more and more degrees of freedom in which solutions may be found as the "order" of the "phase space" - in this case, the number of dimensions - increases from one to two to three to four.
This is the theoretical framework in which we raise the notion of a knowledge-driven economics that would be based truly on economizing rather than wasting. Consider the following pair of figures. The first displays all possible pathways for quarterly income over a given time span, examined and visible within the knowledge dimension. The second displays a truncation of the same information in two dimensions - a truncation of the kind conventionally presented in business and economics texts, with time as the independent variable. The information in the second, while abstractly suggesting a positive trend, actually achieves this effect by leaving out an enormous amount of information about other concurrent possibilities. Figures 15.12 and 15.13 illustrate this point.
15.3.1 Understanding the History of Reversals of Fortune
A number of the largest corporations that dominate the world oil and gas business originated in the latter part of the nineteenth century, but the world as a whole may be said to have entered, or more precisely been dragged into, the Era of Big Oil only since the end of World War I. Today, as the following chart indicates, this energy production effort spans the entire globe and continues ceaselessly to grow.
Figure 15.12 In the knowledge dimension, data about quarterly income over some selected time span display all the possibilities - negative, positive, short-term, long-term, cyclical, etc.

Figure 15.13 When the same data are presented in two dimensions, with time as the independent variable, only a single upward trend remains visible.
Although this presentation masks the shift of energy-market growth prospects away from North America and Europe towards China and India, as well as the increasingly global character of the continuous expansion of energy production, it does disclose that the main change over time has been the rising production, distribution, and sale of natural gas on the global scale (US DOE, 2004).
Figure 15.14 World energy supply (U.S. Department of Energy 2004).
It is a curious fact that natural gas prices do not head towards a single world price, whereas the oil price continues to be a single price no matter the different conditions, local energy needs, and demands attending its production (Figure 15.14). Conventionally, the tendency towards a single world oil price has been explained as the outcome of an overwhelming concentration of refining capacity in the leading consuming countries. However, the tendency to maintain a single price in global export and import markets has been attenuated by the rise of refining capacity in a number of the leading OPEC countries, including, in the cases of Venezuela and Iran, an increasing export trade in refined products and no longer just crude oil. The temporary tripling of the Henry Hub spot price for natural gas, shown in Figure 15.15, demonstrates the extreme potential effect of a short-term event. In this case, Hurricane Katrina was found, in the end, to have caused little consequential damage to the gas pipeline network along the Louisiana coast, whose throughput at Henry, LA, forms the basis for the so-called "hub price." This single event, plus the speculation before, during, and after the potentially disastrous 2005 hurricane season in the Gulf of Mexico, enabled sellers basing themselves on the Henry Hub to double the price from the US$5.00-7.50/Mcf band in which gas had been traded from September 1, 2004 until the third week of August 2005. Such a jump would not be possible if there were a world price for this commodity. By contrast, in the same time period an even shorter-lived "speculative premium" bumped the price of oil from US$65 to US$70/bbl. It lasted less than 72 hours precisely because a world price exists that dampens the impact of such events.
Figure 15.15 Henry Hub natural gas spot market price, September 1, 2004 - September 1, 2005 (WTRG Economics).
The secret of the divergence between the tendencies of oil and natural gas prices is buried in the historical foundations of modern fossil fuel exploration and production. Such examination discloses unexpected and surprising features of the true relations governing the exploration, production, and processing of fossil fuels in general, and of natural gas in particular. For example, for some time among those concerned about post-sanctions scenarios, anxiety surrounded the vexed question of reintroducing Iraqi crude into the world market. Adelman (2001) wrote, "When the sanctions regime finally erodes, Iraq will behave like an 800-pound gorilla: it will bring in foreign companies to invest and expand while leaving other members out." Instead, Iraq was marginalized by Russia's record oil output (1.5 MMBPD more than Saudi output, while oil prices remained higher than the $25/bbl believed to be a "fair" world price). Even with 140,000 troops on Iraq's territory since the toppling of Saddam Hussein's regime in April 2003, and in the absence of any agreement with the Paris Club of long-term creditors of the former regime (Russia, France, etc.), the current regime remains unable to secure even pipeline transport of Iraqi oil out of the territory, let alone bulk transportation contracts with international shippers.
Figure 15.16 Crude Oil Nominal Price Index (CONPI) vs. CPI, and the permanent excess of OECD demand over OPEC supply (EDO). (Sources: US BoLS 2006, Tables 1.2, 2.2, and 7.1; EIA 2003.)
Meanwhile, the meter keeps ticking on the coalition's billion-dollar-a-day war and occupation, which was to be paid for from exports of Iraqi oil. Nevertheless, after dipping below US$24/bbl at the moment Saddam was toppled, the world oil price crept back up, but only to the US$27-US$30/bbl range typical of the winter heating season in the northern hemisphere, where the largest per-capita energy consumption markets are located. By the summer of 2005, the price had escalated to US$65-70/bbl. In Figure 15.16, the green lines show that the price of oil, as indexed by its nominal price (the Crude Oil Nominal Price Index), fluctuates without reference to, and far below, the Consumer Price Index. The red lines show that the supply shortfall is permanent. The CONPI trend line relative to the demonstrated EDO shows that neither oil supply nor demand regulates the other. This line of investigation based on CONPI is further elaborated by Islam and Zatzman (2004).
Figure 15.17 OPEC "basket" prices for crude oil, January 2001 - March 2005. (Source: EIA/OPEC News Agency, the official OPEC news source.)
As Iraqi resistance grew, Iraqi fossil fuel exports plummeted to 15-35% of pre-invasion levels, compelling other members to increase their pumping levels in order to cover the Iraqi shortfall in the market and, at the same time, maintain their relative positions within OPEC. Until the January 30, 2005 elections in Iraq, it briefly appeared that production in the northern fields around Mosul would be largely restored, but this never happened. Eventually, with George W. Bush's re-election in November 2004, OPEC abandoned the "price band." At its March 2000 meeting, OPEC had set up a price band mechanism, triggered by the OPEC basket price, to respond to changes in world oil market conditions. According to the price band mechanism, OPEC basket prices above $28 per barrel for 20 consecutive trading days or below $22 per barrel for 10 consecutive trading days would result in production adjustments. This adjustment was originally automatic, but OPEC members changed this so that they could fine-tune production adjustments at their discretion. Since its inception, the informal price band mechanism has been activated only once. On October 31, 2000, OPEC activated the mechanism to increase aggregate OPEC production quotas by 500,000 barrels per day. On March 4, 2005, the OPEC basket price rose to $48.37 per barrel, its
highest price since the price band mechanism was established. Since December 2, 2003, when the basket price last crossed the $28 per barrel threshold, the OPEC basket price has traded above the $28 per barrel level for 325 consecutive trading days through March 7, 2005 without triggering the price band mechanism. At its January 30, 2005 meeting, OPEC decided that market changes had rendered the band unrealistic and temporarily suspended the price band mechanism, pending completion of further studies on the subject. One of the most difficult situations that emerged during 2004 was what appeared to be an acute drawing-down of unused production capacity in the hands of the national oil companies of the OPEC countries. Some commentators have tried to suggest that this circumstance is connected with attempts to avert the consequences of the "peak oil" scenario. However, the facts suggest such claims are disinformation aimed at hiding how the powers concentrated in downstream activity, such as refining, have chosen to make the members of OPEC pit themselves against one another in a "race to the bottom" to see who can be forced to pump the last barrel of oil before any other member, instead of investing their increased profits from increased prices in additional refining capacity.
15.3.2 True Sustainability is Conforming with Nature
The conventional route has been to reconcile the principles discovered and innovations produced with the demands and requirements identified earlier with the aphenomenal model. While this route remains open and available, there is an alternative. It consists of reformulating the deeper problem as the "humanization of the environment." This solution has infinite pathways, but all pathways to a solution must meet a single guideline. That guideline may be outlined by defining what "change" looks like, so that people are not fooled by claims based purely on external appearances of "change." The essential point is that, while the basis of change is internal, the conditions of change are external. Here, the basis is determined by structure, function, and intention, whereas conditions are prepared historically through time. Furthermore, by this definition, the relevant criteria of necessity and sufficiency are that, without preparation of the conditions of change, none of the internal structures, functions, and intentions can guarantee or sustain the changes for which they provide the basis.
It is especially crucial that knowledge-gathering activities, such as research, be re-ordered in all fields of science and engineering on the basis of looking after the future, not by mortgaging it so as to indefinitely extend the present, but rather by working and/or arranging matters in the present so as to take care of the long term and, thereby, ensure the short term as well. By its advocacy of tackling today's problems without unduly burdening future generations, this outlook overcomes serious limitations inherent in the long-standing mantra of "reduce, reuse, and recycle," which is associated with the agenda of "environmental protection." Instead, this outlook substitutes for the pragmatic stance of the "three Rs" a natural act of personal stewardship and of taking responsibility for the fate of humanity, based on the aim of truly sustainable development with appropriate criteria. What is being labeled here as a "new synthesis" includes both new elements alongside existing phenomena and new ways of arranging the new and existing elements. As a general society-wide process, the "humanization of the environment" appears as something external to the individual and, thus, as part of the conditions of change. Its role is nothing less than to elevate conscience to its place as the driver of everything, unfolding a social and economic order that is needs-based rather than greed-based. The details of the systematic workings of what we are calling the "economics of intangibles" will be provided by what comes out of the struggle to introduce these innovations. With conscience as the driver, this much can be sketched. The following key intangibles can assume their proper roles when redefined as follows:

1. Time stands for the long term or the characteristic term, rather than t = "right now."

2. Knowledge refers to things-in-themselves and in-relation, rather than only the perception of things in their external appearance.

3. Intentions must be redefined to include the effects of our actions on others, based on recognizing that deeds come from the intentions of individuals. Although deeds are external to the individual, intention is internal and, thus, forms part of the basis of change.

When modelling anything for the new synthesis based on these key intangible criteria,
intentions should become transparent on both the social and individual scale in order that, among other things, the meaning of certain conventional economic actions is transformed:

a) Investment can be viewed as something undertaken for the long term. Under the new temporal criterion of Δt ≠ 0, investments that amount to hoarding disguised as "saving" are not considered worthy of the name.

b) Whatever has not been produced cannot be distributed. Hence, any treatment of production, distribution, and exchange as objects of speculation is superseded.

c) Beyond what is required to purchase whatever is needed by one's dependents, money must be treated as a trust and no longer as a source of usury or any other form of enslavement. This is rife with consequences. For example, charity becomes possible only without strings.

What is meant by "a social and economic order that is needs-based rather than greed-based"? First and foremost, it means that self-interest cannot be the final arbiter and that, where they conflict, the interests of the individual and of society must be consciously, and conscientiously, reconciled. This, in turn, is part of a larger picture in which nothing in the natural-physical environment can be treated with any less care than one would treat any other human. This means putting an end to the subordination of everything in the natural-physical environment to human whims, as well as to the subordination of the social to the individual. In sum, "a social and economic order that is needs-based rather than greed-based" means putting an end to the separation of humanity from the natural-physical environment and to the separation of the individual from the social environment of all humanity. (These separations are of course not real. If they were, we could exist neither as individuals nor in society. The notion of such separation is effected at the level of outlook, as the individual, from earliest consciousness, is guided to interpret and internalize the notion that humanity stands at the top of the food chain.) From the standpoint of the economics of everyday living, the most important aspect of "a social and economic order that is needs-based rather than greed-based" is that it demonstrates in practice how doing good can be good business. In other words, by
organizing with a view to ensuring the long term, material success in the short term may not only be as high as or higher than what would be achieved by conventional approaches, but it would also acquire a future, based on maintaining a sustainable set of practices from the outset. Today there is no corner of the globe in which technological innovation is not undertaken. Yet, we have an enormous amount to learn all over again from the vast array of traditional methods and technologies developed centuries and millennia ago in the traditional village lives of Asia, Africa, and Latin America. These are the places spurned as backward and excessively labour-intensive by all those who premise economic development:

1. on a prior infusion of money capital,
2. on market-based standards of commodity production, and
3. on a notion that associates economic efficiency with diminishing and minimising the application of creative human labouring power.

For the entire history of the Industrial Revolution, beginning in Britain in the middle of the 18th century, one constant theme has been the increasing scale on which production is carried on. The drive towards more cost-effective energy systems powering industry, from coal to petroleum to electricity to natural gas, has encouraged greater levels of production, while the ruinous consequences of periodic crises of overproduction have led stronger capital formations to absorb weaker ones, thereby, over time, increasingly concentrating ownership of the productive forces as well. Combining the concentration of production with the concentration of its ownership has accelerated tendencies towards the integration of the primary, secondary, and tertiary phases of production and distribution under a single corporate umbrella, in a process generally labeled "vertical integration." The notion of a "transmission belt" provides a unifying common metaphor that defines the interrelationships of the various structures and functions under such an umbrella. According to this metaphor, the demand for products on the one hand and their supply on the other can be balanced overall, even though certain markets and portions of the corporate space may experience temporary surpluses or shortages of supply or demand. This apparently elegant unity of structure, function, and purpose at the level of very large-scale
monopolies and cartels is one side of the coin. The other side is seen in the actual fate of the millions of much smaller enterprises. Conventional accounting for these phenomena assumes that the vertically integrated "upstream-downstream" model is the most desirable, and that the fate of the small enterprise is some throwback that "economic evolution" will dispose of in the same way that entire species disappear from the natural world. Conventional accounting fails to explain, however, the utterly unsustainable scale of resource rape and plunder that the assertedly superior, vertically integrated "upstream-downstream" enterprise seems to require in order to maintain its superiority. From Quesnay's Tableau économique in the 1760s to the work of the 1973 Nobel Economics laureate and Harvard University professor Wassily Leontieff (Leontieff 1973), transmission-belt theories of productive organization, along with all the "input-output" models of economic processes spawned from their premises, have focused entirely, and actually very narrowly, on accounting for the circulation of capital and nothing else. Whatever this position has been based on, it is not science. Karl Marx's work on the circulation of capital (Marx 1883) and Sraffa's Production of Commodities by Means of Commodities (Sraffa 1960) demonstrated that the rich getting richer and the poor getting poorer is inherent and inevitable in any system that buys and expends the special commodity known as labor-time in order to produce whatever society needs in the form of commodities. Their research established that the circulation of variable capital (wages spent to buy consumer goods) runs counter, and actually opposite, to the circulation of constant capital (money spent on raw materials used in production and for replacing, repairing, or maintaining machines and equipment used up or partially "consumed" in the processes of production). In effect, merely redistributing the output of such a system more equitably cannot overcome its inherent tendencies towards crises of overproduction or a cumulative unevenness of development, with wealth accumulating rapidly at one pole and poverty at the other. These tendencies can only be overcome in conditions where people's work is no longer treated as a commodity. How is labor-time to be decommodified? There are various pathways to decommodifying labor-time. For the economics of intangibles, time is considered the quantity of labor-time needed to produce and reproduce society's needs on a self-sustaining basis. Supply is planned on the basis of researched
knowledge. Its provision is planned according to the expressed needs and decisions of collectives. Meanwhile, production is organized to maximize inputs from natural sources on a sustainable basis. Primary producers organize through their collectives. Industrial workers and managers organize their participation in production through their collectives. Those involved in wholesale or retail distribution organize their participation through their collectives. The common intention is not the maximum capture of surplus by this or that part of the production-distribution transmission belt, but rather to meet the overall social need. Setting and meeting this intention is the acid-test criterion for all socialist social-economic experiments, past, present, and future, as well as for community economic development mechanisms like micro-lending. The economics of intangibles fills a huge theoretical gap in this respect. Its emphasis on intention, however, also challenges many essentially Eurocentric notions that tend to discount intention's role. One of the greatest obstacles to achieving sustainable development along these lines is the pressure to produce on the largest possible scale. Although such a production scheme is nominally intended to realize the greatest so-called "economies" and "savings of labor," it also contradicts the competitive drive of individual enterprises to maximize returns on investment in the short term. Modern society grants all intentions full freedom to diverge, without prioritizing the provisioning of social needs. This means, however, that every individual, group, or enterprise considers its cycle and social responsibility completed and discharged when the short-term objective, whatever it may be, is achieved. On the other hand, the only way to engender and spread truly sustainable development is to engage all individuals and their collectives for, and according to, common long-term intentions and, on that basis, to harmonize the differing short-term interests of the individual and the collective. Although the execution of any plan, however well designed, requires hierarchy, order, and sequence, it does not follow at all that plans must also be produced in a top-down manner. This top-down approach to drawing up economic plans was one of the gravest weaknesses of the eastern-bloc economies that disappeared with the Soviet Union. Starved of the flow of externally supplied credit, the bureaucratized center on which everything depended became brain-dead. Nowhere else in their systems could economic initiative be restored "from below."
Truly sustainable development encounters another serious hurdle in the prevailing legal systems of North America. In this arena, the greatest obstacle arises from the laws that prioritize the protection of property (which can be used to make money or to exploit others) over the protection of people's individual and collective rights as individuals born to society. One of the forms in which this finds expression is the definition and treatment of corporate entities as legal persons, which, unlike physical persons, are exempt from imprisonment upon prosecution and conviction for the commission of criminal acts. The prioritizing bias of the existing legal system, expressed most rigidly in its unconditional defense of the sanctity of contract, has many implications as well for the manner in which money and credit may be offered or withdrawn in conventional dealings among business entities, corporate or individual. The shifting needs of real human collectives affected by such changes take second place to the actual wording of a contract, regardless of the conditions in which it was made. This gives wide latitude to any individual or group that would seek to wreck the interests of any well-intended collective, while affording the victims of sharp practice almost no remedy.
15.3.3 Knowledge for Whom?
Conventional economic theory correlates stable, and relatively low, interest rates with higher well-being, as expressed in a large per-capita Gross Domestic Product. By such measures, the United States, Canada, and the leading economies of western Europe routinely "come out on top" in international rankings. Conversely, by the same ranking process, the most impoverished societies of the Third World have the lowest per-capita GDP and quite high interest rates, seemingly "clinching the argument." The moment one looks beyond the tangible evidence supporting these claims and correlations, however, the larger picture suggests how partial and limited this correlation is. The veritable jungle of channels and pathways facilitated by an entirely corporate-dominated system of money supply (in which "money" includes credit and other forms of term indebtedness) is the structural underpinning of intangible relationships that stimulate intense competition to issue debt and, therefore, competition to keep interest rates within a narrow band. In societies lacking such extensive corporate-generated networks
or the accompanying web of relationships, the nature and extent of the risks that a lender undertakes cannot be accommodated in local markets fully, or in a sufficiently timely fashion, for interest rates to remain low for any extended period. These gaps are bridged in practice by a welter of private, temporary, family- or clan-based, and definitely non-corporate relationships enabling certain classes of customers to repay large portions of their contracted debt in various non-cash forms. Those with more means repay more in cash. The really wealthy borrow from, and do business with, Western banking institutions or their commercial correspondents in these countries. The hard-currency sectors of the economy, largely in the hands of government officials and foreign corporations and their local agents, are transferring out of the country a large portion of the profits garnered from commerce conducted inside the country. The upshot is that the GDP total for such countries starts at a much lower level and looks far poorer in per-capita terms compared to any "developed" country. What happens as Western corporations and financial networks penetrate these societies further? More money and credit circulate longer inside the country before completing their circuit and exiting the economy as hard currency in foreign bank accounts and payments on government loans, so per-capita GDP appears to go up. And as these networks spread internally and compete to attract borrowers, interest rates decline and increasingly occupy a more stable and narrowing band. This is then described as underdeveloped countries "catching up" to developed countries. That claim, however, represents the endpoint of a completely aphenomenal chain. The correlation of interest rates and GDP per capita is a chimera: neither is actually the reason for the inverse behavior of the other. Additionally, two other well-known major economic indices associated with overall economic well-being, the inflation rate and the unemployment rate, produce an entirely different yet related set of obstacles to grasping the essence of actual problems of economic development. These indices are seen as being much more subject to explicit policy direction under all circumstances. However, solving (in the direction of sustaining the long-term interests of societies and all their individual members) the problems that follow from either their opposite movement or the failure of their movement altogether turns out to be no less intractable. The modern understanding of the relationship between the unemployment rate and the rate of inflation was elaborated in
the 1950s and 1960s in what became known as the "Phillips curve" (Figure 15.18). What is not aphenomenal about the very notion that the two trends could move together at all, let alone inversely? In order to elaborate such a relationship, it is necessary to elaborate additional concepts such as the "non-accelerating inflation rate of unemployment" (NAIRU). Inflation, meanwhile, is never defined in terms of employment or underemployment of production factors even in general, let alone in terms of the workforce; there is no such animal as a "non-accelerating unemployment rate of inflation." Inflation refers only to the amount of money circulating in the economy (including debt and credit) exceeding, on an accumulating basis, the sales-value of the supply of goods and services normally being consumed. Furthermore, the unemployment rate is calculated on a basis that excludes from the potential total number of workers, in any given period of time, all those who have left the workforce for that period, even though the vast majority of such individuals do form part of the workforce over the years. The range in which the rate of inflation can move, therefore, is far broader and less bounded than the range in which the unemployment rate can move. It follows that the comparability of movement in the unemployment rate and the rate of inflation is purely notional.
Figure 15.18 The "Phillips curve".
The "Phillips curve," or PC, purported to explain how relatively full employment might be possible for periods of relatively high inflation of money-prices, suggests that monetary or fiscal policy intervention by governments a n d / o r central banks could counter increases in unemployment in the short-term by increasing the money supply in the short-term, e.g., moving policy targets from condition "A" to condition "B" (Phillips 1958). Stagflation in the U.S. economy, which followed the extrication of U.S. military forces from wars in Southeast Asia and consequent reductions in spending by the Department of Defense, demonstrated just how baseless this optimistic scenario turned out to be.
15.3.4 The Knowledge Dimension and How Disinformation is Distilled

A great deal of what passes as social science consists of data whose actual trend-line comes not from the development of events as they are observed to unfold, but rather from the intentions of the researcher compiling the information. This has been the source of much mischief, including the correlations discussed in the previous section. Although such distillations of information are presented as contributions to knowledge, it is difficult to determine the truth of such an assertion in any given case when only selected data points are provided. The following sets of graphs, comprising Figure 15.19, demonstrate how data extracted from the knowledge dimension in such selective ways can obscure understanding of the larger picture in profound ways. The overarching development since the late 1950s in the developing countries has been a struggle to broaden independence from the political sphere to the economic sphere. This process has been impelled by historical factors that continue to play out in the present. More recent developments have accelerated or decelerated the cycle, but the overall path is similar to any Julia or Mandelbrot set, i.e., classical mathematical chaos, only expressed over the passage of historical time and therefore appearing as an ongoing cycle of self-similar but aperiodic helices. The historical context links these phenomena. They all developed in response to the emergence of political independence from the old European colonial powers in various parts of Africa and from post-Soviet Russia in central Asia.
Figure 15.19 Full historical view of the increasingly implosive trend of neo-colonial decay in "developing" countries (a), compared with three partial views abstracted from it: (b) the trend of GDP per capita in selected LDCs, (c) the primary-production share of GDP per capita in selected LDCs, and (d) the middle classes' share of national income in selected LDCs.
(a) Each helix identifies a specific "neo-" spiral in order of its historical appearance. In general, as time moves forward, these helices appear "tighter," i.e., the average individual radius diminishes. Depending on the phase space selected as the frame of reference, one might see the complete picture (in the knowledge dimension) or, depending on which effects of other dependent parameters are discounted or included, mainly a rising trend (graph b), a falling trend (graph c), or a steady state (graph d).
(b) For selected less-developed countries, GDP per capita demonstrates a clearly rising trend that continues from the 1960s into the 21st century. In this collection of points, sequenced identically from a subset of points in the knowledge phase-space of the neo-spiral graphic, a large amount of information that would have disclosed the actual historical course of development of the global political economy in this timespan - the actual context of this GDP increase - has been lost.
(c) The falling trend in the proportion of GDP due to primary production in the fastest-growing economies among the less-developed countries can be readily reconstituted from yet another subset of identically sequenced points abstracted from the fuller picture of the knowledge-dimension phase-space. As in graph (b), everything historically specific has been stripped out, so the direction suggested by the trendline is actually meaningless.
(d) The data points depict a steady-state trend in the share of national income enjoyed, up to the early 1990s, by the small middle stratum found in most LDCs. As in graphs (b) and (c), the data points are sequenced exactly as they were located in the original "neo-" spiral graph, with all historically specific information removed.
dubbed "neo-colonialism" by Patrice Lumumba, the late Congolese independence leader. This term stressed the continuity of foreign economic domination of these countries, usually by their former colonial occupiers, for some indefinite but considerable period following formal political independence. Certain other processes then developed in the wake of these unfolding neo-colonial relations. The top panel of each of the three sets of Figure 15.19 illustrates how the neocolonial process fed into the emergence of a neo-imperialist system. In this rearrangement, sometimes called the "bipolar division of the globe," the (at the time) Soviet Union dropped any pretence of defending its "socialist camp" from actual or threatened attack by the U.S. or its NATO bloc. They resorted to definite arrangements with the U.S. bloc to divide the globe into spheres of influence consisting of countries not necessarily sharing the same social or economic system as their Big Brother, nevertheless depending on one or the other superpower for security from external attack. As this encountered growing resistance in these countries "from below," both superpowers and especially the United Nations spawned large numbers of what were called "non-governmental organisations" (NGOs). These were really neo-governmental organizations, supposedly supplementing inadequate local services but also gathering intelligence on incipient protest movements. With the disappearance of the USSR, the pretence of social service for society's sake was dumped, and a ruthless neoliberal doctrine declared only the "fittest" developing countries could survive. As the tightening sequence of helical spirals indicates, this process is increasingly implosive with time. However, as graphs (b), (c), and (d) illustrate, by "slicing" this view of the overall data trend from the knowledge dimension so as to exclude the actual progression of historical time, one can extract many scenarios about what is happening to social and economic conditions in the weakest and most vulnerable parts of the world, known euphemistically as either the "less-developed" or "least-developed" countries (LDCs).
15.4 A Case of Zero-waste Engineering
While zero-waste engineering is commonly seen as a basis for developing renewable energy sources, few realize that conventional economic analysis models are incapable of predicting the type of energy crunch
and energy pricing crises that we are currently facing (Zatzman and Islam 2007). In order to predict the future outlook, there is a need to balance energy demand and energy supply, rather than treating them as dependent variables of the "addiction to consumption" (Imberger 2007). Only sustainable development of energy production and utilization can guarantee this balance. While it is commonly understood that only renewable energy fits this description, it is possible to utilize "non-renewable" energy sources (e.g., fossil fuels) as long as the processing mechanism is sustainable (Khan and Islam 2007b). It is becoming increasingly clear that there is no need to resort to unsustainable practices even for handling short-term emergencies (Islam et al. 2008). With the currently used economic analysis techniques, sustainable energy practices appear to be more expensive than their unsustainable counterparts. This is because the conventional analysis does not account for numerous hidden costs, including the cost of environmental damage, social imbalance, and others (Imberger 2007). These hidden costs are accounted for only in the economics of intangibles (Zatzman and Islam 2007a). With such an analysis, it becomes evident that the cost of energy with sustainable practices would be very stable over the longer term due to continuous supplies of sources, even if only the supply-and-demand model were used. In addition, with the continuous improvement of extraction technologies, the cost is expected to decrease. On the other hand, the cost of energy from unsustainable processes increases continuously as the hidden costs surface. This raises many questions as to the credibility or significance of comparing the economics of various technological options. If a technology is not sustainable, making it economically appealing is possible only if long-term considerations are hidden. In the information age, such a modus operandi is not acceptable. In this section, zero-waste living with inherently sustainable technology, as developed by Khan et al. (2007b), is used as an example to evaluate the economics of the process by a detailed analysis of tangible and intangible features. The zero-waste scheme is a complete loop, with all products and by-products continuously recycled through the natural ecosystem. The proposed zero-waste scheme comprises a biogas plant and a solar trough that are applied to solar refrigeration, solar heating and cooling, solar aquatic treatment of wastewater, and a desalination plant. The detailed engineering of the scheme has been shown by Khan et al. (2007a, 2007b). Because of the natural and renewable
nature of the key units, it is necessary to develop a guideline for economic evaluation that will unravel the true economics of the scheme. Note that a renewable energy scheme is used to demonstrate that the knowledge economics of sustainable fossil fuel production and utilization is no different from the knowledge economics of any other zero-waste scheme.
15.4.1 Economic Evaluation of Key Units of Zero-waste Scheme
In this section, a detailed economic analysis of the various components of a zero-waste scheme is presented, considering both tangible and intangible aspects.

15.4.1.1 Biogas Plant
Before a biogas plant can be put into operation, it is necessary to estimate the cost and benefit profile of the plant. The capital involved in the different phases of plant installation and operation can be divided into two categories: total capital investment and annual expenditure. Capital investment is the total amount of money initially needed to supply the necessary plant construction and facilities, plus the amount of money required as working capital for the operation of the facilities. The initial capital investment is considered a one-time investment for the entire life cycle of the plant. By contrast, the annual expenditure is the cost incurred to run the facility for one year, which includes the depreciation of different facilities, maintenance cost, labor cost, and raw materials cost. The cost of a biogas plant varies with the type of bio-digester and the location of its operation. So far, successful bio-digester plants are found to operate in India, Germany, Nigeria, and other countries. Following the costs incurred in successful bio-digester plants around the world (Kalia and Singh 1999; Adeoti et al. 2000; Omer and Fadall 2003; Singh and Sooch 2004), a generalized cost ratio factor has been established, as suggested by Peters and Timmerhaus (1991), in order to estimate the cost of a bio-digester plant. In this study, the cost and benefit of a biogas plant that can be operated on kitchen wastes from a 100-apartment building are evaluated. According to Khan et al. (2007b), the biogas plant of a 100-apartment building has a capacity of 16 cubic meters and can house 116 kg of biological
wastes, which, in turn, produce 9.9 m3 of biogas, 69.7 kg of digested bio-manure, and 342.9 kg of ammonia leachate daily. Table 15.2 shows the cost and benefit estimation of a biogas plant with a capacity of 16 cubic meters over a 20-year economic life cycle.

Table 15.2 Cost and benefit estimation of a biogas plant for a 20-year economic life cycle.

Total Capital Investment (FC)
  Item                                   % of total capital investment   Cost (US$)
  Construction cost                      55                              1287.00
  Facilities and installation cost       22                              514.80
  Labor cost (construction)              20                              468.00
  Land cost (if required to purchase)    10                              234.00
  Total                                                                  2340.00

Annual Expenditure
  Item                                          % of annual expenditure   Amount                              Cost (US$)
  Cost of kitchen waste (if required
  to purchase)                                  12                        (116 kg/day × 365 days) 42,340 kg   91.20
  Labor cost                                    25                                                            190.00
  Curing and absorption cost                    8                                                             60.80
  Operation and maintenance cost                55                                                            418.00
  Total                                                                                                       760.00

Output and Economic Benefit
  Item                       Cost factor      Amount                                    Benefit (US$)
  Biogas                     US$0.138/m3      (9.9 m3/day × 365 days) 3613.5 m3         498.66
  Digested bio-manure        US$0.00904/kg    (69.7 kg/day × 365 days) 25,440.5 kg      229.95
  Ammonia leachate           US$0.00299/kg    (342.9 kg/day × 365 days) 125,158.5 kg    374.70
  Total                                                                                 1074.40
The economics of a biogas plant involves the calculation of the annual profit and the payback period for the plant:

Annual profit (cash inflow) = Annual income - Annual expenditure = $(1074.40 - 760.00) = US$314.40 (15.1)

If the cash inflows are the same for each year (neglecting the time value of money), the payback period can be calculated as follows (Blank and Tarquin 1983):

Payback period = Cost of the plant (present value) / Annual cash inflows = 2340.00 / 314.40 = 7.44 years (15.2)
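This simple calculation is easy to script. The following minimal Python sketch (ours, not from the cited sources) reproduces Eqs. 15.1 and 15.2 using the Table 15.2 totals, with the time value of money neglected as in the text:

```python
# Minimal sketch of the simple payback calculation of Eqs. 15.1-15.2,
# using the Table 15.2 figures (time value of money neglected).

def payback_period(capital_cost, annual_income, annual_expenditure):
    """Simple payback period in years, assuming constant yearly cash inflows."""
    annual_profit = annual_income - annual_expenditure  # Eq. 15.1
    return capital_cost / annual_profit                 # Eq. 15.2

# Biogas plant: 16 m^3 digester serving a 100-apartment building (Table 15.2)
capital = 2340.00       # total capital investment, US$
income = 1074.40        # biogas + bio-manure + ammonia leachate, US$/yr
expenditure = 760.00    # feedstock, labor, curing, O&M, US$/yr

print(f"Annual profit: US${income - expenditure:.2f}")
print(f"Payback period: {payback_period(capital, income, expenditure):.2f} years")
# -> Annual profit: US$314.40; Payback period: 7.44 years
```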
The payback period can be reduced by increasing the waste-handling capacity, which lowers the capital investment cost and the annual maintenance and operating cost per unit of waste handled. Bio-digesters in the mesophilic temperature range (25-40°C) need energy for heating (especially in cold countries), which can be supplied by solar technology during the day and partly by biogas burning during the night (or in the absence of sufficient solar radiation). Bio-digesters can be operated at low temperatures without any heating; however, the retention time and the plant size then need to be increased. From the comparative studies of mesophilic and psychrophilic biodigestion performed by a number of researchers (Kashyap et al. 2003; Connaughton et al. 2006; McHugh et al. 2006), the cost of a biogas plant with or without heating is expected to be the same: the cost of heating is offset by the longer retention time.
This payback period is calculated on the basis of the present biogas price. The cost of natural gas will increase due to limited supply, while the cost of biogas production will decrease due to the improvement of the overall process; thus, the payback period will shorten in the near future. Considering only the short-term tangible benefit, this payback period is competitive. If the socioeconomic, ecological, and environmental factors are considered as well, the establishment of a biogas plant is worthwhile due to its sustainable features. Moreover, in this economic study, only the bio-digester outputs (biogas, ammonia leachate, and bio-manure) are considered to be the final products. However, the use of these products in other plants as raw materials can enhance the total economic benefit. Finally, processed natural gas carries toxic additives that have long-term negative impacts on the environment. Biogas is free from these additives and, therefore, has the added benefit of eliminating the environmental costs arising from the toxic additives used during the processing of natural gas.

15.4.1.2 Solar Parabolic Trough
Khan et al. (2007a) described an experimental setup of a parabolic trough with a surface area of 4.02 m2. The economics of the experimental parabolic solar trough have been estimated assuming a 70% operating efficiency of the collector throughout the year. For this, the average yearly global solar radiation data for Halifax, NS, Canada from 1971 to 2000 (Environment Canada 2007) have been used to estimate the yearly solar absorption by the collector. Table 15.3 shows the actual yearly solar radiation in Halifax and the estimated solar absorption by the collector. The total annual solar radiation that can be absorbed by the solar collector in Halifax is found to be 3145.31 MJ/m2. In this study, the cost of heating is taken as 7.5 cents/kWh, the average cost of electricity in the U.S. As 1 kWh is equivalent to 3.6 MJ, the cost of electricity is US$0.02083/MJ. Table 15.4 shows the cost estimation of a parabolic solar trough with a 4.02 m2 surface area over a 20-year economic life cycle. The economics of a parabolic solar trough of 4.02 square meters are as follows:

Annual profit (cash inflow) = Annual income - Annual expenditure = $(263.30 - 150.00) = US$113.30 (15.3)
Table 15.3 Annual solar radiation (1971-2000) and estimated solar absorption by the solar collector in Halifax, NS, Canada.

  Month       Global (RF) (MJ/m2)   Estimated Solar Absorption by Solar Collector (MJ/m2), based on 70% collector efficiency
  January     159.3                 111.51
  February    238.6                 167.02
  March       367.8                 257.46
  April       448.2                 313.74
  May         543.8                 380.66
  June        582.9                 408.03
  July        605.9                 424.13
  August      550.2                 385.14
  September   417.3                 292.11
  October     290.2                 203.14
  November    160.1                 112.07
  December    129.0                 90.30
  Yearly      4493.3                3145.31

Solar absorption on a 4.02 m2 surface = 3145.31 × 4.02 = 12,644.15 MJ
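As a cross-check on Table 15.3, the short Python sketch below recomputes the absorption figures and the annual energy value from the monthly radiation data; the 70% collector efficiency, 4.02 m2 aperture, and 7.5 cents/kWh electricity-equivalent price are the study's stated assumptions, and the script itself is only an illustration:

```python
# Recompute Table 15.3's absorption column and the annual heating value,
# under the study's stated assumptions: 70% collector efficiency,
# 4.02 m^2 aperture, and electricity at 7.5 cents/kWh (1 kWh = 3.6 MJ).

monthly_radiation_mj_per_m2 = {           # Halifax, NS, 1971-2000 averages
    "Jan": 159.3, "Feb": 238.6, "Mar": 367.8, "Apr": 448.2,
    "May": 543.8, "Jun": 582.9, "Jul": 605.9, "Aug": 550.2,
    "Sep": 417.3, "Oct": 290.2, "Nov": 160.1, "Dec": 129.0,
}
EFFICIENCY = 0.70            # collector operating efficiency
AREA_M2 = 4.02               # trough aperture area
PRICE_PER_MJ = 0.075 / 3.6   # US$/MJ equivalent of 7.5 cents/kWh

annual_radiation = sum(monthly_radiation_mj_per_m2.values())   # 4493.3 MJ/m^2
absorbed_per_m2 = EFFICIENCY * annual_radiation                # 3145.31 MJ/m^2
absorbed_total = absorbed_per_m2 * AREA_M2                     # ~12,644 MJ

print(f"Absorbed per m^2: {absorbed_per_m2:.2f} MJ")
print(f"Absorbed on {AREA_M2} m^2: {absorbed_total:.2f} MJ")
print(f"Annual heating value: US${absorbed_total * PRICE_PER_MJ:.2f}")  # ~US$263
```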
Table 15.4 Cost and benefit estimation of a solar trough of 4.02 m2 for a 20-year economic life cycle.

Total Capital Investment
  Item                                  Cost (US$)
  Solar collector tube (receiver)       450.00
  Parabolic mirror                      400.00
  Mirror support (plywood)              100.00
  Trough's support                      200.00
  Solar pump with PV module             300.00
  Oil tank                              50.00
  Piping                                50.00
  Vegetable oil                         100.00
  Labor                                 250.00
  Solar tracking                        300.00
  Facilities and installation           100.00
  Total                                 2300.00

Annual Expenditure
  Item                                  Cost (US$)
  Labor cost                            100.00
  Operation and maintenance cost        50.00
  Total                                 150.00

Annual Output and Economic Benefit
  Annual heating value (12,644 MJ at US$0.02083/MJ)    263.30
And the payback period is found to be very high:

Payback period = Cost of the plant / Annual cash inflows = 2300.00 / 113.30 ≈ 20.3 years (15.4)
According to this study, considering an electricity price of 7.5 cents per kWh, the parabolic solar collector technology is not found to be attractive. However, this is a process that uses free energy without any depletion of a valuable, limited natural source. If the current depletion of natural energy sources continues, the energy price will keep increasing, which could then make the solar technology
attractive. The relation between unit energy price and payback period has been depicted in Figure 15.20 to evaluate the feasibility of solar technology from a purely short-term and tangible economic point of view. It is found that the payback period is reduced to nearly one year if the unit energy price increases to 50 cents. This study shows the economic evaluation of solar absorption only for a city in a cold country, where the solar radiation is very low. The technique is much more attractive in cities where the unit energy price is high and solar radiation is much greater. If solar energy is abundant in a place, this technology is ideal even from the short-term standpoint, and the payback period will be very competitive even at a low unit price of energy. Apart from the direct tangible benefit, solar energy is very clean and ecologically beneficial, as discussed in the earlier section. For a proper economic evaluation, every factor should be considered in order to understand the actual benefit of this renewable technology.
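The shape of the curve in Figure 15.20 can be reproduced in a few lines. The sketch below is an illustration under the same assumptions as Eq. 15.4 (US$2300 capital cost, US$150 annual expenditure, 12,644 MJ of delivered energy per year), not the authors' original calculation:

```python
# Sweep the unit energy price and recompute the simple payback period of
# the 4.02 m^2 trough (capital US$2300, annual expenditure US$150,
# annual delivered energy 12,644 MJ), as plotted in Figure 15.20.

CAPITAL = 2300.00         # US$
ANNUAL_EXPENSE = 150.00   # US$/yr
ANNUAL_ENERGY_MJ = 12644.15

def payback_years(price_cents_per_kwh):
    income = ANNUAL_ENERGY_MJ * (price_cents_per_kwh / 100) / 3.6  # US$/yr
    profit = income - ANNUAL_EXPENSE
    return CAPITAL / profit if profit > 0 else float("inf")

for cents in (7.5, 10, 20, 30, 40, 50):
    print(f"{cents:5.1f} cents/kWh -> payback {payback_years(cents):5.1f} years")
# At 7.5 cents/kWh this reproduces the ~20.3-year figure of Eq. 15.4;
# by 50 cents/kWh the payback falls to well under two years.
```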
Figure 15.20 Payback period vs. unit price of energy.

15.4.2 A New Approach to Energy Characterization

Besides material characterization, the characterization of a process and its energy source is vital for evaluating the efficiency of a process. Chhetri (2007) recently outlined a new approach to energy characterization. This work shows that including all factors, rather than one factor (often selected based on a price index), is important in ranking an energy source. Absorption refrigeration systems and vapor compression refrigeration systems are the most common refrigeration systems and can be operated by different energy sources. Considering the details of a refrigeration system (Khan et al. 2007a)
with different processes and energy sources, an example of evaluation criteria can be presented as follows:

1. Absorption refrigeration system using direct solar energy - consider the efficiency to be E0
2. Absorption refrigeration system using wood as the energy source - consider the efficiency to be E1
3. Absorption refrigeration system using treated fossil fuel - consider the efficiency to be E2
4. Vapor compression refrigeration system using electricity from direct solar energy - consider the efficiency to be E3
5. Vapor compression refrigeration system using electricity from a hydro plant - consider the efficiency to be E4
6. Vapor compression refrigeration system using electricity from untreated fossil fuel - consider the efficiency to be E5
7. Vapor compression refrigeration system using electricity from treated fossil fuel - consider the efficiency to be E6
8. Vapor compression refrigeration system using electricity from nuclear energy - consider the efficiency to be E7

Considering only the input and output of a process, it is speculated that the efficiency of a refrigeration process would rank E7 > E6 > E5 > E4 > E3 > E2 > E1 > E0. This evaluation is based on short-term and tangible considerations at a time of interest (denoted by Zatzman and Islam (2007a) as t = "right now"), neglecting long-term and intangible considerations. To obtain the true efficiency, an analysis of the pathway of any process is needed. Recently, Islam et al. (2006) identified three basic factors of energy that need to be analyzed before labeling a process as efficient. These three factors are global economics, environmental and ecological impacts, and quality. The extraction of energy from the source, its application, and its effect on the products and the consumers should be carefully analyzed to identify the efficient process.
15.4.2.1 Global Economics
It is important to find a means to calculate the global economics of any process. It has already been reported that vapor compression
cooling systems pollute the environment in several directions (Khan et al. 2007a). If both the plant cost and the costs of remedying all the vulnerable effects are considered, the actual cost of the total process can be obtained. The actual cost involves the remedial cost of soil, the remedial cost of air and water pollution, the remedial cost of ecological loss, the medical costs for human beings, etc. (Zatzman and Islam 2007b). Electric power generation is a particularly illustrative example of an energy supply process in which the immediate short-term per-unit cost of output to the individual user can be made to appear low, affordable, and even tending to fall (relative to inflation) over time. Yet, over the last century, in which it became the key energy supply component of the increasingly socialized economies of developed countries, output "efficiency" was increased by displacing all the costs over an ever-broadening base. The environmental and long-term costs of hydropower were borne by native peoples and agricultural producers whose water supplies were plundered. The costs of the spread of electric power production plants were distributed over society in the form of an increased diversion of coal, then oil, and, today, natural gas for generating power and redepositing it as waste in the forms of acid rain and other pollution. The costs of expanding the power transmission network were spread widely over larger urban and suburban residential populations and agricultural producers in the form of distortions of highest-and-best land-use planning principles in order to favor rights of way for transmission lines. The effort to accomplish the ultimate in output efficiency at the level of the basic fuel source has reached its apex with the promotion of nuclear-powered electricity, of which the costs to the entire planet in atomic waste alone are incalculable and can only grow, with no possibility of being contained or reduced. On the other hand, by comparison, the pathway of the solar absorption cooling system shows that it is not associated with the above vulnerable effects, which is why no additional cost is required.
15.4.2.2 Environmental and Ecological Impact
Each process has an environmental impact, either positive or negative. Positive impacts are those expected to maintain the ecological balance. Most of the processes established to date disrupt the ecological balance and produce enormous negative effects on all living beings.
For instance, the use of Freon in cooling systems disrupted the ozone layer and allowed harmful rays of the sun to reach the earth and living beings. Burning "chemically purified" fossil fuels also pollutes the environment by releasing harmful chemicals. Energy extraction from nuclear technology leaves harmful spent residues.
15.4.2.3 Quality of Energy
The quality of energy is an important phenomenon. However, when it comes to energy, the talk of quality is largely absent. In the same spirit as the "chemicals are chemicals" mantra that launched the mass production of various foods and drugs irrespective of their origin and pathway, energy is promoted as simply "energy," on the spurious basis that "photons are the units of all energy." Only recently has it come to light that artificial chemicals act exactly opposite to how natural products do (Chhetri and Islam 2007). Miralai et al. (2007) recently discussed the reason behind such behavior. According to them, chemicals with exactly the same molecular formulas but derived from different sources cannot have the same effect unless the same pathway is followed. With this theory, it is possible to explain why organic products are beneficial and chemical products are not. Similarly, heating from different sources of energy cannot have the same impact. Heating a home with wood is a natural burning process that has been practiced since ancient times and did not have any negative effect on humans. More recently, Khan and Islam (2007b) extended the "chemicals are chemicals" analogy to "energy is energy." They argued that energy sources cannot be characterized by heating value alone. Using a similar argument, Chhetri (2007) established a scientific criterion for characterizing energy sources and demonstrated that conventional evaluation leads to misleading conclusions if the scientific value (rather than simply the "heating value") of an energy source is ignored. On the other hand, Knipe and Jennings (2007) indicated a number of vulnerable health effects on human beings due to chronic exposure to electrical heating. The radiation due to electromagnetic rays might interfere with the human body's radiation frequency, which can cause acute long-term damage to human beings. Energy with a natural frequency is the most desirable. Alternating current is not natural, which is why there will be some vulnerable effects of its frequency on the environment and human beings (Chhetri 2007). Therefore, it can be inferred that heating by natural sources is better than heating by electricity.
Microwave heating is also questionable. Vikram et al. (2005) reported that the nutrients of orange juice degraded most under microwave heating compared to other heating methods. There are several other compounds that form during electric and electromagnetic cooking that are considered carcinogenic based on their pathway analysis.
15.4.2.4 Evaluation of Process
From the above discussion, it can be noted that considering only the energy efficiency based on the input and output of a process does not identify the most efficient process. All of the factors should be considered and carefully analyzed in order to claim that a process is efficient in the long term. The evaluation should consider both the efficiency and the quality of a process. Considering the material characterization developed by Zatzman et al. (2007), the selection of a process can be evaluated using the following equations:

E_real = E + (E − E0) × δ(s)    (15.5)

where E_real is the true efficiency of a process when long-term factors are considered, E is the efficiency at the present time (t = "right now"), E0 is the baseline efficiency, and δ(s) is the sustainability index introduced by Khan (2007), such that δ(s) = 1 if the technology is sustainable and δ(s) = −1 if the technology is not sustainable.

Q_real = E_real/E0 + δ(s) × L(t)    (15.6)

where Q_real is the quality of the process and L(t) is the alteration of the quality of the process as a function of time. When both E_real and Q_real have positive values, the process is acceptable. However, the most efficient process will be the one that has the higher product value (E_real × Q_real). After the evaluation of efficient processes, an economic evaluation can be made to find the most economical one. Today's economic evaluation of any contemporary process, based on tangible benefit, provides the decision to establish the process for commercial applications. However, decision making for any process needs to weigh the criteria discussed earlier. Moreover, the economics of intangibles should be analyzed thoroughly to decide on the best solution. The time span may be considered the most important intangible factor in this economic consideration. Considering the long term, tangible effects, and intangible effects, a natural process is considered to be the best solution. However, to arrive at any given end-product, any number of natural processes may be available. Selection of the best natural one depends on what objectives have the greatest priority at each stage and what objectives can be accomplished in a given time span. If the time span is considered important, it is necessary to find the natural process that has a low payback period or a high rate of return. However, irrespective of the time span, the best natural process to select would be the one that renders the best quality output with no immediate impacts and no long-term impacts.
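A minimal sketch of how Equations 15.5 and 15.6 might be applied is given below. All numerical inputs (the efficiencies, the baseline E0, and the quality-alteration values L(t)) are hypothetical placeholders chosen for illustration, not data from the studies cited:

```python
# Illustrative evaluation of processes using Equations 15.5 and 15.6.
# All numeric inputs below are hypothetical placeholders.

def e_real(e: float, e0: float, delta_s: int) -> float:
    """True efficiency: E_real = E + (E - E0) * delta(s)   (Eq. 15.5)."""
    return e + (e - e0) * delta_s

def q_real(er: float, e0: float, delta_s: int, l_t: float) -> float:
    """Process quality: Q_real = E_real/E0 + delta(s) * L(t)   (Eq. 15.6)."""
    return er / e0 + delta_s * l_t

# (label, present-time efficiency E, sustainability index delta(s), L(t))
processes = [
    ("E0: absorption, direct solar",            0.30, +1, 0.10),
    ("E2: absorption, treated fossil fuel",     0.45, -1, 0.10),
    ("E7: vapor compression, nuclear electric", 0.60, -1, 0.10),
]
E0_BASELINE = 0.30  # baseline efficiency of the natural (solar) process

for label, e, ds, lt in processes:
    er = e_real(e, E0_BASELINE, ds)
    qr = q_real(er, E0_BASELINE, ds, lt)
    acceptable = er > 0 and qr > 0
    print(f"{label}: E_real={er:.2f}, Q_real={qr:.2f}, "
          f"product={er * qr:.2f}, acceptable={acceptable}")
```

Even with these toy numbers, the pattern matches the argument of this section: once δ(s) = −1 penalizes an unsustainable pathway, the apparent efficiency advantage of the E2–E7 processes erodes, and the natural process wins on the product E_real × Q_real.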
15.4.3 Final Words

The time span contains and expresses the actual intention, in the sense of direction, of a process. Certain individuals or corporations that are positioned favorably will do best from providing and managing processes that accomplish all their benefits in the shortest possible term. This short term is meaningless and lacks any positive value for society as a whole or for any individual consumer of that society. Communities, social collectives, and society as a whole, on the other hand, can do very well from processes that are planned to serve the long term. Processes whose components and pathways come entirely from nature can always provide this long-term benefit.
16 Deconstruction of Engineering Myths Prevalent in the Energy Sector

16.1 Introduction
In Chapter 2, fundamental misconceptions in science (as in process) were deconstructed. Based on those misconceptions, a series of engineering myths have evolved in the energy sector, with an overwhelming impact on modern civilization. This chapter is dedicated to highlighting those myths and deconstructing them.
16.1.1 How Leeches Fell Out of Favor

Over four years ago, the Associated Press reported that the FDA had approved the use of leeches for medicinal purposes (FDA 2004). The practice of drawing blood with leeches, which is thousands of years old, finally redeemed itself by being approved by the FDA. This practice of bloodletting and amputation reached the height of its medicinal use in the mid-1800s. How and why did this perfectly natural engineering solution to skin grafting and reattachment surgery fall out of favor? Could it be that leeches couldn't be "engineered"?
In the fall of 2007, as the time for Nobel Prize awards approached, a controversy broke. Dr. James Watson, the European-American who won the 1962 Nobel Prize for his role in discovering the double-helix structure of DNA, created the most widely publicized firestorm in the middle of the Nobel Prize awards month (October 2007). He declared that he personally was "inherently gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours, whereas all the testing says 'not really.'" Here, we see the clash between a first premise and a conclusion based on a different premise. "Their intelligence is the same as ours" stems from the unstated premise that "all humans are created equal," a basic tenet of the "nature is perfect" mantra. "All testing" to which Watson refers, on the other hand, is based on the premise that the theory of molecular genetics/DNA (which is linked with an essentially eugenic outlook) is true. The entire controversy, however, revolved around whether Dr. Watson is a racist. No one seemed interested in addressing the root cause of this remark, namely, an unshakeable conviction that New Science represents incontrovertible truth. This faith has the same fervor as that of those who once insisted the earth was flat and disallowed any other theory.

Consider the apparently magical symmetry of the shapes perpetrated as the "double-helix" structure of DNA. These representations of the "founding blocks" of genes are aphenomenal; they are not consistent with the more detailed descriptions of the different bonding strengths of different amino-acid pairings in the actual molecule. Much as atoms were considered to be the founding block of matter (which was incidentally also rendered with an aphenomenal structure, one that could not exist in nature), these "perfectly" shaped structures are being promoted as founding blocks of a living body. It is only a matter of time before we find out just how distant the reality is from these renderings. The renderings themselves, meanwhile, are aphenomenal, meaning they do not exist in nature (Zatzman and Islam 2007). This is a simple logic that the scientific world, obsessed with tangibles, seems not to understand.

Only a week before the Watson controversy unraveled, Mario R. Capecchi, Martin J. Evans, and Oliver Smithies received Nobel Prizes in Medicine for their discovery of "principles for introducing specific gene modifications in mice by the use of embryonic stem cells." What is the first premise of this discovery? Professor Stephen O'Rahilly of the University of Cambridge said, "The development of
gene targeting technology in the mouse has had a profound influence on medical research...Thanks to this technology we have a much better understanding of the function of specific genes in pathways in the whole organism and a greater ability to predict whether drugs acting on those pathways are likely to have beneficial effects in disease." (BBC 2007) No one seems to ask why only "beneficial effects" should be anticipated from the introduction of "drugs acting on those pathways." When did intervention in nature, at this level of very real and even profound ignorance about actual pathways, yield any beneficial result? Can one example be cited from the history of the world since the Renaissance?

In 2003, a Canadian professor of medicine with over 30 years of experience was asked, "Is there any medicine that cures any disease?" After thinking for some time, he replied, "Yes. It is penicillin." Then, he was asked, "Why then do doctors tell you these days, 'don't worry about this antibiotic, it has no penicillin?'" "Oh, that's because nowadays we make penicillin artificially (synthetically)," the medicine professor quickly replied. Do we have a medication today that is not made synthetically? Today, not only do synthetically manufactured drugs monopolize FDA approvals, but drugs produced other than by an approved factory process are declared to be of "no medicinal value" or are otherwise extremely negatively qualified. So, when this "miracle drug" becomes a big problem after many decades of widespread application, can we then say that the big problem of penicillin highlights an inherent flaw of the mold, Penicillium notatum, which was used some 80 years ago to isolate penicillin? In 2007, the same question went to a Swedish American doctor. He could not name one medicine that actually cures anything. When he was told about the Canadian doctor's comment about penicillin, he quickly responded, "I was going to use the example of penicillin to say that medicines don't work."

It is increasingly becoming clear that synthetic drugs (the only kind that is promoted as "medicinal") do not cure and in fact are harmful. Every week, study after study comes out to this effect. During the week of October 16, 2007, it was publicly recommended that children under six not be given any cough syrup. When will we be able to upgrade safe use to, say, 96 years of age? The truth is that New Science has created a wealth of the most tangible kind. This wealth is based on perpetrating artificial products in the shortest-term interests of value-addition. "Artificial" always
was and always remains starkly opposite to real - the essence of truth and knowledge. New Science generated a vast wealth of techniques for keeping the production of certain desired effects from intervening in the natural environment, but only incidentally has this uncovered knowledge about the actual pathways of nature that people did not possess before. In this sense, what knowledge has New Science created? Before anyone gloats about the success of the 2007 Nobel Prize-winning discovery, think back 60 years. In 1947, another Nobel Prize in the same discipline was announced. The Nobel committee declaration was as follows:

"Paul Müller went his own way and tried to find insecticides for plant protection. In so doing he arrived at the conclusion that for this purpose a contact insecticide was best suited. Systematically he tried hundreds of synthesized organic substances on flies in a type of Peet-Grady chamber. An article by the Englishmen Chattaway and Muir gave him the idea of testing combinations with the CCl3 groups, and this then finally led to the realization that dichloro-diphenyl-trichloromethylmethane acted as a contact insecticide on Colorado beetles, flies, and many other insect species under test. He determined its extraordinary persistence, and simultaneously developed the various methods of application such as solutions, emulsions, and dusts. In trials under natural conditions Müller was able to confirm the long persistent contact action on flies, Colorado beetles, and gnats (Culex). Recognition of the intense contact activity of dichloro-diphenyl-trichloromethylmethane opened further prospects. Indeed, the preparation might be successfully used in the fight against bloodsucking and disease-carrying insects such as lice, gnats, and fleas - carriers incapable of being reached by oral poisons. In the further trials now conducted, DDT showed a very large number of good properties. At requisite insecticidal dosages, it is practically non-toxic to humans and acts in very small dosages on a large number of various species of insects. Furthermore, it is cheap, easily manufactured, and exceedingly stable. A surface treated with DDT maintains its insecticidal properties for a long time, up to several months." (Nobel Prize presentation)

Each of Paul Müller's premises, as stated above, was false. Today, Professor Yen wrote, "Every meal that we take today has DDT in
it." Had Dr. Müller acted on knowledge rather than a short-term instinct of making money, he would have realized that "practically non-toxic to humans" was a scientific fraud. How does this premise differ from that of James Watson or the trio who received the Nobel Prize sixty years after Paul Müller? There is a widespread belief that the Nobel awards systematically acknowledge transformations of humanity to higher levels of civilization, which are affected as the result of unique contributions of individuals in the sciences a n d / o r at the level of global public consensus-building (e.g., the peace prize). While the Nobel awards process has indeed always been about selecting unique individuals for distinction, the transformations supposedly affected have been another matter entirely. Enormous quantities and layers of disinformation surround the claims made for the transformative powers of these individuals' work. Far from being culturally or socially transformative, prize-winning work, whether it was work underpinning new life-saving medical technologies or work in the physical sciences underpinning advances in various engineering fields, the awards have been intimately linked with upholding a n d / or otherwise extending critically valuable bits of the political and/or economic status quo for the powers-that-be. The DDT example cited above is particularly rich not only in its upholding of an aphenomenal model of "science," but also — and certainly not least — because of the context in which it was found to have been proven so valuable. According to the Nobel Committee's presentation in 1948, Dr Müllers DDT work possessed the following: "...a short and crowded history which, from the medical point of view, is closely connected with the fight against typhus during the last World War. In order to give my presentation the correct medical background, I will first mention one or two points concerning this disease. "Typhus has always occurred as a result of war or disaster and, hence, has been named "Typhus bellicus," "war-typhus," or "hunger-typhus." During the Thirty Years' War, this disease was rampant, and it destroyed the remains of Napoleon's Grand Army on its retreat from Russia. During the First World War, it again claimed numerous victims. At that period more than ten million cases were known in Russia alone, and the death rate was great. Admittedly, the famous Frenchman Nicolle had already, in 1909, shown that the disease was practically solely transmitted by lice, for which discovery he received the Nobel Prize and, thus, paved the way for effective control. But really
successful methods for destroying lice in large quantities, thus removing them as carriers, were not yet at hand.

"Towards the end of the Second World War, typhus suddenly appeared anew. All over the world research workers applied their energies to trying to discover an effective delousing method. Results, however, were not very encouraging. In this situation, so critical for all of us, deliverance came. Unexpectedly, dramatically, practically out of the blue, DDT appeared as a deus ex machina. ...

"A number of Swiss research workers such as Domenjoz and Wiesmann ... concerned themselves with further trials of the substance. Mooser's researches aimed directly at a prophylaxis of typhus. On the 18th of September 1942, he gave a significant lecture to the physicians of the Swiss First Army Corps, on the possibilities of protection against typhus by means of DDT.

"At that time, the Allied Armies of the West were struggling with severe medical problems. A series of diseases transmittable by insects, diseases such as typhus, malaria and sandfly fever, claimed a large number of victims and interfered with the conduct of the War. The Swiss, who had recognized the great importance of DDT, secretly shipped a small quantity of the material to the United States. In December of 1942 the American Research Council for Insectology in Orlando (Florida) undertook a large series of trials which fully confirmed the Swiss findings. The war situation demanded speedy action. DDT was manufactured on a vast scale whilst a series of experiments determined methods of application. Particularly energetic was General Fox, Physician-in-Chief to the American forces.

"In October of 1943 a heavy outbreak of typhus occurred in Naples and the customary relief measures proved totally inadequate. General Fox thereupon introduced DDT treatment with total exclusion of the old, slow methods of treatment. As a result, 1,300,000 people were treated in January 1944 and in a period of three weeks the typhus epidemic was completely mastered. Thus, for the first time in history a typhus outbreak was brought under control in winter. DDT had passed its ordeal by fire with flying colors." (Quote from Nobel Prize presentation speech)

In 1942, it was becoming crucial for the Americans and the British to find ways to preserve such limited troops as they then had deployed against the Third Reich. But why? The Red Army had not yet broken the back of von Paulus' Sixth Army at Stalingrad, but it was already apparent that the Soviet Union was not going to be
defeated by the Nazi invader. A growing concern among U.S. and British strategic planners at the time was that the Red Army could eventually break into eastern Europe before the Americans and the British were in a position to extract a separate peace from the Hitler regime on the western front. By capturing control of the Italian peninsula before the Red Army reached the Balkan peninsula, the Anglo-Americans would be ready to confront the Red Army's arrival on the eastern side of the Adriatic (Cave Brown 1975).

From 1933 to early 1939, the leading western powers bent over backwards to accommodate Italian fascist and German fascist aggression and subversion in Europe. Their hope was to strike a mutual deal that would maintain a threatening bulwark against their common enemy of that time, the Soviet Union (Taylor 1961). Awarding the 1938 Nobel Peace Prize to the Nansen Office, a body that had worked extensively with Germans made refugees by the misfortunes of the First World War (1914-18) in central Europe, was clearly designed to serve that purpose. At this point, the Third Reich had swallowed the German-speaking region of Czechoslovakia, the Sudetenland, and incorporated it into the Reich. The Anschluss annexing Austria to the Third Reich had been set up with the ruling circles in Vienna in the name of protecting German nationality and national rights from "Slavic race pollution," etc. Actions to "protect" German minorities in Poland were publicly placed on the agenda by the Hitler regime (Finkel and Leibovitz 1997).

In brief, the track record of the Nobel awards, in general, has been to uphold the aphenomenal approach to the knowledge of nature and the truth in the sciences and to uphold only those "transformations" that bolster the status quo. This is precisely how and why the imprimatur of the Nobel Committee ends up playing such a mischievous role. Consider what happened within days of the Watson imbroglio. Many may have thought it ended with Watson's suspension from the Cold Spring Harbor Laboratory for his remarks. Others may have thought it ended when he retired permanently a week following his suspension. Less than 48 hours following Dr. Watson's retirement announcement, however, the weekly science column of The Wall Street Journal demonstrated the matter was far from over (Hotz 2007):
"...Whatever our ethnic identity, we share 99% of our DNA with each other. Yet, in that other one percent, researchers are finding so many individual differences they promise to transform the practice of medicine, enabling treatments targeted to our own unique DNA code. Viewed through the prism of our genes, we each have a spectrum of variation in which a single molecular misstep can alter the risk of disease or the effectiveness of therapy... Scientists and doctors struggle for ways to translate the nuances of genetic identity into racially defined medical treatments without reviving misconceptions about the significance, for example, of skin color. The problem arises because it may be decades before anyone can afford their own genetic medical profile. Meanwhile, doctors expect to rely on racial profiling as a diagnostic tool to identify those at genetic risk of chronic diseases or adverse reactions to prescription drugs. Researchers at Brown University and the University of London, and the editors of the journal PLoS Medicine, last month warned about inaccurate racial labels in clinical research. In the absence of meaningful population categories, researchers may single out an inherited racial linkage where none exists, or overlook the medical effects of our environment... It's not the first time that medical authorities have raised a red flag about racial labels. In 1995, the American College of Physicians urged its 85,000 members to drop racial labels in patient case studies because 'race has little or no utility in careful medical thinking.' In 2002, the editors of the New England Journal of Medicine concluded that '"race" is biologically meaningless.' And in 2004, the editors of Nature Genetics warned that 'it's bad medicine and it's bad science.' No one denies the social reality of race, as reinforced by history, or the role of heredity. At its most extreme, however, the concept of race encompasses the idea that test scores, athletic ability, or character is rooted in the genetic chemistry of people who can be grouped by skin color. That's simply wrong, research shows. Indeed, such outmoded beliefs led to the resignation Thursday of Nobel laureate James Watson from the Cold Spring Harbor Laboratory in New York, for disparaging comments he made about Africans. Diseases commonly considered bounded by race, such as sickle cell anemia, are not. ... Researchers studying physical appearance and genetic ancestry in Brazil, for example, discovered that people with white skin owed almost a third of their genes, on average, to African ancestry, while those with dark skin could trace almost half of their genes to Europe, they reported in the Proceedings of the National Academy of Sciences. 'It's clear that the categories
we use don't work very well,' said Stanford University biomedical ethicist Mildred Cho. Government reporting requirements in the U.S. heighten the difficulty. Since 2001, clinical researchers must use groupings identified by the U.S. Census that don't recognize the underlying complexities of individual variation, migration and family ancestry. In addition, medical reports in the PubMed, Medline and the U.S. National Library of Medicine databases were cataloged until 2003 by discredited 19th-century racial terms. Today, there are no rigorous, standardized scientific categories. A recent study of 120 genetics and heredity journals found that only two had guidelines for race and ethnic categories, though half of them had published articles that used such labels to analyze findings. Eventually, genomics may eliminate any medical need for the infectious shorthand of race. 'We need to find the underlying causes of disease,' said David Goldstein, at Duke University's Center for Population Genomics & Pharmacogenetics. 'Once we do, nobody will care about race and ethnicity anymore.'"

No one dares stop the merry-go-round to point out that this is all molecular eugenics, not molecular genetics. There is something fundamentally intriguing and deficient about connecting everything humans can become entirely to genetic/genomic inheritance. Unfortunately, and contrary to its false promise to figure out the science of the building blocks of life itself, this pseudo-science has developed by using the genome as tangible evidence for all the assumptions of eugenics regarding inferiority and superiority. These may be building blocks, but then there's the mortar, not to mention the pathway and the time factor involved by which a living organism emerges seemingly from "lifeless matter." These are matters of grave consequence on which men and women of science should take a clear stand.
16.1.2 When Did Carbon Become the Enemy?
An aphenomenal first premise doesn't increase knowledge, no matter how many steps we take the argument through. We can cite numerous examples to validate this statement. In this, it is neither necessary nor sufficient to argue about the outcome or final conclusion. For instance, we may not be able to discern between two identical twins, but that is our shortcoming (because we spuriously
consider DNA tests the measure of identity), and that doesn't make them non-unique. Two molecules of water may appear identical to us, but that's because we wrongly assumed molecules are made of spherical, rigid, non-breakable particles called atoms. In addition, the aphenomenal features inherent to the atom (e.g., rigid, cylindrical, uniform, symmetric, etc.), the presumed fundamental unit of mass, became the norm of all subsequent atomic theories. Not only that, even light and energy units became associated with such aphenomenal features, which were the predominant concepts used in electromagnetic theory as well as in the theory of light (Zatzman et al. 2008a, 2008b). The false notion of atoms as fundamental building blocks is at the core of modern atomic theory and is responsible for much of the confusion regarding many issues, including blaming carbon for global warming, natural fat for obesity, sunlight for cancer, and numerous others. Changing the fundamental building block from real to aphenomenal makes all subsequent logic go haywire. Removing this fundamentally incorrect first premise would explain many natural phenomena. It would explain why microwave cooking destroys the goodness of food while cooking in woodstoves improves it, why sunlight is the essence of light and fluorescent light is the essence of death, why moonlight is soothing and can improve vision while dim light causes myopia, why paraffin wax candles cause cancer while beeswax candles improve lungs, why carbon dioxide from "refined oil" destroys greenery (global warming) while that from nature produces greenery, and the list truly goes on.

However, this is not the only focus of New Science that is motivated by money and employs only the short-term or myopic approach. In 2003, Nobel Chemistry Laureate Robert Curl characterized the current civilization as a "technological disaster." We don't have to look very far to discover how correct this Nobel Prize-winner's analysis is, or why. Every day, headlines appear showing that, as a human race, we have never had to deal with more unsustainable technologies. How can this technology development path be changed? We cannot expect to change the outcome without changing the origin and pathway of actions. The origin of actions is intention. So, for those who are busy dealing with "unintended consequences," we must say that there is no such thing as "unintended." Every action has an intention, and every human being has the option of acting on conscience or "desire." Theoretically, the former symbolizes long-term (hence, true) intentions and the latter symbolizes short-term (hence,
aphenomenal) intentions. Zatzman and Islam (2007) explicitly pointed out the role of intentions in social change and technology. Arguing that the role of intentions, i.e., of the direction in which the scientific research effort was undertaken in the first place, has not been sufficiently recognized, they proposed explicitly including the role of intentions in the line of social change. This theory was further incorporated into technology development by Chhetri and Islam (2008). They argued that no amount of disinformation can render sustainable a technology that was not well-intended or was not directed at improving overall human welfare, or what many today call "humanizing the environment." It is a matter of recognizing the role of intentions and being honest at the starting point of a technology's development. For science and knowledge, honesty is not just a buzzword; it is the starting point. Chhetri and Islam (2008) undertook the investigation that gave rise to their work because they could find no contemporary work clarifying how a lack of honesty could launch research in the direction of ignorance. From there they discussed how, given such a starting point, it is necessary to emulate nature in order to develop sustainable technologies for many applications, ranging from energy to pharmacy and health care. All aphenomenal logic promoted in the name of science and engineering has an aphenomenal motive behind it.

Recently, Chilingar and associates presented a scientific discourse on the topic of global warming (Sorokhtin et al. 2007). Because the book gives the impression that global warming cannot be caused by human activities, it has the potential of alienating readers who are interested in preserving the environment. However, the book is scientifically accurate, and its conclusions are the most scientific ones possible based on New Science. Chilingar and his associates are scientifically accurate, yet they made a serious mistake in ignoring some facts. As it turns out, this aspect is not considered by anyone, including those who consider themselves the pioneers of the pro-environment movement. When these facts are considered, the theory of global warming becomes truly coherent, devoid of doctrinal slogans.
16.2 The Sustainable Biofuel Fantasy
Biofuels are considered to be inherently sustainable. Irrespective of what source was used and what process was followed to extract
high-value fuels, they are promoted as clean fuels. In this section, this myth is deconstructed.
16.2.1 Current Myths Regarding Biofuel

Increasing uncertainty in global energy production and supply, environmental concerns over the use of fossil fuels, and high prices of petroleum products are considered the major reasons to search for alternatives to petrodiesel. For instance, Lean (2007) claimed that the global supply of oil and natural gas from conventional sources is unlikely to meet the growth in energy demand over the next 25 years. As a result of this line of thinking, biofuels are considered to be sustainable alternatives to petroleum products. Because few are accustomed to questioning the first premise of any of these conclusions, even ardent supporters of the petroleum industry find merit in this conclusion. Considerable funds have been spent on developing biofuel technology, and even mentioning the negative impacts of food (e.g., corn) being converted into fuel was considered anti-civilization.

It is assumed that biodiesel fuels are environmentally beneficial (Demirbas 2003). The argument put forward is that plant and vegetable oils and animal fats are renewable biomass sources. This argument is followed by other supporting assertions, such as the idea that biodiesel represents a closed carbon dioxide cycle because it is derived from renewable biomass sources. Biodiesel has lower emissions of pollutants compared to petroleum diesel. In addition, it is biodegradable, its lubricity extends engine life (Kurki et al. 2006), and it contributes to sustainability (Khan et al. 2006; Kurki et al. 2006). Biodiesel has a higher cetane number than diesel fuel, no aromatics, no sulfur, and contains 10-11% oxygen by weight (Canakci 2007).

Of course, negative aspects of biofuels are also discussed. For instance, it is known that the use of vegetable oils in compression ignition engines can cause several problems due to their high viscosity (Roger and Jaiduk 1985). It is also accepted that the use of land for the production of edible oil for biodiesel feedstock competes with the use of land for food production. Moreover, the price of edible plant and vegetable oils is higher than that of petrodiesel. Based on this argument, alarms were sounded when oil prices dropped in fall 2008, as though a drop in petroleum fuel prices would kill the "environmentally friendly" biofuel projects, thereby killing the prospect of a clean environment. As a remedy to this
unsubstantiated and aphenomenal conclusion, waste cooking oils and non-edible oils are promoted to take care of the economic concerns. It is known that the use of waste cooking oil as biodiesel feedstock reduces the cost of biodiesel production (Canakci 2007), since the feedstock costs constitute approximately 70-95% of the overall cost of biodiesel production (Connemann and Fischer 1998).
16.2.2 Problems with Biodiesel Sources

The main feedstocks of biodiesel are vegetable oils, animal fats, and waste cooking oil; biodiesel itself consists of the mono-alkyl esters of fatty acids derived from such vegetable oils or animal fats. The fuels derived may be alcohols, ethers, esters, and other chemicals made from cellulosic biomass and waste products, such as agricultural and forestry residues, aquatic plants (microalgae), fast growing trees and grasses, and municipal and industrial wastes. Subramanyam et al. (2005) reported that more than 300 oil-bearing crops have been identified that can be utilized to make biodiesel. Beef and sheep tallow, rapeseed oil, sunflower oil, canola oil, coconut oil, olive oil, soybean oil, cottonseed oil, mustard oil, hemp oil, linseed oil, microalgae oil, peanut oil, and waste cooking oil are considered potential alternative feedstocks for biodiesel production (Demirbas 2003). However, the main sources of biodiesel are rapeseed oil, soybean oil, and, to a certain extent, animal fat, with rapeseed accounting for nearly 84% of the total production (Demirbas 2003). Henning (2004) reported that Jatropha curcas also has a great potential to yield biodiesel. The UK alone produces about 200,000 tons of waste cooking oil each year (Carter et al. 2005), which provides a good opportunity to convert waste into energy. Various types of algae, some of which have an oil content of more than 60% of their body weight in the form of triacylglycerols, are potential sources for biodiesel production (Sheehan et al. 1998). Many species of algae can be successfully grown in wastewater ponds and saline water ponds utilizing CO2 from power plants as their food. Utilizing CO2 from power plants to grow algae helps to sequester CO2 for productive use and at the same time reduces the buildup of CO2 in the atmosphere. Waste cooking oil is also considered a viable option for biodiesel feedstock. Even though the conversion of waste cooking oil into usable fuel has not been practiced at a commercial level, the potential use of such oil can solve two problems: 1) environmental problems caused by its
disposal to water courses and 2) problems related to competition with food sources.

Because the pathway is not considered in conventional analysis, the role of the source or the processes involved is not evident. If the pathway were considered, it would become evident that biodiesel derived from genetically modified crops cannot be considered equivalent to biodiesel derived from organically grown crops. Recently, Zatzman et al. (2008) outlined the problems associated with genetic engineering, much of which are not detectable by conventional means. While genetic engineering has increased tangible gains in terms of crop yield and the external appeal of the crop (symmetry, gloss, and other external features), it has also added potentially fatal, unavoidable side effects. In the context of honeybees, the most important impact of genetic engineering is through direct contact with genetically altered crops (including pollen) and through plant-produced matter (including even organic pesticides and fertilizers). A series of scholarly publications have studied the effects of GE products on honeybees. Malone and Pham-Delegue (2001) studied the effects of transgenic products on honeybees and bumblebees. Obrycki et al. (2001) studied genetically engineered insecticidal corn that might have severe impacts on the ecosystem. Pham-Delegue et al. (2002) produced a comprehensive report in which they attempted to quantify the impacts of genetically modified plants on honeybees. Similarly, Picard-Nioi et al. (1997) reported the impacts of proteins used in genetically engineered plants on honeybees. The need for including non-target living objects was highlighted by Losey et al. (2004).

It is true that genetic engineering activities have been carried out at a pace unprecedented for any other technology. This subject has also been hailed as having made the most significant breakthroughs. Unfortunately, these "breakthroughs" only bear fruit in the very short term, within which period the impacts of these technologies do not manifest in measurable (tangible) fashion. Even though there is a general recognition that there are "unintended consequences," the science behind this engineering has never been challenged. Often, these "unintended consequences" are incorrectly attributed to a lack of precision, particularly in placing the location of the DNA in the new chromosome site. The correct recognition would be that it is impossible to engineer the new location of the gene, and at the same time it is impossible to predict the consequences of the DNA transfer without knowing all possible
sites that the DNA will travel to throughout the time domain. Khan (2006) made this simple observation and contended that, unless the consequences are known for a time duration of infinity, an engineering practice cannot be considered sustainable. Similar, but not as bold, statements were previously made by Schubert (2005), who questioned the validity of our understanding of genetic engineering technology and recognized the unpredictability of the artificial gene. Zatzman and Islam (2007a) recognized that an "artificial" object, even though it comes to reality by its mere presence, behaves differently than the object it was supposedly emulating. This explains why vitamin C acts differently depending on its origin (e.g., organic or synthetic), and so does every other artificial product, including antibiotics (Chhetri et al. 2007; Chhetri and Islam 2007). Similar statements can be made about chemical fertilizers and pesticides that are used to boost crop yield, as well as hormones and other chemicals that are used on animals. Therefore, biodiesel derived from organic crops and biodiesel derived from genetically modified crops infested with chemical fertilizers and pesticides would have quite different outputs to the environment, thereby affecting the sustainability picture. Similarly, if the source contains beef tallow from a cow that was injected with hormones and fed artificial feeds, the resulting biodiesel will be harmful to the environment and could not be compared to petrodiesel that is derived from fossil fuel. Note that fossil fuel was derived from organic matter, with the exception that nature processed the organic matter to pack it with a very high energy content. If the first premise is that nature is sustainable, then fossil fuel offers much greater hope of sustainability than contemporary organic sources that are infested with chemicals that were not present even just 100 years ago.
16.2.3 The Current Process of Biodiesel Production
Recently, Chhetri and Islam (2008b) detailed the process involved in biodiesel production. Conventionally, biodiesel is produced either in a single-stage or a double-stage batch process or by a continuous-flow transesterification process. These are either acid-catalyzed or base-catalyzed processes. The acids generally used are sulfonic acid and sulfuric acid. These acids give very high yields of alkyl esters, but the reactions are slow, requiring temperatures above 100°C and more than three hours to complete the conversion (Schuchardt 1998). Alkali-catalyzed transesterification is much
faster than acid-catalyzed transesterification, and all commercial biodiesel producers prefer to use this process (Ma and Hanna 1999). The alkalis generally used in the process include NaOH, KOH, and sodium methoxide. For an alkali-catalyzed transesterification, the glycerides and alcohol must be anhydrous because water changes the reaction, causing saponification. The soap lowers the yield of esters and makes the separation of biodiesel and glycerin complicated. No catalysts are used in supercritical methanol methods, where a methanol and oil mixture is superheated to more than 350°C and the reaction completes in 3-5 minutes to form esters and glycerol. Saka and Kudsiana (2001) carried out a series of experiments to study the effects of the reaction temperature, pressure, and molar ratio of methanol to glycerides on methyl ester formation. Their results revealed that supercritical treatment at 350°C and 30 MPa for 240 seconds, with a methanol-to-oil molar ratio of 42, is the best condition for transesterification of rapeseed oil for biodiesel production. However, the use of methanol from fossil fuel still makes the biodiesel production process unsustainable and produces toxic by-products. Moreover, since this process uses high heat in place of catalysts, using electricity to heat the reactants increases the fossil fuel input to the system. Direct heating of waste cooking oil by solar energy using concentrators would help reduce fossil fuel consumption and make the process a sustainable option.

The major objectives of transesterification are to break down the long chains of fatty acid molecules into simple molecules and to reduce the viscosity considerably in order to increase the lubricity of the fuel. The transesterification process is the reaction of a triglyceride (fat/oil) with an alcohol, using a catalyst, to form esters and glycerol. A triglyceride has a glycerin molecule as its base with three long-chain fatty acids attached. During the transesterification process, the triglyceride is broken down with alcohol in the presence of a catalyst, usually a strong alkali like sodium hydroxide. The alcohol reacts with the fatty acids to form the mono-alkyl ester, or biodiesel, and crude glycerol. In most production, methanol or ethanol is the alcohol used, and the reaction is base-catalyzed by either potassium or sodium hydroxide. After the completion of the transesterification reaction, the glycerin and biodiesel are separated (gravity separation). The glycerin is either re-used as feedstock for methane production or refined and used in pharmaceutical products. A typical transesterification reaction is given in Figure 16.1.
CH2-OCOR1                        R1COOCH3     CH2OH
|                   catalyst
CH-OCOR2   + 3 CH3OH  ---->      R2COOCH3  +  CHOH
|
CH2-OCOR3                        R3COOCH3     CH2OH
(Triglyceride) (Methanol)       (Methyl esters) (Glycerol)

Figure 16.1 Typical equation for transesterification, where R1, R2, R3 are the different hydrocarbon chains.
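The 3:1 stoichiometry in Figure 16.1 fixes the minimum alcohol demand, while practical processes use an excess of methanol. A minimal sketch of the methanol requirement per kilogram of oil is given below; the triglyceride molar mass is an assumed typical value (not a figure from the text), and the 6:1 "typical alkali-catalyzed" ratio is a common rule of thumb rather than a number reported here:

```python
# Methanol requirement for transesterification (Figure 16.1 stoichiometry).
M_METHANOL = 32.04       # g/mol
M_TRIGLYCERIDE = 885.0   # g/mol, assumed typical vegetable-oil triglyceride

def methanol_per_kg_oil(molar_ratio: float) -> float:
    """Grams of methanol per kilogram of oil at a given methanol:oil molar ratio."""
    moles_oil = 1000.0 / M_TRIGLYCERIDE
    return moles_oil * molar_ratio * M_METHANOL

for label, ratio in [("stoichiometric (Figure 16.1)", 3),
                     ("alkali-catalyzed, typical excess", 6),
                     ("supercritical (Saka and Kudsiana 2001)", 42)]:
    print(f"{label}: {methanol_per_kg_oil(ratio):.0f} g methanol per kg oil")
```

The sketch makes plain why the supercritical route, for all its speed, multiplies the demand for fossil-derived methanol by more than an order of magnitude over the stoichiometric minimum.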
Interestingly, current biodiesel production uses fossil fuel at various stages, such as agriculture, crushing, transportation, and the conversion process itself (Carraretto et al. 2004). Figure 16.2 shows the share of energy use at different stages from farming to biodiesel production. Approximately 35% of the primary energy is consumed during the life cycle from farming to biodiesel production, and this energy comes essentially from fossil fuel. To make biodiesel completely green, this portion of energy also has to be derived from renewable sources. For energy conversion and crushing, direct solar energy can be used effectively, while renewable biofuels can be used for transportation and agriculture.

Figure 16.2 Share of energy at different stages of biodiesel production.

16.2.3.1 Pathways of Petrodiesel and Biodiesel
Figure 16.3 below shows the pathways, including additives, for the production of conventional petrodiesel and biodiesel. In the oil refineries, crude oil is subjected to highly toxic catalysts, chemicals, and excessive heat. Chemicals such as sulfuric acid (H2SO4), hydrofluoric acid (HF), aluminum chloride (AlCl3), and aluminum oxide (Al2O3), and
catalysts such as platinum, together with high heat, are applied in oil refining. These are all highly toxic chemicals and catalysts. The crude oil, which is originally non-toxic, yields a more toxic product after refining. Conventional biodiesel production follows a similar pathway. Methanol (CH3OH) made from natural gas, a highly toxic chemical that kills receptor nerves without even the feeling of pain, is used for the alcoholysis of vegetable oil. Sulfuric acid or hydroxides of potassium or sodium, which are highly toxic and caustic, are added to the natural vegetable oils.

Figure 16.3 Pathways of petrodiesel and biodiesel production. (Petroleum diesel pathway: crude oil + catalysts, chemicals, and high heat → oil refining → gasoline, diesel, polymers, wax, etc. Biodiesel pathway: oil/fat + catalysts, chemicals, and heat → transesterification → biodiesel and glycerol. Both pathways emit CO2, CO, benzene, acetaldehyde, toluene, formaldehyde, acrolein, PAHs, NOx, xylene, etc.)

16.2.3.2 Biodiesel Toxicity
The toxicity of biodiesel is measured by the fuel's toxicity to the human body and by the health and environmental impacts of its exhaust emissions. Tests conducted for acute oral toxicity of a pure biodiesel fuel and a 20% blend (B20) in a single-dose study on rats reported that the LD50 of pure biodiesel, as well as of B20, was greater than 5000 mg/kg (Chhetri et al. 2008). Hair loss was found on one of the test samples in the B20 group. The acute dermal toxicity of neat biodiesel tested for LD50 was greater than 2000 mg/kg. The United States Environmental Protection Agency (2002) studied the effects of biodiesel on gaseous toxics and listed 21 Mobile Source Air Toxics (MSATs) based on that study. MSATs are significant contributors to toxic emissions and are known or suspected to cause cancer or other serious health effects. Of the 21 MSATs listed, six are metals. Of the remaining 14 MSATs, emission measurements were performed for eleven components, namely acetaldehyde, acrolein, benzene, 1,3-butadiene, ethylbenzene, formaldehyde, n-hexane, naphthalene, styrene, toluene, and xylene. However, the trends in benzene, 1,3-butadiene, and styrene were inconsistent. It is obvious that biodiesel produced by the current methods is highly toxic because of the highly toxic chemicals and catalysts used. The current vegetable oil extraction method utilizes n-hexane, which also causes toxicity. This research proposes a new concept that aims at reducing the toxicity through the use of non-toxic chemicals and natural catalysts and through the extraction process itself. Toxicity can emerge from the source as well as from the process that is used to produce biodiesel.

The amount of catalyst had an impact on the conversion of esters during the transesterification process. Titration indicated that the optimum amount of catalyst for the particular waste cooking oil was 8 grams per liter of oil (Chhetri et al. 2008). They carried out the reaction using 4, 6, 8, 10, and
12 grams per liter of sodium hydroxide catalyst. With 4 grams per liter, no reaction was observed, as there was no separated layer of ester and glycerin. With concentrations of 6, 8, and 10 grams per liter, approximately 50%, 94%, and 40% ester yields, respectively, were obtained (Figure 16.4). It was observed that the production of ester decreased with the increase in sodium hydroxide. With 12 grams per liter of catalyst, complete soap formation was observed, because a higher amount of catalyst causes soap formation (Attanatho et al. 2004). The rise in soap formation made the ester dissolve into the glycerol layer. Triplicate samples were used, and the maximum standard deviation from the mean was found to be approximately 4%.
100 90 80
5" "O φ
'5. l/l
til
70 60 50 40 30 20 100 6 8 10 Catalyst concentration (gram/litre)
12
Figure 16.4 Conversion efficiency under different catalysts concentration (Chhetri et al. 2008).
718
THE GREENING OF PETROLEUM OPERATIONS
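To make the reported optimum concrete, the following minimal sketch (our own illustration, not part of the original study) tabulates the yields read from Figure 16.4 and selects the concentration with the highest ester yield:

```python
# Ester yield vs. NaOH catalyst concentration for waste cooking oil,
# as read from Figure 16.4 (Chhetri et al. 2008). Zero marks the runs with
# no ester/glycerin separation (4 g/L) or complete soap formation (12 g/L).
yields = {4: 0.0, 6: 50.0, 8: 94.0, 10: 40.0, 12: 0.0}  # g/L -> % ester yield

best = max(yields, key=yields.get)
print(f"Optimum catalyst concentration: {best} g/L ({yields[best]}% ester yield)")
# -> Optimum catalyst concentration: 8 g/L (94.0% ester yield)
```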
Triplicate samples were used, and the maximum standard deviation from the mean was approximately 4%. The transesterification reaction usually takes place with the alcohol and oil in two phases, taking more time and energy to complete. However, Boocock et al. (1996, 1998) added tetrahydrofuran as a co-solvent, which transformed the two oil-methanol phases into a one-phase system, helping the reaction occur at ordinary temperatures at a much faster rate. Because tetrahydrofuran is a toxic chemical, they suggested that it should be derived from biological sources. Toxicity can also come from sources with higher cetane numbers. The cetane number is an indicator of the ignition quality of a diesel fuel: the higher the cetane number, the better the ignition performance. Because of its higher oxygen content, biodiesel has a higher cetane number than petroleum diesel. Rapeseed oil, canola oil, linseed oil, sunflower oil, soybean oil, beef tallow, and lard are being used as feedstocks for biodiesel production (Peterson et al. 1997; Ma and Hanna 1999). Figure 16.5 shows the cetane numbers of biodiesel fuels derived from different feedstocks. The cetane index of the waste cooking oil in the experiment was found to be 61, and the cetane number does not differ greatly from the cetane index (cetane number = cetane index x 1.5 + 2.6) (Issariyakul et al. 2007). Hilber et al. (2006) reported the cetane numbers of rapeseed oil, soybean oil, palm oil, lard, and beef tallow biodiesels to be 58, 53, 65, 65, and 75, respectively. Among these feedstocks, beef tallow has the highest cetane number, indicating higher engine performance than the other fuels and resulting in lower pollutant emissions. Beef tallow has a higher content of saturated fatty acids, and an increase in saturated fatty acid content raises the cetane number of biodiesel; the oxidative stability of biodiesel fuels also increases with higher amounts of saturated fatty acids. The drawback of a higher saturated fatty acid content, however, is that the cold filter plugging point occurs at a higher temperature.
Figure 16.5 Cetane numbers of biodiesel derived from different feedstocks.
Kemp (2006) reported the distribution of biodiesel production costs shown in Figure 16.6. Oil feedstock is the major cost of biodiesel production, accounting for over 70% of the total; hence, if waste vegetable oil is used as the feedstock, the economics of biodiesel can be significantly improved, and the use of waste oil also reduces waste treatment costs. Over 12% of the biodiesel cost is chemicals that are inherently toxic to the environment. For each gallon of diesel, this chemical cost is higher for biodiesel than for petrodiesel, and because most of the chemicals are derived from fossil fuels, the cost burden is more acute for the biodiesel industry. In terms of toxicity, these chemicals impart the same degree of toxicity (if not more) as petrodiesel, even when the petrodiesel is produced by conventional means.

Figure 16.6 Distribution of biodiesel production costs, % (Kemp 2006).

In addition, it is actually easier to produce petrodiesel than biodiesel because
the crude oil is much more amenable to refining than conventional biodiesel sources. The disparity increases if alternative sources are used, such as waste oil, non-edible oil, etc.
16.3 "Clean" Nuclear Energy
16.3.1 Energy Demand in Emerging Economies and Nuclear Power

The increasing global energy demand will put great pressure on fossil fuel resources. To meet this challenging demand, India and China have been attracted to building nuclear power plants. A recent agreement between India and the U.S. to develop nuclear power for civil purposes has opened up an opportunity for India to become a nuclear-power-intensive country in the region (BBC 2006). India already has several nuclear power facilities producing 2,550 MWe, with 3,622 MWe under construction. China has also developed nuclear energy for power generation, with 5,977 MWe installed as of December 31, 2003, and 9 GWe attributed to nuclear energy in 11 nuclear power plants by the end of 2007 (WNA 2010). Additional reactors are planned, including some of the world's most advanced, to give a sixfold increase in nuclear capacity to at least 60 GWe, or possibly more, by 2020, and then a further substantial increase to 160 GWe by 2030.
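As a rough check on what this expansion implies, the sketch below (our own arithmetic, not from the cited sources) computes the compound annual growth rates behind the quoted Chinese capacity targets:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growing from start to end."""
    return (end / start) ** (1.0 / years) - 1.0

# China: 9 GWe (end of 2007) -> 60 GWe (2020) -> 160 GWe (2030)
print(f"2007-2020: {cagr(9, 60, 13):.1%} per year")    # ~15.7% per year
print(f"2020-2030: {cagr(60, 160, 10):.1%} per year")  # ~10.3% per year
```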
16.3.2 Nuclear Research Reactors

Nuclear energy technologies have been a significant research focus over the last 50 years. According to the IAEA (2004), there were 672 research reactors worldwide as of June 2004, all aimed at investigating different aspects of nuclear energy. The research reactors have been used for various purposes, including basic nuclear science, material development, and radioisotope management and application in other fields such as medicine, the food industry, and training. Figure 16.7 shows the total number of research reactors commissioned and shut down as of June 2004 (IAEA 2004). Of the 672 research reactors, 274 are operational in 56 countries, 214 are shut down, 168 have been decommissioned, and 16 are planned or under construction.

Figure 16.7 Number of research reactors shut down and commissioned (IAEA 2004).
Figure 16.7 shows a decline in the number of operational research reactors and a corresponding increase in the number shut down, reflecting a fairly stable research regime in this technology; the few research reactors still being built show that research opportunities remain. Figure 16.8 shows that, altogether, 274 research reactors are operational in IAEA member states. Of these reactors, about 70% are in industrialized countries, with the Russian Federation and the United States having the largest numbers.

Figure 16.8 Operational research reactors in IAEA Member States, 273 reactors (IAEA 2004).

Most of this research has been carried out on fission. Despite the significant amount of research on fusion, there has been no positive outcome. Because of the severe environmental consequences of fission, fusion has been considered comparatively less environmentally hazardous. However, scientists
are struggling to find ways to carry out fusion reactions at lower temperatures because, currently, fusion needs huge amounts of energy input, keeping this technology far from economic realization.
16.3.3 Global Estimated Uranium Resources

The total amount of conventional uranium that can be mined economically is estimated at approximately 4.7 million tons (IAEA 2005), considered sufficient for the next 85 years at the 2004 rate of nuclear electricity generation. However, if conventional technology were converted to fast reactor technology, the current resources would be enough for hundreds of years (Sokolov 2006). It has further been reported that, if the uranium in phosphates is considered, total uranium reserves reach up to 35 million tons. The world's nuclear energy capacity is expected to increase from the present 370 GWe to somewhere between 450 GWe (+22%) and 530 GWe (+44%), and to supply the increased requirement of uranium feedstock, the annual uranium requirement will rise to about 80,000 to 100,000 tons (Sokolov 2006). The common belief is that nuclear energy sources would outlast fossil fuel resources. With the currently used sustainability criteria, which ignore the scientific features of natural resources, this argument appears valid. However, with the scientific argument put forward in recent years (see previous chapters), it becomes clear that there is an inherent continuity in natural resources and no risk of running out of them. At the same time, it is preposterous to suggest that nuclear technology, which has such a low global efficiency and such negative long-term impacts on the environment, is sustainable. Figure 16.9 shows the approximate global uranium reserves. As of today, Australia has the highest reserves (24%) in the world, followed by Kazakhstan (17%) and Canada (9%). Australia is the largest exporter of uranium oxide in the world, averaging almost 10,000 t/year, about 22% of the world's uranium supply (NEA 2005; IAEA 2005), and it holds 38% of the world's lowest-cost uranium resources (under US$40/kg).
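The supply horizons quoted above follow from simple arithmetic; the sketch below (our own illustration, using only the chapter's numbers) back-calculates the implied consumption rate and extends it to the other estimates:

```python
conventional_reserves = 4.7e6   # tonnes of economically minable uranium (IAEA 2005)
years_at_2004_rate = 85         # supply horizon quoted at the 2004 generation rate

annual_use = conventional_reserves / years_at_2004_rate  # ~55,000 t/yr implied
phosphate_inclusive = 35e6      # tonnes if uranium in phosphates is included

print(f"Implied 2004 consumption: {annual_use:,.0f} t/yr")
print(f"Horizon with phosphates at the same rate: "
      f"{phosphate_inclusive / annual_use:,.0f} years")

# At the projected 80,000-100,000 t/yr demand, the conventional horizon shrinks:
for demand in (80_000, 100_000):
    print(f"  at {demand:,} t/yr: {conventional_reserves / demand:.0f} years")
```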
Figure 16.9 Global uranium reserves (NEA 2005; IAEA 2005).

16.3.4 Nuclear Reactor Technologies

Current nuclear power plants are nuclear fission reactors. Approximately 78% of current reactors are cooled by light water
and, hence, are called light water reactors (LWRs) (Boyle et al. 2003). Light water is used as both moderator and coolant in these reactors, and the heat removed is used to produce steam for turning the turbines of the electric generators. Light water reactors can be used only when the uranium fuel is enriched to about 3-5%, using gaseous UF6 in diffusion or centrifugation processes. Light water reactors are of two types: pressurized water reactors (PWRs) and boiling water reactors (BWRs). A pressurized water reactor can work at higher temperatures and pressures, increasing its efficiency; another feature of the PWR is that steam is produced not in the primary loop but in a secondary loop, which drives the turbine. However, this reactor is more complex than other LWRs. In a boiling water reactor, the same water loop serves as moderator, core coolant, and source for the steam turbine, so any leak could make the water radioactive throughout the turbines and the whole loop. RBMK reactors, which are large tube-type reactors, use graphite as the neutron moderator and water as the coolant; the Chernobyl reactor was an RBMK-type large tube reactor, and there is pressure to shut down such reactors in the future. Canada deuterium uranium (CANDU) reactors are heavy-water-cooled technologies that use pressurized heavy water as a coolant and low-pressure, cooled, liquid heavy water as a neutron moderator. The specialty of the CANDU reactor is that it uses naturally occurring uranium without enrichment, which avoids large enrichment costs. However, it uses heavy water
containing 2H (deuterium) for cooling, instead of ordinary 1H water. The production of heavy water for these reactors is another dimension that makes them unsustainable and expensive, because only about one hydrogen atom in 6,600 occurs naturally as deuterium. Gases such as CO2 or helium are also used as coolants in some reactors, called gas-cooled reactors; pebble bed and prismatic designs are popular among them. Despite the variety of modern reactors, all of them run either on enriched fuel or with synthetically produced heavy water. Both enrichment and the synthetic production of heavy water are unsustainable processes, and neither is beneficial to the environment. Thus, the production of nuclear energy using these technologies has several health and environmental implications.
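The enrichment step obeys a simple U-235 mass balance: feed F at assay x_f splits into product P at assay x_p and tails T at assay x_t, with F = P + T and F*x_f = P*x_p + T*x_t. The sketch below uses this standard relation; the 0.25% tails assay is our assumption, not a figure from the text:

```python
def feed_per_product(x_p, x_f=0.00711, x_t=0.0025):
    """kg of natural uranium feed per kg of enriched product.

    From the two-stream mass balance F = P + T and F*x_f = P*x_p + T*x_t,
    F/P = (x_p - x_t) / (x_f - x_t). Here x_f = 0.711% is the natural
    U-235 abundance and x_t = 0.25% is an assumed (typical) tails assay.
    """
    return (x_p - x_t) / (x_f - x_t)

for assay in (0.03, 0.05):  # the 3-5% enrichment range quoted for LWR fuel
    print(f"{assay:.0%} fuel needs ~{feed_per_product(assay):.1f} kg "
          f"of natural uranium per kg of product")
# -> 3% fuel needs ~6.0 kg; 5% fuel needs ~10.3 kg
```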
16.3.5 Sustainability of Nuclear Energy
The sustainability of any technology is evaluated based on its positive and negative impacts on humans and society. In the case of nuclear energy, sustainable development implies assessing its economic, environmental, and social impacts, considering both the ways nuclear energy can contribute to sustainable development and the ways it can create problems for it. The availability of the natural resources needed as inputs is one indicator of whether a technology is sustainable. The intensity of energy use and material flow over the project life cycle, including environmental emissions such as carbon dioxide, is also to be considered, as is the impact of the technology on public health, the environment, land use, and the natural habitat of living beings, including the potential for major and irreversible environmental damage. On these bases, the sustainability of nuclear energy is evaluated below.

16.3.5.1 Environmental Sustainability of Nuclear Energy
The impacts and pathway analysis of radioactive releases from the nuclear energy chain are shown in Figure 16.10 (NEA-OECD 2003).

Figure 16.10 Radioactive release from the nuclear energy chain (redrawn from NEA-OECD 2003). Gaseous, liquid, and solid releases pass through atmospheric or sea/river dispersion and deposition into air, water, soil, and agricultural products, and then through inhalation, external exposure, and ingestion (of agricultural products, fish, or seafood) to human health and its monetary valuation.

The impacts of the nuclear energy chain are both radiological and
non-radiological, and can be caused by both routine and accidental releases of radioactive wastes into the natural environment. The sources of these impacts are releases of materials through atmospheric, liquid, and solid waste pathways. Gaseous releases reach the atmosphere directly, and the air becomes contaminated; the contaminated air is either inhaled by humans or deposited into the soil. Once the soil is contaminated by nuclear waste, surface and ground water bodies are affected, and the contamination is eventually ingested through agricultural products or seafood. In this way, nuclear wastes increase the cost to human health, the actual cost continuously increasing with time. Nuclear power generates spent fuel of roughly the same mass and volume as the fuel the reactor takes in, because there is
only fission, not oxidation. Yet there are different types of waste produced by the overall nuclear power system: primarily solid waste and spent fuel, some process chemicals, steam, and heated cooling water. Non-nuclear counterparts of nuclear power, such as fossil fuels, also produce various solid, liquid, and gaseous wastes on a pound-per-pound basis; nevertheless, the potential environmental cost of the waste produced by a nuclear plant is much higher than the environmental cost of most wastes from fossil fuel plants (EIA 2007).

16.3.5.2 Nuclear Radiation Hazard
Figures 16.11-16.13 show the decay series of the radionuclides uranium-238, thorium-232, and uranium-235, including the mode of radioactive decay. In these figures, the symbols α and β indicate alpha and beta decay, the times shown are half-lives, and an asterisk marks isotopes that are also significant gamma emitters; uranium-238 also decays by spontaneous fission.

Figure 16.11 Natural Decay Series: Uranium-238 (Argonne National Laboratory 2005).

Figure 16.12 Natural Decay Series: Uranium-235 (Argonne National Laboratory 2005).
Decay occurs because the unstable isotope emits alpha or beta subatomic particles until it becomes stable. The decay chain also emits gamma radiation; unlike the emission of alpha and beta particles, gamma radiation is the emission of excess energy from the nucleus of an atom (Argonne National Laboratory 2007). Gamma radiation may cause significant health hazards for all living beings. Alpha particles are the least penetrating compared to beta and gamma radiation; because beta and gamma radiation are more penetrating, they are more hazardous to human health.
Figure 16.13 Natural Decay Series: Thorium-232 (after Argonne National Laboratory 2007).
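The half-lives shown in these decay series translate directly into surviving fractions through N(t)/N0 = 0.5^(t/T), where T is the half-life. A minimal sketch (our own illustration, using half-lives from the figures):

```python
def surviving_fraction(t_years, half_life_years):
    """Fraction of a radionuclide remaining after t_years of decay."""
    return 0.5 ** (t_years / half_life_years)

# Half-lives taken from the decay series in Figures 16.11 and 16.13.
for name, half_life in [("Radium-226", 1_600),
                        ("Uranium-238", 4.5e9),
                        ("Thorium-232", 14e9)]:
    frac = surviving_fraction(100_000, half_life)
    print(f"{name}: {frac:.3g} remaining after 100,000 years")
# Ra-226 is essentially gone (~1e-19), while U-238 and Th-232 are practically
# undiminished; long-lived parents keep regenerating the short-lived,
# intensely radioactive daughters.
```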
16.3.5.3 Nuclear Wastes
Figure 16.14 shows the cumulative worldwide spent fuel reprocessed and stored in the past and projected to 2020.

Figure 16.14 Cumulative worldwide spent fuel reprocessing and storage, 1990-2020 (IAEA 2004).

The amount in storage is increasing because of the limited reprocessing facilities available and the delays in disposal due to the unavailability of waste repositories. According to the IAEA (2004), there has been some progress in developing repositories in Finland, Sweden, and the U.S. for disposing of high-level wastes. The repository in Finland is expected to be in operating condition by 2020 if all legal, licensing, and construction operations go as planned by the Finnish government. The U.S. has decided to develop a spent fuel repository at the Yucca Mountain disposal
site, ready to be operated by 2020. Recently, interest has been growing in developing international or regional, rather than national and local, repositories (AP 2006).

Kessler (2000) noted that the IEA proposed an increase in nuclear power (32 new nuclear plants annually for the next forty years) as a substitute for coal-fired power and as one means of reducing carbon dioxide emissions. However, the challenge of waste generation is yet to be resolved, so the production of power by nuclear fission may not be a good choice. Based on the present study, each 1,000 MW reactor would produce 30 tons of waste annually, which would need to be sequestered for at least 100 thousand years, whereas the life of a nuclear reactor is assumed to be 50 years. Moreover, it has also been argued that pursuing nuclear energy as an energy solution would be a dangerous move.
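To put these figures in perspective, the following sketch (our own arithmetic, built only on the numbers quoted above) accumulates the waste implied by the proposed expansion:

```python
waste_per_reactor_per_year = 30  # tonnes, per 1,000 MW reactor (text)
reactor_life_years = 50          # assumed reactor life (text)
new_reactors_per_year = 32       # IEA proposal cited by Kessler (2000)
build_years = 40                 # duration of the proposed build-out

per_reactor_lifetime = waste_per_reactor_per_year * reactor_life_years
fleet = new_reactors_per_year * build_years
total_waste = per_reactor_lifetime * fleet

print(f"Lifetime waste per reactor: {per_reactor_lifetime:,} t")
print(f"Reactors built: {fleet:,}")
print(f"Total high-level waste: {total_waste:,} t "
      f"(to be sequestered for ~100,000 years)")
# -> 1,500 t per reactor x 1,280 reactors = 1,920,000 t
```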
The sustainability of any system is evaluated based on the total input into the system and the intended output (Figure 16.15). Every system has by-products, which are either beneficial and reusable, or harmful and in need of huge investments for treatment before they can be safely discharged. For a system to be sustainable, the difference between the input and output environmental resources before and after the system is operated should leave the environment at least as well off as before the system was implemented, i.e., dC/dt ≥ 0 for the stock of environmental capital C. In the case of nuclear energy, however, several by-products that are highly problematic to human health and the natural environment make the total benefit less than the cost of the system's implementation, so that dC/dt < 0. This system therefore does not fulfill the sustainability criterion developed by Khan et al. (2007).

Figure 16.15 Input and output in a nuclear energy system. Input: uranium; output: energy; by-products: radiation hazard, hazardous waste/spent fuel, health hazard, and mutation of plants.
Hence, nuclear energy technology is not acceptable due to its environmental problems.

16.3.5.4 Social Sustainability of Nuclear Energy
Nuclear energy is one of the most debated energy sources because of its environmental concerns. The potential contribution of even the most novel or sophisticated nuclear technique is inevitably compared with and judged against its non-nuclear competitors; however, the health and environmental concerns of nuclear energy are not comparable with those of any other energy source. Cost, reliability, safety, simplicity, and sustainability remain important elements in the decisions of governments, private companies, universities, and citizens or consumers. Despite the previously held notion that nuclear technology is a miracle technology, several nuclear accidents and strategic explosions have proven that nuclear technology can never be a socially responsible technology. The atomic explosions over Hiroshima and Nagasaki, Japan, in 1945 left hundreds of thousands of people dead, and many people are still affected by the released radioactivity even today. The radiation contaminated everything it came into contact with, and people are still paying the price of the 1945 bombings. The Chernobyl accident, one of the most dangerous, occurred due to the loss of coolant in the system (Ragheb 2007). According to Wareham (2006), the major radioisotopes present in a nuclear reactor core that have biological significance include caesium-137 (which mimics potassium and is deposited in muscles), strontium-90 (which mimics calcium and is, therefore,
deposited mostly in bone), plutonium, and iodine-131. Iodine contamination contributes to the acceleration of thyroid cancers after the iodine is ingested through the air or through radiation-contaminated milk. The health effects of the Chernobyl accident were short-term (deterministic) and long-term (stochastic). Of the 499 people admitted for observation right after the accident, 237 suffered from acute radiation effects (Metivier 2007); of these, 28 died immediately. Ulcers, leukemia, and thyroid tumors were identified in some of them. The long-term effects were different types of cancers, including thyroid cancer in children. The soil, water, vegetables, and other foods, including milk, were contaminated, with long-term implications for health systems. Several patients reportedly developed mental and physiological problems after the accident. In the 12 years after the Chernobyl accident, thyroid carcinoma increased by 4,057 cases in Belarus compared with the same period before (Metivier 2007). Some studies have linked the accident to several types of cancers, including leukemia in children, in Germany, Sweden, Greece, and other European countries, and many more incidences are still expected from the Chernobyl nuclear accident. Non-radiological symptoms such as headaches, depression, sleep disturbance, inability to concentrate, and emotional imbalance have also been reported and are considered related to the difficult conditions and stressful events that followed the accident (Lee 1996). The radioactive effects of nuclear energy are compelling not only for the loss of billions of dollars but also for the irreversible damage to living beings on Earth. The emission of radioactivity has also reportedly affected the DNA of living cells in several people. The problems start right from uranium mining, where chemicals such as sulfuric acid are used to leach the uranium from the rocks. People are exposed through mining, milling, enrichment, and fuel fabrication, and through the hazardous waste generated after energy production. Nuclear energy has created severe fear among citizens globally and has also created conflicts among countries. Natural uranium as an input yields energy, hazardous radiation, and hazardous waste as outputs of the system. The benefits of the output are negative if we consider all the environmental and health impacts, compared with other energy sources in which similar amounts of
energy can be generated without such adverse environmental and health impacts. Hence, the change in social capital (ds) with the introduction of nuclear energy over time (dt) is negative (ds/dt < 0). Thus, nuclear technology is socially unsustainable and, hence, unacceptable.

16.3.5.5 Economic Sustainability of Nuclear Energy
Even though the economics of any energy technology is a function of local conditions and the way energy is processed, the economics of nuclear energy is one of the most contentious issues globally. Tester et al. (2005) reported that the cost of electricity production from light water reactors (LWRs) is typically 57% capital, 30% operation, and 13% maintenance and fuel costs. Because nuclear power plants have very high capital costs, any factors that affect capital costs, such as inflation, interest rates, gestation period, and power plant efficiency, will affect the overall economics of nuclear energy. The cost of existing nuclear power plants is competitive with that of conventional coal- or gas-fueled power plants, but not with the most recently developed gas-fired combined-cycle power plants; hence, nuclear energy is in general not competitive with conventional energy sources. Despite several attempts at improvement, including modularized designs, automation, and optimization in construction, the capital costs of nuclear plants have substantially increased the overall cost of the nuclear power system. Long gestation periods have been one of the major causes making nuclear energy the most expensive; according to EIA (2001), more than half of the nuclear power plants ordered were never completed. Tester et al. (2005) cited a Seabrook, New Hampshire nuclear power station whose capital cost reached $2,200/kW even though it was estimated at $520/kW. In addition, changes in regulatory requirements, their uncertainties, and high interest rates have been other sensitive factors affecting the energy cost, and delays in the licensing process due to nuclear opponents are also a measurable factor making nuclear energy more expensive than anticipated. Many proponents of nuclear energy consider it one of the cheapest forms of energy; however, there are debates over the cost of electricity produced from nuclear systems. Comparing the cost
of electricity produced from other energy sources, such as coal, natural gas, and wind, gives a fairly good idea of how the economics of nuclear energy works. Table 16.1 compares the cost per kilowatt-hour of electricity generation from different energy sources; it is clear that nuclear energy cannot easily compete with its non-nuclear counterparts. Nivola (2007) cited a study carried out at the Massachusetts Institute of Technology (MIT), which estimated the cost of nuclear energy, incorporating the costs of constructing, licensing, and operating a newly commissioned light water reactor, compared with coal or natural gas. The estimate for electricity from nuclear power was 6.7 cents per kilowatt-hour, far higher than that of a pulverized coal-fired power plant (4.2 cents/kWh) or a combined-cycle natural gas power plant (5.6 cents/kWh at a gas price of $6.72 per thousand cubic feet). Moreover, the gestation period for a nuclear power plant is exceptionally long, which contributes to a higher cost, as do the regulatory requirements nuclear plants must fulfill for health, safety, social, and environmental reasons. Bertel and Morrison (2001) reported the average cost of electricity generation from nuclear systems in different parts of the world (Table 16.2). It is obvious that nuclear power is not the cheapest energy option: because it has to compete with oil and natural gas, and the price of natural gas has been fairly stable recently, it emerges as one of the more expensive forms of electricity. Moreover, nuclear energy should also be compared with the price of new renewable energy sources such as solar, wind, and combined-cycle biomass systems.

Table 16.1 Cost in US$ per kWh for different types of energy generation.
Generating method   Generating cost (US$/kWh)   Reference
Coal                0.01-0.04                   Service 2005
Nuclear             0.037                       Uranium Information Center 2006
Natural gas         0.025-0.05                  Service 2005
Wind                0.05-0.07                   Service 2001
Table 16.2 Nuclear electricity generating costs (Bertel and Morrison 2001).

Country             Discount Rate %   Investment %   O&M %   Fuel %   Total Cost (US cents/kWh)
Canada              10                79             15      6        4.0
Finland             10                73             14      13       5.6
France              10                70             14      16       4.9
Japan               10                60             21      19       8
Republic of Korea   10                71             20      9        4.8
Spain               10                70             13      17       6.4
Turkey              10                75             17      9        5.2
United States       10                68             19      13       4.6
However, such a comparison is hardly possible, because the options have very different impacts on the environment, with nuclear energy being the most devastating form. Table 16.3 indicates the price structure of nuclear energy at various stages compared with other energy sources such as coal and natural gas. Even though the per-kilowatt-hour cost for nuclear energy is reported to be lower than for coal and natural gas, this analysis does not take into account the cost of waste management, in which nuclear energy performs worse than the other sources. Hence, if the full life cycle is considered in the economic evaluation, including waste disposal, social, and environmental costs, nuclear energy will have the highest per-kilowatt-hour cost among the energy sources compared. Nuclear power plants also have one of the lowest global efficiencies, meaning that, considering the whole life cycle of the nuclear energy chain from exploration to electricity production, the overall process efficiency is low. Moreover, large amounts of fossil fuel are used during exploration, mining, milling, fuel processing, enrichment, etc., so nuclear energy sources contribute greenhouse gas emissions. The cost necessary to offset these greenhouse gas emissions has never been taken into consideration, which exaggerates the claimed benefits of nuclear energy over its non-nuclear counterparts.
Table 16.3 Price and structure of price of the main sources of power in Europe in 2005 (Fiore 2006).

Cost structure          Nuclear (%)   Coal (%)   Gas (%)
Investment              54            39         20
Exploitation            20            15         11
Fuel                    25            46         69
R&D                     1             -          -
Average price (€/MWh)   28.4          33.7       35
Nuclear energy also carries future liabilities, such as decommissioning, the storage of spent fuel, and the disposal of tools and equipment used in the plants, as well as clothes, gloves, and other contaminated materials that cannot be disposed of in ordinary sewage systems. These issues should be considered during the inception of a nuclear power project so as to make sure these burdens are not passed on to future generations; however, this is possible only in theory. Given the characteristics of nuclear wastes, which have half-lives of millions and billions of years, it is impossible to complete the necessary arrangements for nuclear waste management within a short time, and the short-term economic benefit cannot be justified because the impacts remain, in the form of nuclear radiation, for billions of years. The proliferation of dangerous materials from exploration, mining, fuel processing, and waste disposal, and its relation to energy security, are also determining factors in the long-term cost. The safe disposal of waste is not merely a technical problem; it is a complex environmental, social, and political problem. Its resolution requires clear-cut technical, economic, and socio-environmental responsibility towards society, along with amicable alternatives and justifiable reasons to avoid possible conflicts in the areas where nuclear plants are planned, which clearly involves a huge cost that needs to be taken into account during project evaluation. Nuclear energy is considered to have the highest external costs, such as health and environmental costs, due to electricity
production and use. The impacts created by radiation hazards are rarely curable, and irreversible damage is caused to plant and human metabolisms. An economic evaluation of all radiation hazards to health and the environment would make nuclear power unattractive. (See Chapter 5, Section 5.15 for further details.)
16.3.6 Global Efficiency of Nuclear Energy

Milling uranium consists of grinding the ore to uniform particles, which yields a dry, powdered material consisting of natural uranium as U3O8; milling has an efficiency (η1) of up to 90% (Mudd 2000). The local efficiency of nuclear energy conversion has been reported to reach up to 50% (Ion 1997). However, considering the life cycle from extraction to electricity conversion and end use, the global efficiency of the system is among the lowest because of the expensive leaching process during mining and the series of gaseous diffusion or centrifugation steps required for uranium enrichment. Conventional mining has an efficiency (η2) of about 80% (Gupta and Mukherjee 1990). In the milling process, a chemical plant that usually uses sulfuric acid for leaching, about 90% of the uranium is extracted (η3) (Mudd 2000). There is also a significant loss in the conversion of uranium to UF6, the efficiency of which (η4) is approximately 70%, and enrichment efficiency (η5) is less than 20%. Considering 50% thermal-to-net-electric conversion (η6) and 90% efficiency in transmission and distribution (η7), the global efficiency (ηg) of the nuclear processing chain (η1 × η2 × η3 × η4 × η5 × η6 × η7) is estimated to be 4.95%. Because global efficiency also considers environmental and social aspects, taking into account the environmental impact of radioactive hazards and the cost of the overall system makes the global efficiency even lower. Moreover, turning wastes into valuable products would, in principle, increase the overall efficiency of the system; however, there is no way that nuclear waste can be reused. Instead, it has ever more devastating impacts on humans and the environment, so nuclear energy has a negative efficiency for waste management.
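The chained figure is simply the product of the stage efficiencies. A minimal sketch using the stage values listed above (note that with η5 taken as exactly 20% the product works out to about 4.1%, so the quoted 4.95% evidently rests on slightly different stage values):

```python
from math import prod

# Stage efficiencies for the nuclear energy chain, as listed in the text.
stages = {
    "milling (eta1)":                   0.90,
    "mining (eta2)":                    0.80,
    "extraction/leaching (eta3)":       0.90,
    "conversion to UF6 (eta4)":         0.70,
    "enrichment (eta5)":                0.20,  # "less than 20%" in the text
    "thermal-to-electric (eta6)":       0.50,
    "transmission/distribution (eta7)": 0.90,
}

global_efficiency = prod(stages.values())
print(f"Global efficiency: {global_efficiency:.1%}")  # ~4.1% with these values
```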
16.3.7 Energy from Nuclear Fusion

Nuclear fusion power is generated by the fusion of two atomic nuclei: two light nuclei fuse together to form a
heavier nucleus and release energy. The sun is a perfect example of fusion reactions occurring at every moment in the universe. Fusion can occur only at very high temperatures, where matter is in a plasma state: the electrons separate from the nuclei, creating a cloud of charged particles, or ions, with equal amounts of positively charged nuclei and negatively charged electrons. Hydrogen fusion can produce a million times more energy than burning hydrogen with oxygen, and fusion would present fewer radioactive hazards than fission processes. The chief problem with fusion is the amount of heat necessary for the reaction to take place; hence, nuclear fusion is still not a dependable energy source for the foreseeable future (Tornquist 1997). It is generally considered that fusion needs external energy to trigger the reaction and may not be feasible until a breakthrough technology is developed. Some scientists have claimed to have achieved cold fusion at room temperature on a small scale (Kruglinski 2006), but others think this could not be replicated; Goodstein (1994, 2000) discussed an earlier, similar claim, which was likewise not confirmed and could not be replicated (Merriman and Burchard 1996). Unlike fission reactors, fusion reactors are considered less problematic for the environment and more effective at minimizing radioactive wastes, through a waste management strategy that includes the maximum possible recycling of materials within the nuclear industry and the classification of radioactive and non-radioactive materials (Zucchetti 2005). Fusion reactors have been promoted as a zero-waste option, with radioactive materials recycled and non-radioactive materials disposed of. However, it is impossible to have only non-radioactive materials in the reaction. Islam and Chhetri (2008) reported that recycling plastic can result in more environmental hazards due to the emission of new by-products such as bisphenol-A; similarly, not all radioactive materials are recyclable, and several activities related to the nuclear industry, including metallurgical activities, are linked to negative environmental impacts and large amounts of energy consumption. Nevertheless, if scientifically proven, fusion at room temperature would cause a major shift in current nuclear technology.
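The "million times" figure can be checked from first principles: a deuterium-tritium fusion event releases 17.6 MeV per 5 amu of reactants, while burning hydrogen releases about 142 MJ per kg. A minimal sketch (standard physical constants; our own verification, not from the original text):

```python
MEV_TO_J = 1.602e-13   # joules per MeV
AMU_TO_KG = 1.661e-27  # kilograms per atomic mass unit

# D-T fusion: 17.6 MeV released per reaction, reactant mass 5 amu.
fusion_j_per_kg = 17.6 * MEV_TO_J / (5 * AMU_TO_KG)  # ~3.4e14 J/kg

# Chemical combustion of hydrogen: ~142 MJ/kg (higher heating value).
combustion_j_per_kg = 1.42e8

print(f"Fusion:     {fusion_j_per_kg:.2e} J/kg")
print(f"Combustion: {combustion_j_per_kg:.2e} J/kg")
print(f"Ratio:      {fusion_j_per_kg / combustion_j_per_kg:.1e}")  # ~2e6
```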
17 Greening of Petroleum Operations

17.1 Introduction
The evolution of human civilization is synonymous with how it meets its energy needs. Few would dispute that the human race has become progressively more civilized with time. Yet, for the first time in human history, an energy crisis has seized the entire globe, and the sustainability of this civilization has suddenly come into question. If there is any truth to the claim that humanity has actually progressed as a species, it must exhibit, as part of its basis, some evidence that the overall efficiency of energy consumption has improved. This would mean that less energy is required per capita to sustain life today than, say, 50 years earlier. Unfortunately, exactly the opposite has happened. We used to know that resources are infinite and human needs are finite; after all, it takes relatively little to sustain an individual human life. Things have changed, however, and today we are told repeatedly that resources are finite and human needs are infinite. What's going on? Some Nobel Laureates (e.g., Robert Curl) and environmental activists (e.g., David Suzuki) have blamed the entire technology
development regime. Others have blamed petroleum operations and the petroleum industry. Of course, in the teeth of increasing oil prices, the petroleum sector becomes a particularly easy target. Numerous alternative fuel projects have been launched, but what do they propose? They propose the same inefficient and contaminated processes that got us into trouble with fossil fuels in the first place. Albert Einstein famously stated, "The thinking that got you into the problem is not going to get you out." As Enron, once touted as the 'most creative energy management' organization, collapsed, everyone seemed more occupied with trying to recoup by using the same (mis)management scheme that led to its demise. In this chapter, the mysteries of the current energy crisis are unraveled, and the root causes of unsustainability in all aspects of petroleum operations are discussed. It is shown how each practice follows a pathway that is inherently implosive, and it is further demonstrated that each pathway leads to irreversible damage to the ecosystem and can explain the current state of the earth. It is shown that fossil fuel consumption is not the culprit; rather, the practices involved from exploration all the way to refining and processing are responsible for the current damage to the environment. The discussion is based on two recently developed theories, namely, the theory of inherent sustainability and the theory of knowledge-based characterization of energy sources. These theories explain why current practices are inherently inefficient and why new proposals to salvage efficiencies have no better chance of remedying the situation. It is recognized that critiquing current practices may be necessary, but it is not sufficient. The second part of the chapter deals with practices that are based on the long term. This can be characterized as the approach of obliquity, which is well known for curing both long-term and short-term problems; it stands 180° opposite to the conventional band-aid approach that has prevailed in the Enron-infested decades. This chapter promises the greening of every practice of the petroleum industry, from management style to upstream to downstream. In the past, petroleum engineers focused only on drilling, production and transportation, and reservoir engineering; this chapter also deals with management and continues through exploration all the way up to refining and gas processing. In the past, exploration meant increasing the chance of production, and many such undertakings ended up costing humanity much more in the form of environmental damages, which, as this chapter points out, were never part of the management equation. The main objective of this chapter is to lay out the framework of
sustainability in the petroleum sector pertaining to environmental, technological, and management aspects. In addition, a framework of truly "sustainable management" for practically all aspects of oil and gas operations will be provided. With the greening of petroleum operations, global warming can be reversed without compromising healthy lifestyles (Chhetri and Islam 2007a). Finally, this chapter shows the true merit of the long-term approach, which promotes sustainable techniques that are socially responsible, economically attractive, and environmentally appealing.
17.2 Issues in Petroleum Operations

Petroleum hydrocarbons are considered the backbone of the modern economy. The petroleum industry, which took off in the golden era of the 1930s, has never ceased to dominate all aspects of our society. Until now, there have been no suitable alternatives to fossil fuels, and all trends indicate a continued dominance of the petroleum industry in the foreseeable future (Service 2005). Even though petroleum operations have been based on solid scientific excellence and engineering marvels, only recently has it been discovered that many of the practices are not environmentally sustainable. Practically all activities of hydrocarbon operations are accompanied by undesirable discharges of liquid, solid, and gaseous wastes (Khan and Islam 2007), which have enormous impacts on the environment (Khan and Islam 2003a, 2006b; Chhetri et al. 2007). Hence, reducing environmental impact is the most pressing issue today, and many environmentalist groups are calling for curtailing petroleum operations altogether. Even though no appropriate tool or guideline is available for achieving sustainability in this sector, there are numerous studies that criticize the petroleum sector and attempt to curtail petroleum activities (Holdway 2002). There is clearly a need for a new management approach to hydrocarbon operations, one that is environmentally acceptable, economically profitable, and socially responsible. Crude oil is truly a non-toxic, natural, and biodegradable product, but the way it is refined is responsible for the problems created by fossil fuel utilization. Refined oil is hard to biodegrade and is toxic to all living things. Refining crude oil and processing natural gas use large amounts of toxic chemicals and
catalysts, including heavy metals. These heavy metals contaminate the end products and are burnt along with the fuels, producing various toxic by-products. The pathways of these toxic chemicals and catalysts show that they severely affect the environment and public health, and the use of toxic catalysts creates many environmental effects that cause irreversible damage to the global ecosystem. A detailed pathway analysis of the formation of crude oil, and of refined oil and gas, clearly shows that the problem of oil and gas operations lies in their synthesis, or refining.
17.3 Pathway Analysis of Crude and Refined Oil and Gas

Chapter 12 described the pathway analysis of crude and refined oil and gas. It is important to note that crude oil formation follows natural pathways, whereas the currently employed refining processes are unnatural and inherently unsustainable. Figure 17.1 shows the various processes involved in refining.
17.4 Critical Evaluation of Current Petroleum Practices

In a very short historical time (relative to the history of the environment), the oil and gas industry has become one of the world's largest economic sectors, a powerful globalizing force with far-reaching impacts on the entire planet. Decades of continuous growth in oil and gas operations have changed, and in some places transformed, the natural environment and the way humans have traditionally organized themselves. The petroleum sector draws huge public attention due to its environmental consequences. All stages of oil and gas operations generate a variety of solid, liquid, and gaseous wastes that are harmful to humans and the natural environment (Currie and Isaacs 2005; Wenger et al. 2004; Khan and Islam 2003b; Veil 2002; de Groot 1996; Holdway 2002). Figure 17.2 shows that current technological practices are focused on short-term, linearized solutions that are aphenomenal. As a result, technological disaster prevails in practically every aspect of the post-Renaissance era. Petroleum practices are considered the driver of today's society.
Figure 17.1 General activities in oil refining (Chhetri and Islam 2007b): crude oil storage and transportation; atmospheric and vacuum distillation (hydrocarbon separation); cracking, coking, etc. (hydrocarbon creation); alkylation, reforming, etc.; hydrocarbon blending; removal of sulfur and other chemicals; cleaning of impurities; and solvent dewaxing and caustic washing.
Figure 17.2 Schematic showing the position of current technological practices related to natural practices on a plane of nonlinearity/complexity versus the aphenomenal; the region occupied by current practices is marked "technological disaster" (Robert Curl, Chemistry Nobel Laureate).
Here, modern development is essentially dependent on artificial products and processes. The modus operandi was summarized by Zatzman (2007), who called the post-Renaissance transition the honey-sugar-saccharine-aspartame (HSSA) syndrome. In this allegorical
transition, honey (a real source with a real process) has been systematically replaced by aspartame, whose source and pathway are both highly artificial. He argued that such a transition was allowed to take place because the profit margin increases with processing. This sets in motion the technology development mode that Nobel Laureate in Chemistry Robert Curl called a "technological disaster." By now, it has been recognized that the present natural resource management regime governing activities such as petroleum operations has failed to ensure environmental safety and ecosystem integrity. The main reason for this failure is that the existing management scheme is not sustainable (Khan and Islam 2007a). Under the present management approach, development activities are allowed so long as they promise economic benefit; once that is likely, management guidelines are set to justify the project's acceptance. The development of sustainable petroleum operations requires a sustainable supply of clean and affordable energy resources that do not cause negative environmental, economic, and social consequences (Dincer and Rosen 2004, 2005). In addition, it should take a holistic approach in which the whole system is considered instead of just one sector at a time (Mehedi et al. 2007a, 2007b). Recently, Khan and Islam (2005b, 2006a) and Khan et al. (2005) developed an innovative criterion for achieving true sustainability in technological development. This criterion can be applied effectively to offshore technological development. New technology should have the potential to be efficient and functional far into the future in order to ensure true sustainability. Sustainable development has four elements: economic, social, environmental, and technological.
17.5 Management

The conventional management of petroleum operations is being challenged due to the environmental damage caused by those operations. Moreover, the technical picture of petroleum operations and management is grim (Deakin and Konzelmann 2004). The ecological impacts of petroleum discharges, including habitat destruction and fragmentation, are recognized as major concerns associated with petroleum and natural gas developments in both terrestrial and aquatic environments. There is clearly a need to develop a new management approach in hydrocarbon operations.
This approach will have to be environmentally acceptable, economically profitable, and socially responsible. These problems might be solved or overcome by the application of new technologies that guarantee sustainability. Figure 17.3 shows the different phases of petroleum operations (seismic, drilling, production, transportation and processing, and decommissioning) and their associated wastes and energy consumption. Various types of waste are produced during petroleum operations: waste from ships, CO2 emissions, human-related waste, drilling mud, produced water, radioactive materials, oil spills, releases of injected chemicals, toxic releases from corrosion inhibitors, metals and scrap, flaring, etc. Even though petroleum companies make billions of dollars in profit from their operations each year, these companies take no responsibility for the various wastes generated. Hence, overall, society has deteriorated due to such environmental damage. Until petroleum operations become environmentally friendly, society as a whole will not benefit from these valuable natural resources.
Figure 17.3 Different phases of petroleum operations and their associated wastes and energy consumption (Khan and Islam 2006a).
Recently, Khan et al. (2005) and Khan and Islam (2005) introduced a new approach by means of which it is possible to develop truly sustainable technology. Under this approach, the temporal factor is considered the prime indicator in sustainable technology development. Khan and Islam (2006a, 2007) discussed how the current management model for exploring, drilling, managing wastes, refining, transporting, and using by-products of petroleum has been lacking in foresight, and they suggested the beginnings of a new management approach. A common practice among all oil-producing companies is to burn off any unwanted gas that is liberated from oil during production. This practice ensures the safety of the rig by reducing the pressures that result from gas liberation. The gas is of low quality and contains many impurities, and burning it releases toxic particles into the atmosphere. Acid rain, caused by sulfur oxides in the atmosphere, is one of the main environmental hazards resulting from this process. Moreover, flaring natural gas accounts for approximately a quarter of the petroleum industry's emissions (UKOOA 2003). At present, flaring gases onsite and disposing of liquids and solids containing less than a certain concentration of hydrocarbons are allowed. It is expected, however, that more stringent operating conditions will be needed to meet the objectives set by the Kyoto Protocol.
17.6 Current Practices in Exploration, Drilling, and Production
Seismic exploration is employed for the preliminary investigation of geological information in a study area and is considered the safest of all activities in petroleum operations, having little or negligible negative impact on the environment (Diviacco 2005; Davis et al. 1998). However, recent studies have shown that it has several adverse environmental impacts (Jepson et al. 2003; Khan and Islam 2007). Most of the negative effects come from the intense sound generated during the survey. Seismic surveys can cause direct physical damage to fish: high-pressure sound waves can damage the hearing system, swim bladders, and other tissues and systems. These effects might not directly kill the fish, but they may lead to reduced fitness, which increases their susceptibility to predation and decreases their ability to carry
out important life processes. There might also be indirect effects from seismic operations: if the seismic operation disturbs the food chain or web, it will cause adverse impacts on fish and total fisheries. The physical and behavioral effects of seismic operations on fish are discussed in the following sections. It has also been reported that seismic surveys cause behavioral effects among fish, for example startle responses, changes in swimming patterns (potentially including changes in swimming speed and directional orientation), and changes in vertical distribution. These effects are expected to be short-term, with durations less than or equal to the duration of exposure; they are expected to vary between species and individuals and to depend on the properties of the received sound. The ecological significance of such effects is expected to be low, except where they influence reproductive activity. There are some studies of the effects of seismic sound on eggs and larvae or on zooplankton. Other studies showed that exposure to sound may arrest the development of eggs and cause developmental anomalies in a small proportion of exposed eggs and/or larvae; however, these results occurred at numbers of exposures much higher than are likely to occur during field operations and at sound intensities that occur only within a few meters of the sound source. In general, the magnitude of mortality of eggs or larvae that could result from exposure to seismic sound, as predicted by models, would be far below that which would be expected to affect populations. Similar physical, behavioral, and physiological effects in invertebrates have also been reported, and marine turtles and mammals are also significantly affected by seismic activities. The essence of all exploration activities hinges upon the use of some form of wave that depicts subsurface structures. It is important to note that practically all such techniques use artificial waves generated from sources of variable levels of radiation. Recently, Himpsel (2007) presented a correlation between the energy levels and the wavelength of photon energy (Figure 17.4), showing that photon energy decreases with increasing wavelength. Sources that generate waves penetrating deep inside the formation are more likely to be of a high energy level and, hence, more hazardous to the environment. Table 17.1 shows the quantum energy levels of various radiation sources. The γ-rays, which have the shortest wavelength, have the highest quantum energy, and in terms of intensity they have the highest energy intensity of the sources listed. More energy is needed to produce this radiation, whether for drilling or any other application.
[Figure 17.4 Schematic of the wavelength and energy level of photons: photon energy (0.01 keV to 10 keV) plotted against wavelength (0.1 nm to 100 nm).]

Table 17.1 Wavelength and quantum energy levels of different radiation sources (Chhetri et al. 2009).

Radiation   | Wavelength      | Quantum energy
Infrared    | 1 mm - 750 nm   | 0.0012 - 1.65 eV
Visible     | 750 - 400 nm    | 1.65 - 3.1 eV
Ultraviolet | 400 - 10 nm     | 3.1 - 124 eV
X-rays      | 10 nm and below | 124 eV and above
γ-rays      | ~10⁻¹² m        | ~1 MeV
For instance, laser drilling, which is considered to be the wave of the future, will be inherently toxic to the environment. Drilling and production activities also have adverse effects on the environment in several ways. For example, blow-outs and the flaring of produced gas waste energy and emit carbon dioxide into the atmosphere, and the careless disposal of drilling mud and other oily materials can have a toxic effect on terrestrial and marine life. Before drilling and production operations are allowed to proceed, a Valued Ecosystem Component (VEC) level impact assessment should be done to establish the ecological and environmental conditions of the area proposed for development and to assess the risks to the environment from the development. Bjorndalen et al. (2005) developed a novel approach to avoid flaring during petroleum operations. Petroleum products contain
materials in various phases. Solids in the form of fines, liquid hydrocarbons, carbon dioxide, and hydrogen sulfide are among the many substances found in the products. According to Bjorndalen et al. (2005), by separating these components through the following steps, no-flare oil production can be established (Figure 17.5). Avoiding flaring can reduce the pollution created by petroleum operations by over 30%. Once the components for no-flaring have been separated, value-added end products can be developed. For example, the solids can be used for minerals, the brine can be purified, and the low-quality gas can be re-injected into the reservoir for EOR.

[Figure 17.5 Breakdown of the no-flaring method (Bjorndalen et al. 2005). Components are matched to separation methods and value additions: solid–liquid separation (EVTN system) yields surfactants from biodegradation, cleaned fines for use as construction material, and mineral extraction from cleaned fines; liquid–liquid separation (human hair, paper material) allows purification of formation water using wastes (fish scale, human hair, ash); gas–gas separation (limestone, or a hybrid of membrane and biological solvent) allows re-injection of gas for enhanced oil recovery processes.]
17.7 Challenges in Waste Management
Drilling and production phases are the most waste-generating phases in petroleum operations. Drilling muds are condensed liquids that may be oil- or synthetic-based wastes; they contain a variety of chemical additives and heavy minerals and are circulated through the drill pipe to perform a number of functions. These functions include cleaning and conditioning the hole, maintaining hydrostatic
pressure in the well, lubricating the drill bit, counterbalancing formation pressure, removing the drill cuttings, and stabilizing the wall of the drilled hole. Water-based muds (WBMs) are a complex blend of water and bentonite. Oil-based muds (OBMs) are composed of mineral oil, barite, and chemical additives. Typically, a single well may produce 1,000-6,000 m³ of cuttings and muds, depending on the nature of the cuttings, well depth, and rock type (CEF 1998). A production platform generally consists of 12 wells, which may generate (12 × 5,000 m³) 60,000 m³ of wastes (Patin 1999; CEF 1998). Figure 17.6 shows the supply chain of petroleum operations, indicating the types of wastes generated. The current challenge of petroleum operations is how to minimize petroleum wastes and their long-term impacts. Conventional drilling and production methods generate an enormous amount of waste (Veil 1998). Existing management practices are mainly focused on achieving sectoral success and are not coordinated with other operations surrounding the development site. The following are the major wastes generated during drilling and production (a rough volume estimate is sketched after the list):
• Drilling muds
• Produced water
• Produced sand
• Storage displacement water
• Bilge and ballast water
• Deck drainage
• Well treatment fluids
• Naturally occurring radioactive materials
• Cooling water
• Desalination brine
• Other assorted wastes
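The per-well figures above are enough for a rough platform-level estimate. The following minimal Python sketch is an illustration, not a design calculation; it reproduces the 60,000 m³ figure quoted for a 12-well platform:

```python
# Rough waste-volume estimate for a production platform, using the
# per-well cuttings-and-muds range quoted above (CEF 1998) and the
# 12-well platform size from the text. The 5,000 m^3 midpoint is the
# per-well figure behind the 60,000 m^3 estimate.
WELLS_PER_PLATFORM = 12
PER_WELL_RANGE_M3 = (1000, 6000)   # cuttings and muds per well, m^3
PER_WELL_TYPICAL_M3 = 5000

low = WELLS_PER_PLATFORM * PER_WELL_RANGE_M3[0]
high = WELLS_PER_PLATFORM * PER_WELL_RANGE_M3[1]
typical = WELLS_PER_PLATFORM * PER_WELL_TYPICAL_M3

print(f"platform waste: {low:,} to {high:,} m^3 (typical ~ {typical:,} m^3)")
# -> platform waste: 12,000 to 72,000 m^3 (typical ~ 60,000 m^3)
```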
17.8 Problems in Transportation Operations
The most significant problems in production and transportation operations were reported by Khan and Islam (2006a). Toxic means are used to control corrosion: billions of dollars are spent on toxic agents to prevent microbial corrosion. Other applications include the use of toxic paints, cathodic protection, etc., all of which cause irreversible damage to the environment.
[Figure 17.6 Supply chain of petroleum operations (redrawn from Khan and Islam 2007). Seismic exploration: sonar waves are generated and the returns carrying geological information are recorded; this takes 20-30 days. Drilling: installation of rigs, drilling, and casing; exploratory and development drilling take 3-5 years. Production: depending on the size of the reserve, 25-35 years. Transportation: ships, tankers, or pipelines bring the oil and gas to an onshore refinery. Decommissioning: CNSOPB guidelines require the preparation of a decommissioning plan; generally 4-8 months. Inputs: sound waves; shipping operations; installation-related inputs; water-based, oil-based, and synthetic-based drilling muds; well-testing fluids; casing; cuttings pieces; toxic compounds; explosives. Outputs: ship-source wastes; dredging effects; human-related wastes; CO₂ releases; conflicts with fisheries; sound effects; drilling muds; drilling cuttings; flares; radioactive materials; produced water; released injection chemicals; ship-source oil spills; toxic corrosion inhibitors; released metals and scrap.]
In addition to this, huge amounts of toxic chemicals (at times, up to 40% of the gas stream) are injected to reduce the moisture content of a gas stream in order to prevent hydrate formation. Even if 1% of these chemicals remains in the gas stream, the gas becomes severely toxic, particularly when it is burnt, contaminating the entire pathway of gas usage (Chhetri and Islam 2006a). Toxic solvents are used to prevent asphaltene plugging problems, and toxic resins are used for sand consolidation to prevent sand production.
17.9 Greening of Petroleum Operations

17.9.1 Effective Separation of Solid from Liquid, Gas from Liquid, and Gas from Gas

Chapter 11 presents sustainable ways to separate solid from liquid as well as gas from liquid. Chapters 12 and 13 present techniques for the separation of gas from gas.
17.9.2 Natural Substitutes for Gas Processing Chemicals (Glycol and Amines)

Glycol is one of the most important chemicals used during the dehydration of natural gas. In the search for the cheapest and most abundantly available material, clay has been considered one of the best substitutes for toxic glycol. Clay is a porous material containing various minerals, such as silica, alumina, and several others. Low et al. (2003) reported that the water absorption characteristics of sintered clay can be modified by the addition of sawdust particles to the clay. Dry clay as a plaster has a water absorption coefficient of 0.067-0.075 kg/(m² s^0.5), where the weight of water absorbed is in kilograms, the surface area in square meters, and the time in seconds. Preliminary experimental results have indicated that clay can absorb a considerable amount of water vapor and can be used efficiently in the dehydration of natural gas (Figure 17.7).

[Figure 17.7 Water vapor absorption by Nova Scotia clay (Chhetri and Islam 2006a): cumulative water vapor absorbed plotted against time over 0-80 minutes.]

Moreover, glycol can be obtained from natural sources, and such glycol is not toxic like synthetic glycol. Glycol can be extracted from Tricholoma matsutake (mushroom), which is an edible fungus (Ahn and Lee 1986). Ethylene glycol is also found as a metabolite of ethylene, which regulates the natural growth of plants (Blomstrom and Beyer 1980). Orange peel oils can also replace synthetic glycol. These natural glycols, derived without using non-organic chemicals, can replace the synthetic glycols. Recent work by Miralai et al. (2006) has demonstrated that such considerations are vital.

Amines are used in natural gas processing to remove H₂S and CO₂. Monoethanolamine (MEA), DEA, and TEA are members of the alkanolamine family. These are synthetic chemicals, the toxicity of which has been discussed earlier. If these chemicals are extracted from natural sources, such toxicity is not expected. Monoethanolamine is found in hemp oil, which is extracted from the seeds of the hemp (Cannabis sativa) plant; 100 grams of hemp oil contain 0.55 mg of monoethanolamine. Moreover, an experimental study showed that olive oil and waste vegetable oil can absorb sulfur dioxide. Figure 17.8 indicates the decrease in pH of de-ionized water with time. This could be a good model for removing sulfur compounds from natural gas streams. Calcium hydroxides can also be utilized to remove CO₂ from natural gas.

[Figure 17.8 Decrease of pH with time due to sulfur absorption in de-ionized water (Chhetri and Islam 2006a): pH falls from about 7.1 to roughly 6.2 over 60 minutes.]
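The water absorption coefficient quoted for the clay plaster implies a square-root-of-time uptake, which allows a first-pass sizing of a clay dehydration bed. A minimal sketch, assuming the standard sorption relation m(t) = A·√t and an illustrative one-hour contact time (the contact time is an assumption, not a value from Low et al. 2003):

```python
import math

# Cumulative water uptake of a clay surface under the square-root-of-
# time sorption law m(t) = A * sqrt(t), where A is the absorption
# coefficient quoted above for dry clay plaster.
ABSORPTION_COEFF = (0.067, 0.075)  # kg/(m^2 s^0.5)

def water_uptake(coeff_kg_m2_sqrt_s: float, seconds: float) -> float:
    """Water absorbed per unit area (kg/m^2) after the given time."""
    return coeff_kg_m2_sqrt_s * math.sqrt(seconds)

one_hour = 3600.0
for a in ABSORPTION_COEFF:
    print(f"A = {a}: {water_uptake(a, one_hour):.2f} kg/m^2 after 1 h")
# A = 0.067: 4.02 kg/m^2 after 1 h
# A = 0.075: 4.50 kg/m^2 after 1 h
```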
17.9.3 Membranes and Absorbents
Various types of synthetic membranes are used for gas separation. Some are liquid membranes and some are polymeric. Liquid membranes operate by immobilizing a liquid solvent in a microporous
filter or between polymer layers. A high degree of solute removal can be obtained when using chemical solvents. When the gas or solute reacts with the liquid solvent in the membrane, the result is an increased liquid-phase diffusivity. This leads to an increase in the overall flux of the solute. Furthermore, solvents can be chosen to selectively remove a single solute from a gas stream in order to improve selectivity (Astrita et al. 1983). Saha and Chakma (1992) suggested the attachment of a liquid membrane in a microporous polymeric membrane. They immobilized mixtures of various amines, such as monoethanolamine (MEA), diethanolamine (DEA), amino-methyl-propanol (AMP), and polyethylene glycol (PEG), in a microporous polypropylene film and placed it in a permeator. They tested the mechanism for the separation of carbon dioxide from hydrocarbon gases and obtained separation factors as high as 145. Polymeric membranes have been developed for a variety of industrial applications, including gas separation. For gas separation, the selectivity and permeability of the membrane material determine the efficiency of the gas separation process. Based on flux density and selectivity, a membrane can be classified broadly into two classes, porous and nonporous. A porous membrane is a rigid, highly voided structure with randomly distributed, interconnected pores. The separation of materials by a porous membrane is mainly a function of the permeate character and membrane properties, such as the molecular size of the membrane polymer, pore size, and pore-size distribution. A porous membrane is similar in its structure and function to the conventional filter. In general, only those molecules that differ considerably in size can be separated effectively by microporous membranes. Porous membranes for gas separation exhibit high levels of flux but low selectivity values. However, synthetic membranes are not as environment-friendly as biodegradable biomembranes. The efficiency of polymeric membranes decreases with time due to fouling, compaction, chemical degradation, and thermal instability. Because of this limited thermal stability and susceptibility to abrasion and chemical attack, polymeric membranes have found limited application in separation processes where hot, reactive gases are encountered. This has resulted in a shift of interest toward inorganic membranes. Inorganic membranes are increasingly being explored to separate gas mixtures. Besides having appreciable thermal and chemical stability, inorganic membranes have much higher gas fluxes when
compared to polymeric membranes. There are basically two types of inorganic membranes: dense (nonporous) and porous. Examples of commercial porous inorganic membranes are ceramic membranes, such as alumina, silica, titania, and glass, and porous metals, such as stainless steel and silver. These membranes are characterized by high permeabilities and low selectivities. Dense inorganic membranes are specific in their separation behaviors. For example, Pd-metal based membranes are hydrogen specific, and metal oxide membranes are oxygen specific. Palladium and its alloys have been studied extensively as potential membrane materials. Air Products and Chemicals Inc. developed the Selective Surface Flow (SSF) membrane. It consists of a thin layer (2-3 μm) of nano-porous carbon supported on a macro-porous alumina tube (Rao et al. 1992). The effective pore diameter of the carbon matrix is 5-7 Å (Rao and Sircar 1996). The membrane separates the components of a gas mixture by a selective adsorption-surface diffusion-desorption mechanism (Rao and Sircar 1993). A variety of biomembranes are also in use today. These membranes, such as human hair, can be used instead of synthetic membranes for gas-gas separation (Basu et al. 2004; Akhter 2002). Khan and Islam (2006a) have illustrated the use of human hair as a biomembrane. Initial results indicated that human hairs have characteristics similar to hollow fiber cylinders, but are even more effective because of their flexible nature and a texture that allows the use of a hybrid system combining solvent absorption with mechanical separation. Natural absorbents such as silica gels can also be used for absorbing various contaminants from the natural gas stream. Khan and Islam (2006a) showed that synthetic membranes can be replaced by simple paper membranes for oil-water separation. Moreover, limestone has the potential to separate sulfur dioxide from natural gas (Akhter 2002). Caustic soda combined with wood ash was found to be an alternative to zeolite; since caustic soda is a chemical, waste materials such as okra extract can be a good substitute. The same technique can be used with any type of exhaust, large (power plants) or small (cars). Once the gas is separated, low-quality gas can be injected into the reservoir for enhanced oil recovery, which will improve the system efficiency. Moreover, low-quality gas can be converted into power by a turbine. Bjorndalen et al. (2005) developed a comprehensive scheme for the separation of petroleum products in different forms using novel materials, with value addition of the by-products.
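Membrane performance throughout this discussion is expressed as a separation factor. A minimal sketch of the standard binary definition, α = (y_A/y_B)/(x_A/x_B), follows; the feed and permeate compositions are hypothetical placeholders, chosen only to show how a factor of the order of the 145 reported by Saha and Chakma (1992) arises:

```python
# Standard definition of a binary separation factor:
# alpha = (y_A / y_B) / (x_A / x_B), with y the permeate-side and
# x the feed-side mole fractions. The compositions below are
# hypothetical placeholders, not data from Saha and Chakma (1992).
def separation_factor(y_a: float, y_b: float, x_a: float, x_b: float) -> float:
    return (y_a / y_b) / (x_a / x_b)

# Hypothetical example: feed 10% CO2, permeate enriched to 94% CO2.
alpha = separation_factor(0.94, 0.06, 0.10, 0.90)
print(f"separation factor ~ {alpha:.0f}")  # ~141, the order reported
```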
17.9.4 A Novel Desalination Technique
Management of produced water during petroleum operations offers a unique challenge. The salt concentration of this water is very high, and it cannot be disposed of in the open environment. In order to bring down the concentration, expensive and energy-intensive techniques are being practiced. Recently, Khan et al. (2006b, 2006c) have developed a novel desalination technique that can be characterized as an environment-friendly process. This process uses no non-organic chemicals (e.g., membranes, additives). It relies on the following chemical reactions in four stages:
(1) Saline water + CO₂ + NH₃ → (2) precipitates (valuable chemicals) + desalinated water → (3) plant growth in solar aquarium → (4) further desalination
This process is a significant improvement on an existing U.S. patent. The improvements include the following:
• The CO₂ source is the exhaust of a power plant (negative cost).
• The NH₃ source is sewage water (negative cost, plus the advantage of organic origin).
• There is an addition of plant growth in the solar aquarium (emulating the world's first and biggest solar aquarium in New Brunswick, Canada).
This process works very well for general desalination involving seawater. However, for produced water from petroleum formations, it is common to encounter salt concentrations much higher than that of seawater. In that case, water plant growth (Stage 3 above) is not possible because the salt concentration is too high for plant growth. In addition, even Stage 1 does not function properly because chemical reactions slow down at high salt concentrations. By adding an additional stage, this process can be enhanced. The new process should function as follows:
(1) Saline water + ethyl alcohol → (2) saline water + CO₂ + NH₃ → (3) precipitates (valuable chemicals) + desalinated water → (4) plant growth in solar aquarium → (5) further desalination
Care must be taken, however, to avoid using non-organic ethyl alcohol. Further value addition can be achieved if the ethyl alcohol is extracted from fermented waste organic materials.
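The staged process can be thought of as a chain of salinity-reduction steps. The sketch below is purely illustrative: the stage names follow the enhanced five-stage scheme above, but the per-stage removal fractions are hypothetical placeholders, not measured values from Khan et al. (2006b, 2006c):

```python
# Illustrative staged-desalination chain. Stage names follow the
# enhanced five-stage process; the per-stage salinity-reduction
# fractions are hypothetical placeholders, not measured values.
STAGES = [
    ("ethyl alcohol pre-treatment", 0.10),
    ("CO2 + NH3 precipitation", 0.60),
    ("solar-aquarium plant growth", 0.50),
    ("further desalination", 0.50),
]

def desalinate(salinity_g_per_l: float) -> float:
    for name, removal in STAGES:
        salinity_g_per_l *= 1.0 - removal
        print(f"after {name:28s}: {salinity_g_per_l:6.1f} g/L")
    return salinity_g_per_l

# Produced water can be far more saline than seawater (~35 g/L).
desalinate(150.0)
```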
17.9.5 A Novel Refining Technique
Khan and Islam (2007) have identified the sources of toxicity in conventional petroleum refining: the use of toxic catalysts and the use of artificial heat (e.g., combustion, electrical, nuclear). The use of toxic catalysts contaminates the pathway irreversibly. Natural performance enhancers should replace these catalysts. Chhetri and Islam (2006c) have proposed such practices in the context of bio-diesel. In the proposed project, research will be performed in order to introduce catalysts that are available in their natural state. This will make the process environmentally acceptable and will reduce the cost significantly. The problem associated with efficiency is often covered up by citing the local efficiency of a single component (Islam et al. 2006). When global efficiency is considered, artificial heating proves to be utterly inefficient (Khan et al. 2006b; Chhetri and Islam 2006a). Recently, Khan and Islam (2006b) have demonstrated that direct heating with solar energy (enhanced by a parabolic collector) can be very effective and environmentally sustainable. They achieved up to 75% global efficiency, compared to 15% efficiency when solar energy is used through electricity conversion. They also discovered that the temperature generated by the solar collector can be quite high, even in cold countries. In hot climates, the temperature can exceed 300°C, making it suitable for thermal cracking of crude oil. In this project, the design of a direct-heating refinery with natural catalysts will be completed. Note that direct solar heating or wind energy does not involve the conversion into electricity that would otherwise introduce toxic battery cells and would also make the overall process very low in efficiency.
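The efficiency gap between the two solar routes can be made concrete with a one-line comparison. A minimal sketch using the 75% and 15% global efficiencies reported above (the 100 kWh input is an arbitrary basis, not a figure from the cited work):

```python
# Useful heat delivered per unit of collected solar energy: direct
# heating at 75% global efficiency versus the electricity-conversion
# route at 15% (both figures reported by Khan and Islam 2006b).
EFF_DIRECT = 0.75
EFF_VIA_ELECTRICITY = 0.15

collected_kwh = 100.0  # arbitrary basis for comparison
direct = collected_kwh * EFF_DIRECT
indirect = collected_kwh * EFF_VIA_ELECTRICITY
print(f"direct solar heating : {direct:.0f} kWh useful")
print(f"via electricity      : {indirect:.0f} kWh useful")
print(f"advantage            : {direct / indirect:.0f}x")
```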
17.9.6 Use of Solid Acid Catalyst for Alkylation
Refiners typically use either hydrofluoric acid (HF), which can be deadly if spilled, or sulfuric acid, which is also toxic and costly to recycle. Refineries can use solid acid catalysts, unsupported and supported forms of heteropolyacids and their cation-exchanged salts, which have recently proved effective in refinery alkylation. A solid acid catalyst for alkylation is less widely dispersed into the environment compared to HF. Changing to a solid acid catalyst for alkylation would also improve safety at a refinery. Solid acid catalysts are an environment-friendly replacement for liquid acids that are used in many significant reactions, such as alkylation of
light hydrocarbon gases to form iso-octane (alkylate) used in reformulated gasoline. The use of organic acids and enzymes for various reactions should be promoted.
17.9.7 Use of Nature-based or Non-toxic Catalyst
The catalysts that are used today are very toxic and are discarded after a series of uses, creating pollution in the environment. Therefore, using catalysts with fewer toxic materials significantly reduces pollution. The use of nature-based catalysts such as zeolites, alumina, and silica should be promoted. Various biocatalysts and enzymes that are of renewable origin and are non-toxic should be considered for future use.
17.9.8 Use of Bacteria to Break Down Heavier Hydrocarbons
Since crude oil is formed by the decomposition of biomass by bacteria at high temperature and pressure, there must be some bacteria that can effectively break crude oil down into lighter products. A series of investigations is necessary to observe the effects of bacteria on crude oil.
17.9.9 Zero-waste Approach
Any chemical or industrial process should be designed in such a way that a closed-loop system is maintained, in which all wastes are absorbed within the assimilative capacity of the earth. Recycling resources and putting them to alternative uses will lead to a zero-waste approach (Bjorndalen et al. 2005).
17.9.10 Use of Cleaner Crude Oil
Crude oil is comparatively cleaner than distillates because it contains less sulfur and fewer toxic metals. The use of crude oil for various applications should be promoted. This will not only help protect the environment, owing to crude oil's less toxic nature, but will also be less costly because it avoids expensive catalytic refining processes. Recently, the direct use of crude oil has been of great interest.
[Figure 17.9 Schematic of a sawdust-fueled electricity generator. A raw sawdust silo feeds a powered auger; a hot-air plenum dries the falling sawdust, with escaping moisture sent to a condenser that scavenges residual heat; a powered grinder turns the super-dried sawdust into wood flour, which a powered auger feeds into the combustion chamber; fuel injectors for biofuel start the engine and pre-heat the combustion chamber; jet exhaust goes to a heat exchanger, and turbo exhaust blades turn the drive shaft, compressor turbine blades, and generator/starter.]
Several studies have been conducted to investigate electricity generation from sawdust (Sweis 2004; Venkataraman et al. 2004; Calle et al. 2005). Figure 17.9 shows the schematic of a scaled model developed by Islam in collaboration with Veridity Environmental Technologies (Halifax, Nova Scotia).
A raw sawdust silo is equipped with a powered auger sawdust feeder. The sawdust is inserted into another feeding chamber that is equipped with a powered grinder, which pulverizes the sawdust into wood flour. The chamber is attached to a heat exchanger that dries the sawdust before it enters the grinder. The wood flour is fed into the combustion chamber with a powered auger wood flour feeder. The pulverization of sawdust increases the surface area of the particles significantly. The electricity generated by the generator itself provides the additional energy required to run the feeder and the grinder, so no external energy investment is needed. In addition, the pulverization chamber is also used to dry the sawdust; the removal of moisture increases the flammability of the feedstock. The combustion chamber itself is equipped with a start-up fuel injector that uses biofuel. Note that the initial temperature required to start up the combustion chamber is quite high and cannot be achieved without a liquid fuel. The exhaust of the combustion chamber is circulated through a heat exchanger in order to dry sawdust prior to pulverization. As the combustion fluids escape the combustion chamber, they turn the drive shaft blades, which rotate to turn the drive shaft, which in turn turns the compressor turbine blades. The power generator is placed directly under the main drive shaft. Fernandes and Brooks (2003) compared black carbon (BC) derived from various sources. One interesting feature of this study was that they examined the impact of different sources on the composition, extractability, and bioavailability of the resulting BC. By using molecular fingerprints, they concluded that fossil BC may be more refractory than plant-derived BC. This is an important finding because only recently has there been some advocacy that BC from fossil fuel may have a cooling effect, nullifying the contention that fossil fuel burning is the biggest contributor to global warming. It is possible that BC from fossil fuel has a higher refractory ability. However, there is no study available to date to quantify the cooling effect and to determine the overall effect of BC from fossil fuel. As for the other effects, BC from fossil fuel appears to be on the harmful side as compared to BC from organic matter. For instance, vegetation fire residues, straw ash, and wood charcoals had only residual concentrations of n-alkanes (<9 mg/g) and polycyclic aromatic hydrocarbons (PAHs) of less than 0.2 mg/g. They compared these concentrations with diesel soot, urban dust, and chimney soot PAH concentrations of greater than 8 mg/g and n-alkanes greater than 20 mg/g.
This design shows that even solid fuels can be used to produce electricity at high efficiencies. Burning sawdust produces fresh carbon dioxide, whereas burning fossil fuels produces older carbon dioxide (Chhetri et al. 2007). The use of crude oil (which is liquid) in such a system would be more efficient than solid fuel. If crude oil were used directly, similar to the sawdust electricity generator, the environmental problems associated with fossil fuel use and refining could be minimized, thus increasing the economic efficiency of using fossil fuel. The CO₂ produced from the direct use of crude oil would be acceptable to plants because it is fresh CO₂ and no catalysts or chemicals are used for processing. Hence, the direct use of crude oil would solve the major environmental problems we face today.
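A back-of-the-envelope energy balance indicates the scale of such a generator. The sketch below is illustrative only: the sawdust feed rate and net conversion efficiency are assumptions, and the heating value of dry wood (~18 MJ/kg) is a typical literature figure rather than a number from the text:

```python
# Back-of-the-envelope output of a sawdust-fired generator. The feed
# rate and net fuel-to-electricity efficiency are assumed values; the
# lower heating value of dry wood (~18 MJ/kg) is a typical literature
# figure, not a number from the text.
LHV_DRY_WOOD_MJ_PER_KG = 18.0
feed_rate_kg_per_h = 100.0     # assumption
net_efficiency = 0.30          # assumption

thermal_mw = feed_rate_kg_per_h * LHV_DRY_WOOD_MJ_PER_KG / 3600.0  # MJ/s = MW
electric_kw = thermal_mw * net_efficiency * 1000.0
print(f"thermal input : {thermal_mw:.2f} MW")   # 0.50 MW
print(f"electric power: {electric_kw:.0f} kW")  # 150 kW
```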
17.9.11 Use of Gravity Separation Systems

Heavier fractions can be settled out by exploiting density differences. Settling tanks can be designed in stages that allow sufficient time for the fractions to settle according to their density. Even though it would not solve all of them, some environmental problems can be reduced by this method. It would be less costly than other processes, but more time-consuming.
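The residence time such settling tanks must provide follows from the settling velocity of the heavy fraction. A minimal sketch using Stokes' law, v = g·d²·(ρp − ρf)/(18μ), valid for small particles at low Reynolds number; the droplet size and fluid properties below are illustrative assumptions:

```python
# Stokes'-law settling velocity of a heavy droplet or particle in a
# gravity separator: v = g * d^2 * (rho_p - rho_f) / (18 * mu).
# Valid for small particles at low Reynolds number. The droplet size
# and fluid properties below are illustrative assumptions.
G = 9.81  # m/s^2

def stokes_velocity(d_m: float, rho_p: float, rho_f: float, mu: float) -> float:
    """Terminal settling velocity (m/s) of a sphere of diameter d_m."""
    return G * d_m ** 2 * (rho_p - rho_f) / (18.0 * mu)

# Hypothetical case: a 100-micron water droplet settling in crude oil.
v = stokes_velocity(100e-6, rho_p=1000.0, rho_f=870.0, mu=0.05)
print(f"settling velocity ~ {v * 1000:.3f} mm/s")  # ~0.014 mm/s
```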
17.10 Concluding Remarks
The modern age has been characterized as both a 'technological disaster' (as per Nobel Laureate chemist Robert Curl) and an era of 'scientific miracles' (the most predominant theme of modern education). Numerous debates break out every day, resulting in the formation of various schools of thought, often settling for 'agreeing to disagree'. In the end, little more than band-aid solutions are offered in order to 'delay the symptoms' of any ill effect of current technology development. This modus operandi is not conducive to knowledge and cannot be utilized to lead the current civilization out of the misery that it faces, as is evident in all sectors. In this regard, the information age offers us a unique opportunity in the form of 1) transparency (arising from monitoring in space and time); 2) infinite productivity (due to inclusion of intangibles, zero-waste, and transparency); and 3) custom-designed solutions (due to transparency and infinite productivity). When one compares these
features with the essential features of Nature, viz., dynamic, unique, and strictly non-linear, one appreciates that the information age has given us an opportunity to emulate nature. This gives us hope for correctly modelling the effects of man-made activities on the global ecosystem. This chapter highlights the practices that need to be avoided. The nature-science standpoint provides a way out of this impenetrable darkness created by the endless addition of seemingly infinite layers of opacity. We have no case anywhere in nature where the principle of conservation of mass, energy, or momentum has been violated. The truly scientific way forward, then, for modern engineering and scientific research would seem to lie on the path of sorting out the actual pathway of a phenomenon from its root or source to some output point by investigating the mass balance, the energy balance, the mass-energy balance, and the momentum balance of the phenomenon.
18 Conclusion
18.1 Introduction
This book offers a true paradigm shift in energy management, starting with a scientific discourse on what a true paradigm shift is. With a scientific discussion of change, the book shows how previous civilizations handled their energy needs, both in practice and theory. This delinearized history helped determine the root causes of the crises encountered in the information age. Addressing the root cause would invoke changes in the long term, avoiding the cosmetic changes that have dominated the modern world. Scientifically, this is equivalent to re-examining the first premise of all theories and laws. If the first premise does not conform to natural laws, then the model is considered unreal (not just unrealistic) - dubbed an "aphenomenal model." With these aphenomenal models, all subsequent decisions lead to outcomes that conflict with the stated "intended" outcomes. At present, such conflicts are explained either with doctrinal philosophies or with a declaration of paradox. Our analysis shows that doctrinal philosophy is aphenomenal science and is the main reason for the current crisis that we are experiencing. The statement of a paradox helps us procrastinate in solving the
problem, but it does nothing to solve the problem. Both these states keep us squarely in what we call the Einstein box. (Albert Einstein famously said, "The thinking that got you into the problem is not going to get you out.") Instead, if the first premises are replaced with a phenomenal premise, the subsequent cognition encounters no contradictions with the intended outcomes. The end results show how the current crises can not only be arrested but also reversed. As a result, the entire process would be reverted from unsustainable to sustainable. With the above recasting of the problem, the science and engineering of energy management are worked out in this book, with clear directions for changing the current practices so that they become sustainable and remain that way. Using these sustainable practices would cause a change in direction. If the previous practices were taking us from bad to worse, the proposed practices would take us from good to better. This conclusion holds true for all petroleum engineering operations and the economic analysis that justifies such operations at the decision-making stage. This closes the vicious loop (unsustainable engineering → technological disaster → environmental calamity → financial collapse) that we have become familiar with at the dawn of the Information Age. The introduction of this book was written to motivate the reader to be a believer in nature and the ability of its "best creation." The conclusion is written to motivate the reader to practice this belief.
18.2 The HSS®A® (Honey → Sugar → Saccharin® → Aspartame®) Pathway
The HSS®A® pathway is a kind of metaphor for many other things that originate from a natural form and become subsequently engineered through many intermediate stages into "new" products. The following discussion lays out how it works. Once it is understood how disinformation works, one can figure out a way to reverse the process by avoiding aphenomenal practices. Over the years it has become a common idea among engineers and the public to associate an increase in the quality, and/or qualities, of a final product with the insertion of additional intermediate stages of refining the product. If honey - taken more or less directly from a natural source, without further processing - was fine, surely the sweetness that can be attained by refining sugar must be better. If
the individual wants to reduce their risk of diabetes, then surely further refining of the chemistry of "sweetness" into such products as Saccharin® must be better still. And why not even more sophisticated chemical engineering to further convert the chemical essence of this refined sweetness into forms that are stable in liquid phase, such as Aspartame®? In this sequence, each additional stage is defended and promoted as having overcome some limitation of the last stage. But at the end of this chain, what is left in, say, Aspartame® of the anti-bacterial qualities of honey? Looking from the end of this chain back to its start, how many lab rats ever contracted cancer from any amount of intake of honey? Honey is known to be the only food that has all the nutrients, including water, to sustain life. How many true nutrients does Aspartame® have? From the narrowest engineering standpoint, the kinds and number of qualities in the final product at the end of this Honey → Sugar → Saccharin® → Aspartame® chain have been transformed, but from the human consumer's standpoint of the use-value of "sweet-tasting," has there been a net qualitative gain going from honey all the way to Aspartame®? From the scientific standpoint, honey fulfils both conditions of phenomenality, namely, origin and process. That is, the source of honey (nectar) is real (even if it means flowers were grown with chemical fertilizers, pesticides, or even genetic alteration) and the process is real (honeybees cannot make false intentions, therefore, they are perfectly natural), even if the bees were subjected to air pollution or a sugary diet. The quality of honey can be different depending on other factors, e.g., chemical fertilizer, genetic alteration, etc., but honey remains real. As we "progress" from honey to sugar, the origin remains real (sugar cane or beet), but the process is tainted with artificiality, starting from electrical heating, chemical additives, bleaching, etc. Further "progress" to Saccharin® marks the use of another real origin, but this time the original source (crude oil) is a very old food source compared to the source of sugar. With steady state analysis, they both will appear to be of the same quality! As the chemical engineering continues, we resort to the final transition to Aspartame®. Indeed, nothing is phenomenal about Aspartame®, as both the origin and the process are artificial. So, the overall transition from honey to Aspartame® has been from 100% phenomenal to 100% aphenomenal. Considering this, what economic calculations are needed to justify this replacement? It becomes clear, without considering the phenomenality feature, that any talk of economics
would only mean the "economics" of aphenomenality. Yet, this remains the standard of neo-classical economics. There is an entire economics of scale that is developed and applied to determine how far this is taken in each case. For example, honey is perceptibly "sugar" to taste. We want the sugar, but honey is also anti-bacterial and cannot rot. Therefore, the rate at which customers will have to return for the next supply is much lower and slower than the rate at which customers would have to return to resupply themselves with, say, refined sugar. Or even worse, to extend the amount of honey available in the market (in many third world countries, for example), sugar is added. The content of this "economic" logic then takes over and drives what happens to honey and sugar as commodities. There are natural limits to how far honey as a natural product can actually be commodified, whereas, for example, refined sugar is refined to become addictive so that the consumer becomes hooked and the producer's profit is secured. The matter of intention is never considered in the economics of scale. As a result, however, certain questions go unasked. No one asks whether any degree of external processing of what began as a natural sugar source can or will improve its quality as a sweetener. Exactly what that process, or those processes, would be is also unasked. No sugar refiner is worried about how the marketing of his product in excess is contributing to a diabetes epidemic. The advertising that is crucial to marketing this product certainly won't raise this question. Guided by the "logic" of the economies of scale, and the marketing effort that must accompany it, greater processing is assumed to be and accepted as being ipso facto good, or better. As a consequence of the selectivity inherent in such "logic," any other possibility within the overall picture - such as the possibility that as we go from honey to sugar to saccharin to aspartame, we go from something entirely safe for human consumption to something cancerously toxic - does not even enter the frame. Such a consideration would prove to be very threatening to the health of big business in the short term. All this is especially devastatingly clear when it comes to crude oil. Crude oil is widely and falsely believed to be toxic before a refiner touches it; refined petroleum products, by contrast, are utterly toxic, but they are not to be questioned since they provide the economy's lifeblood. Edible, natural products in their natural state are already good enough for humans to consume at some safe level and process
further internally in ways useful to the organism. We are not likely to over-consume any unrefined natural food source. However, the refining that accompanies the transformation of natural food sources into processed-food commodities also introduces components that interfere with the normal ability we have to push a natural food source aside after some definite point. Additionally, with externally processed "refinements" of natural sources, the chances increase that the form in which the product is eventually consumed must include compounds that are not characteristic anywhere in nature and that the human organism cannot usefully process without excessively stressing the digestive system. After a cancer epidemic, there is great scurrying to fix the problem. The cautionary tale within this tragedy is that, if the HSS®A® principle were considered before a new stage of external processing were added, much unnecessary tragedy could be avoided. There are two especially crucial premises of the economics-ofscale that lie hidden within the notion of "upgrading by refining:" (a) unit costs of production can be lowered (and unit profit therefore expanded) by increasing output Q per unit time f, i.e., by driving dQ/dt unconditionally in a positive direction; and (b) only the desired portion of the Q end-product is considered to have tangible economics and, therefore, also intangible social "value," while any unwanted consequences - e.g., degradation of, or risks to, public health, damage(s) to the environment, etc. - are discounted and dismissed as false costs of production. Note that, if relatively free competition still prevailed, premise (a) would not arise even as a passing consideration. In an economy lacking monopolies, oligopolies, a n d / o r cartels dictating effective demand by manipulating supply, unit costs of production remain mainly a function of some given level of technology. Once a certain proportion of investment in fixed-capital (equipment and groundrent for the production facility) becomes the norm generally among the various producers competing for customers in the same market, the unit costs of production cannot fall or be driven arbitrarily below a certain floor level without risking business loss. The unit cost thus becomes downwardly inelastic. The unit cost of production can become downwardly elastic, i.e., capable of falling readily below any asserted floor price, under two conditions: (1) during moments of technological transformation of the industry, in which producers who are first to lower their unit costs by using more advanced machinery will gain market shares,
temporarily, at the expense of competitors; or (2) in conditions where financially stronger producers absorb financially weakened competitors. In neoclassical models, which assume competitiveness in the economy, this second circumstance is associated with the temporary cyclical crisis. This is the crisis that breaks out from time to time in periods of extended oversupply or weakened demand. In reality, contrary to the assumptions of the neoclassical economic models, the impacts of monopolies, oligopolies, and cartels have entirely displaced those of free competition and have become normal rather than the exception. Under such conditions, lowering unit costs of production (and thereby expansion of unit profit) by increasing output Q per unit time t, i.e., by driving dQ/dt unconditionally in a positive direction, is no longer an occasional and exceptional tactical opportunity. It is a permanent policy option: monopolies, oligopolies, and cartels manipulate supply and demand because they can. Note that premise (b) points to how, where, and why consciousness of the unsustainability of the present order can emerge. Continuing indefinitely to refine nature out by substituting ever more elaborate chemical "equivalents," hitherto unknown in the natural environment, has started to take its toll. The narrow concerns of the owners and managers of production are at odds with the needs of society. Irrespective of the private character of their appropriation of the fruits of production, based on concentrating so much power in so few hands, production has become far more social. The industrial-scale production of all goods and services as commodities has spread everywhere from the metropolises of Europe and North America to the remotest Asian countryside, the deserts of Africa, and the jungle regions of South America. This economy is not only global in scope but also social in its essential character. Regardless of the readiness of the owners and managers to dismiss and abdicate responsibility for the environmental and human health costs of their unsustainable approach, these costs have become an increasingly urgent concern to societies in general. In this regard, the HSS®A® principle becomes a key and most useful guideline for sorting what is truly sustainable for the long term from what is undoubtedly unsustainable. The human being that is transformed further into a mere consumer of products is a being that is marginalized from most of the possibilities and potentialities of the fact of his/her existence. This marginalization is an important feature of the HSS®A® principle.
There are numerous things that individuals can do to modulate, or otherwise affect, the intake of honey and its impacts, but there's little - indeed, nothing - that one can do about Aspartame® except drink it. With some minor modification, the HSS®A® principle helps illustrate how the marginalization of the individual's participation is happening in other areas. What has been identified here as the HSS®A® principle, or syndrome, continues to unfold attacks against both the increasing global striving toward true sustainability on the one hand, and the humanization of the environment in all aspects, societal and natural, on the other. Its silent partner is the aphenomenal model, which invents justifications for the unjustifiable and for "phenomena" that have been picked out of thin air. As with the aphenomenal model, repeated and continual detection and exposure of the operation of the HSS®A® principle is crucial for future progress in developing nature-science, the science of intangibles and true sustainability. Table 18.1 summarizes the outcome of the HSS®A® pathway. While this pathway is less than a century old, the same pathway was implemented some millennia ago and has influenced modern-day thinking over the last millennium. This is valid for all aspects of life ranging from education to economics.
Table 18.1 The HSS®A® pathway and its outcome in various disciplines.

Natural state | 1st stage of intervention | 2nd stage of intervention | 3rd stage of intervention
Honey | Sugar | Saccharin® | Aspartame®
Education | Doctrinal teaching | Formal education | Computer-based learning
Science | Religion | Fundamentalism | Cult
Science and nature-based technology | New Science | Engineering | Computer-based design
Value-based (e.g., gold, silver) economy | Coins (non-gold or silver) | Paper money (disconnected from gold reserve) | Promissory note (electronic)
18.3 HSS®A® Pathway in Energy Management
If the first premise of "nature needs human intervention to be fixed" is changed to "nature is perfect," then engineering should conform to natural laws. This book presents a detailed discussion of how this change in the first premise helps answer all the questions that remained unanswered regarding the impacts of petroleum operations. It also helps expose the false, but deeply rooted, perception that nuclear, electrical, photovoltaic, and "renewable" energy sources are "clean" and carbon-based energy sources are "dirty." This book establishes that crude oil, being the finest form of nature-processed energy source, has the greatest potential for environmental good. The only difference between solar energy (used directly) and crude oil is that crude oil is concentrated and can be stored, transported, and re-utilized without resorting to HSS®A® degradation. Of course, the conversion of solar energy through photovoltaics creates technological (low efficiency) and environmental (toxicity of synthetic silicon and battery components) disasters. Similar degradation takes place for other energy sources as well. Unfortunately, crude oil, an energy equivalent of honey, has been promoted as the root of the environmental disaster (global warming and its consequences; e.g., CNN 2008). Ignoring the HSS®A® pathway that crude oil has suffered has created paradoxes, such as "carbon is the essence of life and also the agent of death" and "enriched uranium is the agent of death and also the essence of clean energy." These paradoxes are removed if the HSS®A® pathway is understood. Table 18.2 shows the HSS®A® pathway that is followed in some of the energy management schemes.
Table 18.2 The HSS®A® pathway in energy management schemes.

Natural state | 1st stage of intervention | 2nd stage of intervention | 3rd stage of intervention
Honey | Sugar | Saccharin® | Aspartame®
Crude oil | Refined oil | High-octane refining | Chemical additives for combating bacteria, thermal degradation, weather conditions, etc.
Solar | Photovoltaics | Storage in batteries | Re-use in artificial light form
Organic vegetable oil | Chemical fertilizer, pesticide | Refining, thermal extraction | Genetically modified crop
Organic saturated fat | Hormone, antibiotic | Artificial fat (trans fat) | No-trans-fat artificial fat
Wind | Conversion into electricity | Storage in batteries | Re-usage in artificial energy forms
Water and hydro-energy | Conversion into electricity | Dissociation utilizing toxic processes | Recombination through fuel cells
Uranium ore | Enrichment | Conversion into electrical energy | Re-usage in artificial energy forms
One important feature of these technologies is that nuclear energy is the only one that does not have a known alternative to the HSS®A® pathway. However, nuclear energy is also being promoted as the wave of the future for energy solutions, showing once again that every time we encounter a crisis we come up with a worse solution than what caused the crisis in the first place. It is important to note that the HSS®A® pathway has been a lucrative business because most of the profit is made using this mode. This profit also comes with disastrous consequences for the environment. Modern-day economics does not account for such long-term consequences, making it impossible to pin down the real cost of this degradation. In this book, the intangibles that caused the technological and environmental disasters are explicitly pointed out, both in engineering and in economics. As an outcome of this analysis, the entire problem is re-cast in developing the true science and economics of nature that would bring back the old principle of value proportional to price. This is demonstrated in Figure 18.1. This figure can be related to Table 18.2 in the following way:
• Natural state of economics = economizing (waste minimization, meaning "minimization" and "ongoing intention" in the Arabic term)
• First stage of intervention = move from intention-based to interest-based
• Second stage of intervention = make wasting the basis of economic growth
• Third stage of intervention = borrow more from the future to promote the second stage of intervention

[Figure 18.1 Economics and accounting systems have to be reformulated in order to make stated value proportional to real value: real value (sustainable pricing) belongs to the natural state, while artificial value (unsustainable pricing) leads toward the artificial state.]

The above model is instrumental in turning a natural supply-and-demand economic model into an unnatural perception-based model. This economic model then becomes the driver of the engineering model, closing the loop of the unsustainable mode of technology development.
18.4 The Conclusions

Based on the analyses presented in this book, the following conclusions can be drawn:
1. Crude oil and natural gas are the best sources of energy available to mankind. They have the potential of becoming the agent of positive change to the environment.
2. The environmental consequences that have been observed ever since the golden era of petroleum production are not inherent to natural crude oil but are due in part to the processes involved in petroleum refining, gas processing, and material manufacturing (e.g., plastic).
3. Environmental consequences of all other alternatives to fossil fuel are likely to be worse in the long run. This is not obvious because engineering calculations as well as economic analyses do not account for the entire history (the continuous time function or intangibles) of the process. If intangibles were included, other alternatives would clearly become less efficient and more harmful than petroleum products, even if the current mode of petroleum operations continues.
4. By changing the current mode of petroleum operations (particularly in treating petroleum products with artificial chemicals) to sustainable processes, the current trend of environmental disasters can be effectively reversed.
5. By including the intangibles, new models of energy pricing would increase profitability as the real value of the final product is increased, as opposed to the current model that sees the highest profit margin for the most toxically processed products.
6. The phenomenal, knowledge-based model presented in this book has the potential of revolutionizing the current mode of technology development and economic calculations.
Appendix 1 Origin of Atomic Theory as Viewed by the European Scientists

From website: Chemistry encyclopedia, at http://www.chemistryexplained.com/Ar-Bo/Atoms.html, accessed October 24, 2008.
Atoms

An atom is the smallest possible unit of an element. Since all forms of matter consist of a combination of one or more elements, atoms are the building blocks that constitute all the matter in the universe. Currently, 110 different elements, and thus 110 different kinds of atoms, are known to exist. Our current understanding of the nature of atoms has evolved from the ancient, untested ideas of Greek philosophers, partly as a result of modern technology that has produced images of atoms.
The Greek Atomistic Philosophy

The earliest ideas concerning atoms can be traced to the Greek philosophers, who pursued wisdom, knowledge, and truth through argument and reason. Greek scientific theories were largely based
on speculation, sometimes based on observations of natural phenomena and sometimes not. The idea of designing and performing experiments rarely occurred to Greek philosophers, to whom abstract intellectual activity was the only worthy pastime. Empedocles, a Greek philosopher active around 450 B.C., proposed that there were four fundamental substances—earth, air, fire, and water—which, in various proportions, constituted all matter. Empedocles thus formulated the idea of an elemental substance, a substance that is the ultimate constituent of matter; the chemical elements are modern science's fundamental substances. An atomic theory of matter was proposed by Leucippus, another Greek philosopher, around 478 B.C. Our knowledge of the atomic theory of Leucippus is derived almost entirely from the writings of his student, Democritus, who lived around 420 B.C. Democritus maintained that all materials in the world were composed of atoms (from the Greek atomos, meaning indivisible). According to Democritus, atoms of different shapes, arranged and positioned differently relative to each other, accounted for the different materials of the world. Atoms were supposed to be in random perpetual motion in a void; that is, in nothingness. According to Democritus, the feel and taste of a substance was thought to be the effect of the atoms of the substance on the atoms of our sense organs. The atomic theory of Democritus provided the basis for an explanation of the changes that occur when matter is chemically transformed. Unfortunately, the theory was rejected by Aristotle (384-322 B.C.), who became the most powerful and famous of the Greek scientific philosophers. However, Aristotle adopted and developed Empedocles's ideas of elemental substances. Aristotle's elemental ideas are summarized in a diagram (shown in Figure 1), which associated the four elemental substances with four qualities: hot, moist, cold, and dry. Earth was dry and cold; water was cold and moist; air was moist and hot; and fire was hot and dry. Every substance was composed of combinations of the four elements, and changes (which we now call chemical) were explained by an alteration in the proportions of the four elements. One element could be converted into another by the addition or removal of the appropriate qualities. There were, essentially, no attempts to produce evidence to support this four-element theory, and, since Aristotle's scientific philosophy held sway for 2,000 years, there was no progress in the development of the atomic concept. The tenuous relationship between elements and atoms had been severed when Aristotle rejected the ideas of
Democritus. Had the Greek philosophers been open to the idea of experimentation, atomic theory, indeed all of science, could have progressed more rapidly.

[Figure 1 Aristotle's four-element diagram: the elements fire, air, water, and earth arranged among the qualities hot, moist, cold, and dry.]
The Rise of Experimentation

The basis of modern science began to emerge in the seventeenth century, which is often recognized as the beginning of the Scientific Revolution. Conceptually, the Scientific Revolution can be thought of as a battle between three different ways of looking at the natural world: the Aristotelian, the magical, and the mechanical. The seventeenth century saw the rise of experimental science. The idea of making observations was not new. However, Sir Francis Bacon (1561-1626) emphasized that experiments should be planned and the results carefully recorded so they could be repeated and verified, an attitude that infuses the core idea of modern science. Among the early experimentalists was Robert Boyle (1627-1691), who studied quantitatively the compression and expansion of air, which led him to the idea that air was composed of particles that he called corpuscles, which he maintained were in constant motion. Boyle's description of corpuscular motion presages the kinetic molecular theory.
The Chemical Atom

An atomic theory based on chemical concepts began to emerge from the work of Antoine Lavoisier (1743-1794), whose careful
quantitative experiments led to an operational definition of an element: an element was a substance that could not be decomposed by chemical processes. In other words, if a chemist could not decompose a substance, it must be an element. This point of view obviously put a premium on the ability of chemists to manipulate substances. Inspection of Lavoisier's list of elements, published in 1789, shows a number of substances, such as silica (SiO₂), alumina (Al₂O₃), and baryta (BaO), which today are recognized as very stable compounds. The chemists of Lavoisier's time simply did not have the tools to decompose these substances further to silicon, aluminum, and barium, respectively. The composition of all compounds could be expressed in terms of the elemental substances, but it was the quantitative mass relationship of compounds that was the key to deducing the reality of the chemical atom. Lavoisier's successful use of precise mass measurements essentially launched the field of analytical chemistry, which was thoroughly developed by Martin Klaproth (1743-1817). Lavoisier established the concept of mass conservation in chemical reactions, and, late in the eighteenth century, there was a general acceptance of the concept of definite proportions (constant composition) in chemical compounds, but not without controversy. Claude-Louis Berthollet (1748-1822) maintained that the composition of compounds could be variable, citing, for example, analytical results on the oxides of copper, which gave a variety of results depending on the method of synthesis. Joseph-Louis Proust (1754-1826), over a period of eight years, showed that the variable compositions, even with very accurate analytical data, were due to the formation of different mixtures of two oxides of copper, CuO and Cu₂O. Each oxide obeyed the law of constant composition, but reactions that were supposed to lead to "copper oxide" often produced mixtures, the proportions of which depended on the conditions of the reaction. Proust's proof of the law of constant composition was important, because compounds with variable composition could not be accommodated within the evolving chemical atomic theory.
Democritus of Abdera

Little is known for certain about Democritus of Abdera (c. 460 B.C.E.-c. 362 B.C.E.). None of his writings has survived intact. It is known from others (both students and detractors) that Democritus was one of the earliest advocates of a theory that all matter exists as collections of very small, permanent, indivisible particles called atoms.
- David A. Bassett

John Dalton (1766-1844), a self-educated English scientist, was primarily interested in meteorology and is credited with being the first to describe color blindness, a condition with which he was burdened throughout his life. Color blindness is a disadvantage for a chemist, who must be able to see color changes when working with chemicals. Some have suggested that this affliction was one reason why Dalton was a rather clumsy and slipshod experimenter. Gaseous behavior had been well established, starting with the experiments of Boyle. Dalton could not help supposing, as others previously did, that gaseous matter was composed of particles. But Dalton took the next and, ultimately, most important step in assuming that all matter, whether gaseous, liquid, or solid, consists of these small particles. The law of definite proportions (constant composition), as articulated by Proust, suggested to Dalton that a compound might contain two elements in the ratio of, for example, 4 to 1, but never 4.1 to 1 or 3.9 to 1. This observation could easily be explained by supposing that each element was made up of individual particles. Dalton's atomic theory can be succinctly summarized by the following statements:

- Elements are composed of extremely small particles called atoms.
- All atoms of a given element have identical properties, and those properties differ from those of other elements.
- Compounds are formed when atoms of different elements combine with one another in small whole numbers.
- The relative numbers and kinds of atoms are constant in a given compound.

Dalton recognized the similarity of his theory to that of Democritus, advanced twenty-one centuries earlier, when the Greek philosopher called these small particles atoms and, presumably, implied by using that word that these particles were indivisible. In Dalton's representation (Figure 2) the elements were shown as small spheres, each with a separate identity. Compounds of elements were shown by combining the appropriate elemental representations in the correct proportions, to produce complex symbols that seem to echo
[Figure 2: Dalton's atomic symbols. The simplest symbols denote elements; the increasingly complex combinations of symbols represent binary, ternary, quaternary, etc., compounds. Thus, number 28 is a compound atom of carbonic acid (carbon dioxide), and number 31 is a compound atom of sulphuric acid (sulphur trioxide).]
our present use of standard chemical formulas. Dalton's symbols, circles with increasingly complex inserts and decorations, were not adopted by the chemical community. Current chemical symbols (formulas) are derived from the suggestions of Jöns Berzelius (1779-1848). Berzelius also chose oxygen to be the standard reference for atomic mass (O = 16.00 AMU). Berzelius produced a list of atomic masses that were much closer to those that are currently accepted because he had developed a better way to obtain the formulas of substances. Whereas Dalton assumed that water had the formula HO, Berzelius showed it to be H2O. The property of atoms of interest to Dalton was their relative masses, and Dalton produced a table
Table 1 Dalton's first set of atomic weight values (1805).

Hydrogen                                    1
Azot                                        4.2
Carbon                                      4.3
Ammonia                                     5.2
Oxygen                                      5.5
Water                                       6.5
Phosphorus                                  7.2
Phosphuretted hydrogen                      8.2
Nitrous gas                                 9.3
Ether                                       9.6
Gaseous oxide of carbon                     9.8
Nitrous oxide                              13.7
Sulphur                                    14.4
Nitric acid                                15.2
Sulphuretted hydrogen                      15.4
Carbonic acid                              15.3
Alcohol                                    15.1
Sulphureous acid                           19.9
Sulphuric acid                             25.4
Carburetted hydrogen from stagnant water    6.3
Olefiant gas                                5.3
of atomic masses (Table 1) that was seriously deficient because he did not appreciate that atoms did not have to combine in a one-to-one ratio; using more modern ideas, Dalton assumed, incorrectly, that all atoms had a valence of one (1). Thus, if the atomic mass of hydrogen is arbitrarily assigned to be 1, the atomic mass of oxygen is 8 on the Dalton scale. Dalton, of course, was wrong, because a water molecule contains two atoms of hydrogen for every oxygen atom, so that the individual oxygen atom is eight times as heavy as two hydrogen atoms, or sixteen times as heavy as a single hydrogen atom. There was no way that Dalton could have known, from the data available, that the formula for water is H2O.
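The arithmetic behind Dalton's error fits in a few lines. A minimal sketch (assuming the roughly 8-to-1 oxygen-to-hydrogen mass ratio for water known in Dalton's day; the function name is ours, not Dalton's): the atomic mass deduced for oxygen depends entirely on the formula one assumes for water.

    # Relative atomic mass of oxygen deduced from water, as a function
    # of the assumed formula (hydrogen = 1 by definition).
    MASS_RATIO_O_TO_H = 8.0  # ~8 g of oxygen per 1 g of hydrogen in water

    def oxygen_mass(n_h, n_o):
        """Oxygen mass implied by a water formula with n_h H and n_o O atoms."""
        return MASS_RATIO_O_TO_H * n_h / n_o

    print("HO :", oxygen_mass(1, 1))   # Dalton's assumption -> O = 8
    print("H2O:", oxygen_mass(2, 1))   # correct formula     -> O = 16

The analytical data constrain only the product of formula and atomic mass; without an independent way to fix the formula, Dalton's scale was bound to be ambiguous.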
Dalton's atomic theory explained the law of multiple proportions. For example, it is known that mercury forms two oxides: a black substance containing 3.8 percent oxygen and 96.2 percent mercury, and a red compound containing 7.4 percent oxygen and 92.6 percent mercury. Dalton's theory states that the atoms of mercury (Hg) and oxygen (O) must combine in whole numbers, so the two compounds might be HgO and Hg2O, for example. Furthermore, Dalton's theory states that each element has a characteristic mass, perhaps 9 mass units for Hg and 4 mass units for O (the numbers are chosen arbitrarily here). Given these assumptions, the relevant concepts are shown in Table 2. The assumed formulas are presented in line 1. The percent composition of each compound, calculated in the usual way, is presented in line 3, showing that these two compounds, indeed, have different compositions, as required by the law of multiple proportions. Line 4 contains the ratio of the mass of mercury to the mass of oxygen for each compound. Those ratios can be expressed as the ratio of simple whole numbers (2.25:4.5 = 1:2), fulfilling a condition required by the law of multiple proportions. Notice that Dalton's ideas do not depend upon the values assigned to the elements or the formulas for the compounds involved. Indeed, the question as to which compound, red or black, is associated with which formula cannot be answered from the data available. Thus, although Dalton was unable to establish an atomic mass scale, his general theory did provide an understanding of the three mass-related laws: conservation, constant composition, and multiple proportions. Other information was required to establish the relative masses of atoms.

Table 2 Law of multiple proportions.

Assumed formula           HgO                       Hg2O
Total mass of compound    9 + 4 = 13                9 + 9 + 4 = 22
% composition             % Hg = 69.2; % O = 30.8   % Hg = 81.8; % O = 18.2
Mass Hg / Mass O          9/4 = 2.25                18/4 = 4.5
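The whole-number ratio that the law demands can be checked directly from the percent compositions of the two real mercury oxides quoted above. A short sketch (using only the 3.8 and 7.4 percent oxygen figures from the text; no formulas or atomic masses are assumed):

    # Law of multiple proportions checked against the two oxides of mercury.
    oxides = {
        "black oxide": {"O": 3.8, "Hg": 96.2},  # percent by mass
        "red oxide":   {"O": 7.4, "Hg": 92.6},
    }

    # Mass of oxygen combined with a fixed mass (1 g) of mercury:
    o_per_hg = {name: c["O"] / c["Hg"] for name, c in oxides.items()}
    for name, r in o_per_hg.items():
        print(f"{name}: {r:.4f} g O per g Hg")

    # The two quantities stand in a simple whole-number ratio (~1:2):
    print("ratio:", o_per_hg["red oxide"] / o_per_hg["black oxide"])  # ~2.0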
The other piece of the puzzle of relative atomic masses was provided by Joseph-Louis Gay-Lussac (1778-1850), who published a paper on volume relationships in reactions of gases. Gay-Lussac made no attempt to interpret his results, and Dalton questioned the paper's validity, not realizing that the law of combining volumes was really a verification of his atomic theory! Gay-Lussac's experiments revealed, for example, that 2 volumes of carbon monoxide combine with 1 volume of oxygen to form 2 volumes of carbon dioxide. Reactions of other gaseous substances showed similar volume relationships. Gay-Lussac's law of combining volumes suggested, clearly, that equal volumes of different gases under similar conditions of temperature and pressure contain the same number of reactive particles (molecules). Thus, if 1 volume of ammonia gas (NH3) combines exactly with 1 volume of hydrogen chloride gas (HCl) to form a salt (NH4Cl), it is natural to conclude that each volume of gas must contain the same number of particles. At least one of the implications of Gay-Lussac's law was troubling to the chemistry community. For example, in the formation of water, 2 volumes of hydrogen gas combined with 1 volume of oxygen gas to produce 2 volumes of steam (water in the gaseous state). These observations produced, at the time, an apparent puzzle. If each volume of gas contains n particles (molecules), 2 volumes of steam must contain 2n particles. Now, if each water particle contains at least 1 oxygen atom, how is it possible to get 2n oxygen atoms (corresponding to 2n water molecules) from n oxygen particles? The obvious answer to this question is that each oxygen particle contains two oxygen atoms. This is equivalent to stating that the oxygen molecule consists of two oxygen atoms, or that oxygen gas is diatomic (O2). Amedeo Avogadro (1776-1856), an Italian physicist, resolved the problem by adopting the hypothesis that equal volumes of gases under the same conditions contain equal numbers of particles (molecules). His terminology for what we now call an atom of, for instance, oxygen was half molecule. Similar reasoning involving the combining volumes of hydrogen and oxygen to form steam leads to the conclusion that hydrogen gas is also diatomic (H2). Despite the soundness of Avogadro's reasoning, his hypothesis was generally rejected or ignored. Dalton never appreciated its significance because he refused to accept the experimental validity of Gay-Lussac's law. Avogadro's hypothesis (equal volumes of gases contain equal numbers of particles) lay dormant for nearly a half-century, until 1860, when a general meeting of chemists assembled in Karlsruhe, Germany, to address conceptual problems associated with determining the atomic masses of the elements. Two years earlier, Stanislao Cannizzaro (1826-1910) had published a paper in which, using Avogadro's hypothesis and vapor density data, he was able
to establish a scale of relative atomic masses of the elements. The paper, when it was published, was generally ignored, but its contents became the focal point of the Karlsruhe Conference. Cannizzaro's argument can be easily demonstrated using the compounds hydrogen chloride, water, ammonia, and methane, and the element hydrogen, which had been shown to be diatomic (H2) by using Gay-Lussac's reasoning and his law of combining volumes. The experimental values for the vapor density of these substances, all determined under the same conditions of temperature and pressure, are also required for Cannizzaro's method of establishing atomic masses. The relevant information is gathered in Table 3. The densities of these gaseous substances (at 100°C and one atmosphere pressure) are expressed in grams per liter. The masses of the substances (in one liter) are the masses of equal numbers of molecules of each substance; the specific number of molecules is unknown, of course, but that number is unnecessary for the Cannizzaro analysis. If that unknown number of molecules is called N, and if m_H represents the mass of a single hydrogen atom, then m_H x 2N is the total mass of the hydrogen atoms in the 1-liter sample of hydrogen molecules; recall that hydrogen was shown to be diatomic (H2) by Gay-Lussac's law. From this point of view, the relative masses of the molecules fall in the order of the masses in 1 liter (or their densities). The mass of the hydrogen atom was taken as the reference (H = 1) for the relative atomic masses of the elements. Thus, the mass of all the hydrogen chloride molecules in the one-liter sample is m_HCl x N, and the ratio of the mass of a hydrogen chloride molecule to a hydrogen atom is given by:

m_HCl / m_H = 2 x (m_HCl N) / (m_H 2N) = 2 x (density of HCl / density of H2) = 2 x (1.19 / 0.0659) = 36.12

Table 3 Cannizzaro's method of molecular mass determination.

Gaseous Substance   Density   Molecular Mass   % H       Relative Mass   Number of H     Formula   Mass of "Other"
                    (g/L)*    (Relative to     Present   of H Atoms      Atoms Present             Atoms
                              H = 1)                     Present         in a Molecule
Hydrogen            0.0659     2.00            100       2.00            2               H2        -
Hydrogen chloride   1.19      36.12             2.76     1.00            1               HCl       Cl = 35.12
Water               0.589     17.88            11.2      2.00            2               H2O       O = 15.88
Ammonia             0.557     16.90            17.7      3.00            3               NH3       N = 13.90
Methane             0.524     15.90            25.1      4.00            4               CH4       C = 11.90

* Density reported for conditions of 100°C and one atmosphere pressure.
That is, if the mass of a hydrogen atom is taken to be 1 unit of mass, the mass of the hydrogen chloride molecule is 36.12 units. All the molecular masses listed in column 3 of the table can be established in the same way: twice the ratio of the density of the molecule in question to the density of hydrogen. Using experimental analytical data (column 4), Cannizzaro was able to establish the relative mass of hydrogen in each molecule (column 5), which gave the number of hydrogen atoms present in each molecule of interest (column 6), which, in turn, produced the formula of the molecule (column 7); analytical data also qualitatively indicate the identity of the other atom in the molecule. Thus, analysis would tell us that, for example, methane contains hydrogen and carbon. Knowing the total mass of the molecule (column 3) and the mass of all the hydrogen atoms present, the mass of the "other atom" in the molecule can be established as the difference between these numbers (column 8). Thus, if the mass of the HCl molecule is 36.12 and one atom of hydrogen of mass 1.00 is present, the mass of a Cl atom is 35.12. Relative mass units are called atomic mass units, AMUs. This very convincing use of Gay-Lussac's law and Avogadro's hypothesis by Cannizzaro quickly provided the chemical community with a direct way of establishing not only the molecular formulas of binary compounds but also the relative atomic masses of elements, starting with quantitative analytical data and the density of the appropriate gaseous substances.
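Because the procedure is purely arithmetic, Cannizzaro's analysis can be replayed from the data in Table 3. A minimal sketch (using the tabulated densities and percent-hydrogen values, with diatomic hydrogen, H = 1, as the reference, exactly as in the text):

    # Cannizzaro's method, replayed from the Table 3 data.
    # Densities in g/L at 100 C and 1 atm; pct_h is mass percent hydrogen.
    D_H2 = 0.0659  # density of hydrogen gas (H2), the reference

    compounds = {
        "hydrogen chloride": (1.19, 2.76),
        "water":             (0.589, 11.2),
        "ammonia":           (0.557, 17.7),
        "methane":           (0.524, 25.1),
    }

    for name, (density, pct_h) in compounds.items():
        mol_mass = 2 * density / D_H2      # twice the density ratio to H2
        h_mass = mol_mass * pct_h / 100    # relative mass of H in the molecule
        n_h = round(h_mass)                # whole number of H atoms
        other = mol_mass - n_h             # mass of the remaining atom(s)
        print(f"{name:17s} M = {mol_mass:5.2f}, H atoms = {n_h}, other = {other:.2f}")

Running this reproduces the table: hydrogen chloride gives M = 36.12 with Cl = 35.12, water gives O = 15.88, ammonia N = 13.90, and methane C = 11.90.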
The long struggle to establish the concept of the chemical atom involved many scientists working in different countries using different kinds of equipment to obtain self-consistent data. All were infused with the ideas of Sir Francis Bacon, who defined the classic paradigm of experimental science: results that are derived from careful observations and that are openly reported for verification. However, not all chemists equally embraced these ideas, which were to become fundamental to their craft. For example, the great physical chemist and Nobel Prize winner Friedrich Wilhelm Ostwald (1853-1932) refused to accept the existence of atoms well into the twentieth century. Ostwald held a strong personal belief that chemists ought to confine their studies to measurable phenomena such as energy changes. The atomic theory was to Ostwald nothing more than a convenient fiction.

There are, of course, other lines of observation and argument that lead to the conclusion that matter is particulate and, subsequently, to an ultimate atomic description of matter. One of these involves the Brownian motion of very small particles. Robert Brown (1773-1858), a Scottish botanist, observed in 1827 that individual grains of plant pollen suspended in water moved erratically. This irregular movement of individual particles of a suspension, as observed with a microscope, is called Brownian motion. Initially, Brown believed that this motion was caused by the "hidden life" within the pollen grains, but further studies showed that even nonliving suspensions behave in the same way. In 1905 Albert Einstein (1879-1955) worked out a mathematical analysis of Brownian motion. Einstein showed that if the water in which the particles were suspended was composed of molecules in random motion according to the requirements of the kinetic molecular theory, then the suspended particles would exhibit a random "jiggling motion" arising from the occasional uneven transfer of momentum as a result of water molecules striking the pollen grains. One might expect that the forces of the water molecules striking the pollen grains from all directions would average out to a zero net force. But Einstein showed that, occasionally, more water molecules would strike one side of a pollen grain than the other side, resulting in a movement of the pollen grain. The interesting point in Einstein's analysis is that even if each collision between a water molecule and a pollen grain transfers a minuscule amount of momentum, the enormous number of molecules striking the pollen grain is sufficient to overcome the large momentum advantage of the pollen grain (because of its considerably larger mass than that of a water molecule).

[Photomicrograph of atoms in a tungsten crystal, magnified 2,700,000 times.]

Although the Swedish chemist Theodor Svedberg (1884-1971) suggested the general molecular explanation earlier, it was Einstein who worked out the mathematical details. Einstein's analysis of Brownian motion was partially dependent on the size of the water molecules. Three years later, Jean-Baptiste Perrin (1870-1942) set about determining the size of the water molecules from precise experimental observations of Brownian motion. In other words, Perrin assumed Einstein's equations were correct, and he made measurements of the particles' motions, which Brown had described only qualitatively. The data Perrin collected allowed him to calculate the size of water molecules. Ostwald finally yielded in his objection to the existence of atoms because Perrin had a direct measure of the effect of water molecules on macroscopic objects (pollen grains). Since water was composed of the elements hydrogen and oxygen, the reality of atoms had been experimentally proved in Ostwald's view of how chemistry should be pursued. Ostwald's reluctance to accept the chemical atom as an entity would surely have yielded to the overwhelming evidence provided by scanning tunneling microscopy (STM). Although Ostwald did not live to see it, this technique provides such clear evidence of the reality of simple atoms that even he would have been convinced.
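The quantitative link Perrin exploited is Einstein's relation for the one-dimensional mean-square displacement of a suspended sphere, <x^2> = 2Dt with D = RT/(6 pi eta r N_A). The sketch below inverts that relation to recover Avogadro's number from a measured displacement; the radius, time, and displacement used here are illustrative stand-ins of the right order of magnitude, not Perrin's actual data.

    import math

    # Einstein (1905): for a sphere of radius r in a fluid of viscosity eta,
    # <x^2> = 2 D t, with D = R T / (6 pi eta r N_A).
    # Perrin measured <x^2> and solved for N_A.
    R = 8.314      # gas constant, J/(mol K)
    T = 293.0      # temperature, K
    eta = 1.0e-3   # viscosity of water, Pa s
    r = 2.1e-7     # particle radius, m (order of Perrin's granules)

    t, msd = 30.0, 6.1e-11  # observation time (s) and mean-square displacement (m^2)

    D = msd / (2 * t)                           # diffusion coefficient, m^2/s
    N_A = R * T / (6 * math.pi * eta * r * D)   # Avogadro's number
    print(f"D   = {D:.2e} m^2/s")
    print(f"N_A = {N_A:.2e} per mole")          # ~6e23 with these inputs

With the molar mass and density of water, a value of N_A obtained this way is what let Perrin put a size on the water molecule and, with it, end the debate over the reality of atoms.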
SEE ALSO Avogadro, Amedeo; Berthollet, Claude-Louis; Berzelius, Jöns Jakob; Boyle, Robert; Cannizzaro, Stanislao; Dalton, John; Einstein, Albert; Gay-Lussac, Joseph-Louis; Lavoisier, Antoine; Ostwald, Friedrich Wilhelm; Svedberg, Theodor; Molecules.

J.J. Lagowski
Bibliography

Hartley, Harold (1971). Studies in the History of Chemistry. Oxford, U.K.: Clarendon Press.
Ihde, Aaron J. (1964). The Development of Modern Chemistry. New York: Harper and Row.
Lavoisier, Antoine; Fourier, Jean-Baptiste Joseph; and Faraday, Michael (1952). Great Books of the Western World, Vol. 45, tr. Robert Kerr and Alexander Freeman. Chicago: Encyclopedia Britannica.
Appendix 2 Nobel Prize in Physics (2008) Given for Discovering Breakdown of Symmetry

(From the website: http://www.sciencebase.com/science-blog/nobel-prize-for-physics-2008.html, last accessed on October 24, 2008)
Nobel Prize for Physics 2008

The Nobel Prize for Physics 2008 is announced here Tuesday, October 7. The Nobel Prize in Physics goes to Yoichiro Nambu (born 1921) of the Enrico Fermi Institute, University of Chicago, "for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics," and to Makoto Kobayashi (b. 1944) of the High Energy Accelerator Research Organization (KEK), Tsukuba, Japan, and Toshihide Maskawa (b. 1940) of the Yukawa Institute for Theoretical Physics (YITP), Kyoto University, Kyoto, "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature." You can read the full press release from the Nobel org here. As I mentioned in my previous post on the 2008 Nobel Prize for Medicine and Physiology yesterday, the team, led by Simon Frantz, has been using modern web 2.0 type technologies, including RSS and Twitter, to get the word out to journalists as fast as they
can. Part of the reason, apparently, was to save journalists from suffering serious F5-button finger strain at announcement time. Anyway, here's the Twitter update page - Nobel tweets. They also created a neat little widget so that we could embed the timetable into a website (see left). As you can see, the 2008 Nobel Prize for Chemistry will be announced Wednesday, October 8. I'm hoping once again for some straight chemistry, rather than bio-flavoured molecules, as this will give me a chance to get my teeth into my journalistic alma mater, as it were.
Appendix 2.1 Fascination for symmetry

(From the BBC website: http://www.bbc.co.uk/radio4/history/inourtime/inourtime_20070419.shtml, last accessed Oct. 24, 2008)

Symmetry
Today we will be discussing symmetry, from the most perfect forms in nature, like the snowflake and the butterfly, to our perceptions of beauty in the human face. There's symmetry too in most of the laws that govern our physical world. The Greek philosopher Aristotle described symmetry as one of the greatest forms of beauty to be found in the mathematical sciences, while the French poet Paul Valery went further, declaring: "The universe is built on a plan, the profound symmetry of which is somehow present in the inner structure of our intellect." The story of symmetry tracks an extraordinary shift from its role as an aesthetic model - found in the tiles in the Alhambra and Bach's compositions - to becoming a key tool to understanding how the physical world works. It provided a major breakthrough in mathematics with the development of group theory in the 19th century. And it is the unexpected breakdown of symmetry at the sub-atomic level that is so tantalising for contemporary quantum physicists. So why is symmetry so prevalent and appealing in both art and nature? How does symmetry enable us to grapple with monstrous numbers? And how might symmetry contribute to the elusive Theory of Everything?
Contributors

Fay Dowker, Reader in Theoretical Physics at Imperial College, London
Marcus du Sautoy, Professor of Mathematics at the University of Oxford
Ian Stewart, Professor of Mathematics at the University of Warwick
Appendix 2.2 CT Scan Study Was Paid for by Tobacco Companies (New York Times story)
The conclusion? Smoke all you want, we have the cure (detection = cure, no?) and we have a patent to prove you are safe.

March 26, 2008
Cigarette Company Paid for Lung Cancer Study
By Gardiner Harris

In October 2006, Dr. Claudia Henschke of Weill Cornell Medical College jolted the cancer world with a study saying that 80 percent of lung cancer deaths could be prevented through widespread use of CT scans. Small print at the end of the study, published in The New England Journal of Medicine, noted that it had been financed in part by a little-known charity called the Foundation for Lung Cancer: Early Detection, Prevention & Treatment. A review of tax records by The New York Times shows that the foundation was underwritten almost entirely by $3.6 million in grants from the parent company of the Liggett Group, maker of Liggett Select, Eve, Grand Prix, Quest and Pyramid cigarette brands. The foundation got four grants from the Vector Group, Liggett's parent, from 2000 to 2003. Dr. Jeffrey M. Drazen, editor in chief of the medical journal, said he was surprised. "In the seven years that I've been here, we have never knowingly published anything supported by" a cigarette maker, Dr. Drazen said. An increasing number of universities do not accept grants from cigarette makers, and a growing awareness of the influence that companies can have over research outcomes, even when donations are at arm's length, has led nearly all medical journals and associations to demand that researchers accurately disclose financing sources. Dr. Henschke was the foundation president, and her longtime collaborator, Dr. David Yankelevitz, was its secretary-treasurer. Dr. Antonio Gotto, dean of Weill Cornell, and Arthur J. Mahon, vice chairman of the college board of overseers, were directors.
Appendix 2.3 Problems with Nanomaterials
Micro materials that could pose major health risks

Panel issues warning for products with nanomaterials, saying tiny substances in everything from sunscreen to diesel fuel may be toxic

By Martin Mittelstaedt, The Globe and Mail [Toronto], Thu 10 July 2008, Page A10
http://www.theglobeandmail.com/servlet/story/LAC.20080710.NANO10/TPStory/?query=MARTIN+MITTELSTAEDT
A blue-ribbon scientific panel has waved a yellow flag in front of a rapidly expanding number of products containing nanomaterials, cautioning that the tiny substances might be able to penetrate cells and interfere with biological processes. The warning is contained in a report from the Council of Canadian Academies that will be released publicly today. It is one of the most authoritative to date in this country about the risks of engineered nanomaterials, which companies are adding to products ranging from sunscreens to diesel fuels. The council, which was asked by Health Canada and several other federal agencies to study the state of knowledge about these novel substances and the regulatory changes needed to oversee their use, concluded that "there are inadequate data to inform quantitative risk assessments on current and emerging nanomaterials." Their small size, the report says, may allow them "to usurp traditional biological protective mechanisms" and, as a result, possibly have "enhanced toxicological effects." Although backers of nanomaterials say they hold enormous promise for developing improved medicines and stronger and more durable products, the report cautioned that many useful items once thought to be harmless, such as polychlorinated biphenyls - the now-banned transformer oils known as PCBs - and the herbicide Agent Orange, were later determined to be extremely dangerous.
References and Bibliography Abdel-Ghani, M.S. and Davies, G.A., 1983, Simulation of Non-Woven Fibre Mats and the Application to Coalescers. Chemical Engineering Science 40: 117-129. Ackermann, R., 1961, Inductive Simplicity, Philosophy of Science 28(2): 152-61 Adelaide, 2006, Waite Solid-state NMR Facility, University of Adelaide, Australia accessed: December 30,2006. Adelman, M.A., 2001, The Clumsy Cartel: Opec's Uncertain Future, Harvard International Review, Vol. 23 (1) - Spring. Adeoti, O., Ilori, M.O., Oyebisi, T.O., and Adekoya, L.O., 2000, Engineering design and economic evaluation of a family-sized biogas project in Nigeria, Technovation 20:103-108. Agarwal, RN. and Puvathingal, J.M., 1969, Microbiological deterioration of woollen materials, Textile Research Journal, Vol. 39: 38. Agus, D. B., Vera, J. C , and Golde, D. W., 1999, Stromal Cell Oxidation: A Mechanism by Which Tumors Obtain Vitamin C. Cancer Research 59:4555-4558. Ahn, J.S., and Lee, K.H., 1986, Studies on the volatile aroma components of edible mushroom (Tricholoma matsutake) of Korea, journal of the Korean Society of Food and Nutrition, 15:253-257 [cited in BUA, 1994]. Ahnell, A., and O'Leary, H., 1997, Drilling and production discharges in the marine environment. In Environmental technology in the oil industry. Edited by S.T. Orszulik. Blackie Academic & Professional, London, U.K. pp. 181-208. 795
796
R E F E R E N C E S A N D BIBLIOGRAPHY
Akhter, }., 2002, Numerical and Experimental Modelling of Contaminant Transport in Ground Water and of Sour Gas Removal from Natural Gas Streams. Chalmers University of Technology, Göteborg, Sweden. Al-Darbi, M.M., Muntasser, Z.M., Tango M., and Islam., M.R., 2002. Control of Microbial corrosion Using Coatings and Natural Additives. Energy Sources 24(11), p. 1009. Al-Darbi, M., Muntasser, Z., Tango, M., and Islam, M.R., 2002, Control of microbial corrosion using coatings and natural additives. Energy Sources, 24(11):1009-18. Al-Darbi, M.M., Saeed N.O., and Islam, M.R., 2002a, Biocorrosion and Microbial Growth Inhibition using Natural Additives. The 52rd Canadian Chemical Engineering Conference (CSChE 2003). Vancouver, BC, Canada. Al-Darbi, M.M.,Saeed, N.O., Ackman, R.G., Lee,K., and Islam,M.R.,2002b, "Vegetable And Animal Oils Degradation In Marine Environments", Proc. Oil and Gas Symposium, CSCE Annual Conference, refereed proceeding, Moncton, June. Al-Darbi, M.M., Saeed, N.O., Islam, M.R., and Lee, K., 2005, "Biodegradation of Natural Oils in Sea Water", Energy Sources, vol. 27, no. 1-2,19-34. Al-Maghrabi, I., Bin-Aqil, A.O., Chaalal, O., and Islam, M.R., 1999, "Use of Thermophilic Bacteria for Bioremediation of Petroleum Contaminants", Energy Sources, vol. 21 (1/2), 17-30. Al-Marzouki, M., 1999, Determining Pore Size Distribution of Gas Separation Membranes from Adsorption Isotherm data. Energy Sources, Vol. 21:(l-2), 31-38. Al-Sulaiman, F.A. and Ahmed, Z., 1995. The Assessment of Corrosion Damage to Automobiles in the Eastern Coast Area of Saudi Arabia. Proceedings of the Institution of Mechanical Engineers -D-Jnl of Automobile Engineering 209(1), p. 3. AlAdhab, H., Kocabas, I., Islam, M.R., 1998, "Field-Scale Modeling of Asphaltene Transport in Porous Media", paper SPE paper no 49557, Proc. SPE ADIPEC 98, Abu Dhabi, UAE. AlDarbi, M.M., Agha, K.R., and Islam, M.R., 2004b, "Natural Additives for Corrosion Prevention", Int. Eng. Conf., Jordan, April. AlDarbi, M.M., Agha, K.R., Hoda, R. and Islam, M.R., 2004a, "A Novel Method for Preventing Corrosion", Int. Eng. Conf., Jordan, April. AlFalahy, M.A., Abou-Kassem, J.H., Chakma, A., Islam, M.R., 1998, 'Sour Gas Processing, Disposal and Utilization as Applied in UAE Reservoirs', SPE paper 49504 presented at the 8th Abu Dhabi International Petroleum Exhibition and Conference, Abu Dhabi, UAE, October 11-14. Alguacil, D.M., Fischer, P., and Windhab, E.J., 2006, Determination of the Interfacial Tension of Low Density Difference Liquid-Liquid Systems
R E F E R E N C E S A N D BIBLIOGRAPHY
797
Containing Surfactants by Droplet Deformation Methods, Chemical Engineering Science, Vol. 61:1386-1394. Ali, M. and Islam, M.R., 1998, "The Effect of Asphaltene Precipitation on Carbonate Rock Permeability: An Experimental and Numerical Appraoch", SPE 38856, SPE Production & Facilities, Aug., 78-83. Aliev, R.R., and Lupina, M.I., 1995, Formulation of cracking catalyst based on zeolite and natural clays, Chemistry and Technology of Fuels and Oils, Volume 31, Number 4, April, 150-154. Allen,C.A.W., 1998, Prediction of Biodiesel Fuel Atomization Characteristics Based on Measured Properties. Ph.D. Thesis, Faculty of Engineering, Dalhousie University, pp 200. Anfiteatro, D.N., 2007, Dom's Tooth-Saving Paste, accesssd: March 22, 2007. Annual Book of ASTM Standards. 2002. Paints, Related Coatings, and Aromatics (vol. 06.01). Anon, 1996, Greenhouse Issues, Newsletter of the IEA Greenhouse Gas R & D Program, Number 22, January 1996. Anonymous, 2008, http://www.mc.vanderbilt.edu/biolib/hc/journeys/ book7.html, accessed, Nov., 2008. AP, 2004, "FDA approves leeches as medical devices", available on website: http://www.msnbc.msn.com/id/5319129/, last accessed on December 12, 2008. AP, 2006, Nuclear Tower Crumbles on Purpose Sunday, May 21, (www. cbsnews.com/Stories/2006/05/21 / National/Mainl638433.html). AP, 2006b, Nuclear Tower Crumbles on Purpose Sunday, May 21,2006, (www. cbsnews.com/Stories/2006/05/ 21 /National/Mainl638433.Shtml) Appleman, B.R., 1992. Predicting Exterior Marine Performance of Coatings from Salt Fog: Two Types of Errors, journal of Protective Coatings and Linings, October, p. 134. Argonne National Laboratory. 2005, Natural Decay Series: Uranium, Radium, and Thorium. Human Health Fact Sheet, August, 2005. Ariew, R. and Barker, P., 1986, Duhem on Maxwell: Case-Study in Interrelations of History of Science & Philosophy of Science, Proceedings of the Biennial Meetings of the Philosophy of Science Association I "Contributed Papers" pp 145-156. Askey, A., Lyon, S.B., Thompson, G.E., Johnson, J.B., Wood, G.C., Cooke, M., and Sage, P., 1993. The Corrosion of Iron and Zinc by Atmospheric Hydrogen Chloride. Corrosion Science 34, p. 233. ASM International Handbook. 1987. Metals. ASM (vol. 13). Astrita, G., Savage, D., and Bisio, A., 1983, Gas Treating with Chemical Solvents. John Wiley and Sons, New York. ATSDR Agency for Toxic Substances and Disease Registry, 2006, Medical Management Guide Line for Sodium Hydroxide, accessed: June 07,2006.
798
R E F E R E N C E S A N D BIBLIOGRAPHY
Attanatho, L., Magmee, S., and Jenvanipanjakil, P., 2004, The Joint International Conference on 'Sustainable Energy and Environment'. 1-3 December 2004, Huan Hin, Thailand. Attridge, T.H., 1990, Light and Plant Responses: A Study of Plant Photophysiology and the Natural Environment, Routledge, Chapman and Hall, Inc., New York, NY, USA. Ayala, J., Blanco, F., Garcia, P., Rodriguez, P., and Sancho, J., 1998, Asturian Fly Ash as a Heavy Metals Removal Material, Fuel, Vol. 77 11):1147-1154. Bachu, S., Gunter, W.D., and Perkins, E.H., 1994, Aquifer Disposal of C0 2 : Hydrodynamical and Mineral trapping. Energy Conversion and Management, volume 35,264-279. Bailey, R.T. and McDonald M.M., 1993, 'C0 2 Capture and Use for EOR in Western Canada, General Overview', Energy Conversion and Management, Volume 34, no. 9-11 1145-1150. Baillie-Hamilton, P., 2004, The Body Restoration Plan: Eliminate Chemical Calories and Repair Your Body's Natural Slimming System, Penguin Group, New York, 292 pp. Baird, Jr, W.C., 1990, Novel Platinum-Iridium Refining Catalysts, US Patent no. 4966879, Oct. 30. Baker, K.H. and Herson, D.S., 1990,"In situ biodegradation of contaminated aquifers and subsurface soils", Geomicrobiology } . vol. 8, 133-146. Bamhart, W.D. and Coulthard, C , 1995, 'Weyburn C 0 2 Miscible Hood Conceptual Design and Risk Assessment, 6lh Petroleum Conference of the South Saskatchewan Section, Petroleum Society of CIM, Regina, Oct. 16-18. Bansal, A. and Islam, M.R., 1994, 'Scaled Model Studies of Heavy Oil Recovery from an Alaskan Reservoir Using Gravity-assisted Gas Injection', /. Can. Pet. Tech. Vol. 33, no. 6,52-62. Barnwal, B.K and Sharma, M.P., 2005, Prospects of biodiesel production from vegetable oils in India: Renewable and Sustainable Energy Reviews, Vol.9: 363-378. Basu, A., Akhter, J., Rahman, M.H., and Islam, M.R., 2004, "A review of separation of gases using membrane systems", / Pet Set Tech, vol. 22, no. 9-10,1343-1368. Basu, A., White, R.L., Lumsden, M.D., Bisop, P., Butt, S., Mustafiz, S., and Islam, M.R, 2007, Surface Chemistry of Atlantic Cod Scale, /. Nature Science and Sustainable Technology, Vol. 1, no. 1:69-78. BBC, 2006, US and India seal nuclear accord, March 02, 2006, Thursday. http://news.bbc.co.Uk/2/hi/south_asia/4764826.stm (Accessed on January 09,2007). BBC, 2007, Key gene work scoops Nobel Prize, Oct. 8, available on h t t p : / / news.bbc.co.uk/2/low/health/733491.stm, last accessed Dec. 12, 2008.
R E F E R E N C E S A N D BIBLIOGRAPHY
799
BBC, 2008, Oil sets fresh record above $109. http://news.bbc.co.Uk/2/hi/ business/7289070.stm. Bellassai, S.J., 1972, Coating Fundamentals. Materials Performance, no. 12, p. 33. Bentsen, R.G., 1985, Ά New Approach to Instability Theory in Porous Media', Soc. Pet. Eng. ]., Oct, 765-779. Bergman, P.D., Drummond, C.J., Winter, E.M., and Chen, Z-Y, 1996, 'Disposal of Power Plant CO, in Depleted Oil and Gas Reservoirs in Texas', Proceedings of the third International Conference on Carbon Dioxide Removal, Massachusetts Institute of Technology, Cambridge, MA, USA, 9-11 Sept. Bertel, E. and Morrison, R, 2001, Nuclear Energy Economics in a sustainable Development Perspective. NEA News-No. 19.1, pp 14-17. Washington DC.US Department of Energy. Beveridge T.J. and Fyfe, W.S., 1985, Metal Fixation by Bacteria Cell Walls, Canada Journal of Earth Science, vol. 22, pp 1893-1898. Beyer, K.H., Jr., Bergfeld, W.F., Berndt, W.O., Boutwell, R.K., Carlton, W.W., Hoffmann, D.K., and Schroeter, A.L.,1983, Final report on the safety assessment of triethanolamine, diethanolamine, and monoethanolamine. /. Am. Coll. Toxicol. 2,183-235. Bezdek, R.H., 1993, The environmental, health, and safety implications of solar energy in central station power production, Energy, Volume 18, Issue 6, June, 681-685. Bjorndalen, N., Mustafiz, S., and Islam, M.R., 2005, No-flare design: converting waste to value addition. Energy Sources, 27(4), 371-80. Blank, L.T. and Tarquin, A.J., 1983, Engineering economy, McGraw-Hill, Inc. NY. USA. Blomstrom, D.C. and Beyer, E.M. Jr., 1980, Plants metabolise ethylene to ethylene glycol. Nature, 283:66-68. Boehman, A.L., 2005, Biodiesel production and processing. Fuel processing technology 86:1057-1058. Bone III, L. 1989, Accelerated Testing of Atmospheric Coatings for Offshore Structures. Material Performance, November, p. 31. Boocock, D.G.B., Konar, S.K., Mao, V. and Sidi, H., 1996, Fast one-phase oilrich processes for the preparation of vegetable oil methyl esters, Biomass Bioenergi/, vol. 11,43-50. Boocock, D.G.B., Konar, S.K., Mao, V., Lee, C , and Buligan, S., 1998, Fast formation of high purity methyl esters from vegetable oils. ]AOCS 75 (9):1167-1172. Bork, A.M., 1963. "Maxwell, Displacement Current, and Symmetry", American Journal of Physics 31: 854-9. Bork, A. M. (1967). "Maxwell and the Vector Potential", Isis 58(2): 210-22. Boyle, G., Everett, B., and Ramage, J. (ed.), 2003, Energy Systems and Sustainability, Power for a Sustainable Future. Oxford University Press Inc., New York, 2003.
800
R E F E R E N C E S A N D BIBLIOGRAPHY
Brecht, B., 1947. Selected Poems. New York: Harcourt-Brace. Translations by H.R. Hays. Brenner, D.J. and Hall, E.J., 2007, "Computer Tomography - An Increasing Source of Radiation Exposure", New England Journal of Medicine, issue 357, 2277-2284. Budge, S.M., Iverson, S.J., and Koopman, H.N., 2006, Studying trophic ecology in marine ecosystems using fatty acids: A primer on analysis and interpretation. Marine Mammal Science, In press. Buisman, C.J.N., Post, R., Ijspreet, P., Geraats, S., and Lettinga, G., 1989, Biotechnological process for sulphide removal with sulphur reclamation. Ada Biotechnologica 9:271-283. Bunge, Mario., 1962. "TheComplexity of Simplicity", Journal of Philosophy 59(5): 113-35. Burk, J.H., 1987, Comparison of Sodium Carbonate, Sodium Hydroxide, and Sodium Orthosilicate for EOR, SPE Reservoir Engineering, 9-16. Butler, N., 2005, The Global Energy Challenge. Council on Foreign Relations, the Corporate Conference, New York, N.Y., March 11, 2005. Calle, S., Klaba, L., Thomas, D., Perrin, L., and Dufaud, O., 2005, Influence of the size distribution and concentration on wood dust explosion: Experiments and reaction modeling, Powder Technology, Vol. 157 (1-3), September, pp. 144-148. Cameco. U., 2007, 101- Nuclear Energy., May, 2006, www.cameco.com/ uranium_101 /nuclear__electricity (Acessed on June 30,2007). Campbell, T.C., 1977, A Comparison of Sodium Orthosilicate and Sodium Hydroxide for Alkaline Waterflooding, Journal for Petroleum Technology, SPE 6514:1-8. Canadian Gas Potential Committee 1997, 'Natural Gas Potential in Canada'. Canakci, M., 2007, The Potential of Restaurant Waste Lipids as Biodiesel Feedstocks, Bioresource Technology 98:183-190. Carraretto, C , Macor, A., Mirandola, A., Stoppato, A., Tonon S., 2004, Biodiesel as alternative fuel: Experimental analysis and energetic evaluations. Energy 29:2195-2211. Carter, D., Darby, D., Halle, ]., and Hunt, P., 2005, How to make biodiesel. Low impact living initiative, Redfield Community, Winslow, Bucks, UK. ISBN: 0-9549171-0-3. Carter, D., Darby, D., Halle, ]., and Hunt, P., 2005, How To Make Biodiesel, Low-Impact Living Initiative, Redfield Community, Winslow, Bucks. ISSN 0-9649171-0-3. Cave Brown, Anthony. 1975, Bodyguard of Lies. New York: Harper & Row. Caveman Chemistry, 2006, July22, 2006. CEF Consultants Ltd, 1998, Exploring for Offshore Oil and Gas (Nov). No. 2 of Paper Series on Energy and the Offshore. Halifax, NS, [Accessed on May 18, 20051.
R E F E R E N C E S A N D BIBLIOGRAPHY
801
CERHR, 2003, NTP-CERHR, expert panel report on reproductive and developmental toxicity of propylene glycol. National Toxicology Program U.S. Department of Health and Human Services. NTP-CERHR-PG-03. Chaisson, E. and McMillan, S., 1997, Astronomy Today, 2nd Edition, Prentice-Hall. Chakma, 1997, 'CO z Capture Processes Opportunities for Improved Energy Efficiencies', Energy Conversion and Management, volume 38, 51-56. Chakma, A., 1996, 'Acid Gas Re-injection a practical way to eliminate C 0 2 emissions from gas processing plants', Proceedings of the third International Conference on C 0 2 Removal, Massachusetts Institute of Technology, Cambridge, MA, USA, 9-11 September, 1996. Chakma, A., 1999, Formulated solvents: new opportunities for energy efficient separation of acid gases. Energy Sources 21(1-2). Chalmers, A. F., 1973a. The Limitations of Maxwell's Electromagnetic Theory, Isis 64(4): 469-483. Chalmers, A.F., 1973b, On Learning from Our Mistakes, British Journal for the Philosophy of Science 24(2): 164-173. Chang, F.F. and Civan, F., "Practical Model for Chemically Induced Formation Damage",/. Pet. Sei. Eng., vol. 17,123-137. Chemistry Store, 2005, June 07,2006. Chengde, Z., 1995, Corrosion Protection of Oil Gas Seawater Submarine Pipeline for Bohai Offshore Oilfield. The International Meeting on Petroleum Engineering. Beijing, China. SPE 29972. Chhetri, A.B., 1997, An Experimental Study of Emissions Factors from Domestic Biomass Cookstoves. A Thesis Submitted for the Partial Fulfilment of the Requirements for the Degree of Master of Engineering, Asian Institute of Technology, Bangkok, Thailand, AIT Thesis no. ET-97-34,147 pp. Chhetri, A.B. and Islam, M.R., 2007a, Reversing Global Warming. /. Nat. Set. and Sust. Tech. l(l):79-\U. Chhetri, A.B. and Islam, M.R.,2007b, Pathway Analysisof Crude and Refined Oil and Gas. Int. Journal of Environmental Pollution. Submitted. Chhetri, A.B. and Islam, M.R., 2008, Inherently Sustainable Technology Development. Nova Science Publishers, New York, 452 pp. Chhetri, A.B., 2007, Scientific Characterization of Global Energy Sources. /. Nat. Sei. and Sust. Tech., l(3):359-395. Chhetri, A.B., 2007, Scientific characterization of global energy sources, /. Nat. Sei. and Sust. Tech, vol. 1, no. 3. Chhetri, A.B., Islam, M.R., 2009, Greening of petroleum operations. Advances in Sustainable Petroleum Engineering and Science, vol. 1, no. 1,1-35. Chhetri, A.B., Khan, M.I., and Islam, M.R., 2008, A Novel Sustainably Developed Cooking Stove. J. Nat. Sei. and Sust. Tech., 1(4), 589-602.
802
R E F E R E N C E S A N D BIBLIOGRAPHY
Chhetri, A.B., Tango, M.S., Budge, S.M., Watts, K.C., and Islam, M.R., 2008, "Non-Edible as New Sources of Biodiesel Production", Int. ]. Mol. Sei. 2008, vol. 9,169-180. Chhetri, A.B., Rahman, M.S., and Islam, M.R., 2006, Production of Truly 'Healthy' Health Products, 2nd Int. Conference on Appropriate Technology, July 12-14, Zimbabwe. Chhetri, A.B., Rahman, M.S. and Islam, M.R., 2007, Characterization of Truly 'Healthy' Health Products, / Characterization and Development of Novel MAterials, submitted. Chhetri, A.B., Watts, K.C., and Islam, M.R., 2008c, Soapnut extraction as a natural surfactant for Enhanced Oil Recovery, in press. Chhetri, A.K., Zatzman, G.M., Islam, M.R., 2008, "Book review: O.G.Sorokhtin, G.V. Chilingar and L.F. Khilyuk, 2007, Global Warming and Global Cooling, Evolution of Climate on Earth", / Nat Sei & Sust Tech. Vol. 1, No. 4, 693-698. Chouparova, E. and Philp, R.R, 1998, Geochemical monitoring of waxes and asphaltenes in oils produced during the transition from primary to secondary water flood recover, Org. Geochem., Vol. 29(13), pp. 449-461. Civan, F. and Engler, T, "Drilling Mud Filtrate Invasion-Improved Model and Solution",/. Pet. Sei. Eng., vol. 11,183-193. Clayton, M.A. and Moffat, J.W., 1999, Dynamical Mechanism for Varying Light Velocity as a Solution to Cosmological Problems, Physics Letters B, vol. 460, No.3-4, pp. 263-270. ClearTech, 2006, Industrial Chemicals, North Corman Industrial Park, Saskatoon S7L 5Z3, Canada, accessed: May08, 2006. CM AI, 2005, Chemical Market Associates Incorporated. <www.kasteelchemical.com/slide.cfm> accessed: May 20, 2006. CNN, 2008, Archeologist finds 3,000-year old Hebrew text, online-version, Oct. 30. Cockburn, Alex. 2007, "Al Gore's Peace Prize", Counterpunch (1314 October), at http://www.counterpunch.org/cockburnl0132007, html. Cohen, I. B., 1995, Newton's method and Newton's style, in Cohen & Westfall, R.S., ed., Newton: Texts, Background, Commentaries (New York: WW Norton): 126-143. Cohen, D.E., 2008, What matters, Sterling Publishing Co Inc (United States), 2008, Hardback, 336 pages. Collias, N.E and Collias, E.C., 1996, Social organization of a red junglefowl, Gallus gallus, population related to evolution theory, Animal Behaviour, Volume 51, Issue 6, pp. 1337-1354. Coltrain, D., 2002, Biodiesel: Is It Worth Considering? Risk and Profit Conference Kansas State University Holiday Inn, Manhattan, Kansas August 15-16.
REFERENCES AND BIBLIOGRAPHY
803
Connaughton, S., Collins, G., and Flaherty, V.O', 2006, Psychrophilic and mesophilic anaerobic digestion of brewery effluent: a comparative study, Water Research 40(13):2503-2510. Connemann, J., Fischer, J., 1998, Biodiesel in Europe 1998: biodiesel processing technologies, Paper presented at the International Liquid Biofuels Congress, Brazil, 15 pp. Cooke, C.E. Jr., Williams, R.E., and Kolodzie, PH., 1974, Oil Recovery by Alkaline Water Flooding, jour. Pet. Tech., 1356-1374. Correra, S., 2004, "Stepwise Construction of An Asphaltene Precipitation Model", vol. 22, issue 7&8, 943-959. Cortex, D. H. and Ladelfa, C. J., 1981, Production of synthetic crude oil from coal using the TOSCOAL pyrolysis process, Intersociety Energy Conversion Engineering Conference, 16th, Atlanta, GA, August 9-14, 1981, Proceedings. Volume 3. (A82-11701 02-44) New York, American Society of Mechanical Engineers, p. 2178-2183. Coskuner, G. and Bentsen, R.G., J. 1990, Ά Scaling Criterion for Miscible Displacements', Can. Pet. Tech., volume 29, no. 1,86-88. Craig, R.G., Eick, J.D., and Peyton, F.A., 1967, Strength Properties of Waxes at Various Temperatures and their Practical Application, journal of Dental Research, Vol. 46, pp. 300-301. Crump, K.S., 1976,"Numerical Inversion of Laplace Transforms Using Fourier Series Approximations", /. Assoc. Comput. Mach., vol. 23 (1), 89-96. Currie, D.R. and Isaacs, L.R., 2005, Impact of exploratory offshore drilling on benthic communities in the Minerva gas field, Port Campbell, Australia, Marine Environmental Research, Vol. 59:217-233. Daly, H. E., 1992, Allocation, distribution, and scale: towards an economics that is efficient, just and sustainable. Ecological Economics, Vol. 6: 185-193. Davis,R.A.,Thomson,D.H.,Malme,C.I.,andMalme,C.I.,1998,Eni;iro«me«toZ Assessment of Seismic Explorations. Canada/Nova Offshore Petroleum Board, Halifax, NS, Canada. De Esteban, F., 2002, The Future of Nuclear Energy in the European Union. Background Paper for a Speech Made to a Group of Senior Representatives from Nuclear Utilities in he Context of a "European Strategic Exchange", Brussels, 23rd May 2002. De Groot, S.J.D., 1996, Quantitative assessment of the development of the offshore oil and gas industry in the North Sea, ICES Journal of Marine Science, vol. 53,1045-1050. Deakin, S. and Konzelmann, S.J., 2004, Learning from Enron, Corporate Governance, vol 12, No. 2:134-142. Degens, E.T. and Ittekkot, V, 1982, In Situ Metal-Staining of Biological Membranes in Sediments, Nature, vol. 298, pp 262-264. Demirba, A., 2003, Biodiesel fuels from vegetable oils via catalytic and noncatalytic supercritical alcohol transesterifications and other methods, a survey. Energy Conversion and Management; Volume 44 (13):2093-2109.
804
R E F E R E N C E S A N D BIBLIOGRAPHY
Demirbas, A. 2003, Biofuels from Vegetable Oils via Catalytic and NonCatalytic Supercritical alcohol Transesterifications and Other Methods: A Survey. Energy Convers Manage 44:2099-109. Department of Environment and Heritage, 2005, Australian Government, Greenhouse Office. Fuel consumption and the environment.<www. greenhouse.gov.au/fuellabel/ environment.html> accessed on February 18, 2006. Deydier, E., Guilet, R., Sarda, S., and Sharrock, P., 2005, Physical and Chemical Characterisation of Crude Meat and Bone Meal Combustion Residue: "Waste or Raw Material?" journal of Hazardous Materials, Vol. 121(1-3):141-148. deZabala, E.F. and Radke, C.J., 1982, The Role of Interfacial Resistances in Alkaline Water Flooding of Acid Oils, paper SPE 11213 presented at the 1982 SPE Annual Conference and Exhibition, New Orleans, 26-29. Dietze, P., 1997, Little Warming with New Global Carbon Cycle Model. ESEF Vol. II, http://www.john-daly.com/carbon.htm> [accessed: January 11,2006]. Dincer, I. and Rosen, M.A., 2005, Thermodynamic aspects of renewable and sustainable development. Renewable & Sustainable Energy Reviews, 9,169-89. Dingle, H., 1950, A Theory of Measurement, The British Journal for the Philosophy of Science, Vol. 1, No. 1 (May), 5-26 . Diviacco, P. 2005, An open source, web-based, simple solution for seismic data dissemination and collaborative research. Commuters & Geosciences, 31, 599-605. Donaldson, E.C. and Chernoglazov, V, 1987, "Drilling Mud Fluid Invasion Model", /. Pet. Sei. Eng., vol. 1(1), 3-13. Donnet, M., Bowen, P., Jongen, N., Lemaitre, J. and Hofmann, H., 2005. Use of Seeds to Control Precipitation of Calcium Carbonate and Determination of Seed Nature,Langmuir, vol. 21, pp 100-108. Drake, S., 1970. "Renaissance Music and Experimental Science", journal of the History of Ideas, Vol. 31, No. 4. (October-December), 483-500. Drake, S., 1973. "Galileo's Discovery of the Law of Free Fall". Scientific American v. 228, #5, 84-92. Drake, S., 1977. "Galileo and the Career of Philosophy", Journal of the History of Ideas, Vol. 38, No. 1 (January-March), 19-32. Du, W., Xu, Y., Liu, D., Zeng, J. 2004, Comparative study on lipasecatalyzed transformation of soybean oil for biodiesel production with different acyl acceptors. Journal of Molecular Catalysis B: Enzymatic 30:125-129. Duhem, P., 1914, The Aim and Structure of Physical Theory. Princeton: Princeton U P, translation by P. Wiener for an English-language ed. published 1954.
REFERENCES AND BIBLIOGRAPHY
805
Dunn, K., 2003, Caveman Chemistry, Chapter-8, Universal Publishers, USA. Dyer, S.B., Huang, S., Farouq Ali, S.M., Jha, K.N., 1994, 'Phase Behavior and Scaled Model Studies of Prototype Saskatchewan Heavy Oils with Carbon Dioxide', /. of Can. Pet. Tech., 42-48. Eckhardt, F.E.W., 1985, Solulization, transport and deposition of mineral cations by microorganisms—Efficient rock weathering agents, In Drever, J.I. ed., The Chemistry of weathering, Dordresht, Netherlands, D. Reidd,pp 161-173.1 Ehrlich, H.L., 1974, The Formation of Ores in the Sedimentary Environment of the Sea with Microbial Participation: The Case of Ferro-Manganese Concretions, Soil Science, vol. 119, pp 36—41. Ehrlich, H.L., 1983, Manganese Oxidizing Bacteria from a Hydrothermally Active Region on the Galapogos Rift, Ecol. Bull. Stockholm, vol. 35, pp 357-366. EIA (Energy Information Administration), 2001, Annual Energy Review, 2001. El A (Energy information Administration), 2003, International Energy Annual 2003 Report. [Washington DC: U.S. Department of Energy, 2005]). EIA, 2006, Energy Information Administration/International Energy Outlook, 2006b. EIA, 2006a Energy Information Administration, System for the Analysis of Global Energy Markets (2006). International Energy Outlook 2006, Office of Integrated Analysis and Forecasting U.S. Department of Energy Washington, DC 20585, 2006a, www.eia.doe.gov/oiaf/ieo/ index.html. EIA, 2006b, Energy Information Administration, International Energy Annual, 2003 (May-July, 2005). EIA, 2008, Short-Term Energy Outlook, http://www.eia.doe.gov/emeu/ s t e o / p u b / contents.html (Accessed on March 11,2008). EIA, Annual Energy Outlook 2005, Market Trends- Energy Demand, Energy Information Administration, Environmental Issues and World Energy Use. El 30, 1000 Independence Avenue, SW, Washington, DC 20585, 2005. EIA, Nuclear Issues Paper, 2006, Energy Information Administration, Official energy Statistics from the U.S. government, 2006c, www.eia. doe.gov/cneaf/nuclear/page/nuclearenvissues.html (accessed on January 11,2007). El-Etre, A.Y., 1998. Corrosion Science 39(11), p. 1845. El-Etre, A.Y. and Abdallah. M., 2000, Corrosion Science 42(4), p. 731. Elkamel, A., Al-Sahhaf, T., and Ahmed, A.S., 2002, Studying the Interactions Between an Arabian Heavy Crude Oil and Alkaline Solutions, Journal Petroleum Science and Technology, Vol. 20 (7):789-807. Ellwood, C. A., 1931, Scientific Method in Sociology, Social Forces Vol. 10, No. 1 (October) 15-21.
806
R E F E R E N C E S A N D BIBLIOGRAPHY
Emmons, F.R., 1986, 'Nitrogen Management at the East Binger Unit Using an Integrated Cryogenic Process', SPE paper 15591 presented at the SPE Annual Technical Conference and Exhibition, New Orleans, LA, Oct. 5-8. Energy Information Administration, 2005, EIA's International Energy Outlook 2005 <www.eia.doe.gov/neic/experts/expertanswers. html > accessed on February 18, 2006. Environment Canada, 2003, Transportation and environment, Environment Canada <www.ec.gc.ca/transport/publications/biodiesel/biodiesell2. html>[Accessed:November25, 2005]. Environment Canada, 2007, Canadian Climate Normals 1971-2000 [online] Available: (http://www.climate.weatheroffice.ec.gc.ca/climate_normals/stnselec __e.html) [February 10,2007]. Environmental Defense, 2004, (Accessed on June 2, 2006). EPA, 2000, Development Document for Final Effluent Limitations Guidelines and Standards for Synthetic-Based Drilling Fluids and other Non-Aqueous Drilling Fluids in the Oil and Gas Extraction Point Source Category. EPA- 821-B-00-013, U.S. Environmental Protection Agency, Office of Water, Washington, DC 20460, December, < h t t p : / / www.epa.gov/waterscience/guide/sbf/fi nal/eng.html>. EPA, 2002, A Comprehensive analysis of biodiesel impacts on exhaust emissions. Air and radiation. Draft technical report. EPA420-P-02-001. Erol,M.,Kucukbayrak,S.,andErsoy-Mericboyu,A.,2007,Characterization of Coal Fly Ash for Possible Utilization in Glass Production, Fuel, Vol. 86:706-714. Farouq Ali, S.M., Redford, D.A., and Islam, M.R., 1987, ' Scaling Laws for Enhanced Oil Recovery Experiments', Proc. of the China-Canada Heavy Oil Symposium, Zhou Zhou City, China. Farquhar, G.D., Ehleringer, J.R., and Hubick, K.T., 1989, Carbon Isotope Discrimination and Photosynthesis. Annu.Rev.Plant Physiol.Plant Mol. Biol. 40:503-537. FDA, 2004, FDA Approves medicinal leaches, Associated Press release, reported by R. Rubin, USA Today, July 7. Fernandes, M.B. and Brooks, P., 2003, Characterization of carbonaceous combustion residues: II. Nonpolar organic compounds, Chemosphere, Volume 53, Issue 5, November, pages 447-458. Ferris, F.G., Fyfe, W.S., and Beveridge, T.J., 1987, Manganese Oxide Deposition in a Hot Spring Microbial Mat, Geomicrobiologv Journal, vol. 5, No. l , p p 33-42. Ferris, FG., Fyfe, W.S., and Beveridge, T.J., 1988, Metallic Ion Binding by Bacillus Subtilis: Implications for the Fossilization of Micro-organisms, Geology, vol. 16, pi49-152. Feuer, L. S., 1957, The Principle of Simplicity, Philosophy of Science 24(2): 109-22.
R E F E R E N C E S A N D BIBLIOGRAPHY
807
Feuer, L.S., 1959, "Rejoinder on the Principle of Simplicity", Philosophy of Science 26(1): 43-5. Fink, F.W. and Boyd, W.K., 1970, The Corrosion of Metals in Marine Environments. Bayer & Company Inc., USA. Finkel, Alvin & Leibovitz, Clement.1997, The Chamberlain-Hitler Collusion. London: Merlin. Fiore, K, 2006, Nuclear energy and sustainability: Understanding ITER. Energy Policy, 34:3334-3341. Fischer, A. and Hahn, C , 2005, Biotic and abiotic degradation behaviour of ethylene glycol monomethyl ether (EGME), Water Research, Vol. 39:2002-2007. Fontana, M. and Green, N., 1978, Corrosion Engineering. McGraw Hill International. Fraas, L. M , Partain, L. D., McLeod, P. S. and Cape, J. A., 1986. NearTerm Higher Efficiencies with Mechanically Stacked Two-Color Solar Batteries. Solar Cells 19 (l):73-83, November. Frank, J., 2006, "Inconvenient Truths About the Ozone Man: AI Gore the Environmental Titan?", Counterpunch, 31 May, at h t t p : / / w w w . counterpunch.org/frank05312006,html. Frazer, L.C., and Boiling, J.D., 1991, 'Hydrogen Sulfide Forecasting Techniques for the Kuparuk River Field', SPE paper 22105 presented at the International Arctic Technology Conference, Anchorage, Alaska. Freud, P. and Ormerod, W, 1996, 'Progress towards storage of CO z ', Proceedings of the Third International Conference on C0 2 Removal, Massachusetts Institute of Technology, Cambridge, MA, USA, 9-11 September. Fuchs, H.U., 1999, A Systems View of Natural Processes: Teaching Physics the System Dynamics Way. The Creative Learning Exchange 8 (1): 1-9. Gale, C.R., Martyn, C.N., Winter, P.D., and Cooper, C , 1995, Vitamin C and Risk of Death from Stroke and Coronary Heart Disease in Cohort of Elderly People. BMJ 310:1563-1566. Geesey, G., Lewandewski, Z., and Flemming, H., 1994, Biofouling and Biocorrosion in Industrial Water Systems. Lewis publishers, MI, USA. Geilikman, M.B. and Dusseault, M.B., 1997, "Fluid Rate Enhancement from Massive Sand Production in Heavy-Oil Reservoirs", /. Pet. Sei. Eng., vol. 17,5-18. Gerpen, J.V., Pruszko, R., Shanks, B., Clements, D., and Knothe, G., 2004, Biodiesel Analytical methods. National Renewable Energy Laboratory. Operated for the U.S. Department of Energy. GESAMP (I MO/FAO/UNESCO/WMO/WHO/IAEA/UNEP) Joint Group of Experts on the Scientific Aspects of marine Pollution. 1993. Impacts of Oil and Related Chemicals and Wastes on the Marine Environment. GESAMP Reports and Studies No. 50. London: International Maritime Organization.
Gesser, H.D., 2002, Applied Chemistry. Kluwer Academic/Plenum Publishers, NY, USA.
Gessinger, G., 1997, 'Lower CO2 Emissions Through Better Technology', Energy Conversion and Management, volume 38, 25-30.
Gilbert, S.R., Bounds, C.O., and Ice, R.R., 1988, "Comparative Economics of Bacterial Oxidation and Roasting as a Pre-Treatment Step for Gold Recovery from an Auriferous Pyrite Concentrate", CIM 81(910).
Giridhar, M., Kolluru, C., and Kumar, R., 2004, "Synthesis of Biodiesel in Supercritical Fluids", Fuel, 83:2029-2033.
Gleick, J., 1987, Chaos: Making a New Science, Penguin Books, NY, 352 pp.
Godoy, J., 2006, Environment: Heat Wave Shows Limits of Nuclear Energy. Inter Press Service News Agency, July 27, 2006.
Goldberg, N.N. and Hudock, J.S., 1986, "Oil and Dirt Repellent Alkyd Paint", United States Patent 4600441.
Gollapudi, U.K., Knutson, C.L., Bang, S.S., and Islam, M.R., 1995, A New Method for Controlling Leaching Through Permeable Channels, Chemosphere, vol. 46, pp. 749-752.
Gonzalez, G. and Moreira, M.B.C., 1994, "The Adsorption of Asphaltenes and Resins on Various Minerals", in Asphaltenes and Asphalts, Yen and Chilingar (eds.), Elsevier Science B.V., Amsterdam, 249-298.
Goodman, N., 1961, "Safety, Strength, Simplicity", Philosophy of Science 28(2):150-1.
Goodstein, D., 2000, Whatever Happened to Cold Fusion? Accountability in Research, Vol. 8, p. 59.
Goodstein, D., 2004, Whatever Happened to Cold Fusion? The American Scholar, 527 pp.
Goodwin, L., 1962, The Historical-Philosophical Basis for Uniting Social Science with Social Problem-Solving, Philosophy of Science, Vol. 29, No. 4 (October), 377-392.
Gore, A., 1992, Earth in the Balance: Ecology and the Human Spirit, Houghton Mifflin Company, Boston, New York, London, 407 pp.
Gore, A., 2006, An Inconvenient Truth. New York: Rodale. Also a DVD, starring Al Gore presenting the book's content as a public lecture, plus additional personal reflections; produced by Davis Guggenheim.
Government of Saskatchewan, Energy and Mines News Release, 1997, '$1.1 Billion Oil Project Announced in Southeast Saskatchewan', June 26.
Grigg, R.B. and Schechter, D.S., 1997, 'State of the Industry in CO2 Floods', SPE paper 38849 presented at the Annual Technical Conference and Exhibition, San Antonio, Texas, October 5-8.
Gruesbeck, C. and Collins, R.E., 1982, "Entrainment and Deposition of Fine Particles in Porous Media", Soc. Pet. Eng. J., Dec., 847-856.
Guenther, W.B., 1982, Wood Ash Analysis: An Experiment for Introductory Courses. J. Chem. Educ., Vol. 59:1047-1048.
Gunal, G.O. and Islam, M.R., 2000, "Alteration of asphaltic crude rheology with electromagnetic and ultrasonic irradiation", J. Pet. Sci. Eng., vol. 26(1-4), 263-272.
Gunnarsson, S., Heikkilä, M., Hultgren, J., and Valros, A., 2008, 'A note on light preference in layer pullets reared in incandescent or natural light', Applied Animal Behaviour Science, Volume 112, Issues 3-4, pp. 395-399.
Gunnarsson, S., Keeling, L.J., and Svedberg, J., 1999, 'Effect of rearing factors on the prevalence of floor eggs, cloacal cannibalism and feather pecking in commercial flocks of loose housed laying hens', British Poultry Science, Volume 40, Issue 1, pp. 12-18.
Gunter, W.D., Gentzis, T., Rottenfuser, B.A., and Richardson, R.J.H., 1996, 'Deep Coal-bed Methane in Alberta, Canada: A Fossil Fuel Resource with the Potential of Zero Greenhouse Gas Emissions', Proceedings of the Third International Conference on CO2 Removal, Massachusetts Institute of Technology, Cambridge, MA, USA, 9-11 September.
Gunter, W.D., Bachu, S., Law, D.H.-S., Marwaha, V., Drysdale, D.L., Macdonald, D.E., and McCann, T.J., 1996, 'Technical and Economic Feasibility of CO2 Disposal in Aquifers within the Alberta Sedimentary Basin', Energy Conversion and Management, volume 37, nos. 6-8, 1135-1142.
Gunter, W.D., Bachu, S., Law, D., Marwaha, V., Drysdale, D.L., Macdonald, D.E., and McCann, T.J., 1995, 'Technical and Economic Feasibility of CO2 Disposal in Aquifers within the Alberta Sedimentary Basin, Canada', Energy Conversion and Management, volume 37, 1135-1142.
Gunter, W.D., Gentzis, T., Rottenfusser, B.A., and Richardson, R.J.H., 1997, 'Deep Coal-bed Methane in Alberta, Canada: A Fossil Fuel Resource with the Potential of Zero Greenhouse Gas Emissions', Energy Convers. Mgmt., vol. 38 suppl., S217-S222.
Gupta, R., Ahuja, P., Khan, S., Saxena, R.K., and Mohapatra, H., 2000, Microbial biosorbents: meeting challenges of heavy metal pollution in aqueous solutions. Current Sci., vol. 78, pp. 967-973.
Gupta, V.K., Jain, C.K., Ali, I., Sharma, M., and Saini, S.K., 2003, Removal of cadmium and nickel from wastewater using bagasse fly ash - a sugar industry waste. Water Research, vol. 37, pp. 4038-4044.
Gupta, C.K. and Mukherjee, T.K., 1990, Hydrometallurgy in Extraction Processes, vol. 1, CRC Press, New York, 248 pp.
Haldane, J.B.S., 1957, Karl Pearson, 1857-1957, Biometrika, Vol. 44, Nos. 3/4 (December), 303-313.
Hale, N.C., 1993, Abstraction in Art and Nature, Courier Dover Publications, Mineola, NY, USA.
Hall, D.O. and Overend, R.P., 1987, Biomass: Regenerable Energy. John Wiley and Sons, A Wiley-Interscience Publication, 504 pp.
Hamming, R.W., 1973, Numerical Methods for Scientists and Engineers, 2nd Edition, New York: McGraw-Hill, 719 pp.
Hanson, R.S. and Hanson, T.E., 1996, Methanotrophic bacteria. Microbiol. Rev. 60:439-471.
Harris, G.M. and Lorenz, A.M., 1993, New Coatings for the Corrosion Protection of Steel Pipelines and Pilings in Severely Aggressive Environments. Corrosion Science 35(5), p. 1417.
Haque, K.E., 1999, Microwave Energy for Mineral Treatment Processes - A Brief Review, International Journal of Mineral Processing, 57(1), 1-24.
Harvey, A.H. and Henry, R.L., 1977, 'A Laboratory Investigation of Oil Recovery by Displacement with Carbon Dioxide and Hydrogen Sulfide', SPE 6983, unsolicited manuscript.
Hau, L.V., Harris, S.E., Dutton, Z., and Behroozi, C.H., 1999, Light Speed Reduction to 17 Meters Per Second in an Ultra Cold Atomic Gas, Nature, vol. 397, pp. 594-598.
Haynes, H.J., Thrasher, L.W., Katz, M.L., and Eck, T.R., 1976, Enhanced Oil Recovery: An Analysis of the Potential for Enhanced Oil Recovery from Known Fields in the United States, National Petroleum Council.
Henda, R., Herman, A., Gedye, R., and Islam, M.R., 2005, Microwave enhanced recovery of nickel-copper ore: comminution and floatability aspects, Journal of Microwave Power & Electromagnetic Energy, 40(1), 7-16.
Hendriks, C.A., 1994, 'Carbon Dioxide Removal from Coal-Fired Power Plants', Ph.D. Thesis, Department of Science, Technology and Society, Utrecht University, Utrecht, The Netherlands.
Henning, R.K., 2004, Integrated Rural Development by Utilization of Jatropha curcas L. (JCL) as Raw Material and as Renewable Energy: Presentation of "The Jatropha System" at the international conference Renewables 2004, Bonn, Germany.
Herzog, H., Drake, E., Tester, J., and Rosenthal, R., 1993, 'A Research Needs Assessment for the Capture, Utilization and Disposal of Carbon Dioxide from Fossil Fuel-Fired Power Plants', DOE/ER-30194, US Department of Energy, Washington, D.C.
Hesse, M., Meier, H., and Zeeh, B., 1979, Spektroskopische Methoden in der organischen Chemie, Thieme Verlag, Stuttgart.
Hilber, T., Mittelbach, M., and Schmidt, E., 2006, Animal fats perform well in biodiesel, Render, February 2006, www.rendermagazine.com (accessed on Aug 30, 2006).
Hill, D.E., Bross, S.V., and Goldman, E.R., 1990, 'The Impacts of Microbial Souring of a North Slope Oil Reservoir', paper presented at the International Congress on Microbially Influenced Corrosion, Knoxville, TN, Oct.
Hills, R.G., Porro, I., Hudson, D.B., and Wierenga, P.J., 1989, "Modeling one-dimensional infiltration into very dry soils 1. Model development and evaluation", Water Resources Res., vol. 25, 1259-1269.
Himpsel, F., 2007, Condensed-Matter and Materials Physics: The Science of the World Around Us. The National Academies Press, ISBN-13: 978-0-309-10965-9, 224 pp.
Hitchon, B., 1996, Aquifer Disposal of Carbon Dioxide: Hydrodynamics and Mineral Trapping - Proof of Concept, Geoscience Publ. Ltd., Sherwood Park, Alberta, Canada.
Holbein, B.E., Stephen, J.D., and Layzell, D.B., 2004, Canadian Biodiesel Initiative, Final Report, BIOCAP Canada, Kingston, Ontario, Canada.
Holdway, D.A., 2002, The Acute and Chronic Effects of Wastes Associated with Offshore Oil and Gas Production on Temperate and Tropical Marine Ecological Processes. Marine Pollution Bulletin, 44:185-203.
Holloway, S. and van der Straaten, R., 1995, The Joule II Project, 'The Underground Disposal of Carbon Dioxide', Energy Conversion and Management, volume 36, nos. 6-9, 519-522.
Holloway, S., 1996, An Overview of the Joule II Project, 'The Underground Disposal of Carbon Dioxide', Proceedings of the Third International Conference on CO2 Removal, Massachusetts Institute of Technology, Cambridge, MA, USA, 9-11 September.
Holmberg, S.L., Claesson, T., Abul-Milh, M., and Steenari, B.M., 2003, Drying of Granulated Wood Ash by Flue Gas from Saw Dust and Natural Gas Combustion, Resources, Conservation and Recycling, Vol. 38:301-316.
Holt, T., Jensen, J.I., and Lindeberg, E., 1995, 'Underground Storage of CO2 in Aquifers and Oil Reservoirs', Energy Conversion and Management, volume 36, nos. 6-9, 535-538.
Holton, G., 1969, Einstein, Michelson, and the 'Crucial' Experiment, Isis 60(2):132-197.
Hotz, R.L., 2007, "Scientists Using Maps of Genes for Therapies Are Wary of Profiling", The Wall Street Journal, Friday 26 October, page B1 (http://online.wsj.com/article/SB119334828528572037.html).
Hoyle, B. and Beveridge, T.J., 1983, The Binding of Metallic Ions to the Outer Membrane of Escherichia coli, Applied and Environmental Microbiology, vol. 46, pp. 749-752.
Hu, P.Y., Hsieh, Y.H., Chen, J.C., and Chang, C.Y., 2004, Characteristics of Manganese-Coated Sand Using SEM and EDAX Analysis, J. Colloid and Interface Sci., Vol. 272:308-313.
Huang, S., de Wit, P., Shatilla, N., Dyer, S., Verkoczy, B., and Knorr, K., 1989, 'Miscible Displacement of Saskatchewan's Light and Medium Oil Reservoirs', Confidential Technical Report, Saskatchewan Research Council, Petroleum Research.
Huang, S.S., de Wit, P., Srivastava, R.K., and Jha, K.N., 1994, A Laboratory Miscible Displacement Study for the Recovery of Saskatchewan's Crude Oil, J. Can. Pet. Tech., 43-51, April.
Hughes, L. and Scott, S., 1997, Canadian Greenhouse Gas Emissions: 1990-2000. Energy Conversion and Management 38(3).
Hughes, M.N. and Poole, R.K., 1989, Metals and Microorganisms. London, Chapman and Hall, 303-357.
Hull, T.R., Quinn, R.E., Areri, I.G., and Purser, D.A., 2002, Combustion toxicity of fire retarded EVA. Polymer Degradation and Stability 77:235-242.
IAEA, 2004, Nuclear Technology Review, International Atomic Energy Agency, P.O. Box 100, Wagramer Strasse 5, A-1400 Vienna, Austria.
IPCC, 2001, Climate Change 2001: The Scientific Basis. Houghton, J.T., Ding, Y., Griggs, D.J., Noguer, M., Van der Linden, P.J., Dai, X., Maskell, K., and Johnson, C.A. (eds.), Cambridge University Press, Cambridge, UK, 881 pp.
IPCC, 2007, Climate Change 2007: The Physical Science Basis. Summary for Policymakers. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, February 2007.
Imberger, J., 2007, Interview on debate on climate change, CNN, Aug. 19.
International Energy Agency GHG, 1994a, 'Carbon Dioxide Capture from Power Stations', IEA Greenhouse Gas R&D Program, Cheltenham, UK.
International Energy Agency GHG, 1996b, 'Ocean Storage of CO2, Environmental Impact', IEA Greenhouse Gas R&D Program, Cheltenham, UK.
International Energy Agency GHG, 1995, R&D Program Report, 'Carbon Dioxide Utilization', IEA Greenhouse Gas R&D Program, Cheltenham, UK.
Ion, S.E., 1997, Optimising Our Resources. The Uranium Institute, Twenty-Second Annual Symposium, 3-5 September, London.
Islam, M.R. and Bang, S.S., 1993, Use of Silicate and Carbonate Producing Bacteria in Restoration of Historic Monuments and Buildings, Patent Disclosure, South Dakota School of Mines and Technology.
Islam, M.R., 1990, New Scaling Criteria for Chemical Flooding Experiments, Journal of Canadian Petroleum Technology, Vol. 29(1):30-36.
Islam, M.R., 1996, Emerging Technologies in Enhanced Oil Recovery, Energy Sources, Vol. 21:97-111.
Islam, M.R. and Chakma, A., 1992, 'A New Recovery Technique for Heavy Oil Reservoirs with Bottom-Waters', SPE Res. Eng., vol. 7, no. 2, 180-186.
Islam, M.R. and Farouq Ali, M.S., 1989, Numerical Simulation of Alkaline/Cosurfactant/Polymer Flooding, Proceedings of the UNITAR/UNDP Fourth Int. Conf. on Heavy Crude and Tar Sand.
Islam, M.R. and Chakma, A., 1993, 'Storage and Utilization of CO2 in Petroleum Reservoirs - A Simulation Study', Energy Conversion and Management, volume 34(9), 1205-1212.
Islam, M.R., Chakma, A., and Jha, K., 1994, 'Heavy Oil Recovery by Inert Gas Injection with Horizontal Wells', J. Pet. Sci. Eng., 11(3), 213-226.
Islam, M.R., Erno, B.P., and Davis, D., 1992, 'Hot Water and Gas Flood Equivalence of In Situ Combustion', J. Can. Pet. Tech., 31(8), 44-52.
Islam, M.R., 2004, Unraveling the Mysteries of Chaos and Change: The Knowledge-Based Technology Development, EEC Innovation, Vol. 2(2):45-87.
Islam, M.R., 2006, A Knowledge-Based Water and Waste-Water Management Model. International Conference on Management of Water, Wastewater and Environment: Challenges for the Developing Countries, September 13-15, Nepal.
Islam, M.R. and Farouq Ali, S.M., 1990, 'New Scaling Criteria for Chemical Flooding Experiments', J. Can. Pet. Tech., volume 29(1), 29-36.
Islam, M.R., 1994, "Role of Asphaltenes on Oil Recovery and Mathematical Modeling of Asphaltene Properties", in Asphaltenes and Asphalts, Yen and Chilingar (eds.), Elsevier Science B.V., Amsterdam, 249-298.
Islam, M.R., 1994, 'Role of Asphaltenes in Heavy Oil Recovery', in Asphaltenes and Asphalts 1, Yen and Chilingarian (eds.), Elsevier Scientific Publishers, Amsterdam-New York, 249-295.
Islam, M.R., 1995, "Potential of Ultrasonic Generators for Use in Oil Wells and Heavy Crude Oil/Bitumen Transportation Facilities", in Asphaltenes: Fundamentals and Applications, Sheu and Mullins (eds.), Plenum Press, New York, 191-218.
Islam, M.R., 1999, 'Emerging Technologies in Enhanced Oil Recovery', Energy Resources, volume 21, nos. 1-2, 97-112.
Islam, M.R., 2003, Adding Value to Atlantic Canada's Offshore Industry, presented at the 3rd Atlantic Canada Deepwater Workshop, Halifax, N.S., April 9-10, 2003.
Islam, M.R., 2004, "Unraveling the mysteries of chaos and change: knowledge-based technology development", EEC Innovation, vol. 2, nos. 2 and 3, 45-87.
Islam, M.R., Shapiro, R., and Zatzman, G.M., 2006, "Energy Crunch: What More Lies Ahead?", The Dialogue: Global Dialogue on Natural Resources, Center for International and Strategic Studies, Washington DC, April 3-4, 2006.
Islam, R., 2008a, If Nature Is Perfect, What Does 'Denaturing' Mean?, in Perspectives on Sustainable Technology, M.R. Islam (ed.), Nova Science Publishers, New York, 191 pp.
Islam, R., 2008b, Editorial: How much longer can humanity afford to confuse 'theory' that supports only those practices that make the most money in the shortest time with knowledge of the truth?, Journal of Nature Science and Sustainable Technology, vol. 1, no. 4, 510-519.
Islam, M.R., Verma, A., and Farouq Ali, S.M., 1991, "In Situ Combustion - The Essential Reaction Kinetics", in Heavy Crude and Tar Sands: Hydrocarbons for the 21st Century, vol. 4, UNITAR/UNDP.
Issariyakul, T., Kulkarni, M.G., Dalai, A.K., and Bakhshi, N.N., 2007, Production of biodiesel from waste fryer grease using mixed methanol/ethanol system. Fuel Processing Technology 88:429-436.
Jack, T.R., Ferris, F.G., Stehmeier, L.G., Kantzas, A., and Marentette, D.F., 1992, Bug Rock: Bacteriogenic Mineral Precipitation System for Oil Patch Use, in Premuzic, E. and Woodhead, A. (eds.), Microbial Enhancement of Oil Recovery - Recent Advances, pp. 27-36, Elsevier Publishing Co., New York.
Jack, T.R., Stehmeier, L.G., Ferris, F.G., and Islam, M.R., 1991, Microbial Selective Plugging to Control Water Channeling, in Microbial Enhancement of Oil Recovery - Recent Advances, Elsevier Publishing Co., New York.
Jack, T.R., 1988, Microbially Enhanced Oil Recovery, Biorecovery, vol. 1, pp. 59-73.
Jenneman, G.E., Knapp, R.M., McInerney, M.J., and Menzie, E.O., 1984, Experimental Studies of In-Situ Microbial Enhanced Oil Recovery, Society of Petroleum Engineers Journal, vol. 24, pp. 33-37.
Jennings, H.Y. Jr., 1975, A Study of Caustic Solution-Crude Oil Interfacial Tensions, Society of Petroleum Engineers Journal, SPE-5049:197-202.
Jepson, P.D., Arbelo, M., Deaville, R., Patterson, I.A.P., Castro, P., Baker, J.R., Degollada, E., Ross, H.M., Herráez, P., Pocknell, A.M., Rodriguez, F., Howie, F.E., Espinosa, A., Reid, R.J., Jaber, J.R., Martin, J., Cunningham, A.A., and Fernandez, A., 2003, Gas-bubble lesions in stranded cetaceans: Was sonar responsible for a spate of whale deaths after an Atlantic military exercise? Nature, 425, 575-6.
Jones-Meehan, J. and Walch, M., 1992, ASME International Power Generation Conference, Atlanta, GA, USA, Publ. ASME, NY, USA, p. 1.
Jowett, F., 1984, 'Paraffin waxes', in Petroleum Technology, ed. G.D. Hobson, New York: J. Wiley, pp. 1021-1042.
Kalia, A.K. and Singh, S.P., 1999, Case study of 85 m3 floating drum biogas plant under hilly conditions, Energy Conversion & Management, 40:693-702.
Kamath, V.A., Yang, J., and Sharma, G.D., 1993, 'Effect of Asphaltene Deposition on Dynamic Displacement of Oil by Water', SPE paper 26046 presented at the SPE Western Regional Meeting, Anchorage, May 26-28.
Kane, R.L. and Klein, D.E., 1997, 'United States Strategy for Mitigating Global Climate Change', Energy Conversion and Management, volume 38, S13-S18.
Kantzas, A., Ferris, F.G., Stehmeier, L., Marentette, D.F., Jha, K.N., and Mourits, F.M., 1992, A New Method of Sand Consolidation through Bacteriogenic Mineral Plugging, Petroleum Society of CIM, CIM paper No. 92-4.
Kao, M.J., Tien, D.C., Jwo, C.S., and Tsung, T.T., 2005, The study of hydrophilic characteristics of ethylene glycol, Journal of Physics: Conference Series 13:442-445.
Kaoma, J. and Kasali, G.B., 1994, Efficiency and Emissions of Charcoal Use in the Improved Mbuala Cookstoves. Published by the Stockholm Environment Institute in collaboration with SIDA, ISBN: 91 88116 94 8.
Kashyap, D.R., Dadhich, K.S., and Sharma, S.K., 2003, Biomethanation under psychrophilic conditions: a review, Bioresource Technology 87(2):147-153.
Katz, D.A., 1981, "Polymers", available online at http://www.chymist.com/Polymers.pdf, accessed February 15, 2006.
Kelly, N., 2006, The Role of Energy Efficiency in Reducing Scottish and UK CO2 Emissions. Energy Policy 34:3505-3515.
Kemp, W.H., 2006, Biodiesel Basics and Beyond: A Comprehensive Guide to Production and Use for the Home and Farm. Aztext Press, 300 pp.
Kessler, E., 2000, Energies and Policies. Editorial, Energies, 1:38-40, DOI: 10.3390/en1010038.
Khan, M.I., Zatzman, G.M., and Islam, M.R., 2005, A Novel Sustainability Criterion as Applied in Developing Technologies and Management Tools. In Second International Conference on Sustainable Planning and Development, Bologna, Italy.
Khan, M.I., Zatzman, G.M., and Islam, M.R., 2005b, A Novel Sustainability Criterion as Applied in Developing Technologies and Management Tools. In Second International Conference on Sustainable Planning and Development, Bologna, Italy.
Khan, M.I. and Islam, M.R., 2005a, A Novel Sustainability Criterion as Applied in Developing Technologies and Management Tools, Sustainable Planning 2005, 12-14 September 2005, Bologna, Italy.
Khan, M.I., Zatzman, G., and Islam, M.R., 2005, New Sustainability Criterion: Development of Single Sustainability Criterion as Applied in Developing Technologies, Jordan International Chemical Engineering Conference V, Paper No. JICEC05-BMC-3-12, Amman, Jordan, 12-14 September 2005.
Khan, M.I. and Islam, M.R., 2005a, Assessing sustainability of technological developments: an alternative approach of selecting indicators in the case of offshore operations. ASME Congress 2005, Orlando, Florida, Nov 5-11, 2005, Paper No. IMECE2005-82999.
Khan, M.I. and Islam, M.R., 2005b, Assessing Sustainability of Technological Developments: An Alternative Approach of Selecting Indicators in the Case of Offshore Operations. Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Orlando, Florida, November 5-11.
Khan, M.I. and Islam, M.R., 2003a, Ecosystem-Based Approaches to Offshore Oil and Gas Operation: An Alternative Environmental Management Technique. SPE Annual Technical Conference and Exhibition, Denver, USA, October 6-8.
Khan, M.I. and Islam, M.R., 2003b, Wastes Management in Offshore Oil and Gas: A Major Challenge in Integrated Coastal Zone Management. In: L.G. Luna (ed.), CARICOSTA 2003 - 1st International Conference on Integrated Coastal Zone Management (ICZM), University of Oriente, Santiago de Cuba, May 5-7, 2003.
Khan, M.I. and Islam, M.R., 2005, Assessing Sustainability of Technological Developments: An Alternative Approach of Selecting Indicators in the Case of Offshore Operations. Proc. ASME International Mechanical Engineering Congress and Exposition, Orlando, Florida, Nov 5-11, 2005.
Khan, M.I. and Islam, M.R., 2005b, Sustainable marine resources management: framework for environmental sustainability in offshore oil and gas operations. Fifth International Conference on Ecosystems and Sustainable Development, Cadiz, Spain, May 03-05, 2005.
Khan, M.I. and Islam, M.R., 2005c, Achieving true technological sustainability: pathway analysis of a sustainable and an unsustainable product, International Congress of Chemistry and Environment, Indore, India, 24-26 December 2005.
Khan, M.I. and Islam, M.R., 2006, Achieving True Sustainability in Technological Development and Natural Resources Management. Nova Science Publishers, New York, USA, 381 pp.
Khan, M.I. and Islam, M.R., 2007, The Petroleum Engineering Handbook: Sustainable Operations, Gulf Publishing Company, Houston, TX, 461 pp.
Khan, M.I. and Islam, M.R., 2007, True Sustainability in Technological Development and Natural Resources Management. Nova Science Publishers, New York, USA, 381 pp.
Khan, M.I. and Islam, M.R., 2007b, Handbook of Petroleum Engineering: Sustainable Operations, Gulf Publishing Co., Houston, USA.
Khan, M.I., 2006, Development and Application of Criteria for True Sustainability. Journal of Nature Science and Sustainable Technology, Vol. 1, No. 1:1-37.
Khan, M.I., 2006, Towards Sustainability in Offshore Oil and Gas Operations, Ph.D. Dissertation, Department of Civil and Resource Engineering, Dalhousie University, Canada, 442 pp.
Khan, M.I. and Islam, M.R., 2005b, Assessing the Sustainability of Technological Developments: An Alternative Approach of Selecting Indicators in the Case of Offshore Operations, ASME International Mechanical Engineering Congress and Exposition (IMECE), Orlando, Florida, USA, November.
Khan, M.I., Chhetri, A.B., and Islam, M.R., 2008, Achieving True Technological Sustainability: Pathway Analysis of a Sustainable and an Unsustainable Product. J. Nat. Sci. and Sust. Tech., vol. 1, no. 3.
Khan, M.I., Chhetri, A.B., and Islam, M.R., 2008, Analyzing Sustainability of Community-Based Energy Development Technologies. Energy Sources, Part B, vol. 2, 403-419.
Khan, M.M., David, P., and Islam, M.R., 2007a, A novel sustainable combined heating/cooking/refrigeration system, J. Nat. Sci. Sust. Tech., vol. 1, no. 1, 133-163.
Khan, M.M., David, P., and Islam, M.R., 2007b, Zero-waste living with inherently sustainable technology, J. Nature Science and Sustainable Technology, vol. 1, no. 2, 263-270.
Khan, M.M. and Islam, M.R., 2006c, A new downhole water-oil separation technique, J. Pet. Sci. Tech., vol. 24, no. 7, 789-805.
Khan, M.M., Mills, A., Chaalal, O., and Islam, M.R., 2006c, "Novel Bioabsorbents for the Removal of Heavy Metals from Aqueous Streams", Proc. 36th Conference on Computers and Industrial Engineering, June, Taiwan.
Khan, M.M., Prior, D., and Islam, M.R., 2005, Jordan International Chemical Engineering Conference V, 12-14 September, Amman, Jordan.
Khan, M.M., Prior, D., and Islam, M.R., 2007a, A novel combined heating, cooling, refrigeration system, J. Nat. Sci. and Sust. Tech., 1(1):133-162.
Khan, M.M., Prior, D., and Islam, M.R., 2006a, A Novel, Sustainable Combined Heating/Cooling/Refrigeration System, J. Nat. Sci. and Sust. Tech., 1(1):133-162.
Khan, M.M., Prior, D., and Islam, M.R., 2007, "Zero-Waste Living with Inherently Sustainable Technologies", J. Nature Science and Sustainable Technology, vol. 1, no. 2, 263-270.
Khan, M.M., Prior, D., and Islam, M.R., 2005, Direct-usage solar refrigeration: from irreversible thermodynamics to sustainable engineering, Jordan International Chemical Engineering Conference V, Amman, Jordan.
Khan, M.M., Zatzman, G.M., and Islam, M.R., 2008, The Formulation of a Comprehensive Mass and Energy Balance Equation, Proc. ASME International Mechanical Engineering Congress and Exposition, Boston, MA, Nov. 2-6, 2008.
Khilyuk, L.F. and Chilingar, G.V., 2003, Global Warming: Are We Confusing Cause and Effect? Energy Sources 25:357-370.
Khilyuk, L.F. and Chilingar, G.V., 2004, Global Warming and Long-Term Climatic Changes: A Progress Report. Environmental Geology 46(6-7):970-979.
Khilyuk, L.F. and Chilingar, G.V., 2006, On Global Forces of Nature Driving the Earth's Climate: Are Humans Involved? Environmental Geology 50(6):899-910.
Khilyuk, L.F., Katz, S.A., Chilingarian, G.V., and Aminzadeh, F., 2003, Global Warming: Are We Confusing Cause and Effect? Energy Sources 25:357-370.
Kim, B.W., Chang, H.N., Kim, I.K., and Lee, K.S., 1992, Growth kinetics of the photosynthetic bacterium Chlorobium thiosulfatophilum in a fed-batch reactor. Biotechnology Bioengineering 40:583-592.
Klass, L.D., 1998, Biomass for Renewable Energy, Fuels and Chemicals, Academic Press, New York, pp. 1-2.
Klaassen, C.D., 2001, Toxicology: The Basic Science of Poisons, McGraw-Hill, USA.
Kline, M., 1972, Mathematical Thought from Ancient to Modern Times, Oxford Univ. Press, New York.
Kline, R., 1995, "Construing 'Technology' as 'Applied Science': Public Rhetoric of Scientists and Engineers in the United States, 1880-1945", Isis, Vol. 86, No. 2 (June), 194-221.
Klins, M.A., 1984, Carbon Dioxide Flooding: Basic Mechanisms and Project Design, IHRDC, Boston, MA.
Knipe, P. and Jennings, P., 2007, Electromagnetic radiation emissions from RAPS equipment [online]. Available: http://wwwphys.murdoch.edu.au/Solar2004/Proceedings/Systems/Knipe_Paper_EM.pdf [February 10, 2007].
Koama, J., Kasali, G.B., and Ellegard, A., 1994, Efficiency and Emissions of Coal Combustion in Two Unvented Cookstoves. Energy, Environment and Development Series No. 4, Stockholm Environment Institute, ISBN: 91 88714 020.
Kocabas, I. and Islam, M.R., 1998, "A Wellbore Model for Predicting Asphaltene Plugging", SPE paper 49199, Proc. of the SPE Annual Technical Conference and Exhibition, New Orleans.
Kocabas, I. and Islam, M.R., 2000, "Field-Scale Modeling of Asphaltene Transport in Porous Media", J. Pet. Sci. Eng., vol. 26(1-4), 19-30.
Koch, G.H., Brongers, M.P.H., Thompson, N.G., Virmani, Y.P., and Payer, J.H., 2002, A Supplement to Materials Performance 41(7), p. 2.
Koertge, N., 1977, "Galileo and the Problem of Accidents", Journal of the History of Ideas, Vol. 38, No. 3 (July-September), 389-408.
Koh, C.A., Westacott, R.E., Zhang, W., Hirachand, K., Creek, J.L., and Soper, A.K., 2002, Mechanisms of gas hydrate formation and inhibition. Fluid Phase Equilibria, 194-197:143-151.
Koide, H., Takahashi, M., Shindo, Y., Noguchi, Y., Nakayama, S., Iijima, M., Ito, K., and Tazaki, Y., 1994, 'Subterranean Disposal of Carbon Dioxide at Cool Formation Temperature', Conference Proceedings, CLEAN AIR '94, 63-72.
Koide, H., Tazaki, Y., Noguchi, Y., Nakayama, S., Iijima, M., Ito, K., and Shindo, Y., 1992, 'Subterranean Containment and Long-Term Storage of CO2 in Unused Aquifers and in Depleted Natural Gas Reservoirs', Energy Conversion and Management, volume 33, nos. 5-8, 619-626.
Kondratyev, K.Y.A. and Cracknell, A.P., 1998, Observing Global Climate Change. Taylor & Francis, ISBN 0748401245, 544 pp.
Korbol, R. and Kaddour, A., 1995, 'Sleipner Vest CO2 Disposal - Injection of Removed CO2 into the Utsira Formation', Energy Conversion and Management, volume 36, nos. 6-9, 509-512.
Kotsiomiti, E. and McCabe, J.F., 1997, Experimental Wax Mixtures for Dental Use. Journal of Oral Rehabilitation, Vol. 24(7), pp. 517-521.
Kruglinski, S., 2006, Whatever Happened to Cold Fusion? Discover, March, 27(03).
Krumbein, W.E., 1979, Photolithotropic and Chemoorganotrophic Activity of Bacteria and Algae as Related to Beachrock Formation and Degradation (Gulf of Aqaba, Sinai), Geomicrobiology, vol. 1, no. 2, pp. 139-203.
Krumbein, W.K., 1974, On the Precipitation of Aragonite on the Surface of Marine Bacteria, Naturwissenschaften, vol. 61, p. 167.
Krupa, I. and Luyt, A.S., 2001, 'Physical properties of blends of LLDPE and an oxidized paraffin wax', Polymer, Vol. 42, pp. 7285-7289.
KRÜSS, 2006, Instruments for Surface Chemistry: Measuring Principle of KRÜSS Tensiometers, KRÜSS GmbH, Wissenschaftliche Laborgeräte, Hamburg, Germany.
Kulkarni, M.G. and Dalai, A.K., 2006, Waste Cooking Oils an Economical Source for Biodiesel: A Review. Ind. Eng. Chem. Res. 45:2901-2913.
Kumar, A., Jain, S.K., and Bansal, N.K., 2003, Disseminating energy-efficient technologies: a case study of compact fluorescent lamps (CFLs) in India. Energy Policy 31:259-272.
Kunisue, T., Muraoka, M., Ohtake, M., Sudaryanto, A., Minh, N.H., Ueno, D., Higaki, Y., Ochi, M., Tsydenova, O., Kamikawa, S., et al., 2006, Contamination status of persistent organochlorines in human breast milk from Japan: Recent levels and temporal trend, Chemosphere: in press.
Kurki, A., Hill, A., and Morris, M., 2006, Biodiesel: The Sustainability Dimensions. ATTRA, pp. 1-12, http://attra.ncat.org/attra-pub/PDF/biodiesel_sustainable.pdf (accessed on October 27, 2007).
Kuroda, H., 2006, Emerging Asia in the Global Economy: Prospects and Challenges. Remarks by the President, Asian Development Bank, at the Council on Foreign Relations, February 17, Washington, D.C., USA.
Kutchko, B.G. and Kim, A.G., 2006, 'Fly ash characterization by SEM-EDS', Fuel, Vol. 85(17-18), pp. 2537-2544.
Kuuskraa, V.A., Boyer, C.M. II, and Kelafant, J.A., 1992, 'Coalbed Gas-1: Hunt for Quality Basins Goes Abroad', Oil & Gas Journal, October, 80-85.
Kyoto Protocol, 1997, Kyoto Protocol to the United Nations Framework Convention on Climate Change, Conference of the Parties, Third Session, Kyoto, 1-10 December 1997.
Labuschagne, C., Brent, A.C., and Erck, R.P.G., 2005, Assessing the sustainability performances of industries. Journal of Cleaner Production, 13:373-385.
Lacey, J., 1990, "Isolation of thermophilic microorganisms", in Isolation of Biotechnological Organisms from Nature, Labeda, D.P. (ed.), New York, McGraw-Hill Publishing Co., 141-181.
Lähteenmäki, L., Grunert, K., Ueland, Ø., Åström, A., Arvola, A., and Bech-Larsen, T., 2002, Acceptability of genetically modified cheese presented as real product alternative, Food Quality and Preference, vol. 13, pp. 523-533.
Lakhal, S., H'mida, S., and Islam, R., 2005, A green supply chain for a petroleum company, Proceedings of the 35th International Conference on Computers and Industrial Engineering, Istanbul, Turkey, June 19-22, 2005, Vol. 2:1273-1280.
Lakhal, S.L., Khan, M.I., and Islam, M.R., 2006a, A framework for a green decommissioning of an offshore platform, in Proceedings of the 36th CIE Conference on Computers and Industrial Engineering, July, Taiwan, pp. 4345-4356.
Lakhal, S.Y. and H'Mida, S., 2003, A gap analysis for green supply chain benchmarking, in 32nd International Conference on Computers & Industrial Engineering, Volume 1, August 11-13, Ireland, pp. 44-49.
Lang, X., Dalai, A.K., Bakhshi, N.N., Reaney, M.J., and Hertz, P.B., 2001, Preparation and characterization of bio-diesels from various bio-oils. Bioresource Technology 80:53-62.
Lange, J.-P., 2002, Sustainable development: efficiency and recycling in chemicals manufacturing. Green Chem., 4:546-550.
Larrondo, L.E., Urness, C.M., and Milosz, G.M., 1985, Laboratory Evaluation of Sodium Hydroxide, Sodium Orthosilicate, and Sodium Metasilicate as Alkaline Flooding Agents, Society of Petroleum Engineers, SPE paper 13577, 307-315.
Lastella, G., Testa, C., Cornacchia, G., Notornicola, M., Voltasio, F., and Sharma, V.K., 2002, Anaerobic digestion of semi-solid organic waste: biogas production and its purification. Energy Conversion and Management 43:63-75.
Le Roux, N., 1987, "Going for gold with microbes", Chemical Eng., Jan.:432.
Leal Filho, W.L., 1999, Sustainability and university life: some European perspectives. In W. Leal Filho (ed.), Sustainability and University Life: Environmental Education, Communication and Sustainability (pp. 9-11). Berlin: Peter Lang.
Lean, G., 2007, Oil and gas may run short by 2015. The Independent, UK, http://environment.independent.co.uk/climate_change/article2790960.ece, 22 July 2007 (Accessed on 23 July 2007).
Leclercq, B., 2006, Beeswax, accessed September 22, 2006, from Beekeeping website: http://www.beekeeping.com/leclercq/wax.htm.
Lecomte du Noüy, P., 1919, J. Gen. Physiol., Vol. 1:521.
Lee, I., Johnson, L.A., and Hammond, E.G., 1995, Use of branched-chain esters to reduce the crystallization temperature of biodiesel. JAOCS 72(10):1155-1160.
Lee, S.T., Lo, H., and Dharmawardhana, B.T., 1988, 'Analysis of Mass Transfer Mechanisms Occurring in Rich Gas Displacement Process', SPE paper 18062 presented at the Annual Technical Meeting, Houston, Oct. 2-5.
Lee, S.C., Choi, B.Y., Lee, T.J., Ryu, C.K., Ahn, Y.S., and Kim, J.C., 2006, CO2 absorption and regeneration of alkali metal-based solid sorbents, Catalysis Today, 111:385-390.
Lee, T.R., 1996, Environmental stress reactions following the Chernobyl accident. In One Decade after Chernobyl: Summing Up the Consequences of the Accident, Proceedings of an International Conference, Vienna, STI/PUB/1001, IAEA, Vienna, 238-310.
Lehman-McKeeman, L.D. and Gamsky, E.A., 1999, Diethanolamine inhibits choline uptake and phosphatidylcholine synthesis in Chinese hamster ovary cells. Biochem. Biophys. Res. Commun. 262(3):600-604.
Lems, S., van der Kooi, H.J., and de Swaan Arons, J., 2002, The sustainability of resource utilization. Green Chem., Vol. 4:308-13.
Leontieff, W., 1973, Structure of the World Economy: Outline of a Simple Input-Output Formulation, Stockholm: Nobel Memorial Lecture, 11 December 1973.
Lerner, L., 2000, Good Science, Bad Science: Teaching Evolution in the States, The Thomas B. Fordham Foundation, Washington DC.
Lerner, L., 2005, Review of Creationism's Trojan Horse: The Wedge of Intelligent Design, in Physics & Society, A Forum of the American Physical Society, January.
Letcher, T.M. and Williamson, A., 2004, Forms and Measurement of Energy. Encyclopedia of Energy, 2:739-748.
Leung, D.Y.C. and Guo, Y., 2006, Transesterification of neat and used frying oil: optimization for biodiesel production. Fuel Processing Technology 87:883-890.
Lewis, R.J., Sr., 2002, Hawley's Condensed Chemical Dictionary, 14th Edition, New York: John Wiley and Sons.
Li, D.H.W. and Lam, J.C., 2004, Predicting solar irradiance on inclined surfaces using model for heat input and output of heat storing stoves, Applied Thermal Engineering, 25(17-18):2878-2890.
Li, D.H.W. and Lam, J.C., 2004, Predicting solar irradiance on inclined surfaces using sky radiance data, Energy Conversion and Management 45(11-12):1771-1783.
Li, T., Gao, J., Szoszkiewicz, R., Landman, U., and Riedo, E., 2007, Structured and viscous water in subnanometer gaps, Physical Review B, vol. 75, 115415, March 15, pp. 115415-1 to 115415-6.
Liberman, J., 1991, Light: Medicine of the Future: How We Can Use It to Heal Ourselves Now, Bear & Company, Inc., Santa Fe, NM, USA.
Lindzen, R.S., 2002, Global Warming: The Origin and Nature of the Alleged Scientific Consensus. Regulation: The Cato Review of Business and Government, http://eaps.mit.edu/faculty/lindzen/153_Regulation.pdf.
Lindzen, R.S., 2006, Climate Fear. The Opinion Journal, April 12, 2006, www.opinionjournal.com/extra/?id=110008220 (Accessed on June 30, 2006).
Liodakis, S., Katsigiannis, G., and Kakali, G., 2005, Ash Properties of Some Dominant Greek Forest Species, Thermochimica Acta, Vol. 437:158-167.
Liu, P., 1993, Introduction to Energy and the Environment. Van Nostrand Reinhold, New York.
Livingston, R.J. and Islam, M.R., 1999, "Laboratory modeling, field study and numerical simulation of bioremediation of petroleum contaminants", Energy Sources, vol. 21(1/2), 113-130.
Logan, R.K., 1986, The Alphabet Effect: The Impact of the Phonetic Alphabet on the Development of Western Civilization, St. Martin's Press, New York.
Losey, J.E., Obrycki, J.J., and Hufbauer, R.A., 2004, Biosafety Considerations for Transgenic Insecticidal Plants: Non-Target Herbivores, Detritivores, and Pollinators, Encyclopedia of Plant and Crop Science, Marcel Dekker, Inc., New York, pp. 153-155.
Low, N.M.P., Fazio, P., and Guite, P., 1984, Development of light-weight insulating clay products from the clay-sawdust-glass system. Ceramics International 10(2):59-65.
Lowe, E.A., Warren, J.L., and Moran, S.R., 1997, Discovering Industrial Ecology: An Executive Briefing and Sourcebook. Columbus: Battelle Press.
Lowy, J., 2004, Plastic left holding the bag as environmental plague: nations around world look at a ban.
Lozada, D. and Farouq Ali, S.M., 1988, 'Experimental Design for Non-Equilibrium Immiscible Carbon Dioxide Flood', Fourth UNITAR/UNDP International Conference on Heavy Crude and Tar Sands, Edmonton, August 7-12, paper no. 159.
Lu, Y., Zhang, Y., Zhang, G., Yang, M., Yan, S., and Shen, D., 2004, "Influence of thermal processing on the perfection of crystals in polyamide 66 and polyamide 66/clay nanocomposites", Polymer, Elsevier, Vol. 45, Issue 26, pp. 8999-9009.
Lubchenco, J.A., et al., 1991, The sustainable biosphere initiative: an ecological research agenda. Ecology 72:371-412.
Lumley, S. and Armstrong, P., 2004, "Some of the Nineteenth Century Origins of the Sustainability Concept", Environment, Development and Sustainability, vol. 6, no. 3, Sept., 367-378.
Lunder, S. and Sharp, R., 2003, Mother's milk: record levels of toxic fire retardants found in American mothers' breast milk. Environmental Working Group, Washington, USA.
Ma, F. and Hanna, M.A., 1999, Biodiesel production: a review, Bioresource Technology, 70:1-15.
McCallum, M.F. and Guhathakurta, K., 1970, The Precipitation of Calcium Carbonate from Seawater by Bacteria Isolated from Bahama Bank Sediments, Journal of Applied Bacteriology, vol. 33, pp. 649-655.
MacLeod, F.A., Lappin-Scott, H.M., and Costerton, J.W., 1988, Plugging of a Model Rock System by Using Starved Bacteria, Applied and Environmental Microbiology, vol. 51, pp. 1365-1372.
Makeig, K., 2002, Funding the Future: Setting Our S&T Priorities, Technology in Society, vol. 24, pp. 41-47.
Malcolm, P., 1998, Polymer Chemistry: An Introduction, Oxford University Press, London, p. 3.
Mallinson, R.G., 2004, Natural Gas Processing and Products. Encyclopedia of Energy, Vol. IV, Elsevier, Oklahoma, USA, pp. 235-247.
Mancktelow, N.S., 1989, 'The rheology of paraffin wax and its usefulness as an analogue for rocks', Bulletin of the Geological Institutions of the University of Uppsala, Vol. 14, pp. 181-193.
Mann, H., 2005, Personal communication, Professor, Civil Engineering Department, Dalhousie University, Halifax, Canada.
Manning, D.G., 1996, Corrosion Performance of Epoxy-Coated Reinforcing Steel: North American Experience. Construction and Building Materials 10(5), p. 349.
Manser, C.E., 1996, 'Effects of lighting on the welfare of domestic poultry: A review', Animal Welfare, Volume 5, Number 4, pp. 341-360.
Mansoori, G.A., 1997, "Modeling of Asphaltene and Other Heavy Organic Depositions", J. Pet. Sci. Eng., vol. 17, 101-111.
Mansour, E.M.E., Abdel-Gaber, A.M., Abd-El-Nabey, B.A., Khalil, N., Khamis, E., Tadros, A., Aglan, H., and Ludwick, A., 2003, Corrosion 59(3), p. 242.
Market Development Plan, 1996, Market Status Report: Postconsumer Plastics, Business Waste Reduction, Integrated Waste Management Board, Public Affairs Office, California.
Markiewicz, G.S., Losin, M.A., and Campbell, K.M., 1988, The Membrane Alternative for Natural Gas Treating: Two Case Studies. SPE 18230, 63rd Annual Technical Conference and Exhibition of the Society of Petroleum Engineers, Houston, TX, October 2-5.
Marsden, J., 1992, The Chemistry of Gold Extraction. Ellis Horwood Ltd., 221-235.
Martinot, E., 2005, Renewable Energy Policy Network for the 21st Century: Global Renewables Status Report, prepared for the REN21 Network by the Worldwatch Institute.
Martinot, E., 2005, Renewable Energy Policy Network for the 21st Century. Global Renewables Status Report Prepared for the REN 21 Network by the Worldwatch institute. Marx, K. 1883, Capital: A critique of political economy Vol. II: The Process of Circulation of Capital, London, Edited by Frederick Engels. Maske, J. 2001, Life in PLASTIC, it's fantastic, GEMINI, Gemini, NTNU and SINTEF Research News, N-7465 Trondheim, Norway. Maskell, K, 2001, Climate Change: The Scientific Basis. Technical Summary. Cambridge University Press, Cambridge, UK. Matsuoka, K., Iriyama, Y, Abe, T, Matsuoka, M., Ogumi, Z., 2005, Electrooxidation of Methanol and Ethylene Glycol on Platinum in Alkaline Solution: Poisoning Effects and Product Analysis. Electrochimica Ada Vol.51:1085-1090. Matsuyama, H., Teramoto, M. and Sakakura, H., 1996, Selective Permeation of C0 2 through polyf 2-(N,N-dimethyl)aminoethyl methacrylate^ Membrane Prepared by Plasma-Graft Polymerization Technique, /. Membr. Sei. 114 (1996) 193-200. Mayer, E.H., Berg, Carmichael, R.L., and Weinbrandt, R.M., 1983, Alkaline Injection for Enhanced Oil Recovery-A Status Report, journal of Petroleum Technology, 209-221. McCarthy, B.J., Greaves, PH., 1988, Mildew-causes, detection methods and prevention. Wool Science Review, Vol. 85,27-48. McHugh, S., Collins, G., and Flahert,. V.O'., 2006, Long-term, high-rate anaerobic biological treatment of whey wastewaters at psychrophilic temperatures, Bioresource Technology 97(14):1669-1678. MEA (Millennium Ecosystem Assessment), 2005, The millennium ecosystem assessment, Commissioned by the United Nations, the work is a four-year effort by 1,300 scientists from 95 countries. Meher, L.C., Vidya Sagar, D., Naik, S.N., 2004, Technical aspects of biodiesel production by transesterification-a review. Renewable and sustainable energy review. 1-21. Merriman, B. and Burchard, P., 1996, An Attempted Replication of CETI Cold Fusion Experiment, 1996, http://www.lenr-canr.org/PDetail6. htm#2029. Metivier, H., 2007, Update of Chernobyl: Ten Years On. Assessment of Radiological and Health Impacts. Nuclear Energy Agency, Organisation For Economic Co-Operation and Development, 2002. h t t p : / / w w w . nea.fr/html/rp/ reports/2003/nea3508-chernobyl.pdf (Acessed on December 17,2007). Miao, X. and Wu., Q., 2006, Biodiesel production from heterotrophic microalgal oil: Bioresource Technology, Vol. 97, (6):841-846. Miller, G., 1994, Living in the Environment: Principles, Connections and Solutions. California: Wadsworth Publishing.
Miralai, S., Khan, M.M., and Islam, M.R., 2007, Replacing artificial additives with natural alternatives, J. Nat. Sci. Sust. Tech., vol. 1, no. 2.
Mita, K., Ichimura, S., and James, T., 1994, "Highly repetitive structure and its organization of the fibroin gene", Journal of Molecular Evolution, V. 38, pp. 583-592.
Mittelstaedt, M., 2006, Toxic Shock, 5-part series, Globe and Mail, May 27 to June 1.
Mittelstaedt, M., 2006a, Chemical used in water bottles linked to prostate cancer, The Globe and Mail, Friday, 09 June 2006.
Mittelstaedt, M., 2007, "Vitamin D casts cancer prevention in new light", The Globe and Mail [Toronto], Saturday, April 28, page A1.
Mittelstaedt, M., 2008, Coalition of public health and environmental advocates says federal government hasn't gone far enough in regulating controversial chemical, Globe and Mail, Dec. 16.
Moffet, A.S., 1994, "Microbial mining boosts the environment, bottom line", Science, 264, May 6.
Moire, L., Rezzonico, E., and Poirier, Y., 2003, "Synthesis of novel biomaterials in plants", Journal of Plant Physiology, V. 160, pp. 831-839.
Molero, C., Lucas, A.D., and Rodriguez, J.F., 2006, Recovery of polyols from flexible polyurethane foam by "split-phase" glycolysis: glycol influence. Polymer Degradation and Stability, Vol. 91:221-228.
Mollet, C., Touhami, Y., and Hornof, V., 1996, A Comparative Study of the Effect of Ready-Made and In-Situ Formed Surfactants on IFT Measured by Drop Volume Tensiometry, J. Colloid Interface Sci., Vol. 178:523.
Morgan, J., Townley, S., Kemble, G., and Smith, R., 2002, 'Measurement of physical and mechanical properties of beeswax', Materials Science and Technology, Vol. 18(4), pp. 463-467.
Moritis, G., 1994, 'EOR Dips in US but Remains a Significant Factor', Oil and Gas J., 51-79, Sept. 26.
Moritis, G., 1998, 'EOR Oil Production Up Slightly', Oil and Gas Journal, April 20, 49-56.
Moritis, G., 2004, Point of View: EOR Continues to Unlock Oil Resources. Oil and Gas Journal, Vol. 102(14):45-49.
Morrow, H., 2001, Environmental and human health impact assessments of battery systems. Industrial Chemistry Library 10:1-34.
Mortimer, N., 1989, Friends of the Earth, Vol. 9. Cited in: Nuclear Power and Global Warming by Thompson, B., 1997, http://www.seaus.org.au/powertrip.html (accessed on November 10, 2007).
Mossop, G. and Shetsen, I., 1994, 'Geological Atlas of the Western Canada Sedimentary Basin', Cdn. Soc. Pet. Geol. and Alberta Research Council, Calgary, Alberta.
MSDS (Material Safety Data Sheet), 2006, Canadian Centre for Occupational Health and Safety, 135 Hunter Street East, Hamilton, ON, Canada L8N 1M5.
MSDS, 2005, Ethylene Glycol Material Safety Data Sheet, www.sciencestuff.com/msds/C1721.html.
MSDS, 2006, Material Safety Data Sheet for Ethylene Glycol, www.jtbaker.com/msds/englishhtml/o8764.htm (Accessed on January 28, 2008).
MTR, 2007, Membrane Technology and Research: Natural Gas Liquids Recovery/Dewpoint Control, http://www.mtrinc.com/natural_gas_liquids_recovery.html (accessed on 8 August 2006).
Mudd, G.M., 2000, Remediation of Uranium Mill Tailings Wastes in Australia: A Critical Review. In Contaminated Site Remediation: From Source Zones to Ecosystems, CSRC, Melbourne, Vic., 4-8 Dec. 2000.
Mundy, B., 1989, Distant Action in Classical Electromagnetic Theory, British Journal for the Philosophy of Science 40(1):39-68.
Munger, C., 1999, Corrosion Prevention by Protective Coatings. NACE International, Houston, USA.
Munger, C.G., 1990, VOC-Compliant Inorganic Zinc Coating. Material Performance, October, p. 27.
Munger, C.G., 1992, Coating Requirements for Offshore Structures. Material Performance, June, p. 36.
Muntasser, Z., 2002, The Use of Coatings to Prevent Corrosion in Harsh Environments. M.Sc. Thesis, Dalhousie University, Halifax, NS, Canada.
Muntasser, Z.M., Al-Darbi, M.M., and Islam, M.R., 2001, Prevention of Corrosion in a Harsh Environment Using Zinc Coating. SPE Production Conference, Oklahoma, USA.
Murphy, M., 2003, Technical Developments in 2002: Organic Coatings, Processes, and Equipment. Metal Finishing 101(2), p. 47.
Murrell, J.C., 1994, Molecular genetics of methane oxidation. Biodegradation 5:145-149.
Mustafiz, S., 2002, A Novel Method for Heavy Metal Removal from Aqueous Streams, M.A.Sc. Dissertation, Dalhousie University, Canada.
Nabi, M.N., Akhter, M.S., and Shahadat, M.M.Z., 2006, Improvement of engine emissions with conventional diesel fuel and diesel-biodiesel blends. Bioresource Technology, Vol. 97:372-378.
Narayan, R., 2004, Drivers & rationale for use of biobased materials based on life cycle assessment (LCA). GPEC 2004 Paper.
NASA, 1999, Biomass Burning and Global Change, http://asd-www.larc.nasa.gov/biomass_burn/biomass_burn.html.
NASA/ESA, 2004, 'Sun's storms create spectacular light show on earth', NOAA News, National Oceanic and Atmospheric Administration (NOAA), United States Department of Commerce, accessed March 5, 2008, http://www.noaanews.noaa.gov/stories2004/s2337.htm.
Natural Gas Org., 2004, www.naturalgas.org/naturalgas/processing_ng.asp (accessed on August 08, 2007).
Natural Gas Org., 2004, Overview of Natural Gas, http://www.naturalgas.org/overview/background.asp (May 8, 2008).
Natural Resources Canada, 2004, Publications and Software, www.canren.gc.ca/prod_serv/index.asp?CaId=196&PgId=1309 (accessed on August 23, 2007).
Natural Resources Canada, 1998, Alberta Post-Consumer Plastics Recycling Strategy. Texas, Society of Petroleum Engineers, 1997.
Naylor, R.H., 1980, "Galileo's Theory of Projectile Motion", Isis, Vol. 71, No. 4 (December), 550-570.
Naylor, R.H., 1976, "Galileo: Real Experiment and Didactic Demonstration", Isis, Vol. 67, No. 3 (September), 398-419.
Naylor, R.H., 1990, "Galileo's Method of Analysis and Synthesis", Isis, Vol. 81, No. 4 (December), 695-707.
NEA and IAEA, 2005, Uranium 2005: Resources, Production and Demand. OECD Publishing / International Atomic Energy Agency (IAEA), 388 pp., ISBN: 9264024255.
NEA-OECD, 2003, Nuclear Electricity Generation: What Are the External Costs? Nuclear Development, ISBN 92-64-02153-1.
Neep, J.P., 1995, 'Robust Estimation of P-wave Attenuation from Full Waveform Array Sonic Data', J. Seismic Exploration, vol. 4, 329-344.
Nichols, C., Anderson, S., and Saltzman, D., 2001, A Guide to Greening Your Bottom Line Through a Resource-Efficient Office Environment. City of Portland, Office of Sustainable Development, Portland.
Nikiforuk, A., 1990, Sustainable Rhetoric. Harrowsmith, 14-16.
Nivola, P.S., 2004, The Political Economy of Nuclear Energy in the United States. The Brookings Institution, Policy Brief #138, www.brookings.edu/comm/policybriefs/pb138.htm (Accessed on January 15, 2007).
NOAA, 2005, Greenhouse Gases, Global Monitoring Division, Earth System Research Laboratory, National Oceanic and Atmospheric Administration, USA.
NOAA, 2005, Trends in Atmospheric Carbon Dioxide. NOAA-ESRL Global Monitoring Division, www.cmdl.noaa.gov/ccgg/trends/ (accessed on June 04, 2006).
Norris, P., Nixon, A., and Hart, A., 1989, "Acidophilic, mineral-oxidizing bacteria: The utilization of carbon dioxide with particular reference to autotrophy in Sulfolobus", in Microbiology of Extreme Environments and its Potential for Biotechnology, FEMS Symposium No. 49, Da Costa, M.S., Duarte, J.C., and Williams, R.A.D. (eds.), 24-39.
Novosad, Z. and Costain, T.G., 1988, 'New Interpretation of Recovery Mechanisms in Enriched Gas Drives', Journal of Canadian Pet. Tech., volume 27, 54-60.
Novosad, Z. and Costain, T.G., 1989, 'Mechanisms of Miscibility Development in Hydrocarbon Gas Drives: New Interpretation', SPE Res. Eng., 341-347.
NTP (National Toxicology Program), 1993, NTP technical report on the toxicology and carcinogenesis studies of ethylene glycol (CAS No. 107-21-1) in B6C3F1 mice (feed studies). National Toxicology Program, US Department of Health and Human Services, NIH Publication 93-3144.
Obernberger, I., Biedermann, F., Widmann, W., and Riedl, R., 1997, Concentrations of Inorganic Elements in Biomass Fuels and Recovery in the Different Ash Fractions, Biomass and Bioenergy, Vol. 12(3):211-224.
Obrycki, J., Losey, J., Taylor, O., and Jesse, L., 2001, Transgenic Insecticidal Corn: Beyond Insecticidal Toxicity to Ecological Complexity, BioScience, May, vol. 51, no. 5, pp. 353-361.
OECD, 1998, Towards Sustainable Development: Environmental Indicators. Paris: Organization for Economic Cooperation and Development, 132 pp.
OECD, 1993, Organization for Economic Cooperation and Development Core Set of Indicators for Environmental Performance Reviews. A synthesis report by the Group on State of the Environment, Paris.
Office of Energy Efficiency and Renewable Energy, 2004, Biodiesel Analytical Methods, Report NREL/SR-510-36240, 100 pp.
Oldenburg, C.M., Pruess, K., and Benson, S.M., 2001, Process Modeling of CO2 Injection into Natural Gas Reservoirs for Carbon Sequestration and Enhanced Gas Recovery. Energy Fuels, 15(2), 293-298.
Olsson, M. and Kjällstrand, J., 2004, Emissions from burning of softwood pellets, Biomass and Bioenergy, Vol. 27, No. 6:607-611.
Omer, A.M. and Fadalla, Y., 2003, Biogas energy technology in Sudan, Renewable Energy 28:499-507.
OSHA, 2003, Petroleum refining processes. OSHA Technical Manual, Section IV, Chapter 2, http://www.osha.gov/dts/osta/otm/otm_iv/otm_iv_2.html (accessed on June 18, 2008).
Osuji, L.C. and Onojake, M., 2004, Trace heavy metals associated with crude oil: A case study of Ebocha-8 oil-spill-polluted site in Niger Delta, Nigeria, Chemistry and Biodiversity, vol. 1, issue 11, 1708-1715.
Ott, J.N., 2000, Health and Light: The Effects of Natural and Artificial Light on Man and Other Living Things, Pocket Books, New York, NY, USA.
Oyarzun, P., Arancibia, F., Canales, C., and Aroca, E.G., 2003, Biofiltration of high concentration of hydrogen sulphide using Thiobacillus thioparus. Process Biochemistry 39:165-170.
Patin, S., 1999, Environmental Impact of the Offshore Oil and Gas Industry, EcoMonitor Publishing, East Northport, New York, 425 pp.
Paul, D.B. and Spencer, H.G., 2008, It's OK, We're Not Cousins by Blood: The Cousin Marriage Controversy in Historical Perspective, PLoS Biol 6(12):e320, doi:10.1371/journal.pbio.0060320.
Pearson, K., 1892, The Grammar of Science. London: Walter Scott.
Peart, J. and Kogler, R., 1994, Environmental Exposure Testing of Low VOC Coatings for Steel Bridges, Journal of Protective Coatings and Linings, January, p. 60.
Pershing, J. and Philibert, C., 2002, Promises and Limits of Financial Assistance and the Clean Development Mechanism, in Beyond Kyoto: Energy Dynamics and Climate Stabilization. Paris: International Energy Agency, 94-98.
Peters, M.S. and Timmerhaus, K.D., 1991, Plant Design and Economics for Chemical Engineers, Fourth Edition, McGraw-Hill, Inc., New York, USA.
Peterson, B.E., 2008, Oregon State University, internet lecture, last accessed Nov. 12, 2008.
Peterson, C.L., Reece, D.L., Hammond, B.L., Thompson, J., and Beck, S.M., 1997, Processing, characterization and performance of eight fuels from lipids. Appl. Eng. Agric. 13(1):71-79.
Pham-Delegue, M.H., Jouanin, L., and Sandoz, J.C., 2002, Direct and Indirect Effects of Genetically Modified Plants on the Honey Bee, in Honey Bees: Estimating the Environmental Impact of Chemicals, ed. P. Lurquin, Taylor & Francis, vol. 15, pp. 312-326.
Piipari, R., Tuppurainen, M., Tuomi, T., Mantyla, L., Henriks-Eckerman, M.L., Keskinen, H., and Nordman, H., 1998, Diethanolamine-induced occupational asthma, a case report, Clin. Exp. Allergy 28(3):358-362.
Piro, G., Canonico, L.B., Galbariggi, G., Bertero, L., and Carniani, C., 1995, "Experimental Study on Asphaltene Adsorption onto Formation Rock: An Approach to Asphaltene Formation Damage Prevention", SPE 30109, Proc. European Formation Damage Conf., The Hague.
Plastic Task Force, 1999, Adverse Health Effects of Plastics.
830
R E F E R E N C E S A N D BIBLIOGRAPHY
Pokharel,G.R., Chhetri, A.B., Khan,M.I., and Islam, M.R.,2006, Decentralized micro hydro energy systems in Nepal: en route to sustainable energy development, Energy Sources: in press. Polsby, E., 1994, Marketplace: what to do when the lights go out. Home energy Magazine online, November/December 1994: http://www. homeenergy.org/eehem/94/941115.html (accessed on March 24,2008). Pope, D.H. and Morris III E.A., 1995, "Mechanisms of microbiologically induced corrosion (MIC)", Materials Performance, vol. 34, May, pp.24-28. Postek, M.T., Howard, K.S., Johnson, A.H., and McMichael, K.L., 1980, Scanning electron microscopy: A student's handbook, Burlington: Ladd Research Industries. Prescott, N.B., Kristensen, H.H., and Wathes, CM., 2004, 'Light'. In: Weeks, C. and Butterworth, A. (Editors.), Measuring and Auditing Broiler Welfare. CABI Publishing, Wallingford, UK, pp. 101-116. Puri, R. and Yee, D. 1990, 'Enhanced Coalbed methane recovery,' SPE paper 20732 presented at the 65th Annual Technical Conference and Exhibition, New Orleans, U.S.A., September 23-26. Putin, S., 1999. Environmental impact of the offshore oil and gas indiistry. EcoMonitor Publishing, East Northport, New York. 425 pp. Putthanarat, S., Eby, R.K., Rajesh R.N., Shane B.J., Walker M.A., Peterman, E., Ristich, S., Magoshi, M., Tanaka, T., Stone, M.S., Farmer, B.L., Brewer, C , Ott, D.,2004, "Nonlinear optical transmission of silk/green fluorescent Protein (GFP) films", Journal of polymer, Elsevier, V. 45, Issue 25, pp.8451-8457. Putthanarat, S., Zarkoob, S., Magoshi, }., Chen, J.A., Eby, R.K., Stone, M. and Adams,W.W.,2002 "Effect of processing temperature on the morphology of silk membranes" Journal of polymer, V.43, Issue 12, pp. 3405-3413. Quine, W.V., 1937, Review of Harold Jeffreys, Scientific Inference (Reissued with additions, pp. vii; 272. Cambridge UP 1937 [1st pub 1931]), in Science [New Series] 86(2243): 590. Radich, 2006, Biodiesel performance, costs, and use. US Energy Information Administration website, /http://www.eia.doe.gov/oiaf/analysispaper/biodiesel/ index.htmlS. Ragheb, M.Chernobyl Accidents, 2007, https://netfiles.uiuc.edu/ mragheb/www/NPRE%20402%20ME%20405%20Nuclear%20 Power%20Engineering/Chernobyl%20 Accident.pdf. Rahbar S., Khan, M.M., Satish, E. A., Ma, F. and Islam, M.R.,2005, Experimental & numerical studies on natural insulation materials, ASME Congress, 2005, Orlando, Florida, Nov 5-11,2005, IMECE2005-82409. Rahman, M.H., Wasiuddin, N., and Islam, M.R., 2004, Experimental and Numerical Modeling Studies of Arsenic Removal with Wood Ash from Aqueous Streams, Can. ]. Chem. Eng., Vol. 82(5):968-977.
R E F E R E N C E S A N D BIBLIOGRAPHY
831
Rahman, M.H., Wasiuddin, N.M. and Islam, M.R., 2004, Experimental and Numerical Modeling Studies of Arsenic Removal with Wood Ash from Aqueous Streams. The Canadian journal of Chemical Engineering, 82:968-977. Rahman, M.S., 2006, Effect of natural alkaline solution on EOR during chemical flooding, Project Report, Department of Civil and Resource Engineering, Dalhousie University, Canada, pp. 28. Rahman, M.S., 2007, The Prospect of Natural Additives in Enhanced Oil Recovery and Water Purification Pperations, M.A.Sc. Dissertation, Dalhousie University, Canada. Rahman, M.S. and Islam, M.R., 2007, Physico-Chemical Characterization of Ashes from Acer nigrum by SEM, XRD and NMR technique, Int. }. Materials and Product Technology accepted. Rahman, M.S., 2007, The Prospect of Natural Additives in Enhanced Oil Recovery and Water Purification Operations. M.A.Sc Thesis, Faculty of Engineering, Dalhousie University, Halifax, Canada. Rahman, M.S., Hossain, M.E., and Islam, M.R., 2006, An EnvironmentFriendly Alkaline Solution for Enhanced Oil Recovery, JPST in press, Ref# PET/06/076. Raily, K., 2007, Hemptons. www.hemptons.co.za/Users/seeds/htm (Accessed on October 20, 2007). Ramakrishnan, T.S., and Wasan, D.T., 1983, A Model for Interfacial Activity of Acidic Crude Oil-Caustic Systems for Alkaline Flooding, SPE Journal, SPE-10716: 602-618. Ramesh, C , Keller, A., 1994, Eltink SJEA. Journal of Polymer, V.35, pp.5293-9. Randall, W., 1999, Technical handbook for marine biodiesel in recreational boats. Prepared for report, prepared by system lab services, a division of Williams pipe Lines Company. Rangaswamy, N., Vedhalakshmi R., and Balakrishnan, K., 1995, Evaluation of Coated Rebar Validity of Short-Term Accelerated Corrosion Tests in Relation to Long-Term Field Evaluation. ACM & M (vol. 42), p. 7. Rao, M.B. and Sircar, S., 1996, Performance and pore characterization of nanoporous carbon membranes for gas separation, journal of Membrane Science, 3(7), 109-18. Rao, M.B. and Sirkar, S., 1993, Liquid-phase adsorption of bulk ethanolwater mixtures by alumina. Adsorption Science and Technology, 10(1-4), 93-104. Rao, M.B., Sircar, S., and Golden, T.C., 1992, Gas separation by adsorbent membranes, US Patent, 5,104, 425. Rao, P., Ankam, S., Ansari, M., Gavane, A.G., Kumar, A., Pandit, V.l., and Nema, P., 2005, Monitoring of hydrocarbon emissions in a petroleum refinery. Environmental Monitoring and Assessment, 2005, 108:123-132.
832
R E F E R E N C E S A N D BIBLIOGRAPHY
Rao, S and Parulekar, B. B.,1999, Renewable Technology. Non Conventional, Renewable and Conventional. Khanna Publisher, Delhi, India. ISBN NO: 81-7409-040-1. Rassamdana, H., Mirzaee, N., Mehrabi, A.R, and Sahimi, M., "Field-Scale Asphalt Precipitation During Gas Injection into a Fractured Carbonate Reservoir", SPE 38313, SPE Western Regional Meeting, June, Long Beach, CA, 1997. Ray, R., Little, B., Wagner, P., and Hart. K., 1997, Scanning 19, p. 98. Rechtsteiner, G.A. and Ganske, J.A., 1998, 'Using Natural and Artificial Light Sources to Illustrate Quantum Mechanical Concepts', The Chemical Educator, Volume 3, Number 4, pp. 1-12. Rees, W., 1989, Sustainable development: myths and realities. Proceedings of the Conference on Sustainable Development Winnipeg, Manitoba: USD. Regert, M., Langlois,}., and Colinart,S., 2005, 'Characterization of wax works of art by gas Chromatographie procedures', Journal of Chromatograph]/ A, Vol. 1091, pp. 124-136. Reis, J.C., 1996, Environmental control in petroleum engineering. Gulf Publishing Company, Houston, Texas. Renton, JJ. and Brown, H.E., 1995, An Evaluation of Fluidized Bed Combustor Ash as a Source of Alkalinity to Treat Toxic Rock Materials, Engineering Geology, Vol. 40:157-167. Rice, D.D., Law, B.E.,and Clayton, J.L., 1993, 'Coal-bed Gas-an Undeveloped Resource. In the Future of Energy Gases', US Geological Survey professional paper no. 1570, United States Government Printing office, Washington, DC, 389-404. Ridgeway, James. 2000 "Eco Spaniel Kennedy: Nipping at Nader's Heels," Village Voice. New York, 16-22 August. Riemer, P. 1996, 'Greenhouse Gas Mitigation Technologies, an Overview of the C 0 2 Capture, Storage and Future Activities of the IEA Greenhouse Gas R&D Program', Energy Conversion and Management, volume 37, nos. 6-8,665-670. Robinson, J.G., 1993, The limits to caring: sustainable living and the loss of biodiversity. Conservation Biology 7:20-28. Robinson, R.J., Burseil, C.G., and Restine, J.L., 1977, A Caustic Steam flood Pilot-Kern River Field, paper SPE 6523 presented at SPE AIME 47'h Annual California Regional Meeting, 13akerafield,California, April 13-15. Roger, A.K. and Jaiduk, J.O., 1985, A rapid engine test to measure injector fouling in diesel engines using vegetable oil fuels. /. Am. Oil Chem. Soc.62(ll):1563-4. Rojas, G. and Farouq Ali, S.M., 1985, 'Dynamics of sub-critical C 0 2 Brine Floods for Heavy Oil Recovery', SPE paper 13598 presented at the SPE California Regional Meeting, Bakersfield, March. Rojas, G., Dyer, S., Thomas, S., and Farouq Ali, S.M., 1995, 'Scaled Model Studies of CO z Floods', SPE Res. Eng. Vol. 10, no. 3, May, 169-178.
R E F E R E N C E S A N D BIBLIOGRAPHY
833
Rojey, A., Jaffret, C , Cornot-Gandolphe, S., Durand, B., Jullian, S., and Valais, M., 1997, "Natural Gas Production processing and Transport", Editions Teclmip, 252-276. Roth, S.H., 1993, Hydrogen sulfide. In Handbook of Hazardous Materials. New York, NY: Academic Press. Rudner, R., 1961, An Introduction to Simplicity, Philosoph}/ of Science 28(2): 109-119. Rybak, L.R, 1981, Cis-platinum associated hearing loss, Journal of Laryngology & Otology, 95, 745-747. Rydh, C.J. and Sande, B.A., 2005, Energy analysis of batteries in photovoltaic systems Part II: Energy return factors and overall battery efficiencies. Energy Conversion and Management 46:1980-2000. Rywotycki, R., 2002, The effect of fat temperature on heat energy consumption during frying of food, journal of food engineering, 54:257-261. Saastamoinen, J., Tuomaala, P., Paloposki, T., and Klobut, K., 2005, Simplified dynamic model for heat input and output of heat storing stoves, Applied Thermal Engineering, 25(17-18):2878-2890. Saastamoinen, J., Tuomaala, P., Paloposki, T., and Klobut, K., 2005, Simplified dynamic. Saeed, N.O., Al-Darbi, M.M., and Islam, M.R., 2003, Canadian Society for Civil Engineering 31st Annual Conference. Moncton, NB, Canada, paper code: GCR-535. Saeed, N.O., Ajijolaiya, L.O., Al-Darbi, M.M., and Islam, M.R., 2003, Mechanical properties of mortar reinforced with hair fibre. In Proc. Oil and Gas Symposium, CSCE Annual Conference, Moncton. Saeed, N.O., Al-Darbi, M.M., and Islam, M.R., 2003, "Antibacterial Effects of Natural Materials On Shewanella Puterfaciens", Proc. Oil and Gas Symposium, CSCE Annual Conference, refereed proceeding, Moncton, June. Saha, S. and Chakma, A., 1992, An energy effi cient mixed solvent for the separation of CO,. Energy Conversion Management, 33,413. Sahimi, M., Mehrabi, A.R., Mirzaee, N., and Rassamdana, H.,2000, "The effect of asphalt precipitation on flow behavior and production of a fractured carbonate oil reservoir during gas injection", Transport in Porous Media, vol. 41, no. 3, Dec, 325-347. Sahimi, M., Rasasmdana, H., and Dabir, B., 1997, "Asphalt Formation and Precipitation: Experimental studies and Theoretical Modeling", SPE], vol. 2, June, 157-169. Saito, K., Ogawa, M., Takekuma, M., Ohmura, A., Migaku Kawaguchi a, Rie Ito a, Koichi Inoue a, Yasuhiko Matsuki c, Hiroyuki Nakazawa, 2005, Systematic analysis and overall toxicity evaluation of dioxins and hexachlorobenzene in human milk, Chemosphere, Vol. 61:1215-1220. Saka, S., and Kudsiana, D., 2001, Methyl esterification of free fatty acids of rapeseed oil as treated in supercritical methanol. J Chem Eng Jpn, 34(3):373-387.
834
R E F E R E N C E S A N D BIBLIOGRAPHY
SAL, 2006, Soil Acidity and Liming: Internet Inservice Training, Best Management Practices for Wood Ash Used as an Agricultural Soil Amendment, accessed: June 7, 2006. Salbu, B. Janssens, K., Lind, O.C., Proost, K., Gijsels L., Danesi, PR., 2005, Oxidation States of Uranium in Depleted Uranium Particles from Kuwait. Journal of Environmental Radioactivity 2005, 78:125-135. Samuel, E. and Steinman, D., 1995, The Safe Shopper's Bible: A Consumer's Guide to Nontoxic Household Products, Cosmetics and Food, Macmillan Publishers, New York. Santagata, D., Sere, P., Eisner, C , and Di Sarli, A., 1998, Evaluation of the Surface Treatment Effect of the Corrosion Performance of Paint Coated Carbon Steel. Progress in Organic Coatings 33, p. 44. Sarwar, M., and Islam, M., 1996, "Non-Fickian Surface Excess Model for Chemical Transport Through Fractured Porous Media", Chem. Eng. Comm., vol. 160,1-34. Schewe, P.F. and Stein, B., 1999, Light has been Slowed to a Speed of 17 m / s , American Institute of Physics, Bidletin of Physics News, No. 415, February 18. Schlesinger, G. 1959, The Principle of Simplicity and Verifiability, Philosophy of Science 26(1): 41-42. Schlomach, J., Quarch, K., and Kind, M., 2006, Investigation of Precipitation of Calcium Carbonate at High Supersaturations, Chem. Eng. Technol, vol. 29, No. 2, pp 215-219. Schroeder, D.V., 2003, Radiant Energy, online chapter for the course, Energy, Entropy, and Everything, Physics Department, Weber State University, accessed March 5, 2008, http://physics.weber.edu/schroeder/eee/ chapter6.pdf. Schubert, D., 2005, Regulatory Regimes for Transgenic Crops, Nature Biotechnology, vol. 23, pp. 785-787. Schuchardt, U., Sercheli, R., Vargas, R.M, 1998, Transesterification of vegetable oils: a review. / Braz Chem Sco, 9(1):199-210. Seo, K-S., Han, C , Wee, J-H., Park, J-K., Ahn, J-W., 2005, Synthesis of Calcium Carbonate in a Pure Ethanol and Aqueous Ethanol Solution as the Solvent, Journal of Crystal Growth, vol. 276, pp 680-687. Sercu, B., Nunez, D., Langenhove, V.H., Aroca, G., and Verstraete, W., 2005, Operational and microbiological aspects of a bioaugmented two-stage biotrickling filter removing hydrogen sulfide and dimethyl sulfide. Biotechnology and Bioengineering 90(2):259-269. Service, R.F., 2005, Is it time to shoot for the sun? Science, 309:549-551. Sh, A.M.M., 2006, Murugappa Chettiar Research Centre, Photosynthesis and Energy Division, Tharamani, Madras- 600 113, India. Shapiro, R., Zatzman, G.M. and Mohiuddin, Y., 2007, Towards Understanding the Science of Disinformation: Lies, and Public-Opinion
R E F E R E N C E S A N D BIBLIOGRAPHY
835
Polls, Journal of Nature Science and Sustainable Technology, vol. 1, No. 3, pp. 471-504. Shastri, C.M., Sangeetha, G., and Ravindranath, N.H., 2002, Dissemination of Efficient. Shastri, C.M., Sangeetha, G., and Ravindranath, N.H., 2002, Dissemination of Efficient ASTRA stove: case study of a successful entrepreneur in Sirsi, India. Energy for Sustainable Development. Volume VI., No. 2. Shaw, J., 2002, The Global Experiment, Harvard Magazine, Nov-Dec, 2002. Sheehan, J., Dunahay, T, Benemann, J., Roessler, P., 1998, A Look Back at the U.S. Department of Energy's Aquatic Species Program—Biodiesel from Algae. NREL/TP-580-24190. Shekhovtsov, G.A. and Shekhovtsov, B.A., 1970, The influence of the reflecting powers of on the ranges of optical instruments, journal of Mining Science, 6(1):122-123. Shimada, H., 1996, Study on supported binary sulfide catalysts for secondary hydrogenation of coal-derived liquids, Fuel and Energy Abstracts, Volume 37, Number 2, March, 94. Shumaker, G.A., McKissick, J., Ferland, C., Doherty, B., 2003, A Study on the Feasibility of Biodiesel Production in Georgia, February 2003, FR-03-02, Center of Agribusiness and Economic Development, 26 pp. Siegel, D. M., 1975, Completeness as a Goal in Maxwell's Electromagnetic Theory, Isis 66(3): 361-368. Simpson, T. K., 1966, Maxwell and the Direct Experimental Test of His Electromagnetic Theory, Isis 57(4): 411-32. Singh, K.J. and Sooch, S.S., 2004, Comparative study of economics of different models of family size biogas plants for State of Punjab, India, Energy Conv. and Mant. 45:1329-1341. Sircar, S., Novosad, J., Myers, A.L., 1972, "Adsorption from Liquid Mixtures on Solids. Thermodynamics of excess Properties and Their Temperature Coefficients", / & EC Fund., vol. 11, 249. Smalley, R.E., 2005, Materials Matters, MRS Bulletin. Vol.30 www.mrs.org/ publications/bulletin. Smart, J.S., 1997, journal of Protective Coatings and Linings, February, p. 56. Smith, P., 2001, How green is my process? A practical guide to green metrics. In: Proceedings of the Conference Green Chemistry on Sustainable Products and Processes; 2001. Sokolov, Y., 2006, Uranium Resources: Plenty to Sustain Growth of Nuclear Power. IAEA/NEA Press Conference on Uranium Resources, Vienna, Austria. www.iaea.org/NewsCenter/Statements/DDGs/ 2006/sokolov01062006,html, June 1,2006.
836
R E F E R E N C E S A N D BIBLIOGRAPHY
Sondi, I. and Sondi, B.S., 2005, Influence of the Primary Structure of Enzymes on the Formation of CaC03 Polymorphs: A Comparison of Plant (Canavalia ensiformis) and Bacterial (Bacillus pasteurii) Ureases, Langmuir, vol. 21, pp 8876-8882. Sondi, I., and Matijevic, E., 2001, Homogeneous Precipitation of Calcium Carbonates by Enzyme Catalyzed Reaction, Journal of Colloid and Interface Science, vol. 238, pp 208-214. Sorokhtin, O.G., Chilingar G.V., and Khilyuk, L.F., 2007, Global Warming and Global Cooling, Evolution of Climate on Earth. Developments in Earth & Environmental Sciences series. ISBN :978-0-444-52815-5, ISSN: 1571-9197, 313 pp. Spangenberg, J.H. and Bonniot, O., 1998, Sustainability indicators-a compass on the road towards sustainability. Wuppertal Paper No. 81, February 1998, ISSN No. 0949-5266. Speight, J.C., 1991, The chemistry and technology of petroleum, New York: M. Dekker, 760. Spies, PH., 1998, Millenium Megatrends: Forces Shaping the 21st Century, Key-Note Address to the Annual Conference of the International Association ofTechnological University Libraries (IATUL), at the University of Pretoria, South Africa. Sraffa, P., 1960, Production of Commodities by Means of Commodities. Cambridge, Cambridge University Press. SRI, 2003, Chemical Economics Handbook (CEH) Product Review: Mono-, Di- and Triethylene Glycols. SRI International, Menlo Park, CA. November, 2003. Srivastava, R.K. and Huang, S.S., 1995, Technical Feasibility of C 0 2 Loading in Weyburn Reservoir - A Laboratory Investigation', CIM. paper no. 95-1119 presented at the 6lh Saskatchewan Petroleum Conference, Regina, Oct. 16-18. Srivastava, R.K. and Huang, S.S., 1997, 'Laboratory Investigation of Weyburn CO z Miscible flooding', 7lh Saskatchewan Petroleum Conference, Regina, Oct. 19-22, C. I. M., paper no. 97-154. Srivastava, R.K., Huang, S.S., Dyer, S.B., and Mourits, F.M., 1995, 'Measurement and Prediction of PVT properties of Heavy and Medium Oil with Carbon Dioxide', 6th UNITAR, International Conference on Heavy Crude and Tar Sands, Houston, Feb. 12-17. Srivastava, R.K., Huang, S.S., Dyer, S.B., and Mourits, F.M., 1994, 'Heavy oil recovery by sub-critical Carbon Dioxide Flooding,' SPE paper 27058 presented at the HI LACPEC, Buenos Aires, Argentina, April 26-29. Srivastava, R.K., Huang, S.S., Dyer, S.B., and Mourits, F.M., 1993, Ά Scaled Physical Model for Saskatchewan Heavy Oil Reservoirs Design, Fabrication and Preliminary C 0 2 Flood Studies', 5 th Petroleum Conference of South Saskatchewan Section, the Petroleum Society of CIM, Regina, Oct. 18-20.
R E F E R E N C E S A N D BIBLIOGRAPHY
837
St. Clair, Jeffrey. 2004, Been Brown So Long It Looked Like Green to Me: The Politics of Nature. Monroe ME, Common Courage Press. 408 pp. Statistics Canada, 2006, Canada's Population Clock. Statistics Canada, Demography Division. Updated in October 27, 2006. Steenari, B.M. and Lindqvist, O., 1997, Stabilisation of Biofuel Ashes for Recycling to Forest soil., Biomass and Bioenergy; Vol. 13(l-2):39-50. Stefest, H., 1970, "Algorithm 368 Numerical Inversion of Laplace Transforms", Commun. ACM, vol. 13 (1), 47-49. Stewart DJ., Mikhael N.Z., Nanji A.A., Nair R.C., Kacew S., Howard K., Hirte W., Maroun J.A., 1985, "Renal and hepatic concentrations of platinum: relationship to cisplatin time, dose, and nephrotoxicity", J Clin Oncol. 1985 Sep; 3(9):1251-6. Stewart, J.T. and Klett, M.G., 1979, "Converting coal to liquid/gaseous fuels", Mechanical Engineering, vol. 101, June, 34-41. Stock, J., 1776, An Account of the Life of George Berkeley, D.D. Late Bishop of Cloyne in Ireland, available at the website: http://www.maths.tcd. ie/~dwilkins/Berkeley/Stock/Life.pdf, also see Berkeley, Bishop George. 1735. A Defence of Free Thinking in Mathematics (Dublin) {Last accessed 24 March 2008}. Straube, J.F., 2000, Moisture Properties of Plaster and Stucco for Strawbale Buildings, Report for Canada Mortgage and Housing Corporation, June 2000. Struik, D.J., 1967, A Concise History ofMathematics, 3rd ed., Dover Publications, New York, 1967. Subramanian, A.K., Singal, S.K., Saxena M, Singhal, S., 2005, Utilization of liquid biofuels in automotive diesel engines: An Indian perspective. Biomass and Bioenergy, 9:65-72. Sudaryanto, A., Kunisue, T, Kajiwara, N., Iwata, H., Adibroto, T.A., Hartono, P., and Tanabe, S., 2006, Specific accumulation of organochlorines in human breast milk from Indonesia: Levels, distribution, accumulation kinetics and infant health risk, Environmental Pollution, Vol. 139, No. 1:107-117. Sugie, H., Sasaki, C , Hashimoto, C , Takeshita, H., Nagai, T, Nakamura, S., Furukawa, T. Supple, B., Howard, H.R., Gonzalez, G.E., Leahy, J.J., 1999, The effect of steam treating waste cooking oil on the yield of methyl ester. J. Am. Oil Soc. Chem. 79(2):175-178. Sugie, H.,Sasaki, C , Hashimoto,C,Takeshita, H., Nagai,T.,Nakamura,S., Furukawa, M., Nishikawa, T, Kurihara, K., 2004,, Three cases of sudden death due to butane or propane gas inhalation: analysis of tissues for gas components. Forensic Science International 143(2—):211—14. Suskind, R. 2004, "Without a doubt", [Sunday] New York Times Magazine (17 October). Sustainability Institute. 2007, Two Approaches to Sewage Treatment and to the World [online]Available:(http://www.sustainabilityinstitute.org/
838
R E F E R E N C E S A N D BIBLIOGRAPHY
dhm archive/search.php?display_article=vnl77todded) [February 15, 2007]. Sweis, F.K., 2004, The effect of admixed material on the flaming and smouldering combustion of dust layers, Journal of Loss Prevention in the Process Industries, vol. 17 (6), pp. 505-508. Syed, M., Soreanu, G., Falletta, P. and Beland, M., 2006, Removal of hydrogen sulfide from gas streams using biological processes-A review. Canadian Biosystem Engineering, 48:2.1-2.14. Szklo, A. and Schaeffer, R., 2007, Fuel specification, energy consumption and Commission in oil refineries. Energy 2007,32:1075-1092. Szokolik, A. 1992, Evaluating Single-Coat Inorganic Zinc Silicates for Oil and Gas Production Facilities in Marine Environment. Journal of Protective Coatings and Linings, March, p. 24. Szostak-Kotowa, ]., 2004, Biodeterioration of textiles, International Biodeterioration & Biodegradation, Vol. 53:165-170. Taber, }.)., 1988, 'The Use of Flue Gas for Enhanced Recovery of Oil', EOR by Gas Injection, Symposium, International Energy Agency Collaborative Research Program on EOR, Copenhagen, Denmark, September 14. Taber, J.J., 1994, Ά Study of Technical Feasibility for the Utilization of C 0 2 for Enhanced Oil Recovery,' The Utilization of Carbon Dioxide from Fossil Fuel Fired Power Stations, IEA Greenhouse Gas R & D program, Cheltenham, England. Taber, J.J., 1990, 'Environment, Improvements and Better Economics in EOR Operations', In Situ, vol. 14 no. 4,345-404. Tanaka, S., Koide, H., and Sasagawa A., 1994, 'Possibility of C 0 2 underground sequestration in Japan', Energy Conversion Management, vol. 36, 527-530. Tang, D.E. and Peaceman, D.W., 1987, "New Analytical and Numerical Solutions for the Radial Convection Dispersion Problems", SPE 16001 presented at the Ninth SPE Symposium on Reservoir Simulation, San Antonio, TX. Tanner, D., 1995, Ocean Thermal energy Conversion: Current Overview and Future Outlook. Renewable Energy 6(3): 367-373. Täte, R.E., Watts, K.C., Allen, C. A. W., and Wilkie, K.I., 2006, The densities of three biodiesel fuels at temperatures u p to 300°C. Fuel 85:1004-1009. Tator, K.B., 1977, How Coatings Protect and Why they Fail. Corrosion 77, NACE, paper no. 4, p. 1. Taylor, A.J.P, 1961, The Origins of the Second World War. London: Atheneum, 2nd Ed. Teel, D., 1994, Liquid fuel solutions of methane and liquid hydrocarbons, US Patent 5315054. Tester, J.W, Drake, E.M., Golay, M.W., Driscoll, M.J., and Peters, W.A., 2005, Sustainable Energy, Choosing Among Options. The MIT Press, Cambridge, Massachusetts, London, England, pp 864,2005.
REFERENCES AND BIBLIOGRAPHY
839
Tetsuya, D., Kitaoka, Y., Kakezawa, M., and Tomoaki Nishida, T, 1998, Purification and Characterization of a Nylon-Degrading Enzyme, Appl Environ Microbiol, April, Vol. 64, No. 4,1366-1371. The Epoch Times, 2006, Potato Farms a Hot Bed for Cancer. March 24-30, www.theepochtimes.ca. The Globe and Mail, 2006, Toxic shock: Canada's Chemical reaction, May 27, Saturday, 2006. The Globe and Mail, Saturday 27 May 2006, Page A4. The New York Time, 2006, Citing Security, Plants Use Safer Chemicals, April, 25, 2006. Thibodeau, L., Sakanoko, M. and Neale, G.H., 2003, Alkaline Flooding Processes in Porous Media in the Presence of Connate Water, Powder Technology, Vol. 32:101-111. Thipse, S.S., Schoenitz, M., and Dreizin, E.L., 2002, Morphology and Composition of the Fly Ash Particles Produced in Incineration of Municipal Solid Waste, Fuel Processing Technology, Vol. 75(3):173-184. Thipse, S.S., Schoenitz, M., and Dreizin, E.L., 2002, 'Morphology and composition of the fly ash particles produced in incineration of municipal solid waste', Fuel Processing Technology, Vol. 75(3), pp. 173-184. Thomas P. and Nowak, M.A., 2006, Climate Change: All in the Game, Nature (441) June 1,2006. Thorpe, T. W., 1998, Overview of Wave Energy Technologies, AEAT-3615 for the marine foresight panel, May 1998. Tickell J, 2003, From the fryer to the fuel tank: The complete guide to using vegetable oil as an alternative fuel. Tilley, J. 1997, 'Technology Responses to Global Climate Change Concerns: The Benefits from International Collaboration', Energy Conversion Management, volume 38, S3-S12. Tipton, T, Johnston, CT., Trabue, S.L., Erickson, C , and Stone, D.A.,1993, Gravimetric/FT-IR Apparatus for the Study of Vapor Sorption on Clay Films. Rev.Sci.Instrum 64(4):1091-1092. Tiwari, G.N. 2002. Solar energy: fundamentals, design, modelling and application, Narosa Publishing House, New Delhi, India. Toninello, A., Pietrangeli P., De Marchi, U. Salvi, M., and Mondov, B., 2006, Amine Oxidases in Apoptosis and Cancer. Biochimica et Biophysica Acta 1765:1-13. Tornquist, C , 1997, Nuclear Fusion Still No Dependable Energy Source. CNN News. April 5, www.cnn.com/US/9704/05/fusion.confusion/ (accessed on Jan 16, 07). Trujillo, E.M., 1983, The Static and Dynamic Interracial Tensions between Crude Oils and Caustic Solutions, SPEJ: 645. Tschulakow, A.V., Yan, Y. and Klimek, W., 2005, A New Approach to the Memory of Water, Homeopathy 94(4):241-247. Tsoutsos, T, Frantzeskaki, N., and Gekas, V, 2005, Environmental impacts from the solar energy technologies. Energy Policy 33:289-296.
840
R E F E R E N C E S A N D BIBLIOGRAPHY
Tulloch, A.P., 1970, "The composition of beeswax and other waxes secreted by insects", Lipids, volume 5, no. 2, 247-258. Turkenberg, WIM C. 1997, 'Sustainable Development, Climate Change, and Carbon Dioxide Removal', Energy Conversion Management, vol. 38, S3-S12. Twu, C.H, Tassone, V., Sim, W.D., and Watanasiric, S, 2005, Advanced Equation of State Method for Modeling TEG-Water for Glycol Gas Dehydration. Fluid Phase Equilibria 228-229:213-221. U.S. BoLS (Bureau of Labour Statistics), 2006, Consumer Price Index, Washington DC: January 18. U.S. DoE, 2004, Annual Energy Review 2004, p 98. US DoE, 2004, Annual Energy Review. Washington DC: Department of Energy. US DoE, 2005, Canada - Country Analysis Brief. Washington DC: Department of Energy - Energy Information Administration [February]. US EPA, 2002, A Comprehensive Analysis of Biodiesel Impacts on Exhaust Emissions, US EPA draft report, Oct., 118 pp. UNCSD (United Nations Commission on Sustainable Development), 2001, Indicators of Sustainable Development: Guidelines and Methodologies, United Nations, New York. United Kingdom Offshore Operations Association: Atmospheric Emission (UK OOA), 2003, Available http://www.ukooa.co.uk/ issues/1999report/enviro99_atmospheric.htm, March 11, 2003. United States Coast Guard, 1990, Update of inputs of petroleum hydrocarbons into the oceans due to marine transportation activities. National Research Council. National Academy Press, Washington, D.C. (1990). Uranium Enrichment. 2006, Nuclear Issues Briefing Paper 33, March 2006, Uranium Information Centre Ltd, GPO Box 1649N, Melbourne 3001, Australia, 2006. Uranium Information Center, 2006, The Economics of Nuclear Power, Briefing Paper no 8, Australia, 2006, http://www.uic.com.au/nip08. htm . USDoE. 2006, , US Department of Energy accessed : April 06,2006. USHR, 1999, Oil refineries fail to report millions of pounds of harmful emissions. A report prepared for Rep.Henry A. Waxman, by minority staff, special investigation division, committee on government reforms, U.S.House of Representative, 10th November, 1999, pp-19. Vallejo, F., Tomas-Barberan, F.A., and Garcia-Viguera, C , 2003, Phenolic compound contents in edible parts of broccoli inflorescences after domestic cooking. /. of the Science of Food and Agriculture 83:1511-1516. van der Meer, L.G.H., 1995, 'The C 0 2 storage efficiency of Aquifers', Energy Conversion and Management, volume 36, nos. 6-9, 513-518.
R E F E R E N C E S A N D BIBLIOGRAPHY
841
van der Meer, L.G.H., 1992, 'Investigation Regarding the Storage of Carbon Dioxide in Aquifers in the Netherlands', Energy Conversion Management, volume 33, no. 5-8, 611-618. Van Niel, C.B., 1931, On the morphology and physiology of the purple and green sulfur bacteria. Archiv für Mikrobiologie 3:1-112. Vassilev, S.V. and Vassileva, C.G., 2005, 'Methods for characterization of composition of fly ashes from coal-fired power stations: A critical overview', Energy Fuels, Vol. 19, pp. 1084-98. Veil, J.A., 2002, Drilling Waste Management: past, present and future. Annual Technical Conference and Exhibition, San Antonio, Texas, 29 Septermber-2 October, 2002, SPE paper no. 77388. Venkataraman, C , Joshi, P., Sethi, V, Kohli, S., and Ravi, M.R., 2004, Aerosol Science and Technology, vol. 38, no. 1, 50-61. Vikram, V.B., M.N. Ramesh, and S.G. Prapulla. 2005, Thermal degradation kinetics of nutrients in orange juice heated by electromagnetic and conventional methods, journal of Food Engineering 69(l):31-40. Volckova,E.,Evanics,F.,YangW.W.,andBose,R.N.,2003,UnwindingofDNA polymerasesbytheantitumordrug,ris-diamminedichloroplatinum(II), Chem. Commun., 2003,1128-1129. Voss, A., 1979. Waves Currents, Tides-Problems and Prospects. Energy 4(5): 823-831. Wackernagel, M., & Rees, W., 1996, Our ecological footprint. Gabriela Island, New Society Publishers. Wagner, G.J., 1993, Accumulation of cadmium in crop plants and its consequence to human health. Adv. Agron. vol. 51, pp. 173-212. Wangnick, K., 2002, IDA Worldwide Desalting Plants Inventory Report No.l 7, Wangnick Consulting GmbH and the International Desalination Association (IDA), Vienna, July, 2002. Wareham, S., 2006, The Health Impacts of Nuclear Power. Nuclear Power Forum, UNSW, October 18, 2006, Medical Association for Prevention of War. www.mapw.org.au. Wasiuddin, N.M., Ali, N., Islam, M.R., 2002b, "Use of Offshore Drilling Waste in Hot Mix Asphalt (HM A) Concrete as Aggregate Replacement", paper no. EE 29168, ETCE '02, Feb. 4-6,2002, Houston, Texas. Wasiuddin, N.M., Tango, M. and Islam, M.R., 2002, A Novel Method for Arsenic Removal at Low Concentrations. Energy Sources 24,1031-1041. Waste Online, 2005, Plastic recycling information sheet. < h t t p : / / w w w . wasteonline. org.uk/ resources/InformationSheets/Plastics.htm> [Accessed: February 20, 2006]. WCED (World Commission on Environment and Development) 1987, Our common future, World Conference on Environment and Development. Oxford: Oxford University Press; 1987, 400pp. Website 1: http://en.wikipedia.org/wiki/Nicolaus_Copernicus (quoting J.W. von Goethe's appreciation of Copernicus' role in European science); last accessed 24 September 2007.
842
R E F E R E N C E S A N D BIBLIOGRAPHY
Website 2: http://en.wikipedia.Org/wiki/Galileo_Galilei#Church_ controversy; last accessed 24 September 2007. Website 2a: http://en.wikipedia.org/wiki/Two_New_Sciences (last accessed 1 February 2008). Website 2b: Einstein, Albert. 1905. "On The Electrodynamics Of Moving Bodies" (1923 English translation of the first published version of the theory of special relativity) (Last accessed 24 March 2008). Website 3a: Newton, Isaac.1687. Philosophiae naturalis principia mathematica [1729 translation by Andrew Motte], Ch 1. "Of the method of first and last ratios of quantities, by the help whereof we demonstrate the propositions that follow" (Last accessed 24 March 2008}. Website 3b: Newton, op.cit., Ch. 2 "Of the invention of centripetal forces"(Last accessed 24 March 2008}. Website 3c: Newton, op.cit., Ch. 3 "Of the motion of bodies in eccentric conic sections" (Last accessed 24 March 2008}. Website 3d: Newton, op.cit., Ch. 4 "Of the finding of elliptic, parabolic, and hyperbolic orbits, from the focus given" (Last accessed 24 March 2008}. Website 3e: Newton, op.cit., "Axioms, or Laws of Motion" (precedes Ch. 1 of Book I) (Last accessed 24 March 2008}. Website 4: http://www.emachineshop.com/engine/ Website 5 British plastic federation, "The history of plastic", Available online at: http://www.bpf.co.uk/bpfindustry/History_of_Plastics. cfm Accessed November 15th, 2005. Website 6: www.wasteonline.org.uk/resources/InformationSheets/Plastics. htm [Accessed: May 12,2006]. Website 7: http://www.solcomhouse.com/recycling.html (Last accessed, Feb. 20, 2010). Website 8 http://www.americanchemistry.com/s_plastics/doc.asp? CID= 1102&DID=4664 Accessed on November 20th, 2005. Website 9 available at: http://www.plastics.ca/news/default.php?id=197 Accessed on February 2 nd , 2007. Website 10: http://encarta.msn.com/media_461531189/Oil_Refining_and_ Fractional_Distillation.html (accessed on May 20,2008). Website 10: American chemistry council, Available online at: h t t p : / / w w w . americanchemistry.com/plastics/ Accessed November 10th, 2005. Website ll:http://invsee.eas.asu.edu/nmodules/engmod /manipulation. html, Accessed on September 4th, 2006. Website 11: http://www.tribecaradio.net/blog/categories/steal ThisRadio/ Website 12: http:/ /www. plasticsresource.com/s_plasticsresource/ Accessed on November 10lh, 2005. Website 13: http://www.pslc.ws/mactest/natupoly.htmAccessed on July 12 ,h ,2005. Website 13: Balanced Solution.com. Moisture Properties of Plaster and Stucco for Strawbale Buildings, www.ecobuildnetwork.org/pdfs/ Straube_Moisture_Tests.pdf (accessed on 8th Aug, 20).
REFERENCES A N D BIBLIOGRAPHY
843
Website 14: http://www.psigate.ac.uk/roads/cgibin/ Websitel 4: www.hemptons.co.za/Users/seeds/htm(accessed on December 15,2006). Website 15: http://hyperphysics.phy-astr.gsu.edu/hbase/ems3.html (accessed on November 5, 2006). Website 15. (http://www.fpl.fs.fed.us/documnts/techline/fuel-valuecalculator.pdf, http://www.epa.gov/ttn /chief/ ap42/ch01/final/ c01s04.pdf). Website 16. (http://www.hrt.rnsu.edu/Energy/Notebook/pdf/Sec4/Approximate_Heating_Values by %20Bartok.pdf). Website 17: (http://www.etc-cte.ec.gc.ca/databases/OilProperties/oil prop_e.html). Website 18: http://www.mindfully.org/Plastic/Ethylene-Gas.htm. Website 25: http://unfcc.int/resource/docs/2009/copl5/eng/107.pdf. Website 26: http://unfccc.int/resource/docs/2009/copl5/eng/107.pdf (last accessed June 8,2010). WEC, 2006.The World Energy Council: How to Avoid a Billion Tones of C 0 2 Emission, http://www.worldenergy.org/wec-geis/default.asp . Welford, R., 1995, Environmental strategy and sustainable development: the corporate challenge for the 21st Century. London: Routledge. Wenger, L.M., Davis, C.L., Evensen, J.M., Gormly, J.R., and Mankiewicz, P.J., 2004, Impact of modern deepwater drilling and testing fluids on geochemical evaluations, Organic Geochemistry, Vol. 35:1527-1536. Weyl, H., 1944, How Far Can One Get With a Linear Field Theory of Gravitation in Flat Space-Time?, American Journal of Mathematics 66(4): 591-604. Wiener, P.P., 1943, A Critical Note on Koyre's Version of Galileo, Isis, Vol. 34, No. 4. (Spring, 1943), 301-302. Wikipedia, 2008, http://en.wikipedia.org/wiki/Lemon_juice, accessed Nov., 2008. Wills, ]., Shemaria, M., and Mitariten, M.J., 2004, Production of Pipeline Quality Natural Gas. SPE 87644. SPE/EPA/DOE Exploration and Production Environmental Conference, San-Antonio-Texas, 10-12 March 2003. Williams, L. P., 1965, Michael Faraday: A Biography. London: Chapman & Hall, xvi, 531 pp. Winter, E.M. and Bergman, P.D., 1996, 'Potential for Terrestrial Disposal of Carbon Dioxide in the US', US/Japan Joint Technical Workshop, US Dept. of Energy, State College, PA, Sept. 30-Oct. 2. Winterton N., 2001, Twelve more green chemistry principles. Green Chem Vol. 3:G73-5. Wise Uranium Project, 2005, Uranium Radiation Properties, www.wiseuranium.org/ rup.html, (accessed on March 19, 2006), 2005. Wise Uranium Project, 2005, Uranium Radiation Properties, www.wiseuranium.org/rup.html (accessed on March 19, 2006).
844
R E F E R E N C E S A N D BIBLIOGRAPHY
Wittwer, R.F. and Immel, M.J., 1980, Chemical composition of five deciduous tree species in four-year-old closely spaced plantations, Plant and Soil, vol. 54, no. 3, Oct., 461-467. WNA (World Nuclear Association), 2010, http://www.world-nuclear. org/info/inf63.html, last viewed Feb. 22, 2010. Woodruff, A. E., 1968, The Contributions of Hermann von Helmholtz to Electrodynamics, his 59(3): 300-311. World Health Organization (WHO), 1994, Brominated diphenyl ethers. Environmental Health Criteria, Vol.162, International Program on Chemical Safety. Wright, T., 2002, Definitions and frameworks for environmental sustainability in Higher education. International Journal of Sustainability In. Higher Education Policy, Vol. 15, (2). Wu, H., Zong, M.H., Luo, Q., Wu, H.C., 2003, Enzymatic conversion of waste oil to biodiesel in a solvent free medium. Prepr. Pap.-Am.Chem. Soc, Div. Fuel Chem. 48(2) 533. Xiaoling M. and Qingyu, W., 2006, Biodiesel production from heterotrophic microalgal oil. Bioresource Technology, vol.97, (6):841-846. Yang, H-H., Chien, S-M., Lo, M-Y., Lan, J.C.-W, Lu, W-C, and Ku, Y-Y, 2007, Effects of biodiesel on emissions of regulated air pollutants and polycyclic aromatic hydrocarbons under engine durability testing. Atmospheric Environment 41:7232-7240. Yen, T.F., Preface, True Sustainability in Technological Development and Natural Resource Management, Nova Science Publishers, NY, 381 pp. York, M., 2003, One Spoonful of Bee Pollen Each Day, and You, Too, Might Make It to 113. The New York Times, December, 2003 occessed on June 06,2006>. Yu, J, Lei, M., Cheng, B. and Zhao, X., 2004, Facile Preparation of Calcium Carbonate Particles with Unusual Morphologies by Precipitation Reaction, journal of Crystal Growth, vol. 261, pp 566-570. Zatzman, C M . , Khan, M.M., Chhetri, A.B., and Islam. M.R., 2008, "A Delinearized History Of Time And Its Roles In Establishing And Unfolding Knowledge Of The Truth", journal of Information, Intelligence and Knowledge, vol. 1, no. 1,1-38. Zatzman, G., 2007, "The Honey —> Sugar —» Saccharin™ —» Aspartame™ Syndrome: A Note", Journal of Nature Science and Sustainable Technology, vol. 1, no. 3,397-401. Zatzman, G., Chhetri, A.B., Khan, M.M., Maamari, R., and Islam, M.R., 2008, Colony Collapse Disorder- The Case for a Science of Intangibles , Journal of Nature Science and Sustainable Technology, vol. 2, no. 3. Zatzman, G.M. and Islam, M.R., 2007, "Truth, Consequences and Intentions: The Study of Natural and Anti-Natural Starting Points and
R E F E R E N C E S A N D BIBLIOGRAPHY
845
Their Implications", /. Nature Science and Sustainable Technology, vol. 1, no. 2,169-174. Zatzman, G.M., 2008, Some Inconvenient Truths About Al Gore's Inconvenient Truth. /. Nat.Sci. and Sust.Tech., vol. 1, no. 4, 699-707. Zatzman, G.M. and Islam, M.R., 2004, A New Energy Pricing Model, MPC2004, Tripoli, March, 2004. Zatzman, G.M., and Islam, M.R., 2007a, The Economics of Intangibles, Nova Science Publishers, New York, 407 pp. Zatzman, G.M., and Islam, M.R., 2006, Natural Gas Energy Pricing, Chapter 2 in Handbook of Natural Gas Transmission and Processing by S. Mokhatab, J.G. Speight, and W. A. Poe (eds), Gulf Professional Publishing, Elsevier. Zero-Waste, 2005, The Case for Zero Waste, [Accessed on August 12, 2006]. Zevenhoven, R. and Kohlmann,}., 2001, C 0 2 sequestration by magnesium silicate mineral carbonation in Finland. Second Nordic Minisymposium on Carbon Dioxide Capture and Storage, Göteborg, October 26, page 13-18. Zhang, Y, Dube, M.A., McLean, D.D., Kates, M., 2003, Biodiesel Production from Waste Cooking Oil: 1. Process Design and Technological Assessment. Bioresour. Technol., 89:1-16. Zheng, S., Kates, M., Dube, M.A., McLean, D.D., 2006, Acid-catalyzed production of biodiesel from waste frying oil. Biomass and bioenergy 30:267-272. Zick, A.A., 1986, ΆCombined Condensing/Vapourizing Mechanism in the Displacement of Oil By Enriched Gases', SPE paper 15493 presented at the 615t SPE Technical Meetting, New Orleans, LA, October 5-8. Zucchetti, M., 2005, The zero-waste option for nuclear usion reactors: Advanced fuel cycles and clearance of radioactive materials. Technical note, Annals of Nuclear Energy, 32:1584-1593.
Index

Abacus 67, 89
Accidental discharge 172, 175
Accidental risk 202
Accidents 218, 238, 244, 446, 450, 730, 731
Acetaldehyde 235, 236, 316, 369, 517
Acetic acid 396
Acetylene 307
Acid 28, 235, 246, 318, 368, 372, 373, 440, 442, 524, 593
Acid catalyst 157, 438, 471, 757
Acid gas 469, 488, 491, 508, 800
Acid rain 359
Acidosis 495
Activated carbon 458, 502
Adaptation 52, 298-302, 571
  of bacteria 623
Addiction
  oil 23
  chemical 27
  consumption 685
Additives 155, 157, 162, 174, 291, 311, 352, 355, 367, 369, 395, 439, 444, 449, 450, 471, 479, 483, 511, 553
  biodiesel 235
  chemical 246, 291, 304, 510, 525, 638, 749, 750, 765, 771
  toxic 304, 326, 375, 445, 450, 465, 689
  natural 450, 532, 546, 554, 555, 557, 577, 585, 598, 795, 796, 820
Adipic acid 361, 397, 398
Absorption 151, 210, 289, 307, 420, 442, 443, 485, 487, 489, 491, 492, 493, 496-501, 523, 752, 753, 817
Adsorption 168, 439, 459, 460, 477, 485, 487-489, 567, 607, 755, 805, 825, 827, 830
Adsorption/desorption 148, 153
African 78, 94, 169, 300, 301, 583, 661, 674, 670, 700, 723, 768
Agreement, Copenhagen 296-298, 310
Agreement, Kyoto 311
Agreement, Paris Club 668
Agreement, Nuclear 710
Agriculture 311, 715, 835
AIDS 358
Air emission 172, 174, 175, 238, 359, 360
Air pollutants 360, 464, 481-483, 505, 584, 637, 765, 838
Airgun 173
Alcohol 4, 232, 344, 368, 373, 374, 448, 530, 531, 711, 714, 802
  ethyl 482, 756
  methyl 527
  organic 318
Alcoholysis 235, 716
Algae 242, 470, 522, 555, 618, 711, 815, 830
Allergic reaction 184
Allergy 825
Amino acid 339, 340-342, 345, 347, 349,399,400,403,619,700 Ammonia 39,174,209, 234, 290, 307, 313, 323, 352,345,446, 448, 449, 482, 499,501, 529, 533, 540, 580, 582,587, 688, 689, 781, 783, 784 Anthropogenic 3,99, 283,284, 289, 299,328, 330, 332 Antibiotic 25,144 Natural 213 Chemical 150, 213, 532, 701, 713,771 Antioxidants 303, 315,552 Aphenomenal 8,9, 24, 31, 34, 37, 44, 46, 53,54, 63, 68, 69, 72-75, 83,84, 87, 91, 96-102,105,122, 123,140-143,158,163,164,171, 176,177,179,194,205,206,208, 209,211,212,255,365,408,409, 411, 638,639,642,651,652,671, 678, 679, 700, 705, 707-709, 711, 742, 743, 763-766 Aquatic 174, 238, 519, 685, 711, 744, 830 Aristotle 12, 28, 37,42, 43, 45, 47, 58, 65, 66,67, 69, 74, 78,80,88, 91,93,100,117,136,141,171, 407, 776, 777, 791 Ascorbic acid 171 Assessment, environmental 178, 748,801,822 IPCC 289,299, 310,312 risk 794, 797 safety 578, 798 sustainability, 335, 336 Atlantic cod 798 Atlantic region 460, 657, 809 Asthma 39, 347, 348, 363,495, 496, 515,825 Avalanche theory 63,135,136 Bacillus 349,504, 505, 570,621, 629, 804,830
Bacteria 150,208, 242,246, 304, 331, 332, 349, 350, 356,483, 484, 504, 505, 532, 541, 543, 618-622 Bacteria, acidophilic 570 Bacteria, cryophilic/psychrophilic 540,542, 543 Bacteria, sulphate reducing 512, 513,545-547,549,569 Bacteria, thermophilic 453, 570-572 Bacteria population 626 Bacterial degradation 348,349, 440, 470,471 Bacterial prevention of asphaltene and wax formation 569,572-575 Bee population 636 Behavior 39, 73,140,258,276, 307, 346,370, 646, 678, 747 Behaviorism 7 Benzoic acid 482 Bernouilli 51 Bifurcation 5,142,143,145,153, 163,164,166,167,407,639, 664, 665 Biodegradation 347,349, 452,453, 483,494, 504, 523,550, 575, 749, 796, 797, 822,832 Biodiversity 227, 238, 320, 333, 824,827 Biological 244,313, 339, 349, 617 activities 252, 347-374 ethanol production 247 hydrate control 540,541 hydrogen production 246 laws 331 membrane 802 polymer 355 processes 146, 332 solvent 452, 454, 749 treatment of glycol 494 treatment of H2S 504, 505 waste water treatment 348 Biodiesel 24,220, 326,235, 236, 448,472, 710-716, 719, 720, 801,802,805
INDEX
Bisphenol 352, 737 Black hole 657 Blink of an eye 63, 97,100,145 Bose-Einstein theory 12 Breast cancer 12, 220 Brain 12, 23, 32, 35-37, 45, 258, 664 Brain cancer 39 Brain tumor 188 Brain, effect of DEA on 346 Brain, effect of lead on 481 Cadmium 352, 353, 412, 413, 480, 481,482 California 57,155,156, 466, 534, 819,820,828 Cancer 12,28,33, 39,67,171,184, 220,238,243, 303, 315, 346,358, 363,413,479,480,708, 716, 731, 765, 766,767, 793,913,821 Carbon 18, 22, 23, 30, 33, 54,156, 230,249,258,260,288,289, 291, 322,325,347,352,377,394, 397, 399, 401, 413, 432, 435, 438, 466, 474,476,502,504,512,513, 527, 529,545,707, 770 Carbon dioxide, C0 2 152,153, 292, 316,323,423,506,599,717 Carbon monoxide, CO 222, 241, 346,445, 446, 449, 452, 530, 783 Carbon tetrachloride 395,413 Carcinogenetic 494, 823 Carnot cycle 50, 51,191 Catalytic cracking 449 Catalysts 23, 31, 34,148,153,156, 157,167,168,171,172,220, 232,235,241,242,246,250, 252,285,316,317,324,326, 343, 345, 365, 369, 375, 403, 436-440, 444, 445,447, 448, 464, 465, 469, 479, 482, 505, 519,529,533,540,599,644, 645, 714-719, 742, 757, 758, 761 Cementing 618 Central nervous system 48,595
849
Chaos 2,42,43,48, 64,105,133, 296, 665, 680, 809 Chaotic behavior 39, 73,140, 258, 276, 307, 346, 370,646, 678, 747 Characteristic time 34,100,101, 151,166,167,181 Violation of 180 Chemical community 780, 783, 785 Chemical enhanced oil recovery 820 Chloride Aluminum 156,449,479,501,715 Ammonium 323, 501 Calcium 448 Carbon tetra 295, 413 Cupric 168 Ferrous 453 Hydrogen 449, 783-785, 797 Lithium 187 Methylene 449 Polyvinyl 363 Vinyl 361 Chloroform 295,373, 374 Cholesterol 641 Chlorogenic acid 226, 232, 234 Chromium 226, 248,412, 448, 458, 464, 505 Cigarette 52,184,358, 598, 644, 793 Citric acid 170 Clean energy 14,23, 227, 238, 770 Climate change 14,155,283, 286, 297, 288, 298, 299,304, 307, 308,311,328-331,510,601, 808,815, 816, 834 C0 2 , industrial 243 C 0 2 minimum miscibility pressure C0 2 , natural 243 C 0 2 removal 485 C0 2 emission See Emission, C 0 2 Coal 14,153,155,166,175, 216-218, 220-222, 225, 231,248, 249, 252,290,291,320,321,411, 428,432, 459, 470,473, 602, 653,654, 655, 659, 667, 733, 735,804,806,807,814,832
850
INDEX
Coatings 259, 275, 276, 278,344, 355, 358, 368, 377, 380, 394, 457,492,546,550,551, 555-565, 795, 797,822, 829 Cobalt 412,437,439, 448,458, 513 Coke 30, 39,156, 291, 352, 436,441, 442,466 Combustion 34,151,153,155,156, 174,192, 231-233,235, 238, 240,243,290,304,307,321, 324,326, 327, 420,421, 424, 439, 475,504, 528,588, 610, 759, 760, 802,808, 809 Community based energy development 813 economic development 676 Organizations 230 role of 202 Compressive strength 369, 384-387, 389, 628,629, 631 Convection 151, 239, 329,330, 417,833 Cooking 39,163, 226,231, 232, 325,422,425,469,504, 696, 708,711,800 Cooking oil 711, 714, 718, 815, 832,839 Cooling process 210 Copper 57,153, 306, 322,414, 446, 448, 449,482, 513, 554, 569, 778,806 Copper mine 56 Corn 25, 232, 233, 246,247, 318 Corn law 287, 288,470,473, 538, 639,640,644,710,712 Corrosion inhibitors 511, 528, 579, 581 Cracking 156,157,306, 320, 356, 368,434, 437, 439,441, 443, 445^47, 477, 479 Criterion 9-12,15,16, 45-48, 72, 75,159,163,164,165,171,178, 185,194,196,210,297,314, 315,365,366,404,407,410,
471, 603, 632, 636, 663, 676, 695, 730, 744, 801,811 Crude oil 2,11,18,21,27,155-157, 175,216,251,253,254,255, 288,293,312,316,319,324,336, 353,356,369,370,371,396, 397, 401,432,433,434,436,437,439, 463,467,470,474-478,481-486, 505,571-575,578,603,604,607, 612,638,644, 667, 669,670, 715-717, 720, 741-743, 761, 770-772,803, 808,810, 824, 826,834 Cryogenic 485,489, 490, 493,803 C-scan imaging 391 Cuttings (drilling) 173, 745, 750 Cyanide 351, 482 Dark matter 657 DDT 4,19,23,26,27,32, 37,152, 333, 424,433, 522,702, 703, 704 DEA 491, 493-495, 503,525, 529, 532,541 Death 8,12, 21,22,108,182,184, 193,194,297,303,410,481, 494,504, 525, 584, 653, 708, 770, 793, 810,818 Debt 79, 652,657, 661, 662, 677, 678, 679 Decision making 12,15,164,200, 664,696, 764 Degradation 9,169,211,219,245, 251,301,306,320,336,347-350, 356,366,444,452,453,467,483 Density 373 Media 12, 74,120, 375,387, 389,433,436, 453,474, 594, 602-604, 754, 761,784, 785 Particle 269 Chemical 329, 331,336, Desalination 241, 323, 324, 685, 750, 756, 835 Detergent 583 Natural 232
INDEX
Developed countries 220, 296, 298,300,301,306,310,678, 681,682,694 Developing countries 210, 223, 230, 231,247,253,298,299,300, 302, 678, 681, 683, 684, 721, 809 Devonian shale 222, 251 Diabetes 26,167, 315, 765, 766 Diarrhea 185, 496 Diesel engine 220,828, 832 Diesel fuel 443, 451,477, 482, 483,484,710,713,714,715, 717,718,794 Diffusion 148, 237, 238,459, 493, 723, 736, 755 Thermal 151 Dimensionality 133, 201, 305, 612 Dioxin 27, 30, 39,168, 320, 360, 371,829 Disaster 27, 54, 57,123,161, 306, 346, 410, 660, 661, 703, 708, 742, 743, 744, 770, 773 Disinformation 25, 26, 38, 53, 101,130,131,184,210,532, 640,642,643,645,654,671, 680, 703, 709, 764 Displacement 106 Diversity 202,337, 382, 400, 584 DNA 70, 71, 479, 515, 521, 542, 700,705,706,708,712,713, 731,796,835 Drinking water 480, 481,495,524 Durability 39,194, 208, 314, 336, 355,372, 838 Ecology 199, 799,805,818,825 Economic behavior 646 Economic development 7, 215, 216, 219, 221, 227, 288, 299, 336, 645, 649, 660, 674, 676, 830 Economic growth rate 247 Economic indicator 197 Economic system 199, 636, 653, 656, 657, 684
851
Economics 317, 609, 616, 635-638 Economics of intangibles 663, 665, 672,675, 676, 685, 696, 839 Ecosystem 3,11,14, 24, 25, 30, 33, 41,155,156,159,194,198, 308, 333,336,359,361,407,439, 463-466, 471, 482, 550, 584, 585, 712, 740, 742, 744, 762, 799,812, 820, 822 Efficiency 203,227,309, 321, 412, 419,423 Einstein, Albert 3, 8,12,14, 31, 46,47,53,68,74,75,115,118, 120,121-124,132,139,142, 148,157,158,164,188,191, 284, 305, 407,408, 416, 636, 637, 740, 764, 786, 787 Electric energy 204,224, 247, 419, 420, 732, 771 Electric power 421, 667 Electricity 95,137-140,192, 223226,228, 233, 237, 239, 240, 244,246, 254, 305, 318, 319, 322,326,394,406,411,415, 416,419,421,422,427,432, 469,514, 653, 669, 689, 693, 694,695, 722, 732-735, 757, 759-762, 799, 823 Electromagnetic 137,139,140, 141,142,262,835 Electromagnetic irradiation 75, 806,814 Electromagnetic forces 131 Electromagnetic theory 137,141,708 Electromagnetic waves 140 Electromagnetic communication 230 Electromagnetic field 261 Electromagnetic cooking 696 Emission 14, 33,156,172,199, 230,238, 240, 246,290, 300, 310,320,359,360,414,426, 428,429, 444, 450,458, 469, 478, 716, 718, 726, 727, 734, 737,804,811,814,822,824
852
INDEX
Certification 312 C0 2 155,174,175, 217, 218, 241, 245, 290-292, 294, 298,312, 318,321,327,333,464,465, 482,483, 600, 745,800,805, 811, 832 Fugitive 483 Greenhouse gas 14,228,229, 230, 233, 242, 283, 285, 291, 297, 301,311,312,313,327,328, 329,330,334,360,361,406, 428,465, 527, 806 Light 91 Natural C 0 2 325 NOx 632 Nuclear 724 Organic 62 Radioactivity 731, 814 Refinery 445,464,481 Emulsion 363,454,456,457,575,702 Energy balance 8, 50, 91,105, 131,133,136,148,150,151, 158,161,166,182,189,192, 193,417,422,762 Energy balance equation 193, 519,813 Energy consumption 1,155,161, 215-219, 221,237, 359,429, 482, 737, 739, 745,828,832 Manufacturing 415 Per capita 220, 655, 669 Engineering approach 121,151,158 Enhanced oil recovery 18,414,452, 458,469,541,579,577,584, 585,600,604,609,610,611, 755,806, 809, 826,833 Chemical 820 In situ combustion 610, 809 Microbial 570,810 Scaling laws 804 Enron 19, 37,175,223, 740, 801 Entropy 182,193, 829 Environmental damage 685, 740, 744, 745
Environmental effects/impacts 16, 162,176,227,229,238,285,309, 310,320,327,349,358,415,429, 491,694,741,746,808,824 of compact fluorescent lamp 414 of natural gas processing 493, 445,505 of natural pathway 347 of nuclear radiation 223, 243 of refining 464,465, 742 of solar energy 224,226, 248,327 positive 317 Environmental regulations 162, 218,522 Environmental threats 238, 491 Enzyme 32,234, 235, 345, 396,403, 448,503, 542, 545, 617, 644, 758,830 EPA 235,236, 290, 315, 316, 323, 361,524,584,750,804 Equilibrium 27, 44, 51, 52, 98,105, 126,130,182,272,410,818 Equilibrium prices 648, 649 Equity 198,299 Erosion 560 Erosion of soil 218 Ester 368, 382, 711, 713-718, 715 Ethanol 231, 232, 233,234, 235,246, 247, 318, 326, 472,528, 533, 718,810,827,829 Ethers 494,495, 711, 838 Ethyl alcohol 482, 756 Ethylene 307, 396,445,446, 482, 487,493, 509, 516,534-536, 538,539, 798 Ethylene amine 531 Ethylene dichloride 482 Ethylene glycol 346, 396,449, 487,493,494,495,515,516, 523,524, 526, 527,533, 539, 798,804, 811, 819 Ethylene oxide 529, 540 Ethylene vinyl acetate 496 Eurocentric 7, 42, 45, 77, 522, 676
INDEX
Evolution 2, 66, 82,102,108,109, 329, 331, 362, 658, 675, 739, 800,810,830 Fabric 364, 457, 618, 619 Famine 287, 288 FAO 480 Faraday, Micahel 138,139-141, 158,509,787 Farmer 97,287 Farms Wind 230 Potato 315, 833 Fat 14, 39, 232, 315, 316, 361, 363, 518,598,641,708,711,712, 717,771,807,828 Fatty acid 350, 518, 542, 711, 714, 718,799 FDA 361, 699, 701,796 Fermentation 233, 234, 246,247 Fertilizer 14, 34, 35, 52,144,148, 165,181,232,339,424,427, 480,585,637,712,713,771 Filter 490 Fish 725 Fish scales 452, 749 Fishery 745 Fission 248, 726, 737 Flue gas 321,323, 421, 807, 833 Fluorescent light 24, 39, 62, 209, 226,275,415 Fluorescent lamp 263, 265, 276, 412,414 Foam 344, 346, 354, 363, 364,396, 448,519, 821 Formaldehyde 28, 39, 235, 236, 307,316, 324, 364, 482, 528, 529,716,717 Formation damage 800, 825 Formic acid 482, 527,528, 530 Fossil fuel 31, 75, 147,154,155, 174,215,216,217,219,225, 227, 228, 233, 235, 238, 242, 243, 245, 247
853
Fouling 237, 456, 754,805, 828 Fracture remediation with bacteria 627-629, 63 Free energy 246,253, 321, 691 Free market economy 222 Freezing 306 Freezing point 514,517, 594 Fresh water 519, 574 Fructose 640, 641 Fuel cell 22, 240, 241,246, 771 Fuel type 216, 250,451, 470 Fugitive emission 483 Fungus 234, 753 Furfural acid 246 Fusarium 349 Galaxy 86,87 Galileo 59, 68,81,82, 85, 87, 88, 116-121,138,141,161,205, 802,814,823,836 Gas diffusion 238, 736 Gas permeance 459, 487 Gas turbine 421 Gasification 231 Gasoline 156,163,218,219, 220,233,240,246,316,318, 435-437, 441, 443, 449, 451, 466, 477, 479, 482,484, 604 Gasoline engine 318, 717 Gel 542 Genetic 707 Genetic changes 364 Genetic engineering 25,370,713,771 Genetic identity 706 Genetic intervention 181 Genetically modified 14, 24, 25, 30,165, 208, 432, 637,641, 642, 700,712 GDP 7,161, 247,311, 663, 677, 678, 681,682 Global climate 293, 304, 306, 308, 312,332,333 Global climate change 313, 601, 810,815,834
854
INDEX
Global Efficiency 428 Global warming 14,15,19, 26, 41, 54,154,155,160, 215,242, 243, 245, 283, 284, 285, 290, 292,296, 304, 307, 308, 318, 324,327, 328, 330,333, 334, 465,469, 505, 527, 708, 709, 770,800,813,814,821,830 Globalization 24 Glucose 194,234 Glutamic acid 340 Glycerine 714, 717 Glycerol 316, 518, 519, 711, 714, 715,717,718 Glycol 10,162,209, 222, 241,306, 346, 396, 448, 449, 464, 467, 469,485, 487, 488,481, 492, 494,514,516,520,530,531, 533, 644, 752, 753, 821, 823, 834 Glycol aldehyde 493, 494,526 Glycol from natural source 752, 753 Glycolysis 346, 821 GNP 200, 661 GPS 67 Gravitational forces 85,125,131, 137,138,142,146,157,261, 329, 637 Gravity 68,117,137, 205,453, 455, 486, 594, 603 Gravity number 614 Gravity override 610 Gravity segregation 606, 608,612 Gravity separation 714, 764 Gravity stabilization 606, 608, 612 Great depression 4, 649 Greeks 58,65, 66, 71, 74, 77, 78 Green chemistry 34, 830, 838 Green revolution 5,427 Greenhouse gas emission, see Emission, Greenhouse gas Ground water 306, 412,450, 725, 795 Growth rate 230, 247, 292, 495, 542, 565,571,572,574,600
Habitat 13, 22,61,161,202, 331-333, 724, 744 Half life Cadmium 480 Ethylene glycol 523 Methanol 524 Uranium 426 Halogen 293 Hazardous material 412,413, 440,487, 802,828 Health effects 346, 363,364, 695, 716,731, 825 Health problems 235,315, 346, 464,518 Hearing loss 480 Heart 494,524,525 Heart attack 39 Heart disease 184 Heat 8,14,17,31, 35,53,62, 72, 148,156,157,160,182,210, 213, 224,225, 235-237, 239, 240,250, 304-309,324, 330, 371,372, 394, 406,410, 415, 419,424,425,436,470, 471, 550 Heat absorption 544,689, 694 Heat exchanger 228 Heat source 30,165, 254, 540,543 Heat transfer 322, 419,420, 495 Heating value 208, 209,472 Heavy metals 24,155, 226,248, 306, 324, 327, 343, 345,358, 412, 414,425,427,440,457,465, 466,473,479,505,618 Hebrew 55 Helium 260,485, 724 Helium removal 492, 493 Helmholtz 140 Heterogeneity 581,610, 612, 613, 615,620 Heterogeneous 96,182, 259,267, 336,355, 365, 382, 409, 440, 527,565, 546, 646,830 Holistic 207, 255
Hollow fiber 427, 492, 755
Holocaust, thermonuclear 27
Homo sapiens 11, 20, 22
Honey 765, 766
HSSA Syndrome 530, 642, 743
Hydrate 18, 209, 222, 251, 414, 467, 468, 484, 485, 486, 487, 489, 492, 504, 508-511, 514-518, 520-522, 588, 751, 814
Hydrate inhibitors 513, 515, 516, 517, 520, 521, 531
Hydrate prevention with bacteria 530
Hydrochloric acid 232
Hydrocracking 437, 441
Hydrofluoric acid 412, 446-450, 505, 715, 757
Hydrogen peroxide 454
Immiscible flooding 567, 600, 603, 606
  CO2 607, 608, 615, 818
  Sour gas 611, 615
  Unstable 608, 610, 613, 614
Incandescent lamp 263, 264, 267, 272, 274, 275, 276, 277-280, 414
Incandescent light 92, 209, 257, 259
Information age 68, 101, 162, 193, 296, 636, 685, 761, 763, 764
Information theory 145
Infrared 272, 273, 351, 352, 546, 601, 748
Infrastructure 1, 58, 247, 288, 507, 511, 663
Infusion 674
Initial conditions 52, 182
In-situ combustion 610, 809
Ion exchange 439, 757
Jacobinism 659
Jatropha oil 236
Jevons, W. Stanley 645, 646, 647, 652, 653, 654, 655, 656, 657, 658, 659, 660
Jevons' paradox 654
Kelvin 7-9, 16, 21, 22, 50, 95, 193, 410
Knowledge-based 46, 85, 128, 150, 165, 166, 254, 255, 286, 663, 664, 740, 773, 809
Knowledge-based energy model 253
Kyoto Protocol 15, 229, 230, 283, 285, 296, 297, 299, 300, 310, 311, 312, 334, 491, 746, 816
Least developed countries, LDC 296, 300, 301, 684
Life cycle 11, 24, 175, 180, 245, 317, 320, 348, 349, 365, 403, 406, 412, 414, 426, 444, 450, 467, 469, 493, 496, 686, 687, 688, 689, 690, 691, 715, 724, 734, 736, 822
Lung cancer 33, 39, 303, 793
Manufacturing energy consumption 415
Mass balance 24, 72, 148, 151, 153, 158, 166, 167, 168, 172, 183, 189, 431, 432, 762
Mass balance equation 190, 192, 193
Mass balance of water 304
Material balance 150
Maxwell's theory 50, 138, 139, 140, 141, 143, 148, 158
Mayan 4
MEA 485, 491, 493, 503, 523, 525, 527, 528, 529, 530, 531, 533
Media density, see Density, media
Memory function 407
Memory of water 286, 305, 306, 308, 346, 834
Mercury 23, 168, 363, 412, 414, 445, 446, 448, 479, 782
Mercury vapor 275
Methyl alcohol 527
Microbial enhanced oil recovery 570, 810
Miscible flooding 567, 609, 614, 615, 801, 808
  CO2 600, 603-608, 612, 616, 797, 831
  Sour gas 611
  Unstable 608, 610-614
Mobility 459, 595, 597, 602, 604
Mobility control 610
Mobility ratio 613, 614
Molybdenum 437, 439, 448, 482, 552, 553, 554
Momentum balance 96, 150, 151, 158, 762
Monitoring 32, 312, 545, 761, 823, 827
Morpholine 449
Morphology 368, 380, 570, 586, 617, 826, 833, 835, 838
Mushroom 536, 752, 795
Natural catalysts 232, 235, 246, 318, 466, 716, 757
Natural CO2 15, 242, 243, 284, 291, 292, 317, 325, 326, 333, 334
Natural CO2 emission 325
Natural design 95
Natural detergent, see Detergent, natural
Natural energy source 17, 228, 432
Natural gas 10, 18, 155, 168, 174-177, 216-218, 220-222, 225, 240, 249, 250, 252, 451
Natural gas composition 468
Natural gas reserve 221, 250, 251
Natural gas stove 325
Natural light source 12, 30, 257, 266
Natural pathway 2, 401, 425
Natural pesticide 35
Natural process 5, 6, 7, 15, 48, 49, 102, 106, 107, 109, 111, 122, 132, 148, 149, 150, 165, 167, 182, 189, 194, 204, 239, 399, 403, 431
Natural resource 2, 3, 8, 11, 21, 101, 197, 198, 200, 387
Natural traits 49
Natural vitamins 28, 169, 171
Nature 3, 5, 7, 8, 9, 11, 17, 18, 20-28, 30-37, 41-46, 53, 62, 64, 65, 67, 69, 71, 74-76, 79, 83, 88, 89, 91, 94-96, 98-101, 108
Nature science 115-117, 118, 120, 158, 762, 769, 798, 810, 812, 813, 838, 839
Neon 260, 275
Newton, Sir Isaac 6, 7, 8, 26, 44, 46, 47, 51, 52, 68, 70, 124, 132, 136, 139, 161, 182, 836
Newtonian calculus 96, 128
Newtonian mechanics 95, 121, 123, 126, 134, 138, 140, 144
Newtonianism 128
Newton's laws of motion 43, 51, 68, 69, 85, 96, 124, 128, 130, 137, 157, 182, 183
Nickel 153, 156, 412, 414, 437, 439, 446, 448, 449, 479, 482, 806
Nitric acid 232, 307, 412, 413, 781
Nobel Prize 32, 33, 34, 37, 54, 100, 161, 171, 248, 410, 433, 522, 675, 700, 702, 703, 705, 706, 708, 739, 743, 744, 761, 785, 789, 790, 798, 817
Non-linear 36, 44, 115, 116, 120, 142, 143, 177, 182, 183, 185, 331, 381, 397, 743, 762
Non-linear equations 147, 154
Non-linear methods 144
Non-linear problems 144
Non-renewable energy sources 221, 249, 685
NOx emission 632
Nuclear energy consumption 237
Olive oil 31, 93, 94, 144, 208, 213, 373, 533, 537, 555-557, 559, 560, 562, 563, 565, 599, 711, 753
Olive oil press 41
Onion 537
Opacity 158, 646, 762
Organic acid 440, 447, 448, 450, 595, 597, 758
Organic alcohol 318
Organic CO2 26, 54, 167, 299
Organic coatings 822, 829
Organic emission 62
Organic energy 194
Organic energy source 166
Organic evolution 108, 109
Organic fertilizer 35, 52, 144, 148
Organic food 5, 30
Organic growth 106
Organic matter 13, 18, 528, 713, 760
Organic methane 444
Organic molybdenum 553
Organic orange 54
Organic process 11, 216
Organic saturated fat 771
Organic source 28, 151, 189
Orthosilicate 574, 580, 581, 582, 799, 808, 816, 832, 839
Ozone 33
Paradigm shift 1, 9, 19, 26, 96, 122, 167, 215, 246, 255, 405, 429, 505, 763
Particle density, see Density, particle
Pathway 8, 9, 10, 14, 20, 22, 32, 33, 36, 44, 46, 47, 48, 53, 54, 59, 60, 61, 68, 75, 78, 88, 90, 102, 122, 129, 144, 147, 148, 149, 151, 153, 156, 157, 164, 169, 171, 189, 191, 196, 206, 208, 211, 213, 225, 226, 235, 243, 259, 283, 285, 294, 301, 306, 308, 408, 409, 428, 577, 665, 701, 770, 771
Pathway analysis 18, 336, 346, 366, 464, 471, 493, 696, 800, 813, 814
Pathway, aphenomenal 99
Pathway, divergent 403
Pathway, irreversible 177, 183
Pathway of amine 495
Pathway of chemical additives 525
Pathway of conversion process 406, 407
Pathway of crude oil 293, 395, 396, 469, 470, 505
Pathway of energy system 309, 316
Pathway of glycol 346
Pathway of light 266, 267
Pathway of nature 185
Pathway of nuclear energy 427
Pathway of nylon 397
Pathway of organic compounds 209
Pathway of petrodiesel, biodiesel 715, 717
Pathway of polyurethane 343, 345
Pathway of refining process 157
Pathway of rubber and latex 400, 401
Pathway of sustainability 423
Pathway of wood 307
Per capita energy consumption, see Energy consumption, per capita
Permeability 458, 459, 574, 593, 606, 620, 621, 754, 796
  damage 567
  dimensionless 568, 569
Phosphatidylcholine 495, 817
Phospholipid 542
Photosynthesis 323, 425
Photovoltaic 14, 23, 24, 248, 254, 327, 411, 432, 543, 770, 771, 828
Pinhole camera 93
Platinum 156, 157, 437, 438, 440, 449, 478-480, 505, 716, 819, 820, 831
Plutonium 236, 731
Poiseuille 51, 93
Population 161, 202, 218, 220, 223, 230, 247, 248, 253, 287, 357, 619, 625, 681-683, 694, 706, 707, 747, 831
Poverty 299, 675
Pressure
  Minimum miscibility 600, 603, 604, 615
Propaganda 289, 362
Propane 240, 396, 467, 468, 470, 472, 484, 485, 489, 508, 509, 550, 818
Propanol 492, 754
Psychology 7, 92, 122
Psychrophilic 560
Radioactivity emission, see Emission, radioactivity
Refinery emission, see Emission, refinery
Relativity theory 50, 74, 75, 111, 113, 124, 132, 142, 148, 157, 158, 407, 408
Renewable energy 1, 24, 216, 219, 220, 221, 224, 227, 228, 235, 242, 245, 249, 252, 311, 315, 405, 421, 426, 427, 444, 684-686, 733, 770, 797, 805, 807, 814, 819, 820, 824, 825
Renewable source 223, 225, 230, 255, 326, 420, 428, 432, 715
Scattered electron imaging 376
Separation, downhole 455, 456, 457, 813
Silica 24, 232, 411, 412, 424, 437, 440, 496, 641, 752, 755
Silica fume 628
Silica gel 458, 485, 487, 488, 542, 755, 778
Silicate 329, 502, 593, 618
Silicon 14, 23, 226, 248, 260, 327, 394, 412, 414, 448, 587, 588, 618, 619, 770, 778
Silicon dioxide 23, 411, 587
Soap nut (natural) 818
Solar absorption 690, 692, 693
Stability 52, 182, 510, 608, 659, 718, 808, 821
Sulfuric acid 232, 235, 246, 316, 412, 446-449, 479, 482, 505, 713, 715, 716, 721, 757, 781
Sustainability 2, 4, 8, 9-13, 15, 21, 25, 29, 41, 47, 53, 76, 148, 159, 178, 207, 216, 246, 314, 335, 336, 365, 366, 439, 440, 465, 636, 710, 713, 741
  analysis 150
  conditions 196
  criterion 15, 33, 159, 163
  index 696
  indicator 199
  model 179
  of biodiesel 317
  of nuclear energy 724
  of photovoltaic 24
  of wood ash 598
Synthetic acid 471
TEA 493, 330, 531, 615, 644
Thermal diffusion, see Diffusion, thermal
Tobacco companies 33, 793
Tobacco processing 644
Tobacco products 368
Tobacco technology 184
Total energy consumption 221, 245, 406, 464, 656
Uranium 16, 18, 34, 99, 225, 236, 237, 238, 239, 244, 341, 425, 426, 427, 730, 731, 733, 736, 770, 771, 797, 799, 822, 823, 829, 835, 838
Uranium Information Center 733
Uranium reserve 722, 723, 726, 727
US Coast Guard 175
Viscosity 51, 433, 441, 475, 516, 572, 602, 603, 604, 606, 612, 710, 714
  Crude oil 474, 475, 594
Viscous fingering 612
Volatile organic compound, VOC 174, 360, 476, 482
Wall Street Journal 705, 808
Whole number 69
Wholesale 651, 676
World War I 288, 658, 665, 705
World War II 121, 122, 519, 652, 662, 703, 704, 833
X-ray 33, 108, 351, 352, 546, 564-567, 586, 587, 629, 630, 748
Xenograft 220
Xenon 275
Zeolite 437, 439, 440, 449, 458, 460, 755, 758
Zero-waste 16, 17, 162, 192, 225, 233, 253, 255, 284, 319, 320, 322, 323, 324, 325, 334, 425, 431, 440, 444, 447, 450, 660, 684-686, 737, 758, 761, 813, 839
Also of Interest

Check out these related titles from Scrivener Publishing:

Acid Gas Injection and Carbon Dioxide Sequestration, by John Carroll, ISBN 9780470625934. Provides a complete overview of and guide to the very important topics of acid gas injection and CO2 sequestration. PUBLISHED

Advanced Petroleum Reservoir Simulation, by M.R. Islam, S.H. Mousavizadegan, Shabbir Mustafiz, and Jamal H. Abou-Kassem, ISBN 9780470625811. The state of the art in petroleum reservoir simulation. PUBLISHED

Energy Storage: A New Approach, by Ralph Zito, ISBN 9780470625910. Exploring the potential of reversible concentration cells, the author of this groundbreaking volume reveals new technologies to solve the global crisis of energy storage. PUBLISHED

Formulas and Calculations for Drilling Engineers, by Robello Samuel, September 2010, ISBN 9780470625996. The only book every drilling engineer must have, with all of the formulas and calculations that the engineer uses in the field.

Ethics in Engineering, by James Speight and Russell Foote, December 2010, ISBN 9780470626023. Covers the most thought-provoking ethical questions in engineering.

Zero-Waste Engineering, by Rafiqul Islam, February 2011, ISBN 9780470626047. In this controversial new volume, the author explores the question of zero-waste engineering and how it can be done efficiently and profitably.
Fundamentals of LNG Plant Design, by Saeid Mokhatab, David Messersmith, Walter Sonne, and Kamal Shah, August 2011. The only book of its kind, detailing LNG plant design as the world turns more and more to LNG for its energy needs.

Flow Assurance, by Boyun Guo and Rafiqul Islam, September 2011, ISBN 9780470626085. A comprehensive and state-of-the-art guide to flow assurance in the petroleum industry.